\section{Introduction} In recent years, narrow-band optical surveys of the Galaxy and near- and mid-IR mapping of the sky have steadily increased the population of known Galactic planetary nebulae (PNe) and their immediate precursors, post-AGB stars and proto-PNe \citep[e.g.,][]{Parker_etal06,Setal06,Miszalski_etal08,Viironen_etal09}. This observational effort has allowed a better assessment of the role of PNe in the chemical enrichment of the Galaxy and of the processes of PN formation and evolution. Incidentally, these surveys have revealed a number of PNe with very peculiar morphologies, physical structures, and evolutionary situations \citep[e.g.,][]{Mampaso_etal06,Miszalski_etal11}. These new objects are providing interesting case studies to investigate the complexity of the PN phenomenon. Using existing digital sky surveys such as POSS-I and POSS-II, \citet{Jacoby_etal10} presented Kn\,26, a bipolar PN candidate previously known as the emission line source Lan\,384 \citep{Lanning00,Eracleous_etal02}. An inspection of the narrow-band H$\alpha$ image of Kn\,26 presented by \citet{Jacoby_etal10} suggests a bipolar morphology with an intriguing S-shaped point-symmetric structure, whereas the optical spectroscopy presented by \citet{Eracleous_etal02} supports its classification as a PN. To confirm the PN nature of Kn\,26 and to investigate its morphology, kinematics, physical structure, physical conditions, and chemical abundances, we have obtained high spatial-resolution optical and near-IR narrow-band images of this nebula in conjunction with intermediate-dispersion and echelle long-slit spectroscopic observations. The analyses of these data presented in this paper allow us to conclude that Kn\,26 is a true PN (PN\,G084.7--08.0, following the standard rules of nomenclature for PNe), whose spatio-kinematical properties make it a new member of the quadrupolar class of PNe \citep{Manchado_etal96}. We next describe the observations in Sect.\ 2 and provide the main results in Sect.\ 3. These are discussed in Sect.\ 4 and summarized in Sect.\ 5. \begin{table*} \caption{Properties of the Narrow-band Filters} \label{tab.filt} \centering \begin{tabular}{lrrc|lrrc} \hline\hline \multicolumn{1}{l}{Optical Filter} & \multicolumn{1}{c}{$\lambda_{\rm c}$} & \multicolumn{1}{c}{$\Delta\lambda$} & \multicolumn{1}{l}{Transmission peak} & \multicolumn{1}{l}{Near-IR Filter} & \multicolumn{1}{c}{$\lambda_{\rm c}$} & \multicolumn{1}{c}{$\Delta\lambda$} & \multicolumn{1}{l}{Transmission peak} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{(\AA)} & \multicolumn{1}{c}{(\AA)} & \multicolumn{1}{c}{(\%)} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{($\mu$m)} & \multicolumn{1}{c}{($\mu$m)} & \multicolumn{1}{c}{(\%)} \\ \hline {[}O~{\sc iii}] & 5007 & 30~ & 77 & H$_2$ & 2.122 & 0.032 & 70~ \\ H$\alpha$ & 6567 & 8~ & 60 & Br$\gamma$ & 2.166 & 0.032 & 73~ \\ {[}N~{\sc ii}] & 6588 & 9~ & 62 & $K$ continuum & 2.270 & 0.034 & 72~ \\ \hline \end{tabular} \end{table*} \section{Observations} \subsection{Narrow-band imaging} Narrow-band H$\alpha$, [N~{\sc ii}] $\lambda$6583, and [O~{\sc iii}] $\lambda$5007 images of Kn\,26 were acquired on 2009 June 21 using ALFOSC (Andalucia Faint Object Spectrograph and Camera) at the 2.56m Nordic Optical Telescope (NOT) of the Observatorio del Roque de los Muchachos (ORM, La Palma, Spain). The central wavelength ($\lambda_{\rm c}$), bandwidth ($\Delta\lambda$), and transmission peak of these filters are provided in Table~\ref{tab.filt}.
The EEV 2048$\times$2048 CCD with a pixel size of 13.5 $\mu$m was used as detector, and the exposure time was 900 s for each filter. The images have a plate scale of 0$\farcs$184 pixel$^{-1}$, a field of view (FoV) of 6$\farcm$3$\times$6$\farcm$3, and a spatial resolution of 0$\farcs$7, as determined from the FWHM of stars in the FoV. The data were bias-subtracted and flat-fielded using twilight flats with standard IRAF\footnote{ IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. } V2.14.1 routines. Figure~\ref{img1}-{\it top} shows a color-composite picture of the optical narrow-band images of Kn\,26. Narrow-band H$_2$ 2.1218 $\mu$m, Br$\gamma$ 2.1658 $\mu$m, and $K$ continuum at 2.270 $\mu$m images of Kn\,26 were obtained on 2010 June 27 using LIRIS \citep[Long-Slit Intermediate Resolution Infrared Spectrograph,][]{Pulido_etal03} at the Cassegrain focus of the 4.2m William Herschel Telescope (\emph{WHT}) at the ORM. As for the optical filters, the central wavelength, bandwidth, and transmission peak of these filters are listed in Table~\ref{tab.filt}. The detector was a 1k$\times$1k HAWAII array with a plate scale of 0$\farcs$25 pixel$^{-1}$ and a FoV of 4$\farcm$3$\times$4$\farcm$3. We obtained series of four exposures with integration times of 60 s in each filter, for total effective exposure times of 720 s for H$_2$ and Br$\gamma$, and 480 s for the $K$ continuum. For each series of four exposures, the nebula was placed at the center of a different quadrant of the detector, so as to acquire simultaneously the object and, by directly combining the four exposures, the sky (see the illustrative sketch below). The data reduction was carried out using the dedicated software LIRISDR (LIRIS Data Reduction package), a pipeline for the automatic reduction of near-IR data developed within the IRAF environment. The reduction performed by LIRISDR includes standard and additional non-standard steps such as bad pixel mapping, cross-talk correction, flat-fielding, sky subtraction, removal of the reset anomaly, field distortion correction, and final image shift and co-addition. Figure~\ref{img1}-{\it center} shows a color-composite picture of the near-IR narrow-band images of Kn\,26. The lack of nebular continuum emission and the brighter emission in H$_2$ with respect to Br$\gamma$ result in the red appearance of the nebula in this picture. The spatial resolution, as determined from the FWHM of stars in the FoV, is $\approx$0$\farcs$8. In addition, we have registered the optical and near-IR images to compare the emission in the H$_2$, [N~{\sc ii}], and [O~{\sc iii}] emission lines. The resulting color-composite picture is shown in Figure~\ref{img1}-{\it bottom}. \subsection{Spectroscopic observations} Intermediate-resolution long-slit spectra of Kn\,26 were obtained on 2011 October 5 using the ALBIREO spectrograph at the 1.5 m telescope of the Observatorio de Sierra Nevada (OSN), Granada, Spain. A Marconi 2048$\times$2048 CCD was used as detector, in conjunction with a 400 l~mm$^{-1}$ grating blazed at 5500 \AA. The slit length was $\approx$6\arcmin\ and its width was set at 50 $\mu$m ($\equiv$2.5\arcsec) to match the seeing during the observations. The 2$\times$2 binning of the detector yielded plate and spectral scales of 0\farcs30~pix$^{-1}$ and 1.89~\AA~pix$^{-1}$, respectively. The spectral resolution was $\approx$4.7 \AA, the wavelength uncertainty $\approx$1 \AA, and the spectral range covered 3600--7200 \AA.
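For illustration only, the quadrant-dither sky construction described above can be sketched in a few lines of Python; the file names are hypothetical and the plain median combination is a simplification of the actual LIRISDR processing, which also applies the corrections listed above.
\begin{verbatim}
# Minimal sketch of the quadrant-dither sky subtraction (illustrative
# only; file names are hypothetical, and LIRISDR also applies cross-talk,
# reset-anomaly, and distortion corrections).
import numpy as np
from astropy.io import fits

frames = [fits.getdata('kn26_h2_%d.fits' % i).astype(float)
          for i in range(4)]

# With the nebula in a different quadrant of each exposure, the
# per-pixel median of the stack is dominated by the sky emission.
sky = np.median(frames, axis=0)

# Sky-subtracted frames, to be shifted and co-added afterwards.
reduced = [frame - sky for frame in frames]
\end{verbatim}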
\begin{figure*}[t] \begin{center} \includegraphics[width=11.0cm,bb=0 -8 656 374]{NOT.eps} \includegraphics[width=11.0cm,bb=0 -8 656 374]{WHT.eps} \includegraphics[width=11.0cm]{NOT_WHT.eps} \caption{ Color-composite optical ({\it top}), near-IR ({\it center}), and optical/near-IR ({\it bottom}) narrow-band pictures of Kn\,26. The narrow-band filters and colors assigned to each picture are labeled on them. The FoV is 150\arcsec$\times$85\arcsec, whereas the insets show in greater detail the innermost 8\farcs5$\times$14\farcs5 nebular regions. In all pictures north is up, east to the left. } \label{img1} \end{center} \end{figure*} Two positions with exposures of 1800 s each were obtained with the slit centered on the central star and oriented along position angles (P.A.) 112\degr\ and 147\degr, i.e., along the axis of the major bipolar lobes and along the bright S-shaped region, respectively. The observations were flux calibrated using spectra of the spectrophotometric standard stars G~191-B2B and Hiltner~600 acquired on the same night. All spectra were bias-subtracted, flat-fielded, and wavelength- and flux-calibrated following standard IRAF procedures. Long-slit high-dispersion spectroscopy of the H$\alpha$ and [N~{\sc ii}] $\lambda$6583 lines of Kn\,26 was acquired on 2010 June 13 using the Manchester Echelle Spectrometer \citep[MES,][]{Meaburn_etal03} mounted on the 2.1\,m (f/7.5) telescope at the Observatorio Astron\'omico Nacional de San Pedro M\'artir (OAN-SPM, Mexico). The $2048\times2048$ Thomson CCD with a pixel size of $15\,\mu$m was used, resulting in a plate scale of $0\farcs352\,{\rm pixel}^{-1}$ and a dispersion of 0.06\,{\AA}\,pixel$^{-1}$. The 2$^{\prime\prime}$ wide slit was set across the central star and oriented along the axes of the major bipolar lobes (P.A.=110\degr) and minor bipolar lobes (P.A.=65\degr), with on-chip binnings of 1$\times$1 and 2$\times$2 and spectral resolutions of $\approx$6 km~s$^{-1}$ and $\approx$12 km~s$^{-1}$, respectively. The spectra were wavelength calibrated with a Th-Ar arc lamp to an accuracy of $\pm1$\,km\,s$^{-1}$. \section{Results} \subsection{Morphology} The images of Kn\,26 in Figure~\ref{img1} reveal the following morphological components: (1) the major bipolar lobes, a pair of large bipolar lobes extending $\approx$110\arcsec\ along PA $\approx$110\degr; (2) the minor bipolar lobes, a pair of small bipolar lobes extending $\approx$75\arcsec\ along PA $\approx$75\degr; and (3) a central elliptical ring. These components, marked on the sketch of Kn\,26 in Figure~\ref{sketch}, are described in more detail next. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\columnwidth]{sketch.eps} \caption{ Schematic drawing of the two pairs of bipolar lobes of Kn\,26 with the different morphological components labeled on it. The eastern minor and major bipolar lobes recede from us, whereas the western lobes approach us. } \label{sketch} \end{center} \end{figure} The major bipolar lobes, very prominent in [N~{\sc ii}] and H$_2$, have open ends, becoming very faint at large distances from the nebular center. Their inner regions show a clear point-symmetric brightness distribution defined by two arcs that trace the central ring and the edges of the lobes in these innermost regions.
This same point-symmetric intensity distribution is present at other locations in the lobes, most notably in the H$_2$ image, such as the bars located 36\arcsec--54\arcsec\ from the nebular center that trace the southern edge of the SE bipolar lobe and the northern edge of the NW lobe, and the regions at 36\arcsec\ and PA $\approx$75\degr\ and 255\degr, coincident with the polar caps of the minor bipolar lobes, that define the northern and southern edges of the SE and NW bipolar lobes, respectively. The H$\alpha$ image presents structures similar to those observed in [N~{\sc ii}], whereas in the [O~{\sc iii}] image the point-symmetric arcs are observed as a high-excitation region (blue in Fig.~\ref{img1}-{\it top}) along the major nebular axis with an extent of $\approx$5\arcsec\ at both sides of the star at the center of the nebula. The minor bipolar lobes have an elliptical shape (Fig.~\ref{sketch}) and are closed, at variance with the major bipolar lobes. The NE lobe has a maximum extent from the center of 31\farcs6, while the SW lobe reaches up to 34\farcs7. The polar regions of these lobes are particularly bright, especially for the NE lobe. As for the major bipolar lobes, the inner regions of the minor bipolar lobes share the arcs that define the central ring. The central ring has an elliptical shape, with its minor axis along PA $\approx$100\degr, i.e., similar but not completely coincident with the orientation of the major bipolar lobes. The size of the ring is 8\farcs3$\times$2\farcs9 in H$_2$, 7\farcs7$\times$2\farcs3 in [N~{\sc ii}], and 7\farcs4$\times$2\farcs1 in H$\alpha$. This ring is formed by two arcs that cross at the tips of the major axis and extend along the edges of the bipolar lobes. The ring formed by these two arcs is not empty: complex structures are detected inside it in the different images, particularly two knots, bright in [N~{\sc ii}] and [O~{\sc iii}], at both sides of the central star. Figure~\ref{img1}-{\it top} provides information on the spatial variations of the excitation in Kn\,26. The major bipolar lobes present low excitation and are dominated by [N~{\sc ii}] emission. The H$_2$ emission is particularly bright in the point-symmetric regions. In the minor bipolar lobes, the H$\alpha$ to [N~{\sc ii}] line ratio is larger than in the major bipolar lobes (the green color in Fig.~\ref{img1}-{\it top}). Finally, higher excitation material, as revealed by the [O~{\sc iii}] emission, is concentrated at the center of the nebula, in a region $\approx$9\arcsec\ in size that extends along the axis of the major bipolar lobes. Figure~\ref{img1}-{\it bottom} provides information about the relative distribution of molecular (H$_2$) and ionized material ([N~{\sc ii}] and [O~{\sc iii}]) in Kn\,26. The H$_2$ emission delineates the [N~{\sc ii}] emission, which always lies inside it. H$_2$ is particularly bright in the point-symmetric regions of the nebula, namely, in the bright point-symmetric arcs and central ring, in the two linear features at the south and north ends of the eastern and western major bipolar lobes, respectively, and at the polar regions of the minor bipolar lobes. \subsection{Kinematics} The position-velocity (PV) maps of the H$\alpha$ and [N~{\sc ii}] $\lambda$6583 emission lines presented in Figure~\ref{PV} clearly reveal bipolar kinematics along both the major (PA~110\degr) and minor (PA~65\degr) lobes.
The two pairs of bipolar lobes have different kinematical properties, but in both cases the eastern lobe recedes from us, whereas the western lobe approaches us, with respect to the systemic velocity $v_{\rm LSR}\approx-10$ km~s$^{-1}$ derived from our high-dispersion spectroscopic observations. The major bipolar lobes, registered by the slit along PA=110\degr, are confirmed to be open, with the velocity split between the approaching and receding components increasing with distance from the central star. On the other hand, the minor bipolar lobes, registered by the slit along PA=65\degr, are closed; their velocity split also shows a smooth increase, but it suddenly breaks at a distance $\approx$13\arcsec\ from the central star, where the approaching and receding sides of the lobes converge rather abruptly. \begin{figure*}[t] \begin{center} \includegraphics[width=16cm]{spec.eps} \caption{ Position-velocity (PV) maps in the H$\alpha$ and [N~{\sc ii}] $\lambda$6583 emission lines along the two pairs of bipolar lobes at P.A.'s 110\degr\ (major bipolar lobes) and 65\degr\ (minor bipolar lobes). The levels of the contours overlaid on the PV maps have been selected to emphasize the kinematical structure of the emission in the brightest central regions. The dash-dotted lines overlaid on the [N~{\sc ii}] PV maps correspond to the synthetic emission lines derived from our simultaneous fit to the morphology and kinematics of the two pairs of bipolar lobes. } \label{PV} \end{center} \end{figure*} The distortion of the velocity field of the minor bipolar lobes with respect to a classical hour-glass expansion hints at their interaction with the major bipolar lobes. The brightening of the polar caps of the minor bipolar lobes and the diffuse appearance of the H$\alpha$ line in the PV map in these regions further support this interaction. We note that the minor and major bipolar lobes overlap in the regions covered by the slit along PA=65\degr; however, only emission from one system of bipolar lobes is detected in this PV map. Apparently, the two pairs of bipolar lobes merge into a single structure wherever they overlap, i.e., they do not intersect. Finally, we note that the tilt of the brightest regions of the H$\alpha$ and [N~{\sc ii}] emission lines in the PV maps seems different, with the contours of the [N~{\sc ii}] lines in Fig.~\ref{PV} having larger inclinations than those of the H$\alpha$ lines. It is unclear whether this is an effect of the larger thermal broadening of the H$\alpha$ line, an additional contribution from a broad H$\alpha$ line at the location of the central star, or the detection of emission from the bright [N~{\sc ii}] knots by the side of the central star. \subsection{Physical Model} \label{model.sect} We have used the visualization and modeling tool SHAPE \citep{Steffen11} to fit simultaneously the morphology shown in the [N~{\sc ii}] image and the kinematics displayed in the PV maps of the two pairs of expanding bipolar lobes of Kn\,26, adopting the simple model introduced by \citet{SU85} to describe the structure and expansion of the nebula around the symbiotic Mira R~Aqr, \begin{equation} v_{\rm exp}(\varphi) = v_{\rm e} + (v_{\rm p} - v_{\rm e}) \times {\mid{\sin \varphi}\mid}^{\alpha}, \end{equation} where $\varphi$ is the latitude angle, varying from 0\degr\ at the equator to 90\degr\ at the poles, $v_{\rm e}$ and $v_{\rm p}$ are the equatorial and polar velocities, respectively, and $\alpha$ is a parameter that determines the shape of the bipolar lobes.
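As a minimal illustration, the expansion law above and the kinematical age estimate discussed in the following can be sketched in a few lines of Python; the adopted $v_{\rm e}$, $v_{\rm p}$, and inclination are the best-fit values derived below for the minor lobes, whereas the value of $\alpha$ is a hypothetical choice, since it is not quoted explicitly in the text.
\begin{verbatim}
# Sketch of the Solf & Ulrich (1985) expansion-velocity law and a
# back-of-the-envelope check of the kinematical age of the minor lobes.
# v_e, v_p, and i are the best-fit values derived below; alpha is an
# illustrative (hypothetical) value controlling the lobe shape.
import numpy as np

def v_exp(phi_deg, v_e=10.0, v_p=160.0, alpha=12.0):
    """Expansion velocity (km/s) at latitude angle phi (degrees)."""
    phi = np.radians(phi_deg)
    return v_e + (v_p - v_e) * np.abs(np.sin(phi))**alpha

# Kinematical age at d = 1 kpc for a ~33" polar extent at i = 55 deg:
theta_rad = 33.0 / 206265.0                  # arcsec -> radians
L_km = theta_rad * 3.086e16 / np.sin(np.radians(55.0))  # 1 kpc = 3.086e16 km
age_yr = L_km / v_exp(90.0) / 3.156e7        # ~1.2e3 yr, cf. (1125+/-100)*d
\end{verbatim}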
We have applied this model to the minor bipolar lobes and derived an inclination angle of 55\degr\ with respect to the line of sight, and polar and equatorial velocities of 160$\pm$15 km~s$^{-1}$ and $\sim$10 km~s$^{-1}$, respectively. The quality of the fit is shown by the line over-plotted on the [N~{\sc ii}] echellogram at PA~65\degr\ (Fig.~\ref{PV}). As for the major bipolar lobes, a similar fit is difficult because these lobes are open, thus providing little constraint on the polar velocity. A close inspection of the faintest emission from these bipolar lobes in the direct images and in the echellogram at PA~110\degr\ suggests that the lobes may close at a distance $\sim$63\arcsec\ from the central star. Assuming this size for the major bipolar lobes, the best fit is achieved for an inclination angle also of 55\degr, and polar and equatorial velocities of 300$\pm$20 km~s$^{-1}$ and $\sim$12 km~s$^{-1}$, respectively. The best-fit model provides a reasonable fit to the [N~{\sc ii}] echellogram at PA~110\degr\ (Fig.~\ref{PV}) and to the lobe width, whereas the lobe length is uncertain. For the minor lobes, the kinematical age of the model at a distance of $d$ kpc is (1125$\pm$100)$\times d$ yr, whereas for the major lobes only a lower limit $\gtrsim$(1150$\pm$100)$\times d$ yr can be derived. We note that, for the radial velocity $v_{\rm LSR}{\approx}-10$ km~s$^{-1}$, the Galactic coordinates of Kn\,26 ($l$=84\fdg67, $b$=--7\fdg96) imply a distance of 1 kpc, assuming pure circular rotation and a flat rotation curve. It is thus very likely that the kinematical ages of both pairs of bipolar lobes are in the range 1000--1300 yr. \subsection{Physical Conditions and Chemical Abundances} One-dimensional spectra of the central ring and bipolar lobes of Kn\,26 have been extracted from the long-slit intermediate-dispersion ALBIREO spectra (Figure~\ref{fig.alb}). These spectra include multiple oxygen, neon, sulfur, nitrogen, and argon forbidden lines, as well as hydrogen and helium recombination lines. The intrinsic line intensity ratios, scaled to an arbitrary H$\beta$ flux of 100, are listed in Table~\ref{tab.flux}, where the \citet{CCM89} extinction law has been used to deredden the measured line intensity ratios using the logarithmic extinction coefficient $c({\rm H}\beta)$=0.30$\pm$0.04 derived from the observed H$\alpha$/H$\beta$ ratio for case B recombination. This value of the logarithmic extinction coefficient is consistent with the reddening of $E(B-V) =$ 0.2 derived by \citet{Eracleous_etal02}. \begin{figure*}[t] \begin{center} \includegraphics[bb=38 386 585 710,width=0.90\linewidth]{ALBIREO_spec.eps} \caption{ One-dimensional spectra of the central ring ({\it top}) and bipolar lobes ({\it bottom}) of Kn\,26. } \label{fig.alb} \end{center} \end{figure*} The line ratios listed in Table~\ref{tab.flux} for the central ring are generally consistent with those presented by \citet{Eracleous_etal02}, but we note that the intensities of the [S~{\sc ii}] $\lambda\lambda$6716,6731 lines relative to H$\beta$ in our spectrum are $\approx$6 times lower. An inspection of the spectrum of Lan\,384 presented by \citet{Eracleous_etal02} suggests that the emission line strengths for the [S~{\sc ii}] $\lambda\lambda$6716,6731 lines listed in their Table~3 are erroneous.
We also remark that the [O~{\sc iii}] $\lambda$5007/H$\beta$ and He~{\sc ii} $\lambda$4686/H$\beta$ intrinsic intensity ratios that can be derived from the emission line strengths listed in Table~3 of \citet{Eracleous_etal02}, $\approx$5.0 and $\approx$0.6, respectively, imply a higher excitation than that of our spectrum of the central ring of Kn\,26 ($\approx$4.0 and $\approx$0.26, respectively). These differences reflect the higher excitation of the central regions of Kn\,26 along the axis of the major bipolar lobes (see Fig.~\ref{img1}-{\it top}), which were preferentially registered by the long slit used by \citet{Eracleous_etal02} in their spectroscopic observations. At any rate, the relatively high [O~{\sc iii}] $\lambda$5007/H$\beta$ and He~{\sc ii} $\lambda$4686/H$\beta$ intrinsic intensity ratios found in both studies are typical of PNe rather than of H~{\sc ii} regions. The nebular analysis software ANNEB \citep{Olguin_etal11}, which integrates the NEBULAR package of IRAF/STSDAS \citep{SD95}, was then used to derive the physical conditions and nebular abundances of Kn\,26 listed in Table~\ref{tab.chem}. The NEBULAR package uses a 5-level atom approximation to compute the electron temperature, density, and ionic abundances of low-density nebular gas for the most important heavy atoms. The helium ionic abundances were derived following the method described by \citet{VKL98}, including a correction for collisional effects \citep{Clegg87,BSS99}. Since only one or a few ionization stages of the heavy elements are observed in the optical spectrum, ionization correction factors have been adopted to compute the elemental abundances \citep{KB94}. The density-sensitive [S~{\sc ii}] $\lambda$6716/$\lambda$6731 ratio implies a low density for the nebula, $\approx$360 cm$^{-3}$. Such a low electron density, typical of the bipolar lobes of PNe, supports the idea that the apparent ring around the central star is not a real, dense ring, but an effect caused by the projection of the bipolar lobe edges. The electron temperature derived from the [N~{\sc ii}] emission lines, $\approx$9900~K, is notably lower than the temperature of 15000~K derived by \citet{Eracleous_etal02} from the [O~{\sc iii}] emission lines\footnote{ Unfortunately, we cannot reproduce the determination of this temperature because the notable brightness of the Hg~{\sc i} $\lambda$4358~\AA\ sky line at the OSN, combined with the $\approx$4.7~\AA\ spectral resolution of our spectra, precludes an accurate measurement of the intensity of the auroral [O~{\sc iii}] line at 4363~\AA. }. Since our slit maps regions of lower excitation than that used by \citet{Eracleous_etal02}, the temperature of 9900~K has been used for the determination of the ionic abundances of the central region of Kn\,26 listed in Table~\ref{tab.chem}. Compared to other PNe, the chemical abundances of Kn\,26 would place it among the type I PNe for its high He/H ratio, but its N/O ratio is low for PNe of this type and would rather classify it as a type II PN\footnote{ A type III classification is precluded because the peculiar velocity of Kn\,26, i.e., the difference between its radial velocity and that expected on the basis of pure circular motion around the Galactic center for sensible distances in the range 1--6 kpc, is smaller than 60~km~s$^{-1}$. } \citep{Peimbert78}. The Ne, S, and Ar to O ratios do not show any obvious abundance anomaly with respect to other PNe \citep{KHM03,HKB04}.
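As an illustration of these diagnostics, the following Python sketch reproduces the dereddening and the density and temperature determinations, with the PyNeb package standing in for the NEBULAR/ANNEB machinery actually employed; the observed H$\alpha$/H$\beta$ ratio quoted in the sketch is a hypothetical value consistent with $c({\rm H}\beta)$=0.30.
\begin{verbatim}
# Sketch of the nebular diagnostics, with PyNeb standing in for the
# NEBULAR/ANNEB packages used in the text.
import numpy as np
import pyneb as pn

# Logarithmic extinction from the observed Halpha/Hbeta ratio
# (case B intrinsic value 2.86; f(Halpha) = -0.298 from Table 2).
r_obs = 3.51                              # hypothetical observed ratio
c_hbeta = np.log10(r_obs / 2.86) / 0.298  # ~0.30, as derived above

# [S II] 6716/6731 -> electron density at Te ~ 9900 K.
S2 = pn.Atom('S', 2)
ne = S2.getTemDen(41.9 / 35.2, tem=9900., wave1=6716, wave2=6731)

# [N II] (6548+6584)/5755 -> electron temperature at ne ~ 360 cm^-3.
N2 = pn.Atom('N', 2)
te = N2.getTemDen((72.9 + 238.) / 3.2, den=360.,
                  to_eval='(L(6548) + L(6584)) / L(5755)')
print(ne, te)                             # ~360 cm^-3, ~1e4 K
\end{verbatim}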
We emphasize that if the higher electron temperature derived by \citet{Eracleous_etal02} were to be used, then the helium abundances would increase by 5\%. Furthermore, the He$^+$/H$^+$ abundance implied by the line strength of He~{\sc i} $\lambda$5876 \AA\ for the bipolar lobes is also high, 0.20$\pm$0.03. We are thus confident in the determination of the helium abundances of Kn\,26. \begin{table} \caption{Intrinsic Line Intensity Ratios} \label{tab.flux} \centering \begin{tabular}{lrcc} \hline\hline \multicolumn{1}{c}{Line ID} & \multicolumn{1}{c}{$f(\lambda)$} & \multicolumn{1}{c}{Central Ring} & \multicolumn{1}{c}{Bipolar Lobes} \\ \hline $\lambda$3726+3729 [O~{\sc ii}] & 0.322 & 335$\pm$12 & 495$\pm$40 \\ $\lambda$3869 [Ne~{\sc iii}] & 0.291 & 49.2$\pm$3.3 & $\dots$ \\ $\lambda$3889 H8+He~{\sc i} & 0.286 & 23.0$\pm$1.9 & $\dots$ \\ $\lambda$3970 [Ne~{\sc iii}]+H$\epsilon$ & 0.203 & 26.63$\pm$1.91 & $\dots$ \\ $\lambda$4069+76 [S~{\sc ii}] & 0.238 & 14.8$\pm$2.0 & $\dots$ \\ $\lambda$4101 H$\delta$ & 0.230 & 24.6$\pm$1.4 & $\dots$ \\ $\lambda$4340 H$\gamma$ & 0.157 & 49.9$\pm$2.0 & $\dots$ \\ $\lambda$4471 He~{\sc i} & 0.115 & 4.5$\pm$0.6 & $\dots$ \\ $\lambda$4686 He~{\sc ii} & 0.050 & 26.0$\pm$1.0 & $\dots$ \\ $\lambda$4861 H$\beta$ & 0.000 & 100.0$\pm$2.2 & 100.0$\pm$3.6 \\ $\lambda$4959 [O~{\sc iii}] & $-$0.020 & 130.7$\pm$2.6 & 106.7$\pm$3.7 \\ $\lambda$5007 [O~{\sc iii}] & $-$0.038 & 403$\pm$7 & 319$\pm$9 \\ $\lambda$5198+5200 [N~{\sc i}] & $-$0.104 & 3.9$\pm$0.4 & $\dots$ \\ $\lambda$5755 [N~{\sc ii}] & $-$0.131 & 3.2$\pm$0.4 & $\dots$ \\ $\lambda$5876 He~{\sc i} & $-$0.203 & 17.8$\pm$0.7 & 28$\pm$4 \\ $\lambda$6300 [O~{\sc i}] & $-$0.263 & 52.0$\pm$1.9 & 92$\pm$5 \\ $\lambda$6364 [O~{\sc i}] & $-$0.271 & 16.1$\pm$1.0 & 38$\pm$4 \\ $\lambda$6548 [N~{\sc ii}] & $-$0.296 & 72.9$\pm$2.2 & 91$\pm$4 \\ $\lambda$6563 H$\alpha$ & $-$0.298 & 285$\pm$8 & 285$\pm$13 \\ $\lambda$6584 [N~{\sc ii}] & $-$0.300 & 238$\pm$7 & 275$\pm$12 \\ $\lambda$6678 He~{\sc i} & $-$0.313 & 6.6$\pm$0.4 & $\dots$ \\ $\lambda$6716 [S~{\sc ii}] & $-$0.318 & 41.9$\pm$1.5 & 47.4$\pm$2.4 \\ $\lambda$6731 [S~{\sc ii}] & $-$0.320 & 35.2$\pm$1.3 & 32.8$\pm$1.8 \\ $\lambda$7065 He~{\sc i} & $-$0.364 & 8.8$\pm$0.5 & $\dots$ \\ $\lambda$7136 [Ar~{\sc iii}] & $-$0.374 & 16.0$\pm$0.7 & $\dots$ \\ \hline \end{tabular} \end{table} \begin{table} \caption{Physical Conditions and Abundances of the Central Ring} \label{tab.chem} \centering \begin{tabular}{lc} \hline\hline \multicolumn{1}{l}{Parameter} & \multicolumn{1}{c}{Value} \\ \hline $T_e$ [N~{\sc ii}] & 9900$\pm$660 K \\ $N_e$ [S~{\sc ii}] & 360$\pm$100 cm$^{-3}$ \\ \hline $N$(He$^+$)/$N$(H$^+$) & 0.130$\pm$0.005 \\ $N$(He$^{++}$)/$N$(H$^+$) & 0.030$\pm$0.002 \\ $N$(O$^0$)/$N$(H$^+$) & (1.0$\pm$0.3)$\times$10$^{-5}$ \\ $N$(O$^+$)/$N$(H$^+$) & (1.4$\pm$0.5)$\times$10$^{-4}$ \\ $N$(O$^{++}$)/$N$(H$^+$) & (1.5$\pm$0.4)$\times$10$^{-4}$ \\ $N$(N$^0$)/$N$(H$^+$) & (5.4$\pm$2.7)$\times$10$^{-6}$ \\ $N$(N$^+$)/$N$(H$^+$) & (4.7$\pm$1.0)$\times$10$^{-5}$ \\ $N$(S$^+$)/$N$(H$^+$) & (1.9$\pm$0.4)$\times$10$^{-6}$ \\ $N$(Ar$^{++}$)/$N$(H$^+$) & (1.5$\pm$0.3)$\times$10$^{-6}$ \\ $N$(Ne$^{++}$)/$N$(H$^+$) & (5.2$\pm$1.9)$\times$10$^{-5}$ \\ \hline He/H & 0.160$\pm$0.005 \\ O/H & (3.1$\pm$0.8)$\times$10$^{-4}$ \\ N/H & (1.1$\pm$0.5)$\times$10$^{-4}$ \\ S/H & (1.4$\pm$0.5)$\times$10$^{-5}$ \\ Ar/H & (3.0$\pm$0.9)$\times$10$^{-6}$ \\ Ne/H & (1.3$\pm$0.8)$\times$10$^{-4}$ \\ \hline N/O & 0.34$\pm$0.18 \\ \hline \end{tabular} \end{table} \subsection{The central star of Kn\,26} The star
Lan\,384 is detected in all narrow-band images of Kn\,26 inside the elliptical ring-like structure at its center (Fig.~\ref{img1}). Its optical ALBIREO spectrum and spectral energy distribution (SED), additionally including available optical and 2MASS and \emph{WISE} IR photometric measurements (Figure~\ref{fig.sed}), show that the flux of the star rises bluewards from the near-IR $J$ and $H$ bands to the bluest region of the optical spectrum, in agreement with \citet{Lanning00}, who first recognized Lan\,384 to be a blue star. The location of Lan\,384 at the center of Kn\,26 and its blue color strongly suggest that it is indeed the central star of the PN. Intriguingly, the star is not located exactly at the center of this ring, as clearly revealed by the insets in the images shown in Figure~\ref{img1}. We measure a displacement of the central star of Kn\,26 of $\approx$0\farcs9 along the direction of the ring's major axis at PA $\approx$10\degr. \begin{figure}[t] \begin{center} \includegraphics[bb=55 215 540 680,width=1.0\linewidth]{new_sed.eps} \caption{ Spectral energy distribution (SED) of the central star of Kn\,26 including the optical ALBIREO spectrum (histogram) and broad-band optical, near-IR, and \emph{WISE} W1 3.4 $\mu$m and W2 4.6 $\mu$m photometric measurements (diamonds). The optical ALBIREO spectrum is shown in further detail in the inset. In both plots, the smooth solid line represents the best fit to the optical spectrum by a black body of temperature 70000~K extincted by $A_V$=0.65 mag $\equiv$ $c$(H$\beta$)=0.30. As discussed in the text, the emission excess in the 2MASS $K_s$ and \emph{WISE} W1 and W2 bands with respect to this fit is due to the contribution of nebular emission to these bands, and thus these measurements should be regarded as upper limits to the stellar emission. } \label{fig.sed} \end{center} \end{figure} The 2MASS $K_s$ and \emph{WISE} W1 (3.4 $\mu$m) and W2 (4.6 $\mu$m) bands imply an obvious near-IR excess in the SED of Lan\,384. An inspection of these images, however, reveals that these photometric measurements are contaminated by extended nebular emission. Using 2MASS $K_s$ photometric measurements of the stars in the field of view, we have calibrated our narrow-band $K$ continuum image and derived for Lan\,384 a flux density in this band $\approx6$ times lower than that implied by the 2MASS $K_s$ magnitude. Contrary to the 2MASS $K_s$ photometric measurement, the flux density in the $K_c$ filter follows a decline similar to that shown by the 2MASS $J$ and $H$ bands. The available data can be used to estimate the effective temperature of Lan\,384. In the spectral range covered by the SED in Figure~\ref{fig.sed}, the spectrum of the central star of a PN can be well described by a simple black-body model. Adopting a color excess of $E(B-V)=0.2$, consistent with the optical extinction of the nebular spectrum derived in the previous section \citep[see also][]{Eracleous_etal02}, and the extinction law of \citet{CCM89}, the effective temperature of the black body that best fits the optical spectrum is $\sim$70000~K (solid line in the inset of Fig.~\ref{fig.sed}).
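The reddened black-body description of the stellar continuum can be sketched as follows; the \texttt{extinction} Python package is assumed here as one possible implementation of the \citet{CCM89} law, and the model constrains only the spectral slope, not the absolute flux.
\begin{verbatim}
# Sketch of the reddened black-body fit to the optical spectrum,
# assuming the `extinction' package for the CCM89 law.
import numpy as np
import extinction

h, c, k = 6.626e-27, 2.998e10, 1.381e-16        # cgs constants

def planck_lambda(wave_aa, T):
    """Black-body B_lambda (arbitrary units), wavelengths in Angstrom."""
    wl = wave_aa * 1e-8                          # Angstrom -> cm
    return 1.0 / (wl**5 * (np.exp(h * c / (wl * k * T)) - 1.0))

wave = np.linspace(3600., 7200., 200)            # ALBIREO range (Angstrom)
a_lambda = extinction.ccm89(wave, 0.65, 3.1)     # A_V = 0.65, R_V = 3.1
model = planck_lambda(wave, 7.0e4) * 10.0**(-0.4 * a_lambda)
# `model', rescaled to the observed flux level, approximates the smooth
# solid line in the SED figure; only the spectral slope is constrained.
\end{verbatim}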
This model also provides a reasonable description of the photometric measurements in the $B$, $I$, $J$, $H$, and $K_c$ bands, and its temperature is consistent with the detection of the nebular He~{\sc ii} $\lambda$4686 \AA\ emission line, which implies that about 20\% of the helium is doubly ionized in the central regions of Kn\,26 (Tab.~\ref{tab.chem}) and requires effective temperatures $\gtrsim$60000~K \citep[e.g.,][]{Pottasch84}. We note, however, that this temperature should be regarded as a rough estimate because of the limited coverage in the blue region of the spectrum used to carry out this fit. Dedicated UV and high-resolution optical spectrophotometric observations of Lan\,384 would be very valuable for a more reliable determination of its effective temperature. \section{Discussion} The spectroscopic information, excitation, presence of a hot central star, and morphology and physical structure clearly confirm the PN nature of Kn\,26. Therefore, we propose its identification as PN\,G084.7--08.0, following the standard rules of nomenclature for these objects. The morphological subclass of quadrupolar PNe was introduced by \citet{Manchado_etal96} to describe objects that show one single equatorial waist and two pairs of bipolar lobes with symmetry axes oriented along different directions on the plane of the sky. Originally this subclass included K\,3-24, M\,1-75, and M\,2-46, and very likely M\,3-28 and M\,4-14. Since then, the sample of quadrupolar PNe has grown with time up to ten objects \citep[][this paper]{Manchado_etal96,GM98,CP00,Mampaso_etal06,Vazquez08,Hsia10}, but there are certainly more, because some PNe are prone to be classified as quadrupolar \citep[e.g., NGC\,4361 and NGC\,6072,][]{MA01,Kwok_etal10}, whereas other morphological subclasses are closely related \citep[e.g., the Starfish Nebulae,][]{Sahai00}. To date, only one proto-PN, IRAS\,19475+3119, has been reported to have a quadrupolar morphology \citep{Sahai07}. The different orientation of the two pairs of bipolar lobes in quadrupolar PNe has been kinematically confirmed to occur also along the line of sight for M\,2-46 \citep{Manchado_etal96}, NGC\,6881 \citep{GM98}, NGC\,6309 \citep{Vazquez08}, and M\,1-75 \citep{Santander-Garcia10}. The change in the direction of the symmetry axis immediately suggests the rotation of the engine collimating the bipolar outflow that shapes the bipolar lobes. Since this change in direction can be naturally ascribed to the precession of a binary system, quadrupolar PNe have been considered archetypes of PNe formed after the evolution of the central star in a binary system \citep{Manchado_etal96}. In a review of the properties of a sample of quadrupolar PNe, \citet{Mampaso_etal06} concluded that there is little direct evidence of binarity among these sources, which is otherwise a common problem in searches for binarity among PNe \citep{DeMarco09}. One possible exception is the central star of IPHAS\,PN-1, whose near-IR excess provides tantalizing evidence of a binary system \citep{Mampaso_etal06}. The 2MASS $K_s$ and \emph{WISE} W1 and W2 photometric measurements of Lan\,384 suggest a near-IR excess (Fig.~\ref{fig.sed}), but a careful examination of the images led us to conclude that these photometric data are contaminated by extended nebular emission. A more accurate determination of the stellar flux using our $K$ continuum image confirms that its emission level is consistent with the Rayleigh-Jeans tail of a black-body spectral distribution.
Intriguingly, the central star of Kn\,26 is clearly misplaced with respect to the center of the ring-like feature. Central stars displaced from the center along the minor axis of the shell are observed in many PNe \citep[e.g., MyCn\,18 and Hu\,2-1;][]{Sahai99,Miranda01} and can be interpreted as evidence for a binary central star \citep{SRH98}. \citet{Mampaso_etal06} also investigated the nebular abundances of their sample of quadrupolar PNe. They concluded that these sources show a great variety of chemical abundances that generally do not match the predictions for the surface chemical enrichment of single central stars \citep{Marigo_etal03}. The nebular chemical abundances of Kn\,26 do not match those predictions either. The abundances of oxygen, nitrogen, neon, and other heavy elements are consistent with those of the Sun and the solar neighborhood \citep{Asplund_etal09,NP12}, i.e., they do not seem to reflect a peculiar chemical enrichment. Moreover, the N/O ratio is low, $\approx$0.34, typical of type II PNe. In contrast, the helium abundance of Kn\,26 is relatively high, with He/H$\approx$0.16, which is more common among type I PNe. Low values of the N/O ratio combined with high helium abundances are not typically seen in PNe, even in those exhibiting a bipolar morphology \citep{Stan_etal06}, but exceptions can be found in the literature \citep[e.g., the sample of PNe towards the Galactic Bulge described by][]{EC01}. Symbiotic stars usually present extremely high helium abundances and low N/O ratios \citep[e.g.,][]{LC05}, as the increased mass loss caused by interactions with the binary companion can curtail the dredge-up of carbon and nitrogen to the envelope, affecting the surface chemical enrichment \citep{Lu_etal08}. We propose that similar processes may have occurred in Kn\,26, resulting in the shortening of the AGB evolution of its progenitor and in the rapid stripping of the stellar envelope to expose helium-rich regions. The two pairs of bipolar lobes are interwoven in such a way that the small bipolar lobes can be described as a protuberance of the surface of the major bipolar lobes. This significant difference in size between the two pairs of bipolar lobes of Kn\,26 seems to imply some time lapse between them, even though they have very similar kinematical ages. We can envisage the large bipolar lobes forming first; then, shortly afterwards, a bipolar ejection along a different direction would have blown out sections of the inner regions of the large bipolar lobes to create the minor bipolar lobes. The minor lobes would have initially expanded into a medium already evacuated by the large bipolar lobes, but then interacted with the walls of those lobes, resulting in the brightening of the emission and the distorted velocity field at the tips of the minor bipolar lobes. At any rate, the time lapse between the ejections of the two pairs of bipolar lobes is presumably much shorter than the kinematical age of the lobes, i.e., 1100$\times d$ yr. There are other quadrupolar PNe \citep[e.g., M\,1-75,][]{Santander-Garcia10} where the two pairs of bipolar lobes formed in a simultaneous ejection. This is certainly also the case for the proto-PN IRAS\,19475+3119, as its young age and the similar sizes of its bipolar lobes necessarily imply a small time lapse between the two pairs of bipolar lobes \citep{Sahai07}. On the other hand, there are quadrupolar PNe \citep[e.g., M\,2-46,][]{Manchado_etal96} for which the time lapse between ejections can reach up to a few thousand years.
\section{Conclusions} We have used optical and near-IR narrow-band images and optical intermediate- and high-dispersion spectroscopic observations to investigate the physical structure and chemical abundances of Kn\,26. The morphological and kinematical information gathered from these observations reveals that Kn\,26 is a quadrupolar PN, i.e., it has two pairs of bipolar lobes. The two pairs of bipolar lobes have very similar kinematical ages, although the larger size of the major bipolar lobes and the evidence of interaction at the tips of the minor bipolar lobes indicate that the latter formed during a second bipolar ejection. This second ejection was probably close in time to the first one that formed the major bipolar lobes, implying a rapid change of the preferential direction of the collimating mechanism. The chemical abundances of Kn\,26 are unusual, with an N/O ratio typical of type II PNe, but a high helium abundance, typical of type I PNe. These chemical abundances cannot be easily reproduced by models of single star evolution, but seem to be typical of symbiotic stars. We suggest that a companion star could indeed have shortened the AGB evolution of the progenitor star of Kn\,26 and produced the anomalous chemical abundances after going through a common envelope phase; however, no evidence for a companion star is provided by the optical and IR SED of the central star. The comparison of Kn\,26 with other quadrupolar PNe reveals a wide variety of properties. The time lapse between the ejections of the two pairs of bipolar lobes may be short, making them almost coeval, or longer than the dynamical age of the bipolar lobes, of a few thousand years. The chemical abundances are also very different among the members of this group, suggesting different progenitors or evolutionary paths. These results confirm previous conclusions that the subclass of quadrupolar PNe is a rich, far-from-simple phenomenon. \begin{acknowledgements} MAG, LFM and GR-L are partially funded by grant AYA2008-01934 of the Spanish Ministerio de Ciencia e Innovaci\'on (MICINN), which includes FEDER funds. RV, MAG, and GR-L acknowledge support from grant IN109509 (PAPIIT-DGAPA-UNAM). MAG also acknowledges support from grant AYA 2011-29754-C03-02, and LFM acknowledges partial support from grant AYA2011-30228-C03.01 of the Spanish MINECO and from grant IN845B-2010/061 of the Xunta de Galicia, all of them partially funded by FEDER funds. GR-L acknowledges support from CONACyT (grant 177864) and PROMEP (Mexico). Finally, we would like to thank the OAN-SPM staff and the CATT for time allocation, L.\ Olgu\'\i n for fruitful discussions and assistance in the use of the ANNEB package, and an anonymous referee whose comments helped us in the analysis and interpretation of the nebular spectrum of Kn\,26. \end{acknowledgements}
\section{Introduction} \label{sec:introduction} Unlike other branches of physics, astrophysics cannot apply the third pillar of the scientific method, experimentation. After observing nature and conjecturing laws that govern its behavior, astronomers cannot carry out experiments that confirm or falsify the theory. Experimentation is then substituted by new observations conducted to check the theoretical predictions. The intrinsic inability to directly measure the properties of celestial objects adds a special difficulty to astrophysical tasks. We do not have thermometers, weighing scales, tachometers, or magnetometers that can directly gauge the physical conditions in the object. Rather, we have to be content with indirect evidence or inferences obtained from the only real astrophysical measurements, namely those related to light. The intensity and polarization properties for visible light, the associated electric field for radio frequencies, or the energy or momentum of high-energy photons, as functions of space, wavelength, and time, can be fully quantified with \emph{errors} that are directly related to the accuracy of the instruments.\footnote{Actual experiments have also been devised to directly detect neutrinos, other particles, and gravitational waves, but such projects are way beyond the scope of this paper.} From these real measurements, the observational astronomer must deduce or infer the physical quantities that characterize the object, with \emph{uncertainties} that depend on both the experimental errors and the assumptions that allow him/her to translate light-derived quantities into the object quantities. Observational astrophysics could hence probably be defined as the art of \emph{inferring} the physical quantities of heavenly bodies from real measurements of the light received from them. Somehow, these astrophysical tasks can be mathematically seen as a mapping between two spaces, namely the space of observables and that of the object's physical quantities. The success of the astronomer then depends on his/her ability (\emph{the art}) to characterize not only the mapping but also the two spaces. On the observable side, what really matters is the specific choice of measurable parameters and how well they are measured; that is, how many light parameters are obtained (the signal) and what the measurement errors are (the noise). On the side of the object's physical conditions, what is substantive is the selection of quantities to be inferred. Of course, the finer the ---affordable--- detail in describing any of the two spaces, the better. The keyword is \emph{affordable} because infinite resolution does not exist in the real world: a compromise is always in order between the number of available observables and the number of inferred physical quantities. The representation of both spaces therefore needs approximations that constrain the sub-spaces to be explored and how they are described: which Stokes parameters, with which wavelength and time sampling, and with which instrument profile and resolution on the one hand, and, on the other, which quantities and how they are assumed to vary with time and space in the object. The mapping, in turn, should represent the physics that generates the observables from the given physical conditions in the object, and thus illustrates the dependence of the observables on given physical quantities.
Understanding this physics is crucial if the researcher is to select observables that are as ``orthogonal'' as possible; that is, that depend mostly on one physical quantity and not on the others. Certainly, the physics mapping needs approximations as well. These approximations depend a great deal on the observables and on the object's physical quantities; for example, the assumptions cannot be the same if you have fully sampled Stokes profiles or just a few wavelength samples; different hypotheses apply for physical quantities that do or do not vary with depth in the atmosphere, or that are expected to present a given range of magnitudes. Therefore, mappings may include (often over-simplistic) one-dimensional calibration curves between a given observable parameter and a given physical quantity, or complicated multidimensional relationships between observables and quantities that require the definition of a metric or distance in at least one of the two spaces. Even in the simplest situations, the relationship between observables and quantities does not have to be linear and may depend on the specific sub-space of the physical parameters. For example, a calibration curve based on the weak-field approximation may apply for a given range of magnetic fields but saturate for stronger ones (see Sections~\ref{sec:weakfield} and \ref{sec:weakatmosphere}). When the problem is multidimensional, however, covariances appear because single observables rarely depend on just a single quantity (see Section~\ref{section:techniques}). For example, a given spectral line Stokes $V$ profile can seemingly grow or weaken by the same amount owing to changes in temperature or in magnetic field strength \citep[e.g.,][]{1996SoPh..164..169D}. An example can be seen in Figure~\ref{fig:stokes}, where two apparently equal $V$ profiles come from two different atmospheres. With all these ingredients at hand, the astrophysical analysis of observations is a non-linear, fully involved, topological task where many decisions have to be made (\emph{the art}) and, hence, cannot be taken for granted. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{figajustefalso}} \caption{Left panel: Open circles: Stokes $V$ profile in units of the continuum intensity of the Fe~{\sc i} line at 630.25 nm synthesized in a model atmosphere in hydrostatic equilibrium, 2000~K cooler than the \cite{1994ApJ...436..400D} model, with a constant longitudinal magnetic field of 800~G, a gradient in velocity from 2~\mbox{$\:$km$\,$s$^{-1}$} at the bottom to 0~\mbox{$\:$km$\,$s$^{-1}$} at the top of the photosphere, and a macroturbulence velocity of 1~\mbox{$\:$km$\,$s$^{-1}$}. Solid line: Stokes $V$ profile of the same line (normalized the same way), synthesized in a model atmosphere 305~K hotter than the former, with a 270~G weaker field, and with a higher macroturbulence velocity of 2.06~\mbox{$\:$km$\,$s$^{-1}$}. Right panel: $T$ and $B$ stratifications for the two models.} \label{fig:stokes} \end{figure}} The techniques by which astronomers have obtained information about the physical conditions in the object have evolved in parallel with technological advancements; that is, with the available means we have of gathering such information.
The community has gradually enhanced its knowledge from medium-band measurements including one or several spectral lines to very fine wavelength sampling of the four Stokes profiles of single or multiple spectral lines; from old curves of growth for equivalent widths to highly sophisticated techniques that include the solution of the radiative transfer equation (RTE). The finer the information, the more complete the physical description. Following \citet{2001ASPC..236..487S}, let us consider the simplest case of having a single observable parameter, the Doppler displacement with respect to the rest position of the spectral line, $\Delta \lambda$, and a single physical quantity to derive, the line-of-sight (LOS) velocity, $v_{\rm LOS}$. Imagine that we measure $\Delta \lambda$ by finding the minimum (or the maximum in the case of an emission line) of the intensity profile. The one-to-one mapping between the one-dimensional space of observables ---that containing all possible Doppler displacements--- and the one-dimensional space of physical quantities ---that of LOS velocities--- is given by the Doppler formula \begin{equation} \label{eq:doppler} v_{\rm LOS} = \frac{\Delta\lambda}{\lambda_0} c, \end{equation} where $\lambda_0$ stands for the vacuum rest wavelength of the line and $c$ for the speed of light. This simple inference relationship requires at least three implicit physical assumptions for the Doppler displacement to be properly defined and measured; namely that a) the solar feature is spatially resolved, b) the line is in pure absorption (or pure emission), and c) $v_{\rm LOS}$ is constant along the LOS. First, if we have unresolved structures we cannot ascribe the inferred velocity to any of them. Second, lines with core reversals, either in absorption or in emission, do not qualify for the extremum-finding method. And third, as soon as we have an asymmetric profile, $\Delta\lambda$ can no longer be properly defined for the line as a whole but only for a given depth within the profile, and then the mapping in Eq.\ (\ref{eq:doppler}) immediately loses its meaning. While in the case of a constant velocity we properly infer that velocity, in the presence of gradients we infer a value corresponding only to the ---in principle unknown--- layers where the core of our line has been formed (typically the highest layers of the atmosphere). \emph{We measure a velocity but we do not know which one.} Strictly speaking, the same measurement corresponds to different physical quantities depending on the assumptions. Of course, we could complicate our problem a little and try to determine the stratification of LOS velocities with height, or simply estimate a gradient, by measuring the so-called bisector, the geometric locus of those points equidistant from both wings of the profile at a given depth. At that point, our spaces have increased their dimensions and Eq.\ (\ref{eq:doppler}) is no longer the sole ingredient of our mapping, because we must add some more physical assumptions to interpret the different displacements of the bisector in terms of velocities at different heights in the atmosphere. Hence, depending on the assumed physics, the quantitative results may change. This simple example has been used to illustrate that even the most basic inference depends on physical assumptions. This is an inherent property of astrophysical measurement and no one can escape from it: the same observable can mean different things depending on the assumed underlying physics.
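As a concrete illustration of this simplest of mappings, the extremum-finding measurement and Eq.\ (\ref{eq:doppler}) can be condensed into a few lines of Python; this is a sketch under the three assumptions above, and the parabolic refinement of the minimum is our own illustrative choice, not prescribed by the text.
\begin{verbatim}
# Sketch of the simplest inference: locate the intensity minimum
# (refined with a parabola through the three samples around it) and
# map the Doppler displacement to v_LOS with the Doppler formula above.
import numpy as np

def v_los(wave, intensity, lambda0, c=2.998e5):
    """LOS velocity (km/s) from the core of an absorption line."""
    i = int(np.argmin(intensity))
    y0, y1, y2 = intensity[i-1:i+2]
    dx = wave[1] - wave[0]               # uniform wavelength sampling
    lam_min = wave[i] + 0.5*dx*(y0 - y2)/(y0 - 2.0*y1 + y2)
    return (lam_min - lambda0)/lambda0 * c
\end{verbatim}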
Most of the criticisms of the inversion techniques that are reviewed in this paper often come from this lack of uniqueness of the results. Many authors claim that the inversion of the RTE is an ill-posed problem. This being true, one should realize that astrophysics itself is indeed ill-conditioned, and this is a fact we have to deal with, willingly or not. The physics connecting the object quantities with the observable parameters is of paramount significance and deserves a little consideration at this point. Radiative transfer is the discipline encompassing the generation and transport of electromagnetic radiation through the solar (stellar) atmosphere. Hence, the mapping between the two spaces will be based upon it and depend on its degrees of approximation. The specification of the radiation field through a scattering atmosphere was first formulated as a physical problem by \cite{1871PhilMag..41...107,1871PhilMag..41...274,1881PhilMag..12...81,1899PhilMag..47...375}. In the astrophysical realm, the problem was posed in the works by \cite{1905ApJ....21....1S} and \cite{1906GottNach...41S} without taking polarization into account. After that, although it went mostly unnoticed by the astrophysical community, \citet{1929AnnPhys12..23S} presented a theory of anisotropic absorption that is nothing but a rigorous formulation of the radiative transfer equation. Very importantly, he used the formalism proposed by \citet{1852TransCamb..9...399S} to deal with partially polarized light. It was not, however, until the works by \citet{1946ApJ...103..351C,1946ApJ...104..110C,1947Apj...105..424C} that the transfer problem of polarized light was established as an astrophysical problem in its own right. The Stokes formalism has regularly been used since then in the astronomical literature. After Hale's (\citeyear{hale1908}) discovery of sunspot magnetic fields, the interpretation of the solar (stellar) spectrum of polarized light became necessary, and a full theory has been developed since the mid-1950s. The first modern formulation of an equation of radiative transfer for polarized light was presented by \cite{1956PASJ....8..108U}, who also provided a solution in the simplified case of a Milne--Eddington (ME) atmosphere. Only absorption processes were taken into account, and a complete description had to wait until the works by \cite{1962IZKry..27...148R,1962IzKry..28...259R,1967IzKry..37...56R}, who also included dispersion effects (the so-called magneto-optical effects). These two derivations were phenomenological and somewhat heuristic. A rigorous derivation of the radiative transfer equation (RTE) based on quantum electrodynamics was obtained by \cite{landi+landi1972}. Later, four derivations of the RTE from basic principles of classical physics were published by \cite{jefferies+etal1989}, \citeauthor{1991sopo.work..416S} (\citeyear{1991sopo.work..416S}; see also \citeauthor{1994KAP...book...S} \citeyear{1994KAP...book...S}), \citeauthor{1992soti.book...71L} (\citeyear{1992soti.book...71L}; see also \citeauthor{2004ASSL..307.....L} \citeyear{2004ASSL..307.....L}), and \cite{2003isp..book.....D}. A discussion of the RTE and the several assumptions used in various available inference techniques is deferred to Section~\ref{section:assumptions}. Certainly, any inference has to be based on solutions of the RTE, because it relates the observable Stokes spectrum with the unknowns of the problem; namely, the physical quantities characterizing the state of the atmosphere they come from.
No matter how simplified such solutions may be, it is natural to compare the observations with theoretical calculations in prescribed sets of physical quantities. The comparison of observational and synthetic parameters results in values for the sought-for quantities that may be refined in further iterations by changing the theoretical prescriptions. This trial-and-error method can be practical when the problem is very simple (involving a few free parameters) but becomes unsuitable for practical use if the number of free parameters is large. Even automated trial-and-error ---i.e., Monte Carlo--- methods may fail to converge to a reliable set of physical conditions in the medium. Some more educated techniques are needed to finally work out that convergence between observed and synthetic parameters. Generally speaking, any method in which information about the integrand of an integral equation is obtained from the resulting value of the integral is called an inversion method. In our particular case, it is straightforward to write the synthetic Stokes spectra as an integral involving a kernel that depends on the physical conditions of the atmosphere (see Eq.\ \ref{eq:rteformalsolution}). In fact, the emergent formal solution of the RTE is the most basic type of integral equation, namely a Fredholm equation of the first kind, because both integration limits are fixed. Consequently, we will call inversion codes or inversion techniques those methods that (almost) automatically succeed in finding reliable physical quantities from a set of observed Stokes spectra, because we shall understand that they indeed automatically solve that integral equation. There is a whole variety of flavors depending on the several hypotheses that can be assumed, but all of them share the characteristic feature of automatically minimizing a distance in the topological space of observables. The idea had already been clearly explained in the seminal work by \cite{1972lfpm.conf..227H}: ``Solve for \vector{B} on the basis of best fit of the observed profiles to the theoretical profiles''. And the free parameters for such a best fit were found through least-squares minimization of the profile differences. They obtained only an average longitudinal field component because their Stokes $Q$ and $U$ observations were not fully reliable and magneto-optical effects were not taken into account, but the fundamental idea underlying many of the current techniques can already be found in that very paper, including a simple two-component model to describe the possible existence of spatially unresolved magnetic fields. In a thorough study using synthetic Stokes profiles, \cite{auer+etal1977} proposed a new inversion method based on Unno's theory and tested its behavior in the presence of several realistic circumstances, such as asymmetric profiles, magnetic field gradients, magneto-optical effects, and unresolved magnetic features. This technique was later generalized by \cite{1984SoPh...93..269L} to include magneto-optical and damping effects. The numerical check of the code was fairly successful, but neither the original code by \cite{auer+etal1977} nor the new one by \cite{1984SoPh...93..269L} was applied to observations.
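The flavor of this least-squares strategy can be conveyed with a toy Python example; the weak-field Stokes $V$ model and all numerical values below are illustrative and do not correspond to any of the codes discussed in this review.
\begin{verbatim}
# Toy version of the 'best fit of observed to theoretical profiles'
# strategy: fit a weak-field Stokes V model, V = -C * B_los * dI/dlambda,
# by least squares. All numbers are illustrative.
import numpy as np
from scipy.optimize import least_squares

wave = np.linspace(-0.3, 0.3, 61)                 # AA from line center
I = 1.0 - 0.6*np.exp(-(wave/0.08)**2)             # toy intensity profile
dI = np.gradient(I, wave)
C = 1.0e-3                                        # toy calibration factor

rng = np.random.default_rng(0)
v_obs = -C*800.0*dI + rng.normal(0.0, 2.0e-3, wave.size)

fit = least_squares(lambda p: -C*p[0]*dI - v_obs, x0=[100.0])
print(fit.x[0])                                   # recovers ~800 G
\end{verbatim}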
Independently of the latter authors, the preliminary studies by \cite{1985NASCP2374..341S}, \cite{1985NASCP2374..342L}, and \cite{1985NASCP2374..306S} jelled into what has been one of the most successful ME inversion codes so far, that by \citet{skumanich+lites1987}, later extended by \citet{1988ApJ...330..493L} to mimic a chromospheric rise in the source function (see Section~\ref{sec:milne}). This code has been extensively used with observational data, most notably those obtained with the Advanced Stokes Polarimeter \citep{1992SPIE.1746...22E}. Based on the thin flux tube approximation, \citet{1990A&A...233..583K} proposed an inversion code for extracting physical information not from the Stokes profiles themselves but from several parameters calculated from $I$ and $V$ observations of a plage and a network. Two years later, \citet{1992A&A...263..339S} presented a new inversion code whereby, using the whole Stokes $I$ and $V$ profiles, they selected among a handful of prescribed temperature stratifications and inferred height-independent magnetic field strength and inclination, Doppler shift, filling factor (surface fraction in the resolution element covered by magnetic fields), macro- and microturbulent velocities, and some atomic parameters of the spectral line. The very same year, \citet{1992ApJ...398..375R} introduced SIR, an acronym for Stokes Inversion based on Response functions. Like the former codes, SIR ran a non-linear, least-squares, iterative Levenberg--Marquardt algorithm but with a remarkable step-forward feature: physical quantities characterizing the atmosphere were allowed to vary with optical depth. The increase of free parameters can generate a singularity problem: the variation of some atmospheric parameters may not produce any change on the synthetic spectra or, in other cases, different combinations of the perturbation of several parameters may produce the same change in the spectra. The success of SIR lies in regularizing the problem through a \emph{tailored} Singular Value Decomposition (SVD) method. This allows one, in principle, to look for any arbitrarily complex atmospheric stratification. The three components of the magnetic field, the LOS velocity, the temperature stratification, and the microturbulence may have any height profile. The code also infers height-independent macroturbulent velocity and filling factor. The possibility exists for also fitting some atomic parameters \citep[e.g.,][]{2001ApJ...558..830A}, but they are typically fixed in practice. The code can be applied to any number of spectral lines that are observed simultaneously. SIR has been successful in a large number of observing cases and its use is still spreading among the community. Following SIR's strategy (that is, using response functions, nodes, Levenberg--Marquardt, and SVD), an evolution of the \citet{1992A&A...263..339S} code called SPINOR was presented by \citet{1998A&A...336L..65F} that also allowed for height variations of the physical quantities and included the possibility of multi-ray calculations assuming the thin flux tube approximation. \citet{1997ApJ...491..993S} proposed an original inversion code under the MISMA (MIcro-Structured Magnetic Atmosphere) hypothesis (see Sections~\ref{sec:misma} and \ref{sec:mismaatmos}). In 2000, the codes by \citet[][NICOLE ---NLTE Inversion Code based on the Lorien Engine---]{2000ApJ...530..977S} and by \citeauthor{2000ApJ...535..475B} (\citeyear{2000ApJ...535..475B}; see also \citeyear{1997ApJ...478L..45B}) were presented. 
The first (based on an earlier code by \citeauthor{1998ApJ...507..470S} \citeyear{1998ApJ...507..470S}, which took neither polarization nor magnetic fields into account) included non-LTE radiative transfer (see Section~\ref{sec:NLTE}), and the second was specifically designed for analyzing Stokes $I$ and $V$ profiles in terms of the thin flux tube approximation by using an analytic shortcut for radiative transfer proposed by \citet[][see Section~\ref{sec:interlaced}]{1995A&A...294..855D}. For their part, \citet{rees+etal2000} proposed a Principal Component Analysis (PCA) inversion, which works by creating a database of synthetic Stokes profiles by means of an SVD technique. From such a database, \emph{eigenprofiles} are obtained that are later used as a basis for expanding the observed Stokes profiles. Hence, the description of observations can be made with the help of a few coefficients, thus speeding up the inversion process. One year later, LILIA (LTE Inversion based on Lorien Iterative Algorithm), a code with similar properties to SIR, was presented by \citet{2001ASPC..236..487S}, and FATIMA (Fast Analysis Technique for the Inversion of Magnetic Atmospheres), a PCA code, was introduced by \citet{2001ApJ...553..949S}. A different technique was proposed by \citet[][see also \citeauthor{2003NN.....16..355S} \citeyear{2003NN.....16..355S}]{2001ASPC..236..511C} that used artificial neural networks (ANNs), whereby the system is trained with a set of synthetic Stokes profiles. The structure obtained therefrom finds the solution for the free parameters by interpolating among the known ones. Although the training can be slow, the inversion of observational data is very fast. In practice, both the synthetic training set of ANNs and the synthetic database of PCA have employed ME profiles to keep the implementation feasible. Otherwise, the number of free parameters would render the two techniques impracticable. A PCA code to analyze the Hanle effect in the He~{\sc i}~D$_{3}$ line was developed by \citet[][see also \citeauthor{2005ApJ...622.1265C}, \citeyear{2005ApJ...622.1265C}]{2003ApJ...582L..51L}. A substantial modification of the original SIR code, called SIRGAUSS, was presented by \citet{2003ASPC..307..301B}, in which the physical scenario includes the coexistence of an inclined flux tube ---that is pierced twice by the LOS--- within a background. Such a scenario is used to describe the uncombed field model of sunspot penumbrae \citep{solanki+montavon1993}. An evolution of this inversion code, called SIRJUMP, was later used by \citet{2009ApJ...704L..29L} and was able to infer possible discontinuities in the physical quantities along the LOS. A further code, presented by \cite{2004PhDULL...A}, was able to deal with the Zeeman effect in molecular lines. The very same year, \citet{2004A&A...414.1109L} published {\sc HeLIx}, an ME inversion code that dealt with the Hanle and the Zeeman effects in the He~{\sc i} line at 1083 nm. Another ME inversion code was presented by \citet{2007A&A...462.1137O}, with the helpful feature that it is written in IDL, so that it can easily be manipulated by relatively inexperienced users and employed as a routine in high-level programming pipelines. Also in 2007, \citet{2007A&A...464..323B} took up the \cite{1984SoPh...93..269L} method and extended it to include unresolved magnetic structures. Unfortunately, the method fails to obtain the magnetic field strength and the filling factor separately; only their product is reliable. 
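As an illustration of the database-plus-eigenprofiles idea (a sketch of the concept, not the implementation of any of the PCA codes just mentioned), consider the following; the toy profile generator stands in for a proper ME synthesis and all names are ours:
\begin{verbatim}
import numpy as np

wav = np.linspace(-0.3, 0.3, 61)

def toy_profile(depth, width, shift):
    # Stand-in for the ME synthesis of one Stokes parameter
    return 1.0 - depth * np.exp(-((wav - shift) / width) ** 2)

# 1. Database of synthetic profiles spanning the model parameters
pars = [(d, w, s) for d in np.linspace(0.2, 0.8, 10)
                  for w in np.linspace(0.05, 0.15, 10)
                  for s in np.linspace(-0.05, 0.05, 10)]
db = np.array([toy_profile(*p) for p in pars])
mean = db.mean(axis=0)

# 2. SVD of the database; the rows of vt are the eigenprofiles
u, sv, vt = np.linalg.svd(db - mean, full_matrices=False)
basis = vt[:4]                 # keep a few leading eigenprofiles

# 3. Observation -> few expansion coefficients -> closest model
obs = toy_profile(0.55, 0.09, 0.01)
c_obs = basis @ (obs - mean)
c_db = (db - mean) @ basis.T
best = np.argmin(((c_db - c_obs) ** 2).sum(axis=1))
print(pars[best])              # parameters of the best-matching model
\end{verbatim}
The speed-up comes from comparing a handful of coefficients instead of full profiles.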
Self-consistent levels of confidence in the ME inversion results were estimated through the code proposed by \citet{2007A&A...476..959A} using Bayesian techniques. A rigorous treatment of optical pumping, atomic level polarization, level crossings and repulsions, Zeeman, Paschen-Back, and Hanle effects on a magnetized slab was included in HAZEL \citep{2008ApJ...683..542A}, with which analyses of the He~{\sc i} D$_3$ line and of the multiplet at 1083 nm can be carried out. Oriented to its extensive use with the data coming from the Helioseismic and Magnetic Imager \citep{2003ASPC..307..131G} aboard the Solar Dynamics Observatory, \citet{2011SoPh..273..267B} presented VFISV (Very Fast Inversion of the Stokes Vector), a new ME code but with several further approximations and simplifying assumptions to make it significantly faster than other available codes. \citet{2011A&A...535A..45M} presented an alternative inversion code in which, with a significant number of simplifying assumptions on top of the ME approximation (such as Stokes $I$ profiles being Gaussian and magneto-optical effects being almost negligible), some moments of the Stokes profiles are used to retrieve the vector magnetic field and the LOS velocity. In 2012, a significant step forward was provided by \citeauthor{2012A&A...548A...5V} (\citeyear{2012A&A...548A...5V}), who combined spectral information with the known spatial degradation effects on two-dimensional maps to obtain a consistent restoration of the atmosphere across the whole field of view. An aim similar to van Noort's is followed by \citet{2013A&A...549L...4R}, who, by means of a regularized method (indeed based on PCA), deconvolve the spectropolarimetric data that are later inverted with SIR. Based on the concept of sparsity, \citet{2015A&A...577A.140A} have proposed a novel technique that allows the inversion of two-dimensional (potentially three-dimensional) maps at once. The interested reader can complement this chronological overview with the reviews by \citet{1995INV-DelToroRuizCobo, 1996SoPh..164..169D, 1997INV-DelToroRuizCobo}, \citet{2001ASPC..236..487S}, \citet{2003AN....324..383D}, \citet{2006ASPC..358..107B}, and \citet{2012ApJ...748...83A} and the didactical introductions and discussions by \citet{1994KAP...book...S}, \citet{2003isp..book.....D}, and \cite{2004ASSL..307.....L}. A critical discussion on the different techniques and the specific implementations will be developed throughout the paper, which is structured as follows: the basic assumptions of radiative transfer are discussed in Section~\ref{section:assumptions}; the following two Sections discuss the approximations used for the model atmospheres and the Stokes profiles; an analysis of the forward problem, namely the synthesis of the Stokes spectrum, is presented in Section~\ref{section:synthesis}, which is followed by an analysis of the sensitivities of spectral lines to physical quantities (Section~\ref{section:response}); the basics of inversion techniques are analyzed in Section~\ref{section:techniques} and a discussion on inversion results is presented in Section~\ref{section:discussion}; finally, Section~\ref{section:conclusions} summarizes the conclusions. An appendix proposes an optimum way of initializing the inversion codes through the use of classical estimates. 
\newpage \section{Radiative transfer assumptions} \label{section:assumptions} The propagation of electromagnetic energy through a stellar atmosphere ---and its eventual release from it--- is a significantly complex, non-linear, three-dimensional, and time-dependent problem in which the properties of the whole atmosphere are involved. From deep layers up to the stellar surface, the coupling between the radiation field and the atmospheric matter implies non-local effects that can connect different parts of the atmosphere. In other words, the state of matter and radiation at a given depth may depend on that at the other layers: light emitted at one point can be absorbed or scattered at another to release part or all of its energy. The description of the whole system, matter plus radiation field, needs to resort to the solution of the coupled equations that describe the physical state of the atomic system and that of the radiation traveling through it. Therefore, we have to simultaneously solve the so-called statistical equilibrium equations and the radiative transfer equation. The first assumption we shall make is that radiative transfer is one dimensional; that is, that the transfer of radiative energy perpendicular to the line of sight can be neglected in the matter--radiation coupling. For most solar applications so far, this assumption has proven valid. Since the purpose of this paper is not directly related to either of the two systems of equations, let us simply point out what their main characteristics and ingredients are, and how the whole problem can be simplified in different situations. We refer the interested reader to the book by \citet{2004ASSL..307.....L} for a full and rigorous account of all the details. Most classical radiative transfer descriptions in the literature do not deal with polarization. They are typically qualified as radiative transfer studies for unpolarized light, but the name is ill-chosen. Formally speaking, those analyses are for light traveling through homogeneous and isotropic media \citep{2003isp..book.....D}. As a consequence of that heritage, the community is used to speaking about atomic level populations either calculated through the Boltzmann and Saha equations (the LTE approximation; see Sect.\ \ref{sec:LTE}) or not (the non-LTE case; see Section\ \ref{sec:NLTE}). These isotropic descriptions of the transfer problem, however, are not valid when a physical agent such as a vector magnetic field establishes a preferential direction in the medium, hence breaking the isotropy. Moreover, the outer layers of a star are a clear source of symmetry breaking. The exponential density decrease with height makes the radiation field anisotropic: outward opacity is much smaller than inward opacity. This should also be the case with collisions between particles: they are more probable at the bottom than at the top of the atmosphere. In such a situation, there is a non-zero probability that the various (energy-)degenerate levels of the atom are unevenly populated and that non-zero coherences, or phase relations, exist between them. The atomic system is then said to be polarized and its state is best described with the so-called density operator, $\vector{\rho}$, which provides the probabilities of the sublevels being populated (hence the populations) along with the possible \emph{correlations} or \emph{interferences} between every pair. 
In the standard representation that uses the eigenvectors of the total angular momentum, $\vector{J}^2$, and of its third component, $\vector{J}_z$, as a basis, the density matrix element \begin{equation} \label{eq:densmatelem} \rho(\alpha j m, \alpha' j' m') = \langle\alpha j m | \rho | \alpha' j' m' \rangle \end{equation} represents the coherence or phase interference between the different magnetic sublevels characterized by their angular momentum quantum numbers. In Eq.\ (\ref{eq:densmatelem}), $\alpha$ and $\alpha'$ stand for supplementary quantum numbers relative to those operators that commute with $\vector{J}^2$ and $\vector{J}_z$. Certainly, the diagonal matrix elements $\rho_{\alpha} (j m, j m) \equiv \rho(\alpha j m, \alpha j m)$ represent the populations of the magnetic sublevels and the sum \begin{equation} \label{eq:pobla} n_j = \sum_m \rho_{\alpha} (j m, j m) = \sum_{m=-j}^{j} \langle\alpha j m | \rho | \alpha j m \rangle \end{equation} accounts for the total population of the level characterized by the $j$ quantum number. At all depths in the atmosphere, evolution equations for these density matrix elements have to be formulated that describe their time ($t$) variations due to the transport of radiation, on the one hand, and to collisions among particles on the other. All interactions with light ---namely, pure absorption ($A$), spontaneous emission ($E$), and stimulated emission ($S$)--- have to be considered. All kinds of collisions ---namely, inelastic ($I$), superelastic ($S$), and elastic ($E$) collisions--- have to be taken into account. Inelastic collisions induce transitions between any level $|\alpha jm\rangle$ and an upper level $|\alpha_u j_u m_u\rangle$ with a consequent loss in kinetic energy. Superelastic collisions induce transitions to a lower energy level $|\alpha_l j_l m_l\rangle$ with a consequent increase in kinetic energy. Finally, elastic collisions induce transitions between degenerate levels $|\alpha jm\rangle$ and $|\alpha jm'\rangle$; in these, the colliding particle keeps its energy during the interaction. The statistical equilibrium equations (\ref{eq:equilradeqs}) and (\ref{eq:equilcoleqs}) that follow for radiative and collisional interactions, respectively, have slightly different application ranges. The former are valid for the multi-term atom representation and can even be used in the Paschen--Back regime, while the latter are only valid for the special case of the multi-level atom representation (although they can be generalized to the multi-term representation).\footnote{The concepts of multi-level or multi-term representation of an atomic system basically depend on the assumption or not, respectively, that coherences can be neglected among magnetic sub-levels that belong to levels characterized by different quantum numbers $\alpha$ and $j$. See \citet{2004ASSL..307.....L} for a detailed and rigorous description.} We make them explicit here for illustrative purposes only and refer the interested reader to the monograph by \citeauthor{2004ASSL..307.....L} (\citeyear{2004ASSL..307.....L}) for details. 
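As a toy numerical illustration of the two equations above (for a single, hypothetical $j=1$ level; the numbers are made up and carry no physical meaning), the populations and coherences can be arranged as follows:
\begin{verbatim}
import numpy as np

# Toy density matrix of a hypothetical j = 1 level in the |j m> basis,
# m = -1, 0, +1. Diagonal: sublevel populations; off-diagonal:
# coherences between pairs of sublevels.
rho = np.array([[0.40, 0.05 + 0.02j, 0.00],
                [0.05 - 0.02j, 0.35, 0.00 + 0.01j],
                [0.00, 0.00 - 0.01j, 0.25]])

assert np.allclose(rho, rho.conj().T)  # a density matrix is Hermitian
n_j = np.trace(rho).real               # total population of the level
print(n_j)                             # 1.0 for this normalized example
\end{verbatim}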
According to that work, the radiative interaction equations in the magnetic field reference frame\footnote{Where the vector magnetic field marks the $Z$ direction.} can be written as $${\displaystyle \frac{\rm d}{{\rm d} t}} \rho_{\alpha} (jm,j'm') = -2\pi {\rm i} \, \nu_{\alpha} (jm,j'm') \rho_{\alpha} (jm,j'm') +$$ $${\displaystyle \sum_{\alpha_l j_l m_l j'_l m'_l}^{\,}} \rho_{\alpha_l} (j_l m_l,j'_lm'_l) \, T_A (\alpha jmj'm',\alpha_l j_l m_lj'_lm'_l) +$$ \begin{equation} \label{eq:equilradeqs} {\displaystyle \sum_{\alpha_u j_u m_u j'_u m'_u}^{\,}} \rho_{\alpha_u} (j_um_u,j'_um'_u) \left[ T_E (\alpha jmj'm', \alpha_u j_u m_u j'_u m'_u) + T_S (\alpha jmj'm', \alpha_u j_u m_u j'_u m'_u) \right] - \end{equation} $${\displaystyle \sum_{j''m''}^{\,}} \left\{ \rho_{\alpha} (jm,j''m'') \left[ R_A (\alpha j'm'j''m'') + R_E (\alpha j''m''j'm') + R_S (\alpha j''m''j'm') \right] \right. +$$ $$\rho_{\alpha} (j''m'',j'm') [ R_A (\alpha j''m''jm) + R_E (\alpha jmj''m'') + R_S (\alpha jmj''m'')] \}$$ where $\nu_{\alpha} (jm,j'm')$ is the frequency difference between the two sublevels and the $T$'s and $R$'s are radiative rates of coherence transfer and relaxation among the sublevels, respectively. Now, the collisional interactions give $${\displaystyle \frac{\rm d}{{\rm d} t}} \rho_{\alpha} (jm,jm') = {\displaystyle \sum_{\alpha_l j_l m_l m'_l}^{\,}} C_I (\alpha j mm',\alpha_l j_l m_l m'_l) \rho_{\alpha_l} (j_lm_l,j_lm'_l) + $$ $${\displaystyle \sum_{\alpha_u j_u m_u m'_u}^{\,}} C_S (\alpha j mm',\alpha_u j_u m_u m'_u) \rho_{\alpha_u} (j_um_u,j_um'_u) +$$ \begin{equation} \label{eq:equilcoleqs} {\displaystyle \sum_{m'' m'''}^{\,}} C_E (\alpha j mm',\alpha j m'' m''') \rho_{\alpha} (jm'',jm''') - \end{equation} $${\displaystyle \sum_{m''}^{\,}} \left[ \frac{1}{2} X(\alpha jmm'm'') \rho_{\alpha}(jm,jm'') + \frac{1}{2} X(\alpha jm'mm'')^* \rho_{\alpha} (jm'',jm') \right. -$$ $$\left. \frac{1}{2} X_E(\alpha jmm'm'') \rho_{\alpha}(jm,jm'') + \frac{1}{2} X_E(\alpha jm'mm'')^* \rho_{\alpha} (jm'',jm') \right],$$ where the $C$'s are collisional transfer rates between levels and the $X$'s are relaxation rates. The indices refer to the corresponding type of collisions and the asterisk denotes the complex conjugate. With the standard notation for the Stokes pseudo-vector $\vector{I} \equiv (I,Q,U,V)^{\scriptscriptstyle {\rm T}}$, where index {\scriptsize T} stands for the transpose, the radiative transfer equation can be written as \citep[e.g.,][]{2003isp..book.....D} \begin{equation} \label{eq:rte} \frac{\rm d \vector{I}}{\rm d \tau_{\rm c}} = \matriz{K} (\vector{I} - \vector{S}), \end{equation} where $\tau_{\rm c}$ is the optical depth at the continuum wavelength, $\matriz{K}$ stands for the propagation matrix, and $\vector{S}$ is the so-called source function vector. Since the continuum spectrum of radiation can safely be assumed flat within the wavelength span of a spectral line and non-polarized as far as currently reachable polarimetric accuracies are concerned, the optical depth, defined as \begin{equation} \label{eq:opticaldepth} \tau_{\rm c} \equiv \int_s^{s_{\rm lim}} \chi_{\rm cont} \, {\rm d}s, \end{equation} is the natural length scale for radiative transfer. Note that the origin of optical depth ($\tau_{\rm c} = 0$) coincides with the outermost boundary of geometrical distances ($s_{\rm lim}$) and is taken where the observer is located so that $\tau_{\rm c}$'s are actual depths in the atmosphere. 
In Eq.\ (\ref{eq:opticaldepth}), $\chi_{\rm cont}$ is the continuum absorption coefficient (the fraction of incoming electromagnetic energy withdrawn from the radiation field per unit of length through continuum formation processes). The propagation matrix deals with \emph{absorption} (withdrawal of the same amount of energy from all polarization states), \emph{pleochroism} (differential absorption for the various polarization states), and \emph{dispersion} (transfer among the various polarization states). The product of \matriz{K} and $\vector{S}$ accounts for emission. The RTE can then be considered \emph{as a conservation equation}: the energy and polarization state of light at a given point in the atmosphere can only vary because of emission, absorption, pleochroism, and dispersion. Equation\ (\ref{eq:rte}) is strictly valid only under the assumption that the energy and polarization state of light are independent of time. To be more specific, we have assumed that the rate of change of the Stokes parameter profiles is much slower than the radiative and collisional relaxation time scales involved in the problem. A formal solution to the general RTE was proposed for the first time by \citet{1985SoPh...97..239L}, according to whom the observed Stokes profiles at the observer's optical depth ($\tau_{\rm c}=0$) read \begin{equation} \label{eq:rteformalsolution} \vector{I}(0) = \int_{0}^{\infty} \matriz{O} (0,\tau_{\rm c}) \matriz{K} (\tau_{\rm c}) \vector{S} (\tau_{\rm c}) \rm d \tau_{\rm c}, \end{equation} where $\matriz{O}$ is the so-called evolution operator, and a semi-infinite atmosphere has been assumed as usual. The solution is called \emph{formal} because it is not a \emph{real} solution as long as the evolution operator (and the propagation matrix and the source function vector) are not known. Unfortunately, no easy analytical expression can in general be found for $\matriz{O}$. Only in some particular cases, such as that in Sect.\ \ref{sec:milne}, can a compact form for the evolution operator and an analytic solution of the RTE be obtained. In all other cases, numerical evaluations of $\matriz{O}$ and solutions of the transfer equation are necessary. The emergent Stokes spectrum is obtained through an integral of a product of three terms all over the whole atmosphere. Claiming that some of the Stokes parameters are proportional to one of the matrix elements of $\matriz{K}$ is, at the very least, adventurous. This proportionality can only take place in very special circumstances (e.g., Sections \ref{sec:weakfield} and \ref{sec:weakatmosphere}). \subsection{The non-local thermodynamic equilibrium problem} \label{sec:NLTE} Being a vector differential equation, the RTE should indeed be considered as a set of four \emph{coupled} differential equations. These can only be solved independently in specific media, either isotropic or very simplified ones. But the situation is far more complicated since both \matriz{K} and \vector{S} depend on the material properties described by $\rho_{\alpha} (jm,j'm')$, as well as on external fields such as a macroscopic velocity or a magnetic field. For their part, the radiative transfer and relaxation rates depend on the radiation field. Therefore, Eqs.\ (\ref{eq:equilradeqs}), (\ref{eq:equilcoleqs}), and (\ref{eq:rte}) describe a very involved, non-local, non-linear problem, known as the \emph{non-local thermodynamic equilibrium} (NLTE) problem, which must be solved simultaneously and self-consistently. 
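Schematically, the coupled solution alternates between the transfer equation and the statistical equilibrium equations until the populations stop changing. The following deliberately crude, scalar caricature conveys the structure of that fixed-point loop (a two-level atom whose mean intensity is modeled with a toy escape probability; every number here is made up):
\begin{verbatim}
# Caricature of the NLTE loop: two-level-atom source function
# S = (1 - eps) J + eps B, with the mean intensity crudely modeled
# as J = (1 - p_esc) S, a stand-in for the actual solution of the RTE.
eps, B, p_esc = 1e-2, 1.0, 0.2  # destruction prob., Planck, escape prob.

S = B                                  # LTE initialization
for it in range(200):
    J = (1.0 - p_esc) * S              # "solve the RTE"
    S_new = (1.0 - eps) * J + eps * B  # "solve the equilibrium equations"
    if abs(S_new - S) < 1e-10 * B:
        break                          # self-consistency reached
    S = S_new
print(it, S)                           # S << B: strong departure from LTE
\end{verbatim}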
The numerical solution of all those coupled equations requires iterative procedures that are summarized in Figure \ref{fig:NLTE}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{figNLTE}} \caption{Block diagram of the Stokes profile synthesis under NLTE conditions.} \label{fig:NLTE} \end{figure}} By a model atmosphere we understand the set of thermodynamic (usually two, e.g., temperature and pressure, $T$ and $p$), dynamic (the macroscopic, bulk line-of-sight velocity field, $v_{\rm LOS}$), magnetic (the vector field \vector{B}, represented by $B$, the strength, $\gamma$, the inclination with respect to the LOS, and $\varphi$, the azimuth), and possibly some other, \emph{ad hoc} variables (such as the micro- and macroturbulence velocities, $\xi_{\rm mic}$ and $\xi_{\rm mac}$, the filling factor, $f$ ---the area fraction of the resolution pixel that is filled with the unknown atmosphere--- and so forth). All these variables have to be specified as functions of the optical depth. Numerically, that model can be represented by a vector \vector{x} of $np+r$ components, $n$ being the number of depth grid points throughout the atmosphere, $p$ the number of physical quantities varying with depth, and $r$ the number of quantities that are assumed constant throughout the LOS. For example, one such model atmosphere would look like \begin{equation} \label{eq:modelatmos} \begin{array}{rcl} \vector{x} & \equiv & [T(\tau_1), T(\tau_2), \ldots, T(\tau_n), p(\tau_1), p(\tau_2), \ldots, p(\tau_n), B(\tau_1), B(\tau_2), \ldots, B(\tau_n),\\ & & \gamma(\tau_1), \gamma(\tau_2), \ldots, \gamma(\tau_n), \varphi(\tau_1),\varphi (\tau_2), \ldots, \varphi(\tau_n), \\ & & v_{\rm LOS}(\tau_1), v_{\rm LOS}(\tau_2), \ldots, v_{\rm LOS}(\tau_n), \xi_{\rm mic}, \xi_{\rm mac}, f]^{\scriptscriptstyle {\rm T}}, \end{array} \end{equation} where we have assumed specifically that both micro- and macroturbulence (as well as the filling factor) are constant with depth. This assumption is based on the fact that experience teaches that the increase in spatial resolution reached with new instruments makes the use of such \emph{ad hoc} parameters less and less necessary. Once this model atmosphere is set, the necessary ingredients for the RTE and the statistical equilibrium equations can be calculated. The solution of the RTE has to be compared with the one obtained after the density matrix elements have been updated by solving the statistical equilibrium equations. If the differences are smaller than a given threshold, then a new synthetic set of Stokes parameters has been found. If not, the equilibrium equations have to be modified in order to iterate the procedure until convergence is reached. The direct problem of obtaining the Stokes spectrum of a given line coming out from a given model atmosphere then turns out to be very complex. It cannot always be computed with the necessary speed and accuracy. Approximations are, thus, in order. \subsection{The local thermodynamic equilibrium approximation} \label{sec:LTE} Imagine now that coherences among the Zeeman sublevels can be neglected, and that all of them are evenly populated. That is, assume that \begin{equation} \label{eq:ltecondition} \rho(\alpha jm,\alpha' j'm') = \delta_{\alpha \alpha'} \delta_{jj'} \delta_{mm'} \rho_{\alpha j}, \end{equation} where $\delta$ is Kronecker's delta. In such conditions, $n_{j} = (2j+1) \rho_{\alpha j}$. 
Assume also that $n_{j}$ and the populations of other ionic species can be evaluated through the equations of thermodynamic equilibrium at the local temperature (the Boltzmann and Saha laws; e.g., \citeauthor{1992oasp.book.....G}, \citeyear{1992oasp.book.....G}). This assumption will be valid only in the case that the photon mean free path ($\ell = 1/\chi_{\rm cont}$)\footnote{For instance, at the bottom of the photosphere, $\ell \simeq 100$ km.} is small compared to the scale of variation of the physical quantities, i.e., when the atomic populations depend only upon the values of the local physical quantities. Besides, it can be shown that, if Kirchhoff's law is further assumed \citep[e.g.,][]{2004ASSL..307.....L}, the source function vector reduces to \begin{equation} \label{eq:sourcefunction} \vector{S} = (B_{\nu} (T), 0, 0, 0)^{\scriptscriptstyle {\rm T}}, \end{equation} where $B_{\nu} (T)$ is the Planck function at the local temperature. These are the conditions of the so-called \emph{local thermodynamic equilibrium} (LTE) approximation; under them, the RTE is automatically decoupled from the material equations. Then, if LTE can be supposed for a given spectral line, the synthesis of its Stokes profiles simplifies significantly because iterative procedures are no longer needed. This is graphically explained in Figure\ \ref{fig:LTE}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{figLTE}} \caption{Block diagram of the Stokes profile synthesis under LTE conditions.} \label{fig:LTE} \end{figure}} In some circumstances, it may be useful to relax the fulfillment of the Boltzmann law and, instead, admit that the $\rho_{\alpha j}$ deviate from the LTE values, $\hat{\rho}_{\alpha j}$, so that \begin{equation} \label{eq:depcoeff} \beta_j = \frac{\rho_{\alpha j}}{\hat{\rho}_{\alpha j}} \end{equation} are \emph{departure} coefficients that measure how far the conditions are from LTE. Thus, although radiative transfer remains within the LTE scheme sketched in Fig.\ \ref{fig:LTE}, the second block is affected by Eq.\ (\ref{eq:depcoeff}) and the $\beta$'s are needed to calculate the level populations. As we are going to see, this departure-coefficient approximation can be very useful for formulating NLTE inversion procedures (see Section\ \ref{sec:non-lte}). \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.75\textwidth]{sourcefunction}} \caption{Stokes $I$, LTE source function for various atmospheric models: the umbral model E by \citet[][black line]{1986ApJ...306..284M}, the penumbral model by \citet[][red line]{1994ApJ...436..400D}, the plage model by \citet[][blue line]{solanki1986}, and the quiet-Sun models by \citet[][purple line]{gingerich+etal1971} and \citet[][green line]{vernazza+etal1981}.} \label{fig:source} \end{figure}} \subsection{The Milne--Eddington approximation} \label{sec:milne} An even more simplified approximation is obtained by further assuming that thermodynamics is sufficiently described with a source function that depends linearly on the continuum optical depth, \begin{equation} \label{eq:sourcelin} \vector{S} = (S_{0} + S_{1}\tau_{\rm c}) \, \vector{e}_{0}, \end{equation} where $\vector{e}_{0} \equiv (1,0,0,0)^{\scriptscriptstyle {\rm T}}$, and that the other physical quantities ($\vector{B}$, $v_{\rm LOS}$, etc.) in the model are constant throughout the atmosphere, hence defining a constant \matriz{K}. 
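As a numerical sanity check of these two assumptions ---with a made-up constant propagation matrix, for illustration only--- one can evaluate the formal solution of Eq.\ (\ref{eq:rteformalsolution}) by direct quadrature, since the evolution operator reduces to a matrix exponential when \matriz{K} is constant, and compare it with the analytic result quoted below as Eq.\ (\ref{eq:milnesolution}):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Made-up constant propagation matrix with the standard symmetry
# pattern (absorption eta_I, pleochroism eta_Q..eta_V, dispersion
# rho_Q..rho_V); the numbers are arbitrary
eI, eQ, eU, eV = 1.5, 0.3, 0.1, 0.4
rQ, rU, rV = 0.2, 0.05, 0.1
K = np.array([[eI,  eQ,  eU,  eV],
              [eQ,  eI,  rV, -rU],
              [eU, -rV,  eI,  rQ],
              [eV,  rU, -rQ,  eI]])
S0, S1 = 0.1, 0.9                      # linear source function
e0 = np.array([1.0, 0.0, 0.0, 0.0])

# Analytic Milne-Eddington solution: I(0) = (S0 + S1 K^-1) e0
I_me = S0 * e0 + S1 * np.linalg.solve(K, e0)

# Numerical formal solution with O(0, tau) = expm(-K tau)
n, tmax = 2000, 20.0                   # truncation of the tau integral
dtau = tmax / n
taus = (np.arange(n) + 0.5) * dtau     # midpoint quadrature
I_num = sum(expm(-K * t) @ (K @ e0) * (S0 + S1 * t) * dtau
            for t in taus)
print(np.max(np.abs(I_num - I_me)))    # small: quadrature error only
\end{verbatim}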
Figure \ref{fig:source} shows the LTE source function (the first component of the vector in Eq.\ \ref{eq:sourcefunction}) at 525 nm for several realistic model atmospheres, namely, the umbral model E by \citet[][black line]{1986ApJ...306..284M}, the penumbral model by \citet[][red line]{1994ApJ...436..400D}, the plage model by \citet[][blue line]{solanki1986}, and the quiet-Sun models by \citet[][purple line]{gingerich+etal1971} and \citet[][green line]{vernazza+etal1981}. The hypothesis of linearity does not seem very accurate for all the models. Nevertheless, in spite of its seemingly unrealistic nature, when we are dealing with a weak spectral line, the optical depth interval over which the line is sensitive to the atmospheric quantities is usually small enough to consider that a linear source function is not a bad approximation. There is wide experience showing how useful the ME approximation is for inferring average values of the magnetic field vector and the LOS velocity, starting with the paper by \citet[][for a check with other approaches see \citeauthor{1998ApJ...494..453W}, \citeyear{1998ApJ...494..453W}]{skumanich+lites1987}. The key point is that the RTE has an analytic solution (Stokes $\vector{I}$ at $\tau_{\rm c} = 0$) under these assumptions \citep[e.g.,][]{2003isp..book.....D}: \begin{equation} \label{eq:milnesolution} \vector{I} (0) = (S_0 + \matriz{K}^{-1} S_1) \, \vector{e}_0. \end{equation} The analytic character of the solution helps in grasping many of the relevant features in line formation; it cannot, though, reproduce Stokes line asymmetries\footnote{By Stokes line asymmetries or Stokes profile asymmetries we mean deviations from the even (Stokes $I$, $Q$, and $U$) or odd (Stokes $V$) functional shape about the central wavelength of the line. This is commented on in several places in this review, e.g., Secs.\ \ref{sec:misma}, \ref{sec:meatmosphere} and \ref{sec:varying}, and discussed in Section\ \ref{section:synthesis}.} \citep{auer+heasley1978}. Exploiting this useful feature, \citet{1985SoPh...97..239L} had the clever idea of tailoring the functional shape of the source function so that it might be used to synthesize chromospheric line profiles while preserving an analytic solution because of the constancy with depth of the propagation matrix. Atomic polarization is neglected in this modeling. The so-called ``field-free approximation'' is assumed. The latter allows the scalar components of the source function to be substituted with those corresponding to the same atom in the absence of a magnetic field \citep{1969SoPh...10..268R}. Later on, \citet{1988ApJ...330..493L} elaborated on \citeauthor{1985SoPh...97..239L}'s idea and proposed a new source function that was incorporated into their inversion code to interpret the observed profiles of the Mg~{\sc i}~b lines at 517.27 and 518.36~nm. Specifically, they wrote the RTE in terms of the line center optical depth, $\tau_{0}$, so that it remains the same as Eq.\ (\ref{eq:rte}) but with \matriz{K} substituted by $\matriz{K}' \equiv r_0 \matriz{K}$, where $r_0$ is the continuum-to-line absorption coefficient ratio, and with a new source function $\vector{S}'$ that follows from two distinct continuum and line source functions given by \begin{equation} \label{eq:chromosmilne} \begin{array}{c} \vector{S}_{\rm cont} = \vector{S}, \\ {\displaystyle \vector{S}_{\rm lin} = \vector{S} - \sum_{i=1}^2 A_i {\rm e}^{-\varepsilon_i \tau_0}}, \end{array} \end{equation} where $\vector{S}$ is defined in Eq.\ (\ref{eq:sourcelin}). 
The exponential shape of the last two terms in $\vector{S}_{\rm lin}$ tries to mimic the consequences in the source function of the actual chromospheric rise of temperature. The $A$'s and $\varepsilon$'s are free parameters that can be tuned to fit the observed profiles. With this formulation, the analytic solution of the transfer equation (at $\tau_{0} = 0$) turns out to be \begin{equation} \label{eq:chrmossolution} \vector{I}(0) = \left[ S_0 + \matriz{K}'^{-1} S_1 - \sum_{i=1}^{2} A_i (\matriz{K}' + \varepsilon_i {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}})^{-1}(\matriz{K}' - r_0 {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \right] \vector{e}_0, \end{equation} where ${\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}$ stands for the $4\times4$ identity matrix.\footnote{Note that this is not a non-LTE inversion technique but a phenomenological approach that can help in fitting the profiles of chromospheric lines that are indeed formed under conditions far from local thermodynamic equilibrium.} Further exploiting the analytic character of the Milne--Eddington solution, slight modifications in the assumptions were also suggested by \cite{landolfi+landi1996} to deal with small velocity gradients and even with discontinuities along the LOS. In summary, we can say that approximations to the RTE predicated on keeping the \matriz{K} matrix constant or almost constant are useful and still a fruitful avenue for observational work. \subsection{The weak-field approximation} \label{sec:weakfield} A further simplification of radiative transfer is sometimes used. When the magnetic field can be assumed constant with depth and weak enough, the resulting Stokes $V$ profile of many lines turns out to be proportional to the longitudinal component of the field, regardless of the remaining physical quantities (see Section\ \ref{sec:weakatmosphere}). Under this assumption (and for not extremely weak fields, since linear polarization is zero to first order), the ratio between Stokes $U$ and $Q$ gives the tangent of twice the field azimuth. The weakness of the field is guaranteed provided that \citep[e.g.,][]{2004ASSL..307.....L} \begin{equation} \label{eq:weakfield} g_{\rm eff} \frac{\Delta \lambda_{\rm B}}{\Delta \lambda_{\rm D}} \ll 1, \end{equation} where $g_{\rm eff}$ is the effective Land\'e factor of the line, $\Delta \lambda_{\rm B}$ is the Zeeman splitting, and $\Delta \lambda_{\rm D}$ is the Doppler width of the line. The effective Land\'e factor is given by \begin{equation} \label{eq:lande} g_{\rm eff} = \frac{1}{2} (g_u + g_l) + \frac{1}{4} (g_u-g_l) [j_u(j_u + 1) - j_l(j_l+1)], \end{equation} where $g_u$ and $g_l$ are the Land\'e factors of the upper and lower level of the transition, respectively. In $LS$ coupling, those factors are functions of the quantum numbers: \begin{equation} \label{eq:landelevel} g = \frac{3}{2} + \frac{s(s+1)-l(l+1)}{2j(j+1)}. \end{equation} The Zeeman splitting is given by \begin{equation} \label{eq:zeemansplitting} \Delta \lambda_{\rm B} = \frac{\lambda_0^2 e_0 B}{4\pi mc^{2}}, \end{equation} where $\lambda_0$ is the central, rest wavelength of the line, $e_0$ and $m$ are the charge and mass of the electron, $B$ is the magnetic field strength, and $c$ stands for the speed of light. 
For its part, the Doppler width is given by \begin{equation} \label{eq:dopplervel} \Delta \lambda_{\rm D} = \frac{\lambda_0}{c} \sqrt{\frac{2kT}{m_a} + \xi_{\rm mic}^2}, \end{equation} where $T$ is the temperature, $k$ is the Boltzmann constant, and $m_a$ is the mass of the atom. From a formal point of view, Eq.\ (\ref{eq:weakfield}) is a well-defined conditioning inequality. In practical terms, however, one should establish what is meant by \emph{much less than} 1. This is addressed in Section\ \ref{sec:weakatmosphere}, but we can be sure that the wider the line, the better the weak-field approximation applies. Hence, broad chromospheric lines are good candidates for using it. One of the first attempts at measuring a magnetic field with a chromospheric line, known to the authors of this review, was carried out as early as \citeyear{1990ApJ...361L..81M} by \citeauthor{1990ApJ...361L..81M}, who (photographically) observed Stokes $I$ and $V$ profiles of the Ca~{\sc ii} H line and interpreted them in terms of the weak-field approximation. This approach remains useful as interest in the chromosphere increases \citep[e.g.,][]{2013A&A...556A.115D}. \subsection{The MISMA hypothesis} \label{sec:misma} Driven by the ubiquitous appearance of Stokes profile asymmetries in observations, \citet{1994ssm..work...29L} suggested that the atmospheric physical quantities, instead of following deterministic stratifications, have stochastic distributions about mean values \emph{with possible correlation effects} among them. Assuming that the source function nevertheless varies linearly with depth through the whole atmosphere and that the propagation matrix stays constant at the spatial scale of each of the realizations of such a common stochastic distribution, he found an analytic solution for the transfer equation. Certainly inspired by \citeauthor{1994ssm..work...29L}'s proposal, \citet{1996ApJ...466..537S} put forward a new approach. Realizing that the wavelength symmetries of the propagation matrix elements do indeed prevent such Stokes profile asymmetries in the absence of LOS velocity gradients in the regular formulation of the transfer problem \citep{1992soti.book...71L}, they proposed that the solar atmosphere may be pervaded by MIcro-Structured Magnetic Atmospheres (MISMAs). The hypothesis implies a highly inhomogeneous atmosphere at scales much smaller than the photon mean free path, whereby the integration of Eq.\ (\ref{eq:rte}) turns out to be very difficult. An alternative formulation is thus in order, obtained by locally averaging the propagation matrix and the emission vector. The resulting equation reads \begin{equation} \label{eq:rtemisma} \frac{{\rm d} \vector{I}}{{\rm d} s} = - \left< \matriz{K}' \right> (\vector{I} -\vector{S}'). \end{equation} It formally looks very much like the regular RTE but is formulated in terms of geometrical distances, $s$; $\matriz{K}' = \chi_{{\rm cont}} \matriz{K}$; \begin{equation} \label{eq:sprimamisma} \vector{S}' \equiv \left< \matriz{K}' \right>^{-1} \left< \matriz{K}' \vector{S} \right>; \end{equation} and the averages are taken over a distance $\Delta s$ that may vary along the optical path. The distance $\Delta s$ is supposed to be still smaller than $\ell$ for Stokes $I$ to be assumed constant within its range. In addition, the averages are considered to vary smoothly along the line of sight. With all these assumptions, Eq.\ (\ref{eq:rtemisma}) is formally the same as Eq.\ (\ref{eq:rte}). 
All the mathematical tools developed to solve the latter can be used to find a solution to the former. This is so despite the (numerically) inconvenient formulation in terms of geometrical distances: it requires either non-equally-spaced grid points or an increase in computation time. The good news is that, since correlations may exist among the physical parameters of the microstructures, the symmetry properties of the matrix $\left<\matriz{K}'\right>$ are automatically destroyed. Hence, asymmetric Stokes profiles can appear naturally. \newpage \section{Degrees of approximation in the model atmospheres} \label{sec:approxmod} Provided that physical atmospheric quantities are bounded functions of the optical depth, we can safely expect that they are either continuous or have some jump (Heaviside-like) discontinuities throughout the line formation region. Therefore, except for the discontinuity points, a Taylor expansion approximation seems simple and sensible. The good feature of Taylor expansions is that one can truncate them at a given order of approximation that can be subsequently increased if needed. This sequential approach is of great help in following the principle of Occam's razor ---\emph{lex parsimoniae}--- which, in our opinion, should prevail in interpretational work. The question arises as to whether an order of approximation is sufficient or whether it should be increased to account for the observations. The answer must be found in the degree of accuracy with which we are trying to reproduce the observables. Hence, it has to do with the balance between the signal and the noise: if the next order of approximation only introduces variations that are below, say, three times the rms noise, $\sigma$, then its use is discouraged. If, on the contrary, the difference between the observed and synthetic profiles is greater than $3\sigma$, its use may be advisable.\footnote{By adopting $\sigma$ as a measure of noise we are assuming that the noise statistics is Gaussian, and this seems a common and sensible assumption as well. Requiring signals to be larger than $3\sigma$, therefore, implies more than 99.7~\% certainty in the detection. We refer the reader to \citet{2012ApJS..201...22D} for a discussion on polarimetric accuracy and signal-to-noise ratio. For Bayesian selection among model atmospheres, see \citet{2012ApJ...748...83A}.} Let us postpone the discussion to the following sections and present here the various atmospheres we are considering. We start with the zeroth-order approximation, namely physical quantities that are constant with depth, and then continue with gradients, higher-order variations, and jumps or discontinuities. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.8\textwidth]{meprofile}} \caption{Examples of ME Stokes profiles of the Fe~{\sc i} line at 617.3 nm as observed with an instrument whose Gaussian spectral PSF has a FWHM of 6 pm. 
Two model atmospheres are used that differ only in the magnetic field strength: $B=1200$~G for the black lines and 200~G for the red ones.} \label{fig:meprofile} \end{figure}} \subsection{Constant physical quantities} \label{sec:constantquantities} Let us distinguish among three possibilities, namely, the Milne--Eddington approximation, the weak-field approximation, and an atmosphere where $\vector{B}$ and $v_{{\rm LOS}}$ are constant but where thermodynamics is properly accounted for with a realistic stratification of temperature.\footnote{Indeed, two variables are needed for specifying the thermodynamical state of the medium. However, most of the spectral lines used in typical observations show very little dependence, if any, on pressure. Therefore, we shall assume that pressure is stratified according to hydrostatic equilibrium throughout the paper.} \subsubsection{The Milne--Eddington atmosphere} \label{sec:meatmosphere} As commented on in Sect.\ \ref{sec:milne}, a Milne--Eddington atmosphere provides an analytic solution to the RTE. With nine parameters, the Stokes profiles of a spectral line can be synthesized. The model parameters are the three components of the magnetic field, $B$, $\gamma$, and $\varphi$, the LOS velocity, $v_{\rm LOS}$, and the so-called thermodynamic parameters: the line-to-continuum absorption coefficient, $\eta_0$ ($=1/r_0$), the Doppler width of the line, $\Delta\lambda_{\rm D}$, the damping parameter, $a$, and the two coefficients for the source function, $S_0$ and $S_1$. The actual values of $\eta_0$, $\Delta\lambda_{\rm D}$, and $a$ may vary significantly throughout the atmosphere. Therefore, assigning a single value to each of them may seem risky. Experience, however, indicates that this is possible. Reasonable fits to actual data can be obtained with this approximation, and we can even understand the relationship between the single-valued parameters and their actual stratification \citep{1998ApJ...494..453W}. Only Stokes profiles with definite symmetry properties can be formed in an ME atmosphere. Stokes $I$, $Q$, and $U$ are even functions of wavelength while Stokes $V$ is odd. This is a consequence of the absence of velocity gradients \citep{auer+heasley1978} and will be discussed later in Section\ \ref{section:synthesis}. Figure\ \ref{fig:meprofile} shows two examples of ME profiles corresponding to the Fe~{\sc i} line at 617.3 nm as observed with an instrument whose (Gaussian) spectral profile (point spread function, PSF) has a full width at half maximum (FWHM) of 6 pm. The thermodynamic model parameters are $\eta_0 = 5.06$, $\Delta\lambda_{\rm D} = 2.6$ pm, $a= 0.22$, $S_{0} = 0.1$, and $S_{1} = 0.9$; they come from a fit to the FTS spectrum \citep{1984sfat.book.....K, 1987ftp...book...B}. The magnetic inclination and azimuth are both equal to 30$^{\circ}$; $B=1200$~G for the black lines and 200~G for the red ones. \subsubsection{The weak-field atmosphere} \label{sec:weakatmosphere} As stated in Sect.\ \ref{sec:weakfield}, when $B$ is constant with depth and very weak, the Stokes $V$ profile turns out to be proportional to the longitudinal component of the magnetic field, independently of the remaining quantities. 
It can be shown \citep[e.g.,][]{2004ASSL..307.....L} that \begin{equation} \label{eq:vpropmag} V(\lambda) \simeq - g_{\rm eff} \, \Delta\lambda_{\rm B} \cos\gamma \, \frac{\partial I_{\rm nm}}{\partial \lambda}, \end{equation} where $I_{\rm nm}$ is the non-magnetic Stokes $I$ profile, corresponding to the line in the absence of a magnetic field. Equation\ (\ref{eq:vpropmag}) has been key for many magnetic inferences. In fact, written as $V = C B_{\parallel}$, it is known as the magnetographic equation since it provides a calibration of the magnetographic signal. When magnetographs used only one or two wavelength samples of the circular polarization, the magnetographic equation was indeed the only means of obtaining estimates of the component of the magnetic field along the line of sight. Nowadays, with modern magnetographs providing more samples in all four Stokes parameters, that equation is still useful for morphological, qualitative estimates but cannot be trusted everywhere and under all circumstances. The modern way to evaluate $C$ indeed implies some radiative transfer calculations in given model atmospheres \citep[e.g.,][]{2011SoPh...268...57M}, and these calculations readily show that the approximation saturates at low magnetic field strengths. In the left panel of Fig.\ \ref{fig:maximav}, we plot the maximum of the Stokes $V$ profile as a function of the field strength (the field is along the LOS, $\gamma=0^{\circ}$) with an instrumental profile FWHM of 6 pm (asterisks) and of 8.8 pm (diamonds). In solid lines, the linear (red) and quadratic (blue) fits are also shown. Only strengths up to 600~G are plotted because the relationship is evidently nonlinear above that threshold. For weaker fields, it is apparent that the instrumental broadening of the profiles helps linearity to hold, as differences between the linear and quadratic fits are smaller for the broader PSF. Those differences are, for most of the points, above $3 \cdot 10^{-3} I_{\rm c}$; that is, more than $3\sigma$, with $\sigma$ being the noise level of the polarization continuum signal of typical observations. Such differences are clearly detectable by current means. Hence, the approximation loses validity even for fairly weak fields. Deviations from linearity are even clearer if one looks at the green lines in the figure, which correspond to linear fits including only data points for which $B$ is less than 200~G. In our example, the weak-field approximation for the Stokes $V$ peaks breaks down at fields stronger than 300~G with a FWHM of 6 pm and stronger than 400~G with a FWHM of 8.8 pm. Certainly, if the instrument has a narrower spectral PSF or if the noise is smaller, the approximation fails earlier. The approximation clearly worked better for older instruments. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.5\textwidth]{maximav}\includegraphics[width=0.5\textwidth]{coreq}} \caption{Maximum of the Stokes $V$ profile as a function of the magnetic field strength for a longitudinal field (left panel). Maximum of the Stokes $Q$ profile as a function of the square magnetic field strength (right panel). Asterisks correspond to an instrumental profile FWHM of 6 pm and diamonds to a FWHM of 8.8 pm. Red lines represent linear fits to the points; blue lines display quadratic fits; green lines correspond to fits for fields weaker than 200~G.} \label{fig:maximav} \end{figure}} Further arguments can be supplied for the user to be cautious about weak-field assumptions with typical, visible photospheric lines. 
The first one is that Eq.\ (\ref{eq:vpropmag}) is hardly applicable, as shown in Fig.\ \ref{fig:weakfieldapprox}, not only because Stokes $V$ does not follow it but because Stokes $I$ deviates from $I_{{\rm nm}}$ even sooner \citep[and, up to first order, $I=I_{{\rm nm}}$ must hold for Eq.\ (\ref{eq:vpropmag}) to be valid; e.g.,][]{2004ASSL..307.....L}. In the left column of the figure, the differences between the left-hand and the right-hand members of the equation are plotted. Colors correspond to 600~G (black), 500~G (red), 400~G (blue), 300~G (green), 200~G (purple), and 100~G (dark green). The dashed, horizontal purple lines mark the $3\sigma$ level. The upper row is for a FWHM of 6 pm and the bottom row is for a FWHM of 8.8 pm. The plots in the left column are of course consistent with the results from Figure\ \ref{fig:maximav}. Those in the right column are illustrative of how Stokes $I$ varies with the magnetic field strength. Differences between the various profiles can easily be discerned above the $3\sigma$ level. When the profiles themselves are affected by noise, unlike in these plots, detecting the differences may be more difficult, but the message is clear: {\bf contrary to the common belief}, the Stokes $V$ profile is not the only tool for estimating the longitudinal component of weak magnetic fields; Stokes $I$ helps a lot and should not be forgotten. The second argument concerns the diagnostic capability of typical lines to disentangle $B$ from $\gamma$ in the weak-field regime. Most statements that the longitudinal magnetic field component is the only quantity that can be accurately retrieved are based on Eq.\ (\ref{eq:vpropmag}), as if it were the only available tool from radiative transfer. Stokes profiles other than $V$ are often ignored. It is easy to understand \citep[e.g.,][]{2004ASSL..307.....L}, however, that the mere deviations between $I$ and $I_{\rm nm}$ we have seen in Fig.\ \ref{fig:weakfieldapprox} should imply the appearance of linear polarization signals (provided that the inclination is different from zero): such Stokes $I$ deviations from $I_{\rm nm}$ are second-order terms in an expansion of all four Stokes profiles.\footnote{The expansion is in terms of powers of a dimensionless parameter that scales the vector magnetic field, and is only valid when $\Delta\lambda_{\rm B} \rightarrow 0$.} At second order, Stokes $Q$ and $U$ are no longer zero (or below the noise) either and start to provide additional information. It can also be proven \citep[e.g.,][]{2004ASSL..307.....L} that $Q \propto B^2 \sin^2 \gamma$, as shown in the right panel of Fig.\ \ref{fig:maximav}, where the maximum of Stokes $Q$ is plotted against $B^2$ for a field that is inclined $45^{\circ}$ with respect to the vertical.\footnote{Stokes $Q$ is assumed to be defined here in the reference frame where Stokes $U$ is zero (constant magnetic azimuth).} Here, deviations between linear and quadratic fits are smaller than for the $V$ case (note that the $Y$ scale is an order of magnitude smaller), but the interesting point is that, above $B=200$ G, linear polarization signals begin to be larger than $3\sigma$ and, hence, detectable. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.9\textwidth]{weakfieldapprox}} \caption{Differences between the Stokes $V$ profile and its weak-field approximation (left column) and differences between the Stokes $I$ profile and that for a zero field strength (right column). Colors indicate values of the longitudinal component of the field. 
The dashed horizontal lines mark the $3\sigma$ level of typical, modern observations. The upper row is for a FWHM of 6 pm and the bottom one for a FWHM of 8.8 pm. Colors correspond to 600~G (black), 500~G (red), 400~G (blue), 300~G (green), 200~G (purple), and 100~G (dark green).} \label{fig:weakfieldapprox} \end{figure}} A third argument we want to bring to the reader's attention is related to the common belief that weak fields can hardly be distinguished from strong fields (say above 1 kG) with a filling factor significantly smaller than 1. We will return to this issue in Secs.\ \ref{sec:analyticrfs} and \ref{sec:weakretreival}, as the problem has already been discussed in the literature \citep[e.g.,][]{2010ApJ...711..312D}. Let us only mention here that the loss of linearity of Stokes $V$ above, say, 400~G and, most importantly, the behavior of Stokes $I$ are reasons enough for the two types of atmospheres to be distinguished by observational means. If Eq.\ (\ref{eq:vpropmag}) were universally valid, then it would indicate that the RTE is almost useless since the emergent profile would be proportional to one of the matrix elements of $\matriz{K}$. Elementary mathematics readily explain that this is not possible except for, perhaps, a small range of field values. In summary, we must acknowledge that {\bf Stokes $V$ is not proportional to the longitudinal component of the magnetic field}. \subsubsection{Constant vector magnetic field and LOS velocity} \label{sec:constantbv} There is still a third option to deal with constant $\vector{B}$ and $v_{\rm LOS}$. Imagine that the atmosphere is a regular one as far as thermodynamics is concerned but where the magnetic and dynamic quantities do not vary with depth. Since the propagation matrix is no longer constant, no analytic solution of the RTE is available.\footnote{The statement is not very rigorous but is true in practical terms. Indeed, one can conceive other \matriz{K} stratifications that still allow an analytic solution of the RTE \citep{1985SoPh...97..239L}.} One is then led to use numerical techniques to synthesize the spectrum. The atmosphere, however, is greatly simplified since the number of free parameters is reduced. This can be very helpful for quicker analyses of the data or as a first step for more elaborate subsequent approaches that include variations of $\vector{B}$ and $v_{\rm LOS}$ with the optical depth. This is the approximation used in the first version of the SPINOR code \citep{1992A&A...263..339S} and available as an option in the SIR code \citep{1992ApJ...398..375R}. \subsection{Physical quantities varying with depth} \label{sec:varying} The community has gathered a great deal of evidence about variations of $\vector{B}$ and $v_{\rm LOS}$ along the optical path everywhere over the solar disk. In addition, physical laws such as those of magnetic flux and mass conservation demand that these quantities vary with optical depth in a number of structures. The approximations in the former subsections cannot, then, be considered but as first-step approaches or simplified descriptions of reality. In any case, we can safely assume that the stratifications of the physical quantities are bounded functions of $\tau_{\rm c}$ (or whichever variable parameterizes the optical path), as we admitted at the beginning of this Section. 
A historical landmark in the full acknowledgement of LOS velocity gradients from an observational point of view was the discovery by \citet{1974A&A....31..179M} and \citet{1974A&A....35..327I,1974A&A....37...97I,1975A&A....41..183I} of broadband circular polarization in sunspots. The true explanation was already suggested in the last of those papers, although schematically founded on the assumption of two slabs with \emph{different velocities} and magnetic field strengths. The broadband observations were soon related to spectral line net circular polarization (the integral of the Stokes $V$ profile over the wavelength span of the line): \citet{1975SoPh...42...21G} computed all four Stokes profiles in the presence of an LOS velocity gradient and indeed obtained asymmetric profiles; later on, \citet{auer+heasley1978} demonstrated that velocity gradients along the line of sight are a necessary and sufficient condition for such a net circular polarization, although they neglected magneto-optical effects. Rigorous derivations (including dispersion effects) were later obtained and can be found, for example, in the elegant work by \citet{1981NCimB..62....1L}. The symmetry properties of the propagation matrix elements predict no net circular polarization (or Stokes $V$ area asymmetry) in the absence of an LOS velocity gradient. Other mechanisms, such as insufficient spatial resolution implying mixtures of individual atmospheres within a pixel, may produce asymmetries in the peaks (the so-called amplitude asymmetries), but the integral of $V$ will remain zero. Therefore, any net circular polarization is unambiguous observational evidence for the presence of velocity gradients. And Stokes $V$ area asymmetries are observed practically everywhere. Unfortunately, no such unambiguous evidence exists for the presence of magnetic field gradients, although we know on physical grounds there are plenty of them, such as those through magnetic canopies where a magnetic layer overlies a non-magnetic one.

\subsubsection{Parameterizing the stratifications} \label{sec:parameterizing}

Among the numerical codes relevant to this review (see Sect.\ \ref{section:techniques}) there are some that acknowledge variations of $\vector{B}$ and $v_{\rm LOS}$. We deal here with what might be called ``normal'' or ``regular'' stratifications, such as those employed by \citet{1992ApJ...398..375R}, \citet{1998A&A...336L..65F}, \citet{2000ApJ...530..977S}, and \citet{2001ASPC..236..487S}, and leave some others, devoted to specific solar features, to the following paragraphs. Since the number of depth grid points used for the numerical integration of the RTE can be high, it may be advisable to reduce the degrees of freedom in the variations with depth of the physical quantities. As commented on above, a reasonable approach is to describe the stratifications with polynomials of increasing order in a stepwise form: constant values, then linear, parabolic, and third-order polynomial dependences, and so on. Then, if we assume, for instance, that $v_{\rm LOS}$ is linear with $\tau_{\rm c}$, we only need to specify the velocity at two grid points (\emph{nodes} in SIR's terminology), and at three if it is parabolic, hence reducing the number of free parameters of the model. We do not need to specify $T$, $\vector{B}$, and $v_{\rm LOS}$ at every single point we use for solving the RTE but only at a few of them.
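To make the node scheme concrete, here is a minimal Python sketch; the function name and the equally spaced placement of the nodes are illustrative assumptions of ours (actual codes such as SIR use their own node placement and spline interpolation):

\begin{verbatim}
# Node-based parameterization: a depth-dependent quantity (here
# v_LOS) is specified only at a few nodes and then interpolated
# onto the full log(tau_c) grid used to solve the RTE.
import numpy as np

def stratification(logtau, node_values):
    """1 node -> constant, 2 -> linear in log(tau_c),
    3 -> parabolic, i.e., a polynomial of degree n_nodes - 1."""
    n = len(node_values)
    if n == 1:
        return np.full_like(logtau, node_values[0])
    nodes = np.linspace(logtau.min(), logtau.max(), n)
    return np.polyval(np.polyfit(nodes, node_values, n - 1), logtau)

logtau = np.linspace(-4.0, 1.0, 51)              # full depth grid
v_lin = stratification(logtau, [2.0, 1.75])      # two nodes: linear
v_par = stratification(logtau, [2.0, 1.9, 1.5])  # three: parabolic
\end{verbatim}

Whatever the number of nodes, the Stokes synthesis still uses all 51 grid points; only the number of free parameters is reduced.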
We shall see in Sect.\ \ref{section:techniques} that one can go even further with this kind of approach and consider more involved optical depth dependences if necessary.

\subsubsection{The MISMA atmosphere} \label{sec:mismaatmos}

As we explained in Sect.\ \ref{sec:misma}, the MISMA hypothesis guarantees the appearance of Stokes profile asymmetries but at the expense of introducing a significant number of extra free parameters. In fact, even in the simplest MISMA atmosphere \citep{1996SoPh..164..203S}, where all the micro-structures are described by ME atmospheres, one has in principle as many as ten free parameters per needed component (also known as micro-structure): to the nine regular ME parameters, the volume occupation fraction of each micro-structure must be added. In more complicated MISMAs, the number of parameters is even higher \citep{1997ApJ...491..993S}. Moreover, in spite of the very detailed physical description, where equilibrium equations are required for slender flux tubes, the inclination and azimuth of the magnetic field are kept constant throughout the whole atmosphere, which does not seem very realistic. (Canopies are found almost everywhere owing to the fanning out of magnetic field lines with height.) Last, but not least, {\bf when the structuring of the atmosphere is established at sizes comparable to $\ell$, the average propagation matrix does not result in an RTE as in Eq.\ (\ref{eq:rtemisma}), which is no longer valid}. Modern observations with continuously increasing spatial resolution do indeed show this kind of structuring both in quiet regions and in active regions and sunspots. For example, single magnetic flux tubes of approximately 150 km in size have been fully resolved by \citet{2010ApJ...723L.164L}; their evolution has been followed for half an hour by \citet{2014ApJ...789....6R}; and the internal structure of network magnetic structures has been revealed \citep{2012ApJ...758L..40M} with {\sc Sunrise}/IMaX observations (\citeauthor{2011SoPh...268...57M}, \citeyear{2011SoPh...268...57M}; \citeauthor{2011SoPh..268....1B}, \citeyear{2011SoPh..268....1B}). In our opinion, the MISMA hypothesis, being a clever idea for producing asymmetries, is advisable as a ``when-all-else-fails'' atmosphere, but conventional radiative transfer treatments still provide reasonable interpretations of the observations.

\subsubsection{Other special atmospheres} \label{sec:interlaced}

This subsection is devoted to three special cases where the physical scenario envisaged to explain the observations requires a specific configuration that is not intended to be universally valid. Those specific configurations, however, help in interpreting the Stokes profiles emerging from given solar features.

\paragraph{Interlaced atmospheres}

Imagine that your line of sight pierces a number $n$ of alternate boundaries $\left\{ s_{i} \right\}_{i=1,\ldots,n} (s_{1}<s_{2}<\ldots<s_{n})$ between two distinct atmospheres, as when observing from the side two identical thin flux tubes that are close but not stuck to each other. In such a scenario, the structuring of the atmosphere is comparable in size to $\ell$ and, therefore, the MISMA hypothesis does not hold.
If the solutions $\vector{I}_{\pm 1}$ of the RTE in each of the two atmospheres, labeled $\pm 1$, happen to be known, \citet{1995A&A...294..855D} found that the formal solution to the problem is \begin{equation} \label{eq:interlaced} \vector{I} (s) = \vector{I}_{+1} (s) + \sum_{i=1}^{n} (-1)^{n-i} \left[ \prod_{j=i}^{n} \matriz{O}_{(-1)^{n-j}} (s_{j+1}, s_{j}) \right] \Delta \vector{I} (s_{i}), \end{equation} for any $s \in [s_{n}, s_{\rm lim}]$, where the $+1$ atmosphere is assumed to be the outermost one, $\Delta \vector{I} (s_{i}) \equiv \vector{I}_{-1} (s_{i}) - \vector{I}_{+1} (s_{i})$, and $\matriz{O}_{\pm 1}$ are the evolution operators of the two atmospheres \citep[e.g.][]{1985SoPh...97..239L}. Equation (\ref{eq:interlaced}) is at the root of the flux-tube inversion code by \citet[][see \citeauthor{1996A&A...306..960B}, \citeyear{1996A&A...306..960B} as well]{1997ApJ...478L..45B}. A different treatment of discontinuities along the line of sight was proposed by \citet{2003ASPC..286..235B}, in which the density of depth grid points is increased in the neighborhood of the discontinuity.

\paragraph{Atmospheres with Gaussian profiles}

The existence of net circular polarization in the penumbrae of sunspots was also the driver for \citet{2003ASPC..307..301B} to propose an implementation of the uncombed model by \citet{solanki+montavon1993}. The scenario is based on two components, namely a magnetic background and a penumbral magnetic flux tube, the latter occupying a fractional area of the resolution element. The model parameters of the penumbral tube are built by Gaussian modifications (in depth) of those in the background. All the Gaussians have the same width and are located at the same depth, but their amplitudes depend on the specific model parameter, of course. With these premises, the SIR code was modified into the so-called SIRGAUS code, which has been used, among others, by \citet{2007PASJ...59S.601J}, \citet{2008A&A...481L..17J}, \citet{2010ApJ...713.1310I}, and \citet{2014A&A...566A.139Q}.

\paragraph{Atmospheres with jump discontinuities}

Discontinuities can be treated numerically by decreasing the depth grid step or by using Eq.\ (\ref{eq:interlaced}). A specific implementation of such discontinuities was first used by \citet{2009ApJ...704L..29L} for an analysis of sunspot light bridges. Like SIRGAUS, it is based on a modification of the SIR code to take this particular scenario into account. In it, two magnetic atmospheres coexist in the resolution pixel: a background atmosphere in which $\vector{B}$ and $v_{\rm LOS}$ are constant with depth, and another magnetic atmosphere where those quantities have a Heaviside-like discontinuity. This code (called SIRJUMP) has also been used in practice, e.g., by \citet{2012ApJ...758L..40M} and \citet{2012ApJ...748...38S}.

\newpage

\section{Degrees of approximation in the Stokes profiles} \label{sec:approxprof}

Since the ultimate goal of inversions is the \emph{bona fide} reproduction of observed profiles, an analysis of the properties of Stokes spectra as functions of wavelength is in order. Such an analysis should be aimed at finding the most conspicuous characteristics of the profiles, so that these characteristics are the best reproduced among all the features. In other words, if, for instance, a given Stokes $I (\lambda)$ profile shows only small deviations from a Gaussian, we should aim to obtain the Gaussian that best simulates the profile and identify the model parameters responsible for this bulk behavior.
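As a toy illustration of this ``bulk behavior first'' idea, the following minimal Python sketch (the profile and all its numbers are made up for the example; \texttt{scipy} is assumed to be available) fits a single Gaussian to a noisy line-depression profile and keeps the residual for later scrutiny:

\begin{verbatim}
# Fit the "bulk" Gaussian behavior of a line-depression profile
# I_d = 1 - I/I_c and separate it from the small deviations.
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, x0, sigma):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

x = np.linspace(-30.0, 30.0, 121)     # wavelength offset [pm]
rng = np.random.default_rng(0)
i_d = gauss(x, 0.6, 1.5, 6.0) + 1e-3 * rng.standard_normal(x.size)

popt, _ = curve_fit(gauss, x, i_d, p0=[0.5, 0.0, 5.0])
residual = i_d - gauss(x, *popt)      # the nuances left over
\end{verbatim}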
In some cases we may be satisfied just with this ``coarse'', or not very detailed, description and leave small deviations or nuances to a further, in-depth analysis that might even be carried out separately. As we are going to see, this approximation of incremental complexity for the profiles is well in line with the successive approximations we have described for the model atmospheres in Section \ref{sec:approxmod}.

The Stokes $Q$, $U$, and $V$ profiles and Stokes $I$ in line depression, that is, \begin{equation} \label{eq:intensityld} I_{\rm d} \equiv 1 - \frac{I}{I_{\rm c}}, \end{equation} as functions of $x \equiv \lambda-\lambda_{0}$ (where $\lambda_{0}$ is the central wavelength of the line), can be decomposed as sums of even and odd functions of $x$, like any other function defined over ${\mathchoice {\rm I\mskip-4mu R} {\rm I\mskip-4mu R} {\rm I\mskip-4.5mu R} {\rm I\mskip-5mu R}}$.\footnote{Strictly speaking, we should take a mean LOS velocity wavelength shift into account as well, if it is large enough.} Specifically, if we call $S (x)$ any one of the profiles, then \begin{equation} \label{eq:s+s-} S (x) = S_{+} (x) + S_{-} (x), \end{equation} where \begin{equation} \label{eq:splussminus} S_{+} (x) \equiv \frac{S(x) + S(-x)}{2} \,\,\, {\rm and} \,\,\, S_{-} (x) \equiv \frac{S(x) - S(-x)}{2}. \end{equation} By construction, $S_{+}$ is even and $S_{-}$ is odd.\footnote{The property is also valid for Stokes $I$. We have, however, chosen $I_{\rm d}$ for reasons that will become clear a little later in this Section.}

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{approxprofiles}} \caption{Differences between the Stokes profiles of the Fe~{\sc i} line at 617.3 nm as synthesized in two model atmospheres that differ in the LOS velocity. See text for details.} \label{fig:approxprofiles} \end{figure}}

This parity property is very interesting because, as we have seen in former Sections, the Stokes profiles of any line formed in the absence of velocity gradients have definite symmetry (parity) properties. Since asymmetries in regular profiles are relatively small, that is, the profiles usually display a predominant parity character (even for Stokes $I$, $Q$, and $U$, and odd for Stokes $V$), a sum of even and odd profiles may account for the observed spectra, with the opposite-parity component acting as a \emph{perturbation} related to velocity gradients. This can explain the success of ME inversion codes in fitting many observations \citep[cf. \citeauthor{1998ApJ...494..453W}, \citeyear{1998ApJ...494..453W};][]{2010A&A...518A...2O}: the ME atmosphere accounts for the main bulk of the observed Stokes profiles. In Fig. \ref{fig:approxprofiles} we plot the differences between the Stokes profiles of the Fe~{\sc i} line at 617.3 nm as synthesized in two model atmospheres. Both have the HSRA \citep{gingerich+etal1971} stratification of temperature with $B = 1500$ G and $\gamma=\varphi = 30$\degree. One of the models has a constant $v_{\rm LOS} = 1.87$ \mbox{$\:$km$\,$s$^{-1}$} and the other a small gradient from $v_{\rm LOS} = 2$ \mbox{$\:$km$\,$s$^{-1}$} at the bottom of the atmosphere to 1.75 \mbox{$\:$km$\,$s$^{-1}$} at the top. Both have $\xi_{\rm mac} = 1$ \mbox{$\:$km$\,$s$^{-1}$} and have been convolved with the IMaX PSF. Note that these differential profiles display almost the opposite parity character to their corresponding Stokes profiles.
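In discrete form, the decomposition of Eqs.\ (\ref{eq:s+s-}) and (\ref{eq:splussminus}) is a one-liner. A minimal Python sketch follows, assuming the profile is sampled on a grid symmetric about the line center, so that $S(-x)$ is simply the reversed array (the idealized profile is made up for the example):

\begin{verbatim}
# Even/odd decomposition of a Stokes profile S(x):
# S_+ = [S(x) + S(-x)]/2 and S_- = [S(x) - S(-x)]/2.
import numpy as np

def even_odd(S):
    S_rev = S[::-1]                   # S(-x) on a symmetric grid
    return 0.5 * (S + S_rev), 0.5 * (S - S_rev)

x = np.linspace(-20.0, 20.0, 81)
V = -x * np.exp(-0.5 * (x / 5.0) ** 2)             # odd, regular V
V += 0.05 * np.exp(-0.5 * ((x - 2.0) / 5.0) ** 2)  # small asymmetry
V_even, V_odd = even_odd(V)    # V_even carries the "perturbation"
assert np.allclose(V_even + V_odd, V)
\end{verbatim}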
\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{reesetal}} \caption{First six eigenprofiles for Stokes $I$, $Q$, $U$, and $V$. They are obtained from observations of the Fe~{\sc i} line pair at 630.1 and 630.2 nm. Adapted from \citet{rees+etal2000}.} \label{fig:eigenprofiles} \end{figure}}

A description with only $S_{+}$ or $S_{-}$ can thus provide a first-approach analysis of a large number of observations. Indeed, $S_{+}$ should be good for $I$, $Q$, and $U$, and $S_{-}$ for $V$, since the differences are smaller than our ``nominal'' noise of $10^{-3} \, I_{\rm c}$. This, of course, cannot always be the case: very peculiar Stokes profiles are increasingly often observed as our polarization accuracy improves. For instance, \citet{sigwarth+etal1999} first reported the observation of one-lobed $V$ profiles that were later studied in detail by \citet{2000A&A...357..351G} and \citet{sigwarth2001}. Most of these profiles are found in the internetwork \citep[e.g.,][]{2012ApJ...748...38S}.

A different description of the Stokes profiles as functions of wavelength was proposed by \citet{rees+etal2000}, who suggested that they can be described as sums of given \emph{principal components} or \emph{eigenprofiles}. If those eigenprofiles are contained in a database and are properly selected, they can account for the profile shapes with increasing fidelity as the number of principal components in the expansion grows. An example of such eigenprofiles is given in Fig.\ \ref{fig:eigenprofiles}. By adding these components, properly weighted, the corresponding Stokes profiles are synthesized. This is the basis for all the PCA inversion techniques presented so far, and the concept is fairly simple.

A similar approach to that of PCA was proposed by \citet{2003A&A...412..875D}, based on the fact that Stokes $I_{\rm d}$, $Q$, $U$, and $V$ belong to ${\mathchoice {\rm I\mskip-4mu L} {\rm I\mskip-4mu L} {\rm I\mskip-4.5mu L} {\rm I\mskip-5mu L}}^2$, the space of square-integrable functions over ${\mathchoice {\rm I\mskip-4mu R} {\rm I\mskip-4mu R} {\rm I\mskip-4.5mu R} {\rm I\mskip-5mu R}}$. Since ${\mathchoice {\rm I\mskip-4mu L} {\rm I\mskip-4mu L} {\rm I\mskip-4.5mu L} {\rm I\mskip-5mu L}}^2$ is a Hilbert space with a well-defined scalar product, an exact, infinite expansion of the profiles is possible in terms of any of the several bases of the space. Among those basis systems, \citeauthor{2003A&A...412..875D} selected the family of Hermite functions, $h_n (x)$, because of the similarity between the shapes of the first few elements of the family and the Stokes profiles (see Figure\ \ref{fig:hermite}). The Hermite functions (see the aforementioned paper for a definition) thus provide a suitable basis for approximating the observed profiles with finite expansions of a few terms. Apart from their possible use in inversion codes, which has not been investigated so far, the expansion of Stokes profiles in terms of Hermite functions has been used by \citet{2012SoPh..276..415T} for compressing observed Stokes profiles and by \cite{2012ApJS..203....7H} for automatic solar active region detection. The first authors, after expanding the Stokes profiles, keep only the coefficients, compressed with a conventional algorithm. This way they reduce the storage space by a factor of 20 while keeping most of the information virtually noise free.
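A minimal Python sketch of such a finite Hermite-function expansion follows; the profile, the grid, and the truncation order are illustrative choices of ours, not those of the cited works:

\begin{verbatim}
# Expand a profile in orthonormal Hermite functions
# h_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi))
# and reconstruct it from a few coefficients.
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_fn(n, x):
    c = np.zeros(n + 1); c[n] = 1.0          # select H_n
    norm = 1.0 / sqrt(2.0 ** n * factorial(n) * sqrt(pi))
    return norm * hermval(x, c) * np.exp(-0.5 * x ** 2)

x = np.linspace(-8.0, 8.0, 401); dx = x[1] - x[0]
V = -x * np.exp(-0.5 * x ** 2) + 0.1 * np.exp(-0.5 * (x - 0.5) ** 2)

coeffs = [np.sum(V * hermite_fn(n, x)) * dx for n in range(6)]
V_rec = sum(c * hermite_fn(n, x) for n, c in enumerate(coeffs))
\end{verbatim}

Only the handful of numbers in \texttt{coeffs} would need to be stored, which is the essence of the compression scheme just described.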
\citet{2012ApJS..203....7H}, in turn, discriminates among the different active regions by looking at the complexity of the emerging Stokes profiles as described by their Hermite-function expansion coefficients.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.85\textwidth]{hermite}} \caption{First six Hermite functions. The abscissa has to be understood as a normalized wavelength (by the Doppler width of the line, for instance).} \label{fig:hermite} \end{figure}}

\newpage

\section{A synthesis approach} \label{section:synthesis}

As described in the introduction, an approximate knowledge of how the Stokes profiles react to the various model parameters is advisable, as it helps to select observables that are as ``orthogonal'' as possible. The ideal way to explore the diagnostic capabilities of Stokes profiles is by means of response functions (see Sect.\ \ref{section:response}). Tackling the problem head-on, that is, synthesizing the profiles in different model atmospheres, may nevertheless help grasp basic ideas about the behavior of the Stokes profiles. The idea is to study how the Stokes profiles vary when the model parameters are modified. This is the aim of this section.

\subsection{Constant atmospheres} \label{sec:constantatmos}

As we have been doing in the two previous sections, let us start with the easiest case of atmospheres that do not vary with optical depth and, specifically, with ME atmospheres, since their analytic solution of the RTE enables a quick numerical overview of the space of model parameters. Figure~\ref{fig:iqvgrowthgamma} shows Stokes $I$, $Q$, and $V$ for the Fe~{\sc i} line at 617.3 nm with the same thermodynamic parameters used in Figs.\ \ref{fig:meprofile} and \ref{fig:weakfieldapprox}. Since the linear polarization $L^{2} \equiv Q^{2} + U^{2}$ is rotationally invariant and $\varphi$ is constant throughout the ME atmosphere, we assume we have selected the preferred reference frame where $U$ is identically zero, so that $L=Q$. In practical terms, we have selected $\varphi = 0^{\circ}$ for all the profiles. The magnetic field strength is $B=300$, 500, 900, and 1200~G for the four rows from top to bottom. The magnetic inclination is encoded in color: $\gamma=0$ (dark green), 15 (purple), 30 (pink), 45 (green), 60 (blue), 75 (red), and $90^{\circ}$ (black). If we again assume a typical noise $\sigma=10^{-3} I_{\rm c}$, then the small differences in the core of Stokes $I$ for $B=300$~G may not be detected, and a neat distinction between $\gamma = 0$ and $15^{\circ}$ or $\gamma = 90$ and $75^{\circ}$ may hardly be reachable with Stokes $Q$ and $V$ at a $3\sigma$ level. Most certainly, however, there should not be any problem in distinguishing between 0, 30, 60, and $90^{\circ}$. Of course, the dependences of $I$ and $V$ on $\gamma$ are significant enough when the field is stronger. One should not be restricted to longitudinal components even for this small field strength, and the situation improves further for lower noise levels. This is a well-known issue: in polarimetric observations we are photon-starved, indeed as much as night-time astronomers may be when detecting very faint objects.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{iqvgrowthgamma}} \caption{Stokes $I$, $Q$, and $V$ as functions of the magnetic field inclination: dark green is for $\gamma=0$, purple for 15, pink for 30, green for 45, blue for 60, red for 75, and black for $90^{\circ}$.
The magnetic field strength is different for the various rows: from top to bottom, $B=300$, 500, 900, and 1200~G. The magnetic azimuth is identically zero.} \label{fig:iqvgrowthgamma} \end{figure}}

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{iqvgrowthstrength}} \caption{Stokes $I$, $Q$, and $V$ as functions of the magnetic field strength: black is for $B=1200$, red for 900, blue for 500, green for 300, and purple for 200~G. The magnetic field inclination is different for the various rows: from top to bottom, $\gamma=15$, 30, 60, and $75^{\circ}$. The magnetic azimuth is identically zero.} \label{fig:iqvgrowthstregth} \end{figure}}

An alternative way to gauge the sensitivity of the Stokes profiles to $B$ and $\gamma$ by synthesizing the profiles is shown in Figure~\ref{fig:iqvgrowthstregth}. The rows correspond to different inclinations: $\gamma=15$, 30, 60, and $75^{\circ}$ from top to bottom. The magnetic field strength is this time encoded in colors: $B=1200$ (black), 900 (red), 500 (blue), 300 (green), and 200~G (purple). This complementary view shows that the dependence on $B$ is in fact stronger than that on $\gamma$. Therefore, properly sampled Stokes profiles with enough polarimetric accuracy should be able to provide the required information to infer the magnetic field strength and inclination separately for most of the strength spectrum. Weaker fields will certainly have larger uncertainties, but that does not imply a theoretical inability. As a matter of fact, the weaker the fields we want to explore, the lower the noise we need in our observations, but this is somewhat obvious.

\subsection{Depth-dependent atmospheres} \label{sec:depthdependent}

In the solar atmosphere, physical quantities do vary with depth. Acknowledging such variations almost always implies resorting to numerical solutions of the transfer equation. That was first done by \citet{1969SoPh....9..372B,1969SoPh...10..262B}. Numerical results by \citet{1969SoPh....8..264S,1970SoPh...12...84S} and by \citet[][see Fig.\ \ref{fig:wittmann}]{1971SoPh...20..365W} soon appeared, and numerical codes were described \citep[e.g.][\citeauthor{1976A&AS...25..379L}, \citeyear{1976A&AS...25..379L}]{1974SoPh...35...11W}. Those first numerical codes capable of synthesizing the Stokes profiles were based on the fourth-order Runge--Kutta algorithm, which is very accurate at the price of being computationally expensive. A generalization of the method by \citet{1964CR....258.3189F} to polarized light was proposed by \citet{auer+etal1977} and later modified by \citet{1987nrt..book..241R} in order to take magneto-optical effects into account. A fast solution of the RTE, after being reformulated as an integral Volterra equation (first suggested by \citeauthor{1969SoPh....8..264S}, \citeyear{1969SoPh....8..264S}), was proposed by \citet{rees+etal1989} with their so-called DELO method.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.8\textwidth]{Fig13new}} \caption{Comparison between the observed and computed Stokes $I$ profile of the Fe~{\sc i} line at 630.25~nm. Adapted from \citet{1971SoPh...20..365W}.} \label{fig:wittmann} \end{figure}}

An improvement in accuracy and computational speed was obtained by \citet{1998ApJ...506..805B} with a Hermitian method based on developing the Stokes vector as a fourth-order polynomial with depth.
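To fix ideas, a minimal Python sketch of the scalar (unpolarized) counterpart of the formal solution, $I(0)=\int_0^\infty S(\tau)\,{\rm e}^{-\tau}\,{\rm d}\tau$, evaluated by simple trapezoidal quadrature, follows; the grid and the source function are made up for the example, and the polarized solvers mentioned above generalize this scheme with the evolution operator in place of ${\rm e}^{-\tau}$:

\begin{verbatim}
# Emergent intensity from a linear source function S = a + b*tau.
# The Eddington-Barbier relation predicts I(0) = a + b = S(tau=1).
import numpy as np

tau = np.logspace(-6, 2, 400)          # optical depth grid
a, b = 1.0, 1.5
S = a + b * tau                        # toy source function
f = S * np.exp(-tau)
I0 = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))  # trapezoid

print(I0, "~", a + b)                  # both close to 2.5
\end{verbatim}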
In Fig.\ \ref{fig:bellotrubio} we show an example of a synthesis of the same line as in Fig.\ \ref{fig:wittmann}, where clear wavelength asymmetries in the Stokes profiles can be seen. According to Sections \ref{sec:varying} and \ref{sec:approxprof}, such asymmetries are produced by the variation with depth of the physical quantities. Learning how the various profile features of the four Stokes parameters depend on the many model parameters is certainly difficult and cannot be summarized in this paper. Experience, however, can train a researcher to deduce, often after a quick glance (the \emph{art}), a specific stronger $v_{\rm LOS}$ or $B$ here or there in the atmosphere. The situation is therefore much more complicated than in the ME case, and one should rely upon inversions.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.8\textwidth]{bellotrubio}} \caption{Stokes profiles of the Fe~{\sc i} line at 630.25 nm as synthesized with a Hermitian method. Adapted from \citet{1998ApJ...506..805B}.} \label{fig:bellotrubio} \end{figure}}

\subsection{MHD simulations} \label{sec:MHD}

The advent of magnetohydrodynamic (MHD) simulations such as those by \citet{2003PhD...Voegler...A}, \citet{voegler+etal2005}, and \citet{2012ApJ...750...62R} has opened a new window on the exploration of the solar photosphere. They have enabled calculations that may help to envision what is expected from observations and to interpret them. The simulations also provide predictions that can be confronted with the observations. The enrichment has been remarkable because the realistic atmospheres resulting from the simulations have physical quantities varying along the optical path without any \emph{a priori} assumptions and may be closer to the actual Sun than other, simplified atmospheres. MHD simulations have been used to test the reliability of inversion techniques \citep{2010A&A...518A...2O,2012A&A...543A..34D}. In the first of these works, a confirmation of the predictions by \citet{1996A&A...314..295S} was found: if an inference technique assumes magnetic fields and velocities constant with depth and is used on data coming from an atmosphere where these quantities are depth dependent, the result is just the average of the actual stratification weighted with the generalized response function to perturbations of that quantity. In the second of these papers, the non-LTE inversion code called NICOLE is tested.

Here we report on preliminary results by \citet{2015ApJ...inprep...H} to illustrate the role of simulations as a tool to determine the optimum wavelength sampling of Stokes profiles. This is a particularly interesting topic that is very relevant to observational (and hence interpretational) work. Are the available samplings enough to capture all the information encoded in the Stokes profiles? What is the optimum sampling for a new instrument under development, given the goals such an instrument aims to fulfill? Figure\ \ref{fig:stokespower} shows the Stokes $I$ (in depression), $Q$, $U$, and $V$ profiles of the Fe~{\sc i} line at 630.25 nm across a slit over a sunspot simulation by \citet{2012ApJ...750...62R} (left column panels) and their corresponding power spectra (right column panels). The simulation contains the transition from the quiet Sun (at both sides of the $X$ dimension) through the penumbra and the umbra of a sunspot.
The power spectra of Stokes $V$, $Q$, and $U$ are wider than that of Stokes $I$ as a natural consequence of their shapes. Therefore, a cut-off frequency is more easily found in the polarization profiles. In this example, Shannon's critical sampling interval is around 1.25 pm/pixel, with the remarkable fact that no convolution with an instrumental PSF has been applied to the profiles. This value is coarser than the sampling provided by several ground-based spectrographs with resolutions of about $R\simeq 10^{6}$, which would thus be considered unnecessarily fine for the required diagnostics. As soon as the profiles are observed by an instrument with a finite-width PSF, the Nyquist frequency will shrink to smaller values; hence, the critical sampling will be coarser. These kinds of calculations can therefore help in designing new instruments and, as far as this paper is concerned, in deciding the adequate spectral sampling for any synthesis or inversion code: if the sampling is too fine, we are wasting computational time; if the sampling is too coarse, we are neglecting available information.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{stokes_power}} \caption{Stokes profiles ($I$ is in line depression) across a sunspot simulation by \citet{2012ApJ...750...62R} (left column) and power spectra (right column). Adapted from \citet{2015ApJ...inprep...H}.} \label{fig:stokespower} \end{figure}}

\newpage

\section{Response functions} \label{section:response}

The needle of an old-fashioned ammeter is useful insofar as it moves when a current circulates through the electric circuit. If, for instance, the needle does not move when a given very weak or very strong current passes through, then the ammeter is useless or \emph{insensitive} to those currents. Keeping this analogy, our observables, the Stokes profiles, are useful for inferring given physical quantities as long as they change when those physical quantities vary. Of course, to be detectable, the change should be larger than the noise. Therefore, the correct question to ask in order to find out whether a given spectropolarimetric proxy senses, for example, $T$, $B$, or $v_{\rm LOS}$ is how much it changes when $T$, $B$, or $v_{\rm LOS}$ vary. The preceding section has been an attempt in that direction: we have been changing the various atmospheric quantities and checking the modifications in the Stokes profiles. We have proceeded in the \emph{direct} way, that is, through the solution of the \emph{differential} RTE. This direct approach is not very useful in practice, though, as we already recognized in Section \ref{sec:depthdependent}. Which quantity is to be modified first, at which optical depths, and by how much? If the problem is the simplest one we talked about in the introduction (that of measuring velocities from the line core wavelength), then there may be some room for the direct approach. If not, the diagnostic capabilities of the Stokes profiles have to be further explored with the final goal of proceeding the \emph{inverse} way, that is, of solving the \emph{integral} equation known as the formal solution of the RTE (Eq.\ \ref{eq:rteformalsolution}). Here, we have a difficult problem where the observables depend nonlinearly on the unknowns.
The nonlinear character is clear: the observables, on the left-hand side of Eq.\ (\ref{eq:rteformalsolution}), are equal to an integral of the product of three terms, each depending strongly and non-linearly on the physical quantities that characterize the model atmosphere \citep[e.g.,][]{2003isp..book.....D}. Changes in the Stokes spectrum are then very difficult to predict when modifications in the physical parameters occur. As in many other branches of physics, the diagnostic tools come out through a linearization analysis. We can assume, for instance, that in a very special regime, when perturbations are small enough, changes occur linearly. These are the basics of linearization which, in the realm of solar physics, was introduced for non-polarized light by \citet[][see also \citeauthor{1976SoPh...50..239C}, \citeyear{1976SoPh...50..239C}, and \citeauthor{1977A&A....54..227C}, \citeyear{1977A&A....54..227C}]{1971SoPh...20....3M} through the so-called weighting functions, although the name \emph{response functions} (RFs) did not appear in the literature until the work by \citet{1975SoPh...43..289B}. Since polarization was not taken into account, those analyses were only strictly valid for isotropic media or, as far as we are concerned, for non-magnetic atmospheres. Response functions were introduced within polarized radiative transfer by the brothers \citet{1977A&A....56..111L}, but RFs received little attention until the works by \citet{1982SoPh...77...13D,1983SoPh...87..221L}, \citet{1988A&A...204..266G}, \citet{almeida1992}, and \citet{1992ApJ...398..375R,1994A&A...283..129R}. The latter found that the perturbations $\delta x_{i}$ applied to the $p+r$ atmospheric quantities ($p$ of them varying with height and $r$ constant) induce modifications $\delta \vector{I} (0)$ to the observed Stokes profile given by \begin{equation} \label{eq:deltai} \delta \vector{I} (0) = \sum_{i=1}^{p+r} \int_0^{\infty} \vector{R}_i (\tau_{\rm c}) \, \delta x_i (\tau_{\rm c}) \, {\rm d}\tau_{\rm c}, \end{equation} where \begin{equation} \label{eq:responsefun} \vector{R}_i (\tau_{\rm c}) \equiv \matriz{O}(0, \tau_{\rm c}) \, \left[ \matriz{K} (\tau_{\rm c}) \frac{\partial \vector{S}}{\partial x_i} - \frac{\partial \matriz{K}}{\partial x_i} (\vector{I} - \vector{S}) \right]. \end{equation} Therefore, the modification of $\vector{I} (0)$ is given by a sum of terms, each related to one of the atmospheric quantities characterizing the medium. The terms are integrals over the whole atmosphere of the perturbations to the atmospheric quantities weighted by the RFs. The physical meaning of Eq.\ (\ref{eq:deltai}) is straightforward: imagine that we change only a given quantity ($T$, $B$, $v_{\rm LOS}$, or any other) by unit magnitude (i.e., 1 K, 1 G, 1 \mbox{$\:$km$\,$s$^{-1}$}, etc.) in the narrow surroundings of a given continuum optical depth $\tau_0$; then, the subsequent modification in the emergent Stokes spectrum is just the value of the corresponding RF at that optical depth: \begin{equation} \label{eq:rfexample} \delta \vector{I} (0) = \vector{R}_i (\tau_0). \end{equation} Then, since the Stokes profiles are usually recorded normalized to some reference value (e.g., the average unpolarized continuum intensity of the quiet Sun), the units of RFs are the inverse units of the corresponding quantity. That is, the response function to perturbations of temperature is measured in K$^{-1}$; the response to perturbations in the magnetic field strength is measured in G$^{-1}$; and so on.
Thus, a response function can be defined as the modification that the Stokes spectrum experiences when the medium undergoes a unit perturbation of a given physical quantity in a given, very narrow region in optical depth. Equation (\ref{eq:responsefun}) tells us that these modifications build upon the variations of the propagation matrix and of the source function vector with respect to the physical quantities, and upon their evolution through the atmosphere as driven by the evolution operator. The two variations enter with opposite signs. This means that they are somehow competing, as one could expect: while $\vector{S}$ represents the sources of photons, $\matriz{K}$ represents the sinks. Strictly speaking, $\matriz{K}$ describes not only photon removal but also pleochroism and dispersion; still, the role of the propagation matrix is broadly that of a withdrawal. The counterbalancing between $\vector{S}$ and $\matriz{K}$ is very important for understanding radiative transfer because some analyses forget it and only account for the effects of $\matriz{K}$ (absorption in the non-polarized case). This was clearly pointed out and explained by \citet{1994A&A...283..129R}.

Equation (\ref{eq:deltai}) suggests that RFs play the role of partial derivatives of the observed Stokes profiles with respect to the atmospheric quantities once the latter have been discretized. This role is even clearer when we go down to the real world of a quadrature formula for that equation. Model atmospheres are usually described numerically by a grid of points that are spaced in logarithmic optical depth. Let $\Delta (\log \tau_{\rm c})$ be that spacing. If we call $x_{i,j} \equiv x_i (\tau_j)$ and $\vector{R}_{i,j} \equiv \vector{R}_i (\tau_j)$, then Eq.\ (\ref{eq:deltai}) can be written as \begin{equation} \label{eq:deltaiquad} \delta\vector{I}(0) = \sum_{i=1}^p \sum_{j=1}^n a_j \vector{R}_{i,j} \, \delta x_{i,j} + \sum_{k=1}^r \vector{R}'_k \, \delta x_k, \end{equation} where $a_j = \Delta (\log \tau_{\rm c}) \ln 10 \, c_j \tau_j$, with $c_j$ being the quadrature coefficients. Therefore, if we include $a_j$ in the RFs, as one usually does in graphical representations, then Eq.\ (\ref{eq:deltaiquad}) shows the Stokes spectrum modifications as \emph{linear} expansions in the new variables $x_{i,j}$ and $x_k$. The first term on the right-hand side corresponds to those physical quantities that vary with depth; the second stands for those that are assumed to be constant.\footnote{For the specific meaning of the RF to perturbations of a constant quantity, $\vector{R}'$, see Sect.\ \ref{sec:rfproperties} below.} In summary, we can say that RFs are indeed partial derivatives of $\vector{I}(0)$ with respect to the (numerical) atmospheric parameters and, thus, they directly provide the sensitivities of the Stokes spectrum to perturbations of the physical conditions in the medium.

Examples of these RFs are plotted in Figs.\ \ref{fig:figiresponse}, \ref{fig:figqresponse}, \ref{fig:figuresponse}, and \ref{fig:figvresponse}. They have been evaluated for Stokes $I$, $Q$, $U$, and $V$, respectively, of the Fe~{\sc i} line at 630.25 nm to perturbations of the temperature (top row panels), of the magnetic field strength (middle row panels), and of the LOS velocity (bottom row panels). The RF values are given in units of $10^{-6}\,$K$^{-1}$, $10^{-6}\,$G$^{-1}$, and $10^{-4}\,(\!\mbox{$\:$km$\,$s$^{-1}$})^{-1}$. The two columns correspond to two different model atmospheres.
That in the left-hand columns has the temperature stratification of the HSRA model \citep{gingerich+etal1971}, a constant $B=2000$ G, $\gamma = 30\degree$, and $\varphi = 60\degree$; the plasma is at rest in this model. That in the right-hand columns has a 500 K cooler temperature, and a magnetic field 500~G weaker, 20\degree less inclined, and with an azimuth of 10\degree; $v_{\rm LOS} = 1.58 + 0.3 \log \tau_{\rm c}$ ($\!$\mbox{$\:$km$\,$s$^{-1}$}).

Equation (\ref{eq:deltaiquad}) hints at a way of calculating RFs through what could be called \emph{the brute force method}. This method is a four-step procedure: 1) synthesis of the Stokes spectrum in a given model atmosphere; 2) perturbation of just one of the (numerical) atmospheric parameters by a small amount and synthesis of the spectrum in the new model atmosphere; 3) calculation of the ratio between the difference of the two spectra and the perturbation; 4) repetition of steps 2) and 3) for each optical depth, for each wavelength sample, and for the remaining Stokes parameters. This is a formidable calculation as soon as the number of free parameters is large. Fortunately, Eq.\ (\ref{eq:responsefun}) provides a shortcut, since the evolution operator, the propagation matrix, and the source function vector have to be calculated anyway in every synthesis of the spectrum. With only the added calculation of the derivatives, one can easily obtain RFs at the same time as the RTE is solved. This property is extremely useful for inversion codes, as we shall see in Section\ \ref{section:techniques}.

Equation (\ref{eq:deltaiquad}) also offers an explicit explanation of the astrophysical ill-conditioning we commented on in Section \ref{sec:introduction}: the same modification of $\vector{I}(0)$ may be produced by perturbations of different quantities or by perturbations of a single physical quantity at several optical depths. That is, the effects of temperature can be similar to those of the magnetic field strength, or the effects of perturbing $B$ at $\log \tau_{\rm c} = -0.5$ can be the same as those of perturbing $B$ at $\log \tau_{\rm c} = -3$. Therefore, we cannot say that the changes $\delta\vector{I}(0)$ are produced by perturbations of one physical parameter or another without considering all of them at the same time. Cross-talk among some parameters may appear and, then, the retrieval of those parameters will be less reliable (see, e.g., Section \ref{sec:analyticrfs}).

\subsection{Properties of response functions} \label{sec:rfproperties}

A glance at Figs.\ \ref{fig:figiresponse}, \ref{fig:figqresponse}, \ref{fig:figuresponse}, and \ref{fig:figvresponse} readily tells us that some RFs are larger than others. This means that our line is more sensitive to some physical quantities than to others. However, the fact that RFs are measured in the inverse units of their corresponding parameters makes it difficult to compare the relative sensitivities. In this regard, \emph{relative} RFs shed some light. If we consider relative perturbations $\delta x_{i,j} / x_{i,j}$, then we can define relative RFs $\tilde{\vector{R}}_{i,j} \equiv \vector{R}_{i,j} x_{i,j}$. Hence, $\tilde{\vector{R}}_{i,j}$ describe the response of the Stokes spectrum to relative (i.e., dimensionless) perturbations. Experience shows that relative RFs to $T$ perturbations are the largest at all depths and wavelengths, clearly indicating that temperature is the most important quantity in line formation.
(Indeed, temperature is \emph{the} physical quantity that governs the thermodynamical state of the material medium because we assume that hydrostatic equilibrium prevails throughout our model atmospheres. With this assumption, pressure, the necessary second thermodynamic variable, is automatically prescribed.) Response functions to temperature perturbations start deviating from zero at deeper layers than those for the remaining quantities. This is because the second term on the right-hand side of Eq.\ (\ref{eq:responsefun}) goes to zero as the continuum optical depth tends to infinity, as explained by \citet{1994A&A...283..129R}. This physical fact implies that spectral lines tend to be insensitive to the other physical quantities at these deep layers.

The Stokes profile wavelength symmetries are preserved in RFs: in the absence of velocity gradients, RFs of Stokes $I$, $Q$, and $U$ to any perturbation are even functions of wavelength and RFs of Stokes $V$ are odd. This means that, in fact, velocity gradients increase the diagnostic capabilities of spectral lines. In their absence, half of the profile is useless, since the information it provides is exactly the same as that of the other half.\footnote{Indeed, such redundant information should help in decreasing the uncertainties in the retrievals by reducing the noise by a factor of $\sqrt{2}$.}

For given purposes, we can conceive perturbations that are constant with depth, in spite of the quantity being depth dependent. Owing to their nature, some physical quantities may be assumed constant with depth (e.g., macro- and microturbulent velocity, or any of the ME free parameters). In such cases, constant perturbations are in order. If so, Eq.\ (\ref{eq:deltai}) tells us that the resulting modification of the Stokes spectrum is given by the product of such a constant perturbation times the integral of the corresponding RF over the whole atmosphere. Hence, we can say that the RF to a constant perturbation, $\vector{R}'$ in Eq.\ (\ref{eq:deltaiquad}), is directly the integral of the regular response function or, in numerical terms, \begin{equation} \label{eq:rfcontantpertur} \vector{R}'_k \equiv \sum_{j=1}^n a_j \vector{R}_{k,j}. \end{equation}

As shown by \citet{1994A&A...283..129R}, RFs play the role of a PSF in the general theory of linear systems. Under this general theory, our system, the Stokes spectrum, experiences an input (the perturbation) and provides an output, $\delta\vector{I} (0)$. If the input is a Dirac delta, then the output is the corresponding value of the response function. If the input is harmonic throughout the atmosphere, then the response is the Fourier transform of the RF.

Response functions are model dependent. This property is extremely important in our understanding of spectral line sensitivities and, thus, in the inversion of the RTE. Instead of being a drawback, such a model dependence helps in disentangling the effects produced by the different quantities in distinct model atmospheres. Even in fixed model atmospheres, however, it is very difficult to discard one quantity or another at once, and most of them have to be retrieved at the same time. Once this is carried out, one can theoretically understand the meaning of the measurements \citep[][\citeauthor{2003isp..book.....D}, \citeyear{2003isp..book.....D}]{1996A&A...314..295S}.
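As a concrete counterpart of the brute-force method outlined above, here is a minimal Python sketch; the synthesis routine is a stand-in that we make up for the example (any Stokes synthesis code could play its role):

\begin{verbatim}
# Brute-force response function: perturb one parameter at one
# depth point, re-synthesize, and take the finite-difference ratio.
import numpy as np

def brute_force_rf(synthesize, model, quantity, j, eps):
    I0 = synthesize(model)                     # step 1: reference
    pert = {k: v.copy() for k, v in model.items()}
    pert[quantity][j] += eps                   # step 2: perturb
    return (synthesize(pert) - I0) / eps       # step 3: ratio
# Step 4 would loop over depths, wavelengths, and Stokes indices.

def toy_synthesize(model):                     # stand-in "code"
    w = np.exp(-np.linspace(0.0, 4.0, model["T"].size))
    return np.array([np.sum(w * model["T"]) / np.sum(w)])

model = {"T": 5000.0 + 500.0 * np.linspace(1.0, -4.0, 21)}
rf_T10 = brute_force_rf(toy_synthesize, model, "T", j=10, eps=1.0)
\end{verbatim}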
\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{figiresponse}} \caption{RFs of Stokes $I$ to perturbations of $T$ (top panels), $B$ (middle panels), and $v_{\rm LOS}$ (bottom panels). Units are $10^{-6}\,$K$^{-1}$, $10^{-6}\,$G$^{-1}$, and $10^{-4}\,(\!\mbox{$\:$km$\,$s$^{-1}$})^{-1}$. The two columns correspond to two different model atmospheres. That in the left-hand column has the temperature stratification of the HSRA model \citep{gingerich+etal1971}, a constant $B=2000$ G, $\gamma = 30\degree$, and $\varphi = 60\degree$; the plasma is at rest in this model. That in the right-hand column has a 500 K cooler temperature, and a magnetic field 500~G weaker, 20\degree less inclined, and with an azimuth of 10\degree; $v_{\rm LOS} = 1.58 + 0.3 \log \tau_{\rm c}$ ($\!$\mbox{$\:$km$\,$s$^{-1}$}).} \label{fig:figiresponse} \end{figure}}

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{figqresponse}} \caption{Same as Fig.\ \ref{fig:figiresponse} for Stokes $Q$.} \label{fig:figqresponse} \end{figure}}

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{figuresponse}} \caption{Same as Fig.\ \ref{fig:figiresponse} for Stokes $U$.} \label{fig:figuresponse} \end{figure}}

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{figvresponse}} \caption{Same as Fig.\ \ref{fig:figiresponse} for Stokes $V$.} \label{fig:figvresponse} \end{figure}}

Last, but not least, the linear nature of RFs allows us to generalize them to any linear combination of Stokes profile wavelength samples. This property helps us understand what can be extracted from different proxies that are traditional in solar polarimetry but, most importantly, helps the astronomer in taking the influence of the instrument into account. Since most instruments act as linear systems on light, the detected spectrum is a convolution of the actual spectrum with the spectral PSF of the instrument. Convolution is linear and, thus, one can easily conclude \citep{1994A&A...283..129R} that the RFs of the convolved spectrum are nothing but those of the original spectrum, convolved as well with the PSF.\footnote{Convolutions with the spatial PSF of the instrument are also taken into account in modern inversions that take spatial degradation into account. See Section \ref{sec:spatialdegrad}.}

\subsection{Analytic response functions} \label{sec:analyticrfs}

When all the terms on the right-hand side of Eq.\ (\ref{eq:responsefun}) can be calculated analytically (see Sect.\ \ref{sec:milne}), RFs are necessarily analytic, and then we can use them to gain some physical insight into the diagnostic capabilities of the Stokes spectrum regarding the physical quantities that characterize the medium. This is the case of the ME approximation, where all the quantities are constant with depth. There, index $j$ in Eq.\ (\ref{eq:deltaiquad}) drops out and (after inclusion of the coefficient into the RF) we can properly write: \begin{equation} \label{eq:rfme} \vector{R}_i (\lambda) = \frac{\partial \vector{I}(\lambda)}{\partial x_i}. \end{equation} That is, RFs are strict partial derivatives of the Stokes spectrum with respect to the free parameters of the problem \citep{2007A&A...462.1137O}. In Eq.\ (\ref{eq:rfme}) we have removed the $\tau_{\rm c} = 0$ indicator in the emergent Stokes spectrum and, rather, we have made explicit the dependence of the RFs on wavelength.
We have just seen in the former subsection that these RFs are indeed integrals over $\tau_{\rm c}$ and, hence, the dependence on it disappears. This analytic character is particularly important in the controversial discussion about the possibility of disentangling $B$ from $\alpha$ (the filling factor) in the case that our magnetic features are not fully spatially resolved. In Fig.\ \ref{fig:deltoroetal2010} we reproduce Fig. 3 from \citet{2010ApJ...711..312D}. It shows RFs to (constant) perturbations of those two quantities plus $v_{{\rm LOS}}$ in an ME atmosphere. As one can clearly see, the Stokes $V$ RFs to $\alpha$ and to $B$ perturbations are almost proportional to each other and to the Stokes $V$ profile itself. This can easily be traced back to the behavior expected from the weak-field approximation, as expressed in Equation (\ref{eq:vpropmag}). From this proportionality we should conclude that it is indeed very difficult to discern the values of $B$ and $\alpha$ separately. If it were not for Stokes $I$, the widespread belief that only $\alpha B$ or $\alpha B \cos\gamma$ can be retrieved would be true. However, the Stokes $I$ RFs to $\alpha$ and $B$ are neatly different from each other, and this necessarily implies that we have the means of inferring the two quantities independently. Stokes $Q$ and $U$ also help in disentangling the field strength and the filling factor for similar reasons, as soon as they are above the noise level.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{fig_rf_alfa}} \caption{Stokes $I$ (left panel) and $V$ (right panel) RFs to $v_{\rm LOS}$ (solid, black lines), to $B$ (dotted, blue lines), and to $\alpha$ (dashed, red lines) in the weak-field case. Perturbations of 10 \mbox{$\:$m$\,$s$^{-1}$} for $v_{\rm LOS}$, of 10~G for $B$, and of 0.1 for $\alpha$ have been assumed. Adapted from \citet{2010ApJ...711..312D}.} \label{fig:deltoroetal2010} \end{figure}}

Among other features of RFs, the latter authors showed how the thermodynamical parameters of the ME atmosphere can have cross-talk among themselves: their RFs are fairly similar in shape, so that their effects can be misinterpreted by the inversion codes. Notably, the RFs to perturbations of $B$, $\gamma$, $\varphi$, and $v_{\rm LOS}$ are markedly different from one another and with respect to those of $\eta_0$, $\Delta\lambda_{\rm D}$, and $a$. This explains the good performance of ME inversion codes in accurately retrieving the magnetic and dynamic parameters, while the thermodynamic parameters are sometimes wrong. Our conclusion is also consistent with, and explains, the findings by \citet{2004A&A...414.1109L}, who decided to leave the damping parameter fixed after noticing only minor changes in the fitted magnetic and dynamic parameters, while $\Delta\lambda_{\rm D}$ and $\eta_0$ were significantly affected. Linearity can also be useful for spatially coupled inversion techniques (see Sect.\ \ref{sec:coupled} below).

\newpage

\section{Inversion techniques} \label{section:techniques}

Once we have discussed all the ingredients and assumptions, we can face the main problem in astrophysics, namely that of making theory and observations compatible. In other words, we can face the inversion problem by deriving the unknown physical quantities through a comparison between observed and synthetic Stokes profiles.\footnote{The inversion problem could be thought of as the ``observational'' part of the compatibility game.
The ``theoretical'' part involves the choice of hypotheses included in the physical scenario which, according to \citet{2010ApJ...711..312D}, defines the assumed model atmosphere. One has to decide whether the radiative transfer is LTE or NLTE, whether the physical quantities depend on the optical depth, whether macro- or microturbulence is needed, etc. All these assumptions settle the theoretical framework of the problem.} Figure \ref{fig:nonlteinversion} describes how the problem gets complicated as compared with the mere forward problem of Figure \ref{fig:NLTE}.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{fignonlteinversion}} \caption{Block diagram of the inversion problem under NLTE conditions.} \label{fig:nonlteinversion} \end{figure}}

One can clearly see how a new overarching loop is present that indicates the need for changing the model atmosphere if the synthetic Stokes spectra do not properly fit the observed ones. The problem turns out to be formidable and requires new, specific assumptions that make it tractable. In particular, some of the quantities have to be calculated from the strict NLTE conditions (see Section \ref{sec:non-lte}). Even the simpler LTE problem gets complicated, and indeed becomes iterative, regardless of whether the stratification of the physical quantities of the atmosphere is acknowledged (see Figure \ref{fig:lteinversion}). The need for modifying the model atmosphere according to the deviations between observed and synthetic profiles makes a loop necessary after calculating both the synthetic spectra and their derivatives with respect to the free parameters. Fortunately, we know how to calculate these derivatives through RFs at the same time as we synthesize the Stokes spectrum, with little extra computational effort.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{figlteinversion}} \caption{Block diagram of the inversion problem under LTE conditions.} \label{fig:lteinversion} \end{figure}}

Looking for convergence means measuring the distance between observed and synthetic profiles in the space of observables. Any inversion procedure must have a threshold below which the user can consider that convergence has been reached because the fit cannot be further improved within the current assumptions.

\subsection{Topology in the space of observables} \label{sec:chisquare}

The topological problem depicted in the Introduction for the inversion problem (or any astrophysical inference) needs to be substantiated by minimizing a distance: a metric in the space of the observables. Since Stokes $I_{\rm d}$, $Q$, $U$, and $V$ belong to ${\mathchoice {\rm I\mskip-4mu L} {\rm I\mskip-4mu L} {\rm I\mskip-4.5mu L} {\rm I\mskip-5mu L}}^2$, the quadratic norm of the difference turns out to be the natural distance between any two profiles. We want to approximate two sets of profiles, so all four Stokes parameters should be taken into account.
Therefore, when in practice we deal with discrete samples, the sought-for distance can be written as \begin{equation} \label{eq:chi2} \chi^2 (\vector{x}) \equiv \frac{1}{N_{\rm f}} \sum_{s=0}^3 \sum_{i=1}^q \left[ I_s^{\rm obs} (\lambda_i) - I_s^{\rm syn} (\lambda_i; \vector{x}) \right]^2 \, w_{s,i}^2, \end{equation} where index $s$ runs over the four Stokes parameters, we assume $q$ wavelength samples, and $N_{\rm f}$ stands for the number of degrees of freedom, that is, the difference between the number of observables ($4q$) and that of the free parameters (the number of elements in $\vector{x}$, $np+r$; see Section \ref{sec:NLTE} and Equation \ref{eq:modelatmos}). $\chi^2 (\vector{x})$ is a merit function of the atmospheric quantities that measures the distance between the observed and the synthetic profiles and has to be minimized in order to achieve a good fit. Having the merit function normalized to the number of degrees of freedom is useful to warn the user against using an unreasonably large number of free parameters compared with the number of observables; in such a case, $\chi^2$ would always turn out to be too large. The \emph{weights} $w_{s,i}$ can be used to favor some data over others. For instance, one can set them to the inverse of the measurement errors. For many applications they are simply kept at unity.

We can look at $\chi^2 (\vector{x})$ as a scalar field in an ($np+r$)-dimensional space. Since the number of dimensions may be too large, the minimization problem may turn out to be intractable. Before going on to specific techniques that make it affordable, let us consider the paths through which we can look for the minimum of the merit function. That is, we have to find the derivatives of $\chi^2$ with respect to the atmospheric free parameters. \citet[][see \citeauthor{1994A&A...283..129R}, \citeyear{1994A&A...283..129R}, and \citeauthor{2003isp..book.....D}, \citeyear{2003isp..book.....D} as well]{1992ApJ...398..375R} showed that such derivatives are directly given by the RFs: \begin{equation} \label{eq:chiderivative} \frac{\partial \chi^2}{\partial x_m} = -\frac{2}{N_{\rm f}} \sum_{s=0}^3 \sum_{i=1}^q \left[ I_s^{\rm obs} (\lambda_i) - I_s^{\rm syn} (\lambda_i; \vector{x}) \right] \, w_{s,i}^2 \, R_{m,s} (\lambda_i), \end{equation} where, for the sake of a more compact notation, index $m$ runs from 1 to $np+r$ (including constant and variable physical quantities), the quadrature coefficients in Eq.\ (\ref{eq:deltaiquad}) are assumed to be included in the RFs when needed, and no distinction is made between $\vector{R}$'s and $\vector{R}'$'s. The same authors also demonstrated that the second derivatives can be approximated by \begin{equation} \label{eq:seconderivativechi} \frac{\partial^2 \chi^2}{\partial x_m \partial x_k} \simeq \frac{2}{N_{\rm f}} \sum_{s=0}^3 \sum_{i=1}^q w_{s,i}^2 \, \left[ R_{m,s} (\lambda_i) \, R_{k,s} (\lambda_i) \right]. \end{equation} Regardless of the way we approach its minimum in the hyperspace of parameters, the fact that $\chi^2$ is the natural metric is reinforced by other metrics having been tried \citep[e.g.][]{2004A&A...414.1109L} that finally converged to almost the same formulation as in Eq.\ (\ref{eq:chi2}) \citep{2007A&A...462.1147L}.

\subsection{Levenberg--Marquardt based inversions} \label{sec:levenberg}

The process of profile fitting is nothing but the successive (and iterative) approximation of synthetic Stokes profiles until they reach a minimum distance to the observed ones.
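In numerical terms, Eqs.\ (\ref{eq:chi2}), (\ref{eq:chiderivative}), and (\ref{eq:seconderivativechi}) translate into a few array operations. A minimal Python sketch follows, with made-up array shapes (\texttt{obs}, \texttt{syn}, and \texttt{w} of shape $(4, q)$; the RFs of shape $(n_{\rm free}, 4, q)$):

\begin{verbatim}
# Merit function, its gradient, and the approximate Hessian,
# all built from the response functions.
import numpy as np

def chi2_grad_hess(obs, syn, rf, w, n_free):
    nf = obs.size - n_free                 # N_f = 4q - (np + r)
    res = obs - syn
    chi2 = np.sum(res ** 2 * w ** 2) / nf
    grad = (-2.0 / nf) * np.tensordot(rf, res * w ** 2,
                                      axes=([1, 2], [0, 1]))
    hess = (2.0 / nf) * np.einsum('asq,bsq->ab', rf * w ** 2, rf)
    return chi2, grad, hess

q, m = 30, 5                               # toy problem size
rng = np.random.default_rng(1)
obs = rng.standard_normal((4, q)); syn = obs + 0.01
rf = rng.standard_normal((m, 4, q)); w = np.ones((4, q))
c2, g, H = chi2_grad_hess(obs, syn, rf, w, m)
\end{verbatim}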
\subsection{Levenberg--Marquardt based inversions} \label{sec:levenberg} The process of profile fitting is nothing but the successive (and iterative) approximation of synthetic Stokes profiles until they reach a minimum distance to the observed ones. Hence, an initial guess model atmosphere is needed to start the procedure. Step by step, the model will be modified, so that the resulting synthetic Stokes profiles approach the observations more and more closely. When we are close enough to the $\chi^{2}$ minimum, a parabolic approximation of the merit function may be useful: \begin{equation} \label{eq:parapprox} \chi^{2} (\vector{x} + \delta\vector{x}) \simeq \chi^{2} (\vector{x}) + \delta\vector{x}^{\scriptscriptstyle {\rm T}} (\nabla \chi^{2} + \frac{1}{2} \matriz{H}' \delta\vector{x}), \end{equation} where the elements of the gradient are given by Eq.\ (\ref{eq:chiderivative}) and $\matriz{H}'$ is the Hessian matrix, whose elements are given by Eq.\ (\ref{eq:seconderivativechi}), that is, $H'_{m,k} = \partial^2 \chi^2/\partial x_m \partial x_k$. In Eq.\ (\ref{eq:parapprox}) a scalar product is understood between a transposed (row) vector and a regular (column) vector. When we are very near the minimum, it is clear that the second term on the right-hand side of Eq.\ (\ref{eq:parapprox}) should vanish; the Levenberg--Marquardt (LM) algorithm \citep[e.g.,][]{press+etal1986} enforces this by requiring that \begin{equation} \label{eq:gradhessian} \nabla \chi^{2} + \matriz{H} \delta\vector{x} = \vector{0}, \end{equation} where the new matrix $\matriz{H}$ is defined by \begin{equation} \label{eq:hessian} 2 H_{ij} \equiv \left\{ \begin{array}{lll} H'_{ij} (1 + \lambda), & \mbox{if} & i=j, \\ H'_{ij}, & \mbox{if} & i\neq j, \end{array} \right. \end{equation} where $\lambda$ is an \emph{ad-hoc} parameter that helps \emph{tune} the algorithm so that it works as almost first order (when $\lambda$ is large) or as fully second order (when $\lambda$ is small). $\lambda$ is changed in every step of the iteration, depending on how far from or close to the minimum we are, as indicated by the variation of $\chi^2$. At the end of the procedure we will most likely not find the true minimum but, hopefully, will be close enough to neglect the gradient term in Equation (\ref{eq:parapprox}). In such a case we can write \begin{equation} \label{eq:difend} \Delta\chi^2 = \delta\vector{x}^{\scriptscriptstyle {\rm T}} \matriz{H}' \delta\vector{x}. \end{equation} The good news about this relationship is that, since the Hessian matrix is made up of RFs, one can finally obtain an expression for the inversion uncertainties in the physical quantities as functions of the RFs \citep[see][]{2003isp..book.....D}: \begin{equation} \label{eq:uncertainties} \sigma_m^2 \simeq \frac{2}{np+r} \frac{{\displaystyle \sum_{s=0}^3 \sum_{i=1}^q} \left[ I_s^{\rm obs} (\lambda_i) - I_s^{\rm syn} (\lambda_i; \vector{x}) \right]^2 \, w_{s,i}^2}{{\displaystyle \sum_{s=0}^3 \sum_{i=1}^q} R^2_{m,s} (\lambda_i) w_{s,i}^2}. \end{equation} Certainly, the larger the RFs, the smaller the uncertainties.
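A single LM update following Eqs.\ (\ref{eq:gradhessian}) and (\ref{eq:hessian}) can be sketched in a few lines, reusing the hypothetical helper of the previous listing. The $\lambda$ update policy in the trailing comment is the standard one \citep[e.g.,][]{press+etal1986}, not a prescription specific to any of the codes discussed here.
\begin{verbatim}
import numpy as np

def lm_step(x, lam, grad, hess):
    """One Levenberg-Marquardt update of the free parameters.

    hess is H' of Eq. (parapprox); the damped matrix follows
    Eq. (hessian): 2 H = H' + lam * diag(H').
    """
    h = 0.5 * (hess + lam * np.diag(np.diag(hess)))
    dx = np.linalg.solve(h, -grad)        # Eq. (gradhessian)
    return x + dx

# Driver policy: if chi^2 decreases, accept the step and reduce lam
# (more second order); otherwise reject the step and increase lam.
\end{verbatim}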
\subsubsection{Problems in practice} \label{sec:sirstrategy} \paragraph{Nodes and singular value decomposition} With an LM algorithm, inversion of the RTE reduces, in summary, to solving Eq.\ (\ref{eq:gradhessian}), which implies the inversion of the modified Hessian matrix. One can certainly not expect the same practical problems when $\matriz{H}$ is built for an ME inversion or for a more general assumption where physical quantities vary with depth. Already in the ME case, $\matriz{H}$ has dimensions $9\times 9$ or $10 \times 10$ (if the filling factor is assumed to be different from unity). Inverting a $10 \times 10$ matrix is not difficult but, in the more general case, when the atmosphere is parameterized with a depth grid of 20 or 30 points, the Hessian may have several tens or even hundreds of elements in both dimensions. Inverting such matrices is by no means an easy numerical task. A second problem can appear in practice as $\matriz{H}$ may be a quasi-singular (numerically singular) matrix because of the different sensitivities of the Stokes parameters to the various physical quantities, which may differ even by orders of magnitude. One particular Stokes parameter of one specific spectral line may not be sensitive to a given physical quantity at given depths in the atmosphere. We already know, for instance, that, about or below $\log\tau_{\rm c} = 0$, only temperature leaves its fingerprints on the spectrum: the profiles are insensitive to the other quantities (see Section \ref{sec:rfproperties}). Hence, the corresponding matrix elements in $\matriz{H}$ will be close to zero, so that they hamper the Hessian matrix inversion. Here we report on the way SIR deals with these two problems. Other inversion techniques (e.g., LILIA, NICOLE, MILOS) apply similar procedures, although not much explicit information is available. The first problem can be circumvented by using several iteration cycles in each of which the number of free parameters is fixed and increased successively from cycle to cycle. The inversion of quasi-singular matrices is usually carried out through the singular value decomposition technique \citep[SVD; e.g.,][]{press+etal1986}. \paragraph{Nodes and equivalent response functions} Imagine that we only have one physical quantity to deal with in the inversion. Then, the number of free parameters is $n$, the number of depth grid points. Our Hessian is an $n \times n$ matrix. A practical way out of this involved numerical problem is found \citep{1992ApJ...398..375R} by assuming that the perturbations at all depth grid points are not free but bound by some interpolation formula. For example, we can use polynomial splines. This assumption allows one to consider \emph{any} number $n^{\prime}$ of free parameters from $1$ through $n$. If such a number is 1, we assume we are applying a constant perturbation, whatever the original stratification is. The perturbation will be linear if the number is 2, parabolic if it is 3, and so on. As explained in \citet{2003isp..book.....D}, the use of nodes requires the evaluation of \emph{equivalent} RFs at the nodes, $\tilde{\vector{R}}$'s, in order to take information from the whole atmosphere into account. With this technique, the equivalent of Eq.\ (\ref{eq:deltaiquad}) in practice becomes \begin{equation} \label{eq:deltaisyn} \delta \vector{I}^{\rm syn} (\lambda_l) = \sum_{m=1}^{n'p+r} \tilde{\vector{R}}_m (\lambda_l) \, \delta y_m, \end{equation} where $y_m$ is a new notation for the free parameters at the nodes.\footnote{Note that $n^{\prime}$ may be different from quantity to quantity, making the code more versatile.} Constant and depth-varying physical quantities are treated the same in Eq.\ (\ref{eq:deltaisyn}), and the quadrature coefficients are assumed to be included in the RF definitions.
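The way node perturbations propagate to the whole depth grid can be illustrated with a linear interpolation basis (actual codes use polynomial splines, as noted above). If $\delta\vector{a} = \matriz{A}\,\delta\vector{y}$ for an interpolation matrix $\matriz{A}$, the equivalent RFs at the nodes follow by applying $\matriz{A}^{\rm T}$ to the depth-resolved RFs. The sketch below, with hypothetical conventions and a depth grid assumed sorted in increasing $\log\tau$, builds such a matrix.
\begin{verbatim}
import numpy as np

def node_interpolation_matrix(log_tau, n_nodes):
    """Matrix A mapping n_nodes node perturbations to the full grid.

    delta_full = A @ delta_nodes; the equivalent RFs at the nodes are
    R_tilde = A.T @ R_full, one wavelength sample at a time.
    """
    n = log_tau.size
    if n_nodes == 1:               # a single node: constant perturbation
        return np.ones((n, 1))
    nodes = np.linspace(log_tau[0], log_tau[-1], n_nodes)
    a = np.zeros((n, n_nodes))
    for j in range(n_nodes):       # piecewise-linear basis functions
        e = np.zeros(n_nodes)
        e[j] = 1.0
        a[:, j] = np.interp(log_tau, nodes, e)
    return a
\end{verbatim}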
\paragraph{Different sensitivities to the various free parameters} Since, by construction, the Hessian matrix is real and symmetric, its inversion is done through diagonalization. The quasi-singularity of the Hessian matrix shows up as some of the diagonal elements, $\gamma_k$, being too close to zero to be inverted with accuracy. This can happen for two reasons. The first one is that, within the free parameters belonging to the same physical quantity, some depths are necessarily more important than others: as we have seen in Sect.\ \ref{section:response}, RFs tend to zero at some depths. This problem is numerically solved by setting $1/\gamma_k$ to zero whenever $\gamma_k$ is considered too small (under a given threshold). By doing so, we are not (over)correcting a parameter that has no relevance at this time.\footnote{This is exactly the trick employed by SVD for its main application: inverting a quasi-singular matrix. According to \citet{press+etal1986}, cancelling the inverse of the smallest eigenvalues provides the least-squares solution.} The second reason for singularity is that some physical quantities are more important than others. Therefore, the sensitivities of Stokes profiles to perturbations of those quantities can be larger (even by an order of magnitude) than those to less significant quantities. This problem is overcome by using relative instead of absolute RFs, as we explained in Section \ref{sec:rfproperties}. Nevertheless, and in order to make sure that all physical quantities are considered during each inversion cycle, the zeroing of the less significant diagonal elements is applied separately to each physical quantity.
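A minimal sketch of this thresholded inversion follows. Since the Hessian is real and symmetric, an eigendecomposition plays the role of the SVD here; the relative threshold value is an illustrative assumption, and in a code like SIR the zeroing would be applied per physical quantity, as just described.
\begin{verbatim}
import numpy as np

def thresholded_solve(hess, grad, rel_threshold=1e-6):
    """Solve H dx = -grad, zeroing the inverses of small eigenvalues.

    Inverses 1/gamma_k are set to zero below a relative threshold,
    which yields the least-squares solution of the quasi-singular
    system (Press et al. 1986).
    """
    gamma, v = np.linalg.eigh(hess)
    inv = np.where(np.abs(gamma) > rel_threshold * np.abs(gamma).max(),
                   1.0 / gamma, 0.0)
    return v @ (inv * (v.T @ (-grad)))
\end{verbatim}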
\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{convergencia}} \caption{Convergence rate (logarithm of the merit function in Eq.\ \ref{eq:chi2}) comparison between an inversion run using fixed initial guesses for the physical quantities (black line) and the same run using the approximate estimates given in the Appendix (red line). One thousand Stokes profiles of the Fe~{\sc i} line at 617.3 nm have been used in the experiment and average results are plotted. They have been synthesized with the MELANIE code \citep{2001ASPC..236..487S} with uniformly distributed field strengths between 0 and 2000~G, inclinations and azimuths between 0\degree and 180\degree, and LOS velocities between $-2$ and 2 \mbox{$\:$km$\,$s$^{-1}$}. The profiles are sampled at 30 wavelengths regularly spaced every 2 pm. A C-programmed version of MILOS \citep{2007A&A...462.1137O} has been used. No noise has been added to the profiles.} \label{fig:acceleration} \end{figure}} \paragraph{Initialization} The lack of uniqueness we were discussing in the Introduction may be revealed as a dependence on the initialization parameters. The community has long been aware of this fact, and codes such as SIR or {\sc HeLIx} have been explicitly tested and shown to be robust against different initializations \citep{1992ApJ...398..375R, 2004A&A...414.1109L}. Such robustness can nonetheless depend on the specific Stokes profiles and model atmosphere. Therefore, an advisable practice when doubts arise is to use several different initial guesses for estimating the uncertainties in the results for each physical quantity. Several attempts have also been made to find an ideal initialization guess, including genetic algorithm procedures used solely to obtain the initial guess (Skumanich, private communication). In our opinion, having initializations almost as complicated as the inversion itself does not make much sense. While preparing a given application, we discovered an outstanding, very economical way of making optimum initial guesses. Such an initialization is very much in line with the approach we have been advocating throughout the paper, namely, the usefulness of a step-by-step approach. Using the classical center-of-gravity \citep{1979A&A....74....1R} and weak-field approximations of the Appendix, we have obtained a remarkable acceleration in the convergence, as shown in Figure\ \ref{fig:acceleration}.\footnote{The initializations for the inclination and azimuth angles are indeed very similar to those proposed by \citet{auer+etal1977}.} According to \citet{2003ApJ...592.1225U}, the center-of-gravity technique has the remarkable property of being quite insensitive to the spectral resolution of the data. We can add that the results in Fig.\ \ref{fig:acceleration}, which have been obtained from profiles sampled at 30 wavelengths, are indeed fairly similar to those for six wavelength samples like those foreseen for the SO/PHI instrument (SO/PHI is the acronym for the Polarimetric and Helioseismic Imager for the ESA's \emph{Solar Orbiter} mission; see \citeauthor{2015IAUS..305..108S}, \citeyear{2015IAUS..305..108S}).
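For concreteness, a sketch of such a center-of-gravity plus weak-field initialization is given below. The calling conventions (wavelengths in nm, field in G) are our own illustrative assumptions; $4.67\times10^{-13}$ is the usual Zeeman-splitting constant for wavelengths in \AA.
\begin{verbatim}
import numpy as np

def cog_weakfield_guess(wl, stokes_i, stokes_v, wl0, geff, i_cont):
    """Initial v_LOS and longitudinal field from classical approximations.

    wl, stokes_i, stokes_v : wavelength axis (nm) and observed profiles
    wl0, geff, i_cont      : rest wavelength (nm), effective Lande
                             factor, and continuum intensity
    """
    c_kms = 2.998e5
    depression = i_cont - stokes_i                  # line depression
    wl_cog = np.sum(wl * depression) / np.sum(depression)
    v_los = c_kms * (wl_cog - wl0) / wl0            # center of gravity
    # weak field: V = -4.67e-13 lambda0^2 geff B_los dI/dlambda (in A)
    didwl = np.gradient(stokes_i, wl)               # per nm
    k = 4.67e-13 * (10.0 * wl0)**2 / 10.0           # nm-to-A conversions
    b_los = -np.sum(stokes_v * didwl) / (k * geff * np.sum(didwl**2))
    return v_los, b_los                             # km/s and G
\end{verbatim}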
\paragraph{Consistency among different ME implementations} Milne--Eddington inversion techniques have become widely used to infer the vector magnetic field and the LOS velocity of the solar plasma. The physical interpretation of ME results was first investigated by \cite{1997ASPC..118..197W, 1998ApJ...494..453W} while comparing them to those obtained with SIR. A check of the theoretically predicted results against those obtained in practice was carried out in the latter paper for the HAO-ASP code \citep{1985NASCP2374..306S,1988ApJ...330..493L}. Predictions by \citet{1996A&A...314..295S} were confirmed: 1) measurements are essentially the result of averaging the actual parameter stratification with the corresponding generalized response function; 2) the so-called ME thermodynamic parameters had little correlation with the actual (quickly varying with optical depth) thermodynamic parameters. Further practice has usually shown that these ``thermodynamic'' parameters may not be very consistent among different runs for the same spectral line, while $\vector{B}$ and $v_{\rm LOS}$ are fairly accurate. This finding has been physically explained by \citet{2007A&A...462.1137O}, who explored the shapes of ME response functions: RFs to perturbations in $\eta_{0}$, $\Delta \lambda_{D}$, and $a$ are fairly similar among themselves and, hence, cross-talk may appear between any two of these parameters; this is not the case, however, with RFs to perturbations in $\vector{B}$ and $v_{\rm LOS}$, which are markedly different. Although the consistency among different versions of the ME inversion is therefore guaranteed through physical analysis, the various implementations may have different numerical approximations and the technique can even be different (e.g., LM, genetic algorithms, PCA, etc.). Motivated by this fact, \citet{2014A&A...572A..54B} have checked the consistency among the HAO-ASP, {\sc HeLIx}, and VFISV codes. They have found a positive confirmation of the previous results by using MHD simulations as a test bench instead of ME Stokes profiles. \subsubsection{Automatic selection of nodes} \label{sec:automatic} In several specific problems, the optimum choice for parameterizing the atmosphere is actually an art, i.e., a skill that arises from the continuous exercise of intuition. Codes like SIR help the user by enabling several node choices for each parameter. Some different choices can yield similar fits; others can make it impossible to reach a convergent solution. As we have commented on in several places of the present paper, a generally good approach is \emph{lex parsimoniae}: between two solutions reaching a similar fit quality, the one with the fewest nodes should be selected. However, in practice, one cannot repeat the inversion several times in order to choose the optimum number of nodes for each parameter. The current version of SIR includes an algorithm that automatically selects such a number of nodes for every parameter in each iteration. The algorithm is based on the quest for the roots or zeros of the partial derivative of $\chi^2$ with respect to each parameter, as written in Eq.\ (\ref{eq:chiderivative}). Let $a$ be one of the atmospheric quantities varying with optical depth and $a_p$ its value at $\tau_p$; that is, $a_p$ is one of the elements in the model atmosphere of Eq.\ (\ref{eq:modelatmos}). Let us call $d_{a_p} \equiv -(\partial \chi^2/\partial a_p)$. Let us also suppose that we are only dealing with the intensity profile of one spectral line, and that, at a given iterative step, $I^{\rm obs} > I^{\rm syn}$ for all wavelengths. If $R_{p,0} (\lambda_i)$ is positive for all wavelengths and all optical depths, it is then clear that $d_{a_p}$ will also be positive at all depths. To get a better fit, then, we will need to increase $a$ everywhere and just one node might be enough. Following this reasoning, it is easy to conclude that the number of nodes for a given physical quantity should be related to the number of times that the derivative $d_a$ changes its sign over the optical depth range. Obviously, as the derivative depends on the observational data, it is influenced by noise and spurious zeros should therefore be eliminated. The algorithm thus determines the number of nodes after looking for positive relative maxima, and negative relative minima, larger in absolute value than a given threshold (see the sketch below). An example of the behavior of this automatic selection feature in SIR is shown in Section \ref{sec:complexity}. This automatic selection of the number of nodes can be considered a quantitative implementation of the principle of Occam's razor. Others are indeed possible. An alternative was presented by \citet{2006ApJ...646.1445A}. This author uses the minimum description length principle to effectively find the optimum number of expansion coefficients in PCA-based inversion techniques or the optimum number of nodes for the various atmospheric parameters in the SIR code. This problem is also addressed by \citet{2007ApJ...660.1690A}, who estimated the intrinsic dimensionality of spectropolarimetric data based on nearest neighbor considerations and applying the principle of maximum likelihood.
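The node-selection criterion just described can be condensed as follows: significant extrema of the derivative are retained above a noise threshold and the number of sign changes among them sets the number of nodes. The exact bookkeeping inside SIR may differ; this is only an illustration of the criterion.
\begin{verbatim}
import numpy as np

def select_n_nodes(d_a, threshold):
    """Number of nodes for one quantity from d_a = -(dchi2/da_p).

    Only positive relative maxima and negative relative minima larger
    in absolute value than the threshold are retained, so that
    noise-driven zeros of the derivative are ignored.
    """
    signs = []
    for p in range(1, d_a.size - 1):
        if d_a[p] > max(d_a[p - 1], d_a[p + 1]) and d_a[p] > threshold:
            signs.append(+1)
        elif d_a[p] < min(d_a[p - 1], d_a[p + 1]) and d_a[p] < -threshold:
            signs.append(-1)
    changes = sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))
    return changes + 1     # one node when the sign never changes
\end{verbatim}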
\subsubsection{A non-LTE inversion technique} \label{sec:non-lte} The only non-LTE inversion technique available so far is NICOLE \citep{2015A&A...577A...7S}, an evolution of the IAC-NLTE code by \citet{2000ApJ...530..977S}. In turn, the latter was adapted from a previous one for non-polarized problems by \citet{1998ApJ...507..470S}. Since even the minute details of the code are extensively described in those papers, let us simply stress here the main assumptions underlying the code, so that potential users know the validity framework of the results. Regarding the minimization technique, NICOLE employs an LM algorithm that proceeds very similarly to SIR by using response functions. Since RFs cannot strictly be calculated in this specific non-linear, non-local problem, the \emph{fixed departure coefficients} (FDC) approximation is used to deal with the derivatives of the LTE atomic level populations once the $\beta$'s in Eq.\ (\ref{eq:depcoeff}) are fixed from a previous calculation \citep{1998ApJ...507..470S}. Although the approximation is not exact and, indeed, the authors show deviations from the correct values, FDC is good enough for the purposes of getting RFs that pave the way for the code to find the minimum distance between the observed and the synthetic profiles. The second important approximation in NICOLE is the field-free approximation \citep{1969SoPh...10..268R}, as we already mentioned in Section \ref{sec:milne}. It consists in obtaining the departure coefficients from an unpolarized, non-LTE code and using them in a formal solution of the RTE. This way, the $\beta$'s are decoupled from the magnetic field. According to the authors, this approximation is valid because the actual level populations are governed by strong UV (weakly split) lines and those lines with large Zeeman splittings are weak enough not to have a significant influence on the statistical equilibrium equations. NICOLE only deals with polarization induced by the Zeeman effect. Hence, any polarization produced by scattering or depolarization through the Hanle effect is not taken into account. Other assumptions, such as the validity of \emph{complete frequency redistribution}, may have implications for applications in specific spectral lines. The code has recently been used in the analysis of the Ca~{\sc ii} line at 854.2 nm \citep{2015ApJ...810..145D}. \subsection{Database-search inversions} \label{sec:pca} The foundations of Principal Component Analysis inversion codes have already been explained in Sect.\ \ref{sec:approxprof}. The expansion of Stokes profiles as linear combinations of eigenprofiles is at the root of the technique. A set of synthetic Stokes profiles of a given spectral line, obtained with a large number of model atmospheres, is used as a training set to decompose each synthetic and observed profile into a sum of a small number of such eigenprofiles. The inversion topological problem, then, is reduced to a search in the low-dimensional space generated by the eigenprofiles or, more specifically, in the space of the expansion coefficients. The technique has proved to be efficient for quick inversions of the observations and looks very promising as a classification tool for profiles that can later be examined with more detailed techniques. We say so because no PCA code so far has been envisaged to go further than an ME or a slab atmosphere. This is natural in a way. One can build a database where a few constant parameters take values within given ranges, but the mere construction of such a database would be a formidable problem if variations with optical depth of the atmospheric physical quantities were considered. The technique, therefore, is unable to deal with gradients and the like, which ---we know--- populate a large fraction of the magnetic Sun. Nevertheless, as a first-order approach it is extremely useful everywhere and is at times the only available tool to explore the behavior of some solar features \citep[e.g.,][]{2003ApJ...582L..51L,2005ApJ...622.1265C}.
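The database-search idea itself fits in a few lines. The sketch below, with hypothetical array conventions and a single-profile treatment for brevity, builds the eigenprofiles from a synthetic database and then finds, for an observed profile, the nearest database model in the space of expansion coefficients.
\begin{verbatim}
import numpy as np

def pca_train(profiles, n_eig):
    """Leading eigenprofiles of a database of synthetic profiles (rows)."""
    mean = profiles.mean(axis=0)
    _, _, vt = np.linalg.svd(profiles - mean, full_matrices=False)
    eig = vt[:n_eig]                      # eigenprofile basis
    coeff = (profiles - mean) @ eig.T     # database coordinates
    return mean, eig, coeff

def pca_search(observed, mean, eig, coeff, models):
    """Nearest neighbor in coefficient space; models stores the
    atmospheric parameters used to synthesize each database profile."""
    c_obs = (observed - mean) @ eig.T
    best = np.argmin(np.sum((coeff - c_obs)**2, axis=1))
    return models[best]
\end{verbatim}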
According to \citet{2002ApJ...570..379S}, the leading orders of the PCA expansion may have a direct (approximate) interpretation in terms of values for the physical quantities, specifically for the LOS velocity and the vector magnetic field. This result is in line with our discussion in Sects.\ \ref{sec:approxmod} and \ref{sec:approxprof} about successive approximations in the complexity of both the model atmospheres and the profiles. Since the profile database has to be created for each spectral line or group of lines, PCA is very well suited to analyze those spectral lines whose radiative transfer is particularly complicated, either because physical mechanisms other than the Zeeman effect are involved in their formation, or because the scenario is morphologically difficult, as in prominences, spicules, and other chromospheric structures \citep{2002ApJ...575..529L, 2005A&A...436..325L, 2005ApJ...622.1265C, 2009ApJ...703..114C}. Extensions of the technique to stellar problems have also been proposed already \citep{2006ASPC..358..355S, 2006ASPC..358..405R, marian+etal2008b, 2012A&A...544A...4P, 2015A&A...573A..67P}. A particularly interesting feature of the PCA technique is that once the observed Stokes profiles are expanded in terms of the eigenprofiles they become less noisy. This can be helpful for several applications \citep[e.g.,][]{marian+etal2008b, 2013A&A...549L...4R}. \subsection{Other algorithm inversions} \label{sec:otheralg} \subsubsection{Artificial neural network inversions} \label{sec:anns} Artificial neural networks (ANNs) are systems through which a multidimensional input is translated into a multidimensional output by means of a non-linear mapping. The mapping (or, better, its parameters) is obtained through a previous process called training, in which the system is presented with inputs whose target outputs are already known. The process of training can be long and tedious but, once it is finished, the ANN can deal with new inputs with an extremely quick performance. Within the realm of solar physics, only multi-layer perceptrons have been proposed \citep[][\citeauthor{2003NN.....16..355S}, \citeyear{2003NN.....16..355S}]{2001A&A...378..316C}. In these specific ANNs, the input, composed of $N$ \emph{neurons}, is sequentially ---layer by layer--- transformed into an output of $N$ neurons as well, of which a subset are the $M$ elements of the target. Following the notation by \citet{2005ApJ...621..545S}, the propagation rule between the input (layer 0) and the output (layer $L$) is given by \begin{equation} \label{eq:annpropagation} Y_n^l = f_{l} \left( \sum_{j=1}^{N} W_{n,j}^{l} Y_j^{l-1} + \beta_{n}^{l} \right), \end{equation} where $Y_n^l$ represents the contents of neuron $n$ in layer $l$, $W_{n,j}^{l}$ stands for the \emph{synaptic} weight connecting that neuron with neuron $j$ in layer $l-1$, and $\beta_{n}^{l}$ is a bias level. One or more layers may have a non-linear \emph{activation} function $f_{l}$, which depends on the specific implementation. In fact, the two above-mentioned papers use different $f$'s.
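A minimal forward propagation implementing Eq.\ (\ref{eq:annpropagation}) is shown below. The hyperbolic tangent activation and the linear output layer are illustrative assumptions; as noted, the published implementations use different functions $f_l$.
\begin{verbatim}
import numpy as np

def mlp_forward(x, weights, biases, activation=np.tanh):
    """Propagate an input through a multi-layer perceptron.

    weights[l], biases[l] : matrix W^l and bias vector beta^l of layer l
    """
    y = np.asarray(x, dtype=float)
    for l, (w, b) in enumerate(zip(weights, biases)):
        z = w @ y + b                       # Eq. (annpropagation)
        last = (l == len(weights) - 1)
        y = z if last else activation(z)    # assumed linear output layer
    return y
\end{verbatim}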
For ANNs, the topological problem is dealt with during the training process, when the $W$'s, the $\beta$'s, and the parameters defining $f$ are determined. It reduces \citep{2001A&A...378..316C} to minimizing a quadratic distance similar to: \begin{equation} \label{eq:annmetric} \xi^{2} = \sum_{i=1}^{P} \sum_{k=1}^{M} \left( Y_{k}^{L} - T_{k}^{i} \right)^{2}, \end{equation} where $P$ is the total number of training input-target vector pairs and $T_{k}^{i}$ is the $k$th target value for the $i$th training pair. Artificial neural networks have been used very seldom with actual data. In fact, as far as we know, only the two applications by \citet{2005ApJ...621..545S} and by \citet{carroll+kopf2008} have been published so far. \subsubsection{Genetic algorithm inversions} \label{sec:genetic} A general description of a genetic algorithm (GA), along with an implementation of the so-called PIKAIA code, was given by \citet{1995ApJS..101..309C}. Like LM, these genetic algorithms are an alternative to simple steepest ascent or descent techniques: the latter can easily get stuck in local minima, while GA and LM algorithms look for and (eventually) find global minima of the multi-variable merit (or fitness) function. Some authors claim \citep[e.g.][]{2004A&A...414.1109L} that GA techniques are more robust in finding global minima than LM, but no direct comparison is known to the authors of this review. Devised for the specific problem of exploiting the chromospheric diagnostic capabilities of the He~{\sc i} multiplet at 1083 nm, \cite{2004A&A...414.1109L} presented the so-called {\sc HeLIx} code. ({\sc HeLIx} is an acronym for Helium Line Information eXtraction.) The code is a direct adaptation of the PIKAIA routine to the He~{\sc i} multiplet formation. The line formation problem includes both the Zeeman and Hanle effects, and the presence of two blending photospheric lines of Si~{\sc i} and Ca~{\sc i}. Therefore, much care has to be taken with the analyzed wavelengths (some have to be weighted to zero) and, above all, with the complex, forward radiative transfer problem. The latter is dealt with for the photospheric Si~{\sc i} line by using the synthesis part of the SPINOR code (although no results from it are reported). The non-LTE effects in the He~{\sc i} triplet are neglected because the line is mostly optically thin. A simple ME atmosphere is assumed instead. The Hanle effect treatment is based on the oscillator model by \citet{2002Natur.415..403T}. The code later evolved (now it is called {\sc HeLIx$^+$}) to include the incomplete Paschen--Back effect \citep{2006A&A...456..367S} and to finally incorporate all the properties of the so-called constant property slab model by \citet{2005ApJ...619L.191T,2002Natur.415..403T} through the addition of the forward synthesis code by \citet{1982SoPh...79..291L} with extensions by \citet{2008PhD...UL...M}. Now, the code shares the synthesis calculation module with HAZEL \citep{2008ApJ...683..542A}, and carrying out a direct comparison between the two codes, with both numerical and actual observations, would be a very interesting exercise, useful for the whole community. It would provide a gauge of the pros and cons of GA versus LM algorithms. As a general rule of thumb, we can say that genetic algorithm inversions become feasible whenever the evaluation of the merit function is extremely fast, because this kind of algorithm requires evaluating the merit function a thousand or a million times.
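As an illustration of the approach (not of PIKAIA itself), the toy genetic minimizer below evolves a population of parameter vectors against a user-supplied merit function, with tournament selection, uniform crossover, Gaussian mutation, and elitism; all settings are illustrative assumptions.
\begin{verbatim}
import numpy as np

def genetic_minimize(chi2, bounds, n_pop=100, n_gen=200, seed=0):
    """Toy genetic-algorithm minimizer of chi2(x) in a bounded box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_pop, lo.size))
    for _ in range(n_gen):
        fit = np.array([chi2(x) for x in pop])
        new = [pop[np.argmin(fit)].copy()]        # elitism
        while len(new) < n_pop:
            # tournament selection of two parents
            i, j = rng.integers(n_pop, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(n_pop, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]
            mask = rng.random(lo.size) < 0.5      # uniform crossover
            child = np.where(mask, a, b)
            child = child + rng.normal(0.0, 0.02 * (hi - lo))  # mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([chi2(x) for x in pop])
    return pop[np.argmin(fit)]
\end{verbatim}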
\subsubsection{Bayesian inversions} \label{sec:bayesian} An alternative technique for the inversion problem that adds some extra statistical information to the results, namely confidence levels on the free parameters, has been proposed by \citet{2007A&A...476..959A}, based on Bayes' theorem. According to that theorem, once the \emph{posterior} distribution $p(\vector{x}|\vector{I}^{\rm obs})$ is known, the position of its maximum indicates the most probable (a.k.a.\ optimum) combination of parameters that best fits the observations $\vector{I}^{\rm obs}$. The posterior (probability) distribution represents how much we know of the parameters once the observational data set is taken into account. As the reader may have already imagined, the Bayesian inversion is nothing but a maximization of $p(\vector{x}|\vector{I}^{\rm obs})$ instead of a minimization of the $\chi^2$ merit function. We are going to see that, indeed, the two optimization problems collapse to exactly the same result in given cases where the \emph{a priori} knowledge of the problem inserted in the calculations is the simplest (and ---probably--- the safest). Let us explain a little what we mean by this a priori knowledge. According to the theorem, the posterior distribution is proportional to the product of a \emph{prior} distribution $p(\vector{x})$ and the \emph{likelihood} distribution, $p(\vector{I}^{\rm obs}|\vector{x})$: \begin{equation} \label{eq:bayes} p(\vector{x}|\vector{I}^{\rm obs}) \propto p(\vector{x}) \, p(\vector{I}^{\rm obs}|\vector{x}). \end{equation} The likelihood distribution measures the probability that a given model atmosphere $\vector{x}$ can produce synthetic Stokes profiles $\vector{I}^{\rm syn}$ that fit the observed ones, $\vector{I}^{\rm obs}$. If the noise distribution is normal and independent of wavelength, as is usually assumed, then the likelihood is defined as \citep{2003itil.book.....M} \begin{equation} \label{eq:likelihood} p(\vector{I}^{\rm obs}|\vector{x}) = {\rm e}^{-\frac{1}{2} \chi^2 (\vector{x})}, \end{equation} where $\chi^2 (\vector{x})$ is given in Equation (\ref{eq:chi2}). Imagine now, for a moment, that the prior is constant, $p(\vector{x}) = $ constant (within a reasonable range), which corresponds to a case where no a priori assumptions are made about the model parameters. (All possibilities are equally probable.) In such a case, Eq.\ (\ref{eq:bayes}) becomes \begin{equation} \label{eq:bayessimple} p(\vector{x}|\vector{I}^{\rm obs}) \propto {\rm e}^{-\frac{1}{2} \chi^2 (\vector{x})} \end{equation} and indicates that maximizing the posterior distribution is \emph{exactly} the same as minimizing $\chi^2$. Whatever the optimization algorithm used for the inversion, introducing Eq.\ (\ref{eq:chi2}) into Eq.\ (\ref{eq:bayessimple}) provides a way of estimating confidence levels as given by the (multidimensional) posterior probability distribution. Two-dimensional cuts of $p(\vector{x}|\vector{I}^{\rm obs})$ allow one to explore the possible degeneracies or cross-talk between each pair of model physical quantities. As a matter of fact, and according to our discussions in Sect.\ \ref{section:response}, response functions and uncertainties derived from Eq.\ (\ref{eq:uncertainties}) provide qualitatively similar confidence levels, although Bayes' theorem supplies a more graphical approach \citep[see figures in][]{2007A&A...476..959A}. The only difficulty is then properly sampling the hyperspace of parameters (see below).
The prior distribution contains the information we may have on the model parameters without taking the observations into account. If all the model physical quantities are statistically independent, then \begin{equation} \label{eq:prior} p(\vector{x}) = \prod_{i=1}^{np+r} p(x_i). \end{equation} Unless other physical information is available, the typical assumptions one can make on the free parameters are the ranges of reliable values for each of them. Thus, a useful model for the prior distribution can be given by \begin{equation} \label{eq:modelprior} p(x_i) = H(x_i, x_i^{\rm min}, x_i^{\rm max}), \end{equation} where $H(x,a,b)$ is the typical top-hat function \begin{equation} \label{eq:tophat} H(x,a,b) = \left\{ \begin{array}{ll} {\displaystyle \frac{1}{b-a}} & {\rm if} \,\, a \leq x \leq b, \\ 0 & {\rm otherwise.} \end{array} \right. \end{equation} Establishing a prior, therefore, is analogous to the assumptions made by SIR on the (spline) smooth variations of the physical quantities along the atmosphere. A useful feature of $p(\vector{x})$ is that one can even consider correlations between the quantities and model them accordingly. This is, however, in our opinion a risky exercise, because overly fanciful correlations can be conceived that result in an even more involved interpretation of the results. One has to make sure that the specific conditions of the problem enable this or that a priori assumption on parameter cross-talk. In summary, if $p(\vector{x})$ is either constant or given by Eqs.\ (\ref{eq:prior}), (\ref{eq:modelprior}), and (\ref{eq:tophat}), the optimization problem is the same as that described in Sect.\ \ref{sec:chisquare} and, in principle, the LM algorithm could be used as well. The missing ingredient is the sampling of the free parameter hyperspace. An alternative method is then in order. Sampling the parameter space means repeating the synthesis of Stokes profiles many, many times. Typically, one needs of the order of $10^{np+r}$ samples of the posterior distribution. When the number of free parameters is high, such a brute-force method becomes impracticable. \citet{2009ASPC..405..315A} propose using a ``not-so-brute-force'' Markov chain Monte Carlo method, where marginalized distributions of parameters can be obtained. This educated successive sampling grows linearly with the number of free parameters instead of exponentially. The decrease in computational cost has allowed the authors to deal both with ME atmospheres and with general LTE atmospheres where the physical quantities vary with depth \citep{2012ApJ...748...83A}.
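A bare-bones sampler of the posterior in Eq.\ (\ref{eq:bayessimple}) can be written as follows. The random-walk Metropolis rule is an assumed, minimal choice, far simpler than the scheme actually used by the cited authors.
\begin{verbatim}
import numpy as np

def metropolis(chi2, x0, step, n_samples=20000, seed=0):
    """Sample p(x|I_obs) ~ exp(-chi2(x)/2) under a flat prior.

    step is the vector of proposal widths; the returned chain can be
    histogrammed column by column to get marginalized distributions.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    c = chi2(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        xp = x + rng.normal(0.0, step)             # random-walk proposal
        cp = chi2(xp)
        if np.log(rng.random()) < 0.5 * (c - cp):  # Metropolis rule
            x, c = xp, cp
        chain[i] = x
    return chain
\end{verbatim}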
\subsection{Inversions accounting for spatial degradation} \label{sec:spatialdegrad} A significant step forward has been taken by three different techniques after acknowledging the spatial effects of non-ideal instruments \citep{2012A&A...548A...5V,2013A&A...549L...4R,2015A&A...577A.140A}. While the spectral PSF of the instruments was soon incorporated into the inversion codes,\footnote{One typically convolves the synthetic Stokes profiles with the spectral PSF.} the spatial blurring could not be satisfactorily dealt with until these works. Note that, in spectropolarimetry, an extended spatial PSF not only degrades the quality and contrast of images but also introduces a spurious polarization signal that can be misinterpreted as magnetic fields and LOS velocities. Several methods were proposed for mitigating (or circumventing) this spatial contamination, such as using a non-polarized, global quiet-Sun average \citep{skumanich+lites1987} or a non-polarized, local (1$^{\prime\prime}$ neighborhood) average \citep{2007ApJ...662L..31O}. In these examples, the magnetic structure is assumed to contribute with a filling factor $\alpha$. None of these can be considered fully consistent, but they do provide an improvement in the robustness and reliability of the results. So-called spatially-coupled inversions \citep{2012A&A...548A...5V}, regularized deconvolution inversions \citep{2013A&A...549L...4R}, and sparse inversions \citep{2015A&A...577A.140A} attack the problem directly, although through different means. The first technique uses the SPINOR code and the second employs SIR; a combination of the fast iterative shrinkage-thresholding algorithm \citep{2009ITIP...18.2419B} and the restarting scheme by \citet{2016FouMath...15.715O} is chosen for the third algorithm. \subsubsection{Spatially-coupled inversions} \label{sec:coupled} From a formal point of view, the gradient of the merit function and the Hessian matrix were very helpful in explaining the second-order approximation that is behind the Levenberg--Marquardt algorithm. Instead of using $\nabla\chi^2$, let us think of the Jacobian matrix $\matriz{J}$ of the system, which is made up of the derivatives of all the data points (all wavelength samples of the four Stokes profiles) with respect to the free parameters. That is, the elements of $\matriz{J}$ are just the individual terms in the summation of Equation (\ref{eq:chiderivative}). It is then clear that the approximation in Eq.\ (\ref{eq:seconderivativechi}) is equivalent to saying that the Hessian matrix $\matriz{H}' \simeq \matriz{J}^{\scriptscriptstyle {\rm T}} \matriz{J}$ and that, following Eq.\ (\ref{eq:hessian}), $2\matriz{H} = \matriz{H}' + \lambda \, {\mbox{\bf diag}} \left[\matriz{H}' \right]$. Consider now the inversion of the RTE for a whole spectropolarimetric image of $n\times m$ pixels individually (the uncoupled case). We can build a big (block-diagonal) new Jacobian $\matriz{J}$ with the ensemble of the $\matriz{J}_{k,l}$ of all the individual pixels: \begin{equation} \label{eq:ensemble} \matriz{J} = \left( \begin{array}{ccccc} \matriz{J}_{1,1} & \matriz{0} & \cdots & \cdots & \matriz{0} \\ \matriz{0} & \matriz{J}_{1,2} & \matriz{0} & \cdots & \matriz{0} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \matriz{0} & \cdots & \cdots & \matriz{J}_{n,m-1} & \matriz{0} \\ \matriz{0} & \cdots & \cdots & \cdots & \matriz{J}_{n,m} \end{array} \right). \end{equation} \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{vannoortfig}} \caption{Stokes images in the wing ($+5.7$ pm) of the Fe~{\sc i} line at 630.25 nm before (left half) and after (right half) spatially coupled inversion, for the Stokes parameters $I$ (top left), $Q$ (top right), $U$ (bottom left), and $V$ (bottom right).
Adapted from \citet{2012A&A...548A...5V}.} \label{fig:vannoortfig} \end{figure}} The new (big) $\matriz{H}'$ readily becomes block diagonal, with the blocks being the $\matriz{H}'_{i,j}$ of all the individual pixels: \begin{equation} \label{eq:hensemble} \matriz{H}' = \left( \begin{array}{ccccc} \matriz{H}'_{1,1} & \matriz{0} & \cdots & \cdots & \matriz{0} \\ \matriz{0} & \matriz{H}'_{1,2} & \matriz{0} & \cdots & \matriz{0} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \matriz{0} & \cdots & \cdots & \matriz{H}'_{n,m-1} & \matriz{0} \\ \matriz{0} & \cdots & \cdots & \cdots & \matriz{H}'_{n,m} \end{array} \right). \end{equation} The inverse of this matrix (or that of matrix $\matriz{H}$) is easy to obtain by individually inverting each of its block components. Let us now address the spatially coupled inversion problem, where the effects of an assumed uniform PSF $\varphi(x,y)$ across the image are taken into account. The Jacobian can now be written as \begin{equation} \label{eq:bigjacobian} \matriz{J} = \left( \begin{array}{rrcrr} \varphi_{0,0} \matriz{J}_{1,1} & \varphi_{0,-1} \matriz{J}_{1,2} & \cdots & \varphi_{1-n,2-m} \matriz{J}_{n,m-1} & \varphi_{1-n,1-m} \matriz{J}_{n,m} \\ \varphi_{0,1} \matriz{J}_{1,1} & \varphi_{0,0} \matriz{J}_{1,2} & \cdots & \varphi_{1-n,3-m} \matriz{J}_{n,m-1} & \varphi_{1-n,2-m} \matriz{J}_{n,m}\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \varphi_{n-1,m-2} \matriz{J}_{1,1} & \varphi_{n-1,m-3} \matriz{J}_{1,2} & \cdots & \varphi_{0,0} \matriz{J}_{n,m-1} & \varphi_{0,-1} \matriz{J}_{n,m} \\ \varphi_{n-1,m-1} \matriz{J}_{1,1} & \varphi_{n-1,m-2} \matriz{J}_{1,2} & \cdots & \varphi_{0,1} \matriz{J}_{n,m-1} & \varphi_{0,0} \matriz{J}_{n,m} \end{array} \right), \end{equation} which is no longer block diagonal. Nevertheless, since the influence of the PSF should be relatively short-ranged and not involve all the image points, the resulting $\matriz{J}$ has a significant number of zero elements (it is sparse). If we call $Y \equiv \varphi \ast \varphi$ the autocorrelation function of the PSF, it can be shown that \citep{2012A&A...548A...5V} \begin{equation} \label{eq:bighessian} \matriz{H}' = \left( \begin{array}{rrr} Y_{0,0} \matriz{J}_{1,1}^{\scriptscriptstyle {\rm T}} \matriz{J}_{1,1} & \cdots & Y_{1-n,1-m} \matriz{J}_{1,1}^{\scriptscriptstyle {\rm T}} \matriz{J}_{n,m} \\ Y_{0,1} \matriz{J}_{1,2}^{\scriptscriptstyle {\rm T}} \matriz{J}_{1,1} & \cdots & Y_{1-n,2-m} \matriz{J}_{1,2}^{\scriptscriptstyle {\rm T}} \matriz{J}_{n,m}\\ \vdots & \ddots & \vdots \\ Y_{n-1,m-2} \matriz{J}_{n,m-1}^{\scriptscriptstyle {\rm T}} \matriz{J}_{1,1} & \cdots & Y_{0,-1} \matriz{J}_{n,m-1}^{\scriptscriptstyle {\rm T}} \matriz{J}_{n,m} \\ Y_{n-1,m-1} \matriz{J}_{n,m}^{\scriptscriptstyle {\rm T}} \matriz{J}_{1,1} & \cdots & Y_{0,0} \matriz{J}_{n,m}^{\scriptscriptstyle {\rm T}} \matriz{J}_{n,m} \end{array} \right). \end{equation} Inverting matrix $\matriz{H}'$ is beyond our reach. However, the inversion of $\matriz{H}$ is affordable ---at least approximately--- because the linear system is sparse, although the number-crunching problem is formidable. The authors propose strategies for approximating $\matriz{H}'$ and recognize that human intervention (the \emph{artistic} part) is in the end more needed in these coupled inversions than in regular uncoupled ones, as expected.
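The structure of Eq.\ (\ref{eq:bighessian}) can be made explicit with a small sketch that assembles the coupled Gauss--Newton Hessian from per-pixel Jacobians and the PSF autocorrelation. The quadruple loop is written for clarity only; shapes, names, the lag sign convention, and the dense storage are illustrative assumptions (a real implementation would exploit the sparsity).
\begin{verbatim}
import numpy as np
from scipy.signal import correlate2d

def coupled_hessian(jac, psf):
    """Assemble H' of Eq. (bighessian) for an n x m map of pixels.

    jac : (n, m, 4q, p) per-pixel Jacobians
    psf : (a, b) point spread function, assumed uniform over the map
    """
    n, m, _, p = jac.shape
    auto = correlate2d(psf, psf, mode='full')    # Y, PSF autocorrelation
    c0, c1 = psf.shape[0] - 1, psf.shape[1] - 1  # zero-lag index of Y
    h = np.zeros((n * m * p, n * m * p))
    for k in range(n):
        for l in range(m):
            for kk in range(n):
                for ll in range(m):
                    dy, dx = kk - k, ll - l
                    if abs(dy) > c0 or abs(dx) > c1:
                        continue                 # outside PSF support
                    y = auto[c0 + dy, c1 + dx]
                    block = y * jac[k, l].T @ jac[kk, ll]
                    r, c = (k * m + l) * p, (kk * m + ll) * p
                    h[r:r + p, c:c + p] = block
    return h
\end{verbatim}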
Although the improvement with respect to former techniques is clear and the procedure is opening a new avenue for physical inferences in the solar photosphere (see Figure \ref{fig:vannoortfig}), a few unsatisfactory oscillations appear here and there in the application to actual observations. Such oscillations reveal a caveat of the technique: the possible amplification of high frequencies and, hence, of noise. It can be argued that the spatially coupled inversions carry out convolutions instead of deconvolutions, but it is also true that any spurious, high-frequency signal may be compatible with the inverted models, provided it is washed out by the convolution with the PSF. What is also unclear to the authors of the present review are the quite unexpected quantitative results of some applications. For example, \citet{2013A&A...557A..24V} claim to have found magnetic field strengths of 7.5 kG at $\log\tau = 0$ in some parts of the penumbra. These values can easily break the observational paradigm, where fields stronger than 4 kG have very seldom been observed. Since they may represent a new paradigm, they should be accompanied by an estimate of uncertainties that is not present in the paper. First of all, the fits obtained for the $V$ profile in their fig.\ 3 deviate from the observations at several wavelengths by at least an order of magnitude more than the noise. Such a fit cannot be considered satisfactory. But even more important is the fact that the quoted field strength corresponds to very deep layers in the atmosphere. As explained by \citet{1994A&A...283..129R}, the second term in Eq.\ (\ref{eq:responsefun}) rapidly tends to zero at low layers because the difference between the Stokes profiles and the source function vector quickly vanishes at those layers. In these circumstances, it is easy to see that values for the magnetic quantities at $\log\tau = 0$ are extremely uncertain because the RFs go to zero (Equation \ref{eq:uncertainties}). Unless otherwise justified, those strong values at low layers cannot be interpreted as anything but (not-very-accurate) extrapolations of the global stratification of the magnetic field strength, if SPINOR uses the equivalent response functions at the nodes of Section \ref{sec:sirstrategy}.\footnote{No explicit note on its usage is known to the authors.} If, instead, the RFs are the regular ones, then we are afraid that the (uncoupled) inversion strategy should be modified by changing the nodes to other places in the atmosphere with greater sensitivity to perturbations in the magnetic and dynamic physical quantities. \subsubsection{Regularized deconvolution inversions} \label{sec:regularized} A much simpler and computationally cheaper approach has been proposed by \citet{2013A&A...549L...4R}. It is based on the idea of Stokes profile expansion in terms of the principal components provided by a regular PCA technique: instead of deconvolving the Stokes profile images wavelength by wavelength, which is a very expensive and risky process,\footnote{Stokes $Q$, $U$, and $V$ images are almost pure noise at most wavelengths. Therefore, the risk of noise enhancement is high during any deconvolution.} deconvolution is applied to the PCA coefficient images. The resulting Stokes profiles after deconvolution are then inverted with SIR.
The procedure is neat and simple, the uncertainties in the determination of physical quantities can be obtained through Eq.\ (\ref{eq:uncertainties}), and the possible overcorrections due to an excess in deconvolution can easily be controlled. The idea is to assume that PCA expansions up to a degree $D$ are valid to describe the profiles fully before they reach the telescope. Under such an assumption, the observed Stokes profiles can be written as \begin{equation} \label{eq:stokesregular} \vector{I} (\lambda) = \sum_{i=1}^{D} (\vector{\omega}_i \ast \vector{P}) \, \phi_i (\lambda) + \vector{N}, \end{equation} where $\vector{\omega}_i$ are the weights of the PCA eigenprofiles $\phi_i (\lambda)$, $\vector{P}$ stands for the spatial PSF of the instrument, and $\vector{N}$ is the noise (assumed independent of wavelength). One of the interesting features of this regularized deconvolution is that the noise contamination is largely minimized, since the real signal is usually contained in the first few coefficients. Then, the $4\times D$ coefficient images $(\vector{\omega}_i \ast \vector{P}) + \vector{N}$ are deconvolved with a Richardson--Lucy algorithm \citep{1972JOSA...62...55R,1974AJ.....79..745L}. The resulting deconvolved Stokes profiles are then inverted with SIR. The main objection one may find to this technique is the same as that for PCA, namely that the most, say, \emph{peculiar} Stokes profiles, which indeed reveal very interesting physics, are not fully fit, since one would need more PCA terms. The logical solution for such a problem would be to increase $D$, but this may not be advisable as the final computation time would be too long. A practical circumvention can be provided by classification techniques (e.g., \emph{k-means clustering}) that allow the grouping of the different Stokes profiles according to purely morphological criteria \citep{1967ProcFifth...1...281M,1957BullAcadPol...4...801S,1982IEEE...28...129L}.\footnote{As far as we know, clustering techniques were introduced in solar spectropolarimetry by \citet[][see also \citeauthor{2011A&A...530A..14V}, \citeyear{2011A&A...530A..14V}]{2007ApJ...663.1386P}.} After such a classification, those peculiar profiles can be identified. One can then carry out the PCA expansion tailored to each group of profiles, hence optimizing the global performance. An increase in expansion terms may only be needed for small fractions of pixels in an image, thus keeping the whole inversion efficient.
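For reference, a minimal Richardson--Lucy iteration is sketched below. Applying it to (possibly negative) PCA coefficient images requires a positivity workaround; the constant-offset trick used here is our own illustrative assumption, not necessarily the treatment adopted by the cited authors. The PSF is assumed normalized and nonnegative.
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Richardson-Lucy deconvolution of a single 2D image."""
    offset = image.min() - 1e-6
    img = image - offset                 # RL requires positive data
    est = np.full_like(img, img.mean())
    psf_m = psf[::-1, ::-1]              # mirrored PSF
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode='same')
        est *= fftconvolve(img / conv, psf_m, mode='same')
    return est + offset
\end{verbatim}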
\subsubsection{Sparse inversions} \label{sec:sparse} An extremely interesting new generation of inversion methods has been proposed by \citet{2015A&A...577A.140A}. Based on the concept of sparsity or compressibility, this technique allows us to tackle the inversion of 2D maps ---and potentially 3D data sets--- all at once. The underlying idea of sparsity is the intrinsic redundancy of the data. That is, data can be projected onto a parameter space where a reduced set of variables can fully describe that data set. Provided that a linear transformation exists between the data set and the new ---small sized--- parameter space, an affordable inversion of 2D maps can be carried out. In this first paper the authors present a 2D ME inversion based on a wavelet transformation of the model parameter space. They show that reducing the dimensionality of the model unknowns by a factor between three and five yields results comparable to ---or even better than--- pixel-to-pixel inversion. Time saving is among the advantages of this kind of method, along with its ability to easily compensate for the effects of the telescope PSF and the regularization of solutions introduced by the sparsity hypothesis. Among the drawbacks we find the apparent impossibility of using LM methods ---since the Hessian matrix scales as the square of the number of free parameters. The authors suggest using proximal algorithms that can increase the convergence speed of the standard gradient descent method that is currently used. \subsection{Summary of inversion techniques} \label{sec:summaryits} This section summarizes in Table \ref{tab:tableinversionsconstant} all the past and current inversion techniques that have been proposed or are in use in solar physics. A distinction is made between those techniques that assume physical quantities that are constant with optical depth and those that allow the quantities to vary over the photosphere. LM stands for Levenberg--Marquardt, PCA for principal component analysis, ANN for artificial neural networks, GA for genetic algorithm, B for Bayesian, and GD for gradient descent. The overwhelming majority of codes use the Levenberg--Marquardt algorithm in order to find the minimum distance between the observed and the synthetic profiles.\footnote{The Florence code was modified by \citet{2007A&A...464..323B}.} \footnote{In the HAZEL code, the coarse approach to the minimum distance is made through a Lipschitzian method. See the paper.} \footnote{See \citet{2015A&A...577A...7S} for details about NICOLE.} \begin{savenotes} \begin{table}[htbp] \caption{Inversion techniques. LM stands for Levenberg--Marquardt, PCA for principal component analysis, ANN for artificial neural networks, GA for genetic algorithm, B for Bayesian, and GD for gradient descent.} \label{tab:tableinversionsconstant} \centering \begin{tabular}{llcc} \toprule Constant quantities\\ \toprule Identifier & Reference & Method & In use \\ \midrule KPNO & \mbox{\citet{1972lfpm.conf..227H}} & LM & \\ HAO-KPNO & \mbox{\citet{auer+etal1977}} & LM & \\ Florence & \mbox{\citet{1984SoPh...93..269L}} & LM & $\checkmark$ \\ HAO-ASP & \mbox{\citet{1985NASCP2374..306S}} & LM & $\checkmark$ \\ IAC MISMA & \mbox{\citet{1997ApJ...491..993S}} & LM & $\checkmark$ \\ CSIRO-Meudon & \mbox{\citet{rees+etal2000}} & PCA & $\checkmark$ \\ HAO MELANIE & \mbox{\citet{2001ApJ...553..949S}} & LM & $\checkmark$ \\ HAO FATIMA & \mbox{\citet{2001ApJ...553..949S}} & PCA & $\checkmark$ \\ AIP ANN & \mbox{\citet{2001A&A...378..316C}} & ANN & \\ HAO He {\sc i} D$_{3}$ & \mbox{\citet{2003ApJ...582L..51L}} & PCA & $\checkmark$ \\ HAO ANN & \mbox{\citet{2003NN.....16..355S}} & ANN & \\ MPS {\sc HeLIx} & \mbox{\citet{2004A&A...414.1109L}} & GA & $\checkmark$ \\ IAC Molecular & \mbox{\citet{2004PhDULL...A}} & LM & \\ IAA MILOS & \mbox{\citet{2007A&A...462.1137O}} & LM & $\checkmark$ \\ IAC HAZEL & \mbox{\citet{2008ApJ...683..542A}} & LM & $\checkmark$ \\ HAO VFISV & \mbox{\citet{2011SoPh..273..267B}} & LM & $\checkmark$ \\ IAC Sparse & \mbox{\citet{2015A&A...577A.140A}} & GD & $\checkmark$ \\ \bottomrule \toprule Variable quantities\\ \toprule Identifier & Reference & Method & In use \\ \midrule ETH Flux tube & \mbox{\citet{1990A&A...233..583K}} & LM & \\ IAC SIR & \mbox{\citet{1992ApJ...398..375R}} & LM & $\checkmark$ \\ ETH IT & \mbox{\citet{1992A&A...263..312S}} & LM & \\ IAC Flux tube & \mbox{\citet{1997ApJ...478L..45B}} & LM & $\checkmark$ \\ ETH SPINOR & \mbox{\citet{1998A&A...336L..65F}} & LM & $\checkmark$ \\ IAC NLTE & \mbox{\citet{2000ApJ...530..977S}} & LM & \\
HAO LILIA & \mbox{\citet{2001ASPC..236..487S}} & LM & $\checkmark$ \\ HAO-IAC NICOLE & \mbox{\citet{2001ASPC..236..487S}} & LM & $\checkmark$ \\ KIS SIRGAUS & \mbox{\citet{2003ASPC..307..301B}} & LM & $\checkmark$ \\ IAA SIRJUMP & \mbox{\citet{2009ApJ...704L..29L}} & LM & $\checkmark$ \\ IAC Bayes & \mbox{\citet{2009ASPC..405..315A}} & B & $\checkmark$ \\ MPS Spatially coupled & \mbox{\citet{2012A&A...548A...5V}} & LM & $\checkmark$ \\ IAC Regularization & \mbox{\citet{2013A&A...549L...4R}} & LM & $\checkmark$ \\ \bottomrule \end{tabular} \end{table} \end{savenotes} \clearpage \section{Discussion on inversion results} \label{section:discussion} \subsection{Increasing complexity in the model atmospheres} \label{sec:complexity} Following the approach of Sects.\ \ref{sec:approxmod} and \ref{sec:approxprof}, we want to discuss in this section how a given set of Stokes profiles can be fit under several assumptions about the stratification of the model atmosphere physical quantities. This discussion sheds light on the ill-conditioning issue we reported on in Sect.\ \ref{sec:introduction}: the same profile can be interpreted in several ways, depending on the complexity of the assumed model atmosphere; the only limiting factors for increasing such a complexity should be the noise in the observations and a reasonable dose of the principle of Occam's razor. To illustrate our discussion we shall be using SIR to carry out all the necessary calculations, in a similar way to that followed by \citet{2006ASPC..358..107B}. Given the SIR strategy explained in Sect.\ \ref{sec:sirstrategy}, the natural way to make a given physical quantity stratification more complex is to increase the number of nodes and, hence, the polynomial degree of the spline that is assumed to describe such a stratification. Therefore, we shall be inverting the ``observed'' profiles with various sets of nodes for each of the different atmospheric quantities. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{Fig90abnew}} \caption{Left column: $v_{\rm LOS}$, $B$, $\gamma$, and $\varphi$ stratifications with optical depth for the observed model atmosphere (black lines) and for the resulting models from the mode 1 (green lines), mode 2 (red lines), and mode 3 (blue lines) runs of SIR. Gray shaded areas cover the uncertainty region of the mode 3 solution. Middle column: the corresponding Stokes profiles. Right column: differences between the observed and inverted Stokes profiles. The abscissa for both the middle and right columns shows the wavelength centered at 630 nm.} \label{fig:Fig90ab} \end{figure}} Among all possible node combinations or modes, we have selected only six for the sake of simplicity. Each mode is characterized by the number of nodes, $n_{\rm B}$, used for $B$, $\gamma$, $\varphi$, and $v_{\rm LOS}$. The number of nodes for $T$, $n_{\rm T}$, is two nodes higher than that for the magnetic and dynamic quantities, except for mode 1, where $n_{\rm T} = 2$. Mode 1 can be called ``\`a la ME'' because it has just $n_{\rm B} = 1$. Since the starting guess model atmosphere has constant $\vector{B}$ and $v_{\rm LOS}$, only constant values for these quantities can result from this inversion. Mode 2 has $n_{\rm B} = 2$, so that linear stratifications are allowed in this mode. Mode 3 has $n_{\rm B} = 3$; hence, parabolic stratifications can come out of SIR in this mode. Mode 4 has $n_{\rm B} = 5$ and the stratification of the magnetic and dynamic quantities can be quartic.
Mode 5 has $n_{\rm B} = 7$ and the stratifications can be of order 6. Finally, mode 6 uses the automatic node selection algorithm described in Section \ref{sec:automatic}. We have built a penumbral model atmosphere after slightly modifying one of the models resulting from the inversion of a \emph{Hinode} observation. We will call it hereafter \emph{the observed model}. Our choice is driven by the shape of the Stokes profiles emerging from such an atmosphere. They are far from being typical even and odd functions of wavelength. The stratifications for $v_{\rm LOS}$, $B$, $\gamma$, and $\varphi$ in the observed model are plotted (from top to bottom) with black lines in the left panels of Figs.\ \ref{fig:Fig90ab} and \ref{fig:Fig90cd}. With this model atmosphere, we have synthesized the two Fe~{\sc i} lines at 630.1 and 630.2 nm, convolved them with the \emph{Hinode} spectropolarimeter PSF, sampled them with the instrument wavelength sampling interval, and finally added noise at a level of $10^{-3} \, I_{{\rm c}}$. The so-obtained Stokes profiles will be called \emph{the observed Stokes profiles} and are plotted with black lines in the middle panels of Figs.\ \ref{fig:Fig90ab} and \ref{fig:Fig90cd} (although they are barely discernible). Besides the observed model and profiles, both figures display the resulting models and fit profiles from the corresponding mode runs of SIR. Green, red, and blue lines correspond to modes 1, 2, and 3, respectively, in Fig.\ \ref{fig:Fig90ab}, and to modes 4, 5, and 6, respectively, in Figure \ref{fig:Fig90cd}. The right panels of the two figures show the differences between the observed and the fit profiles in the corresponding colored lines. These differences provide a direct measure of the fit quality. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{Fig90cdnew}} \caption{Same as in Fig.\ \ref{fig:Fig90ab} but with red, green, and blue lines representing modes 4, 5, and 6, respectively.} \label{fig:Fig90cd} \end{figure}} The \`a-la-ME inversion yields a fairly good fit although, as expected, it is unable to reproduce the asymmetries in the profiles. The typical misfit is never larger than 10\%. The results given by the \`a-la-ME inversion coincide with the actual values at $\log \tau_{\rm c} \simeq -1.5$. The exact coincidence takes place at different depths for each quantity, but the important qualitative message to be extracted is that, in spite of its simplicity, the ME approximation is able to retrieve the atmospheric quantities at the mid-photosphere. Looking at the differences in the right panels of Fig.\ \ref{fig:Fig90ab}, the parity rules we commented on in Sect.\ \ref{sec:approxprof} seem not to operate, but the reason is clear: the ME model is still too far from the observed model for linearity to hold. Moreover, given the strong asymmetry shown by the profiles, the specific wavelength around which we should symmetrize or anti-symmetrize the profiles has to be calculated, because it is certainly different from the nominal rest wavelength of the spectral line. The linear stratification mode does a better job as it provides a mean gradient with which asymmetries start to be reproduced. The fits clearly improve with the parabolic mode. Note that, besides having reduced the misfits, the stratifications of $v_{\rm LOS}$, $B$, $\gamma$, and $\varphi$ are fairly well mimicked between $\log \tau_{\rm c} = -3$ and $\log \tau_{\rm c} = -0.5$.
Note that the uncertainties evaluated with Eq.\ (\ref{eq:uncertainties}) ---and displayed in the figure with shaded gray areas--- indicate very well the range of reliability of the resulting stratifications. This is very important in practice, where the observed model atmosphere is unknown. In our example, the uncertainties for the three components of the vector magnetic field are almost compatible with the linear stratification results. This is not the case for the LOS velocity, where deviations are apparent and pave the way for more complex stratifications to improve the fits. In spite of this fact, the real gauge for deciding whether to keep increasing the number of nodes along the optical path is noise. As we have been discussing in many places in this paper, only if the differences between the observed and the synthetic profiles are larger than a few times the rms noise can we expect to obtain improvements with alternative model atmospheres. Since our noise still looks small when compared with the Stokes profile differences, we try modes 4, 5, and 6. The retrieved model atmospheres are better than the former ones and, in particular, the uncertainty shaded areas in the top panels of Fig.\ \ref{fig:Fig90cd} (they correspond to mode 5) indicate that the range of reliability has indeed extended up to $\log \tau_{\rm c} = 0$. Notice that the size of the residuals in the difference panels of the figure for mode 4 indicates that there is still some room to improve the fits, while modes 5 and 6 have profile differences compatible with the noise of the observations. Therefore, we cannot go any further (but indeed the fit quality is superb).

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.8\textwidth]{asymmetries90}} \caption{Area ($\delta A$) and amplitude ($\delta a$) asymmetries of the Stokes $V$ profile as a function of the number of nodes for $B$. Dashed lines correspond to the asymmetries of the observed profile while shaded areas mark the uncertainties introduced by a noise of 10$^{-3}\, I_{\rm c}$.} \label{fig:asymmetries} \end{figure}}

This example illustrates our ability to retrieve very complex stratifications when the noise is low and asymmetries are present. The latter feature is indeed important. On the one hand, if no asymmetries are present in the observed Stokes profiles we can readily discard part of the complexity: no variations of the LOS velocity with optical depth are present. On the other hand, and very remarkably, asymmetries increase the amount of available information: if profiles are symmetric (either even or odd), half of them can be thrown away, although the retrievals will be noisier (see Section \ref{sec:rfproperties}). Since Stokes profile asymmetries have driven most of the evolution of concepts in inversion techniques and, in general, in radiative transfer, a further check on the way our numerical experiments have been able to reproduce those asymmetries is in order. Let us consider the typical definitions for the Stokes $V$ amplitude, $\delta a$, and area, $\delta A$, asymmetries: \begin{equation} \label{eq:asymmetries} \delta a \equiv \frac{a_{\rm b} - a_{\rm r}}{a_{\rm b} + a_{\rm r}}, \,\,\, \delta A \equiv \frac{A_{\rm b} - A_{\rm r}}{A_{\rm b} + A_{\rm r}}, \end{equation} where $a_{\rm b}$ and $a_{\rm r}$ stand for the amplitudes of the Stokes $V$ blue and red lobes, and $A_{\rm b}$ and $A_{\rm r}$ stand for the unsigned areas of those lobes, respectively.
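A minimal numerical implementation of Eq.~(\ref{eq:asymmetries}) may help fix ideas. The Python sketch below is our own illustration: it splits a two-lobed Stokes $V$ profile at the zero crossing closest to the $|V|$ extremum, a lobe-detection heuristic of ours that is not part of SIR.
\begin{verbatim}
import numpy as np

def v_asymmetries(wl, V):
    """Amplitude (da) and area (dA) asymmetries of a two-lobed Stokes V
    profile, following the definitions of delta a and delta A above."""
    crossings = np.where(np.diff(np.sign(V)) != 0)[0]
    # Zero crossing separating the two lobes: the one nearest the extremum.
    i0 = crossings[np.argmin(np.abs(crossings - np.argmax(np.abs(V))))]
    blue, red = V[:i0 + 1], V[i0 + 1:]
    a_b, a_r = np.abs(blue).max(), np.abs(red).max()
    A_b = np.trapz(np.abs(blue), wl[:i0 + 1])   # unsigned lobe areas
    A_r = np.trapz(np.abs(red), wl[i0 + 1:])
    return (a_b - a_r) / (a_b + a_r), (A_b - A_r) / (A_b + A_r)
\end{verbatim}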
Figure\ \ref{fig:asymmetries} describes the performance of the different modes in reproducing $\delta a$ and $\delta A$. Obviously, mode 1 shows zero asymmetries. Mode 2 comes significantly closer to the amplitude asymmetry, but not to the area asymmetry, in this example. Mode 4 almost reproduces both quantities. The noise level (represented by the horizontal shaded areas) is so low, however, that only modes 5 and 6 provide an exact result.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{FigIMAXV51}} \caption{Same as Figs.\ \ref{fig:Fig90ab} and \ref{fig:Fig90cd} but for a simulated IMaX observation.} \label{fig:FigIMAXV} \end{figure}}

Certainly, if we decrease the amount of information by, for example, decreasing the number of wavelength samples and/or the polarization signal (because of the magnetic field being weaker), we can no longer retrieve such complex model atmospheres; put differently, the range of reliability of the results narrows down significantly, so that low-order polynomial approximations are enough to account for the observations. To exemplify this case, we have simulated a typical {\sc Sunrise}/IMaX observation in mode V5-6, that is, five wavelength samples and six accumulations. The IMaX Fe~{\sc i} line at 525.02 nm was synthesized in a quiet-Sun model atmosphere (again taken from the actual observations), convolved with the IMaX PSF, and sampled at $-8$, $-4$, $+4$, $+8$, and $+22.7$ pm from line center, and noise was added at a level of $10^{-3}\, I_{\rm c}$. Since $Q_{\rm c} = U_{\rm c} = V_{\rm c} = 0$ except perhaps in cases of very large LOS velocities, we count a total of $5\times4-3=17$ observables and, consequently, cannot afford to retrieve more than 17 unknowns. These 17 free parameters would correspond to our mode 3 (five nodes for $T$ and three nodes for $v_{\rm LOS}$, $B$, $\gamma$, and $\varphi$). Indeed, this number is too high in practice and can only be reached in cases of strong asymmetries, for the reasons we have just explained above: symmetries in the profiles reduce the degrees of freedom. Let us then consider mode 2 as the maximum achievable run and invert the observed profiles. Figure \ref{fig:FigIMAXV} is similar to Figs.\ \ref{fig:Fig90ab} and \ref{fig:Fig90cd} and the color codes are the same.\footnote{Panel arrangement is nevertheless rotated by 90$^{\circ}$.} The conclusions are clear: the \`a-la-ME mode provides fair values, and the linear approximation gives a reliable gradient of the physical quantities at around $\log \tau_{\rm c} = -1.5$.

\subsection{Inversion retrievals of weak fields} \label{sec:weakretreival}

The reliability of inversion retrievals from zones with weak fields is a continuous matter of debate. Concerns are often published with different levels of argument. As in any other aspect of life, criticism is always more prevalent than praise in any community. This is the case when discussing the ability of spectropolarimetric observations to distinguish weak fields and their inclinations. Most discussions are strongly biased by the fairly common misconception of Stokes $V$ being proportional to the longitudinal component of the magnetic field. We have shown in Sect.\ \ref{sec:weakatmosphere} that this approximation is valid only for a very limited range of values, and that the important observational parameter when dealing with weak signals is noise. Stokes profiles other than $V$ also provide information about $\vector{B}$.
As long as the signal is not buried by noise, radiative transfer is powerful enough to provide sufficiently accurate magnetic quantities. Sometimes the criticisms are not correctly interpreted. An example is the evidence shown by \citet{marian+etal2006} that the pair of Fe~{\sc i} lines at 630 nm is not able to provide a single model for a scenario in which two depth-dependent atmospheres, one magnetic and another non-magnetic, fill a spatial resolution element of about 1$^{\prime\prime}$. This true result has often been interpreted as if the famous pair of lines were unable to provide a reliable inference of the vector magnetic field, and as if only infrared lines were valid for such a diagnostic. \citet{2010ApJ...711..312D} explained that the scenario used by the former authors was perhaps too complicated for the available information; that is, the visible line profiles are not enough to cope with the number of free parameters in a two-component, depth-dependent atmosphere. A simpler model atmosphere with just one magnetic component (and the other non-magnetic) may fit the profiles well enough. The latter authors gave both theoretical and observational arguments to defend the hypothesis that visible lines are reasonable diagnostics even for weak magnetic fields, in spite of infrared lines being more sensitive. Later, \citet[][see also \citeyear{2012A&A...547A..89B}, \citeyear{2013A&A...550A..98B}]{2011A&A...527A..29B} raised doubts about the retrievals of fairly inclined fields when the polarization signals are very weak because they come from quiet, internetwork regions \citep[e.g.,][]{2007ApJ...670L..61O,2008ApJ...672.1237L}. In our opinion, again, their claims are partial misinterpretations of the results, as we try to demonstrate with the calculations that follow. Consider the pair of Fe~{\sc i} lines at 630 nm sampled as in the \emph{Hinode} \citep{kosugi+etal2007} spectropolarimeter \citep{2001ASPC..236...33L}. We have synthesized these lines in a quiet-Sun model for constant magnetic field strengths of 10, 15, 20, 25, 40, 50, 60, 75, 90, and 100~G. The inclination and azimuth may take any value while preserving an isotropic distribution, so that every field orientation is equally probable. The LOS velocity has been set to zero for all the profiles. One thousand Stokes profile sets have been calculated for each value of $B$. Once synthesized, white noise has been added to the profiles with a standard deviation of $10^{-3}\, I_{\rm c}$ or $3.3 \times 10^{-4}\, I_{\rm c}$, simulating ${\rm S/N} = 1000$ or $3000$, respectively.\footnote{We have followed here the customary procedure of adding equal rms noise to all four Stokes parameters. However, as stressed by \citet{2012ApJS..201...22D}, smaller noise should be added to Stokes $I$, which is always much better measured. This is so because almost all polarimeters are more efficient for Stokes $I$. In the optimum polarimeter case where Stokes $Q$, $U$, and $V$ have the same polarimetric efficiency, that of Stokes $I$ is higher by a factor of $\sqrt{3}$.} The synthetic profiles were then inverted with SIR \`a-la ME (two nodes in $T$ and one node for the rest of the parameters). The results are summarized in Figure\ \ref{fig:histo_gamma6}.

\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{histo_gamma6}} \caption{Left panel: output magnetic field strength from the inversion as a function of input strength. Colors correspond to the two different values of the S/N.
The dashed line marks the bisector of the first quadrant. Error bars represent rms values from the 1000 inversions. Right panel: distributions of the output field inclinations from the ${\rm S/N = 1000}$ inversions. Colors correspond to values of the field strength indicated in the inset. The input distribution of inclinations is represented by the dashed line.} \label{fig:histo_gamma6} \end{figure}}

Fields weaker than 75 G (25 G) from the ${\rm S/N = 1000}$ (3000) inversions tend to be overestimated, but in no case is the excess such that the inversion retrieves an unreasonably strong magnetic field. As a matter of fact, the results illustrate very well the fair reliability of the $B$ inference in practically all circumstances. Again, when the noise decreases, the inversion results improve. The magnetic inclination results, represented in the right panel of the figure, are also very illustrative of what is going on. Only results from the ${\rm S/N} = 1000$ experiments are plotted. One can clearly see that the weakest fields tend to result in an excess of inclined fields. {\bf This is, however, an expected result that has nothing to do with any special inability of the visible lines or with the sought-after proportionality between Stokes $V$ and the longitudinal component of the magnetic field.} It is rather a consequence of the noise dominating the polarization signals. When the field is very weak, Stokes $V$ is very small, barely exceeding the noise level. At the same time, Stokes $Q$ and $U$ (which should theoretically be zero) simply show noise. Since the $V$ signals are not sufficiently larger than the linear polarization signals, the inversion code has no other option than to interpret the observations as very inclined magnetic fields: it is mostly fitting noise in $Q$ and $U$. The situation clearly improves as the field strength increases. The inclination distribution is well recovered for $B=100\,$G fields, even when the S/N is only 1000. Let us now consider a distribution of magnetic field strengths according to the probability density function (PDF) obtained by \citet{2007ApJ...670L..61O} from \emph{Hinode} observations. With these field strengths and an isotropic inclination distribution such as that of Fig.\ \ref{fig:histo_gamma6}, we have synthesized 10000 Stokes profiles to which white noise of rms amplitude $\sigma = 10^{-3}\, I_{\rm c}$ was added. \`A-la ME inversions with SIR have been carried out. Both the inputs (black lines) and the results (red lines) are plotted in the upper panels of Figure \ref{fig:histo_campo}. Fields weaker than 20~G are slightly overestimated, but above 60 or 70~G the strength PDF is very nicely recovered. The inclination PDF shows an excess of horizontal fields to the detriment of the more vertical ones. The same PDFs are shown in the lower panels for a selection of pixels where the maximum polarization signal ($\max \{|V|,\sqrt{Q^{2}+U^{2}}\}$) is greater than $4\sigma$. As one can clearly see, fields weaker than 10~G and a good portion of horizontal fields almost disappear. The inversions react very well. The underestimation of small inclinations and the excess of large ones can be attributed to insufficient S/N. When the experiments are repeated with higher signal-to-noise ratios, the agreement between the PDFs improves accordingly.

\epubtkImage{}{% \begin{figure}[tbp] \centerline{\includegraphics[width=\textwidth]{histo_campo}} \caption{Upper panels: magnetic field strength (left) and inclination (right) PDFs.
$B$ follows the lognormal PDF from \citet{2007ApJ...670L..61O} with $B_{0} = 36.7$ and $\sigma = 1.2$. Inclinations follow the same random, isotropic distribution as for Figure \ref{fig:histo_gamma6}. Bottom panels: same as the upper ones but only for those points where the polarization signal is higher than a threshold (see text for details). Black lines correspond to the input and red lines to the output from \`a-la ME inversions.} \label{fig:histo_campo} \end{figure}}

\newpage

\section{Conclusions} \label{section:conclusions}

The inversion of the radiative transfer equation has been presented as a topological problem that maps the space of observables, the Stokes parameters, onto the space of the object physical quantities. The dependence of such a mapping on the definition of the two spaces implies a number of assumptions that are explicitly or implicitly made by any inference technique, regardless of whether it is called an inversion or not. Such assumptions determine to a great extent the uncertainties in the astronomical inferences, which depend on both the measurement errors and the analysis technique. In the observational space, one has to select the parameters to be measured and the level of noise with which such measurements are carried out. Signals (measured parameters) are useful insofar as they vary after a modification in the object physical quantities. For the variation to be detectable, it should be higher than the noise. If the signal does not change above noise levels after a perturbation in the physical quantities, then it is useless and must be discarded. In the object physical space, the number of quantities and their assumed stratification with depth in the atmosphere are the key variables. If a physical quantity at a given depth in the atmosphere produces no measurable effect on the Stokes spectrum, then this quantity should not be looked for through the inversion process. The number of these physical quantities should not exceed the number of observables. The mapping between the two spaces is nothing but radiative transfer. Depending on the spectral line and the way we measure it (that is, the number of wavelength samples, the width of the spectral PSF, etc.), the transfer can be studied through the full non-LTE problem, the LTE approximation, or through further simplifications such as the Milne--Eddington approximation, the weak field approximation, etc. Strictly speaking, no available inversion technique deals with the full non-LTE problem, and the only non-LTE code, NICOLE, relies on several approximations, such as the fixed departure coefficient approximation or the field-free approximation, in order to make the numerical problem tractable. We have provided arguments in favor of proceeding through a step-by-step approach in which the complexity of the problem increases sequentially until convergence has been reached. In this sense we strongly recommend initializing inversions with classical estimates of $B$ and $v_{\rm LOS}$ as provided by the center-of-gravity technique, and estimates of $\gamma$ and $\varphi$ as provided by the weak field approximation. The criterion for convergence has to be established in terms of noise: if the (rms) difference between observed and synthetic Stokes profiles is less than the typical noise of the observations, increasing the complexity of the object physical description adds no information. In this regard, we have given both conceptual and technical arguments for the MISMA hypothesis and inversion technique to be abandoned.
At the other extreme of the ``complexity spectrum'', the weak field approximation must only be used with much care and mainly for very broad chromospheric lines. Nothing in the transfer equation indicates that Stokes $V$ is proportional to $B \cos \gamma$. Only some matrix elements of \matriz{K} are. After integration through the atmosphere, the proportionality is most probably lost. Moreover, the information provided by the other Stokes parameters helps in disentangling the magnetic field strength from the inclination. In particular, Stokes $I$ very soon departs from the zero-field conditions that are strictly necessary for the weak field approximation to apply. The step-by-step approach we suggest (and which is indeed implemented in the SIR inversion code) is to be preferred for two reasons: on the one hand, the atmospheric stratification of physical quantities can be described by a Taylor-expansion-like method where it is assumed to be first constant, then linear, later quadratic, and so on; on the other hand, the Stokes profiles ($I$ in line depression), as functions of the wavelength, belong to $\mathbb{L}^{2}$ and, hence, can be expanded in terms of orthonormal bases such as those provided by the Hermite functions or those built for PCA techniques. The various algorithms used for the inversion problem have been reviewed. They include the most widespread Levenberg--Marquardt algorithm, database search inversions, artificial neural networks, genetic algorithms, and Bayesian inferences. The most promising techniques for the near future, namely those which include spatial degradation by the telescope, are also discussed. Among these we find the so-called spatially-coupled inversions by \citet{2012A&A...548A...5V}, the regularized deconvolution inversions by \citet{2013A&A...549L...4R}, and the sparse inversions by \citet{2015A&A...577A.140A}. Suggestions for improving their performance and reliability are also given. This paper ends with a discussion of a pair of topics that might seem controversial. The first one is a description of how the current implementation of SIR deals with a step-by-step approach. The sequential improvement of the fits is explicitly shown. The second topic discusses the reliability of weak field retrievals. The idea of a theoretical inability of Zeeman-sensitive spectral lines in the visible to infer weak fields accurately is refuted. Instead, the root of the problem is shown to be the signal-to-noise ratio of the observations. If the noise is suitably low, then radiative transfer provides the necessary tools for accurate retrievals. Of course, uncertainties will be proportionally larger than when signals are bigger (i.e., for stronger fields), but there are no reasons for not trusting the inversion results. \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The ratchet effect allows one to create directed particle transport in an unbiased non-equilibrium environment and thus to extract mechanical work from a fluctuating bath \cite{Hanggi2005,Astumian2002,Hanggi2009}. Such a conversion is impossible for macroscopic equilibrium systems, which makes the ratchet effect a fundamental non-equilibrium phenomenon. While originally conceived as proof-of-principle examples of rectification schemes producing work from fluctuations \cite{Prost1994,Magnasco1993,Bartussek1994,Faucheux1995} and as possible explanations for the mechanism allowing molecular motors to show directed motion along cytoskeleton filaments \cite{Astumian1997,Julicher1997}, ratchets now form a widespread paradigm with a large realm of applications in atomic, condensed matter and biophysics. Specific applications range from the rectification of atomic \cite{Schiavoni2003a}, colloidal \cite{Rousselet1994} and bacterial motion \cite{DiLeonardo2010,Koumakis2013,Reichhardt2017,Vizsnyiczai2017} to the transportation of fluxons in Josephson junction arrays \cite{Falo1999,Zolotaryuk2012} and vortices in conformal crystal arrays \cite{Reichhardt2016,Reichhardt2015}. Very recently, it has been demonstrated that ratchets also allow one to control the dynamics of topological solitons in ionic crystals \cite{Brox2017}, enhance photocurrents in quantum wells \cite{Faltermeier2017}, rectify the chirality of magnetization in artificial spin ice \cite{Gliga2017}, and create light-modulated electron transport across organic bulk heterojunctions \cite{Kedem2017}.

\begin{figure*}[t] \includegraphics[scale=0.048]{fig1.png} \caption{(a) Schematic diagram of the setup demonstrating the phenomenon of dimensional coupling induced current reversal. The filled dots denote particles and the color (red/blue) indicates the sign of the $x$ component of their velocities (right/left). In a driven quasi-1D lattice (upper panel), most particles travel in the negative $x$-direction, resulting in an average transport in this direction. However, in the driven 2D lattice (middle and lower panels), having non-zero dimensional coupling $\beta$, most particles initially travel towards the negative $x$-direction but at later times revert their movement, resulting in a dynamical current reversal. Larger values of the coupling $\beta$ reduce the timescale of the current reversal (lower panel). (b) Mean transport velocity of the ensemble along the $x$-direction as a function of time for different values of $\beta$, on a linear and a logarithmic (inset) timescale. The grey circles correspond to the reversal timescales $t_{r,\beta}$ for different values of $\beta$. Remaining parameters: $U=0.88$, $a=0.48$, $\alpha=9.61$ and $\gamma=0.62$. There is no directed transport of particles along the $y$-direction.}\label{fig1} \end{figure*}

While the fact that a specific setup creates directed particle transport can typically be predicted based on symmetry properties \cite{Flach2000,Denisov2014}, the strength and even the direction of the emerging currents are far less immediate. In fact, the current direction can often be reverted by changing the value of a certain control parameter or the properties of the rectified objects (e.g.\ their mass or mobility), without changing the symmetry of the underlying equations. Achieving such current reversals is the key aim of many investigations, as they allow segregation of particle mixtures by transporting particles of, e.g.,
different mass or mobility in opposite directions, where they can be collected. While most ratchets are still studied in one spatial dimension (1D) \cite{Reimann2002,Hanggi2009}, particularly those operating in the Hamiltonian regime \cite{Flach2000,Schanz2001,Schanz2005,Denisov2014,Wulf2012,Liebchen2012}, recent experiments have progressed significantly regarding the construction of highly controllable two-dimensional (2D) ratchet devices. These include cold atoms in ac-driven optical lattices \cite{Lebedev2009,Cubero2012,Renzoni2009} and the very recent example of a fully configurable 2D ratchet based on colloids in holographic optical tweezers \cite{Arzola2017}. Conceptually, the key new ingredient in 2D ratchets is the coupling between the dimensions, which has been shown to allow, in the overdamped regime, for directed transport at an angle relative to the driving law \cite{Arzola2017, Mukhopadhyay2017} and may also involve transport completely orthogonal to the driving \cite{Reichhardt2003}. In the present work, we demonstrate that dimensional coupling can even lead to current reversals. A 2D potential landscape having a periodic potential along, e.g., the $x$-direction but without any potential variation along the perpendicular $y$-direction (henceforth referred to as a `quasi-1D lattice') allows for directed particle transport when driven by an appropriately chosen ac-driving force in the $x$-direction (see Fig.~\ref{fig1}a, upper panel). Keeping the driving unchanged but performing a structural change of the lattice along the $y$-direction introduces dimensional coupling effects. We show that this coupling does not affect the directed particle current on short timescales, but reverts its direction at longer timescales as compared to the quasi-1D lattice (see Fig.~\ref{fig1}a, lower panel). These dimensional coupling induced current reversals (DCIR) occur dynamically in time \cite{Liebchen2012}, as opposed to the standard scenario of asymptotic current reversals due to a change of a system parameter, where the direction of the current is time-independent \cite{Marconi2007,Mateos2000,DeSouzaSilva2006}. We show that the reversal timescale can be varied by thousands of driving periods by varying the structure of the lattice perpendicular to the driving direction (see Fig.~\ref{fig1}a, middle panel). The underlying mechanism of these current reversals exploits the fact that changing the structure of the lattice along the second dimension allows the particles to explore regions of phase space which are inaccessible in the quasi-1D lattice.

\section{Setup} We consider $N$ non-interacting classical particles in a 2D lattice of elliptic Gaussian barriers laterally driven along the $x$-direction via an external bi-harmonic driving force $f(t)= d_{x} (\sin \omega t + 0.25\sin (2\omega t + \pi/2))$. Here, $d_x$ and $\omega$ are the amplitude and the frequency of the driving, thereby introducing a temporal periodicity of $T=2\pi / \omega$. The system is described by the Hamiltonian: \begin{eqnarray} &H&=\frac{p_x^2}{2m}+\frac{p_y^2}{2m} \nonumber\\ &+&\sum_{i,j=-\infty}^{+\infty} V e^{-\left[\beta_x\left(x-f(t)-(i+\frac{1}{2})L_x\right)^2+\beta_y\left(y-(j+\frac{1}{2})L_y\right)^2\right]}\label{hamil} \end{eqnarray} where the potential barriers have a height $V$ and the equilibrium distances between them along $x$ and $y$ are given by $L_x$ and $L_y$, respectively.
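The lattice part of Eq.~(\ref{hamil}) is straightforward to evaluate numerically, since the Gaussian barriers decay quickly and the infinite sum can be truncated to a few neighboring barriers. The following Python sketch (an illustration of ours with placeholder parameter values, not the production code behind our simulations) returns the instantaneous potential:
\begin{verbatim}
import numpy as np

def potential(x, y, t, V=1.0, Lx=1.0, Ly=1.0, bx=1.0, by=0.03,
              dx=0.5, w=1.0, ncut=3):
    """Driven Gaussian-barrier lattice of Eq. (1), truncated to the
    (2*ncut+1)^2 barriers nearest the origin. beta = by/bx is the
    dimensional coupling; by -> 0 recovers the quasi-1D lattice."""
    f = dx * (np.sin(w * t) + 0.25 * np.sin(2.0 * w * t + np.pi / 2.0))
    pot = 0.0
    for i in range(-ncut, ncut + 1):
        for j in range(-ncut, ncut + 1):
            pot = pot + V * np.exp(-(bx * (x - f - (i + 0.5) * Lx) ** 2
                                     + by * (y - (j + 0.5) * Ly) ** 2))
    return pot  # works for scalars or numpy meshgrids alike
\end{verbatim}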
This potential breaks both the parity $ x\rightarrow - x + \chi$ symmetry along the $x$-direction and the time-reversal $t\rightarrow -t +\tau$ symmetry (for all possible constants $\chi$ and $\tau$), while preserving parity symmetry along the $y$-direction. Possible realizations of this setup include cold atoms in optical lattices at microkelvin temperatures, where a classical description is appropriate \cite{Renzoni2009} and where, to a good approximation, the dynamics is Hamiltonian. Introducing dimensionless variables $x'=\frac{x}{L_x}$, $y'=\frac{y}{L_y}$ and $t'=\omega t$ and dropping the primes for simplicity, the equation of motion for a single particle at position ${\bf r}$ with momentum $\bf p$ reads \begin{equation} \ddot{\bf r} = \sum_{m,n=-\infty}^{+\infty} \mathcal{U}\left( {\bf r} - F(t) {\bf e_x} - {\bf R}_{m,n} \right) e^{-\mathcal{G}({\bf r} - F(t) {\bf e_x} - {\bf R}_{m,n})}\label{eqm2} \end{equation} where $F(t)=\left( a\sin t + 0.25a\sin (2 t + \pi/2) ,0\right)$ is the effective driving law, ${\bf e_x}=(1,0)$, ${\bf R}_{m,n}=(m,n)$ denotes the positions of the maxima of the Gaussian barriers where $\left(m-\frac{1}{2}\right)$,$\left(n-\frac{1}{2}\right)$ $\in\mathbb{Z}$, and $\mathcal{U}({\bf r})=\left(Ux,\beta Uy \right)$, $\mathcal{G}({\bf r})=\alpha (x^2 + \gamma y^2)$. The parameter space of our system is therefore essentially five-dimensional: the dimensionless parameters are given by a reduced barrier height $U=\frac{2V\beta_x}{m\omega ^2}$, an effective driving amplitude $a=\frac{d_x}{L_x}$, as well as the two parameters $\alpha=\beta_x L_x^2$ and $\gamma=\frac{\beta_y L_y^2}{\beta_x L_x^2}$ characterizing the localization of the Gaussian barriers along the $x$ and $y$ directions. A final key control parameter is $\beta=\frac{\beta_y}{\beta_x}$, which measures the coupling between the two dimensions. The limits $\beta\rightarrow 0$ and $\beta\rightarrow \infty$ both correspond to quasi-one-dimensional lattices.

\section{Results} \begin{figure}[t] \includegraphics[scale=0.11]{fig2.png} \caption{The particle distribution as a function of position $x$ mod $1$ and $v_x$ (in colormap) of all the $N=10^4$ particles propagating in the 2D lattice with $\beta=0.03$, superimposed on the PSOS of the quasi-1D driven lattice (regular islands in red and chaotic seas in blue) at (a) $t=50$ and (b) $t= 1.5\times 10^3$. (c) The particle distribution as a function of $y$ and $t$ showing the spreading of the ensemble along the $y$-direction with time.} \label{fig2} \end{figure} To explore the transport properties of our setup, we initialize $N=10^4$ particles with small random velocities $v_x,v_y \in [-0.1,0.1]$ such that their initial kinetic energies are small compared to the potential height of the lattice. In order to mimic a localized loading of particles into the lattice, we initialize the particles within the square regions $[-0.1,0.1]\times [-0.1,0.1]$ centered around the potential minima of the lattice. Subsequently, we time-evolve our ensemble up to $t = 10^4$ by numerical integration of Eq.~(\ref{eqm2}) using a Runge--Kutta Dormand--Prince integrator. For $\beta=0$, the lattice is quasi-1D (upper panel in Fig.~\ref{fig1}a) and produces a non-zero mean velocity pointing in the negative $x$-direction (Fig.~\ref{fig1}b). This behaviour is expected since the system breaks both the parity and time-reversal symmetry along the $x$-direction, thus satisfying the necessary criteria for non-zero directed transport \cite{Flach2000,Schanz2005,Schanz2001}.
Since there is no driving in the $y$-direction, the symmetries are preserved and hence there is no directed transport along this direction. The transport in the $x$-direction accelerates until it finally saturates at $\bar{v}_x\simeq-1.4$. We now vary $\beta$ to explore the impact of dimensional coupling effects on the directed transport. As shown in Fig.~\ref{fig1}b, for $\beta=0.03$ the early-time transport velocity is negative and, at around $t\simeq 1.5\times 10^2$, approaches a speed of $\bar{v}_x\simeq-1.35$, similar to the quasi-1D case. Remarkably, thereafter the transport begins to slow down and vanishes at $t=t_{r,\beta=0.03}\simeq 1.4 \times 10^3$. Further on, it changes sign, which leads to a current reversal. Finally, it approaches an asymptotic constant value of $\bar{v}_x\simeq1.2$. Therefore, the structural change of the lattice in the direction orthogonal to the driving force reverts the transport direction. To study this dimensionality-induced current reversal in more detail, we perform our simulations for the stronger dimensional couplings $\beta=0.15$ and $\beta=0.62$, which lead to a qualitatively similar behaviour (see Fig.~\ref{fig1}b). However, we find that the timescale at which the reversal occurs depends strongly on the strength of the dimensional coupling coefficient $\beta$. Specifically, for $\beta=0.62$ we obtain $t_r\simeq 3\times 10^2$, showing that the reversal timescale can be tuned by at least a factor of five.

\section{Discussion} The underlying mechanism of the DCIR effect depends on two generic ingredients: (i) the existence of a mixed phase space (containing regular and at least two disconnected chaotic components) in the underlying quasi-1D lattice and (ii) the diffusional spreading dynamics in the 2D lattice along the orthogonal direction. We now discuss the occurrence of negative transport in the quasi-1D lattice ($\beta=0$) and will then analyze how the dimensional coupling effect can revert the transport direction. Due to the absence of forces acting along the $y$-direction, the dynamics in the quasi-1D lattice (Fig.~\ref{fig1}a) can be decomposed into a constant drift in the $y$-direction and a motion in a 1D lattice driven along the $x$-axis. The latter case is described by a three-dimensional (3D) phase space, illustrated by taking stroboscopic snapshots of $x(t), v_x(t)$ at $t=2\pi n$ ($n\in \mathbb{N}$) of particles with different initial conditions. This leads to Poincar\'{e} surfaces of section (PSOS) as shown in Fig.~\ref{fig2}a, where the reflection symmetry about $v_x=0$ is broken. This PSOS is characterized by two prominent chaotic components or `seas': the upper sea $\mathcal{C}_U$ between $0.75\lesssim v_x \lesssim 6.0$ and the lower sea $\mathcal{C}_L$ between $-3.5\lesssim v_x \lesssim 0.2$. These chaotic seas are separated from each other by regular invariant spanning curves at $v_x \simeq 0.2$, preventing particles from travelling between the chaotic components. Hence, particles initialized with low initial energies $v_x \in [-0.1,0.1]$ and occupying $\mathcal{C}_L$, matching the initial conditions used in our simulations, undergo chaotic diffusion through the lattice with negative velocities along the $x$-direction until they are uniformly distributed over $\mathcal{C}_L$. As a result, we observe a negative directed transport of the ensemble.
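The quasi-1D PSOS and the ensemble dynamics described above can be reproduced in outline with standard tools. The following Python/SciPy sketch (an illustration of ours, not our production code) integrates the dimensionless equation of motion, Eq.~(\ref{eqm2}), for a single particle with the parameter values of Fig.~\ref{fig1}b and records one stroboscopic PSOS trajectory:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

U, a, alpha, gamma, beta = 0.88, 0.48, 9.61, 0.62, 0.03  # Fig. 1b values

def F(t):                                  # effective driving law
    return a * np.sin(t) + 0.25 * a * np.sin(2.0 * t + np.pi / 2.0)

def rhs(t, s, ncut=2):
    """Right-hand side of Eq. (2), s = (x, y, vx, vy). The barriers sit
    at half-integer positions; the sum is truncated to nearby barriers."""
    x, y, vx, vy = s
    xs = x - F(t)
    ax = ay = 0.0
    for m in np.floor(xs) + np.arange(-ncut, ncut + 1) + 0.5:
        for n in np.floor(y) + np.arange(-ncut, ncut + 1) + 0.5:
            g = np.exp(-alpha * ((xs - m) ** 2 + gamma * (y - n) ** 2))
            ax += U * (xs - m) * g
            ay += beta * U * (y - n) * g
    return [vx, vy, ax, ay]

# Stroboscopic sampling once per driving period (t = 2*pi*n).
t_samp = 2.0 * np.pi * np.arange(500)
sol = solve_ivp(rhs, (0.0, t_samp[-1]), [0.05, 0.0, 0.01, -0.02],
                t_eval=t_samp, method='RK45',   # Dormand-Prince pair
                rtol=1e-9, atol=1e-11)
x_mod, v_x = sol.y[0] % 1.0, sol.y[2]           # one PSOS trajectory
\end{verbatim}
Superimposing many such trajectories for different initial conditions yields a PSOS of the type shown in Fig.~\ref{fig2}.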
Let us now explore the mechanism allowing dimensional coupling ($\beta > 0$) to revert the transport direction. In this case, the phase space is five-dimensional (5D), characterized by $(x,v_x,y,v_y,t)$, which complicates both the illustration and the analysis of the transport based on phase space structures. However, up to a certain timescale, the dynamics of the particles, even in this higher-dimensional phase space, can be effectively understood in terms of the dynamical occupation of the quasi-1D PSOS by the ensemble. To show this, we superpose the snapshots of the ensemble particle coordinates $(x,v_x)$ for $\beta=0.03$ on the quasi-1D PSOS at two different times, $t=50$ and $t=1.5\times 10^3$ (Fig.~\ref{fig2}). At $t=50$, well before the reversal timescale $t_{r,\beta=0.03}=1.24\times 10^3$, the ensemble population is confined to $\mathcal{C}_L$ in a similar way to what we observed for $\beta=0$ (Fig.~\ref{fig2}a). Physically, this results from the fact that at shorter timescales the particles experience comparatively strong driving forces, which allow them to move quickly along the $x$-direction, while in the $y$-direction they move only very slowly, with a velocity largely dictated by the initial conditions. Therefore, for a long time, they stay close to the potential valleys at $y=0$ (Fig.~\ref{fig2}c), where they hardly experience the 2D landscape of the potential.

\begin{figure}[t] \includegraphics[scale=0.10]{fig3.png} \caption{The time dependence of (a) the position and (b) the velocity of a typical particle in the 2D lattice with $\beta=0.03$, initialized in the lower chaotic sea $\mathcal{C}_L$ (Fig.~\ref{fig2}a), demonstrating the crossover to the upper chaotic sea $\mathcal{C}_U$ (Fig.~\ref{fig2}b). Remaining parameters are the same as in Fig.~\ref{fig1}b. Note that for this particular trajectory the crossover happens at $t\simeq 5\times 10^3$, which is larger than the average reversal timescale $t_{r,\beta=0.03}=1.24\times 10^3$ of the ensemble.} \label{fig3} \end{figure}

As time evolves, the particles experience more and more of the 2D character of the potential, which effectively transfers motion in the $x$-direction into motion along the $y$-direction, leading to a symmetric spreading of the ensemble along the $y$-direction (Fig.~\ref{fig2}c). The particles are therefore no longer constrained by the structure of the 1D phase space but can explore the entire 5D phase space. They can, in particular, now freely cross the invariant spanning curves at $v_x\simeq 0.2$ of the 1D phase space to attain significant positive velocities (Fig.~\ref{fig2}b). During the phase of temporal evolution when the particles can cross the invariant curve, the directed current slows down and reduces to zero. It finally becomes positive, since the asymptotic average velocity of the particles along the positive $x$-direction is higher than that along the negative $x$-direction. A typical trajectory demonstrating the crossover from $\mathcal{C}_L$ to $\mathcal{C}_U$ is shown in Fig.~\ref{fig3}.

\section{Control of the current reversal} \begin{figure}[b] \includegraphics[scale=0.09]{fig4.png} \caption{(a) The probability density $P(t)$ of the first crossing time (FCT) $t$ required by a particle to cross one lattice site along the $y$-direction for the first time. (b) The mean FCT $\tau_{\beta}$ (in blue) and the reversal timescale $t_{r,\beta}$ (in red) as functions of $\beta$, with corresponding inverse power law fits.
The inset shows the linear relationship between $t_{r,\beta}$ and $\tau_{\beta}$.} \label{fig4} \end{figure} Let us finally discuss the dependence of the current reversal time $t_{r,\beta}$ on the parameter $\beta$. Following the above-outlined physical picture, the current reversal occurs at timescales comparable to the time a particle needs to deviate significantly from the neighborhood of the minimum of the lattice potential along the $y$-direction. For a particular value of $\beta$ and a given set of initial conditions, one can thus expect the reversal timescale $t_{r,\beta}$ to depend linearly on the average time $\tau_{\beta}$ the particles need to cross one lattice site along the $y$-direction for the very first time. In order to estimate $\tau_{\beta}$ for different values of $\beta$, we simulate ensembles of $10^4$ particles, each with initial conditions identical to those used in our setup (Fig.~\ref{fig1}) but for different values of $\beta$, and calculate the corresponding probability density $P(t)$ of the first crossing time (FCT) $t$ required by a particle to cross one lattice site along the $y$-direction (Fig.~\ref{fig4}a); see the numerical sketch at the end of the following section. As $\beta$ increases, the particles tend to have shorter FCTs and hence experience the 2D landscape of the potential much earlier. This can be clearly seen in Fig.~\ref{fig4}b (blue), which shows that the mean FCT $\tau_{\beta}$ decreases with increasing $\beta$, following a $\tau_{\beta} \sim \beta^{-0.6}$ power law. Confirming our expectation, a linear fit describes the relation between $t_{r,\beta}$ and $\tau_{\beta}$ to a good approximation (see the inset of Fig.~\ref{fig4}b), and hence $t_{r,\beta}$ follows a similar inverse power law, $t_{r,\beta}\sim \beta^{-0.55}$ (Fig.~\ref{fig4}b, red). The reversal timescale also depends (weakly) on the initial velocities of the particles: we verified that scaling the initial velocities by a factor of 0.01 increases the reversal timescale by a factor of approximately 1.34.

\section{Experimental realization} A setup in which to experimentally observe dimensional-coupling-induced current reversals is provided by cold atoms in optical lattices generated by laser beams at $\mu$K temperatures, where a classical description is appropriate \cite{Renzoni2009}. Setups based on holographic trapping of atoms \cite{Nogrette2014,Kim2016,Barredo2016,Stuart2018} might also provide an interesting and highly controllable alternative. The resulting lattice can be driven by phase modulation using acousto-optical modulators and radio frequency generators. Translating our parameters to experimentally relevant quantities for an optical lattice setup with cold rubidium ($^{87}$Rb) atoms and $780$~nm lasers, we obtain the lattice height $V \sim 22 E_r$, the width $1/\sqrt{\beta_x}\sim 252$~nm, the driving frequency $\omega \sim 10\omega_r$ and the driving amplitude $d_x \sim 390$~nm, where $E_r$ and $\omega_r$ are the recoil energy and recoil frequency of the atom, respectively. Interaction, disorder and noise effects \cite{Liebchen2012,Liebchen2015,Wulf2014} would probably lead to a slow accumulation of particles within the regular portions of phase space. This accumulation may also aid the particles in crossing the regular barrier which, in the quasi-1D case, confines the initial conditions to negative and only weakly positive velocities, and may therefore lead to a slight decrease of the reversal time.
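As announced in the previous section, the first-crossing-time statistics can be estimated along the following lines. The Python/SciPy sketch below (an illustration of ours, reusing the function rhs from the sketch in the Results section, with a reduced ensemble size) uses event detection to record the time at which each particle first leaves its initial lattice site along $y$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def crossed(t, s):
    # Particles start near y = 0, so |y| = 1 marks the first crossing
    # of one lattice site along y (dimensionless units).
    return abs(s[1]) - 1.0
crossed.terminal = True

rng = np.random.default_rng(1)
fct = []
for _ in range(200):                       # 10^4 particles in the paper
    s0 = rng.uniform(-0.1, 0.1, size=4)    # (x, y, vx, vy)
    sol = solve_ivp(rhs, (0.0, 1.0e4), s0, method='RK45',
                    events=crossed, rtol=1e-9, atol=1e-11)
    if sol.t_events[0].size > 0:
        fct.append(sol.t_events[0][0])
tau_beta = np.mean(fct)                    # mean FCT for this beta
# Repeating this for several beta and fitting log(tau) against log(beta)
# with np.polyfit gives the exponent of the tau ~ beta^(-0.6) power law.
\end{verbatim}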
\section{Concluding remarks} Dimensional coupling effects in two-dimensional lattices provide a new route to producing current reversals. In contrast to most other cases, the current reversal here occurs dynamically, with a characteristic timescale that can be controlled by the strength of the coupling. The underlying mechanism is generic, in the sense that it depends only on the mixed phase space structure of the underlying uncoupled quasi-1D lattice, and may therefore apply to a variety of physical systems.

\begin{acknowledgments} B.L. acknowledges funding by a Marie Curie Intra European Fellowship (G.A. no 654908) within Horizon 2020 and A.K.M. acknowledges a doctoral research grant (Funding ID: 57129429) by the Deutscher Akademischer Austauschdienst (DAAD). T.X. acknowledges financial support by the China Scholarship Council (CSC) during a visit to the University of Hamburg. \end{acknowledgments} \bibliographystyle{apsrev4-1}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Introduction\label{sec:introduction}} Paths of a given length $n$ between two vertices $a,b$ of a Dynkin diagram (extended or not) can be interpreted in terms of classical or quantum $SU(2)$ intertwiners between representations $a \otimes \tau^n$ and $b$, where $\tau$ is the fundamental (spin $1/2$). In a similar way, essential paths are associated with (classical or quantum) morphisms between $a \otimes \tau_n$ and $b$, where $\tau_n$ denotes an irreducible representation. We consider the graded vector space of essential paths (defined by A. Ocneanu for quite general graphs) and its algebra of grade-preserving endomorphisms. The corresponding associative product called {\sl composition product\/} is denoted $\circ$. We first show that the space of essential paths carries an associative algebra structure (denoted $\bullet$) compatible with its natural grading. Its definition involves the usual concatenation product of paths, but the situation is not trivial since the concatenation product of two essential paths is usually not essential. This is actually our main result, and it seems to be new (the existing literature is more concerned with the algebra structures that can be defined at the level of the {\em graded} tensor square of this vector space). Using this ``improved'' concatenation product between essential paths, one can then define --besides the composition product-- two other interesting algebra structures on the algebra of grade-preserving endomorphisms. One of these algebra structures (denoted $\star$) is associated with a filtrated convolution product and gives rise to a weak Hopf algebra structure: this is the Double Triangle Algebra (DTA) introduced by A. Ocneanu in \cite{Ocneanu:paths}. It was studied elsewhere (see \cite{PetZub:Oc}, \cite{Gil:thesis}, \cite{CoTr-DTE}). Another algebra structure, that we call the graded convolution product or simply\footnote{Both $\bullet$ and $\star$ can be understood as convolution products. Given two elements of grades $p$ and $q$, the composition product is trivial unless $p=q$, the graded product gives an element of grade $p+q$ whereas the ``filtered product'' can be decomposed along vector subspaces of all grades $p+q, p+q-2, p+q-4,\ldots $} {\sl graded product}, and again denote by the symbol $\bullet$, can be obtained from the former product by projection on its component of highest degree. However, it is possible and useful to study it directly without making any reference to its filtered relative. This is what we do. Both products $\bullet$ and $\star$ are compatible with the composition of endomorphisms $\circ$. Compatibility here means that the associated coproducts are algebra homomorphisms with respect to the composition product. The use of a particular scalar product allows one to study these three product structures on the same underlying vector space (the diagonal graded tensor square of the space of essential paths). The bialgebra associated with the pair $(\circ, \star)$ is known to be a particular kind of quantum groupoid. However, in this paper we are interested in the bialgebra associated with the pair $(\circ, \bullet)$, and we show that it has a weaker structure: it is a weak bialgebra but not a weak Hopf (bi)-algebra. 
The whole theory should apply when the diagrams that we consider (usually ADE Dynkin diagrams) are replaced by members of higher Coxeter-Dynkin systems \cite{DiFZub,Ocneanu:Bariloche}: the vector space spanned by the vertices of the chosen diagram is, in particular, a module over the graph algebra associated with a Weyl alcove of $SU(N)$ at some level ---such generalised ${\cal A}$ diagrams are indeed obtained by truncation of the Weyl chamber of $SU(N)$. These systems also admit orbifolds ---${\cal D}$ diagrams--- and exceptional diagrams. In the higher cases, the grading does not refer to the positive integer that measures the length of a path, but to a particular Young diagram. Therefore the grading is defined with respect to a more general monoid (actually an integral positive cone), and the adjective ``filtrated'' should be understood accordingly. Our paper is organized as follows. In the first section we consider the vector space of all paths on a graph, and show that it is a non-unital bialgebra. In section~\ref{sec:the-EssPaths-algebra} we restrict our attention to the subspace of essential paths and show that we need to introduce a new associative multiplication, $\bullet$, involving an appropriate projection operator, in order to ensure stability. This vector space of essential paths is an algebra, but not a bialgebra. In the third section we show that the graded algebra of endomorphisms of essential paths can be endowed with a new product compatible with the grading, for which this space is a weak bialgebra. The non-trivial condition ensuring compatibility of the coproduct with the multiplication of endomorphisms is exemplified at the end of section~3, in the case of the graph $E_6$. The equation expressing this general condition is obtained in appendix~A, and the general proof showing that such a compatibility condition always holds in our situation is given in section~\ref{sec:weak_bialgebra_condition_proof}. In the fifth section we illustrate, in the case of the graph $A_2$, the fact that the two bialgebra structures respectively associated with the products $(\circ, \star)$ and $(\circ, \bullet)$ differ. Appendix~A is quite general: we consider an arbitrary algebra $A$ endowed with a scalar product and we show that, although its endomorphism algebra can be given a coalgebra structure, some non-trivial relation has to be satisfied in order for this space to be a bialgebra ---the coproduct on $End(A)$ should be a homomorphism. We also study what happens when the algebra $A$ is graded and when we replace $End(A)$ by a graded diagonal sum of endomorphisms.

\section{The space of paths on a graph\label{sec:paths_on_graph}}

Take a connected and simply connected graph $G$. For the time being, we do not impose any further requirements. At a later stage we will take $G$ to be a tree and, even more precisely, a Dynkin diagram of type ADE. For instance, a possible graph could be $G=E_{6}$.

\begin{figure}[htb] \unitlength 0.6mm \begin{center} \begin{picture}(95,35) \thinlines \multiput(25,10)(15,0){5}{\circle*{2}} \put(55,25){\circle*{2}} \thicklines \put(25,10){\line(1,0){60}} \put(55,10){\line(0,1){15}} \put(25,3){\makebox(0,0){$[\sigma_0]$}} \put(40,3){\makebox(0,0){$[\sigma_1]$}} \put(55,3){\makebox(0,0){$[\sigma_2]$}} \put(70,3){\makebox(0,0){$[\sigma_5]$}} \put(85,3){\makebox(0,0){$[\sigma_4]$}} \put(63,27){\makebox(0,0){$[\sigma_3]$}} \end{picture} \label{graphE6} \end{center} \end{figure}

Consider the set of elementary paths on $G$.
These are just ordered lists of \emph{neighboring} points $a_{i}$ (or edges $\xi _{k}$ joining two neighboring points) of the graph, \[ [a_{0},a_{1},a_{2},\cdots ,a_{L-1},a_{L}]\qquad a_{i}\in G\] This is clearly a path of length $L$, starting at $a_{0}$ and ending at $a_{L}$. Build a vector space, called $Paths$, by simply considering formal linear combinations over $\mathbb{C}$ of elementary paths. Now define the product of elementary paths by concatenation, i.e., by joining the matching endpoints of the two paths (say, of lengths $L$ and $K$) one after the other, \begin{eqnarray*} [a_{0},a_{1},\cdots ,a_{L}]\, [b_{0},b_{1},\cdots ,b_{K}] & = & \left\{ \begin{array}{l} [a_{0},a_{1},\cdots ,a_{L},b_{1},\cdots ,b_{K}] \quad \textrm{if} \ a_{L}=b_{0}\\ 0 \qquad \qquad \qquad \qquad \qquad \qquad \textrm{otherwise} \end{array}\right. \end{eqnarray*} Such an operation creates another elementary path of length $L+K$. This product extends by linearity to the whole vector space, and is associative (this is trivial to see). Moreover, the resulting algebra is \emph{graded} by the length of the paths. Consider additionally the zero-length paths $[a_{0}]$; there is one such path for each point $a_{0}$ of the graph. If the graph is finite, the sum over all points of the graph of the corresponding zero-length paths will be a (left and right) unit for this algebra,\[ \mathbf{1}=\sum _{a_{0}\in G}\: [a_{0}]\] Therefore $Paths$ is a graded associative algebra with unit. We could also define a coalgebra structure on this space, introducing a coproduct that would be group-like for all elementary paths $p$: \[ \Delta p=p\otimes p\] and extending it by linearity. It is straightforward to see that it is coassociative and that it is an algebra homomorphism, $\Delta (pp')=\Delta p\: \Delta p'$. Additionally, the (linear) operation \[\epsilon (p)=1\qquad \textrm{for all elementary} \; p \] is a counit for $\Delta $. The above defined unit is not compatible with the coproduct ($\Delta \mathbf{1}\neq \mathbf{1}\otimes \mathbf{1}$). $Paths$ is therefore a (non-unital) bialgebra. It is infinite dimensional even if the graph $G$ is finite, as paths can be made arbitrarily long by backtracking on $G$. However, as we shall see in the next section, the space $\mathcal{E}$ of essential paths that we consider in this paper is only a vector subspace but not a subalgebra of $Paths$. For this reason, a different approach will be required.

\section{The algebra $\mathcal{E}$ of essential paths\label{sec:the-EssPaths-algebra}}

\subsection{Essential paths on a graph\label{sec:ess-paths-on-graph}}

We will now briefly introduce essential paths on the given graph $G$. Consider first the adjacency matrix of the graph, and call $\beta $ its maximal eigenvalue. Also call $\vec{\mu }=(\mu _{0}=1,\mu _{1},\cdots ,\mu _{N})$ the corresponding eigenvector, normalized such that the entry $\mu _{0}$ associated to a distinguished point $0$ of $G$ is equal to $1$ (this component is minimal). $\vec{\mu }$ is called the Perron-Frobenius eigenvector, and all its components are strictly positive.
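For concreteness, the Perron-Frobenius data of a given diagram are easily computed numerically. The short Python sketch below (an illustration of ours, using the vertex labelling of the $E_6$ diagram displayed in Sect.~\ref{sec:paths_on_graph}) returns $\beta$ and $\vec{\mu}$:
\begin{verbatim}
import numpy as np

# Adjacency matrix of E6, with the vertex labelling of the diagram above:
# the chain s0 - s1 - s2 - s5 - s4, plus s3 attached to s2.
edges = [(0, 1), (1, 2), (2, 5), (5, 4), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

vals, vecs = np.linalg.eigh(A)
beta = vals[-1]          # maximal eigenvalue: beta = 2 cos(pi/12) ~ 1.932
mu = np.abs(vecs[:, -1])
mu /= mu[0]              # normalization mu_0 = 1 (the minimal component)
\end{verbatim}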
The next step is to introduce the linear operators \[ C_{k}:Paths\longmapsto Paths\qquad \qquad k=1,2,3,\cdots \] which act on elementary paths as follows: on a path of length $L \leq k$, $C_k$ gives zero; otherwise ($L>k$) its action is given by \begin{eqnarray*} & & C_{k}\left([a_{0},a_{1},\cdots ,a_{k-1},a_{k},a_{k+1},a_{k+2},\cdots ,a_{L}]\right) \\ & & \qquad \qquad \qquad = \: \delta _{a_{k-1},a_{k+1}}\: \sqrt{\frac{\mu _{a_{k}}}{\mu _{a_{k-1}}}} \quad [a_{0},a_{1},\cdots ,a_{k-1},a_{k+2},\cdots ,a_{L}] \end{eqnarray*} These operators obviously preserve the end-points of the paths they act upon, and shorten their length by $2$ units ---removing a backtrack in the path at position $k$, if any, and giving $0$ otherwise. The {\em essential paths} are defined as those elements of $Paths$ annihilated by all the $C_{k}$'s. They constitute, of course, a vector subspace% \footnote{If the graph is a Dynkin diagram of type ADE then $\beta < 2 $ and $\mathcal{E}$ is finite dimensional, as there are essential paths up to a certain length only, namely from $0$ to $\kappa - 1$, where $\kappa$ is the Coxeter number of the diagram defined by $\beta = 2\cos (\pi/\kappa)$.} $\mathcal{E}\subset Paths$:\[ \mathcal{E}=\left\{ p\in Paths\, \diagup \, C_{k}p=0\; \forall k\right\} \] We will use $\mathcal{E}_{l}$ to denote the subspace of essential paths of length $l$, and $\mathcal{E}(a\stackrel{l}{\longrightarrow }b)$ if we want to further restrict the set to those paths with definite starting point $a$ and ending point $b$. On the whole of $Paths$ there is a natural scalar product, defined on elementary paths $p,p'$ by \[ \left\langle p,p'\right\rangle =\delta _{p,p'}\qquad \qquad (p,p'\; \textrm{elementary})\] and consequently also an orthogonal projector \[ P:Paths\longmapsto \mathcal{E}\] As paths with different lengths or end-points are orthogonal, $P$ can be decomposed as a sum of projectors on each subspace, \begin{eqnarray*} P & = & \sum _{\begin{array}{c} a,b\in G\\ l\in \mathbb{N}\end{array} }\, P_{ab}^{l}\\ P_{ab}^{l} & : & Paths(a\stackrel{l}{\longrightarrow }b)\longmapsto \mathcal{E}(a\stackrel{l}{\longrightarrow }b) \end{eqnarray*} We had an algebra structure on $Paths$, but $\mathcal{E}$ is only a vector subspace and not a subalgebra of $Paths$. Therefore, a new product has to be found on $\mathcal{E}$ if we want to endow it with an algebra structure. The simplest one (it must also be somehow related to the one on $Paths$!) is: \begin{equation} e\bullet e'\equiv P(ee')\label{eq:product-on-EP} \end{equation} where $e,e'$ are essential paths, $P$ is the above orthogonal projector and the product $ee'$ is the concatenation product in $Paths$. We shall prove below the associativity property and find a unit element for this product. \subsubsection{The grading of $\mathcal{E}$} As we did with $Paths$, the space of essential paths can be graded by the length of the paths,\[ \mathcal{E}=\bigoplus _{l\in \mathbb{N}}\mathcal{E}_{l}\] The product $\bullet $ is clearly compatible with this grading because $\left\langle \: ,\, \right\rangle $ vanishes for paths of different lengths; hence the projector $P$ also preserves the length. For this reason, we shall call it the \underline{graded} product on $\mathcal{E}$. As stated in the Introduction, it is possible to define also a \underline{filtered} product on the same space (which is called $\times$ in \cite{Coque-Cocoyoc}), such that $p \times p'$ can be decomposed into paths of length smaller than or equal to $length(p) + length(p')$.
Moreover, the graded product $\bullet$ could be obtained from the filtered one by restriction to the component of highest length, although this approach will not be followed here. \subsubsection{Example of essential paths on $E_{6}$} The space $\mathcal{E}(E_{6})$ can be constructed using the above definitions, and is of dimension $156$. More precisely, the dimensions of the graded components are $(6,10,14,18,20,20,20,18,14,10,6)$. For instance, the subspace $\mathcal{E}_{2}$ of paths of length $2$ has dimension $14$. It is composed of a subspace corresponding to paths with different endpoints plus a $4$-dimensional subspace of paths with coinciding ends. Inside the latter there is a 2-dimensional subspace of paths which start and end at the point $2$, which is generated by: \begin{eqnarray*} \mathcal{E}(2\stackrel{2}{\longrightarrow }2) & = & \left\{ \frac{1}{N_{1}}\left([2,3,2]-\sqrt{\frac{\mu _{3}}{\mu _{1}}}\, [2,1,2]\right)\right.,\\ & & \left.\frac{1}{N_{2}}\left([2,5,2]-\frac{\sqrt{\mu _{3}\mu _{5}}}{\mu _{1}+\mu _{3}}\, [2,3,2]-\frac{\sqrt{\mu _{1}\mu _{5}}}{\mu _{1}+\mu _{3}}\, [2,1,2]\right)\right\} \\ & = & \left\{ \frac{1}{N_{1}}\left([2,3,2]-\sqrt{-1+\sqrt{3}}\, [2,1,2]\right)\right.\\ & & \left.\frac{1}{N_{2}}\left([2,5,2]-\frac{1}{\sqrt{3}}\, \sqrt{-1+\sqrt{3}}\, [2,3,2]-\frac{1}{\sqrt{3}}\, [2,1,2]\right)\right\} \end{eqnarray*} These paths are orthogonal, and can be normalized with an appropriate choice of the coefficients $N_{i}$. \subsection{Associativity} The product $\bullet $ in $\mathcal{E}$ is associative. In fact, we will prove a stronger condition for the operator $P$, which implies associativity of $\bullet $:\begin{equation} P(P(p_{1})P(p_{2}))=P(p_{1}p_{2})\qquad \textrm{for}\; \textrm{any}\quad p_{i}\in Paths\label{eq:assoc_condition_on_P}\end{equation} To see this take $\; e,e',e''\in \mathcal{E}\; $ then\begin{eqnarray*} (e\bullet e')\bullet e'' & = & P(P(ee')\, e'')=P(P(ee')\, P(e''))=P((ee')\, e'')\\ & = & P(e\, (e'e''))=P(P(e)\, P(e'e''))=P(e\, P(e'e''))\\ & = & e\bullet (e'\bullet e'') \end{eqnarray*} The condition (\ref{eq:assoc_condition_on_P}) may also be rewritten in the completely equivalent way\begin{eqnarray*} P(P(p_{1})P(p_{2}))=P(p_{1}p_{2}) & \Longleftrightarrow & P(P(p_{1})P(p_{2})-p_{1}p_{2})=0\\ & \Longleftrightarrow & I\equiv \left\langle e,p_{1}p_{2}-P(p_{1})P(p_{2})\right\rangle =0\qquad \textrm{for}\; \textrm{all}\quad e\in \mathcal{E} \end{eqnarray*} Now we have to show that $\; I=0\; $ for any $\; p_{i}\in Paths$: \begin{itemize} \item If $\; p_{1},p_{2}\in \mathcal{E}\subset Paths\; $ then $\quad P(p_{i})=p_{i}$ $\quad \Longrightarrow \quad $ $p_{1}p_{2}-P(p_{1})P(p_{2})=0$ $\quad \Longrightarrow \quad $ $I=0$. \item If $\; p_{1}\equiv e_{1}\in \mathcal{E}\; $ but $\; p_{2}\in Paths\; $ then \[ I=\left\langle e,e_{1}(p_{2}-P(p_{2}))\right\rangle =\left\langle e,e_{1}n\right\rangle \] Here $\; n\equiv p_{2}-P(p_{2})\in \mathcal{E}^{\perp }\; $ is orthogonal to $\mathcal{E}$. 
Without loss of generality, we may assume that the paths involved in $I$ have well defined end-points and length (it is enough to show associativity for such paths; associativity for linear combinations of those then follows immediately): \begin{eqnarray*} p_{1}=e_{1}=e_{1}(a\stackrel{l_{1}}{\longrightarrow }b) & & \\ p_{2}=p_{2}(b'\stackrel{l_{2}}{\longrightarrow }c) & \; \Rightarrow \; & n=n(b'\stackrel{l_{2}}{\longrightarrow }c) \end{eqnarray*} To get a non-trivial scalar product in $I$ we must also take $b'=b$ and \[ e=e(a\stackrel{l_{1}+l_{2}}{\longrightarrow }c)\] As will be proven in subsection \ref{sub:decomposition-of-EP}, such an essential path $e$ can always be decomposed as: \[ e=\sum _{\begin{array}{c} v\in G\\ i_{v}\end{array} }e_{i_{v}}'(a\stackrel{l_{1}}{\longrightarrow }v)\: e_{i_{v}}''(v\stackrel{l_{2}}{\longrightarrow }c)\] where the sum runs over all intermediate points $v$ appearing in $e$ after $l_{1}$ steps, and possibly several $e_{i_{v}}'$, $e_{i_{v}}''$ for each $v$. Essentiality of $e$ and linear independence of paths of different end-points imply that all the $e_{i_{v}}'$ and $e_{i_{v}}''$ are also essential. But now it is easy to see that \begin{eqnarray*} I=\left\langle e,e_{1}n\right\rangle & = & \left\langle \sum _{v,i_{v}}\, e_{i_{v}}'(a\stackrel{l_{1}}{\longrightarrow }v)\: e_{i_{v}}''(v\stackrel{l_{2}}{\longrightarrow }c)\; ,\; e_{1}(a\stackrel{l_{1}}{\longrightarrow }b)\, n(b\stackrel{l_{2}}{\longrightarrow }c)\right\rangle \\ & = & \sum _{i_{b}}\left\langle e_{i_{b}}'(a\stackrel{l_{1}}{\longrightarrow }b)\: e_{i_{b}}''(b\stackrel{l_{2}}{\longrightarrow }c)\; ,\; e_{1}n\right\rangle \\ & = & \sum _{i_{b}}\left\langle e_{i_{b}}'\: ,\: e_{1}\right\rangle \left\langle e_{i_{b}}''\: ,\: n\right\rangle \end{eqnarray*} Therefore we get $I=0$ because $\; n\perp \mathcal{E}$, so $\; \left\langle e_{i_{b}}''\: ,\: n\right\rangle =0$. \item If both $\; p_{1},p_{2}\in Paths\; $ then $\; p_{i}=e_{i}+n_{i}\; $ with $\; P(p_{i})=e_{i}$. Therefore\begin{eqnarray*} I & = & \left\langle e,(e_{1}+n_{1})(e_{2}+n_{2})-e_{1}e_{2}\right\rangle =\left\langle e,e_{1}n_{2}+n_{1}e_{2}+n_{1}n_{2}\right\rangle \\ & = & 0 \end{eqnarray*} due to the previous case. \end{itemize}

\subsection{Unit element}

The algebra $\mathcal{E}$ is unital, and the unit element is clearly the same as the one in $Paths$, explicitly given by\begin{equation} \mathbf{1}_{\mathcal{E}}=\sum _{v\in G}e(v\stackrel{0}{\longrightarrow }v)\label{eq:unit_on_EP}\end{equation} where the sum extends over all the points of the graph, and the essential paths $e(v\stackrel{0}{\longrightarrow }v)$ are obviously nothing more than the trivial paths $e(v\stackrel{0}{\longrightarrow }v)\equiv [v]$.

Concluding this section, we emphasize that $\mathcal{E}$ is not only a vector space but also an associative algebra. Moreover, it is endowed with a (canonical) scalar product obtained by restriction from the one on $Paths$. It has therefore also a coalgebra structure% \footnote{Identify elements with their duals, and map the product to the dual coproduct.% }, which is not a priori very interesting since the comultiplication will not be an algebra homomorphism in general. The coproduct that we had defined for $Paths$ does not work either (the compatibility property with the product does not hold) since the product itself was modified. Therefore, contrary to $Paths$, the vector space $\mathcal{E}$ endowed with the graded multiplication $\bullet$ does not have a bialgebra structure.
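As an aside, the operators $C_{k}$ and the spaces $\mathcal{E}_{l}$ are easy to compute numerically. The following minimal sketch (ours, purely illustrative and not part of the construction; it assumes Python with numpy, takes the Dynkin diagram $A_{3}$ as example graph, and uses for $\mu$ the components of the Perron-Frobenius eigenvector of the adjacency matrix) builds the matrices of the $C_{k}$ in the basis of elementary paths and obtains $\dim \mathcal{E}_{l}$ as the dimension of their common kernel:

\begin{verbatim}
import numpy as np

# Adjacency matrix of the A_3 Dynkin diagram (vertices 0,1,2).
ADJ = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
# mu_a: components of the Perron-Frobenius eigenvector of ADJ.
evals, evecs = np.linalg.eigh(ADJ)
mu = np.abs(evecs[:, np.argmax(evals)])

def elementary_paths(L):
    """All elementary paths [a_0,...,a_L], i.e. walks of length L on ADJ."""
    paths = [(a,) for a in range(len(ADJ))]
    for _ in range(L):
        paths = [p + (b,) for p in paths
                 for b in range(len(ADJ)) if ADJ[p[-1], b]]
    return paths

def C_matrix(k, L):
    """Matrix of the backtrack-removal operator C_k : Paths_L -> Paths_{L-2}."""
    dom, img = elementary_paths(L), elementary_paths(L - 2)
    M = np.zeros((len(img), len(dom)))
    for j, p in enumerate(dom):
        if p[k - 1] == p[k + 1]:                 # backtrack at position k
            q = p[:k] + p[k + 2:]                # drop a_k and a_{k+1}
            M[img.index(q), j] = np.sqrt(mu[p[k]] / mu[p[k - 1]])
    return M

def dim_essential(L):
    """dim E_L = dimension of the common kernel of C_1,...,C_{L-1}."""
    if L < 2:
        return len(elementary_paths(L))          # no C_k acts: all essential
    stacked = np.vstack([C_matrix(k, L) for k in range(1, L)])
    return stacked.shape[1] - np.linalg.matrix_rank(stacked)

print([dim_essential(L) for L in range(4)])      # -> [3, 4, 3, 0]
\end{verbatim}

For $A_{3}$ (Coxeter number $\kappa =4$) this prints the dimensions $3,4,3,0$ for lengths $0$ to $3$, in agreement with the counting by the matrices $F_{p}$ recalled in subsection \ref{sub:decomposition-of-EP}.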
\section{The weak-$*$-bialgebra $End_{\#}(\mathcal{E})$}

We have already shown in section \ref{sec:the-EssPaths-algebra} that the space $\mathcal{E}$ of essential paths constitutes a graded unital associative algebra. Applying the general construction of Appendix~A (see in particular Eq.~(\ref{eq:homo_condition_graded})) to the particular case of the graded algebra $A=\mathcal{E}$, we show now that a corresponding weak bialgebra structure on the space of its graded endomorphisms does exist. Moreover, we shall see that it has a compatible star operation. We remind the reader again that the product $\bullet$ that we consider now on $End_{\#}(\mathcal{E})$ is graded but that it is possible to construct another product (called $\star$) on the same vector space, which is filtered rather than graded. Moreover, the structure corresponding to the pair ($\circ$, $\star$) is a weak Hopf algebra. This other construction is not studied in the present paper. What we obtain here instead, is a weak bialgebra structure for the pair ($\circ$, $\bullet$).

\subsection{Product and coproduct}

$\mathcal{E}$ being a graded algebra, its endomorphisms can also be graded. We therefore consider the space $\mathcal{B}$ of length preserving endomorphisms on $\mathcal{E}$, namely \begin{eqnarray*} \mathcal{B}\: \equiv \: End_{\#}(\mathcal{E}) & = & \bigoplus _{n}End(\mathcal{E}_{n})\\ & \stackrel{iso}{\simeq } & \bigoplus _{n}\mathcal{E}_{n}\otimes \mathcal{E}_{n}^{*} \end{eqnarray*} As discussed in section \ref{sub:graded-End(A)}, we now consider the convolution product $\bullet $ on the space of these endomorphisms. Recalling (\ref{eq:convolution_product}) we see that it is determined by the product on the algebra $\mathcal{E}$, which we had also denoted by $\bullet $, meaning concatenation of paths plus re-projection on the essential subspace. Explicitly, on monomials we have\[ (e_{i}\otimes e^{j})\bullet (e_{k}\otimes e^{l})=e_{i}\bullet e_{k}\otimes e^{j}\bullet e^{l}\] We also take the coproduct (\ref{eq:graded_composition_coproduct}), which reads\[ \Delta \left(e_{i}\otimes e^{j}\right)=\sum _{I}\left(e_{i}\otimes e^{(n)I}\right)\otimes \left(e_{I}^{(n)}\otimes e^{j}\right)\qquad \qquad \textrm{whenever}\quad e_{i}\in \mathcal{E}_{n}\: ,\; e^{j}\in \mathcal{E}_{n}^{*}\] but remark that the compatibility condition (\ref{eq:homo_condition_graded}) still remains to be verified. This will be done for a general graph later (see section \ref{sec:weak_bialgebra_condition_proof}), but for any given graph it is interesting to explicitly check equation (\ref{eq:homo_condition_graded}); we illustrate this below in the case of the graph $E_{6}$.

\subsubsection{Case $E_{6}$}

As an example, we look at the highly non-trivial case of the graph $E_{6}$. We shall consider normalized essential paths of length $4$ on $E_6$ and show how they appear in the $\bullet$ products of essential paths of length $2$ (this is just one possibility among others, of course). We have a natural coproduct on the dual but also, using the chosen scalar product, a coproduct on the same space of essential paths. Hence, we may use the previous calculation to find the expression of the coproduct $D$ of a particular essential path of length $4$ (at least, that part of it which decomposes on the tensor products of essential paths of length $2$).
Finally, we check that the compatibility condition described by Eq.~(\ref{eq:homo_condition_graded}) is satisfied, so that we can be sure, in advance, that the corresponding graded endomorphism algebra is indeed a weak bialgebra. The subspace $\mathcal{E}(2\stackrel{4}{\longrightarrow}2)$ of essential paths of length $4$ is $3$-dimensional and generated by the orthonormalized essential paths $e_{1}(2\stackrel{4}{\longrightarrow }2)$, $e_{2}(2\stackrel{4}{\longrightarrow }2)$ and $e_{3}(2\stackrel{4}{\longrightarrow }2)$. With our convention for choosing the basis, the first two read explicitly, up to a normalization factor,\begin{eqnarray*} e_{1}(2\stackrel{4}{\longrightarrow }2) & \propto & \frac{1}{\sqrt{2}}\, \sqrt{1+\sqrt{3}}\, \left([2,3,2,1,2]-[2,3,2,5,2]\right)\\ & & -\left([2,5,2,1,2]-[2,5,2,5,2]\right)-\sqrt{1+\sqrt{3}}\: [2,5,4,5,2]\\ e_{2}(2\stackrel{4}{\longrightarrow }2) & \propto & \sqrt{1+\sqrt{3}}\: [2,1,0,1,2]-\left([2,1,2,1,2]-[2,1,2,5,2]\right)\\ & & +\frac{\sqrt{3}}{2}\, \sqrt{-1+\sqrt{3}}\, \left([2,3,2,1,2]-[2,3,2,5,2]\right)\\ & & +\frac{1}{2}\left(-1+\sqrt{3}\right)\, \left([2,5,2,1,2]-[2,5,2,5,2]\right)\\ & & +\frac{1}{\sqrt{2}}\, \sqrt{-1+\sqrt{3}}\: [2,5,4,5,2] \end{eqnarray*} The generator $e_{2}(2\stackrel{4}{\longrightarrow }2)$ appears as a component in some products of essential paths of length $2$, namely in those products involving paths which have the point $2$ as one of the endpoints. These are:\begin{eqnarray*} e(0\stackrel{2}{\longrightarrow }2) & = & [0,1,2]\\ e(2\stackrel{2}{\longrightarrow }0) & = & [2,1,0]\\ & & \\ e(2\stackrel{2}{\longrightarrow }4) & = & [2,5,4]\\ e(4\stackrel{2}{\longrightarrow }2) & = & [4,5,2]\\ & & \\ e_{1}(2\stackrel{2}{\longrightarrow }2) & \propto & -\sqrt{-1+\sqrt{3}}\: [2,1,2]+[2,3,2]\\ e_{2}(2\stackrel{2}{\longrightarrow }2) & \propto & -[2,1,2]-\sqrt{-1+\sqrt{3}}\: [2,3,2]+\sqrt{3}\: [2,5,2] \end{eqnarray*} The non-trivial products having a contribution in the direction $e_{2}(2\stackrel{4}{\longrightarrow }2)$ are \begin{eqnarray*} e(2\stackrel{2}{\longrightarrow }0)\bullet e(0\stackrel{2}{\longrightarrow }2) & = & \sqrt{1-\frac{1}{\sqrt{3}}}\: e_{2}(2\stackrel{4}{\longrightarrow }2)+\cdots \\ e_{1}(2\stackrel{2}{\longrightarrow }2)\bullet e_{1}(2\stackrel{2}{\longrightarrow }2) & = & -\frac{1}{\sqrt{6\sqrt{3}}}\: e_{2}(2\stackrel{4}{\longrightarrow }2)+\cdots \\ e_{1}(2\stackrel{2}{\longrightarrow }2)\bullet e_{2}(2\stackrel{2}{\longrightarrow }2) & = & -\frac{1}{3}\sqrt{\frac{3}{2}+\sqrt{3}}\: e_{2}(2\stackrel{4}{\longrightarrow }2)+\cdots \\ e_{2}(2\stackrel{2}{\longrightarrow }2)\bullet e_{1}(2\stackrel{2}{\longrightarrow }2) & = & -\sqrt{-\frac{4}{3}+\frac{7}{3\sqrt{3}}}\: e_{2}(2\stackrel{4}{\longrightarrow }2)+\cdots \\ e_{2}(2\stackrel{2}{\longrightarrow }2)\bullet e_{2}(2\stackrel{2}{\longrightarrow }2) & = & -\frac{1}{3}\sqrt{-3+2\sqrt{3}}\: e_{2}(2\stackrel{4}{\longrightarrow }2)+\cdots \\ e(2\stackrel{2}{\longrightarrow }4)\bullet e(4\stackrel{2}{\longrightarrow }2) & = & \sqrt{\frac{3}{2}-\frac{5}{2\sqrt{3}}}\: e_{2}(2\stackrel{4}{\longrightarrow }2)+\cdots \end{eqnarray*} The factors preceding $e_{2}(2\stackrel{4}{\longrightarrow }2)$ in the above formulas are the coefficients $m_{ij}^{k}$ which enter (\ref{eq:m_A_basis}) and (\ref{eq:graded_sum_mm_condition}).
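These six coefficients can be checked numerically. The following small sketch (ours, assuming Python with numpy) verifies that their squares sum to unity, which is the statement spelled out next:

\begin{verbatim}
import numpy as np

s3 = np.sqrt(3.0)
# The six coefficients of e_2(2 -4-> 2) in the products listed above:
m = [ np.sqrt(1 - 1/s3),
     -1/np.sqrt(6*s3),
     -np.sqrt(3/2 + s3)/3,
     -np.sqrt(-4/3 + 7/(3*s3)),
     -np.sqrt(-3 + 2*s3)/3,
      np.sqrt(3/2 - 5/(2*s3)) ]
print(sum(c**2 for c in m))   # -> 1.0 up to rounding errors
\end{verbatim}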
The sum of the squares of the above six coefficients equals $1$, and this shows, in a particular example, how condition Eq.~(\ref{eq:homo_condition_graded}) can be checked (remember that it should be satisfied for each definite grading of the coproducts of all elements). Using (\ref{eq:D_A_basis}) we may also write $De_{2}(2\stackrel{4}{\longrightarrow }2)$ as \begin{eqnarray*} && \sqrt{1-\frac{1}{\sqrt{3}}}\: e(2\stackrel{2}{\longrightarrow }0)\otimes e(0\stackrel{2}{\longrightarrow }2) -\frac{1}{\sqrt{6\sqrt{3}}}\: e_{1}(2\stackrel{2}{\longrightarrow }2)\otimes e_{1}(2\stackrel{2}{\longrightarrow }2)\\ && -\frac{1}{3}\sqrt{\frac{3}{2}+\sqrt{3}}\: e_{1}(2\stackrel{2}{\longrightarrow }2)\otimes e_{2}(2\stackrel{2}{\longrightarrow }2) -\sqrt{-\frac{4}{3}+\frac{7}{3\sqrt{3}}}\: e_{2}(2\stackrel{2}{\longrightarrow }2)\otimes e_{1}(2\stackrel{2}{\longrightarrow }2)\\ && -\frac{1}{3}\sqrt{-3+2\sqrt{3}}\: e_{2}(2\stackrel{2}{\longrightarrow }2)\otimes e_{2}(2\stackrel{2}{\longrightarrow }2) +\sqrt{\frac{3}{2}-\frac{5}{2\sqrt{3}}}\: e(2\stackrel{2}{\longrightarrow }4)\otimes e(4\stackrel{2}{\longrightarrow }2)\\ && +\cdots \end{eqnarray*} where the missing terms include tensor products of paths of lengths $(3,1)$, $(1,3)$, $(0,4)$, and $(4,0)$. The last two are clearly $[2]\otimes e_{2}(2\stackrel{4}{\longrightarrow }2)+e_{2}(2\stackrel{4}{\longrightarrow }2)\otimes [2]$. As we will show explicitly% \footnote{This also follows immediately from (\ref{eq:homo_condition_graded}) once this requirement is checked% } in section \ref{sec:weak_bialgebra_condition_proof}, this also means that the path $e_{2}(2\stackrel{4}{\longrightarrow }2)$ itself can be decomposed as \begin{eqnarray*} e_{2}(2\stackrel{4}{\longrightarrow }2) & = & \sqrt{1-\frac{1}{\sqrt{3}}}\: e(2\stackrel{2}{\longrightarrow }0)\bullet e(0\stackrel{2}{\longrightarrow }2)\\ & & -\frac{1}{\sqrt{6\sqrt{3}}}\: e_{1}(2\stackrel{2}{\longrightarrow }2)\bullet e_{1}(2\stackrel{2}{\longrightarrow }2)\\ & & -\frac{1}{3}\sqrt{\frac{3}{2}+\sqrt{3}}\: e_{1}(2\stackrel{2}{\longrightarrow }2)\bullet e_{2}(2\stackrel{2}{\longrightarrow }2)\\ & & -\sqrt{-\frac{4}{3}+\frac{7}{3\sqrt{3}}}\: e_{2}(2\stackrel{2}{\longrightarrow }2)\bullet e_{1}(2\stackrel{2}{\longrightarrow }2)\\ & & -\frac{1}{3}\sqrt{-3+2\sqrt{3}}\: e_{2}(2\stackrel{2}{\longrightarrow }2)\bullet e_{2}(2\stackrel{2}{\longrightarrow }2)\\ & & +\sqrt{\frac{3}{2}-\frac{5}{2\sqrt{3}}}\: e(2\stackrel{2}{\longrightarrow }4)\bullet e(4\stackrel{2}{\longrightarrow }2) \end{eqnarray*} We could write a similar decomposition using instead products of paths of lengths $1$ and $3$, or $3$ and $1$, or even the trivial ones $0$ and $4$, or $4$ and $0$. \subsection{Unit and counit} There is an obvious unit for the product $\bullet $, which works in both the graded and non-graded versions of the endomorphisms of $\mathcal{E}$. Using (\ref{eq:unit_on_EP}), and the dualization map associated with the scalar product (see (\ref{eq:dualization_map})), it can be written as \begin{equation} \mathbf{1}_{\mathcal{B}}\equiv \mathbf{1}_{\mathcal{E}} \otimes \sharp\left(\mathbf{1}_{\mathcal{E}}\right) \label{eq:unit_on_End(EP)} \end{equation} As we already have a coproduct, we can find the counit using the axioms it satisfies. In particular\[ \left(id\otimes \epsilon \right)\Delta (a\otimes u)=a\otimes u\] requires\begin{equation} \epsilon (a\otimes u)\equiv u(a)\label{eq:counit_on_End(EP)} \end{equation} or, equivalently, $\epsilon (\rho )=\mathrm{Tr}(\rho )$. 
\subsection{Comonoidality} The algebra $\mathcal{B} \equiv End_{\#}(\mathcal{E})$ we have defined is not a bialgebra in the usual sense, since \begin{eqnarray} \Delta \mathbf{1}_{\mathcal{B}} & \neq & \mathbf{1}_{\mathcal{B}}\otimes \mathbf{1}_{\mathcal{B}} \end{eqnarray} therefore $\mathcal{B}$ is a {\em weak bialgebra}. It is, however, comonoidal, which means that it satisfies both the left and right comultiplicativity conditions of the unit \cite{Nill,BoNiSzl},\begin{eqnarray*} \Delta ^{2}\mathbf{1}_{\mathcal{B}} & = & \left(\Delta \mathbf{1}_{\mathcal{B}}\otimes \mathbf{1}_{\mathcal{B}}\right)\bullet \left(\mathbf{1}_{\mathcal{B}}\otimes \Delta \mathbf{1}_{\mathcal{B}}\right)\\ \Delta ^{2}\mathbf{1}_{\mathcal{B}} & = & \left(\mathbf{1}_{\mathcal{B}}\otimes \Delta \mathbf{1}_{\mathcal{B}}\right)\bullet \left(\Delta \mathbf{1}_{\mathcal{B}}\otimes \mathbf{1}_{\mathcal{B}}\right) \end{eqnarray*} The important consequence of this property is that the category of $End_{\#}(\mathcal{E})$-comodules is a monoidal category. We will check explicitly the first property. Using (\ref{eq:unit_on_EP}) and (\ref{eq:unit_on_End(EP)}) with $e_{v}^{(0)} \equiv [v]$ and $e^{(0)v}$ its dual, the LHS becomes\begin{eqnarray*} \Delta ^{2}\mathbf{1}_{\mathcal{B}} & = & \left(\Delta \otimes id\right)\Delta \mathbf{1}_{\mathcal{B}}\\ & = & \sum _{v,w,x,y\in G}\left(e_{v}^{(0)}\otimes e^{(0)x}\right)\otimes \left(e_{x}^{(0)}\otimes e^{(0)y}\right)\otimes \left(e_{y}^{(0)}\otimes e^{(0)w}\right) \end{eqnarray*} This has to be compared with the RHS \begin{eqnarray*} \left(\Delta \mathbf{1}_{\mathcal{B}}\otimes \mathbf{1}_{\mathcal{B}}\right)\bullet \left(\mathbf{1}_{\mathcal{B}}\otimes \Delta \mathbf{1}_{\mathcal{B}}\right) = \; \mathbf{1}_{(1)} \otimes \left(\mathbf{1}_{(2)}\bullet \mathbf{1}_{(1)'}\right)\otimes \mathbf{1}_{(2)'} \qquad \qquad \qquad \qquad \qquad & & \\ \qquad = \; \sum _{\begin{array}{c} v,w,x\\ v',w',x'\end{array} }\left(e_{v}^{(0)} \otimes e^{(0)x}\right) \otimes \left[\left(e_{x}^{(0)}\otimes e^{(0)w}\right)\bullet \left(e_{v'}^{(0)}\otimes e^{(0)x'}\right)\right] \otimes \left(e_{x'}^{(0)}\otimes e^{(0)w'}\right) & & \end{eqnarray*} Considering that the product in square brackets above is \begin{eqnarray*} \left(e_{x}^{(0)}\otimes e^{(0)w}\right)\bullet \left(e_{v'}^{(0)}\otimes e^{(0)x'}\right) & = & e_{x}^{(0)}e_{v'}^{(0)}\otimes e^{(0)w}e^{(0)x'}\\ & = & \delta _{x,v'}\, \delta _{w,x'}\, e_{x}^{(0)}\otimes e^{(0)w} \end{eqnarray*} we conclude that\[ \left(\Delta \mathbf{1}_{\mathcal{B}}\otimes \mathbf{1}_{\mathcal{B}}\right)\bullet \left(\mathbf{1}_{\mathcal{B}}\otimes \Delta \mathbf{1}_{\mathcal{B}}\right)=\sum _{\begin{array}{c} v,w,x\\ w'\end{array} }\left(e_{v}^{(0)}\otimes e^{(0)x}\right)\otimes \left(e_{x}^{(0)}\otimes e^{(0)w}\right)\otimes \left(e_{w}^{(0)}\otimes e^{(0)w'}\right)\] which obviously coincides with the expression we got above for $\Delta ^{2}\mathbf{1}_{\mathcal{B}}$ after an index relabeling. The check of the right comonoidality property is just a trivial variation of the above. Weak multiplicativity of the counit (the {}``dual'' property) does not hold in general. \subsection{Non-existence of an antipode} Given an algebra or coalgebra, the unit and counit must be unique if they exist at all, and this is so for the weak bialgebra $(\mathcal{B}=End_{\#}(\mathcal{E}),\bullet )$. One could hope to find a corresponding antipode to turn this bialgebra into a weak Hopf algebra but this is not possible, as we will show now. 
We refer the reader to \cite{Nill,BoNiSzl,BoSzl} for axioms concerning the antipode in weak Hopf algebras. There are slight variations among these references; for instance, \cite{Nill} first defines left and right pre-antipodes, as an intermediate step towards an antipode. This is not relevant here, as the axioms for an antipode in any of \cite{Nill,BoNiSzl,BoSzl} necessarily imply that $S$ must be such that \begin{equation} S(x_{(1)})\, x_{(2)}=\mathbf{1}_{(1)}\, \epsilon \left(x\, \mathbf{1}_{(2)}\right)\label{eq:antipode_condition} \end{equation} for any element $x$ of the Hopf algebra. Therefore, we can assume that this holds for an element $\rho \in End(\mathcal{E}_{n})$ of the form\[ \rho =a\otimes u\qquad \qquad \textrm{with}\quad a=a^{(n)}\in \mathcal{E}_{n}\; , \quad u=u^{(n)}\in \mathcal{E}_{n}^{*}\; ,\quad n \geq 1\] Using $\Delta \rho =\sum _{I}\left(a\otimes e^{(n)I}\right)\otimes \left(e_{I}^{(n)}\otimes u\right)$ and replacing it in (\ref{eq:antipode_condition}) we get\[ \sum _{I}S\left(a\otimes e^{(n)I}\right)\bullet \left(e_{I}^{(n)}\otimes u\right)\] on the LHS and\[ \sum _{v,w,x\in G}\left(e_{v}^{(0)}\otimes e^{(0)x}\right)\epsilon \left[\left(a\bullet e_{x}^{(0)}\right)\otimes \left(u\bullet e^{(0)w}\right)\right]\] on the RHS. In this last term the sum over points $x,w$ of the graph contributes only when $x$ is the ending point $a_{f}$ of the path $a$, and $w$ is the ending point of (the dual of) $u$. Therefore, we must have\begin{eqnarray*} \sum _{I}S\left(a\otimes e^{(n)I}\right)\bullet \left(e_{I}^{(n)}\otimes u\right) & = & u(a)\, \left(\sum _{v}e_{v}^{(0)}\right)\otimes e^{(0)a_{f}} \end{eqnarray*} We see now that this is not possible, as the LHS gives tensor product factors of grading $\geq n$ (the product of whatever comes out of the antipode times $e_{I}^{(n)}$ will always be a path of length at least $n$, or the null element), whereas the RHS involves paths of length zero and is non-null in the general case. Hence, it is not possible to find an operator $S$ which could satisfy the axiom (\ref{eq:antipode_condition}).

\subsection{The star operation}

We can define a star operation $\star$ on $Paths$ and $\mathcal{E}$ just by reversing the orientation of the paths: \begin{eqnarray*} p^{\star } & = & [a_{L},a_{L-1},\cdots ,a_{1},a_{0}]\, \equiv \, \tilde{p}\qquad \textrm{if}\qquad p=[a_{0},a_{1},\cdots ,a_{L}] \end{eqnarray*} and extending it by anti-linearity. Of course, if $e$ is essential then $e^{\star }$ will also be essential, and a basis of $\mathcal{E}$ can always be chosen so as to have both a vector $e_{i}$ and its conjugate in the basis, thus $e_{i}^{\star }\equiv e_{j}$ for some $j$.
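As a trivial illustration (our own sketch; linear combinations of elementary paths are encoded here as Python dictionaries mapping path tuples to complex coefficients), the star operation reads:

\begin{verbatim}
def star(vec):
    """Antilinear star operation: reverse each elementary path and
    conjugate the corresponding coefficient."""
    return {p[::-1]: c.conjugate() for p, c in vec.items()}

# Example: (1+2i) [0,1,2] + 3 [2,1,2]  ->  (1-2i) [2,1,0] + 3 [2,1,2]
v = {(0, 1, 2): 1 + 2j, (2, 1, 2): 3 + 0j}
print(star(v))
assert star(star(v)) == v    # the star operation is an involution
\end{verbatim}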
The antilinear mapping $\star $ turns $(\mathcal{E},\bullet )$ into a $\star $-algebra, because $P\,\star = \star \,P$; therefore\[ (a\bullet b)^{\star }=b^{\star }\bullet a^{\star }\] and\[ \left(\mathbf{1}_{\mathcal{E}}\right)^{\star }=\mathbf{1}_{\mathcal{E}}\] We can also introduce a conjugation on the algebra $\mathcal{B}=End_{\#}(\mathcal{E})$ by making use of the above one, defining\begin{eqnarray*} \star :End_{\#}(\mathcal{E}) & \longmapsto & End_{\#}(\mathcal{E}) \end{eqnarray*} on monomials by \[ \left(a\otimes u\right)^{\star }\equiv a^{\star }\otimes u^{\star }\] This operation trivially verifies\begin{eqnarray*} \left(\mathbf{1}_{\mathcal{B}}\right)^{\star } & = & \mathbf{1}_{\mathcal{B}}\\ \epsilon \left(\rho ^{\star }\right) & = & \overline{\epsilon \left(\rho \right)} \end{eqnarray*} and\[ \left(\rho \bullet \rho '\right)^{\star }=\left.\rho '\right.^{\star }\bullet \rho ^{\star }\] To prove that\[ \Delta \left(\rho ^{\star }\right)=\left(\Delta \rho \right)^{\star \otimes \star }\] one should only note that\[ \sum _{J}\left(e^{J}\right)^{\star }\otimes e_{J}^{\star }=\sum _{J}e^{J}\otimes e_{J}\] for each orthonormal sub-basis $\left\{ e_{J}\right\} =\left\{ e_{J}^{(n)}\right\} $ of definite grading $n$, which holds because we can always choose the $e_{J}$ such that $e_{J}^{\star }=e_{I}$ for some $I$. This star operation is a normal (non-twisted) one; however, it would also be possible to introduce a twisted version \cite{CoGaTr-Stars}.

\section{Proof of the weak bialgebra compatibility condition\label{sec:weak_bialgebra_condition_proof}}

We prove in this section that, in the case of the algebra of graded endomorphisms of essential paths, $\mathcal{B}=End_{\#}(\mathcal{E})$, the condition (\ref{eq:homo_condition_graded}) holds. This condition, as we have seen, ensures the homomorphism property of the coproduct. Some auxiliary but relevant results are obtained first.

\subsection{Decomposition of essential paths\label{sub:decomposition-of-EP}}

An essential path of well defined endpoints $a,b$ and length $L$,\[ e=e(a\stackrel{L}{\longrightarrow }b)\] is necessarily a linear combination \[ \sum _{p}\alpha _{p}\, p(a\stackrel{L}{\longrightarrow }b)\] where all the $p$ are elementary paths from $a$ to $b$. Of course we can now take $0\leq l\leq L$ and rewrite each $p$ using subpaths of lengths $l$, $L-l$, namely $p(a\stackrel{L}{\longrightarrow }b)=p'(a\stackrel{l}{\longrightarrow }v)\, p''(v\stackrel{L-l}{\longrightarrow }b)$ for some $v\in G$, and $p',p''$ elementary too. Therefore,\[ e=\sum _{v\in G}\; \sum _{p',p''}\: \alpha _{vp'p''}\, p'(a\stackrel{l}{\longrightarrow }v)\, p''(v\stackrel{L-l}{\longrightarrow }b)\] As $e$ is essential, in particular it must happen that $C_{k}e=0$ for $k=1,2,\cdots ,l-1$. But for these values of $k$ \[ 0=C_{k}e=\sum _{v\in G}\; \sum _{p''}\; C_{k}\left(\sum _{p'}\: \alpha _{vp'p''}\, p'(a\stackrel{l}{\longrightarrow }v)\right)\, p''(v\stackrel{L-l}{\longrightarrow }b)\] and using the linear independence of the elementary paths $p''$ we see that for each of the possible $p''$ the linear combination in parentheses must be essential:\[ \sum _{p'}\: \alpha _{vp'p''}\, p'(a\stackrel{l}{\longrightarrow }v)\equiv \sum _{i}\beta _{vip''}\, e_{i}^{\prime }(a\stackrel{l}{\longrightarrow }v)\] Here the index $i$ runs over a basis of essential paths of definite endpoints $a,v$ and length $l$.
Substituting this back into $e$, we get\[ e=\sum _{v\in G}\; \sum _{i,p''}\: \beta _{vip''}\, e_{i}^{\prime }(a\stackrel{l}{\longrightarrow }v)\, p''(v\stackrel{L-l}{\longrightarrow }b)\] We now use that $C_{k}e=0$ for $k=l+1,\cdots ,L-1$, so\[ 0=C_{k}e=\sum _{v,i}\, e_{i}^{\prime }(a\stackrel{l}{\longrightarrow }v)\: C_{k-l}\left(\sum _{p''}\: \beta _{vip''}\, p''(v\stackrel{L-l}{\longrightarrow }b)\right)\] and due to the linear independence of the basis $\left\{ e_{i}^{\prime }\right\} $ of essential paths we conclude again that for any value of $i$ and $v$ the term in parentheses must be essential:\[ \sum _{p''}\: \beta _{vip''}\, p''(v\stackrel{L-l}{\longrightarrow }b)\equiv \sum _{j}\gamma _{vij}\, e_{j}^{\prime \prime }(v\stackrel{L-l}{\longrightarrow }b)\] Putting this back into $e$, and using $P(e)=e$, we obtain the desired factorization: \begin{eqnarray*} e &=& \sum _{v,i,j}\: \gamma _{vij}\, P \left( e_{i}^{\prime }(a\stackrel{l}{\longrightarrow }v) \, e_{j}^{\prime \prime }(v\stackrel{L-l}{\longrightarrow }b) \right) \\ &=& \sum _{v,i,j}\: \gamma _{vij}\, e_{i}^{\prime }(a\stackrel{l}{\longrightarrow }v) \bullet e_{j}^{\prime \prime }(v\stackrel{L-l}{\longrightarrow }b) \end{eqnarray*} The cases $l=0,L$ are completely trivial. We can formulate this intermediate result as a lemma.\\

\paragraph*{Lemma}

Any essential path $e(a\stackrel{L}{\longrightarrow }b)$ of well defined endpoints $a,b$ and length $L$ can be decomposed, for any fixed positive value $l<L$, as a linear combination of products of shorter essential paths \begin{equation} e(a\stackrel{L}{\longrightarrow }b)=\sum _{v,i,j}\: \gamma _{vij}\, e_{i}^{\prime }(a\stackrel{l}{\longrightarrow }v) \bullet e_{j}^{\prime \prime }(v\stackrel{L-l}{\longrightarrow }b) \label{EP-decomposition} \end{equation} where the sum extends over all possible points $v$ of the graph which can be reached from $a$ and $b$ with essential paths of length $l$ and $L-l$, respectively. If we assume that both (sub)bases $\left\{ e_{i}^{\prime }\right\} $ and $\left\{ e_{j}^{\prime \prime }\right\} $ are orthonormal then also \begin{eqnarray} \sum _{v,i,j}\: \left|\gamma _{vij}\right|^{2} & = & \left\Vert e\right\Vert ^{2}\label{EP-norm} \end{eqnarray} $\blacksquare$

Note that the decomposition (\ref{EP-decomposition}) can be used to build the essential paths recursively. With regard to the dimensionality of this space, remark that when $G$ is a Dynkin diagram of type ADE, the following result (that we do not prove here) is known: The vector space spanned by the vertices $a,b,\cdots$ of $G$ is a module over the graph algebra of $A_{n}$, where $n+1$ is the Coxeter number of $G$ and $A_{n}$ is the commutative algebra with generators $N_{0}, N_{1}, \cdots, N_{n-1}$ obeying the following relations: $N_{0}$ is the unit, $N_{1}$ is the (algebraic) generator with $N_{1} N_{p} = N_{p-1} + N_{p+1}$, if $p < n-1$, and $N_{1} N_{n-1} = N_{n-2}$. If $s$ denotes the number of vertices of $G$, this module action is encoded by $n$ matrices $F_{p}$ of size $s \times s$. They are related to the previous generators by $N_{p} a = \sum_{b} (F_{p})_{ab} b$. The number of essential paths of length $p$ on the graph $G$ is equal to the sum of the matrix elements of $F_{p}$.

\subsection{The weak bialgebra condition}

The coefficients $m_{nI,mJ}^{(n+m)K}$ that enter the weak bialgebra condition (\ref{eq:graded_sum_mm_condition}) are just the components of products $e_{I}^{(n)}\bullet e_{J}^{(m)}$ of essential paths of lengths $n,m$ respectively, along the directions $e_{K}^{(n+m)}$.
Using a more explicit notation than above, the non-trivial contributions are\begin{eqnarray*} m_{e_{n}(a\stackrel{l}{\longrightarrow }c)\, ,\, e_{r}(c\stackrel{L-l}{\longrightarrow }b)}^{e_{k}(a\stackrel{L}{\longrightarrow }b)} & \equiv & \left\langle e_{k}\, ,\, e_{n}\bullet e_{r}\right\rangle \: = \: \left\langle e_{k}\, ,\, e_{n}\, e_{r}\right\rangle \end{eqnarray*} where we have used the definition (\ref{eq:product-on-EP}) for the product, self-adjointness of the operator $P$, and the fact that $e_{k}$ is essential so $P(e_{k})=e_{k}$. Taking $e=e_{k}$ in the decomposition (\ref{EP-decomposition}) we can now write\begin{eqnarray*} m_{e_{n}\, ,\, e_{r}}^{e_{k}} & = & \sum _{v,i,j}\: \gamma _{vij}^{(k)}\, \left\langle e_{i}^{\prime }(a\stackrel{l}{\longrightarrow }v)\, ,\, e_{n}(a\stackrel{l}{\longrightarrow }c)\right\rangle \, \left\langle e_{j}^{\prime \prime }(v\stackrel{L-l}{\longrightarrow }b)\, ,\, e_{r}(c\stackrel{L-l}{\longrightarrow }b)\right\rangle \\ & = & \sum _{v,i,j}\: \gamma _{vij}^{(k)}\, \delta _{vc}\, \delta _{in}\, \delta _{jr} \: = \: \gamma _{cnr}^{(k)} \end{eqnarray*} Therefore the coefficients $\gamma _{cnr}^{(k)}$ that enter the decomposition of $e_{k}$ are the same as those involved in the product. The weak bialgebra condition (\ref{eq:graded_sum_mm_condition}) reduces now to the orthonormality condition (\ref{EP-norm}) of the $e_{k}$, that is \begin{eqnarray} \sum _{nr}\overline{m_{e_{n}(a\stackrel{l}{\longrightarrow }c)\, ,\, e_{r}(c\stackrel{L-l}{\longrightarrow }b)}^{e_{k}(a\stackrel{L}{\longrightarrow }b)}}\, m_{e_{n}(a\stackrel{l}{\longrightarrow }c)\, ,\, e_{r}(c\stackrel{L-l}{\longrightarrow }b)}^{e_{k'}(a\stackrel{L}{\longrightarrow }b)} & = & \sum _{c,n,r}\: \overline{\gamma _{cnr}^{(k)}}\, \gamma _{cnr}^{(k')} \nonumber\\ & = & \left\langle e_{k}\, ,\, e_{k'}\right\rangle \: = \: \delta _{kk'} \nonumber \end{eqnarray}

\section{Comparison of the two bialgebra structures for the $A_{2}$ diagram}

The graph $A_{2}$ gives rise to the simplest non-trivial example, an 8-dimensional algebra (whereas $A_{3}$ already produces a 34-dimensional one). The graph itself consists of two points and one (bi-oriented) edge. The only essential paths are: $a_{1}\equiv [1]$, $a_{2}\equiv [2]$, and the right and left oriented paths $r\equiv [1,2]$ and $l\equiv [2,1]$, respectively. We shall compare, for this example, the two bialgebra structures mentioned in the text. The first, the graded one, is a weak bialgebra, semi-simple but not co-semi-simple. The second, the filtrated one, is a weak Hopf algebra; it is both semi-simple and co-semi-simple.

\subsection{The graded bialgebra structure}

The products in $\mathcal{E}(A_{2})$ (corresponding to (\ref{eq:m_A_basis})) are: \begin{eqnarray*} a_{i}\bullet a_{j}=\delta _{ij}a_{i} & \qquad & r^{2}=l^{2}=r\bullet l=l\bullet r=0\\ a_{1}\bullet r=r\bullet a_{2}=r & & a_{2}\bullet r=r\bullet a_{1}=0\\ a_{2}\bullet l=l\bullet a_{1}=l & & a_{1}\bullet l=l\bullet a_{2}=0 \end{eqnarray*} \noindent The dual operation in $\mathcal{E}(A_{2})$, the coproduct corresponding to (\ref{eq:D_A_basis}), is \begin{eqnarray*} Da_{1} = a_{1}\otimes a_{1} & \qquad & Da_{2} = a_{2}\otimes a_{2} \\ Dr = a_{1}\otimes r+r\otimes a_{2} & \qquad & Dl = a_{2}\otimes l+l\otimes a_{1} \end{eqnarray*} \noindent Now we consider $E\equiv End_{\#}(\mathcal{E}(A_{2}))$: we call $\rho _{ij}$ the endomorphism of paths of length zero taking $a_{j}$ into $a_{i}$, which we also identify using the map $\sharp$ as $\rho _{ij}=a_{i}\otimes a_{j}$.
We also have the $\rho _{rr},\rho _{rl},\rho _{lr},\rho _{ll}$ acting on the space of paths of length $1$. Thus $E$ has dimension $8$ as a vector space. The product in $E$ is the usual composition product, so \begin{eqnarray*} \rho _{ij}\circ \rho _{kl} & = & \delta _{jk}\rho _{il}\qquad \qquad i,j,k,l=1,2\\ \rho _{ij}\circ \rho _{**}=\rho _{**}\circ \rho _{ij} & = & 0\qquad \qquad \qquad *=r,l\\ \rho _{d_{1}d_{2}}\circ \rho _{d_{3}d_{4}} & = & \delta _{d_{2}d_{3}}\rho _{d_{1}d_{4}}\qquad d_{i}=r,l \end{eqnarray*} Obviously, $(E, \circ)$ is the direct sum of two subalgebras, namely $End(\mathcal{E}_{i}(A_{2}))$ with $i=0,1$, the endomorphisms of paths of length $i$, both isomorphic to $M_{2\times 2}(\mathbb{C})$. Regarding the coproduct on $E$, remember that for the graded case we defined\[ \Delta \rho =(P\otimes P)(1\otimes \tau \otimes 1)(D_{A}\otimes D_{A^{*}})\rho \] In our present example this implies \begin{eqnarray*} \Delta \rho _{ij} & = & \rho _{ij}\otimes \rho _{ij}\\ & & \\ \Delta \rho _{rr} & = & \rho _{11}\otimes \rho _{rr}+\rho _{rr}\otimes \rho _{22}\\ \Delta \rho _{ll} & = & \rho _{22}\otimes \rho _{ll}+\rho _{ll}\otimes \rho _{11}\\ \Delta \rho _{rl} & = & \rho _{12}\otimes \rho _{rl}+\rho _{rl}\otimes \rho _{21}\\ \Delta \rho _{lr} & = & \rho _{21}\otimes \rho _{lr}+\rho _{lr}\otimes \rho _{12} \end{eqnarray*} Indeed, for $\Delta \rho _{rr}$, for example, the calculation reads \begin{eqnarray*} \Delta \rho _{rr} & = & \Delta (r\otimes r)=(P\otimes P)(1\otimes \tau \otimes 1)\left((a_{1}\otimes r+r\otimes a_{2})\otimes (a_{1}\otimes r+r\otimes a_{2})\right)\\ & = & (P\otimes P)\left(a_{1}\otimes a_{1}\otimes r\otimes r+r\otimes r\otimes a_{2}\otimes a_{2}+a_{1}\otimes r\otimes r\otimes a_{2}+r\otimes a_{1}\otimes a_{2}\otimes r\right)\\ & = & a_{1}\otimes a_{1}\otimes r\otimes r+r\otimes r\otimes a_{2}\otimes a_{2}=\rho _{11}\otimes \rho _{rr}+\rho _{rr}\otimes \rho _{22} \end{eqnarray*} because the terms $a_{1}\otimes r\otimes r\otimes a_{2}+r\otimes a_{1}\otimes a_{2}\otimes r$ do not belong to $E\otimes E$ and get projected out by the operator $P\otimes P$. It is easy to check that $\Delta $ is both coassociative and an algebra homomorphism for the product $\circ $. Therefore, $E$ is a bialgebra. The element $\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}=\rho _{11}+\rho _{22}+\rho _{rr}+\rho _{ll}$ is a unit for $\circ $ but its coproduct is not $\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$. If we declare the elementary paths $a_1, a_2, r, l$ orthonormal, we obtain an induced scalar product on the space of endomorphisms. We can use it to map the above coproduct to a product that we call $\bullet$. The first algebra (product $\circ$) is isomorphic, by construction, with the semi-simple algebra $M_2(\mathbb{C}) \oplus M_2(\mathbb{C})$. The matrix units, or ``elementary matrices'', are realized as follows. Each entry denotes a single matrix unit (replace the chosen generator by $1$ and set the other entries to zero): {\scriptsize $ \begin{array}{ccc} \left( \begin{array}{cc} \rho_{11} & \rho_{12} \\ {}&{} \\ \rho_{21} & \rho_{22} \end{array} \right) \oplus \left( \begin{array}{cc} \rho_{ rr} & \rho_{ rl} \\ {}&{} \\ \rho_{ lr} & \rho_{ll} \end{array} \right) \end{array} $ } The graded algebra (product $\bullet$) is not semi-simple.
It can be realized\footnote{So, there are four projective irreducible modules and the radical is $\mathbb{C} \oplus \mathbb{C} \oplus \mathbb{C} \oplus \mathbb{C}$ } as a direct sum of two algebras of $2\times 2$ matrices with entries in the ring of Grassmann numbers with generators $\{1, \theta\}$, $\theta^2=0$. Indeed, the basis vectors $\{ \rho_{11}, \rho_{rr}, \rho_{ll}, \rho_{22} \}$ generate an algebra isomorphic with {\scriptsize $ \left( \begin{array}{cc} a & b\, \theta \\ {}&{} \\ c \, \theta & d \end{array} \right) $}, where $a,b,c,d$ are complex numbers. Vectors $\{ \rho_{12}, \rho_{rl}, \rho_{lr}, \rho_{21} \}$ generate another copy of the same four-dimensional algebra. The eight generators can be realized as (dots stand for the number $0$): {\scriptsize $ \begin{array}{cc} \rho_{11} = \begin{array}{ccc} \left( \begin{array}{cc} 1& . \\ . & . \\ \end{array} \right) \oplus \left( \begin{array}{cc} . & . \\ . & . \\ \end{array} \right) \end{array} & \rho_{rr} = \begin{array}{ccc} \left( \begin{array}{cc} . & \theta \\ . & . \\ \end{array} \right) \oplus \left( \begin{array}{cc} . & . \\ . & . \\ \end{array} \right) \end{array} \\ \rho_{ll} = \begin{array}{ccc} \left( \begin{array}{cc} . & . \\ \theta & . \\ \end{array} \right) \oplus \left( \begin{array}{cc} . & . \\ . & . \\ \end{array} \right) \end{array} & \rho_{22} = \begin{array}{ccc} \left( \begin{array}{cc} . & . \\ . & 1 \\ \end{array} \right) \oplus \left( \begin{array}{cc} . & . \\ . & . \\ \end{array} \right) \end{array} \\ \rho_{12} = \begin{array}{ccc} \left( \begin{array}{cc} . & . \\ . & . \\ \end{array} \right) \oplus \left( \begin{array}{cc} . & -\theta \\ \theta & 1 \\ \end{array} \right) \end{array} & \rho_{rl} = \begin{array}{ccc} \left( \begin{array}{cc} . & . \\ . & . \\ \end{array} \right) \oplus \left( \begin{array}{cc} . & . \\ \theta & . \\ \end{array} \right) \end{array} \\ \rho_{lr} = \begin{array}{ccc} \left( \begin{array}{cc} . & . \\ . & . \\ \end{array} \right) \oplus \left( \begin{array}{cc} . & \theta \\ . & . \\ \end{array} \right) \end{array} & \rho_{21} = \begin{array}{ccc} \left( \begin{array}{cc} . & . \\ . & . \\ \end{array} \right) \oplus \left( \begin{array}{cc} 1 & \theta \\ - \theta & . \\ \end{array} \right) \end{array} \end{array} $ }

\subsection{The filtrated bialgebra structure}

The filtrated bialgebra structure associated with $A_2$ (see also \cite{Coque-Cocoyoc}) uses the same composition product $\circ$ but the second product $\star$ is different from $\bullet$. Actually, the case $A_2$ is rather special, in the following sense: there exists an associative structure (call it also $\star $) on the space of essential paths $\mathcal{E}(A_{2})$ such that the filtrated algebra structure that we consider on the eight dimensional space $E$ coincides with the tensor square of the latter. This is (unfortunately) not so for other ADE diagrams, not even for the $A_N$ when $N>2$. The product $\star$ on $\mathcal{E}(A_{2})$ is : \begin{eqnarray*} a_{i}\star a_{j}=\delta _{ij}a_{i} & \qquad & r^{2}=l^{2}= 0 \\ {} & \qquad & r\star l= a_1 \, ,\quad l\star r= a_2\\ a_{1}\star r=r\star a_{2}=r & & a_{2}\star r=r\star a_{1}=0\\ a_{2}\star l=l\star a_{1}=l & & a_{1}\star l=l\star a_{2}=0 \end{eqnarray*} Comparing with the multiplication $\bullet$ of the previous section, we see that the difference lies in the values of $r\star l$ and $l\star r$ that, here, do not vanish.
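Both multiplication tables are small enough to be verified mechanically. The sketch below (ours, illustrative only, in plain Python) encodes the $\bullet $ and $\star $ products on the basis $\{a_{1},a_{2},r,l\}$ of $\mathcal{E}(A_{2})$, the only difference being the values assigned to the products of $r$ and $l$, and checks that both are associative:

\begin{verbatim}
from itertools import product

basis = ['a1', 'a2', 'r', 'l']

def make_table(rl, lr):
    """Structure constants on E(A_2); rl and lr are the values of the
    products r*l and l*r, given as {basis_element: coefficient} dicts."""
    T = {(x, y): {} for x in basis for y in basis}
    T[('a1', 'a1')] = {'a1': 1}; T[('a2', 'a2')] = {'a2': 1}
    T[('a1', 'r')] = T[('r', 'a2')] = {'r': 1}   # source/target of r = [1,2]
    T[('a2', 'l')] = T[('l', 'a1')] = {'l': 1}   # source/target of l = [2,1]
    T[('r', 'l')] = rl; T[('l', 'r')] = lr
    return T

bullet = make_table(rl={}, lr={})                  # graded product
star   = make_table(rl={'a1': 1}, lr={'a2': 1})    # filtered product

def mult(T, x, y):
    """Product of two vectors written as {basis_element: coefficient}."""
    out = {}
    for (bx, cx), (by, cy) in product(x.items(), y.items()):
        for bz, cz in T[(bx, by)].items():
            out[bz] = out.get(bz, 0) + cx * cy * cz
    return {b: c for b, c in out.items() if c}

def is_associative(T):
    vecs = [{b: 1} for b in basis]
    return all(mult(T, mult(T, x, y), z) == mult(T, x, mult(T, y, z))
               for x, y, z in product(vecs, repeat=3))

print(is_associative(bullet), is_associative(star))   # -> True True
\end{verbatim}

Running the same check after setting, say, $r\star l=a_{2}$ instead of $a_{1}$ immediately breaks associativity (for instance $(r\star l)\star r=0$ while $r\star (l\star r)=r$), which illustrates how constrained the choice of the filtered product is.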
The product $\star$ in $E$ is: $$ (u \otimes v) \star (u' \otimes v') \doteq (u \star u') \otimes (v \star v') $$ It is easy to write the multiplication table and to see that this algebra is semi-simple and isomorphic, like $(E, \circ)$, with the direct sum of two full $2\times 2$ matrix algebras over the complex numbers. However, the eight generators are represented in a very different way. With the same reading convention as before, the matrix units are given by: {\scriptsize $ \begin{array}{ccc} \left( \begin{array}{cc} \rho_{11} & \rho_{rr }\\ {}&{} \\ \rho_{ll } & \rho_{22} \end{array} \right) \oplus \left( \begin{array}{cc} \rho_{12} & \rho_{ rl} \\ {}&{} \\ \rho_{lr} & \rho_{21} \end{array} \right) \end{array} .$ } The corresponding coproducts (compare with the previous section) read as follows: the $\Delta \rho_{u,v}$, when $u,v = r,l$, are as before, but the $\Delta \rho_{i,j}$, $i,j = 1,2$ are different\footnote{Notice that $\Delta \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} = (\rho_{11}+\rho_{ll})\otimes (\rho_{11} + \rho_{rr}) + (\rho_{rr}+\rho_{22}) \otimes (\rho_{ll} + \rho_{22})$}: \begin{eqnarray*} \Delta \rho_{11} &=& \rho_{11} \otimes \rho_{11} + \rho_{rr} \otimes \rho_{ll} \\ \Delta \rho_{12} &=& \rho_{12} \otimes \rho_{12} + \rho_{rl} \otimes \rho_{lr} \\ \Delta \rho_{21} &=& \rho_{21} \otimes \rho_{21} + \rho_{lr} \otimes \rho_{rl} \\ \Delta \rho_{22} &=& \rho_{22} \otimes \rho_{22} + \rho_{ll} \otimes \rho_{rr} \end{eqnarray*}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Many neutron stars are found in Low Mass X-ray Binaries (LMXBs). These neutron stars have relatively low magnetic fields ($B\lesssim10^{8-9}$G) and accrete matter from their low-mass companion ($M\lesssim1M_\odot$) when this companion overflows its Roche lobe. The accretion process can be persistent or episodic. In the latter case, the so-called transient systems alternate between phases of accretion (outbursts) and quiescence. Neutron stars in X-ray transients are of particular interest, because in the quiescence phase several properties of neutron stars can be studied. During outburst, the crust of the neutron star is heated up due to various accretion-induced processes such as electron captures, beta-decay, and pycnonuclear reactions \citep[e.g.][]{sato1979,haensel1990}. These deep crustal heating processes release in total $1.5-2.0$ MeV per accreted nucleon in the inner crust of the neutron star \citep{haensel1990,haensel2003,haensel2008}. Furthermore, crust cooling studies have shown that there must be an additional shallow heat source that releases typically $1-2$ MeV nuc$^{-1}$ in the outermost layers of the crust \citep[e.g.][]{brown2009}. Even though heat can flow from the crust into the core, the core is heated only marginally on the time scales of outburst durations because of its high heat capacity \citep{colpi2001,wijnands2013}. Consequently, the crust can become hotter than the core during outburst and hence the two become thermally decoupled. Once accretion halts, the crust will cool down to restore crust-core equilibrium \citep{brown1998,rutledge2002b}. To date, the expected crust cooling has been observed from seven systems \citep[e.g.][]{wijnands2002,wijnands2004,fridriksson2010,homan2014,degenaar2011a,degenaar2011b,degenaar2015}, potentially nine \citep{waterhouse2016,parikh2017}; see \citet{wijnands2017} for a review. These sources all showed a decrease in the amount of blackbody radiation emitted after the end of their accretion outburst, which implies a decreasing surface temperature. Observational campaigns covering several years after the end of the outbursts allowed for the construction of cooling curves for each of the sources, probing the temperatures of increasingly deeper layers of the crust as time passes \citep{brown2009}. How fast the crust cools down during quiescence depends on the amount of heat stored in the crust during the outburst, on the core temperature, and on crustal properties such as composition, thermal conductivity, and crust thickness. Comparing observations of crust cooling with physical models is therefore a powerful tool to constrain crustal properties \citep{rutledge2002b,brown2009,page2013}. Over the years multiple models have been developed that can track the thermal evolution of accreting neutron stars \citep[e.g.][]{colpi2001,shternin2007,brown2009,page2013,turlione2015}. Such codes take into account different heating processes during outburst, as well as cooling processes in outburst and quiescence via photon and neutrino emission. Adjusting the physical conditions of the neutron star in these codes in order to match the observations of cooling crusts revealed some unexpected results. First of all, observations showed that neutron stars cool down rapidly in the first few hundred days after the end of the outburst. This requires large amounts of heat to be transported inwards in a short time, which implies that accreting neutron stars have crusts with high conductivity \citep{wijnands2004,brown2009}.
This required an adjustment of the initial theory, according to which the accretion of matter onto the crust would lead to a lattice structure with a significant amount of impurities and therefore a low conductivity \citep{schatz1999,brown2000}. Additionally, several sources were observed to be much hotter than theoretically expected during the crust cooling phase, specifically during the first $\sim100$ days of this phase. The difference between the initial crust temperature and the base level (i.e. the observed temperature when crust-core equilibrium is reached in quiescence) was so large that it has become clear that an extra heating source, besides deep crustal heating, must be active during accretion in these sources, depositing heat at shallow depths \citep[e.g.][]{brown2009}. The physical origin of the shallow heating is unknown, but it must release about $1-2$ MeV nuc$^{-1}$ in most sources that need shallow heat to explain the observations \citep[e.g.][]{brown2009,degenaar2011b,page2013,merritt2016}. An exception is MAXI J0556-332, which requires $6-17$ MeV nuc$^{-1}$ \citep{deibel2015, parikh2018}. The first four sources in which crust cooling was observed were quasi-persistent sources with outburst durations $>1$ year. Such sources, in contrast to ordinary transients which have outbursts lasting $\sim$ months, were prime candidates for detecting this phenomenon. This is because, for the same accretion rate, more heat can be stored in the crust during longer outbursts, leading to more significant cooling curves. However, the fifth source in which crust cooling was detected is IGR J17480-2446 \citep{degenaar2011,degenaar2011b,degenaar2013}, after an outburst that lasted only $\sim7$ weeks. This proved that even during the more common outbursts, which last a few weeks, enough heat can be deposited in the crust to have an observable effect.

\subsection{Aql X-1 and its potential crust cooling}

Recently, \citet{waterhouse2016} argued that the observations of Aql X-1 taken in quiescence can also be interpreted as crust cooling. Aql X-1 is a well-studied transient neutron star LMXB with regular outburst durations and relatively short recurrence times. The earliest observations of the source stem from 1965 \citep{friedman1967} and ever since, many outbursts have been observed with different X-ray telescopes \citep[see e.g.][and references therein]{kaluzienski1977, campana2013}. In the past two decades the {\it Rossi X-ray Timing Explorer} (\textit{RXTE}), Monitor of All-Sky X-ray Image (MAXI), and \textit{Neil Gehrels Swift Observatory} (\textit{Swift}) have collectively provided well-sampled monitoring of Aql X-1's behaviour. The source is located at a distance of $5$ kpc \citep{rutledge2001,jonker2004} and goes into outburst for $\sim70$ days roughly once a year. Note that these are average values; the actual outburst duration and recurrence time can be significantly different (see Table \ref{tab:outburstproperties}). Thermonuclear type-I X-ray bursts detected during outburst identify the accretor in this system as a neutron star \citep{koyama1981}. The origin of the quiescent emission of Aql X-1 is the subject of an ongoing debate \citep[see e.g.][]{rutledge2002a,brown2002,campana2003,cackett2011,cotizelati2014,waterhouse2016}. Both short ($<10^4$ s) and long-term (months to years, sometimes with outbursts in between different quiescent observations) variability has been observed in the source.
Suggested causes of this variability include variable low-level accretion, differences in envelope composition (assuming that there is no low-level accretion in quiescence, this interpretation only applies to long-term variability, i.e. the envelope composition can be different after each new outburst), and thermal relaxation of the crust. Thermal relaxation of the crust has often been excluded as the origin of decreases in luminosity for Aql X-1 for two reasons. First of all, \citet{brown1998} suggested that because of the low outburst fluence (integrated outburst luminosity), the crust cannot be heated significantly during outburst by deep crustal heating. Second, \citet{ushomirsky2001} found that quiescent variability can only be explained as thermal relaxation if the source has enhanced neutrino emission in the core. This is inconsistent with the earlier finding that the thermal spectral component of the quiescent luminosity observed from Aql X-1 requires negligible core neutrino emissivity \citep{brown1998,colpi2001}. Therefore, \citet{ushomirsky2001} discarded the crust cooling interpretation. However, in these studies, only heating from the deep crustal heating processes was considered, while we now know that shallow heating can also significantly heat the outer layers. \citet{waterhouse2016} analysed observations taken with the X-Ray Telescope (XRT) on board {\it Swift} after three different outbursts and found that the spectral behaviour of the source is consistent with the predictions for crust cooling. First of all, the surface temperatures determined from the spectral fits followed a declining trend after each of the different outbursts. Second, for the epochs after two very similar outbursts (in terms of both outburst energetics and duration), very similar temperature evolution was observed, in the sense that the two potential cooling curves lined up smoothly. Finally, the third investigated outburst, which was observed to have significantly smaller outburst luminosity and a different outburst profile compared to the other two, behaved in line with the crust relaxation theory. The source was significantly cooler directly after the end of the outburst and its surface temperature decayed faster, indicating that the crust was less powerfully heated during this fainter outburst. However, despite these indications of crust cooling, the authors could not exclude that there is (also) residual accretion in quiescence, especially because short accretion flares have been observed on top of the potential cooling curve after one of the outbursts \citep{cotizelati2014}. But, as \citet{waterhouse2016} noted, even if residual accretion is occurring, thermal relaxation of the crust may still dominate the quiescence evolution. Aql X-1 is an interesting source to test crust cooling scenarios, because of its frequent and short outbursts. Additionally, Aql X-1 alternates between hard states (predominantly high-energy emission) and soft states (predominantly low-energy emission) during its brightest outbursts (see e.g. Figure 2 in \citet[]{waterhouse2016} and Figure 1 in \citet{ono2016}), while during fainter outbursts the source seems to reside only in the hard state. This feature of state changes might indicate a change in accretion geometry, which may in turn influence the shallow heating processes \citep{zand2012}. \citet{waterhouse2016} modelled three of the outbursts and subsequent cooling periods and were able to place constraints on some properties such as the shallow heating strength and depth.
However, the three periods were modelled individually and without taking into account the variability in the accretion rate onto the neutron star during outburst. In our previous crust cooling research focused on KS 1731-260, we found that variations in accretion rate during outburst can strongly influence the calculated cooling curves and hence the derived crustal parameters \citep{ootes2016}. This especially affects the shallow heating strength, which is very sensitive to short-timescale variations as well as to the overall decline in accretion rate near the end of the outburst. Here we present detailed modelling of the temperature evolution in Aql X-1 over 20 years. The continuous and frequent monitoring of the source allows us to track the full outburst history observed from 1996 until July 2015. We use our crust cooling code {\tt NSCool} \citep{page2013}, taking into account accretion rate variability \citep{ootes2016}, to model the 23 outbursts observed in this period. We model all outbursts collectively rather than individually (i.e. in one run of the code we model all 23 outbursts and quiescent episodes chronologically), to investigate the influence of the outburst history on the calculated cooling curves. We use the quiescent observations after five different outbursts to probe the crustal properties of the neutron star and investigate how these properties might change between outbursts.

\section{Modelling Thermal Evolution}

\subsection{The {\tt NSCool} Code}

To model the thermal evolution of Aql X-1 based on its accretion history we use our crust cooling code {\tt NSCool} \citep{page2013,page2016,ootes2016}. This is a one-dimensional code (i.e. assuming spherical symmetry) that solves the energy transport and energy balance equations in a general relativistic framework.

\subsubsection{Stellar structure}

For the stellar structure, we use one of the pre-built stellar models that is based on the Akmal-Pandharipande-Ravenhall equation of state (A18+$\delta$v+UIX*) for the core \citep{akmal1998}. This structure model assumes that the crust (defined as the density region $1.0\times10^{8}<\rho<1.5\times10^{14} \text{ g cm}^{-3}$) is composed of the burning ashes of accreted material as described by \citet{haensel2008}. In the inner crust we assume dripped neutrons to form an $^1$S$_0$ superfluid \citep{schwenk2003} with the resulting strong suppression of their specific heat. The matter at densities $\rho<10^8$ g cm$^{-3}$ is assumed to form the envelope of the neutron star. In these outermost layers, the hydrogen and helium accreted from the companion are fused into heavier elements. We take $^{56}\text{Fe}$ as the final product of these nuclear reactions, which consequently forms the basis of the outer crust. Fusion reactions during the accretion outburst do not contribute significantly to the heating of the crust, because the energy is liberated on the surface of the neutron star and immediately radiated away \citep[see e.g. the review by][]{chamel2008}. \begin{figure} \includegraphics[width=\columnwidth]{./Tb-Ts.pdf} \caption{$T_b - T_\text{eff}^\infty$ relationships for envelopes with various amounts of light elements, indicated by their respective column densities, $y_L$, and compared to a heavy element envelope with $y_L = 0$. These models assume a stellar mass and radius of 1.6 $M_\odot$ and 11 km, respectively.
The two horizontal, blue, dotted lines show the range of $T_\text{eff}^\infty$ observed during the cooling phases.} \label{fig:Tb-Ts} \end{figure}

\subsubsection{The envelope}

For each time step, {\tt NSCool} solves the thermal evolution equations from the centre of the star up to a boundary density $\rho_\text{b}=10^8 \text{ g cm}^{-3}$. Heat propagation through the envelope ($\rho<\rho_\text{b}$) only depends on the structure and composition of the envelope and not on the underlying layers. Therefore, we treat the heat propagation through the envelope independently from the crust. The envelope composition is highly variable in time during an outburst since it depends on the accretion rate, the composition of the accreted material and the nuclear reactions taking place. The latter include hydrogen, helium, and carbon burning, which can happen either stably or unstably depending on the accretion rate \citep[see e.g. the review by][]{bildsten1998}. This leads to a shell-like envelope structure in which each shell consists of heavier elements at increasing depth. We assume that in quiescence no residual accretion takes place (and thus neglect any possible effects of accretion flares; we discuss their influence in Section \ref{discussion:flares}) and that the envelope composition at that time is thus equal to the envelope composition at the end of the preceding outburst. This is the only composition of concern for our cooling calculations, and therefore we model each full outburst and cooling curve with constant envelope composition. We generated envelope models along the same lines as described in \citet{potekhin1997} and \citet{brown2002} but with a bottom density $\rho_\text{b} = 10^8$ g cm$^{-3}$ (instead of $10^{10}$ g cm$^{-3}$).\footnote{We use a lower boundary density of $\rho_\text{b}=10^8\text{ g cm}^{-3}$ to be able to incorporate shallow heating into our model, which is found to be required -- for some of the other crust cooling sources -- at densities around $\sim 10^9\text{ g cm}^{-3}$.} We parametrise the layer of light elements, comprising a layer of hydrogen + helium on top of a thicker layer of $^{12}$C, by its total column density $y_\text{L}$ (in which it is assumed that the carbon column depth is 10 times the helium column depth) and assume the presence of $^{56}$Fe in the deeper layers. These envelope models provide us with the outer boundary condition of {\tt NSCool} as a relationship between the temperature $T_\text{b}$ at $\rho_\text{b}$ and the redshifted effective temperature $T_\text{eff}^\infty$. The calculated $T_\text{eff}^\infty$ depends on the mass $M$ and radius $R$ of the star through the surface gravity $g_s$ as $T_\text{eff} \propto g_s^{1/4}$ (with $T_\text{eff}$ the local effective temperature), and is higher if the envelope contains more light elements because their presence increases the heat conductivity compared to Fe. The resulting $T_b - T_\text{eff}^\infty$ relationships for various amounts of light elements are shown in Figure \ref{fig:Tb-Ts}.

\subsubsection{Heating and cooling}

We calculate the time-dependent accretion rate during an outburst as described in \citet{ootes2016} based on daily averaged light curve observations (details of the obtained light curve of Aql X-1 will be discussed in Section \ref{sec:lightcurve}). We assume that $1.93 \text{ MeV}$ per accreted nucleon is released in the (inner) crust due to deep crustal heating between $\rho=1.5\times10^9-3.5\times10^{13}\text{ g cm}^{-3}$ as calculated by \citet{haensel2008}.
Pycnonuclear reactions contribute the major part of the energy released in the deep crustal heating process. Additionally, we allow a supplementary amount of heat, depending on the accretion rate, to be released in the outer crust to simulate the shallow heating. We model this by setting a total amount $Q_\text{sh}$ of heat to be released per accreted nucleon, resulting in a heating luminosity \begin{equation} H_\text{sh}(t) \equiv Q_\text{sh} \frac{\dot M(t)}{m_\text{u}} \label{Eq:shallow} \end{equation} ($m_\text{u}$ being the atomic mass unit) which is distributed evenly amongst the volume elements between $\rho_\text{sh,min}$ and $\rho_\text{sh,max}=5\,\rho_\text{sh,min}$. In quiescence, the neutron star cools down via photon emission from the surface and neutrino emission from the core. As standard input we assume that neutrino cooling follows the `minimal cooling paradigm' \citep[i.e. no enhanced neutrino emission in the core from direct URCA processes,][]{page2004}. At the end of the outburst the neutron star has a specific temperature profile that depends on the outburst and stellar properties. As we observe the effective temperature to decrease in quiescence, the crust cools down and deeper and deeper layers of the crust can be probed \citep{brown2009}. However, how fast heat propagates through the crust depends on the thermal conductivity, set by the various scattering processes. We consider the contributions to the thermal conductivity of electrons scattering with other electrons \citep{shternin2006}, with crystal impurities \citep{yakovlev1980}, and with phonons and ions \citep{gnedin2001}. For the contribution of the electron-impurity scattering to the conductivity we set a free impurity parameter $Q_\text{imp}$ in the crust. In the code we allow three different density regions in the crust for which independent impurity factors can be set, with transition densities defined at $\rho=4\times10^{11}\text{ g cm}^{-3}$ and $\rho=8\times10^{13}\text{ g cm}^{-3}$ such that the three layers comprise respectively the outer crust, neutron drip region, and pasta region. The intrinsic base level -- the boundary temperature $T_\text{b}$ at the time that crust-core equilibrium is restored -- is set by the redshifted, uniform core temperature $\tilde T_0$ prior to accretion. During an outburst most of the heat flows towards the core, which will eventually increase the core temperature and thus the intrinsic base level. This is, however, a very slow process and we can easily estimate its time scale $\tau_\mathrm{th}$ as follows. With $\tilde T_0 \simeq 10^8$ K the core has a large specific heat $C_V \simeq 10^{37} - 10^{38}$ erg K$^{-1}$ \citep{wijnands2013,cumming2017}, the higher value referring to a core made of normal degenerate matter and the lower one to a superfluid core. Given the observed average mass accretion rate of $\sim 4 \times 10^{-10}$ M$_\odot$ yr$^{-1}$, deep crustal heating plus shallow heating give a long-term average heating rate of at most $\langle H \rangle \sim 10^{35}$ erg s$^{-1}$. We can hence estimate \citep{wijnands2013} that $\tau_\mathrm{th} \simeq C_V T_0/ \langle H \rangle \sim$ 300 -- 3000 yrs, much longer than the 20 yrs we are exploring in this work. Notice that the observed high quiescent luminosity, $L_q \simeq 2 \times 10^{33}$ erg s$^{-1}$, implies inefficient neutrino emission from its core (see, e.g., figure 1 in \citealt{wijnands2013}), which is, however, automatically included in the above estimate of $\tau_\mathrm{th}$.
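The arithmetic behind this estimate is easily reproduced. The following back-of-the-envelope sketch (ours, purely illustrative; it uses rounded CGS constants and assumes a shallow heating contribution of $Q_\text{sh}\approx 2$ MeV nuc$^{-1}$ on top of the $1.93$ MeV nuc$^{-1}$ of deep crustal heating) recovers the quoted numbers:

\begin{verbatim}
MSUN = 1.989e33      # g
YR   = 3.156e7       # s
MEV  = 1.602e-6      # erg
M_U  = 1.661e-24     # g, atomic mass unit

mdot = 4e-10 * MSUN / YR          # average accretion rate [g/s]
Q    = 1.93 + 2.0                 # deep crustal + shallow heating [MeV/nucleon]
H    = Q * MEV * mdot / M_U       # average heating luminosity [erg/s]
print("<H> ~ %.1e erg/s" % H)     # -> ~1e35 erg/s

T0 = 1e8                          # core temperature [K]
for C_V in (1e37, 1e38):          # superfluid vs normal core [erg/K]
    tau = C_V * T0 / H / YR
    print("C_V = %.0e erg/K  ->  tau_th ~ %.0f yr" % (C_V, tau))
# -> roughly 300 yr and 3000 yr, as quoted in the text
\end{verbatim}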
Finally, note that the observed base level of the cooling curve also depends on the envelope composition, since we do not observe $T_\text{b}$, but $T_\text{eff}^\infty$. Because of this envelope composition dependence, the observed base level can be variable on shorter timescales \citep{brown2002}.

\subsubsection{Fit parameters}

Previously, we created models with {\tt NSCool} in which we adjusted the input parameters by hand to reduce the $\chi^2$. The code is now expanded with an automated fitting routine that minimises the total $\chi^2$ (the sum of the $\chi^2$ values of each of the cooling curves for which we have observations). This allows us to obtain more accurate results and to calculate errors on each of the parameters. In our models of Aql X-1 we fix the mass and radius according to the values used to obtain the spectral fits: $M=1.6 \text{ M}_\odot$ and $R=11$ km \citep[as used by][in their crust cooling study of this source and whose quiescence data we use in this paper; see Section \ref{qdata}]{waterhouse2016}. The free fit parameters are the initial core temperature $\tilde T_0$, the impurity factor of the crust $Q_\text{imp}$ (in different density regions), the amount ($Q_\text{sh}$) and minimal depth ($\rho_\text{sh,min}$) of the shallow heating, and the envelope light element column depth ($y_\text{L}$). We allow the light element column depth and the amount and depth of the shallow heating to vary between different outbursts. The origin and properties of the shallow heating are unknown. It is generally assumed that the shallow heating is proportional to the accretion rate (see Eq. \ref{Eq:shallow}). Modelling of different crust cooling sources has shown that this heat source is not constant between systems, and moreover is not necessarily constant between different outbursts of the same source \citep{deibel2015,parikh2018}. By allowing the amount and depth of shallow heating to vary between outbursts, we investigate whether this parameter is variable in time.

\begin{figure*}
\includegraphics[width=0.95\textwidth]{./lightcurve_Aql_X-1.pdf}
\caption{Bolometric flux light curve of Aql X-1 between 1996 and 2015, determined from observations with {\it RXTE}/ASM ($2-10$ keV), {\it Swift}/BAT ($15-50$ keV), MAXI ($2-20$ keV), and {\it Swift}/XRT ($0.5-10$ keV). The arrows indicate the start times of the accretion outbursts detected during this period. We refer to the different outbursts by the numbers allocated in this plot. See Figure \ref{fig:Aql_X-1_individual_outbursts_hardsoft} for the light curves of the individual outbursts.}
\label{fig:light_curve}
\end{figure*}

\subsection{Code input: the 1996-2015 light curve}\label{sec:lightcurve}

\subsubsection{The time-dependent accretion rate} \label{sec:conversion}

Figure \ref{fig:light_curve} shows the bolometric flux of Aql X-1 that we calculated over the period between April 1996 and July 2015. During this period 23 outbursts were detected and monitored with different instruments. From here on, we refer to specific outbursts by their number as indicated in Figure \ref{fig:light_curve}. From the start of the {\it RXTE} mission in 1996, Aql X-1 was monitored almost daily with its All Sky Monitor (ASM), with the exception of Sun-constrained windows. By the time {\it RXTE} was decommissioned in 2012, the {\it Swift} mission had been launched (2004) and MAXI (2009) on board the {\it International Space Station} (ISS) had been activated, providing further coverage of the activity of the source.
Besides the Burst Alert Telescope (BAT), which carries out monitoring observations, {\it Swift} is equipped with the X-Ray Telescope (XRT), which has been used for many pointed observations of Aql X-1 during different outbursts. Although outbursts of Aql X-1 were observed before 1996, we do not model these, because there were no regular observations during these outbursts. Additionally, there were no continuous monitoring observations between these accretion periods and the start of the ASM monitoring, and we therefore do not know if any outbursts have been missed in that period. To construct the light curve of Aql X-1 we combined data from the {\it RXTE}/ASM \citep[$2-10$ keV,][]{levine1996}\footnote{\url{https://heasarc.gsfc.nasa.gov/docs/xte/ASM/sources.html}}, the BAT \citep[$15-50$ keV,][]{krimm2013}\footnote{\url{https://swift.gsfc.nasa.gov/results/transients/AqlX-1/}} and XRT \citep[$0.5-10$ keV,][]{evans2007,evans2009}\footnote{XRT light curves of Aql X-1 were obtained from the {\it Swift}/XRT online tool: \url{http://www.swift.ac.uk/user_objects/}} on board {\it Swift}, and MAXI \citep[$2-20$ keV,][]{matsuoka2009}\footnote{\url{http://maxi.riken.jp/star_data/J1911+005/J1911+005.html}}. For the ASM and XRT observations, we averaged multiple pointings taken on one day, such that we obtained daily averaged count rate light curves of Aql X-1 for each instrument. In order to combine the data and determine the bolometric flux, one needs to correct for the fact that each instrument operates in a different energy range and has a different response. Aql X-1 is not equally bright in each energy range, because its spectral shape varies (i.e., between the hard and the soft state). Therefore, we first tried to identify when the source was in the hard and the soft state during each outburst, and then determined conversion factors to bolometric flux for each of the two states. Figure \ref{fig:Aql_X-1_individual_outbursts_hardsoft} shows the individual light curves of the outbursts of Aql X-1 with the determined spectral state indicated. For outbursts $18-23$ we separate when Aql X-1 was in the hard and the soft state using the hardness ratio determined from MAXI observations. We calculate the hardness ratio as the count rate in the hard, $10-20$ keV band divided by the count rate in the soft, $2-10$ keV band. We define observations for which the hardness ratio is smaller than $0.1$ as soft state observations and those for which the ratio is larger than $0.1$ as hard state observations. With this method we were able to determine the times of source state transitions in outbursts $18-22$. During these five outbursts the source starts in the hard state, but during the rise of the outburst it transitions to the soft state. For the main extent of the outburst, including the peak, the source stays in the soft state, and only returns to the hard state during the outburst decay. During the full extent of outburst 23, the source was in the hard state. For outbursts $1-10$ we only obtained ASM monitoring data, and for outbursts $11-17$ data from the ASM and BAT. Therefore, we cannot use the distinction method based on MAXI data. However, the ASM also operated in different energy bands and, although these bands span a narrower energy range, we can use ASM band ratios to make a rough estimate of the spectral state of the source during an outburst.
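For outbursts $18-23$ this state separation thus reduces to a simple threshold on the daily MAXI band ratio; a minimal sketch (in Python, with a hypothetical function name and illustrative rates; the ASM-based method described next is analogous, with different bands and a different limit):

\begin{verbatim}
import numpy as np

def spectral_state(rate_hard, rate_soft, limit=0.1):
    """Classify daily-averaged observations as 'soft' or 'hard'.

    rate_hard, rate_soft: count rates in the hard and soft bands
    (10-20 keV and 2-10 keV for MAXI); limit: hardness-ratio
    threshold separating the two states."""
    ratio = np.asarray(rate_hard) / np.asarray(rate_soft)
    return np.where(ratio < limit, 'soft', 'hard')

# Illustrative daily rates around a state transition:
print(spectral_state([0.30, 0.05, 0.28], [1.0, 1.0, 1.0]))
# -> ['hard' 'soft' 'hard']
\end{verbatim}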
We calculated ASM ratios by dividing the A band intensities ($1.5-3$ keV) by the C band intensities ($5-12$ keV) and defined a rough ratio limit of $1.0$. If the ASM ratio during an outburst is primarily larger than $1.0$ we define the outburst as a soft state outburst, and if the ratio is generally smaller than $1.0$ we consider the source to be in the hard state during the outburst. The ASM data are not accurate enough to discriminate state changes during an outburst, and therefore we only estimate the spectral state over the entire outburst from ASM ratios. We find that outbursts $1-4$ and $6-9$ are predominantly soft state outbursts (with possible hard state episodes at the beginning and the end of the outburst, but we are unable to confirm this from the ASM data), while the ratios indicate that during outbursts 5 and 10 the source was always in the hard state. For outbursts $11-17$ we obtained BAT data as well, which enables an additional method of state determination. Since the ASM and BAT instruments observe in soft and hard X-rays, respectively, we can separate the different spectral states of the source from the relative intensities. We convert the count rates into Crab intensities using conversion factors of $1 \text{ Crab}=75 \text{ count s}^{-1}$ for the ASM data and $1 \text{ Crab}=0.22 \text{ count s}^{-1}$ for the BAT data \citep{krimm2013}. We find that outbursts $11-14$, 16, and 17 have similar intensities from the ASM and BAT observations, indicating that during these outbursts the source was constantly in the hard state. For outburst 15 we find a drop in the BAT intensities compared to the ASM during the middle part of the outburst, which we identify as a transition to the soft state. These state identifications are consistent with the ratios determined from ASM observations. To convert the count rates measured when the source was in either the hard or the soft state into bolometric fluxes, we use bolometric flux measurements reported in the literature. \citet{king2016} measured the bolometric flux ($0.1-100$ keV) of Aql X-1 in the soft state from simultaneous {\it NuSTAR} and {\it Swift}/XRT observations at MJD $=56755$ to be $F_{0.1-100\text{ keV}}=1.028\times10^{-8}\text{ erg cm}^{-2}\text{ s}^{-1}$. The XRT intensity at the time of this observation was $231.98\text{ count s}^{-1}$. From this measurement we extract an XRT count rate to bolometric flux conversion factor of $4.4\times 10^{-11}\text{ erg cm}^{-2}\text{ count}^{-1}$. Next, for each day on which the source was in the soft state and XRT data overlap with data from one of the other three instruments, we determined a conversion factor from ASM, BAT, or MAXI count rate into XRT count rate. We then calculated, for each of the three instruments, the average soft state conversion factor from its count rate into XRT count rate. We assume that the XRT count rate to bolometric flux conversion factor obtained for MJD $=56755$ holds for all soft state observations and combine this with the count rate conversion factors to obtain for each instrument a soft state count rate to bolometric flux conversion factor (see Table \ref{table:conversionfactors}).
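The chain from this single flux measurement to per-instrument soft state conversion factors is short; a minimal sketch (in Python, with hypothetical variable names and illustrative overlap rates, not the actual light curve data):

\begin{verbatim}
import numpy as np

# XRT count rate -> bolometric flux, anchored on the soft state
# NuSTAR + XRT measurement at MJD 56755 quoted above.
F_bol    = 1.028e-8              # erg cm^-2 s^-1 (0.1-100 keV)
rate_xrt = 231.98                # XRT count/s at that epoch
f_xrt    = F_bol / rate_xrt      # ~4.4e-11 erg cm^-2 count^-1

# For each other instrument, average the XRT/instrument rate
# ratio over all soft state days with overlapping coverage
# (matched daily light curves; the values below are illustrative).
rates_xrt  = np.array([50.0, 80.0, 120.0])
rates_inst = np.array([5.1, 8.3, 12.0])
f_inst = f_xrt * np.mean(rates_xrt / rates_inst)

# Daily soft state bolometric fluxes for that instrument:
flux_inst = f_inst * rates_inst
\end{verbatim}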
\begin{table}
\centering
\caption{Calculated count rate to bolometric flux conversion factors, in units of $\text{erg cm}^{-2}\text{ count}^{-1}$, for the hard and soft states of Aql X-1 for each instrument used in this study.}
\label{table:conversionfactors}
\begin{tabular}{lllll}
\hline
 & {\it RXTE}/ASM & {\it Swift}/BAT & MAXI & {\it Swift}/XRT \\
\hline
Soft state & $4.5\times 10^{-10}$ & $2.2\times 10^{-6}$ & $8.4\times 10^{-9}$ & $4.4\times 10^{-11}$ \\
Hard state & $1.5\times 10^{-9}$ & $2.7\times 10^{-7}$ & $2.3\times 10^{-8}$ & $1.5\times 10^{-10}$ \\
\hline
\end{tabular}
\end{table}

To calculate hard state conversion factors we use the bolometric luminosities that were obtained from best-fit spectral modelling of {\it Suzaku} observations reported by \citet{zhang2016}. That work reports the analysis of five hard state observations of Aql X-1 for which we have data in our sample taken on the same day (MJD$= 54377, 54382, 54388, 54392, 55852$). For each date we first convert the reported luminosities back to fluxes (assuming a distance of 5 kpc) and then calculate the count rate to flux conversion factor for the individual instruments. We note that the XRT and MAXI observations only coincide with one {\it Suzaku} observation, while the ASM and BAT observations coincide with four and five, respectively. For all hard state observations we use, for each instrument, the mean of the hard state conversion factors calculated in this way (see Table \ref{table:conversionfactors}). Finally, a daily averaged accretion rate is calculated from the bolometric flux \citep[see][for details]{ootes2016}. We assume the fraction of accreted mass going into X-ray luminosity to be 0.2 and use a distance of $5.0$ kpc. For days with observations from multiple instruments we use data from one instrument in the following order of priority: XRT, MAXI, ASM, BAT. Since the sensitivity of the ASM deteriorated near the end of the {\it RXTE} mission \citep[see e.g. the performance analyses by][]{vrtilek2013,grinberg2013}, we prioritise BAT over ASM observations for MJD $>55170$.

\subsubsection{Quiescent data}\label{qdata}

We model the temperature evolution of Aql X-1 to fit the cooling data obtained after outbursts 14, 19, 20, 21, and 23. For outbursts 20, 21, and 23 we use the temperatures obtained from {\it Swift}/XRT observations by \citet{waterhouse2016}. After outburst 23, three additional {\it Swift} observations were taken that were not previously reported. We analysed those observations in the same way as reported by \citet{waterhouse2016}; we refer to that paper for full details. After outbursts 14 and 19, two and one {\it Swift}/XRT observations, respectively, were available; those were included in our paper in the same way.

\begin{figure}
\includegraphics[width=0.99\columnwidth]{./2013_decay_fits.pdf}
\caption{XRT count rate as a function of time for outburst 21. The end time $t_0$ of the outburst was determined as the intersection of fits through the outburst decay and the quiescent observations, respectively. The grey points indicate the observations that we assume to be part of the outburst decay.}
\label{fig:decay_fit}
\end{figure}

\subsubsection{Outburst start and end times}

We determine the start times ($t_\text{start}$) of all outbursts as the date on which the source becomes detectable in our light curves (Figure \ref{fig:light_curve}).
Similarly, the end dates ($t_0$) of the 18 outbursts for which we have no quiescence {\it Swift}/XRT observations are determined as the date on which the source became undetectable in our light curves obtained with the ASM, MAXI, and/or BAT. We list these times in Table \ref{tab:outburstproperties}, as well as the accretion rate averaged over the full outburst, $\langle\dot{M}\rangle$, as determined from the daily averaged bolometric flux. The determined start and end times compare well with those reported by \citet{campana2013}. The end dates of the outbursts for which we have {\it Swift}/XRT quiescence observations (i.e. the five outbursts for which we compare the calculated cooling curves with quiescence observations) were determined in more detail, since the exact end date influences the cooling calculations. For outburst 19 we use the end date calculated by \citet{campana2014}, and for outburst 20, for which the outburst decay was missed, we use the estimated end time reported by \citet{waterhouse2016}. To determine the end dates of outbursts 14, 21, and 23, we fitted both the decay curve observed with {\it Swift}/XRT and the quiescent curve with an exponential decay function and calculated the end date of the outburst as the intersection of the two (see Figure \ref{fig:decay_fit} for the results for outburst 21). We list the $t_0$ values that we determined for those outbursts in Table \ref{tab:outburstproperties} as well. It should be noted that \citet{waterhouse2016} determined $t_0=56518$ as the end date of outburst 21 (5 days earlier) and $t_0=57100$ for outburst 23 (4 days earlier) with a similar method. As this is a significant difference for outburst 21, considering the date of the first quiescence observation (MJD = 56524; either 1 or 6 days after $t_0$, depending on which value for $t_0$ is used), we will compare the results that can be obtained with the two different end dates in Section \ref{res:2012outburst}. Elsewhere we will use for outburst 21 the end date that we determined from Figure \ref{fig:decay_fit}. For outburst 23 we do not compare the results for different outburst end times, as for this outburst the first quiescence observation is later in time. This makes the effect of a different end time significantly smaller, considering the logarithmic time scale over which we observe crust cooling. For outburst 20 we also take into account the two reflares shortly after the end of the outburst, first reported by \citet{cotizelati2014}. These started 82 and 208 days after the end of the outburst and lasted for 32 and 7 days, respectively.

\subsubsection{Envelope composition: upper limits}

\begin{table}
\begin{center}
\caption{Determined outburst properties: start time, $t_\text{start}$, and end time, $t_0$, of the outburst, time-averaged accretion rate, $\langle\dot{M}\rangle$, as a percentage of the Eddington mass accretion rate (with $\dot{M}_\text{Edd}=1.73\times10^{18}\text{ g s}^{-1}=2.74\times10^{-8}M_\odot\text{ yr}^{-1}$), time of the last detected type-I X-ray burst before the end of the outburst, $t_\text{XRB}$, and the calculated upper limit on the helium column depth, $Y_\text{He,max}$, based on the last observed X-ray burst. The outbursts are numbered as in Figure \ref{fig:light_curve}.
We also report the year in which the outburst took place.}
\label{tab:outburstproperties}
\begin{tabular}{lllllll}
\hline
\# & year & $t_\text{start}$ & $t_\text{0}$ & $\langle\dot{M}\rangle$ & $t_\text{XRB}^\dag$ & $Y_\text{He,max}$ \\
 & & & & $\%\dot{M}_\text{Edd}$ & & g cm$^{-2}$ \\
\hline
1 & 1996 & 50232 & 50313 & \:\;$0.54$ & 50288.4 & $1.4\times10^{9}$ \\
2 & 1997 & 50460 & 50514 & \:\;$5.34$ & 50509.0 & $5.5\times10^{8}$ \\
3 & 1997 & 50660 & 50716 & \:\;$3.96$ & 50701.5 & $3.0\times10^{9}$ \\
4 & 1998 & 50868 & 50940 & \:\;$8.67$ & 50886.5 & $4.9\times10^{10}$ \\
5 & 1999 & 51308 & 51493 & \:\;$5.15$ & 51439.8 & $1.5\times10^{10}$ \\
6 & 2000 & 51809 & 51873 & $10.32$ & 51856.2 & $5.2\times10^{9}$ \\
7 & 2001 & 52074 & 52123 & \:\;$0.85$ & 52100.8 & $1.6\times10^{9}$ \\
8 & 2002 & 52315 & 52359 & \:\;$3.43$ & 52354.0 & $6.8\times10^{8}$ \\
9 & 2003 & 52686 & 52743 & $10.45$ & 52735.4 & $9.8\times10^{8}$ \\
10 & 2004 & 53029 & 53177 & \:\;$4.86$ & 53153.8 & $2.3\times10^{10}$ \\
11 & 2005 & 53456 & 53507 & \:\;$3.90$ & 53490.9 & $3.0\times10^{9}$ \\
12 & 2005 & 53679 & 53760 & \:\;$3.09$ & 53720.2 & $1.1\times10^{10}$ \\
13 & 2006 & 53941 & 53973 & \:\;$2.06$ & 53962.5 & $7.7\times10^{8}$ \\
14 & 2007 & 54236 & 54288 & \:\;$2.13$ & 54259.2 & $2.6\times10^{9}$ \\
15 & 2007 & 54345 & 54391 & \:\;$4.65$ & 54384.0 & $8.6\times10^{8}$ \\
16 & 2008 & 54603 & 54689 & \:\;$2.91$ & - & - \\
17 & 2009 & 54895 & 54926 & \:\;$2.25$ & - & - \\
18 & 2009 & 55139 & 55282 & \:\;$3.64$ & 55256.2 & $6.8\times10^{9}$ \\
19 & 2010 & 55388 & $55491^*$ & \:\;$3.50$ & 55472.8 & $5.2\times10^{9}$ \\
20 & 2011 & 55843 & 55919 & \:\;$9.87$ & 55905.3 & $9.0\times10^{8}$ \\
21 & 2013 & 56444 & 56523 & \:\;$8.54$ & $56493^\S$ & $8.6\times10^{9}$ \\
22 & 2014 & 56838 & 56886 & \:\;$4.71$ & $56856^\ddag$ & $1.3\times10^{10}$ \\
23 & 2015 & 57039 & 57104 & \:\;$1.96$ & - & - \\
\hline
\end{tabular}
\end{center}
$^*$ Calculated by \citet{campana2014}\\
$^\dag$ Collected from {\it MINBAR}, except for $^\S$, which was reported by \citet{serino2016}, and $^\ddag$, reported by \citet{king2016}
\end{table}

It is generally assumed that during a type-I X-ray burst all light elements in the envelope are fused into heavier elements \citep[although see the recent work by][]{keek2017}. The triple-$\alpha$ reaction that ignites the burst fuses the fuel into carbon, after which further nucleosynthesis reactions, depending on the temperature, density, and amount of H present, lead to more proton-rich burning ashes \citep[e.g.][]{schatz2001}. This means that over the course of such a burst, the envelope composition changes drastically. After the burst, a new layer of light elements builds up as the source continues to accrete. Consequently, for sources that undergo frequent X-ray bursts, such as Aql X-1, the composition of the envelope at the end of the outburst will depend on how long ago the last X-ray burst occurred. This gives us an opportunity to place upper limits on the amount of light elements in the envelope. For each outburst, we search the Multi-INstrument Burst ARchive (MINBAR)\footnote{The MINBAR database, maintained by Duncan Galloway, can be found at \url{http://burst.sci.monash.edu/minbar}.} as well as the literature for the time of the last detected type-I X-ray burst ($t_\text{XRB}$; see Table \ref{tab:outburstproperties}).
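The upper limit derived next follows from the column of material accreted between $t_\text{XRB}$ and the outburst end $t_0$; a minimal sketch of this estimate (in Python; a simplified version that ignores composition and redshift corrections, with illustrative inputs):

\begin{verbatim}
import numpy as np

R    = 11e5                        # stellar radius [cm]
area = 4.0 * np.pi * R**2          # surface area [cm^2]

def y_max(mdot_avg, t_xrb, t_0):
    """Upper limit on the light-element column depth [g/cm^2].

    mdot_avg: average accretion rate between the last detected
    burst and the outburst end [g/s]; t_xrb, t_0: MJD."""
    dt = (t_0 - t_xrb) * 86400.0   # days -> seconds
    return mdot_avg * dt / area

# e.g. outburst 1, using the outburst-averaged rate from the
# Table (~0.54 per cent of Mdot_Edd) as a stand-in:
print(y_max(0.0054 * 1.73e18, 50288.4, 50313.0))  # ~1e9 g/cm^2
\end{verbatim}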
Based on the average accretion rate in the time between the last X-ray burst and the end of the outburst, we calculate the amount of material that is accreted in this period. Assuming a stellar radius $R=11$ km, an upper limit on the helium column depth $Y_\text{He,max}$ is derived (see Table \ref{tab:outburstproperties}). It should be noted that this only places very rough upper limits on the envelope composition, since X-ray bursts are easily missed by monitoring telescopes because of their short duration (a few minutes at most). Simple models show that for local mass accretion rates $0.1\lesssim\dot{m}/\dot{m}_\text{Edd}\lesssim1.0$, helium will ignite unstably as soon as the accreted layer reaches the thermal instability limit, which has been calculated to be at $y_\text{ign}\simeq3\times10^8 \text{ g cm}^{-2}$ \citep{bildsten1998}. Unfortunately, all upper limits are higher than the column depth required to trigger an X-ray burst. We could therefore not use these results as constraints in our models.

\section{Results}

\subsection{Comparing models}

We fitted the cooling curves of Aql X-1 for a variety of models, taking into account different assumptions for the input parameters. Here we describe each of these models and their results. In all models, the assumed mass is $M=1.6 \text{ M}_\odot$ and the radius $R=11$ km, consistent with the spectral fits of the quiescent cooling data \citep{waterhouse2016}. All 23 outbursts are taken into account in each model, but we allow the parameters for envelope composition and shallow heating to change between the outbursts (unless stated otherwise). For each model the total $\chi^2$ is minimised in order to obtain the best-fit parameters, and we calculate $1\sigma$ errors on each of the parameters. It should be noted that for outburst 19 we have only one quiescent observation, and hence the parameters of this outburst cannot be constrained with high accuracy. For the 18 outbursts for which we have no cooling observations, we fix the parameters to default values unless stated otherwise. The value for a fixed envelope composition is $y_\text{L}=10^8\text{ g cm}^{-2}$. We assume this is a valid estimate, considering the accreted light element layer required to trigger a type-I X-ray burst ($y_\text{ign}\simeq3\times 10^8\text{ g cm}^{-2}$). The default amount of shallow heating is $Q_\text{sh}=1.5$ MeV nuc$^{-1}$, at $\rho_\text{sh,min}=4\times 10^8\text{ g cm}^{-3}$, which equals the typical amount of shallow heating found for the other outbursts.

\subsubsection{Primary model: a low impurity crust and variable shallow heating and envelope composition}

\begin{figure}
\includegraphics[width=\columnwidth]{./Results_Aql_X-1_bestfit_2.pdf}
\caption{Calculated cooling curves of outbursts 14, 19, 20, 21, and 23 for our primary model of Aql X-1. The amount and depth of shallow heating and the envelope composition were changed between different outbursts to minimise the total $\chi^2$ value, while all other input parameters are constant for all 23 modelled outbursts. The arrows in the panel of cooling curve 20 indicate the times of the observed accretion flares \citep{cotizelati2014}. Cooling curves of the remaining 18 modelled outbursts are not shown, because no quiescent observations are available to compare the calculated models with.}
\label{fig:bestfit}
\end{figure}

\begin{table}
\setlength{\extrarowheight}{0.1cm}
\begin{center}
\caption{Fit parameters with $1\sigma$ errors for the primary model. In this model the crust impurity parameters are fixed to 1.0.
We fitted for the initial core temperature $\tilde T_0$ and the shallow heating and envelope parameters. For the five outbursts with quiescent observations the shallow heating and envelope parameters were kept untied between outbursts in the fit. For the other 18 outbursts these parameters were fixed to canonical values (indicated by the values without errors).}
\label{tab:bestfit}
\begin{tabular}{lllll}
\hline
Outburst & $\tilde T_{0}$ & $Q_\text{sh}$ & $\rho_\text{sh,min}$ & $\log(y_\text{L})$ \\
 & $\times10^7$ K & MeV nuc$^{-1}$ & $\times10^9$ g cm$^{-3}$ & g cm$^{-2}$ \\
\hline
14 & & $2.9^{+1.6}_{-1.2}$ & $0.1^{+1.3}_{*}$ & $8.2^{+1.2}_{*}$ \\
19 & & $1.3^{+1.4}_{-1.0}$ & $0.3^{+35.9}_{*}$ & $8.8^{*}_{*}$ \\
20 & & $3.7^{+1.5}_{-0.9}$ & $0.4^{+7.9}_{*}$ & $8.3^{+0.7}_{-0.9}$ \\
21 & & $2.3^{+0.5}_{-0.3}$ & $0.4^{+0.7}_{*}$ & $8.8^{+1.1}_{-1.5}$ \\
23 & & $0.9^{+0.8}_{-0.6}$ & $2.8^{+13.7}_{*}$ & $9.8^{*}_{-2.5}$ \\
Other & $8.9^{+2.3}_{-1.5}$ & $1.5$ & $0.4$ & $8.0$ \\
\vspace{-0.35cm} \\
\hline
\end{tabular}
\end{center}
$^*$ The error exceeds the maximum or minimum allowed value of the parameter in our model.
\end{table}

\begin{figure*}
\includegraphics[width=\textwidth]{./Results_full_model_bestfit.pdf}
\caption{Calculated effective temperature as a function of time during all 23 modelled outbursts, for the primary model assuming a low-impurity crust. The red points indicate the quiescent observations during the cooling curves of outbursts 14, 19, 20, 21, and 23 (see Figure \ref{fig:bestfit} for a close-up of those cooling curves). The red lines represent the observable base level of the source for the envelope composition assumed at that time, i.e. the effective temperature to which the source would cool down if the recurrence time were long enough to restore crust-core equilibrium. The discontinuities in both the calculated effective temperature and the base levels are caused by the fact that we allow an instantaneous change in envelope composition at the start of a new outburst. The envelope composition remains constant during the course of an outburst and the subsequent quiescent period. The vertical dotted line indicates the start of the 2016 outburst \citep{sanna2016a,sanna2016b}, and shows that the source needs more time to cool down to the base level than the recurrence time.}
\label{fig:full_bestfit}
\end{figure*}

Figure \ref{fig:bestfit} shows the calculated cooling curves for the model in which we leave the core temperature free and allow the envelope composition and the amount and depth of shallow heating to vary for each of the five outbursts with cooling data. The best-fit parameters are shown in Table \ref{tab:bestfit} and also as model 1 in Table \ref{allmodelparams}. In this model we kept the impurity parameter of all density regions in the crust fixed at $Q_\text{imp}=1$. We investigated various possibilities for the impurity parameter of the crust by changing the input parameters by hand, but found that none of the cooling curves showed indications of an inner or outer crust that is either more or less conductive than the canonical value of $Q_\text{imp}=1$. We will refer to the model described in this section as the primary model. Matching the calculated cooling curves with the observations requires finding the right combination of the amount and depth of shallow heating and envelope composition for each individual outburst, while taking into account that the core temperature remains constant for all outbursts.
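Schematically, the automated fit therefore minimises a single global $\chi^2$ in which the core temperature is shared by all outbursts while the shallow heating and envelope parameters remain untied. A minimal sketch of this structure (in Python; the toy cooling-curve function below is a stand-in for a full {\tt NSCool} run, not the physical model):

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def cooling_curve(t, T0, Qsh, rho_sh, yL):
    # Toy stand-in: in the real fit this would be an NSCool run
    # for one outburst with the given parameters.
    return T0 / 1e6 + 40.0 * Qsh * np.exp(-t / (30.0 * yL)) \
           + 2.0 * np.log10(rho_sh)

def total_chi2(p, datasets):
    # p = [T0, then (Qsh, rho_sh, yL) per outburst]: the core
    # temperature is tied, the rest is free per outburst.
    T0, per_ob = p[0], np.reshape(p[1:], (-1, 3))
    chi2 = 0.0
    for (t, Teff, err), (Qsh, rho_sh, yL) in zip(datasets, per_ob):
        model = cooling_curve(t, T0, Qsh, rho_sh, yL)
        chi2 += np.sum(((Teff - model) / err) ** 2)
    return chi2

# datasets = [(t_i, Teff_i, err_i), ...], one per cooling curve;
# res = minimize(total_chi2, p0, args=(datasets,),
#                method='Nelder-Mead')
\end{verbatim}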
We find that for outbursts 14, 20, and 21 we need a significant amount of shallow heating of $2.3-3.7$ MeV nuc$^{-1}$ at shallow depths ($1-4 \times 10^8 \text{ g cm}^{-3}$) to explain the steep cooling curves (see Figure \ref{fig:bestfit}). Cooling curve 23 has a significantly shallower temperature evolution and consequently only $0.9$ MeV nuc$^{-1}$ is needed, at a larger depth of $2.8\times 10^9\text{ g cm}^{-3}$, to explain these observations. In this best-fit model, we find $Q_\text{sh}=1.3$ MeV nuc$^{-1}$ for outburst 19. Assuming that the neutron star has a high conductivity crust (low impurity), it needs $\sim1500$ days to cool back to crust-core equilibrium, as can be seen from the last outburst in Figure \ref{fig:full_bestfit}. The observed base level corresponding to crust-core equilibrium depends on the core temperature and the envelope composition. Since the cooling observations only reach $\sim300$ days into quiescence and the source does not show quiescent episodes lasting multiple years in the observed epoch, we do not know what the base level of Aql X-1 is. From our model we find that the data are best fitted with a core temperature of $\tilde T_0=8.9\times10^7$ K, in combination with light element column depths ranging from $y_\text{L}=1.6\times10^8$ g cm$^{-2}$ to $y_\text{L}=6.3\times10^9$ g cm$^{-2}$ for the different outbursts with cooling data, corresponding to observed base levels of $\sim 88-99$ eV. Since the envelope composition determines the conversion from boundary temperature to effective temperature, varying this parameter shifts the complete calculated cooling curve, and thus also the base level, to higher or lower temperatures (see Figure \ref{fig:Tb-Ts}). This means that even though the core temperature does not vary, the base level does when the envelope composition is assumed to change between outbursts \citep{brown2002}. Figure \ref{fig:full_bestfit} shows the results of the full model: the calculated effective temperature as a function of time during all outbursts and subsequent cooling phases. The red line indicates the base level to which the source would cool down if it restored crust-core equilibrium, which is determined by the core temperature in combination with the variable envelope composition. Because we vary the envelope composition per outburst, the base level goes up or down between outbursts. From this figure one can clearly see that the source does not have enough time between outbursts to cool down to the base level. The two reflares at 82 and 208 days after the end of outburst 20 were taken into account in the primary model. We assumed for the reflares the same conditions as during the main outburst, but kept the shallow heating strength fixed at 0 MeV nuc$^{-1}$. The reflares show no effect on the cooling curve (Figure \ref{fig:bestfit}). If shallow heating is turned on during the reflares at the same strength as during the main outburst, no change in $\chi^2$ is observed.

\subsubsection{Constant envelope composition}

To match the calculated cooling curves with the observations, we varied the envelope composition for different outbursts in the primary model. We also created a model in which we kept the envelope composition tied between all 23 outbursts: the envelope composition was left as a free fit parameter, but had to remain the same for all outbursts. The other free parameters were the core temperature and the shallow heating strength and depth (both of which were untied).
The $\chi^2$ value obtained when keeping the envelope composition constant between outbursts was slightly higher than that of the primary model (see model 2 in Table \ref{allmodelparams}). The differences in steepness and initial surface temperature of the cooling curves between outbursts can be (partly) compensated for by varying the other fit parameters. However, in order to keep the envelope composition tied between outbursts, a larger difference in shallow heating parameters between the outbursts is required. Specifically, for outburst 23 the required shallow heating increases to 16.4 MeV nuc$^{-1}$ at an unusually high density ($\rho_\text{sh,min}=1.7\times 10^{10}\text{ g cm}^{-3}$).

\subsubsection{Constant shallow heating parameters}

The fit values for the amount and depth of shallow heating for the different outbursts in the primary model are similar to the typical values found in crust-cooling systems that require shallow heat. However, since the origin of the shallow heat has not yet been constrained, it is unclear how much the amount of shallow heating differs per source, per outburst, or even during an outburst. \citet{deibel2015} and \citet{parikh2018} showed that the outbursts of MAXI J0556-332 are best fitted with different shallow heating parameters. One possibility to explain variations in shallow heat between different outbursts of one source is that the shallow heating might be related to the accretion flow configuration, as proposed by \citet{zand2012}. We therefore tried a variety of models to investigate different possibilities for shallow heating and compared the resulting calculated cooling curves with those obtained using the primary model. First, we created a model in which we kept all shallow heating parameters (both strength and depth) tied between all outbursts. The obtained model has a significantly higher $\chi^2$ than the primary model (see model 3 in Table \ref{allmodelparams}). This is due to the fact that cooling curves 14, 20, and 21 have significantly steeper slopes and a larger observed temperature drop than cooling curve 23, which is difficult to account for when using a constant amount of shallow heating. Changing the envelope composition between outbursts cannot compensate for this effect, because this shifts the entire cooling curve while leaving the steepness unaffected. We then also tried two models in which we tied only the shallow heating strength (model 4) or only the depth (model 5) between the outbursts with cooling observations. Model 4 has a slightly higher $\chi^2$ and model 5 a similar $\chi^2$ compared to the primary model. Second, we created a model in which we could turn off shallow heating when Aql X-1 is either in the hard or the soft state during an accretion outburst. During outbursts 14 and 23 Aql X-1 was observed to be in the hard state throughout the entire outburst, while during outbursts 19, 20, and 21 Aql X-1 was mainly in the soft state, except for a small part of the rise and decay of the outburst (see Figure \ref{fig:Aql_X-1_individual_outbursts_hardsoft}). First we created models in which we kept the amount and depth of shallow heating (activated only when the source was in either the hard or the soft state) tied between outbursts. However, a good fit of the data cannot be obtained in this way for either the hard state or the soft state shallow heating assumption. Therefore, we then tried models in which the amount of hard/soft state shallow heating is allowed to be different per outburst.
Turning on shallow heating only when the source is in the soft state allows us to obtain good agreement between the calculated cooling curves and the observations for outbursts 19, 20, 21, and 23, as long as the amount of shallow heating is allowed to vary per outburst. However, this model cannot reproduce the data of cooling curve 14, because during this outburst the source only resides in the hard state. Obtaining a $20$ eV drop in the first 20 days after the end of the outburst is not possible without shallow heating. This problem does not arise for outburst 23, because that cooling curve is so shallow that it can be modelled without shallow heating. If shallow heating is turned on only when the source is in the hard state, a model can be obtained that provides a satisfactory fit of the observations for all outbursts. However, outbursts 20 and 21 have rather significant temperature drops in their cooling curves, while the source is in the hard state only for a very short time during these outbursts. On the other hand, outburst 23 has a very shallow cooling curve, while the source is in the hard state for a long time. Consequently, to obtain good quality fits for all cooling curves, the difference in the amount of shallow heating between outbursts has to be a factor of $\sim10$ when only allowing for shallow heat when the source is observed to be in the hard state.

\subsubsection{Low conductivity pasta layer}\label{res:pasta}

In the primary model we did not take into account the effect that a pasta layer deep inside the inner crust of a neutron star can have on the temperature evolution during and after outbursts. A pasta layer might have a high impurity factor \citep{pons2013,horowitz2015} that can slow down the heat flow to the core and can hence allow a large difference in temperature between the core and the crust. This will influence the cooling curve in the last phase before it reaches the base level \citep[see e.g.][]{deibel2016}. To test how a pasta layer would influence our obtained results, we created a model that includes a high impurity layer (with either $Q_\text{imp}=40$ or $Q_\text{imp}=100$; \citealt{horowitz2015,pons2013}) representing the pasta phase in the crust at a depth $\rho>8\times 10^{13}$ g cm$^{-3}$. Outside the pasta layer, the crust is assumed to be highly conductive ($Q_\text{imp}=1$). When a pasta layer is included in our model, very little influence on the calculated cooling curves is observed if we use the same input parameters as for the primary model (Table \ref{tab:bestfit}). All cooling curves lie $\sim 1-2$ eV above those of the primary model, due to the fact that heat can flow less easily into the core. However, the increase in $\chi^2$ value can easily be compensated for by adjusting other parameters (for example, the envelope composition). As the maximum duration of the quiescence period between the modelled outbursts ($\sim 500$ days) is shorter than the thermal time scale of the deepest layers of the crust, the observed cooling curves never reach the phase where they are sensitive to the conductivity of the pasta layer. We can therefore not constrain the presence of a low conductivity pasta layer in Aql X-1.

\subsection{Outburst 21: determining $t_0$ and modelling the outburst}\label{res:2012outburst}

Since we used a different start time $t_0$ for the quiescent phase of outburst 21 than \citet{waterhouse2016}, we made additional models using their end time to test whether we can obtain a similarly good fit of the data with their quiescent start time.
With this end date, the otherwise last decay point becomes part of the quiescence observations, resulting in an extra point that is considered to be part of the cooling period. This observation was taken only two days after the end of the outburst (as assumed by \citealt{waterhouse2016}) and the measured temperature is $182$ eV. This is significantly higher than the initial temperatures of any of the other cooling curves and requires a steep decay over the first $\sim 40$ days of quiescence.

\begin{figure}
\includegraphics[width=\columnwidth]{./Results_outburst_21.pdf}
\caption{Best-fit calculated cooling curves for outburst 21 when using the end time of the outburst calculated by \citet{waterhouse2016}, instead of the end time that is calculated in this work. The first model assumes a constant accretion rate during the outburst and does not take into account previous outbursts (dashed curve), as in \citet{waterhouse2016}, while the second model uses the method described in this work (solid curve). Note that for each of the models we show the best fit of the data and therefore the models have different input parameters.}
\label{fig:2012_models}
\end{figure}

Besides the difference in $t_0$, the study performed by \citet{waterhouse2016} differs from ours in the fact that it does not take into account previous outbursts and that it uses a constant mass accretion rate during the outburst. Moreover, the constant accretion rate assumed during outburst 21 ($\dot{M}=0.23 \dot{M}_\text{Edd}$) is higher than the time-averaged accretion rate that we determined ($\langle\dot{M}(t)\rangle=0.10\dot{M}_\text{Edd}$ if we assume their outburst start and end times). Finally, the outburst duration that \citet{waterhouse2016} assumed (0.15 yr) is shorter than the duration that we determined (0.21 yr). Therefore, we made two models that both assume the end time for outburst 21 determined by \citet{waterhouse2016}: one that is otherwise the same as the primary model, and one that uses a constant mass accretion rate equal to that of \citet{waterhouse2016} for a duration of 0.15 yr, without taking into account previous outbursts. The second model thus uses the same assumptions as made by \citet{waterhouse2016}. The effects of taking into account previous outbursts will be discussed in Section \ref{sec:historyinfluence}. Figure \ref{fig:2012_models} shows the best-fit calculated cooling curves for these two models. It should be noted that the best-fit parameters of the two models are different. It is evident that a reasonable fit through all four observations is only possible if a constant accretion rate is assumed during the outburst (dashed curve), although this fit still undershoots the first quiescence observation. In this case the crust remains hot until the end of the outburst and consequently the cooling curve has a relatively steep decay. On the other hand, for the model that uses a time-variable accretion rate based on the observed light curve (solid curve), the temperature profile in the crust changes during the outburst decay; the outer layers of the crust already cool before the end of the outburst due to the decreasing accretion rate. Consequently, the calculated cooling curve is more convex than that of the model that assumes a constant accretion rate, and we were not able to calculate a cooling curve for this model that is steep enough to reasonably fit all four observations.
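For reference, the end-time determination used here and in Figure \ref{fig:decay_fit} amounts to fitting exponentials through the decay and quiescence count rates and intersecting them; a minimal sketch (in Python, with illustrative times and rates, not the actual XRT data):

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit, brentq

def expdecay(t, A, tau):
    return A * np.exp(-t / tau)

# Illustrative XRT times (MJD) and count rates for the outburst
# decay and the quiescence observations:
t_dec = np.array([56505.0, 56510.0, 56515.0])
r_dec = np.array([40.0, 8.0, 1.6])
t_q   = np.array([56530.0, 56580.0, 56680.0])
r_q   = np.array([0.060, 0.050, 0.035])

t_ref = t_dec[0]
p_dec, _ = curve_fit(expdecay, t_dec - t_ref, r_dec, p0=[40.0, 3.0])
p_q,   _ = curve_fit(expdecay, t_q - t_ref, r_q, p0=[0.1, 300.0])

# t0 = intersection of the two fitted curves:
diff = lambda t: expdecay(t, *p_dec) - expdecay(t, *p_q)
t0 = t_ref + brentq(diff, 0.0, 100.0)
print(t0)
\end{verbatim}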
\section{Discussion}

\subsection{Modelling multiple outbursts}

We extended our neutron star crust cooling model {\tt NSCool} \citep{page2013,ootes2016} to model the accretion rate variability during multiple accretion outbursts based on the observed light curves, and applied this to Aql X-1, for which 23 outbursts have been observed between 1996 and 2015. After five of those observed outbursts, {\it Swift}/XRT quiescence observations have been obtained that can be interpreted as cooling of the accretion-heated neutron star crust \citep{waterhouse2016}. We thus assume that the observed intensity decay in quiescence is caused by crust cooling and not by accretion phenomena (with the exception of the two observed accretion flares after outburst 20). Aql X-1 serves as a test source to investigate the effects of the accretion history on current outbursts in transients with short outbursts and recurrence times. We fitted the cooling curves of the five outbursts with cooling data to look for the best-fit crustal parameters, while taking into account the full accretion history. In the fitting process we investigated the possibility that some of the crustal parameters change between different outbursts. We found that the calculated cooling curves compare well with the observations if we assume a high thermal conductivity crust ($Q_\text{imp}=1$) and if we vary per outburst the envelope composition and the amount and depth of heat generated by the shallow heating mechanism. A model in which all parameters are tied between the outbursts does not reproduce the data well (see model 6 in Table \ref{allmodelparams}). The amount and depth of the shallow heating are similar to the values found for other crust cooling sources. Most importantly, we find that Aql X-1 does not have enough time between outbursts (250 days on average, with a maximum of $\sim500$ days) to cool down to the crust-core equilibrium state (the base level of the cooling curve, which depends on the initial core temperature and the (variable) envelope composition). For this reason we cannot constrain the base level of this source (the required quiescence period would have to be $>10^3$ days to observe this). The presence of a potentially high-impurity (low conductivity) pasta layer in the inner crust remains unconstrained as well. Effects of such a layer can only be observed a few thousand days after the start of quiescence, which is much longer than the outburst recurrence time. Outbursts 20 and 21 were previously fitted by \citet{waterhouse2016}. Comparing outbursts 20 and 21 from our primary model with the results of \citet{waterhouse2016}, we find similar results for the shallow heating strength, and also find that the best-fit crustal parameters of the two outbursts are consistent with each other within the $1\sigma$ errors. We made an additional model in which we tied all parameters of outbursts 20 and 21 (model 7 in Table \ref{allmodelparams}), which confirms that the two remarkably similar outbursts can be modelled with the same assumptions for the microphysics in the crust. Consistent with \citet{waterhouse2016}, from our cooling calculations we predict the base level of Aql X-1 to be $88-99$ eV. However, different from that study, we used time-dependent accretion rates during the outbursts, calculated from the observed light curve, and take into account the effects of previous outbursts on the thermal state of the neutron star crust.
Also, we use different end times for outbursts 21 and 23 (see Section \ref{sec:discussion-endtime}).

\subsubsection{Constraining crustal parameters from multiple outburst modelling}

Modelling in detail the effects of the full outburst history on the thermal state of the neutron star crust provides advantages for constraining the microphysics of the crust compared to modelling only one outburst. \citet{waterhouse2016} showed that Aql X-1 has two outbursts that are so similar in both outburst and quiescence behaviour that their cooling curves can be combined and modelled as one. But even for outbursts where this is not the case, it is advantageous to model all observed outbursts collectively, by which we mean that we model the full accretion and quiescence history (chronologically) in one run of the code. Whereas some parameters might change between outbursts (such as the composition of the envelope and likely the strength and depth of the shallow heating), others must be constant: mass, radius, and initial core temperature. In this study we have assumed that the lattice impurity parameter (which sets the thermal conductivity) of the different layers of the crust is constant between outbursts. Modelling multiple cooling curves collectively while keeping some parameters constant reduces the degeneracy between input parameters that exists in cooling models. While for one cooling curve very similar results can sometimes be obtained with different combinations of parameters, this degeneracy is partly lifted when modelling multiple cooling curves. This is enhanced by the fact that with more cooling curves of the same source, the sampling of each part of the cooling curve (corresponding to the thermal properties of specific layers of the crust) increases. Although there is not one constant underlying cooling curve for all outbursts, at similar times in different cooling curves the same layers are probed. This means that if one cooling curve provides multiple possibilities for some crustal parameter, these possibilities might be further constrained from another cooling curve if it has better observational sampling around the time that the effects of that parameter can be observed.

\begin{figure*}
\includegraphics[width=0.975\columnwidth]{./Tprofile.pdf}\qquad\qquad\includegraphics[width=0.975\columnwidth]{./Tprofiles_23.pdf}
\caption{Left: temperature profiles of the crust of the neutron star in Aql X-1 at the start of each outburst, as obtained with the primary model. The dashed curve is the temperature profile at the start of outburst 1, when the crust and the core are in thermal equilibrium. The coloured numbers indicate the outburst numbers. Right: temperature profile at the end of outburst 23 with (solid curve) and without (dashed curve) taking into account the full accretion history during the 22 previous outbursts. Both panels use the crustal parameters as determined from the primary model. Note that both panels show the local, non-redshifted temperature (a plot of the redshifted internal temperature $\tilde T$ would show a uniform $\tilde T$ in the core). The dashed vertical line indicates the crust-core transition.}
\label{fig:Tprofiles}
\end{figure*}

In the case of Aql X-1 it was challenging to model the differences in initial temperature (the temperature observed around the start of the cooling curve) and steepness (cooling rate) between the different cooling curves (see Figure \ref{fig:bestfit}).
These differences are partly caused by differences in the outburst properties (e.g. duration, mass accretion rate, and outburst decay rate), but otherwise depend on the input parameters of the model. Cooling curves 14 and 23 both start at $\sim 120$ eV, but cooling curve 14 shows a 10 eV drop in temperature in only 10 days, while cooling curve 23 only shows a $\sim5$ eV decay in temperature over more than 100 days. On the other hand, cooling curve 21 starts out very hot at $159$ eV one day after the start of quiescence, and cooling curve 20 shows a decay in temperature at late times ($\gtrsim100$ days into quiescence) where the cooling curve of outburst 23 seems roughly flat. Outbursts 20 and 21 had larger overall accretion rates than outbursts 14 and 23. Due to its high outburst accretion rate, the high initial temperature of outburst 21 (159 eV) could be reproduced with a slightly smaller amount of shallow heating than that of outburst 14, even though after outburst 14 the source was observed at a lower temperature of 120 eV. Because of the higher accretion rate during outbursts 20 and 21, a lot of heat is stored in the outer crust when using a significant amount of shallow heat ($2.3-3.7$ MeV nuc$^{-1}$ in this case). This heat (which comes on top of the residual heat from the previous outbursts) can account for the late-time decay in temperature of the cooling curve corresponding to that outburst. To overcome the differences in the cooling trends while using the same initial core temperature, outburst 14 had to have significantly stronger shallow heating than outburst 23 to obtain the observed steeper initial decay, in combination with a lower light element column depth to reduce the initial temperature to a similar level between the two. Similar arguments hold when one compares the cooling curves of outbursts 20 and 23. During these cooling periods the source has a similar observed temperature at $\sim200$ days into quiescence, while for cooling curve 20 the cooling trend at that time is much stronger than for curve 23 (comparing the observations of the two cooling curves $\sim100-300$ days into quiescence). This indicates that cooling curve 20 might have a lower base level than cooling curve 23 and hence that, apart from stronger shallow heating during the outburst, the two are best modelled with different envelope compositions (i.e. cooling curve 20 must have a lower light element column depth). This is also what we find in the primary model, although the envelope composition of outburst 23 is not well constrained. All in all, despite the limited coverage of the quiescence episodes of Aql X-1, reasonable constraints can be obtained when modelling all outbursts collectively.

\subsubsection{The influence of the outburst history on the current outburst and cooling calculations}\label{sec:historyinfluence}

By modelling the full outburst history of Aql X-1 collectively, we showed that with the assumed crustal properties the source does not have enough time between outbursts to cool down to its base level (Figure \ref{fig:full_bestfit}). This means that crust-core equilibrium is not fully restored after an outburst before the start of the next one. The left panel in Figure \ref{fig:Tprofiles} shows the temperature profiles in the neutron star crust at the onset of each outburst for the primary model. The dashed black curve is the temperature profile at the start of the first outburst and hence shows the temperature profile when the crust and core are in equilibrium.
The first outburst is rather faint and has a low time-averaged accretion rate (see Figure \ref{fig:light_curve} and Table \ref{tab:outburstproperties}), but nevertheless at the onset of the second outburst (147 days after the end of the first outburst) the temperature profile (cyan curve in Figure \ref{fig:Tprofiles}) is elevated compared to crust-core equilibrium. If all outburst and recurrence times were similar, this would have an enhancing effect, in the sense that at the start of each consecutive outburst the crust would be a little hotter. However, this is not the case for our model of Aql X-1, since the recurrence times, accretion rates, outburst profiles, and the shallow heating differ per outburst. As can be seen from Figure \ref{fig:Tprofiles}, for each new outburst the initial temperature profile is different. At the start of outburst 19 the outer crust is still very hot, because outburst 18 had a relatively long duration and the quiescence period between the two outbursts is one of the shortest (106 days). On the other hand, at the onset of outburst 18 the temperature profile is not extreme, because the preceding outburst was short (31 days) and the quiescence periods after outbursts 16 and 17 were both relatively long (206 and 213 days, respectively), allowing much heat to flow out of the crust (into the core and also from the surface). Yet the crust is hottest at the onset of outburst 7, which was preceded by an outburst with an average outburst duration (64 days) and quiescence period (201 days), but with a high time-averaged accretion rate ($\langle\dot{M}\rangle=0.10\,\dot{M}_\text{Edd}$). The right panel of Figure \ref{fig:Tprofiles} visualises the consequences of taking into account previous outbursts in our crust cooling model. Here we show the temperature profile at the end of outburst 23 for the primary model taking into account all preceding outbursts (solid curve) and the temperature profile assuming the same crustal parameters, but without taking into account the accretion history (dashed curve). A significant difference in temperature in all layers of the crust can be observed, resulting in a difference in observed surface temperature of $\sim 6$ eV at the onset of the cooling curve. Moreover, the decay trend at later times in the cooling curve is also affected, due to the significant difference in the amount of heat stored in the deeper layers of the crust. This shows that for sources with short recurrence times, such as Aql X-1, taking into account the accretion history has a strong effect on the exact values of the inferred crustal parameters of the neutron star.

\subsection{Variation in parameters between outbursts}

\subsubsection{Variable envelope composition}

The envelope composition was allowed to vary between outbursts in the primary model. When the envelope composition was tied between outbursts, the $\chi^2$ increased slightly, and to reproduce the data the shallow heating of outburst 23 had to be unusually strong and deep. This suggests that the envelope composition at the end of each outburst is different. A changing envelope composition between outbursts is supported by our current understanding of the envelopes of accreting neutron stars \citep{brown2002}. The fact that type-I X-ray bursts are observed for this source indicates that there is constant buildup of a layer of light elements that is explosively fused into heavier elements once the layer reaches a column depth at which it becomes thermally unstable.
The envelope composition at the end of an outburst would then depend on how much matter has been accreted since the last X-ray burst, and this amount likely varies significantly from outburst to outburst.

\subsubsection{Variable shallow heat?}

In our primary model, we found that the strength of the shallow heating varied per outburst between 0.9 and 3.7 MeV nuc$^{-1}$ and the minimum depth between $10^8-3\times 10^9\text{ g cm}^{-3}$. Although the 1$\sigma$ errors on the shallow heating depth in this model do overlap, this is not the case for the shallow heating strength. A model in which we kept both shallow heating parameters tied between outbursts confirms that the data cannot be reproduced with constant shallow heating parameters. This supports the conclusion of \citet{deibel2015} and \citet{parikh2018} for MAXI J0556-332 that not all outbursts of the same source need the same shallow heating. If only one of the shallow heating parameters is tied between outbursts, the $\chi^2$ is close to that of the primary model, but this requires a larger variation of the shallow heating parameter that is not tied between outbursts. Although the shallow heating parameters of the primary model are not extreme compared to other sources, one might ask whether or not it is possible that the amount of shallow heating changes between or even during outbursts. For some of the suggested origins of shallow heating it might be possible that the shallow heating is time-variable. If the origin of the shallow heating is related to the nuclear reactions taking place in the crust \citep[e.g. through additional electron captures, fusion reactions, or due to uncertainties in the nuclear symmetry energy,][]{estrade2011,horowitz2008,steiner2012}, it seems unlikely that the amount of heat released per accreted nucleon would vary per outburst, since those reaction rates should be directly proportional to the accretion rate. Alternatively, if the strong decay in the initial part of the cooling curve is caused by compositionally driven convection in the ocean due to chemical separation of light and heavy elements at the ocean-crust boundary \citep[which reduces the need for an additional heat source,][]{horowitz2007,medin2011,medin2014,medin2015}, then the rate of decay should depend on the ocean composition. Although the ocean composition will evolve over time, it is unclear if the required level of variation between outbursts can be obtained over the short timescales of the outbursts of Aql X-1. Additionally, this mechanism cannot account for the large amount of shallow heating required for MAXI J0556-332. Another proposed mechanism for the additional heat required to model the observed cooling curves originates from differential rotation between the liquid ocean and the solid crust, as matter forms a levitation belt around the equator of the neutron star when it accretes from a disk \citep{inogamov2010}. In this model, gravity waves excited in the atmosphere spin up this layer, which extends down to the ocean. At the ocean-crust boundary turbulent braking causes heat release that might account for the shallow heating of the outermost layer of the neutron star crust. The amount of shallow heat that can be released by this mechanism depends, among other things, on how far the accretion disk extends towards the surface of the neutron star. This would suggest that the amount of shallow heating can be time-variable, because the inner disk radius can vary in time as well \citep[see e.g. the review by][]{done2007}.
It would also allow the depth to be time-variable, because the depth of the ocean can change in time. Finally, the shallow heating strength might depend on the geometry of the accretion flow, which is time-variable. The accretion process can be spherical, for example when the source accretes from a coronal flow or some sort of radiatively inefficient accretion flow, or non-spherical when the accretion disk reaches (close to) the neutron star surface. It is often assumed that quasi-spherical accretion is related to the hard spectral state during outburst and non-spherical accretion to the soft state. It might be that the shallow heating process can only be activated in one of the two circumstances, or that the shallow heating process is non-uniform when the accretion process is too. If indeed the spectral hardness of the emission during outburst is related to the accretion geometry \citep[as e.g.][proposed for accreting black holes]{esin1997}, one would expect that the shallow heating might be related to a specific accretion state. Our models in which we allowed shallow heating to be active only when the source is in either the soft state or the hard state did not provide constraining results. During the full extent of outbursts 14 and 23 the source was observed to be in the hard state, but while outburst 23 can be modelled without shallow heating (i.e. if shallow heating is only active in the soft state), this seems unlikely for outburst 14. However, it should be emphasised that cooling curve 14 consists of only 2 observations, and therefore we cannot draw strong conclusions based on this cooling curve. On the other hand, if we activate shallow heating only when the source is in the hard state, the amounts of shallow heating per outburst have to differ by a factor $\sim10$ in order to fit all cooling curves. This is due to the fact that during outbursts 20 and 21 the source is in the hard state only for short durations at the start and end of the outbursts, while their cooling curves require a large amount of deposited heat, which can only be accounted for if the shallow heating during these short-lived hard-state episodes is very strong. If the shallow heating is state-dependent, the cooling curves of Aql X-1 can be fitted best if the strength and depth are allowed to change. We have not found any evidence that shallow heating can be constant between outbursts if it is active only during one specific spectral state. \subsection{Constraining envelope compositions from type-I X-ray bursts} For the primary model we find that the thicknesses of the helium layer in the envelope that follow from the best-fit light-element column depths are, for all outbursts except outburst 23, smaller than the column depth at which an X-ray burst is expected. The cooling curve of outburst 23 is well fitted with a column depth that implies a helium layer slightly thicker than the ignition column depth, but the $1\sigma$ lower limit of the envelope composition lies well below the expected ignition depth. Moreover, to ignite a type-I X-ray burst the temperature in the crust also has to be sufficient, and this is not necessarily the case, especially during the decay of the outburst. In this paper we tried to constrain the envelope composition not only from the cooling curves, but also from the amount of matter accreted since the last reported X-ray burst during the outburst.
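As a rough illustration of this bookkeeping, the sketch below converts a time-averaged accretion rate and the time elapsed since the last detected burst into an accreted column depth, $y = \dot{M}\,\Delta t / (4\pi R^2)$. The Eddington rate and the neutron star radius used here are fiducial values assumed purely for illustration, not the fitted parameters of our model.
\begin{verbatim}
# Order-of-magnitude sketch (not the code used in this paper): column depth
# accreted since the last detected type-I burst, y = Mdot * dt / (4 pi R^2).
# M_EDD and R are fiducial assumed values.
import numpy as np

M_EDD = 1.1e18     # global Eddington accretion rate [g/s] (fiducial)
R     = 1.0e6      # neutron star radius [cm] (fiducial, 10 km)
DAY   = 86400.0    # [s]

def accreted_column(mdot_edd, days):
    """Column depth [g/cm^2] built up at mdot_edd (Eddington units) over days."""
    return mdot_edd * M_EDD * days * DAY / (4.0 * np.pi * R**2)

# e.g. two days at one per cent of Eddington after the last detected burst:
print(f"y ~ {accreted_column(0.01, 2.0):.1e} g/cm^2")   # ~1.5e8 g/cm^2
\end{verbatim}
Such an estimate can then be compared directly with the column depths at which burst ignition is expected.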
All upper limits on the helium column depths that we calculated in this way (see Table \ref{tab:outburstproperties}) exceed the level required to trigger a burst. Therefore, it seems likely that more bursts have happened after the last reported detected one. If that is indeed the case, at least one other layer of light elements would have been fused into heavier elements and the actual column depth of light elements at the end of the outburst would be lower than we calculated. Although the calculated upper limits on the thickness of the helium layer in the envelope of Aql X-1 are thus unlikely to reflect the actual thickness, this method to constrain the envelope composition can be used in future models of crust cooling X-ray bursters. The envelope composition is one of the parameters that are difficult to constrain \citep[unless a significant part of the cooling curve is sampled, see][]{cumming2017}, and it has a significant influence on the calculated cooling curve. If the outburst decay is well sampled with observations, this would allow a precise determination of the end of the outburst as well as of the accretion rate during the decay. On top of those improvements, this would increase the chance of detecting the last X-ray burst during the outburst. Alternatively, simulations of such events might make it possible to determine when the last type-I X-ray burst occurred during an outburst \citep{johnston2017}. Although further theoretical development is required to accurately determine the times of bursts, this would be a very powerful tool to constrain the envelope composition. If an X-ray burst is detected or theoretically predicted late into the outburst decay, combining this date with exact knowledge of the time-dependent accretion rate during the outburst decay and of the outburst end time would allow for a better-constrained upper limit on the light-element column depth. With better constraints on the envelope composition, the shallow heating and core temperature can consequently be constrained to higher accuracy as well. \subsection{Importance of determining the correct end of the outburst}\label{sec:discussion-endtime} Our crust cooling model of Aql X-1 deviates from that of \citet[][]{waterhouse2016}. The main differences are that we model all outbursts collectively rather than individually, to take into account any remaining effects from preceding outbursts, and that we use a time-dependent accretion rate. Additionally, we recalculated the end dates of the outbursts. For outburst 21, we determined the end of the outburst to be five days later than \citet{waterhouse2016} did. As a consequence, one {\it Swift}/XRT observation that was first considered to be obtained when the source was in quiescence is now considered to be taken when the outburst was not fully over yet. An argument that this observation was taken in quiescence is that the source spectrum did not require a power-law component. Although the origin of this component is unknown, it is often associated with accretion \citep[see the discussion in][]{wijnands2015}. Therefore, if this observation was taken when the source was still accreting, it is unknown why this power-law component was not observed. On the other hand, the observation lies significantly above the trend of the quiescence XRT observations (see Figure \ref{fig:decay_fit}), favouring the decay interpretation.
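The importance of dense sampling for pinning down the end date can be illustrated with a toy fit; the sketch below is our own illustrative construction (not the fitting procedure behind Figure \ref{fig:decay_fit}), and the decay shape, noise level and cadence are assumptions.
\begin{verbatim}
# Toy illustration: with one visit every few days, the inferred outburst end
# time scatters by several days. All numbers are illustrative assumptions,
# not Aql X-1 measurements.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

def model(t, f0, tau, fq):
    # exponential decay of the outburst component on top of a quiescent level
    return fq + f0 * np.exp(-t / tau)

tau_true, fq_true = 6.0, 0.01        # days; quiescent flux (arbitrary units)
t_obs = np.arange(0.0, 40.0, 4.0)    # one observation every 4 days

ends = []
for _ in range(200):
    f = model(t_obs, 1.0, tau_true, fq_true) * rng.normal(1.0, 0.1, t_obs.size)
    try:
        (f0, tau, fq), _ = curve_fit(model, t_obs, f, p0=(1.0, 5.0, 0.02),
                                     bounds=([0.1, 1.0, 1e-4], [10., 20., 1.]))
    except RuntimeError:
        continue
    # define the end of the outburst as the moment the outburst component
    # has decayed to the quiescent level
    ends.append(tau * np.log(f0 / fq))
print(f"inferred end time: {np.mean(ends):.1f} +/- {np.std(ends):.1f} days")
\end{verbatim}
The scatter on the inferred end time is set largely by the cadence and by how well the quiescent level is constrained, mirroring the situation for outburst 21 discussed below.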
The models that we computed to investigate the consequences of using the end date determined by \citet{waterhouse2016} and using their accretion assumptions showed that the steep initial decay required with this end date cannot be reproduced unless a constant accretion rate during outburst is assumed (Figure \ref{fig:2012_models}). However, this is inconsistent with the observed light curve. When the accretion rate is determined from the light curve, the crust of the neutron star already starts to cool during the outburst decay, resulting in a shallower cooling curve \citep[see also][]{ootes2016}. Consequently, the potential first cooling observation cannot be fitted along with the next three cooling observations by the model that accurately follows the outburst light curve. We checked that the difference between the two models in time-averaged accretion rate over the full outburst, in combination with the difference in assumed outburst duration, does not influence this outcome. Although the decay of outburst 21 was relatively well sampled with {\it Swift}/XRT, the sampling rate of once per $1-5$ days is insufficient to constrain the outburst end date to within one day. Our findings show that it is critical to have well-sampled monitoring observations during the decay of the outburst to constrain its end time. Additionally, from the comparison of the two models discussed here, we find once more that taking into account accretion rate variability has a significant influence on the calculated cooling curves, in agreement with \citet{ootes2016}. \subsection{Accretion flares}\label{discussion:flares} \citet{cotizelati2014} reported the detection of two accretion flares after outburst 20. We took these flares into account in our model of Aql X-1, and find that when the same model parameters are used for the flares as for outburst 20, the accretion flares have no influence on the calculated cooling curve. The accretion rate during the flares (determined from the light curve) is low enough that the flares do not cause an increase in the temperature during cooling curve 20. Only when the amount of shallow heating during the flares is increased to more than 5.2 MeV nuc$^{-1}$ is the $\chi^2$ affected. This indicates that the observed flares in Aql X-1 do not alter the underlying cooling curve from outburst 20. This is consistent with the conclusion of \citet{fridriksson2011}, who calculated the potential effect of flares in XTE J1701--462. It should be noted that the presence of an accretion flare can affect the envelope composition. As more matter is dumped on the surface of the neutron star during the flare, the light-element column depth increases. However, the calculated average accretion rate during the reported flares in Aql X-1 is so low ($\sim 10^{-4}\text{ }\dot{M}_\text{Edd}$) that the increases in light-element column depth would be $2.8\times10^{7}\text{ g cm}^{-2}$ and $1.1\times10^{7}\text{ g cm}^{-2}$, respectively, which fall within the error bar on the envelope composition of outburst 20 for the primary model. However, if the accretion rate during the reflare is larger, or if the assumed light-element column depth at the end of the outburst is smaller, this effect on the envelope composition could be more significant and would have to be taken into account.
The observed effective temperature will increase after a reflare if the light-element column depth increases, but it is also possible that a reflare causes a significant drop in the observed cooling curve if an X-ray burst occurs during the accretion reflare. In the latter case, the light-element column depth can be smaller after the reflare than after the main outburst. \section{Conclusions} By extending our crust cooling code \texttt{NSCool} to model multiple outbursts, we were able to model the accretion history of Aql X-1 from 1996 up to 2015. Assuming that the quiescence observations of this source reported on by \citet{waterhouse2016} can be interpreted as cooling of the crust, we fitted the cooling curves to constrain the crustal parameters of the neutron star. Our main conclusions are: \begin{itemize} \item Aql X-1 does not have enough time between outbursts to restore crust-core equilibrium. Therefore, the lowest observed temperature of Aql X-1 is expected to be higher than the base level. Additionally, since thermal equilibrium is not restored between outbursts, the calculated cooling curves are strongly dependent on the accretion history. It is therefore important to take previous outbursts into account when modelling transients with relatively short recurrence times. \item We find that the quiescence observations of Aql X-1 are well fitted if the envelope composition and shallow heating parameters (both strength and depth) are allowed to vary between outbursts. If both the shallow heating depth and strength are tied between the outbursts, the data are not well reproduced. Attempts to connect shallow heating during an accretion outburst to one spectral state were unfruitful. \item We were not able to constrain the presence of a high-impurity pasta layer in Aql X-1, because the quiescence periods between the outbursts are shorter than the thermal timescale of the deepest layers of the crust. \item We attempted to constrain the envelope composition from the time of the last observed type-I X-ray burst. Although all upper limits on the column depth of the light-element layer exceed the column depth required to ignite a new burst, this method can be used in future studies if a type-I burst is observed shortly before the end of the accretion outburst. \item The two small accretion reflares observed after one of the outbursts were found to have no significant effect on our calculations. \end{itemize} \section*{Acknowledgements} LSO and RW are supported by a NWO TOP Grant, module 1, awarded to RW. DP is partially supported by the Mexican Conacyt (CB-2014-01, \#240512). ND acknowledges support from an NWO Vidi grant. This work benefited from discussions at the Physics and Astrophysics of Neutron Star Crusts workshop 2016, supported by the National Science Foundation under Grant No. PHY-1430152 (JINA Center for the Evolution of the Elements). We made use of the \textit{Swift}/BAT transient monitor results provided by the \textit{Swift}/BAT team, and of the MAXI data provided by RIKEN, JAXA and the MAXI team. This paper uses preliminary analysis results from the Multi-INstrument Burst ARchive (MINBAR), which is supported under the Australian Academy of Science's Scientific Visits to Europe program, and the Australian Research Council's Discovery Projects and Future Fellowship funding schemes. This research has made use of NASA's Astrophysics Data System Bibliographic Services and of the data supplied by the UK Swift Science Data Centre at the University of Leicester. \bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{I. Introduction} It is well known that there exists a deep relation between topological phases of matter and gauge theories. In fact, in the ground state, topological matter can be described by suitable topological quantum field theories \cite{Fradkin,Wen,Qi, Palumbo1}, which can be classified in terms of their topological invariants and underlying gauge groups. For instance, the interacting edge states of topological phases can be derived from suitable gauge theories \cite{Palumbo2}. Another example is the Berry phase, which plays a crucial role in several topological systems and represents a geometric phase related to a gauge connection (the Berry connection) in momentum space \cite{Niu}. The Abelian Berry phase is given in terms of a U(1) principal bundle, formally the same one that appears in electromagnetism. This picture can be naturally extended to non-Abelian phases.\\ In recent years, there have been several efforts and proposals concerning the geometric properties of topological phases due to their importance in quantum matter. For instance, an effective geometric language has been employed to describe quantum thermal properties, such as the thermal Hall effect \cite{Read, Cappelli, Zhang, Ryu2, Stone, Palumbo3}. Novel topological responses of quantum topological fluids, such as the Hall viscosity \cite{Avron,Read2}, appear when these systems are coupled to a curved background \cite{Fradkin2,Son,Kimura,Wiegmann,Gromov,You,Cappelli2}. Geometry has also been employed in the study of scattering processes in topological insulators \cite{Parente,Fukui} and in the analysis of ripples and novel emergent phenomena in graphene \cite{Vozmediano2,Gibbons,Iorio1}. Furthermore, advanced geometric theories, based on non-commutative geometry \cite{Bellissard, Susskind, Poly} and holography \cite{Zaanen,Hartnoll,McGreevy,Sachdev,Amoretti}, have been employed in the study of fractional quantum Hall states, non-Fermi liquids, strongly-correlated systems, etc. (for other recent applications of the curved-space formalism to condensed matter systems, see, e.g. Refs.~\cite{Vitelli,Ortix}). Importantly, the geometric properties of quantum systems can be based on the gauge principle: the coupling between matter and spacetime is identified by gauging the global spacetime symmetries of the matter field \cite{Nakahara}. In the case of Dirac materials \cite{Bernevig,Ryu}, the quasiparticles are given by Dirac fermions and the Lorentz symmetries (i.e. rotations and boosts) emerge in the ground state. Thus, in order to study these phases in curved spacetime, one has to gauge the Lorentz group and replace the derivative with the covariant derivative, where the corresponding spin connection takes values in the Lorentz algebra \cite{Nakahara}. The presence of an external magnetic field induces further symmetries in the quantum states, called magnetic translations, which are the proper symmetries of an (infinite) plane in the presence of a constant magnetic field \cite{GMP, Cappelli3,Duval}. Although these symmetries play a crucial role in the understanding of quantum Hall fluids, for instance in their incompressibility, so far they have not yet been properly incorporated in any geometric model of relativistic topological phases (for the non-relativistic case, see, e.g. Refs.~\cite{Cappelli2,Haldane}).
The main goal of this paper is to present a novel geometric model of time-reversal-invariant topological insulators in three dimensions \cite{Qi,Bernevig,Ryu} that takes into account both the Lorentz symmetries and the magnetic translations. The corresponding geometric action on the gapped boundary, where the gap is induced by an external electromagnetic field, is implemented by gauging the Maxwell algebra \cite{Bacry,Schrader,Beckers,Lukierski}. This represents a non-central extension of the Poincar\'e algebra. It allows us to define a new effective topological field theory for the gapped boundary, given by a Chern-Simons theory with a gauge connection that takes values in the 2+1-dimensional Maxwell algebra \cite{Cangemi,Szabo,Hosein}. The final action, written in terms of the dreibein, spin connection \cite{Nakahara} and electromagnetic gauge potential, contains three main elements. The first one is the standard Abelian Chern-Simons term that describes the quantum Hall conductance \cite{Wen}. The second one is a purely geometric contribution that describes the torsional Hall viscosity \cite{Fradkin2} and is compatible with a geometric theory recently proposed for topological superconductors \cite{Palumbo3}. Importantly, these terms define an effective exotic AdS gravitational model \cite{Witten} dual to a unitary CFT with chiral central charge $c=1$. Finally, the third one contains a novel non-minimal coupling between the Abelian gauge field and the curved background and resembles a relativistic version of the Wen-Zee theory \cite{Wen-Zee} proposed in the study of quantum Hall fluids on a curved background. We will show that, in the flat limit, our model is in agreement with the main properties of relativistic quantum Hall states.\\ The paper is structured as follows: In Sec.~II, we summarize the main properties of three-dimensional topological insulators, focusing on their effective description in terms of the Dirac Hamiltonian. In Sec.~III, we derive an effective geometric action by integrating out the Dirac field. By applying the holographic correspondence, we show that the gravitational theory is dual to a CFT with central charge $c=1$, which describes one-dimensional Dirac modes propagating along defect lines created on the gapped boundary. In Sec.~IV, we introduce the Maxwell algebra and show that it correctly takes into account the magnetic translations induced on the boundary of the system by the presence of an external electromagnetic field. We show in Sec.~V that the model found in Sec.~III can be nicely extended by gauging the Maxwell algebra. This allows us to derive new geometric terms, one of which resembles the Wen-Zee term in quantum Hall states. We present our conclusions in Sec.~VI. \begin{figure}[!ht] \centering \includegraphics[width=0.20\textwidth]{disegno} \caption{Schematic representation of a three-dimensional topological insulator (TI) with an external electromagnetic (EM) field represented by an arrow. This gauge field opens a gap on the boundary of the TI, i.e. the two-dimensional Dirac modes acquire a mass and the translation symmetries are replaced by magnetic-translation symmetries, which characterize the properties of the relativistic quantum Hall states induced on the boundary.} \label{Fig1} \end{figure} \section{II. Three-dimensional topological insulators} We start by summarizing the main properties of three-dimensional topological insulators in the quantum-field-theory framework.
The microscopic Hamiltonian is $H=\sum_{\boldsymbol{p}}\boldsymbol{\psi}_{\boldsymbol{p}}^{\dagger}h(\boldsymbol{p})\boldsymbol{\psi}_{\boldsymbol{p}}$, where $\boldsymbol{p}\in[0,2\pi)\times[0,2\pi)\times[0,2\pi)$ and $h(\boldsymbol{p})$ is a generic kernel Hamiltonian belonging to the class AII of free-fermion models \cite{Ryu,Qi}. It is well known that the continuum real-space Hamiltonian is a Dirac Hamiltonian \begin{eqnarray} \label{Hamiltonian} H =\int d^3x \,\, \psi^{\dagger}(i\xi^{j}\partial_{j}+i\,\zeta\, m)\psi, \end{eqnarray} where $j=1,2,3$, $\xi^{j}=\sigma^{1} \otimes \sigma^{j}$, $\zeta =\sigma^{3} \otimes \mathbb{I}_{2 \times 2}$, $\mathbb{I}_{2 \times 2}$ is the identity matrix, $\sigma^{j}$ are the Pauli matrices, $m$ is the Dirac mass and $\psi$ is a four-component spinor. Due to charge conservation in topological insulators, we can study their topological response to the presence of an external electromagnetic field $A_{\mu}$, see Fig.~1. Thus, the corresponding fermionic action defined on a Lorentzian manifold $M$ can be written as follows \begin{eqnarray} \label{Diracflat} S^\text{3D}[\psi,\overline{\psi}]=\int_{M} d^{4}x\,\, \overline{\psi}\,(i\,\gamma^{\mu}\partial_{\mu}+\gamma^{\mu}A_{\mu}-m)\psi, \end{eqnarray} where $\mu=0,1,2,3$, $\overline{\psi}=\psi^{\dagger}\gamma^{0}$, and $\gamma^{j}=\gamma^{0}\xi^{j}$ are the Dirac matrices. Because we are considering a dynamical U(1) gauge field, we have to add the Maxwell term to the above action \begin{eqnarray} \label{Maxwell} S^\text{M}[A_{\mu}]=\frac{1}{4}\int_{M} d^{4}x\,F^{\mu\nu}F_{\mu\nu}, \end{eqnarray} where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is the Faraday tensor. In order to derive the topological effective theory that describes the three-dimensional topological insulator in the low-energy limit, we integrate out the spinor field in the partition function of $S^\text{3D}[\psi,\overline{\psi}, A_{\mu}]$. The corresponding effective action $S^\text{3D}_{\rm eff}[A_{\mu}]$, defined by \begin{eqnarray} e^{i S^\text{3D}_{\rm eff}[A_{\mu}]}=\int D\overline{\psi}\,\textit{D}\psi\,e^{i S^\text{3D}[\psi,\overline{\psi}, A_{\mu}]}, \end{eqnarray} always has a topological term that becomes dominant at low energy and describes the properties of the ground state. In 3+1 dimensions this term is proportional to the axion topological term, and the final effective bosonic action is simply given by \begin{eqnarray} \label{EFT} S=\frac{1}{4}\int_{M} d^{4}x\,\left(F^{\mu\nu}F_{\mu\nu}+\frac{\theta}{8\pi^{2}}\epsilon^{\mu\nu\alpha\beta}\,F_{\mu\nu}F_{\alpha\beta}\right), \end{eqnarray} where $\theta=\pi$ in the case of topological insulators in the non-trivial $Z_{2}$ topological phase \cite{Qi}. Importantly, only for $\theta=0, \pi$ is the time-reversal symmetry in the bulk respected by the effective action (\ref{EFT}). However, the presence of an electromagnetic field breaks the time-reversal symmetry on the boundary and generates a mass term for the two-dimensional helical Dirac modes. These massive modes coupled to $A_{\mu}$ are described by a 2+1-dimensional massive Dirac action similar to the one in Eq.~(\ref{Diracflat}). In this case the corresponding topological effective action is given by a Chern-Simons theory, which can be derived simply by applying Stokes' theorem to the axion term in Eq.~(\ref{EFT}). In fact, this is a total derivative that gives rise to an Abelian Chern-Simons term on the boundary.
This topological quantum field theory describes the Hall conductance of relativistic quantum Hall states on the surface \cite{Qi}. For simplicity, we consider the system defined on a Lorentzian manifold $M=R\times \Sigma$, where the space-like part $\Sigma$ has periodic boundary conditions in $x$ and $y$, while the periodicity in $z$ is broken in order to have a boundary made of two disconnected surfaces. Similarly to the case of topological superconductors \cite{Palumbo4}, here the whole gapped boundary can be seen as an effective two-dimensional topological phase belonging to class A \cite{Ryu}, and it supports topologically protected effective edge modes. These one-dimensional Dirac edge modes are trapped by the defect lines that can be created on the gapped surfaces by employing a couple of local Zeeman fields \cite{Palumbo5}. In the next section, we will show that the holographic correspondence, already employed in the case of topological superconductors \cite{Palumbo3}, is still valid in the case of topological insulators and allows us to derive the right value of the chiral central charge associated with the chiral Dirac modes. \section{III. Holographic correspondence in topological insulators} In order to derive an effective geometric theory for the three-dimensional topological insulator, we first observe that the Dirac theory is invariant under the global Poincar\'e group. In other words, in the low-energy limit, topological insulators support further emergent relativistic symmetries, given by Lorentz boosts and rotations, and spacetime translations. By gauging these symmetries, we can understand how the system behaves under local geometric transformations that locally preserve the Poincar\'e group. This approach has also been employed in the study of geometric defects in topological phases and in the generalization of the Luttinger theory \cite{Luttinger}, where a minimal coupling between fermions and the background geometry has been used in order to derive thermal quantum effects, such as the thermal Hall effect \cite{Read, Cappelli, Zhang, Ryu2, Stone, Palumbo3}. The gauging procedure in the fermion theory is carried out by replacing the standard derivative with a covariant derivative and by introducing the tetrads, which allows us to write the Dirac action in the curved space $M_{c}$, given by \begin{eqnarray}\label{Dirac2} S^\text{3D}[\psi,\overline{\psi}, A_{\mu}, \omega_{\mu}, e_{a}^{\mu}]=\int_{M_{c}} d^{4}x\,|e|\, \overline{\psi}\,(i\,\widehat{\gamma}^{\mu}D_{\mu}-m)\psi, \end{eqnarray} where $|e|$ is the determinant of the tetrads $e_{a}^{\mu}$, $\widehat{\gamma}^{\mu}=e_{a}^{\mu}\gamma^{a}$ and $D_{\mu}=i\partial_{\mu}+A_{\mu}+\omega_{\mu}$, where $\omega_{\mu}$ is the spin connection \cite{Nakahara}. Clearly, in the flat limit we recover Eq.~(\ref{Diracflat}), because in that case $|e|=1$, $\omega_{\mu}=0$ and $e_{a}^{\mu}\gamma^{a}=\delta_{a}^{\mu}\gamma^{a}=\gamma^{\mu}$. Importantly, we are interested in a torsionful spin connection, which is able to take into account also possible dislocations in the system \cite{Fradkin2,Vozmediano,Zaanen2}. More importantly, this choice is compatible with the fact that in the gauge-theory language $\omega_{\mu}$ and $e_{a}^{\mu}$ are independent fields, with the former related to the Lorentz symmetries and the latter related to the spacetime translations. This has important implications in the derivation of the topological effective theory in curved space.
By integrating out fermions, we have that \cite{Zanelli} \begin{align}\label{Zanelli} & S_\text{top}^\text{3D}[A_{\mu},\omega_{\mu},e_{\mu}]= \frac{1}{32\pi}\int_{M_{c}} d^{4}x\,\epsilon^{\mu\nu\alpha\beta}\,F_{\mu\nu}F_{\alpha\beta}\,-\nonumber \\ & k\,\int_{M_{c}} d^{4}x\, \epsilon^{\mu\nu\alpha\beta}\, \text{tr}\, \Big[ R_{\mu\nu}R_{\alpha\beta}+ {1\over \eta^{2}}\Big(T_{\mu\nu}T_{\alpha\beta}-R_{\mu\nu}e_{\alpha}e_{\beta}\Big)\Big], \end{align} where $k=\frac{1}{192\pi}$ and $\eta$ is a dimensionful parameter related to the Hall viscosity \cite{Fradkin2, Hughes}. Here, $R_{\mu\nu}=R_{\mu\nu}^{ab}\,i\,[\gamma_{a},\gamma_{b}]/4$ and $T_{\mu\nu}=\frac{i}{2}\gamma_{a}T^{a}_{\mu\nu}$ are the Riemann and torsion tensors, respectively given by \begin{align} R_{\mu\nu}^{ab} & =\partial_{\mu} \omega_{\nu}^{ab}-\partial_{\nu}\omega_{\mu}^{ab}+\omega_{\mu\,\,c}^{a}\omega_{\nu}^{cb}-\omega_{\nu\,\,c}^{a}\omega_{\mu}^{cb}, \nonumber \\ T^{a}_{\mu\nu} & =\partial_{\mu}e_{\nu}^{a}-\partial_{\nu}e_{\mu}^{a}+\omega_{\mu\,\, b}^{a}e_{\nu}^{b}-\omega_{\nu\,\, b}^{a}e_{\mu}^{b}. \hspace{0.5cm} \end{align} In the action (\ref{Zanelli}), we recognize the Pontryagin invariant \cite{Eguchi} in the second term, while the third term is proportional to the Nieh-Yan invariant \cite{Nieh}. However, this is not the final topological action because, so far, we have omitted possible contributions to the effective action induced by the Maxwell term. In fact, the Maxwell theory in 3+1 dimensions supports the duality symmetry defined by the following transformation \begin{align} F_{\mu\nu}\rightarrow F'_{\mu\nu}=F_{\mu\nu}\, \cos\, \theta+\widehat{F}_{\mu\nu}\, \sin\, \theta \end{align} where $\theta$ is an angle and $\widehat{F}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F^{\alpha\beta}$. However, this symmetry holds only in the flat case and is broken by quantum effects when the theory is defined on a generic curved spacetime, as shown in Refs.~\cite{Reuter,Dolgov,Agullo}. It is possible to show that the duality anomaly induces a topological term proportional to the Pontryagin invariant, similar to the one found in the fermionic sector. Thus, we replace $k$ with $k'=2k$ in Eq.~(\ref{Zanelli}) in order to define the total effective topological theory. Notice that the contribution of the torsion term proportional to $1/\eta^{2}$ is not really relevant for the holographic correspondence, as we will see below. We can now easily derive the two-dimensional effective theory $S_{\rm top,k}^\text{2D}$ on the gapped boundary by employing Stokes' theorem, due to the fact that $S_{\rm top}^\text{3D}$ is a total derivative. We find that \begin{align} \label{DoubleCS} & S_{\rm top,k}^\text{2D}[\omega_{\mu},e_{\mu},A_{\mu}] = \frac{2}{8\pi}\int d^{3}x\,\epsilon^{\mu\nu\lambda}A_{\mu}\partial_{\nu}A_{\lambda}\,-\nonumber \\ & 2 k'\,\int d^{3}x\, \epsilon^{\mu\nu\lambda} \text{tr} \left(\omega_{\mu}\partial_{\nu}\omega_{\lambda}+\frac{2}{3}\,\omega_{\mu}\omega_{\nu}\omega_{\lambda}+\right. \left.\frac{1}{\eta^{2}}\,T_{\mu\nu}e_{\lambda}\right), \end{align} where the factor $2$ in front of both terms takes into account the two disconnected surfaces. The second line in this action describes the exotic AdS gravity \cite{Witten}, which, differently from the standard Einstein-Hilbert theory, breaks parity and time-reversal symmetry.
This is in agreement with the symmetries of the gapped boundary and allows us to employ the holographic correspondence in order to derive the corresponding topological chiral central charge \cite{Palumbo3}. It is possible to derive its value by analyzing the holographic stress-energy tensor, defined on the (asymptotic) boundary of AdS$_{2+1}$ \cite{Kraus}. It can be shown that the holographic stress-energy tensor $\tau^{uv}=\tau^{u}_{i}e^{i v}$ is not symmetric, \begin{eqnarray}\label{Lorentz1} \tau^{[uv]} =\tau^{uv}-\tau^{vu}= \frac{2\pi\,k'}{|e|}\epsilon^{ij}\,R_{ij}^{uv}, \end{eqnarray} where $R_{ij}^{uv}$ ($u,v=0,1$) is the Riemann tensor defined on the asymptotic boundary. The failure of $\tau^{uv}$ to be symmetric implies that on the (1+1)-D asymptotic boundary there is a gravitational (Lorentz) anomaly \cite{Cappelli}. This quantum anomaly appears in the dual chiral CFT and is proportional to the chiral central charge $c$ \cite{Cappelli, Stone} \begin{eqnarray}\label{Lorentz} \tau_\text{CFT}^{[uv]} = \frac{c}{48 |e|}\,\epsilon^{ij}\,R_{ij}^{uv}. \end{eqnarray} Due to the AdS$_{2+1}$/CFT$_{1+1}$ correspondence, Eqs.~(\ref{Lorentz1}, \ref{Lorentz}) imply that \cite{Blagojevic, Klemm} \begin{eqnarray} c=192\pi\,k=1. \end{eqnarray} In other words, the CFT describes a single one-dimensional Dirac mode trapped by defect lines created on the gapped boundary of the three-dimensional system. We can also see these defects as an effective boundary of the two-dimensional relativistic quantum Hall fluid. \section{IV. Magnetic translations and the Maxwell algebra} In the previous section, we have seen that a geometric model is compatible with the main properties of a three-dimensional topological insulator with a gapped boundary. However, in the presence of an external magnetic field there are further symmetries, called magnetic translations $t_{u}$ \cite{GMP, Cappelli3,Bernevig2}, that have not yet been encoded in the geometric model. It is well known that in the (relativistic) quantum Hall states the ordinary spatial translations are replaced by $t_{u}$, which are the proper symmetries of an (infinite) plane in the presence of a constant magnetic field. They are defined as follows \begin{align} t_{u}=e^{i\,u_{a}K^{a}}, \end{align} where $u=\{u_{a}\}=\{u_{x},u_{y}\}$ is the finite translation vector, such that $t_{u}:\, x_{a}\rightarrow x_{a}'=x_{a}+u_{a}$, while the generators $K_{a}$ are given by \begin{align}\label{MT} K_{a}=p_{a}-A_{a}, \hspace{0.5cm} [K_{a}, K_{b}]=\frac{i}{l^{2}}\,\epsilon_{ab}, \end{align} where $p_{a}$ are the standard momenta that commute, i.e. $[p_{a},p_{b}]=0$ (not to be confused with the operators in Eq.~(\ref{commutators})), $A_{a}$ is the gauge potential and $l=1/\sqrt{q\,B}$ is the magnetic length, with $B$ the magnetic field and $q$ the electric charge. At this point, it can be easily shown that \begin{align}\label{GMP} [t_{u},t_{v}]=-2 i\,\sin \left(\frac{1}{2\,l^{2}}\, u \wedge v\right) t_{u+v}. \end{align} This defines the magnetic translation algebra, also known as the Girvin-MacDonald-Platzman (GMP) algebra \cite{GMP}, which underlies the area-preserving diffeomorphisms of the plane \cite{Cappelli3,Kogan} and explains the incompressibility of quantum Hall fluids. Moreover, the area-preserving diffeomorphisms are mathematically described through the $W_{\infty}$ algebra, which also allows one to derive the Wen-Zee term, as shown in Ref.~\cite{Cappelli2}.
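A quick symbolic check of the commutator in Eq.~(\ref{MT}) can be performed by realizing the generators explicitly; the sketch below (Python/sympy) does so in the symmetric gauge $A=(B/2)(-y,x)$, a gauge choice and unit convention ($q=1$) adopted here purely for illustration.
\begin{verbatim}
# Symbolic check of [K_x, K_y] = i B (= i/l^2 for q = 1) for the magnetic
# translation generators K_a = p_a - A_a in the symmetric gauge.
import sympy as sp

x, y, B = sp.symbols('x y B', real=True)
f = sp.Function('f')(x, y)

def K(axis, g):
    """Apply K_a = -i d_a - A_a to a test function g."""
    if axis == 'x':
        return -sp.I * sp.diff(g, x) - (-B * y / 2) * g
    return -sp.I * sp.diff(g, y) - (B * x / 2) * g

comm = sp.simplify(sp.expand(K('x', K('y', f)) - K('y', K('x', f))))
print(comm)   # I*B*f(x, y): the derivative terms cancel
\end{verbatim}
Only the c-number $iB=i/l^{2}$ survives, in agreement with Eq.~(\ref{MT}).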
Thus, at least in non-relativistic topological phases, the Wen-Zee theory is related to the area-preserving diffeomorphisms and magnetic translations. What is the situation in the case of relativistic systems \cite{Son2}? In the next section, we will propose a novel geometric model by considering the Maxwell algebra as the fundamental algebra of the relativistic quantum Hall states. This algebra represents a non-central extension of the 2+1-dimensional Poincar\'e algebra and takes into account the presence of an electromagnetic field in the Minkowski spacetime \cite{Cangemi,Szabo,Hosein}. This allows us to fully encode the magnetic translations in our geometric model. At a more formal level, the adoption of the Maxwell algebra represents a modification of the tangent bundle, in contrast with the more conventional approach (employed also in the previous section) where one introduces a U(1) principal bundle \cite{Nakahara}. The Maxwell algebra in 2+1 dimensions is then defined by the following commutators \begin{align}\label{commutators} [P_{a},P_{b}]=\epsilon_{abc}Z^{c},\hspace{0.2cm} [J_{a},J_{b}]=\epsilon_{abc}J^{c}, \hspace{0.2cm} [J_{a},P_{b}]=\epsilon_{abc}P^{c}, \nonumber \\ [J_{a},Z_{b}]=\epsilon_{abc}Z^{c},\hspace{0.2cm} [P_{a},Z_{b}]=[Z_{a},Z_{b}]=0, \hspace{0.8cm} \end{align} where $a,b,c=0,1,2$, $P_{a}$ are the generators of spacetime translations, $J_{a}=\frac{1}{2}\epsilon_{abc}J^{bc}$ are the dual generators of Lorentz rotations and boosts and $Z_{a}$ are the new generators of the Maxwell algebra. The corresponding (internal) invariant metric $\Omega_{AB}=\langle X_{A}, X_{B}\rangle$, with $X_{A}=(P_{a},J_{a}, Z_{a})$ and $A,B=0,..,8$, is identified through the following relations \begin{align} \langle P_{a},P_{b}\rangle=\gamma\,\eta_{ab},\hspace{0.2cm} \langle J_{a},J_{b}\rangle=\alpha\, \eta_{ab}, \hspace{0.2cm} \langle J_{a},P_{b}\rangle=0, \nonumber \\ \langle J_{a},Z_{b}\rangle=\gamma\,\eta_{ab},\hspace{0.2cm} \langle P_{a},Z_{b}\rangle=\langle Z_{a},Z_{b}\rangle=0, \hspace{0.8cm} \end{align} where $\alpha$ and $\gamma$ are real parameters. Importantly, this algebra is not semi-simple, which implies that the matrix form of $\Omega^{AB}$ (with $\Omega_{AC}\Omega^{BC}=\delta_{A}^{B}$) does not coincide with that of $\Omega_{AB}$ \cite{Witten2}. In other words, the Casimir operator $W=\Omega_{AB}X^{A}X^{B}$ is not equivalent to $W'=\Omega^{AB}X_{A}X_{B}$, which has an important implication in the construction of the gauge-invariant action, as we will see in the next section. We can easily see that the Maxwell algebra takes into account both the Lorentz symmetries and the (spatial) magnetic translations of the relativistic quantum Hall states. For simplicity, we ignore the Lorentz rotations and boosts and focus on the spatial commutation relations defined in Eq.~(\ref{commutators}), i.e. \begin{align} [P_{a},P_{b}]=\epsilon_{ab} Z^{0}. \end{align} With the following identifications \begin{align} P_{a}\equiv l\,K_{a}, \hspace{0.3cm} Z^{0}\equiv i, \end{align} we exactly recover the commutation relations in Eq.~(\ref{MT}) and consequently the magnetic translation algebra defined in Eq.~(\ref{GMP}). \section{V. Generalized Wen-Zee term and Hall viscosity} As we have seen in Sec.~III, the U(1) gauge field and the curved background do not mix at the level of the effective action. This is in contrast with the situation in non-relativistic systems, where the Wen-Zee term characterizes the topological states defined on curved surfaces \cite{Wen-Zee}.
In that case, the Abelian spin connection $\widehat{\omega}_{\lambda}$ is coupled to the electromagnetic gauge field through the Wen-Zee term \begin{align}\label{WZ} S_{\rm WZ} = \frac{\nu \bar{s}}{4\pi}\int d^{3}x\, \epsilon^{\mu\nu\lambda} A_{\mu}\partial_{\nu}\widehat{\omega}_{\lambda}, \end{align} where $\nu$ is the filling factor and $\bar{s}$ is the shift. In a torsionless background this action describes the Hall viscosity in terms of the parameter $\bar{s}$, which corresponds to the intrinsic angular momentum of the low-energy excitations of the quantum Hall fluid. In relativistic topological systems, a relativistic version of the Wen-Zee term is not obvious, mainly because in these cases the spin connection is non-Abelian and the Hall viscosity is given in terms of the torsional viscosity \cite{Fradkin2} (note that a relativistic version of this term has been proposed so far only in the hydrodynamic approach \cite{Son2}). We now show that a relativistic Wen-Zee term on the gapped boundary can be consistently derived by gauging the 2+1-dimensional Maxwell algebra introduced in the previous section. The gauge connection $\mathcal{A_{\mu}}$ takes values in this algebra and is given by \begin{eqnarray} \mathcal{A_{\mu}}=\mathcal{A}^{A}_{\mu}X_{A}=\frac{1}{\beta}\,e_{\mu}^{a}P_{a}+\omega_{\mu}^{a}J_{a}+\widehat{A}_{\mu}^{a}Z_{a}, \end{eqnarray} where $\beta$ is a dimensionful parameter and $\widehat{A}_{\mu}^{a}$ are three Abelian gauge fields. In terms of the curvature tensor $\mathcal{F_{\mu\nu}}$, we find \begin{eqnarray} \mathcal{F_{\mu\nu}}=\mathcal{F_{\mu\nu}}^{A}X_{A}=T_{\mu\nu}^{a}P_{a}+R_{\mu\nu}^{a}J_{a}+\widehat{F}_{\mu\nu}^{a}Z_{a}, \end{eqnarray} where $T_{\mu\nu}^{a}$ and $R_{\mu\nu}^{a}$ are the torsion and Riemann tensors, respectively, while \begin{eqnarray} \widehat{F}_{\mu\nu}^{a}=\partial_{\mu}\widehat{A}_{\nu}^{a}-\partial_{\nu}\widehat{A}_{\mu}^{a}+\epsilon_{bc}^{a}\left(\frac{1}{\beta^{2}}\, e^{b}_{\mu}\,e^{c}_{\nu}+\omega_{\mu}^{b}\widehat{A}_{\nu}^{c}+\widehat{A}_{\mu}^{b}\omega_{\nu}^{c}\right). \end{eqnarray} The most general Chern-Simons action based on the Maxwell algebra is then given by \begin{align}\label{MaxCS} S_{\rm CS}[\mathcal{A_{\mu}}]=\hspace{3.0cm} \nonumber \\ -\frac{1}{4\pi}\int d^{3}x\, \epsilon^{\mu\nu\lambda}\left[\left(\frac{1}{2}\Omega_{AB}\mathcal{A}^{A}_{\mu}\mathcal{F}_{\nu\lambda}^{B}-\frac{1}{3}\Omega_{CD}f^{D}_{AB}\mathcal{A}^{A}_{\mu}\mathcal{A}^{B}_{\nu}\mathcal{A}^{C}_{\lambda}\right)\right.\nonumber\\ +\left.\left(\frac{1}{2}\Omega^{AB}\mathcal{A}_{\mu A}\mathcal{F}_{\nu\lambda B}-\frac{1}{3}\Omega^{CD} f_{D}^{AB}\mathcal{A}_{\mu A}\mathcal{A}_{\nu B}\mathcal{A}_{\lambda C}\right)\right], \end{align} where $f^{D}_{AB}$ are the structure constants, such that $[X_{A},X_{B}]=f_{AB}^{D}X_{D}$ (a similar relation holds for $f_{D}^{AB}$). Let us now expand the above action in terms of the physical gauge fields. In order to simplify the notation, here we employ the form formalism in the final topological action \begin{align}\label{finalCS} S_{\rm CS}[e,\omega, \widehat{A}]= \int \rm tr [\varrho_{1}CS(\omega)+\varrho_{2}\,e\wedge D_{\omega}e+ \nonumber \\ \varrho_{3}\widehat{A}\wedge (R+\varrho_{4}\,e\wedge e)+\varrho_{5}\widehat{A}\wedge D_{\omega}\widehat{A}], \end{align} with \begin{align} {\rm CS}(\omega)=\omega \wedge d\omega+\frac{2}{3}\,\omega\wedge \omega\wedge \omega.
\end{align} The trace is taken over the gauge indices, $D_{\omega}=d+\omega$ is the exterior covariant derivative, such that $T=D_{\omega}e$, and the $\varrho_{i}$ are functions of the parameters $\alpha$, $\beta$ and $\gamma$, namely \begin{align} \varrho_{1}=-\frac{\alpha}{4\pi}, \hspace{0.2cm} \varrho_{2}=-\frac{1}{4 \pi\beta^{2}}\left(\gamma+\frac{1}{\gamma}\right), \hspace{0.2cm} \varrho_{3}=-\frac{1}{2\pi}\left(\gamma+\frac{1}{\gamma}\right), \nonumber \\ \varrho_{4}=\frac{\alpha}{4\pi\beta^{2} \gamma^{2}}, \hspace{0.3cm} \varrho_{5}=\frac{\alpha}{4\pi\gamma^{2}}. \hspace{2.6cm} \end{align} By comparing these parameters with those in Eq.~(\ref{DoubleCS}), we find $\alpha=\gamma^{2}=1/12$ and $\beta^{2}=26\sqrt{3}\,\eta^{2}$, where $\eta^{2}$ is proportional to the inverse of the torsional Hall viscosity. Bear in mind that so far we have dealt with three independent U(1) gauge fields $\widehat{A}_{\mu}^{a}$, even if only one can be associated with the physical electromagnetic field. This is identified with \begin{align}\label{gauge} \widehat{A}_{\mu}^{0}\equiv A_{\mu}, \end{align} while $\widehat{A}_{\mu}^{1}$ and $\widehat{A}_{\mu}^{2}$ can be seen as auxiliary fields. Notice that if we set these two gauge fields to zero, we lose the gauge invariance of the theory. At this point, several comments about the action in Eq.~(\ref{finalCS}) are necessary in order to clarify and strengthen the main properties of our model.\\ Firstly, the Chern-Simons action in Eq.~(\ref{finalCS}) represents a natural but non-trivial generalization of the effective theory found in Eq.~(\ref{DoubleCS}). Moreover, we have also found a relativistic version of the Wen-Zee term (\ref{WZ}), given by the third term in Eq.~(\ref{finalCS}). \\ Secondly, the torsional Hall viscosity derived in Ref.~\cite{Fradkin2} remains unchanged here. In fact, we can calculate the stress-energy tensor $\mathcal{T}_{a}^{\mu}$ by varying the action with respect to the dreibein $e_{\mu}^{a}$ \begin{align}\label{Stress-Energy} \mathcal{T}_{a}^{\mu}=\frac{\delta S_{\rm CS}}{\delta\,e^{a}_{\mu}}=-\epsilon^{\mu\nu\lambda}\left(\frac{1}{24 \pi\eta^{2}}T_{a\nu\lambda}+\frac{1}{48 \pi^{2}\eta^{2}}\epsilon_{abc}\,\widehat{A}^{b}_{\nu}e^{c}_{\lambda}\right). \end{align} Here, we find an extra contribution to the stress-energy tensor, given by the second term above. However, both terms are proportional to the inverse of $\eta^{2}$, which is the only dimensionful parameter and is related to the torsional Hall viscosity. Thus, our relativistic Wen-Zee term does not contribute to the Hall viscosity because $\omega$ and $e$ are independent fields due to the non-null torsion. Importantly, the torsional Hall viscosity also appears in the flat limit, as we now show. The current $\mathcal{J}^{\mu}$ associated with the variation of the action with respect to the electromagnetic gauge field $A_{\mu}$ in the Minkowski spacetime (i.e. $\omega_{\mu}=0$ and $e_{\mu}^{a}=\delta_{\mu}^{a}$) is given by \begin{eqnarray}\label{Hall} \mathcal{J}^{i}=\left.\frac{\delta S_{\rm CS}}{\delta\,A_{i}}\right|_{\rm flat}=\epsilon^{ik}\frac{1}{2\pi}E_{k}, \end{eqnarray} with $i,k=x,y$ and \begin{eqnarray}\label{GaussCS} \mathcal{J}^{0}=\left.\frac{\delta S_{\rm CS}}{\delta\,A_{0}}\right|_{\rm flat}=\frac{1}{2\pi}(B+B_{0}), \end{eqnarray} where $B_{0}=-\frac{1}{24\pi\eta^{2}}$, and $E_{k}$ and $B$ represent the electric and magnetic field, respectively.
Eq.~(\ref{Hall}) represents the standard quantum Hall law, while Eq.~(\ref{GaussCS}) is the Chern-Simons Gauss law \cite{Frohlich} and contains a novel contribution induced by the torsional Hall viscosity. In the flat limit, this can be seen as an effective constant magnetic field $B_{0}$ for a simple reason. As shown in Ref.~\cite{Ryu3}, the torsional Hall viscosity for the gapped boundary of three-dimensional topological systems is proportional to the external constant magnetic (Zeeman) field that induces a Dirac mass in the surface states. Moreover, again in the flat background, the fourth term in the action (\ref{finalCS}), which depends on the electromagnetic field, reduces to a chemical potential term, namely \begin{eqnarray} \left.\frac{1}{96 \pi^{2}\eta^{2}}\int d^{3}x\, \epsilon^{\mu\nu\lambda}\epsilon_{0ab}\,e_{\mu}^{a}A_{\nu}e^{b}_{\lambda}\right|_{\rm flat}\longrightarrow\nonumber \\ \longrightarrow \int d^{3}x\, \kappa A_{0}\equiv \int d^{3}x\, I^{\mu}A_{\mu}^{\rm ext}, \end{eqnarray} where $\kappa=\frac{1}{2\pi}B_{0}$ is the chemical potential, while \begin{eqnarray} I^{\mu}=\frac{1}{2\pi}\,\epsilon^{\mu\nu\lambda}\partial_{\nu}A_{\lambda}, \hspace{0.3cm} A_{i}^{\rm ext}=\frac{B_{0}}{2}\epsilon_{i j}x^{j}, \end{eqnarray} are the topological current and an external gauge potential, respectively. This exactly reproduces the constant magnetic background of the quantum Hall fluids and is also in agreement with the chemical potential term derived in Ref.~\cite{Tong}. We finally speculate on the possible interpretation of our (charged) geometric model in the holographic context. It is not an easy task to deal with a non-minimal coupling between geometry and gauge fields. Formally, each Abelian field is associated with a U(1) gauge group, which is isomorphic to the special orthogonal group SO(2). By performing an Inonu-Wigner contraction \cite{Gilmore} on this group, we have SO(2)$\rightarrow \mathcal{E}(1)$, where $\mathcal{E}(1)$ is the Euclidean group that describes translations in one dimension. In this way, the three U(1) gauge groups can be related to three independent translations. In the gauge-theory language this implies that the gauge fields $\widehat{A}_{\mu}^{a}$ induce a dreibein built by gauging the three $\mathcal{E}(1)$ algebras. We may then write \begin{eqnarray}\label{substitution} \widehat{A}_{\mu}^{a}\rightarrow \Theta\, e^{a}_{\mu}, \end{eqnarray} where $\Theta$ is a coefficient that we assume to be constant. With the above replacement, the action in Eq.~(\ref{finalCS}) becomes purely geometric, now including also an Einstein-Hilbert term and a cosmological constant $\Lambda \propto \varrho_{4}$. It coincides with the Mielke-Baekler action \cite{Mielke}, a model that has already been studied in the holographic context, where the corresponding Virasoro central charges have been derived \cite{Blagojevic,Klemm}. As we have shown in Sec.~III, only the chiral central charge $c$ is related to the topological phase. Thus, in this generalized geometric model, we recover $c=1$. \section{VI. Conclusions} In this work, we have proposed a new geometric model of topological insulators based on the Maxwell algebra. This is a non-central extension of the Poincar\'e algebra that takes into account the symmetries of the gapped boundary states, i.e. the Lorentz symmetries and magnetic translations.
The Chern-Simons theory that describes these states is built in terms of a gauge connection that takes values in the Maxwell algebra. The standard U(1) Chern-Simons theory is consistently reproduced in this model, together with gravitational terms and two novel ones that represent a non-minimal coupling between the electromagnetic field and the curved background. We have shown that the purely gravitational part of the theory is compatible with the presence of one-dimensional Dirac modes propagating along the defect lines created on the gapped boundary. The corresponding CFT with chiral central charge $c=1$ has been derived through the holographic correspondence. Importantly, our approach can also be applied to other topological phases, such as two-dimensional Chern insulators, where the magnetic translations (under suitable geometric conditions) occur without the presence of any external electromagnetic field \cite{Roy1,Roy2}. In conclusion, our theory opens the way to the application of Maxwell geometry in topological phases of matter. \section{Acknowledgments} We thank Andrea Cappelli, Jan Zaanen, Hai-Qing Zhang and Enrico Randellini for comments and suggestions. This work is part of the DITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW).
{ "redpajama_set_name": "RedPajamaArXiv" }
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand{\vs}[1]{\rule[- #1 mm]{0mm}{#1 mm}} \newcommand{\hs}[1]{\hspace{#1mm}} \newcommand{\mb}[1]{\hs{5}\mbox{#1}\hs{5}} \newcommand{\wt}[1]{\widetilde{#1}} \newcommand{\und}[1]{\underline{#1}} \newcommand{\ov}[1]{\overline{#1}} \newcommand{\sm}[2]{\frac{\mbox{\footnotesize #1}\vs{-2}} {\vs{-2}\mbox{\footnotesize #2}}} \newcommand{\NP}[1]{Nucl.\ Phys.\ {\bf #1}} \newcommand{\PL}[1]{Phys.\ Lett.\ {\bf #1}} \newcommand{\NC}[1]{Nuovo Cimento {\bf #1}} \newcommand{\CMP}[1]{Comm.\ Math.\ Phys.\ {\bf #1}} \newcommand{\PR}[1]{Phys.\ Rev.\ {\bf #1}} \newcommand{\PRL}[1]{Phys.\ Rev.\ Lett.\ {\bf #1}} \newcommand{\MPL}[1]{Mod.\ Phys.\ Lett.\ {\bf #1}} \newcommand{\BLMS}[1]{Bull.\ London Math.\ Soc.\ {\bf #1}} \newcommand{\IJMP}[1]{Int.\ Jour.\ of\ Mod.\ Phys.\ {\bf #1}} \newcommand{\JMP}[1]{Jour.\ of\ Math.\ Phys.\ {\bf #1}} \newcommand{\LMP}[1]{Lett.\ in\ Math.\ Phys.\ {\bf #1}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \newpage \setcounter{page}{0} \pagestyle{empty} \vs{30} \begin{center} {\LARGE {\bf $N=1,2$ Super-NLS Hierarchies}}\\ [.25cm] {\LARGE {\bf as Super-KP Coset Reductions.}}\\[1cm] \vs{8} {\large {F. Toppan}}\\ \quad \\ {\em Laboratoire de Physique Th\'eorique ENSLAPP \footnote{URA 14-36 of the CNRS, associated with the Ecole Normale Sup\'erieure de Lyon and with the Laboratoire d'Annecy-le-Vieux de Physique des Particules (IN2P3-CNRS).},}\\ {\em ENS Lyon, 46 all\'ee d'Italie,} \\ {\em F-69364 Lyon Cedex 07, France.}\\ {E-mail: ftoppan@enslapp-ens-lyon.fr} \end{center} \vs{8} \centerline{ {\bf Abstract}} \indent We define consistent finite-superfield reductions of the $N=1,2$ super-KP hierarchies via the coset approach we already developed for reducing the bosonic KP hierarchy (generating e.g. the NLS hierarchy from the $sl(2)/U(1)-{\cal KM}$ coset). We work in a manifestly supersymmetric framework and illustrate our method by treating explicitly the $N=1,2$ super-NLS hierarchies. With respect to the bosonic case, the ordinary covariant derivative is now replaced by a spinorial one containing a spin ${\textstyle {1\over 2}}$ superfield. Each coset reduction is associated to a rational super-${\cal W}$ algebra encoding a non-linear super-${\cal W}_\infty$ algebra structure. In the $N=2$ case two conjugate sets of super-Lax operators, equations of motion and an infinite set of hamiltonians in involution are derived. Modified hierarchies are obtained from the original ones via free-field mappings (just as an m-NLS equation arises by representing the $sl(2)-{\cal KM}$ algebra through the classical Wakimoto free fields).
\vfill \rightline{{\small E}N{\large S}{\Large L}{\large A}P{\small P}-L-467/94} \rightline{April 1994} \newpage \pagestyle{plain} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \sect{Introduction} \indent The hierarchy of integrable equations leading to a solitonic behaviour has been a widely studied subject since the fundamental work of Gardner, Greene, Kruskal and Miura \cite{GGKM} concerning the KdV equation. In more recent times it has particularly attracted physicists' attention, especially in connection with matrix models, which provide a sort of effective description of two-dimensional gravity. In such a context the partition functions of matrix models satisfy the so-called Virasoro-${\cal W}$ constraints and can be expressed in terms of the $\tau$-functions of the hierarchies of classical integrable equations (for a review, see e.g. \cite{mar} and references therein).\par Looking for a classification of all possible hierarchies is therefore a very attractive problem for both mathematical and physical reasons. Working in the formalism of pseudo-differential operators (PDO), such a problem can be formalized as follows: determining all possible algebraic constraints, consistent with the KP flows, on the infinite fields entering the KP operator (reduction procedure). Apart from the well-known solutions of Drinfeld-Sokolov type \cite{DS}, which can be expressed in terms of purely differential Lax operators, other solutions, called multi-field KP reductions, have been obtained in the literature \cite{{ara},{bonora},{bonora2}}. Their basic features can be stated as follows: in the dispersionless limit they give rise to a Lax operator fractional in the momentum $p$. Moreover, the algebra of Virasoro-${\cal W}$ constraints turns out to be a ${\cal W}_\infty$ algebra.\par Inspired by the works \cite{yuwu} (see also \cite{bakas}), in a previous paper \cite{toppan} we have shown how such reductions can be derived via a coset construction, involving the factorization of a Kac-Moody subalgebra out of a given Kac-Moody or polynomial ${\cal W}$ algebra. In our framework we immediately have at our disposal a Poisson-bracket structure providing a (multi)hamiltonian dynamics. Furthermore, the non-linear ${\cal W}_\infty$ algebra can be compactly interpreted as a finite rational ${\cal W}$ algebra (to our knowledge, the notion of rational ${\cal W}$ algebra was first introduced in \cite{feher}; in \cite{DFSTR} it has been shown that rational ${\cal W}$ algebras appear in the somehow different context of coset constructions; a detailed analysis of classical rational ${\cal W}$ algebras and their quantum deformations has been given in \cite{feher2}).\par Even if we have not yet attempted to give a general formal proof, the examples worked out so far strongly suggest that to each coset a corresponding KP reduction is associated; indeed, there exists a well-defined procedure telling how to associate a possible KP reduction to a given coset. In the absence of a general theorem, the consistency of the derived reduced operator with the KP flows should be explicitly checked, leading to lengthy but straightforward computations.
No counterexample has been found so far.\par A point should be made clear: in our framework we do not need to introduce Dirac brackets, since we do not impose hamiltonian reductions; thanks to this we are able to derive modified hierarchies via the free-field mappings provided by (the classical version of) the Wakimoto representation \cite{waki} and its generalizations.\par In this paper we address the problem of extending the previous bosonic construction to the $N=1,2$ supersymmetric cases, leading to consistent coset reductions of the super-KP hierarchy. The fundamental reference we will follow concerning the definition of the KP hierarchy for an odd-graded derivative is \cite{manin}. Supersymmetric integrable hierarchies have a vast literature; see, among others, e.g. \cite{others} and, for the $N=2$ case, \cite{N2}.\par We will work in a manifestly supersymmetric formalism and illustrate our procedure by explicitly showing the coset derivation of the $N=1,2$ super-NLS equations. Due to the above considerations, our framework can be straightforwardly applied to derive more complicated coset theories. \par The basic difference with respect to the bosonic case lies in the fact that now the subalgebra we factor out is generated by spin ${\textstyle{1\over 2}}$ supercurrents, which enter a spinorial, fermionic, covariant derivative.\par The supercurrent algebra generating the $N=1$ super-NLS theory involves two oppositely charged fermionic superfields of spin ${{\textstyle{1\over 2}}}$. The coset construction can be performed as in the bosonic case, leading to a non-linear super-${\cal W}$ algebra involving an infinite number of primary superfields, one for each integral or half-integral value of the spin $s\geq 1$. Such a superalgebra can be regarded as a rational super-${\cal W}$ algebra. It guarantees the existence of a consistent reduction of the super-KP hierarchy, which in its turn implies the integrability of the super-NLS equation. The totally new feature with respect to the bosonic case, i.e. that only the subsector of fermionic superfields appears in the reduced super-KP operator, will be fully discussed.\par The first two fermionic superfields in the coset superalgebra are the first two (fermionic) hamiltonian densities of the super-NLS equation. Two compatible super-Poisson brackets are derived as in the bosonic case. A super-Wakimoto representation for the supercurrent algebra enables us to introduce the associated modified super-NLS hierarchy.\par Our results concerning the $N=1$ super-NLS equation should be compared with those of \cite{das} (and \cite{roe}). The equation we derive coincides with the one analyzed in \cite{das}. While the coefficients in \cite{das} were suitably chosen in order to ensure integrability, in our case they are automatically furnished by the coset construction. We remark here that the Lax operator given in \cite{das} is of matrix type, while our reduced super-KP operator is of scalar type. More comments on the connection between the two approaches will be given in the text.\par As far as the $N=2$ case is concerned, we will make use of the formalism, already developed in \cite{IT} for Toda theories, based on chiral and antichiral superfields. They are equivalent to $N=1$ superfields, which allows us to reduce the $N=2$ case to the previous one.
Any object in the $N=2$ theory (namely superfields, covariant derivatives, hamiltonians, Lax operators) has its chirally conjugated counterpart.\par The scheme of the paper is the following: in the next two sections the bosonic construction is reviewed in detail and the basic structures which are used also in the super-case are discussed. We would like to point out that some of the results presented here are new. Next, the $N=1$ formalism is introduced and the definition of superalgebra cosets is given. The $N=1$ super-NLS hierarchy is analyzed. The last two sections are devoted to introducing the formalism and extending the results to the $N=2$ case. \sect{The coset reduction of the bosonic KP hierarchy.} \indent In this section we summarize the basic results of \cite{toppan} concerning the coset reduction of the bosonic KP hierarchy.\par Let us state the problem first: the KP hierarchy (we follow \cite{dickey} and the conventions introduced there) is defined through the pseudodifferential Lax operator \begin{eqnarray} L &=& \partial + \sum_{i=0}^{\infty} U_i\partial^{-i} \label{kp} \end{eqnarray} where the $U_i$ are an infinite set of fields depending on the spatial coordinate $x$ and the time parameters $t_k$. Let us denote as ${L^k}_+$ the purely differential part of the $k$-th power of the $L$ operator; an infinite set of differential equations, or flows, for the fields $U_i$ is introduced via the equations \begin{eqnarray} {\partial L\over \partial t_k}& = &[ {L^k}_+,L] \label{flows} \end{eqnarray} The quantities \begin{eqnarray} F_k &=& <L^k> \label{first} \end{eqnarray} are first integrals of motion for the flows (\ref{flows}). Here the symbol $<A>$ denotes the integral of the residue ($<A>=\int dw\, a_{-1} (w)$) for the generic pseudodifferential operator $A = \dots + a_{-1} \partial^{-1} +\dots$\par An infinite set of compatible Poisson brackets structures can be introduced, leading to a (multi)hamiltonian structure for the flows (\ref{flows}). The first integrals of motion are hamiltonians in involution with respect to all the Poisson brackets. \par The flows (\ref{flows}) involve an infinite set of fields. The reduction procedure of the KP hierarchy consists in introducing algebraic constraints on such fields, so that only a finite number of them remain independent. Such constraints must be compatible with the flows (\ref{flows}). As a final result one gets a hierarchy of integrable differential equations involving a finite number of fields only.\par The canonical way to perform a reduction consists in imposing the constraint \begin{eqnarray} L^n ={L^n}_+ \label{kdvred} \end{eqnarray} which states that the $n$-th power of $L$ is a purely differential operator, for a given positive integer $n=2,3,\dots$ Such reductions lead to generalized KdV hierarchies: for $n=2$ one gets the KdV equation, for $n=3$ the Boussinesq one, and so on. The hamiltonian structure for such reduced hierarchies is induced from the hamiltonian structure of the original unreduced KP. These hierarchies are the ones originally described by Drinfeld and Sokolov \cite{DS}.
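As an explicit illustration of (\ref{kdvred}), consider the simplest case $n=2$, with the standard normalizations (see e.g. \cite{dickey}; other conventions differ by rescalings of the time variables). Writing $L^2 = {L^2}_+ \equiv \partial^2 + u$, the first non-trivial flow is the $t_3$ one, and a short computation gives \begin{eqnarray} {L^3}_+ &=& \partial^3 + {\textstyle{3\over 2}}\, u\,\partial + {\textstyle{3\over 4}}\, u'\nonumber\\ {\partial u\over \partial t_3} &=& [{L^3}_+, L^2] = {\textstyle{1\over 4}}\left( u''' + 6\, u u'\right) \end{eqnarray} which is indeed the KdV equation in its standard form.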
Under the Poisson brackets structure the fields entering $L^n$ satisfy a classical finite non-linear ${\cal W}$ algebra (of polynomial type).\par In the dispersionless limit of the Lax equation (obtained by assuming that the fields do not depend on the spatial coordinate $x$) and in the Fourier-transformed basis (the operator $\partial \equiv p$, $p$ the momentum) the Lax operator $L^n$ is just given by a polynomial in $p$ of order $n$.\par The set of reductions given by the constraint (\ref{kdvred}) does not exhaust the set of all possible reductions compatible with the flows of KP. Indeed other consistent reductions have been discussed in the literature (see e.g. \cite{ara},\cite{bonora}). They are called multi-fields reductions of KP. In the language of \cite{bonora2} they are labelled by two positive integers $p,q$ and called generalized ($p,q$) KdV hierarchies. For this new class of reductions there exists no integer $n$ such that the constraint (\ref{kdvred}) holds. As a basic feature of this new class, a non-linear ${\cal W}_\infty$ algebra is associated to each reduction, instead of the polynomial ${\cal W}$ algebra associated to the standard Drinfeld-Sokolov reductions.\par In \cite{toppan} we have shown, working out explicitly some examples, that this new set of reductions can be derived by factoring a Kac-Moody subalgebra out of a given Kac-Moody or polynomial ${\cal W}$ algebra (coset construction). Furthermore, the structure of the non-linear ${\cal W}_\infty$ algebra associated to such a coset is encoded in an underlying structure of finite rational ${\cal W}$ algebra (since the notion of rational ${\cal W}$ algebra has been fully explained in \cite{DFSTR,toppan}, we will not discuss it here). Even if we do not have a formal proof that any coset factorization determines a corresponding KP reduction, we believe this statement to be true. Indeed, for every example of coset worked out so far we were able to find its associated KP-reduced hierarchy.\par Before going ahead, let us impose the constraint $U_0\equiv 0$ in (\ref{flows}) and let us discuss the first two flows, for $k=1,2$. We get respectively \begin{eqnarray} {\partial\over\partial t_1 } U_j &=& U_j' \nonumber\\ {\partial\over\partial t_2 } U_j &=& U_j '' + 2 U_{j+1} ' - 2 \sum_{r=1}^{j-1} (-1)^r\left( \begin{array}{c} j-1\\ r \end{array}\right) U_{j-r} \partial^r U_1 \label{eqmo} \end{eqnarray} (from now on we use the standard convention of denoting the spatial derivative with a prime and the time derivative with a dot if no confusion concerning the flow arises) for any $j=1,2,\dots$\par The first flow is trivial, while the second one provides a set of non-linear equations. \par For later convenience (and in order to derive the KP reduction we are going to discuss from an underlying coset algebra which provides the hamiltonian structure) let us introduce at this point a covariant derivative $\cal D$ (whose precise definition will be given later), acting on covariant fields with definite charge.
An important point is that the covariant derivative satisfies the same rules as the ordinary derivative, in particular the Leibniz rule, and coincides with the latter when acting upon chargeless fields.\footnote{The following discussion will be limited to covariant derivatives defined for an abelian $U(1)-{\cal KM}$ algebra, even if the non-abelian case can be treated on the same footing.} At a formal level, the formulas giving the action of covariant derivatives on covariant fields look the same as those involving ordinary derivatives. An example of that is the following important commutation rule \begin{eqnarray} {\cal D}^{-k} f&=& f{\cal D}^{-k} +\sum_{r=1}^\infty(-1)^r \left( \begin{array}{c} k+r-1\\ r \end{array}\right) f^{(r)} {\cal D}^{-k-r} \label{comrul} \end{eqnarray} (here $f^{(r)} \equiv {\cal D}^r f $ and $k$ is a positive integer).\par A consistent reduced version of the KP hierarchy can be expressed through the Lax operator \begin{eqnarray} L &=& {\cal D} + J_- {\cal D}^{-1} J_+\equiv \partial +J_-\cdot {\cal D}^{-1}J_+\label{nls} \end{eqnarray} Let us introduce the composite fields $V_n = J_- \cdot {\cal D}^n J_+$. The reduction (\ref{nls}) implies the identification \begin{eqnarray} U_{n} &=& (-1)^{n-1}V_{n-1}, \quad\quad n=1,2,...\label{subst} \end{eqnarray} where the $U_n$'s are the fields appearing in (\ref{kp}). It can be easily checked that the above position is indeed a reduction, namely that it is consistent with the flows (\ref{flows}); this statement is proved as follows: first one notices that, due to the properties of the covariant derivative, the algebraic relation \begin{eqnarray} V_{p+1} \cdot V_{0} &=& V_{0} \cdot \partial V_{p} + (V_1- \partial V_{0}) V_{p} \label{alg} \end{eqnarray} holds, which allows one to algebraically express the fields $V_p$, for $p\geq 2$, in terms of the fundamental ones $V_0 $ and $V_1$. Due to standard properties of the binomial coefficients, the equations for $j > 2$ in the second flow of (\ref{eqmo}) are compatible with the algebraic relation (\ref{alg}) after taking into account the substitutions (\ref{subst}). \par So far we have discussed the reduction of the KP hierarchy at a purely algebraic level, without mentioning any hamiltonian structure. Up to now the introduction of a covariant derivative was not essential since, as we have already remarked, covariant and ordinary derivatives play the same role as long as only the algebra is concerned.
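As a simple illustration of the rational structure hidden in (\ref{alg}), take $p=1$ and divide formally by $V_0$: \begin{eqnarray} V_2 &=& \partial V_1 + { (V_1 -\partial V_0)\, V_1 \over V_0} \end{eqnarray} Iterating, every $V_p$ with $p\geq 2$ becomes a rational (non-polynomial) expression in $V_0$, $V_1$ and their derivatives; this is the origin of the finite rational ${\cal W}$ algebra structure mentioned in the Introduction.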
The introduction of a covariant derivative is at least a very convenient tool to make contact with the hamiltonian dynamics, and it proves to be crucial for regarding the (\ref{nls}) reduction as a coset construction.\par Let us assume the fields $J_\pm (x), J_0(x)$ to satisfy the $sl(2)$ Kac-Moody algebra \begin{eqnarray} \{J_+(z),J_-(w)\} &=& \partial_w\delta(z-w) - 2 J_0 (w) \delta(z-w) \equiv {\cal D} (w)\delta(z-w) \nonumber \\ \{J_0(z), J_\pm (w)\} &=& \pm J_\pm (w) \delta (z-w) \nonumber \\ \{J_0 (z),J_0(w)\} &=& {-\textstyle{1\over 2}}\partial_w\delta(z-w)\nonumber\\ \{J_\pm(z),J_\pm(w)\} &=& 0 \label{kmalg} \end{eqnarray} The covariant derivative $\cal D$ is defined, acting on covariant fields $\Phi_q$ of definite charge $q$, as \begin{eqnarray} {\cal D}\Phi_q &=& (\partial +2q J_0)\Phi_q \end{eqnarray} The property of covariance for the field $\Phi_q$ is defined through the relation \begin{eqnarray} \{ J_0 (z), \Phi_q (w) \} &=& q \Phi_q(w)\delta (z-w) \end{eqnarray} As its name suggests, the covariant derivative maps covariant fields of charge $q$ into new covariant fields having the same charge. In particular $J_\pm $ are covariant fields with respect to $J_0$ and have charge $\pm 1$ respectively, so that \begin{eqnarray} {\cal D} J_\pm&= & \partial J_\pm \pm 2 J_0 \cdot J_\pm \end{eqnarray} The algebraic relations (\ref{kmalg}) of the $sl(2)$ Kac-Moody algebra can be seen as a first Poisson brackets structure (denoted as $ \{\cdot, \cdot \}_1$) for the reduced hierarchy (\ref{nls}). It is indeed a trivial check to show that the first two integrals of motion $F_{1,2}$ (\ref{first}) are proportional to $H_{1,2}$: \begin{eqnarray} H_1&=&\int (J_-\cdot J_+)\nonumber\\ H_2&=& -\int (J_-\cdot {\cal D}J_+) \label{hami} \end{eqnarray} which are hamiltonians in involution with respect to the (\ref{kmalg}) Poisson brackets; $H_{1,2}$ reproduce respectively the first and the second flow of (\ref{eqmo}) under the substitution (\ref{subst}): \begin{eqnarray} {\partial\over\partial t_1 } V_n &=& \{ H_1, V_n\} = V_n ' \nonumber\\ {\partial\over\partial t_2 } V_n &=& \{ H_2, V_n\} = V_n '' -2V_{n+1} ' -2 \sum_{r=1}^n \left( \begin{array}{c} n\\ r \end{array}\right) V_{n-r} \partial^rV_0 \label{eqmo2} \end{eqnarray} \par for $n=0,1,\dots$\par Our framework allows us to accommodate a second compatible Poisson brackets structure, which is given by \begin{eqnarray} \{ J_-(z), J_- (w)\}_2&=& 0\nonumber\\ \{ J_+ (z), J_+ (w) \}_2 &=& -\delta (z-w) (J_+)^2 (w)\nonumber\\ \{ J_+ (z), J_- (w) \}_2 &=& {{\cal D}_w}^2 \delta (z-w) +\delta (z-w) J_+(w)J_-(w) \end{eqnarray} To understand the above relations, one should notice that they are obtained from the corresponding relations for the first Poisson brackets structure (\ref{kmalg}) after taking into account the substitutions \begin{eqnarray} J_- &\mapsto& J_-\nonumber \\ J_+ &\mapsto &-{\cal D}J_+ \end{eqnarray} The compatibility of the first and second Poisson brackets simply means that the following equality is satisfied: \begin{eqnarray} {\dot f} &=& \{ H_1, f\}_2 = \{ H_2, f\}_1 \end{eqnarray} \par The composite fields $V_n$ entering the reduced Lax operator (\ref{nls}) are by construction chargeless, i.e. they have vanishing Poisson brackets with respect to $J_0$ \begin{eqnarray} \{ J_0 (z), V_n (w) \} &=& 0 \label{comm} \end{eqnarray} They constitute a linearly independent basis for the composite chargeless bilinear fields (bilinear invariants); namely, any such field can be obtained as a linear combination of the $V_n$ fields and ordinary derivatives acting on them.
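To see concretely how the $V_n$ span the bilinear invariants, notice for instance that, $V_0$ being chargeless, the Leibniz rule for ${\cal D}$ gives \begin{eqnarray} \partial V_0 &=& ({\cal D}J_-)\cdot J_+ + J_-\cdot {\cal D}J_+ \end{eqnarray} so that the invariant $({\cal D}J_-)\cdot J_+$, although not of the form $J_-\cdot {\cal D}^n J_+$, is recovered as $\partial V_0 - V_1$.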
Under the first Poisson brackets structure the fields $V_n$ form a closed non-linear algebra. The only finite subset which is closed with respect to this algebra is given by $V_0$ itself: as soon as any other field is added, one needs the whole infinite set of fields to close the algebra. These bilinear invariants therefore provide the reduction (\ref{nls}) with the structure of a non-linear ${\cal W}_\infty$ algebra. Since however the $V_n$ fields, even if linearly independent, are not algebraically independent due to relations like (\ref{alg}), the non-linear ${\cal W}_\infty$ algebra structure can be regarded as encoded in the more compact structure of a finite rational ${\cal W}$ algebra. For more details and for the explicit expression of such a rational algebra see \cite{toppan}. \par The fact that the fields $V_n$ have vanishing Poisson brackets with respect to the $U(1)-{\cal KM}$ subalgebra of the Kac-Moody $sl(2)$ means that we have found the explicit link between our KP reduction (\ref{nls}) and the coset factorization.\par In \cite{toppan} another such reduction was considered in full detail; it was associated to the Lax operator \begin{eqnarray} {\tilde L}& =& {\cal D}^2 + T + W_-\cdot {\cal D}^{-1} W_+ \label{op} \end{eqnarray} Such an operator does not have the form of a KP operator, but it is nevertheless possible to introduce the uniquely defined ``square root" $ {\tilde L}^{{1\over 2}}$ of ${\tilde L}= {\tilde L}^{{1\over 2}} \cdot {\tilde L}^{{1\over 2}}$, which is of KP type (${\tilde L}^{{1\over 2}}= {\cal D} +... $). The fields $T, W_\pm$ entering (\ref{op}) are respectively a chargeless stress-energy tensor and two (oppositely charged) bosonic spin ${\textstyle{3\over 2}}$ fields; the charge is defined with respect to a $U(1)-{\cal KM} $ current $J$ entering the covariant derivative. The fields $J, T, W_\pm$ form a closed algebra, which is nothing else than the non-linear Polyakov-Bershadsky ${\cal W}$ algebra \cite{polya}. It plays the role of a first Poisson brackets structure, leading to a hamiltonian dynamics for the flow associated to the (\ref{op}) Lax operator, just like the $sl(2)-{\cal KM}$ algebra in the previous case. The same steps done before can be repeated in this case too.\par In general, starting from a given coset algebra, it is quite easy to guess the form of the reduced KP Lax operator; the following steps should be performed: first, the Kac-Moody currents of the factorized subalgebra should be accommodated into a single covariant derivative; then, with the help of dimensional considerations, one should identify the $U_n$ fields of (\ref{kp}) with invariants constructed out of covariant fields, the original ones in the algebra as well as the covariant derivatives applied to them.
The only difficulty left consists in explicitly checking the consistency of such a reduction with the KP flows, as well as its link with the hamiltonian dynamics provided by the algebra itself.\par We close this section with a remark: in the dispersionless limit of the Lax equation, and taking into account that $J_0$ is a constant ($\equiv \alpha$) with respect to any flow due to the relations (\ref{comm}), the reduced operators (\ref{nls}) and (\ref{op}) are respectively given by \begin{eqnarray} L&\rightarrow& p + {\lambda \over p+ \alpha} \nonumber\\ {\tilde L}& \rightarrow & p^2 + t +{\lambda \over p+\alpha} \end{eqnarray} with $\alpha ,\lambda$ and $ t $ constants.\par It is remarkable that the reductions associated to rational ${\cal W}$ algebras lead, in the dispersionless limit, to Lax operators fractional in $p$ (this is always true in any case of coset construction), while the Drinfeld-Sokolov reductions associated to polynomial ${\cal W}$ algebras lead to Lax operators polynomial in $p$. \section{From NLS to a modified NLS hierarchy via the Wakimoto representation of the $sl(2)-{\cal KM}$ algebra.} \indent In this section we study more closely the hierarchy associated to the reduced KP operator (\ref{nls}). We show that it coincides with the two-component formulation of the NLS hierarchy. In terms of the second hamiltonian $H_2 = -\int (J_- {\cal D } J_+)$ we get indeed the following equations \begin{eqnarray} {\dot {J_\pm}}&= & \{ J_\pm , H_2\}_1=\pm {\cal D}^2 J_\pm \pm 2 (J_+ J_-)J_\pm \label{nls2}\end{eqnarray} This is the coupled system associated to the NLS equation. Due to the results mentioned in the previous section it is consistent to set $J_0\equiv 0$, which further implies ${\cal D}^2 J_\pm = {J_\pm}'' $. Next, the standard NLS equation is recovered by letting the time be imaginary. Such a ``Wick rotation" allows making the identification \begin{eqnarray} {J_-}^\star &=& J_+ = u \end{eqnarray} We finally obtain \begin{eqnarray} i {\dot u} &=& u'' + 2 u|u|^2 \end{eqnarray} which is the NLS equation in its standard form \cite{fadtak}.\par At this point we should recall that equivalent integrable equations can arise in two different ways: either because they are associated to different hamiltonians belonging to the same hierarchy of hamiltonians in involution, or because there exists a mapping between them. The latter is the case for the relation between the KdV and m-KdV equations, the m-KdV equation involving the free field $\varphi$, which is related via the Miura transformation to the field $v$ satisfying the KdV equation; for the KdV Lax operator this reads as follows \begin{eqnarray} \partial^2 + v& =&(\partial-\varphi)(\partial + \varphi ) = \partial^2 +\varphi ' - \varphi^2 \end{eqnarray} Generalizations of this construction hold for any hierarchy of Drinfeld-Sokolov type.\par The framework we developed in the previous section is particularly useful for describing the analogous free-field mappings in the case of coset reductions. There exists indeed a standard free-field representation of the $sl(2)-{\cal KM}$ algebra, which is given by the (classical) Wakimoto representation \cite{waki}.
It is realized in terms of the weight $1$ field ${\nu} $\footnote{In the standard notation for the quantum Wakimoto representation $\nu \equiv \partial \phi$, where $\phi$ is the fundamental field satisfying the OPE $\phi (z)\phi(w)\sim \log (z-w)$.} and the bosonic $\beta-\gamma$ system of weights $(1,0)$, satisfying the algebra \begin{eqnarray} \{\beta (z) , \gamma (w) \} &=& -\{ \gamma (z) , \beta (w) \} = \delta (z-w)\nonumber\\ \{\nu (z), \nu (w) \} &=& \partial_w \delta (z-w) \end{eqnarray} (any other Poisson bracket is vanishing).\par The $sl(2)-{\cal KM}$ algebra given in (\ref{kmalg}) is reproduced through the identifications \begin{eqnarray} J_+&=&\beta\nonumber\\ J_0 &=& -\beta \gamma + {i\over {\sqrt 2}}\nu \nonumber\\ J_- &=& \beta \gamma^2 -i{\sqrt 2} \gamma \nu +\partial\gamma \end{eqnarray} Representing the hamiltonian $H_2 $ in terms of the Wakimoto fields, one can derive the coupled system \begin{eqnarray} {\dot {\beta }}&= & \{\beta, H_2\}_1 = \beta '' + 2\beta^2\gamma ' -2\beta^3\gamma^2\nonumber\\ {\dot {\gamma}} &=& \{\gamma , H_2\}_1=\gamma '' -2\gamma^2\beta ' -2\gamma^3\beta^2 \label{mnls} \end{eqnarray} (we have used here the consistent constraint $J_0 = 0$ to get rid of the field $\nu$ in the above equations). The fields $\beta , \gamma$ enter the above system in a symmetric way, and we can forget about the different weights we originally assigned to them when defining the Wakimoto representation. If we indeed let the spatial coordinate be imaginary ($\partial_x \mapsto i\partial_x $), it is consistent to set \begin{eqnarray} \gamma^\star&= &\beta = \lambda \end{eqnarray} so that the final result is \begin{eqnarray} {\dot {\lambda}}& =& -\lambda '' + 2\lambda (i\lambda\partial\lambda^\star - |\lambda|^4 ) \label{redmnls} \end{eqnarray} \par It is a remarkable fact that the modified NLS system (\ref{mnls}) should be regarded as some sort of dual version of the original NLS system (\ref{nls2}). In the latter case the reduction to the single-component NLS equation is done by assuming the time to be imaginary, while in the m-NLS case this is provided by assuming the space to be imaginary. \par The construction discussed here can be trivially extended to any coset arising from a generic Kac-Moody algebra. The free-field analogue of the Wakimoto representation is in this case provided by (the classical version of) the results of ref. \cite{GMMOS}.\par Let us finally stress the point that in our approach to the KP-coset reduction the connection with the free-field representation is particularly explicit, since we did not need to introduce any Dirac brackets arising from the constraint $J_0\equiv 0$: in our framework all computations are performed using the original Poisson brackets structure. \section{The coset derivation of the $N=1$ super-NLS equation.} \indent In this section we will set up a manifestly supersymmetric framework to derive $N=1$ supersymmetric integrable hierarchies via a coset construction. There are two basic motivations for doing that. The first concerns of course the construction of superintegrable hierarchies, which are interesting in their own right and have been widely studied in the literature (see e.g. \cite{others,N2}). The second motivation lies in better understanding the coset construction itself.
Before any attempt at classifying the cosets, and before giving general formal proofs of their link with the hierarchies, it is interesting to investigate how they look in the case of superalgebras.\par It should be kept in mind that, even if our discussion will concern the super-NLS hierarchy only, this example is in no respect crucial. The same approach discussed here can be straightforwardly applied to derive other supersymmetric coset hierarchies. It is enough for that to apply the machinery developed here to any given coset algebra. The advantage of discussing the super-NLS case lies in its technical simplicity. \par The super-NLS case is however not merely an academic exercise, and it is interesting to compare our results with those of \cite{roe,das}. In \cite{roe} two distinct supersymmetrizations of the NLS equation, one of them involving a free parameter, have been proposed. It is stated that both lead to an integrable hierarchy. In \cite{das} manifestly supersymmetric NLS equations have been investigated. It has been shown that, applying conventional tests of integrability to such equations, only the supersymmetric system without any free parameter is selected. Moreover there exists a discrepancy in the coefficients with respect to \cite{roe}. The coset construction we are going to discuss automatically provides the super-NLS integrable system of ref. \cite{das} with the same coefficients (therefore supporting the statement of \cite{das} that a misprint occurs in \cite{roe}). Our coset construction implies that associated to such a system there exists a non-linear super-${\cal W}_\infty$ algebra involving an infinite series of primary bosonic (of integral dimension $h=1,2, ... $) and fermionic (of half-integral dimension $ h={\textstyle{3\over 2}}, {\textstyle{5\over 2}}, ...$) $N=1$ superfields. Such a super-${\cal W}_\infty$ algebra can be regarded as a rational super-${\cal W}$ algebra. The existence of this non-linear super-${\cal W}_\infty$ algebra is already an indication of the integrability properties of our super-NLS system. This statement is made precise by associating to the coset a consistent reduction of the super-KP hierarchy. Our Lax operator is different from the one discussed in \cite{das}. \par Let us now fix our conventions concerning the superspace. We denote with capital letters the $N=1$ supercoordinates ($X\equiv (x, \theta)$, with $x$ and $\theta $ real, respectively bosonic and grassmann, variables). The supersymmetric spinor derivative is given by \begin{eqnarray} D \equiv D_X &=& {\partial\over \partial\theta} +\theta {\partial\over \partial x} \end{eqnarray} With the above definition $ {D_X}^2 ={\textstyle{\partial\over \partial x}}$. \par The supersymmetric delta-function $\Delta (X,Y)$ is a fermionic object \begin{eqnarray} \Delta (X,Y) &=& \delta (x-y) (\theta -\eta) \end{eqnarray} It satisfies the relations \begin{eqnarray} \Delta (X,Y) &=& -\Delta (Y,X) \nonumber\\ D_X\Delta (X,Y) &=& - D_Y\Delta (X,Y) \end{eqnarray} Our convention for the integration over the grassmann variable is \begin{eqnarray} \int d\theta \cdot \theta &=& -1 \end{eqnarray} For any given superfield $F(X)$ we then get \begin{eqnarray} \int dY \Delta (X, Y )F(Y) &=& F(X) \end{eqnarray} As in the bosonic case, the (super)-line integral over a total derivative gives a vanishing result.
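As a consistency check of these conventions, the reproducing property of $\Delta$ can be verified in components: writing $Y=(y,\eta)$ and $F(Y)= f(y) +\eta g(y)$, the only terms surviving the $\eta$ integration are $-\eta f$ and $\theta\eta g$, and using $\int d\eta \cdot \eta = -1$ one finds \begin{eqnarray} \int dY \Delta (X,Y) F(Y) &=& \int dy\, d\eta\, \delta (x-y)(\theta -\eta)\left( f(y)+\eta g(y)\right) = f(x) +\theta g(x) = F(X) \end{eqnarray}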
The canonical dimensions $d$ are respectively given by \begin{eqnarray} d(D) &=& d(\Delta) = -d(\theta) = -{\textstyle{1\over 2}}d(x) ={\textstyle {1\over 2}} \end{eqnarray} The role which in the bosonic case is played by the ordinary derivative is now played by the spinor derivative of dimension $d={\textstyle {1\over 2}}$. This makes it plausible that covariant spinor derivatives should now be constructed in terms of spin ${{\textstyle {1\over 2}}}$ fermionic superfields. An example of a supersymmetric rational ${\cal W}$ algebra involving such a kind of derivative has indeed been given in \cite{DFRS}. \par The $N=1$ counterpart of the $U(1)-{\cal KM}$ current $J_0(z)$ should be expressed by the fermionic superfield $\Psi_0(X)= \psi_0 (x) + \theta J_0(x)$, satisfying the super-Poisson brackets relation\footnote{We recall that super-Poisson brackets are symmetric when taken between odd elements, antisymmetric otherwise.} \begin{eqnarray} \{ \Psi_0 (X), \Psi_0 (Y) \} &=& D_Y \Delta (X,Y) \label{zerosusyalg} \end{eqnarray} which implies, at the level of components, \begin{eqnarray} \{\psi_0(x),\psi_0(y)\}&=&-\delta (x-y)\nonumber\\ \{J_0(x),J_0(y)\}&=&-\partial_y\delta (x-y) \end{eqnarray} Super-covariant fields and the super-covariant derivative can now be introduced through \begin{eqnarray} \{ \Psi_0 (X), \Phi_q (Y) \} &=& q\Delta (X,Y) \Phi_q (Y)\nonumber\\ {\cal D}\Phi_q &=& D\Phi _q + q \Psi_0 \Phi_q \end{eqnarray} where $\Phi_q$ is a covariant superfield (either bosonic or fermionic).\par We are now in the position to discuss the algebra providing the first (super)-Poisson brackets structure for the super-NLS equation. As suggested in \cite{das}, the component fields should be accommodated in two fermionic spin ${\textstyle{1\over 2}}$ superfields $\Psi_\pm = \psi_\pm + \theta J_\pm $. With the above choice one can identify the bosonic components $J_\pm$ with the analogous fields we already encountered in the bosonic case. The relevant algebra can therefore be simply guessed to be the supersymmetric analogue of the $sl(2)-{\cal KM}$ algebra, introduced through the relations \begin{eqnarray} \{ \Psi_0 (X), \Psi_\pm (Y) \} &=& \pm \Delta (X,Y) \Psi_\pm (Y)\nonumber \\ \{ \Psi_+ (X), \Psi_- (Y) \} &=& {\cal D}_Y \Delta (X,Y) = D_Y \Delta (X,Y) + \Delta (X,Y) \Psi_0(Y) \label{susyalg} \end{eqnarray} One indeed recovers the algebra (\ref{kmalg}) by setting all the component spin ${\textstyle{1\over 2}}$ fermionic fields equal to $0$.\par We can define, just like in the bosonic case, the composite superfields $V_n (X)$, where \begin{eqnarray} V_n &=& \Psi_- {\cal D}^{n} \Psi_+ \quad\quad n=0,1,2,... \label{superinv} \end{eqnarray} By construction they have vanishing Poisson brackets with respect to $\Psi_0 $: \begin{eqnarray} \{ \Psi_0 (X), V_n (Y) \} &=& 0 \end{eqnarray} The superfields $V_n$ are respectively bosonic for even values of $n$ and fermionic for odd values. They play the same role as the corresponding fields in the purely bosonic case: they constitute a basis of linearly independent superfields for the chargeless composite superfields. The super-Poisson brackets (\ref{zerosusyalg},\ref{susyalg}) provide such a basis of fields with the structure of a non-linear super-${\cal W}_\infty$ algebra, which will be discussed later in more detail.
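To make contact with the bosonic construction it is instructive to expand the first two composite superfields in components, setting for simplicity $\Psi_0 =0$ (so that ${\cal D}=D$): \begin{eqnarray} V_0 &=& \psi_-\psi_+ +\theta\, (J_-\psi_+ -\psi_- J_+)\nonumber\\ V_1 &=& \psi_- J_+ +\theta\, (J_- J_+ -\psi_-{\psi_+}') \end{eqnarray} When the fermionic components $\psi_\pm$ are switched off, $V_0$ vanishes identically, while the $\theta$-component of $V_1$ reduces to the bosonic invariant $J_-J_+$; this already suggests that the bosonic superfields $V_{2n}$ carry redundant degrees of freedom, a point which will become important in the super-KP reduction below.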
\par In order to associate to the coset algebra a hamiltonian dynamics, as we did in the bosonic case, we proceed as follows: we recall that the superfields $V_n$ have positive dimensions $d(V_n) = {\textstyle { n+2\over 2}}$; then we look for all possible hamiltonian densities of a given dimension that one can algebraically construct out of the superfields $V_n$ and the covariant derivative (of dimension ${\textstyle {1\over 2}}$) acting upon them. For any given dimension only a finite number of such combinations are allowed. Since we are now working in a manifestly supersymmetric framework and the (super)-line integral is fermionic, the hamiltonian densities must be fermionic, of half-integral dimension. The first two possible hamiltonian densities, at dimensions ${\textstyle {3\over 2}}$ and ${\textstyle {5\over 2 }}$ respectively, are just given by $V_1$ and $V_3$. The latter is indeed the unique, up to a total derivative, chargeless $d ={\textstyle{5\over 2}}$ object.\par It can now easily be checked that $H_{1,2}$ given by \begin{eqnarray} H_1 &=& \int dX V_1 (X) = \int dX (\Psi_- \cdot {\cal D}\Psi_+)\nonumber\\ H_2 &=& \int dX V_3 (X) = \int dX (\Psi_-\cdot {\cal D}^3 \Psi_+) \label{superhami} \end{eqnarray} have vanishing Poisson brackets among themselves with respect to (\ref{zerosusyalg},\ref{susyalg}) and can therefore be regarded as hamiltonians in involution. Two compatible flows are defined through \begin{eqnarray} {\partial \over \partial t_1 } \Psi_\pm &=& \{ H_1, \Psi_\pm \} = {\cal D}^2 \Psi_\pm \nonumber\\ {\partial \over \partial t_2 } \Psi_\pm &=& \{ H_2, \Psi_\pm \} = \pm {\cal D}^4 \Psi_\pm \mp \Psi_\pm { D}( \Psi_\mp {\cal D} \Psi_\pm ) \label{superNLS} \end{eqnarray} The latter equation is the $N=1$ supersymmetric version of the two-component NLS system. As in the bosonic case, if we let the time $t_2$ be imaginary we can consistently set \begin{eqnarray} \Psi_+ = {\Psi_-}^\star = \Psi \nonumber \end{eqnarray} to get the super-NLS equation \begin{eqnarray} i{\dot \Psi} &=& \Psi^{(4)} -\Psi D(\Psi^\star \Psi^{(1)}) \end{eqnarray} (in order to simplify the notation, from now on the symbol $A^{(n)} \equiv {\cal D}^{n} A $ will be used). \par Since ${\dot \Psi}_0=0$ makes it consistent to set $\Psi_0 =0$, the above equation leads to the following system in component fields ($\Psi= \phi +\theta q$, $\phi$ fermionic and $q$ bosonic): \begin{eqnarray} i{\dot \phi} &=& \phi_{xx} + \phi ( \phi^\star \phi_x - q^\star q )\nonumber \\ i{\dot q } &=& q_{xx} - (q q^\star) q + (\phi {\phi_x}^\star -\phi_x\phi^\star) q +(\phi \phi^\star)q_x \end{eqnarray} As already stated, this equation coincides with the integrable super-NLS equation of ref. \cite{das}.\par The supersymmetric character of the above equations is guaranteed by the invariance of the hamiltonians $H_{1,2}$ under the transformations \begin{eqnarray} \delta \Psi_\pm &=& \pm\varepsilon {\cal D}\Psi_\pm \end{eqnarray} where $\varepsilon $ is a grassmann parameter.\par The existence of a bihamiltonian structure is derived as in the bosonic case.
The second super-Poisson brackets structure is given by \begin{eqnarray} \{ \Psi_- (X), \Psi_- (Y) \}_2 &=& 0\nonumber\\ \{\Psi_- (X), \Psi_+ (Y) \}_2 &=& \Delta^{(3)} -\Delta^{(1)}\Psi_-\Psi_+ +\Delta {\Psi_-}^{(1)}\Psi_+\nonumber\\ \{\Psi_+ (X), \Psi_+ (Y) \}_2 &=& \Delta^{(2)} \Psi_+{\Psi_+}^{(1)} -\Delta^{(1)}\Psi_+{\Psi_+}^{(2)} +\Delta {\Psi_+}^{(1)}{\Psi_+}^{(2)} \end{eqnarray} (the superfields on the right hand side are evaluated at $Y$ and $\Delta^{(n)}= {{\cal D}_Y}^{n} \Delta (X,Y) $).\par This second Poisson brackets structure is derived from the first one after the substitutions \begin{eqnarray} \Psi_- &\mapsto & \Psi_- \nonumber\\ \Psi_+ &\mapsto& {\cal D}^{2} \Psi_+ \end{eqnarray} are taken into account. \par The compatibility of the two Poisson brackets structures is ensured, as in the bosonic case, by the relation \begin{eqnarray} {{d F\over d t}} &=& \{ H_1, F\}_2 = \{ H_2, F\}_1 \end{eqnarray} Precisely as in the bosonic case, the two hamiltonians $H_{1,2}$ are the first two of an infinite series of hamiltonians mutually in involution. This statement will be justified later, when we show how to associate to the system (\ref{superNLS}) a reduction of the super-KP hierarchy.\par A comment is in order. The algebra (\ref{zerosusyalg},\ref{susyalg}) is the simplest possible algebra realized in terms of supercurrents and allowing a Kac-Moody coset construction. There is another very simple supercurrent algebra, which is realized by coupling to the $\Psi_0$ superfield two bosonic superfields $\Phi_\pm$ (instead of two fermionic ones) of dimension ${\textstyle{1\over 2}}$. The expression of this algebra looks like (\ref{susyalg}), but now one has to take into account the antisymmetry property when exchanging $\Phi_\pm$ in the super-Poisson brackets. If we define the charges as the super-line integrals of the supercurrents ($H=\int dX \Psi_0$, $E_\pm = \int dX \Psi_\pm$), then the algebra (\ref{zerosusyalg},\ref{susyalg}) generates a global $sl(2)$ algebra for the charges, while the algebra determined by $\Psi_0$ and the bosonic supercurrents is promoted to the global superalgebra $osp(1|2)$ (with generators $H$, $F_\pm = \int dX \Phi_\pm$). In \cite{das} a zero-curvature formulation for the system (\ref{superNLS}) was found; it is based on the $sl(2)$ algebra. The authors claimed to be unable to derive an analogous formulation starting from $osp(1|2)$. The reason is simply that the latter is associated with a radically different system, the dynamics being in this case defined for the bosonic superfields $\Phi_\pm $. The fact that the dynamics differs from the fermionic case can be immediately seen using the following argument: an invariant composite superfield $W_0 \cdot W_1 $ ($W_n = \Phi_-{\cal D}^n \Phi_+$) is allowed to enter the second hamiltonian density of dimension ${\textstyle{5\over 2}}$, while the corresponding composite superfield $V_0\cdot V_1$ vanishes in the fermionic case, owing to the anticommuting character of $\Psi_\pm$ (an explicit one-line check is given at the end of this section). Our coset construction can be performed for this bosonic case as well, leading to an interesting superintegrable system, which of course has nothing to do with the supersymmetrization of the NLS equation, since the component bosonic fields have spin ${\textstyle{1\over 2}}$ and not $1$. It is likely that for such a system the zero-curvature formulation would be based on the superalgebra $osp(1|2)$. We leave a detailed discussion of it for a further publication.
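The vanishing invoked in the above argument is checked in one line: \begin{eqnarray} V_0\cdot V_1 &=& (\Psi_-\Psi_+)(\Psi_-{\cal D}\Psi_+) = -{\Psi_-}^2\,\Psi_+\,{\cal D}\Psi_+ = 0 \end{eqnarray} since the fermionic superfield $\Psi_-$ squares to zero, while the analogous product $W_0\cdot W_1$, built with the bosonic superfields $\Phi_\pm$, has no reason to vanish.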
\section{Comments on the non-linear super-${\cal W}_\infty$ coset algebra.} \indent Let us make here some more comments concerning the non-linear super-${\cal W}_\infty$ algebra structure of the coset algebra. Its linear generators are the superfields $V_n$, $n$ a non-negative integer, defined in (\ref{superinv}). The superfields are bosonic for even values of $n$, fermionic for odd values. The set $\{V_0, V_1\}$ constitutes a finite superalgebra, given by the Poisson brackets \begin{eqnarray} \{ V_0 (X), V_0(Y) \} &=& -\Delta (X,Y) (DV_0 +2V_1)(Y)\nonumber\\ \{ V_0 (X), V_1(Y) \} &=& \Delta^{(2)} (X,Y) V_0(Y) +\Delta^{(1)}(X,Y) V_1(Y) -\Delta (X,Y) DV_1 (Y)\nonumber\\ \{ V_1 (X), V_1(Y) \} &=& -2\Delta^{(2)} (X,Y) V_1(Y) -\Delta (X,Y) {D_Y}^2V_1(Y) \label{supcos} \end{eqnarray} In terms of component fields it is given by two bosons, of spin $1$ and $2$ respectively, and two spin ${\textstyle {3\over 2}}$ fermions. It is the maximal finite subalgebra of the coset superalgebra: as soon as any other superfield is added to $V_0,V_1$, the whole set of fields $V_n$ is needed to close the algebra, giving the coset the structure of a super-${\cal W}_\infty$ algebra. Moreover such an algebra closes in a non-linear way. \par Using the techniques developed in \cite{DFSTR} it is possible to show the existence of an equivalent basis for expressing our super-${\cal W}_\infty$ algebra, given by the infinite set of superfields $W_h (X)$, which are primary with conformal dimension $h$ with respect to the stress-energy tensor (having vanishing central charge) $T(X) \equiv W_{\textstyle{3\over 2}}(X)$. To any integral value of $h$ ($h=1,2,..$) is associated a bosonic primary superfield; to any half-integral value ($h ={\textstyle{3\over 2}}, {\textstyle{5\over 2}},... $) a fermionic one. \par The condition of being primary means that the superfields $W_h$ satisfy the relation \begin{eqnarray} \{ T(X), W_h(Y) \} &=& -{h}\Delta^{(2)}(X,Y) W_h(Y) +{1\over 2} \Delta^{(1)}(X,Y) DW_h (Y)-\Delta (X,Y) D^2 W_h (Y)\nonumber\\ \end{eqnarray} We have at the lowest orders \begin{eqnarray} W_1 &=& V_0 =\Psi_- \Psi_+\nonumber \\ T &=& V_1 + {\textstyle{1\over 2 } }DV_0 = \nonumber\\ &=& {\textstyle{1\over 2 } } {\cal D} {\Psi_-}\cdot\Psi_++ {\textstyle{1\over 2 } } \Psi_-\cdot {\cal D}{\Psi_+}\nonumber\\ W_2 &=& 3V_2 + DT -{\textstyle {3\over 2}} \partial V_0 = \nonumber\\ &=& \Psi_-\cdot {\cal D}^2{\Psi_+} +{\cal D}{\Psi_-}\cdot {\cal D} {\Psi_+} -{\cal D}^2{\Psi_-}\cdot {\Psi_+} \end{eqnarray} We wish finally to make some comments on the rational character of the super-${\cal W}_\infty$ algebra defined above: the whole set of algebraic relations can be expressed just in terms of a closed rational super-${\cal W}$ algebra involving $4$ superfields, as the following reasoning shows. Let us introduce the superfields \begin{eqnarray} \Lambda_p &\equiv& {\cal D} \Psi_- \cdot {\cal D}^{p+1} \Psi_+\nonumber \end{eqnarray} then, by the graded Leibniz rule, \begin{eqnarray} \Lambda_p &=& D V_{p+1} + V_{p+2} \nonumber \label{lambdadef} \end{eqnarray} Due to standard properties of the covariant derivative we can write down for the superfields $\Lambda_p$ the analogue of the relation (\ref{alg}) of the bosonic case: \begin{eqnarray} \Lambda_0 \Lambda_{p+1} &=& \Lambda_0 D\Lambda_p + (\Lambda_1 - D\Lambda_0)\Lambda_p \label{ratio2} \end{eqnarray} which implies that the $\Lambda_p$ are rational functions of $\Lambda_{0,1}$, which in their turn are determined by the $V_i$, $i=0,1,2,3$.
Inverting the relation (\ref{lambdadef}) we can express any higher field $V_{p+1}$ in terms of $V_p$, $\Lambda_{p-1}$. As a consequence we have the (rational) closure of the superalgebra on the superfields $V_0,V_1, V_2, V_3$. \par We remark that it is now not possible, as in the bosonic case, to determine the higher order superfields $V_p$ from the formula (\ref{ratio2}) by simply inserting $V_0$, $V_p$ in place of $\Lambda_0$, $\Lambda_p$: this is due to the fact that any product $V_0\cdot V_{p+1} $ identically vanishes, since it is proportional to a squared fermion (${\Psi_-}^2=0$). That is the reason why four superfields, and not just two as one would have naively expected, are necessary to produce a finite rational algebra. \section{The $N=1$ super-Wakimoto representation and the modified super-NLS equation.} \indent In this very short section we repeat the construction of section 3, furnishing the $N=1$ super-Wakimoto representation of the algebra (\ref{zerosusyalg},\ref{susyalg}) and associating to the super-NLS equation (\ref{superNLS}) its modified version.\par The classical super-Wakimoto representation is realized in terms of three free superfields, denoted as $B, C, N$: \begin{eqnarray} B(X) &=& b(x) + \theta \beta (x)\nonumber\\ C (X) &=& \gamma (x) + \theta c (x)\nonumber\\ N(X)& =& \mu (x) + \theta \nu (x) \end{eqnarray} $B, N $ are assumed to be fermionic of dimension ${\textstyle {1\over 2}}$, while $C$ is assumed to be a $0$-dimensional bosonic superfield coupled to $B$. At the level of components we have in particular the already encountered bosonic $\beta - \gamma$ system of weights $(1,0)$, plus now a fermionic $b-c$ system of weights $({\textstyle {1\over 2}},{\textstyle {1\over 2}})$.\par The super-Poisson brackets of the free superfields are given by \begin{eqnarray} \{ B (X), C (Y) \} &=& \{ C(X), B(Y) \} =\Delta (X,Y)\nonumber \\ \{ N (X), N (Y) \} &=& D_Y \Delta (X,Y) \end{eqnarray} The superalgebra (\ref{zerosusyalg},\ref{susyalg}) is reproduced in terms of the superfields $B,C,N$ through the identifications \begin{eqnarray} \Psi_+ &=& B\nonumber\\ \Psi_0 &=& - B C + N\nonumber \\ \Psi _- &=& -{1\over 2} B C^2 + C N - DC \label{superwak} \end{eqnarray} Representing $H_2$ in (\ref{superhami}) via the above system, we get an evolution equation for $B,C$. As in the bosonic case the $N$ superfield can be expressed through $B,C$ by setting $\Psi_0=0$. Finally, by letting the space be imaginary it is consistent to further set \begin{eqnarray} ({\cal D }B)^\star &=& C \end{eqnarray} which implies \begin{eqnarray} \beta (x) &=& \gamma (x); \quad\quad b' (x) = c(x) \end{eqnarray} At the end we arrive at the supersymmetric generalization of eq. (\ref{redmnls}), which is given by \begin{eqnarray} {\dot B} &=& - D^4B + B ({D}(C^\star D C ) - {\textstyle{1\over 2 } }|C|^4 ) \end{eqnarray} \section{Integrable properties of the $N=1$ super-NLS equation: the super-KP reduction.} \indent We have already discussed the indications of integrability associated to the super-NLS equation arising from its bihamiltonian structure. Moreover we are aware of the results of \cite{das} concerning the integrability. In this section we will show that equation (\ref{superNLS}) deserves the name of super-NLS hierarchy, by explicitly associating to it a reduction of the super-KP operator. Before doing that, let us spend some words on the supersymmetric (with graded derivative) version of the KP hierarchy.
The standard reference we follow in this case is \cite{manin}.\par The super-KP operator is given by \begin{eqnarray} L &=& D +\sum_{i=0}^\infty U_i (X) D^{-i} \end{eqnarray} where now $D$ is the fermionic derivative and the $U_i$'s are superfields. For even values of $i$ they are fermionic, for odd values bosonic. In the following we will be interested only in the flows associated to even (bosonic) times. For a discussion concerning odd-time flows see e.g. \cite{ramos}. The even-time flows are defined through \begin{eqnarray} {\partial L \over \partial t_k} &=& [ {L^{2k}}_+, L] \end{eqnarray} where ${L^r}_+$ denotes the purely differential part of $L^r$. The above flows provide a set of equations for the infinite series of superfields $U_i$. To derive such equations we recall that $D^{-1} = D \partial^{-1}$, and the commutation rule (\ref{comrul}) can be employed.\par If we set the constraint \begin{eqnarray} DU_0 + 2 U_1 &=& 0 \end{eqnarray} then ${L^2}_+ = D^2=\partial$ and the first flow is trivial. With the above constraint we get \begin{eqnarray} {L^4}_+ &=& D^4 +FD +B \end{eqnarray} where \begin{eqnarray} F &=& 2 DU_1\nonumber\\ B &=& 4U_3 + 2 DU_2 -6 U_1U_1 \end{eqnarray} The second flow ($k=2$) is non-trivial and provides the following set of equations \begin{eqnarray} {\partial {U_{2n}}\over \partial t_2} &=& {U_{2n}}^{(4)} + 2 {U_{2n+2}}^{(2)} +F {U_{2n}}^{(1)} + 2 F U_{2n+1}-U_{2n-1}B^{(1)}+\nonumber\\ && \sum_{r=1}^{n-1} (-1)^{r+1} \left( \begin{array}{c} n-1\\ r \end{array}\right) (U_{2n-2r} B^{(2r)} +U_{2n-2r-1}B^{(2r+1)})+\nonumber\\ && \sum_{r=1}^n (-1)^r \left( \begin{array}{c} n\\ r \end{array}\right) U_{2n-2r+1} F^{(2r)} \end{eqnarray} for the fermionic superfields, and \begin{eqnarray} {\partial {U_{2n-1}}\over \partial t_2} &=& {U_{2n-1}}^{(4)} + 2{U_{2n+1}}^{(2)} +F{U_{2n-1}}^{(1)} - F^{(1)} U_{2n-1} +\nonumber\\ && \sum_{r=1}^{n-1} (-1)^{r+1} \left( \begin{array}{c} n-1\\ r \end{array}\right) ( U_{2n-2r-1}B^{(2r)} + U_{2n-2r-1}F^{(2r+1)} +U_{2n-2r}F^{(2r)})\nonumber\\ \end{eqnarray} for the bosonic ones.\par In order to define the reduced super-KP operator we compare these flows with the set of equations \begin{eqnarray} {\dot V_n} &=& \{ V_n, H_2\} \end{eqnarray} for the superfields $V_n = \Psi_- {\cal D}^n \Psi_+ $ introduced in (\ref{superinv}), provided by the hamiltonian $H_2$ given in (\ref{superhami}) with respect to the (\ref{zerosusyalg},\ref{susyalg}) Poisson brackets structure.\par We get the following equations, for fermionic and bosonic superfields respectively: \begin{eqnarray} {\partial V_{2n+1} \over \partial t_2 } &=& \partial^2 V_{2n+1} - 2\partial V_{2n+3} - V_{2n+1}\partial V_0 -V_{2n}\partial V_1 +\nonumber\\ && \sum_{k=0}^{n-1} \left( \begin{array}{c} n\\ k \end{array}\right) (V_{2k+1} \partial^{n-k} DV_1 -V_{2k} \partial^{n-k+1}V_1) \end{eqnarray} and \begin{eqnarray} {\partial V_{2n} \over \partial t_2 } &=& \partial^2 V_{2n} - 2\partial V_{2n+2} -V_{2n}\partial V_0 +\nonumber\\ && \sum_{k=0}^{n-1} \left( \begin{array}{c} n\\ k \end{array}\right) V_{2k} \partial^{n-k} DV_1 \end{eqnarray} In order to produce a consistent super-KP reduction we must be able to fit the above equations into the corresponding equations for the $U_i$ superfields. This cannot be done, or at least we were unable to do so, for the whole set of $V_n$ superfields. However the following considerations can be made: we remark that the equations of motion for the bosonic superfields (labelled by an even integer) involve on the right hand side bosonic superfields only.
It is therefore consistent with the dynamics to set all the bosonic superfields $V_{2n}\equiv 0$. We argue that this constraint should be imposed in order to obtain the right supersymmetrization of the NLS hierarchy: indeed the corresponding generators of the coset algebra in the bosonic case are given by the $J_-{\cal D}^nJ_+$ fields, which implies having a single bosonic field for each integral value of the spin ($n+2$). In the supersymmetric theory one expects the fermionic counterparts to be associated to such fields: for each half-integral value of the spin one should have a single fermionic field. The set of superfields $V_n$, $n=0,1,2,...$ is in this respect highly redundant: it provides two bosons and two fermions respectively for each integer and half-integer spin value $s\geq {\textstyle{3\over 2}}$, plus a single spin $1$ bosonic field arising from $V_0$, which plays no role in the NLS hierarchy. To get rid of this redundancy, a constraint which kills the extra degrees of freedom should be imposed. A constraint which allows doing that is just provided by setting \begin{eqnarray} V_{2n} &=& 0 \quad\quad {\rm for} \quad n=0,1,2,.. \label{boscon} \end{eqnarray} The consistency of this constraint with the dynamics, which we have just pointed out, is remarkable.\par After taking (\ref{boscon}) into account, the equation for the fermionic superfields $V_{2n-1}$ is reduced to \begin{eqnarray} {\dot V}_{2n-1} &=& \partial^2 V_{2n-1} - 2\partial V_{2n+1} +\sum_{k=0}^{n-1} \left( \begin{array}{c} n-1\\ k \end{array}\right) V_{2k-1} \partial^{n-k} DV_1 \end{eqnarray} It is immediately checked at this point that a consistent reduction of the super-KP hierarchy is recovered by setting \begin{eqnarray} U_{2n-1} &=& 0 \nonumber\\ U_{2n} &=& {\textstyle{1\over 2}} (-1)^n V_{2n-1}\quad\quad {\rm for}\quad n=1,2,... \end{eqnarray} The corresponding reduced super-KP operator can be compactly written as \begin{eqnarray} L &=& D +{\textstyle{1\over 2}} \Psi_- {\cal D}^{-2} {\Psi_+}^{(1)} \end{eqnarray} with ${\Psi_+}^{(1)}={\cal D} \Psi_+$.\par The integrability properties of the super-NLS hierarchy are established by the existence of such a Lax operator. \section{The $N=2$ formalism.} \indent Let us introduce here the framework and the conventions for working in a manifestly supersymmetric $N=2$ formalism.\par The $N=2$ superspace is parametrized by the bosonic coordinate $x$ and two grassmann variables $\theta, {\overline\theta}$.
A generic superfield is then expanded as \begin{eqnarray} \Phi(X) &=& \phi (x) +\theta f(x) +{\overline \theta} {\overline f}(x) + \theta{\overline\theta} g(x) \label{n2super} \end{eqnarray} The $N=1$ case is recovered by letting $\theta={\overline\theta}$.\par Two spinor derivatives $ {\tilde D}, {\overline D} $ are defined as \begin{eqnarray} {\tilde D} &=& {\partial \over \partial\theta} +{\overline \theta } \partial_x\nonumber\\ {\overline D} &=&{\partial \over \partial{\overline\theta}} +{ \theta } \partial_x \end{eqnarray} They satisfy the relations \begin{eqnarray} {\tilde D}^2 = {\overline D}^2 &=& 0\nonumber\\ \{ {\tilde D}, {\overline D} \} &=& 2\partial_x \end{eqnarray} It is convenient (we shall come back to this point later) to describe the $N=2$ theory in terms of constrained superfields, namely the chiral (${\tilde \Psi}$) and antichiral (${\overline \Psi}$) superfields, defined respectively by \begin{eqnarray} {\overline D} {\tilde \Psi} &=& 0 \nonumber\\ {\tilde D} {\overline \Psi} &=& 0 \end{eqnarray} Due to the above relations, the differentiated superfields ${\tilde D}\Phi $ and ${\overline D}\Phi$ are respectively antichiral and chiral superfields.\par The condition of chirality implies the following expansions in component fields \begin{eqnarray} {\tilde A} &=& a(x) + \theta \alpha (x) + \theta{\overline \theta} a'(x)\nonumber\\ {\overline B} &=& b(x) +{\overline \theta} {\beta}(x) -\theta{\overline\theta} b'(x) \end{eqnarray} and the differentiated superfields are \begin{eqnarray} {\tilde D} {\tilde A} &=& \alpha (x) + 2{\overline\theta} a'(x) -\theta{\overline\theta} \alpha '(x)\nonumber\\ {\overline D} {\overline B} &=& \beta (x) + 2{\theta} b'(x) +\theta{\overline\theta} \beta '(x) \end{eqnarray} It is remarkable that chiral and antichiral superfields can be expressed as $N=1$ superfields in relation with the superspaces \begin{eqnarray} {\hat X} &=& ({\hat x} = x+\theta{\overline \theta}, \theta)\nonumber\\ {\check X} &=& ({\check x} = x-\theta{\overline \theta},{\overline \theta}) \end{eqnarray} respectively.\par Moreover, if we introduce the $N=1$ spinor derivative $D$ as \begin{eqnarray} D&=& D_X ={\partial\over\partial\theta} + 2\theta {\partial\over \partial x} \label{newder} \end{eqnarray} (which differs by a factor of $2$ from the convention used in the previous sections), then we can write the differentiated superfields as \begin{eqnarray} {\tilde D} {\tilde A} &\equiv& D {\tilde A}|_{\check X}\nonumber\\ {\overline D} {\overline B} &\equiv& D {\overline B}|_{\hat X} \end{eqnarray} The existence of the $N=1$ superfield representation for chiral and antichiral superfields is particularly useful for our purposes because it allows defining the $N=2$ supersymmetric theory in terms of the $N=1$ superfield formalism developed in the previous sections.
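As a quick check of the above expansions (a one-line computation with the conventions just introduced), one verifies that ${\tilde A}$ is indeed annihilated by ${\overline D}$: \begin{eqnarray} {\overline D}{\tilde A} &=& \left( {\partial\over\partial{\overline\theta}} +\theta\partial_x\right)\left( a +\theta\alpha +\theta{\overline\theta}a'\right) = -\theta a' +\theta a' = 0 \end{eqnarray} and similarly ${\tilde D}{\overline B} = 0$.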
In particular we can define super-Poisson brackets structures as done before: they will depend on the $N=1$ supersymmetric delta-function already encountered (and on the derivative (\ref{newder}) acting on it).\par The supersymmetric line integrals for chiral and antichiral superfields are given respectively by \begin{eqnarray} d{\hat X} &\equiv& d X\quad\quad\quad X=(x,\theta)\nonumber\\ d{\check X} &\equiv&d{\overline X}\quad\quad\quad{\overline X} =(x,{\overline\theta}) \end{eqnarray} The two equivalences are due to the fact that the term proportional to ${\theta{\overline\theta}}$ is a total derivative for both chiral and antichiral superfields.\par Let us now spend some more words about using (anti)chiral superfields to describe $N=2$ theories. The dynamics of a real superfield $\Psi $ can always be recovered from the dynamics of two conjugated chiral and antichiral superfields $ {\tilde \Psi}$, $ {\overline \Psi}$ (we recall that the field $g(x)$ appearing in (\ref{n2super}) is just an auxiliary field, dynamically determined in terms of the component fields $\phi , f, {\overline f}$).\par It turns out that the $N=2$ dynamics can be expressed by using two conjugated sets of equations of motion for chiral and antichiral superfields. Such equations of motion are defined in terms of conjugated (anti)chiral hamiltonians whose combination gives a single real hamiltonian. For integrable systems the dynamics can also be expressed through two chirally conjugated Lax operators whose combination provides a single real Lax operator. Further details concerning such a construction can be found in \cite{IT}. In the following we will define the $N=2$ super-NLS equation in terms of these two conjugated sets of chiral superfields. \section{The $N=2$ super-NLS hierarchy.} \indent Let us now introduce the $N=2$ super-NLS hierarchy, extending to this case the procedure already worked out for the bosonic and $N=1$ NLS theories. \par According to the discussion developed in the previous section, it is clear that we should now define our $N=2$ hierarchy by ``doubling" the number of superfields of the $N=1$ case: we should look for two (chiral and antichiral) covariant derivatives, defined in terms of the spin ${\textstyle{1\over 2}}$ superfields ${\tilde \Psi}_0, {\overline \Psi}_0$. Moreover we should have two sets of oppositely charged (anti)chiral superfields ${\tilde\Psi}_\pm$, ${\overline\Psi}_\pm$. These two sets of superfields should be seen as chirally conjugated. \par Let us now define the two conjugated covariant derivatives: we first introduce the conjugate spin ${\textstyle{1\over 2}}$ superfields ${\tilde\Psi}_0, {\overline\Psi}_0$. There is a freedom in choosing the normalization condition for their super-Poisson brackets algebra.
Let us fix it by assuming \begin{eqnarray} \{ {\tilde\Psi}_0(X),{\tilde\Psi}_0(Y)\}&=& \{ {\overline\Psi}_0(X),{\overline\Psi}_0(Y)\} = D_Y\Delta (X,Y)\nonumber\\ \{{\tilde\Psi}_0(X),{\overline\Psi}_0(Y)\}&=& 0 \label{superu1} \end{eqnarray} with $D_Y$ given by (\ref{newder}).\par Next the notion of covariant superfield can be introduced: $V$ is said to be covariant with charges $({\tilde q},{\overline q})$ if it satisfies the relation \begin{eqnarray} \{ {\tilde\Psi}_0(X),{V(Y)}\}&=& {\tilde q}\Delta (X,Y) V(Y) \end{eqnarray} and the analogous one with ${\tilde \cdot}\mapsto {\overline\cdot}$.\par The covariant derivative ${\cal D}$, mapping covariant superfields of charges $({\tilde q},{\overline q})$ into superfields with the same charges, is in this case given by \begin{eqnarray} {\cal D} V &=& (D +{\tilde q}{\tilde \Psi}_0 +{\overline q}{\overline \Psi}_0) V \end{eqnarray} At this point we have all the ingredients to define the complete supercurrents algebra, involving ${\tilde\Psi}_0,{\tilde\Psi}_\pm,{\overline\Psi}_0,{\overline\Psi}_\pm$, which allows us to define the $N=2$ super-NLS theory. After a little inspection one realizes that our game can be played by simply postulating such an algebra to be given by two separate copies of the $N=1$ algebra (\ref{zerosusyalg},\ref{susyalg}). A fundamental point is that now, in order to recover the non-trivial equations of motion which couple chiral and antichiral superfields, the two $N=1$ supercurrents algebras should mix chiral and antichiral superfields.\par We can assume the two copies to be given by (${\tilde\Psi}_- , {\overline\Psi}_0,{\overline\Psi}_+$) and (${\overline\Psi}_-, {\tilde\Psi}_0, {\tilde\Psi}_+$), with the following charges for the ${\tilde\Psi}_\pm,{\overline\Psi}_\pm$ superfields \begin{eqnarray} {\tilde\Psi}_-&\equiv& (0,-1)\nonumber\\ {\overline\Psi}_+&\equiv& (0,1)\nonumber\\ {\overline\Psi}_-&\equiv& (-1,0)\nonumber\\ {\tilde\Psi}_+ &\equiv& (1,0) \end{eqnarray} The complete algebra is given by \begin{eqnarray} \{{\overline\Psi}_0 (X), {\overline\Psi}_+(Y) \} &=& \Delta (X,Y){\overline \Psi}_+(Y) \nonumber\\ \{{\overline\Psi}_0 (X), {\tilde\Psi}_-(Y) \} &=& -\Delta (X,Y){\tilde \Psi}_-(Y) \nonumber\\ \{{\overline\Psi}_+ (X), {\tilde\Psi}_-(Y) \} &=& (D_Y -{\overline\Psi}_0(Y) ) \Delta (X,Y) = {\cal D}_Y \Delta (X,Y)\nonumber\\ \{{\tilde\Psi}_0 (X), {\tilde\Psi}_+(Y) \} &=& \Delta (X,Y){\tilde \Psi}_+(Y) \nonumber\\ \{{\tilde\Psi}_0 (X), {\overline\Psi}_-(Y) \} &=& -\Delta (X,Y){\overline \Psi}_-(Y) \nonumber\\ \{{\tilde\Psi}_+ (X), {\overline\Psi}_-(Y) \} &=& (D_Y -{\tilde\Psi}_0 (Y) ) \Delta (X,Y)= {\cal D}_Y \Delta (X,Y) \label{susyalg2} \end{eqnarray} together with (\ref{superu1}). All other super-Poisson brackets are vanishing.\par There exists of course a super-Wakimoto representation, provided by two sets of chirally conjugated superfields: the bosonic superfields ${\hat C}, {\check C}$ of weight $0$, and the fermionic ones ${\hat B}, {\check B}, {\hat N}, {\check N} $ of weight ${\textstyle{1\over 2}}$. The $B$ and $C$ superfields generate two coupled systems. \par The superalgebra of the free Wakimoto superfields is just provided by \begin{eqnarray} \{ {\hat B}(X),{\hat C}(Y)\}&=&\Delta (X,Y)\nonumber\\ \{ {\hat C}(X),{\hat B}(Y)\}&=&\Delta (X,Y)\nonumber\\ \{ {\hat N}(X),{\hat N}(Y)\}&=&D_Y\Delta (X,Y) \end{eqnarray} and the analogous relations with ${\hat\cdot}\mapsto{\check\cdot}$.
The superfield identifications are the same as in (\ref{superwak}): \begin{eqnarray} {\overline \Psi}_+ &=& {\hat B}\nonumber\\ {\overline \Psi}_0 &=& -{\hat B}{\hat C} + {\hat N}\nonumber\\ {\tilde \Psi}_- &=& -{\textstyle{1\over 2}} {\hat B}{\hat C}^2 +{\hat C}{\hat N}- D{\hat C} \end{eqnarray} and the analogous relations involving the second set of superfields.\par Inspired by the $N=1$ results we can define at this point our dynamics as determined by the two conjugated sets of (anti)chiral hamiltonians in involution. The first two (${\tilde H}_{1,2}$ and the conjugates ${\overline H}_{1,2}$) are given by \begin{eqnarray} {\tilde H}_1 &=& \int d{ X}{\tilde{\cal H}}_1 =\int dX ({\tilde \Psi}_- {{\cal D}}{\overline \Psi}_+)\nonumber\\ {\tilde H}_2 &=& \int dX {\tilde {\cal H}}_2 =\int d{ X} ({\tilde \Psi}_- {{\cal D}}^3{\overline \Psi}_+) \end{eqnarray} and \begin{eqnarray} {\overline H}_1 &=& \int d{\overline X}{\overline {\cal H}}_1 =\int d{\overline X}( {\overline \Psi}_- {{\cal D}}{\tilde \Psi}_+)\nonumber\\ {\overline H}_2 &=& \int d{\overline X}{\overline {\cal H}}_2 =\int d{\overline X} ({\overline \Psi}_- {{\cal D}}^3{\tilde \Psi}_+) \end{eqnarray} The real hamiltonians are given by \begin{eqnarray} H_{1,2} &=& {\tilde H}_{1,2} + {\overline H}_{1,2} \end{eqnarray} They are invariant under the $N=2$ supersymmetry transformations \begin{eqnarray} \delta {\tilde\Psi}_\pm &=& \pm\varepsilon {{\cal D}}{\overline \Psi}_\pm\nonumber\\ \delta {\overline\Psi}_\pm &=& \pm{\overline \varepsilon} {{\cal D}}{\tilde \Psi}_\pm \end{eqnarray} Moreover the hamiltonian densities ${\tilde {\cal H}}_j, {\overline{\cal H}}_j$ have by construction vanishing Poisson brackets with respect to the subalgebra generators ${\tilde\Psi}_0,{\overline\Psi}_0$, namely they are in the commutant. \par The equations of motion are introduced through \begin{eqnarray} {\partial\over\partial t_j } { F} &=& \{ H_j, F\} \end{eqnarray} After using the algebraic relations (\ref{superu1},\ref{susyalg2}), and taking into account that we can consistently set \begin{eqnarray} {\tilde\Psi}_0={\overline\Psi}_0 &=& 0 \end{eqnarray} we get the flows \begin{eqnarray} {\partial\over\partial t_1 } {\tilde\Psi}_\pm &=& {\tilde\Psi}_\pm '\nonumber\\ {\partial\over\partial t_1 }{\overline\Psi}_\pm &=& {\overline\Psi}_\pm ' \end{eqnarray} and \begin{eqnarray} {\partial\over\partial t_2} {\tilde\Psi}_\pm &=& \pm {\tilde\Psi}_\pm'' \mp {\tilde\Psi}_\pm {\overline D} ({\overline\Psi}_\mp {\tilde D} {\tilde \Psi}_\pm )\nonumber\\ {\partial\over\partial t_2 }{\overline\Psi}_\pm &=& \pm {\overline\Psi}_\pm'' \mp {\overline\Psi}_\pm {\tilde D} ({\tilde\Psi}_\mp {\overline D} {\overline \Psi}_\pm ) \end{eqnarray} The second flow provides the two-component $N=2$ super-NLS equation.\par Notice that the chirality condition is respected by the equations of motion, as it should be.\par On the right hand side chiral and antichiral superfields are coupled together in the non-linear term. This ensures that the theory has the genuine feature of a non-trivial $N=2$ supersymmetry. The $N=1$ equation is recovered by assuming $\theta={\overline\theta}$, which implies ${\tilde\Psi}_\pm ={\overline\Psi}_\pm$.\par It is clear that one can straightforwardly repeat the same steps as in the $N=1$ construction. The same structures appear in this case as well.
Let us recall them briefly:\par i) existence of a compatible bihamiltonian structure relating the first two hamiltonians.\par ii) $N=2$ generalization of the modified super-NLS equation arising from the super-Wakimoto representation for the algebra (\ref{superu1},\ref{susyalg2}).\par iii) existence of the (coset) $N=2$ non-linear super-${\cal W}_\infty$ algebra, promoted to be a finite rational super-${\cal W}$ algebra. It is linearly generated by the chargeless superfields \begin{eqnarray} V_{2n} &=& {\tilde \Psi}_- {\cal D}^{2n} {\overline {\Psi}}_+\nonumber\\ V_{2n+1} &=&{\tilde \Psi}_- {\cal D}^{2n+1} {\overline {\Psi}}_+\nonumber\\ W_{2n} &=& {\overline\Psi}_- {\cal D}^{2n} {\tilde{\Psi}}_+\nonumber\\ W_{2n+1} &=&{\overline \Psi}_- {\cal D}^{2n+1} {\tilde {\Psi}}_+ \end{eqnarray} The fermionic superfields $V_{2n+1}$, $W_{2n+1}$ have half-integral spin ${\textstyle {2n+3\over 2}}$. When evaluated at ${\tilde\Psi}_0={\overline\Psi}_0=0$ they are respectively chiral and antichiral, and can be expressed as \begin{eqnarray} V_{2n+1} &=&{\tilde \Psi}_-(2\partial )^n {\overline D} {\overline {\Psi}}_+\nonumber\\ W_{2n+1} &=&{\overline \Psi}_- (2\partial )^n{\tilde D} {\tilde {\Psi}}_+ \end{eqnarray} The bosonic superfields $V_{2n}$, $W_{2n}$ of spin $n+1$ do not have a definite chirality. Notice that, as it should be, our $N=2$ super-${\cal W}$ algebra admits a ``doubled'' number of superfields with respect to the $N=1$ case.\par iv) existence of a dynamically consistent constraint, which allows setting the bosonic superfields $V_{2n}$, $W_{2n}$ equal to zero. This in turn implies a ``reduced dynamics'' involving only the chiral and antichiral fermionic superfields; such a dynamics is particularly important because it gives rise to a consistent reduction of the $N=2$ super-KP hierarchy provided by the two conjugate Lax operators of definite chirality. \par These two conjugate Lax operators are given by \begin{eqnarray} {\tilde L} &=& {{\cal D}} + {\tilde\Psi}_- {\cal D}^{-2} {{\overline\Psi}_+}^{(1)} \nonumber\\ {\overline L} &=& {{\cal D}} + {\overline\Psi}_- {\cal D}^{-2} {{\tilde\Psi}_+}^{(1)} \end{eqnarray} where \begin{eqnarray} {{\overline\Psi}_+}^{(1)}&=& {{\cal D}}{\overline\Psi}_+\nonumber\\ {{\tilde\Psi}_+}^{(1)}&=& {{\cal D}}{\tilde\Psi}_+ \end{eqnarray} ${\tilde L} $ is chiral, ${\overline L}$ antichiral.\par Once expanded, they are expressed in terms of the $V_{2n+1}$, $W_{2n+1}$ superfields respectively, which are invariant under the $N=2$ Kac-Moody superalgebra (\ref{superu1}): \begin{eqnarray} {\tilde L} &=& {\tilde{ D}} +\sum_{k=0}^\infty (-1)^kV_{ 2k+1} \partial^{-k}\nonumber\\ {\overline L} &=& {\overline{ D}} +\sum_{k=0}^\infty (-1)^kW_{ 2k+1} \partial^{-k} \label{supkpred} \end{eqnarray} (we have replaced the covariant derivative with the standard one, which is allowed when ${\tilde L}$, ${\overline L}$ act on chargeless superfields).\par The dynamics for the $V_{2n+1}$, $W_{2n+1}$ superfields derived in terms of flows of the super-KP reduced operator (\ref{supkpred}) coincides with the just-mentioned ``reduced dynamics'' of $V_{2n+1}$, $W_{2n+1}$ arising from the hamiltonian formulation.
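As an elementary consistency check of the definitions above (our own remark, obtained by setting $n=0$): the lowest fermionic generator reads \begin{eqnarray} V_1 &=& {\tilde\Psi}_-{\cal D}{\overline\Psi}_+ \ =\ {\tilde\Psi}_-{\overline D}{\overline\Psi}_+ \quad {\rm at}\ {\tilde\Psi}_0={\overline\Psi}_0=0~, \end{eqnarray} which coincides with the hamiltonian density ${\tilde{\cal H}}_1$ introduced before.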
{}~\quad\\ \vfill {\Large {\bf Conclusions}} \indent In this paper we have furnished a method to derive what we can call (in analogy to the bosonic case) multi-superfield reductions of the super-KP hierarchy, which are a further generalization of the commonly studied generalized super-KdV hierarchies.\par In the particular example considered here, we obtained some new results concerning the form of the super-Lax operator, the connection with a super-${\cal W}_\infty$ algebra, the link with the modified super-NLS equation, etc.\par According to our ``coset method'', the multi-field reductions are obtained from cosets of (in this case super) Kac-Moody algebras.\par We would like to spend some words on the coset method and why it deserves further investigation: it provides a nice algebraic interpretation for the Poisson bracket structures of the theories involved; more than that, it could provide an algebraic classification of the multi-field (super) KP reductions if the attractive hypothesis that they are all associated with cosets proves to be correct. Since our method makes use of covariant derivatives and is not based on a hamiltonian reduction (and consequently on Dirac brackets), it implies a nice free-field interpretation and a mapping to modified hierarchies, as explained in the paper. This could prove useful when discussing quantization (it is indeed tempting to repeat our procedure for, say, the $q$-deformed affine $sl(2)$ algebra).\par In order to attack the most important point, concerning the classification of the (super) KP reductions, some preliminary results will be needed: we can mention, for instance, understanding the coset method in the light of the AKS scheme, making explicit the connection between the (unreduced) KP hierarchy Poisson bracket structure and those coming from the cosets, and computing the associated $r$-matrices with methods like those developed in \cite{rmat}. Such results are needed for a formal proof of the statement that any coset gives rise to a KP-reduction. We will address all these points in forthcoming papers. {}~\\~\\ \noindent {\large{\bf Acknowledgements}} {}~\\~\\ I wish to acknowledge useful discussions with L. Feher and P. Sorba. {}~\\ {}~\\
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Interpolators} The single particle interpolating operators for $\Sigma_c$, $\bar{D}$ and $\bar{D}^*$ are: \begin{eqnarray} \Sigma_{c,\alpha}^{++}&=&\epsilon^{ijk}({u^i}^T C\gamma_5 c^j) u^k_\alpha \\ \Sigma_{c, \alpha}^{+} &=& \frac{1}{2}\epsilon^{ijk}[({u^i}^T C\gamma_5 c^j) d^k_\alpha + ({d^i}^T C\gamma_5 c^j) u^k_\alpha] \\ D^- &=& \bar{c} \gamma_5 d, \quad \bar{D}^0 = \bar{c} \gamma_5 u \\ D_k^{*-} &=& \bar{c} \gamma_k d, \quad \bar{D}_k^{*0} = \bar{c} \gamma_k u, \quad k = 1,2,3, \ \end{eqnarray} where $C$ is the charge conjugation matrix. The two-particle operators for $\Sigma_c \bar{D}$ and $\Sigma_c\bar{D}^*$ with $I(J^P) = \frac{1}{2}({\frac{1}{2}}^-)$ are \begin{eqnarray} \mathcal{O}^{\Sigma_c\bar{D}}_{\mathbf{p_1}, \mathbf{p_2}} &=&\sum_{\alpha, \mathbf{p_1}, \mathbf{p_2}} C_{\alpha, \mathbf{p_1}, \mathbf{p_2}} \big( \sqrt{\frac{2}{3}} \Sigma_{c, \alpha}^{++} (\mathbf{p_{1}})D^{-} (\mathbf{p_{2}}) \nonumber \\ && - \sqrt{\frac{1}{3}}\Sigma_{c, \alpha}^{+}(\mathbf{p_{1}}) \bar{D}^0(\mathbf{p_{2}}) \big),\\ \mathcal{O}^{\Sigma_c\bar{D}^*}_{\mathbf{p_1}, \mathbf{p_2}}&=& \sum_{\alpha, k, \mathbf{p_1}, \mathbf{p_2}} C_{\alpha, k, \mathbf{p_1}, \mathbf{p_2}} \big( \sqrt{\frac{2}{3}} \Sigma_{c, \alpha}^{++}(\mathbf{p_1}) D_k^{*-}(\mathbf{p_2}) \nonumber \\ && - \sqrt{\frac{1}{3}}\Sigma_{c, \alpha}^{+}(\mathbf{p_1}) \bar{D}_k^{*0}(\mathbf{p_2})\big). \end{eqnarray} We use three $\Sigma_c\bar{D}$ operators with $|\mathbf{p_{1,2}}| = 0, 1$ and $\sqrt{2}$ (in units of $2\pi/L$) and two $\Sigma_c\bar{D}^*$ operators with $|\mathbf{p_{1,2}}| = 0$ and $1$. The coefficients $C_{\alpha, \mathbf{p_1}, \mathbf{p_2}}$ and $C_{\alpha, k, \mathbf{p_1}, \mathbf{p_2}}$ are chosen so that these operators transform in the $G_1^-$ irrep of the cubic group. $G_1$ is a two-dimensional representation. We use only the first row, which is sufficient for the calculation. The coefficients are listed in TABLE~\ref{Table:SigmacDOps} and TABLE~\ref{Table:SigmacDstarOps} for $\Sigma_c\bar{D}$ and $\Sigma_c\bar{D}^*$, respectively. Note that these coefficients are worked out using the Dirac-Pauli representation for the Dirac $\gamma$ matrices.
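As an illustration of how the momentum combinations entering these operators can be enumerated, the following Python sketch (the function name and layout are our own, not taken from the analysis code) lists the back-to-back momentum pairs appearing in TABLE~\ref{Table:SigmacDOps}:
\begin{verbatim}
from itertools import product

def back_to_back_pairs(psq):
    # Momentum pairs (p1, p2) with p2 = -p1 and |p1|^2 = psq,
    # components in units of 2*pi/L, as used in the two-particle operators.
    pairs = []
    for p1 in product((-1, 0, 1), repeat=3):
        if sum(c * c for c in p1) == psq:
            pairs.append((p1, tuple(-c for c in p1)))
    return pairs

print(len(back_to_back_pairs(0)))  # 1 pair,   |p| = 0
print(len(back_to_back_pairs(1)))  # 6 pairs,  |p| = 1
print(len(back_to_back_pairs(2)))  # 12 pairs, |p| = sqrt(2)
\end{verbatim}
The counts reproduce the 1, 6 and 12 rows of the table; fixing the relative signs and Dirac indices in $C_{\alpha, \mathbf{p_1}, \mathbf{p_2}}$ is what the $G_1^-$ projection then accomplishes.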
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & $\alpha$ &$\mathbf{p_1}$ &$\mathbf{p_2}$ & $C_{\alpha, \mathbf{p_1}, \mathbf{p_2}}$ \\ \hline \hline $|\mathbf{p_{1,2}}| = 0$ & 1 & (0,0,0) & (0,0,0) & 1 \\ \hline \hline \multirow{6}{*}{$|\mathbf{p_{1,2}}| = 1$} &1 & (-1,0,0) & (1,0,0) & 1\\ \cline{2-5} &1 & (1,0,0) &(-1,0,0) & 1 \\ \cline{2-5} &1 & (0,-1,0) &(0,1,0) & 1 \\ \cline{2-5} &1 & (0,1,0) &(0,-1,0) & 1 \\ \cline{2-5} &1 & (0,0,-1) &(0,0,1) & 1 \\ \cline{2-5} &1 & (0,0,1) &(0,0,-1) & 1 \\ \hline \hline \multirow{12}{*}{$|\mathbf{p_{1,2}}| = \sqrt{2}$} &1 & (-1,-1,0) & (1,1,0) & 1\\ \cline{2-5} &1 & (1,1,0) &(-1,-1,0) & 1 \\ \cline{2-5} &1 & (-1,0,-1) &(1,0,1) & 1 \\ \cline{2-5} &1 & (1,0,1) &(-1,0,-1) & 1 \\ \cline{2-5} &1 & (0,-1,-1) &(0,1,1) & 1 \\ \cline{2-5} &1 & (0,1,1) &(0,-1,-1) & 1 \\ \cline{2-5} &1 & (-1,1,0) &(1,-1,0) & 1\\ \cline{2-5} &1 & (1,-1,0) &(-1,1,0) &1 \\ \cline{2-5} &1 &(-1,0,1) &(1,0,-1) &1 \\ \cline{2-5} &1 &(1,0,-1) &(-1,0,1) &1 \\ \cline{2-5} &1 &(0,1,-1) &(0,-1,1) &1 \\ \cline{2-5} &1 &(0,-1,1) &(0,1,-1) & 1 \\ \hline \end{tabular} \caption{The coefficients of the $\Sigma_c\bar{D}$ operators.} \label{Table:SigmacDOps} \end{table} \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline &$\alpha$ & $k$ &$\mathbf{p_1}$ &$\mathbf{p_2}$ & $C_{\alpha, k, \mathbf{p_1}, \mathbf{p_2}}$ \\ \hline \hline \multirow{3}{*}{$|\mathbf{p_{1,2}}| = 0$} &1 &3 &(0,0,0) & (0,0,0) & 1\\ \cline{2-6} &2 &1 &(0,0,0) & (0,0,0) & $1$\\ \cline{2-6} &2 &2 &(0,0,0) & (0,0,0) & $-i$\\ \hline \hline \multirow{6}{*}{$|\mathbf{p_{1,2}}| = 1$} &1 &3 &(0,0,-1) & (0,0,1) & 1\\ \cline{2-6} &1 &3 &(0,0,1) & (0,0,-1) & $1$\\ \cline{2-6} &2 &1 &(1,0,0) & (-1,0,0) & $1$\\ \cline{2-6} &2 &1 &(-1,0,0) & (1,0,0) & $1$\\ \cline{2-6} &2 &2 &(0,-1,0) & (0,1,0) & $-i$\\ \cline{2-6} &2 &2 &(0,1,0) & (0,-1,0) & $-i$\\ \hline \end{tabular} \caption{The coefficients of the $\Sigma_c\bar{D}^*$ operators.} \label{Table:SigmacDstarOps} \end{table} \section{Computation and analysis of the correlation functions} The distillation quark smearing method is used to compute the quark propagators. The quark smearing operator is composed of a small number ($N_{ev}$) of eigenvectors of the three-dimensional Laplacian that correspond to the $N_{ev}$ lowest eigenvalues. We compute the propagators with $N_{ev} = 100$ for the L32 ensemble and $N_{ev} = 200$ for the L48 ensemble. The single particle energies are extracted from the two-point correlation functions of the pertinent single particle operators. In FIG.~\ref{Figure:em_Sigmac}, we present the effective energies of $D$, $D^*$ and $\Sigma_c$ at the lowest five momenta for the ensemble L48. The fit of the five energies to the dispersion relation for each particle is shown in FIG.~\ref{Figure:Dispersion}. \begin{figure*}[tb] \includegraphics[width =0.33 \textwidth]{em_pion_uc_2pt_all.pdf}\includegraphics[width =0.33 \textwidth]{em_rho_uc_2pt_all.pdf}\includegraphics[width =0.33 \textwidth]{em_Sigmac_pp_all.pdf} \caption{Effective energies of $D$, $D^*$ and $\Sigma_c$ at the five lowest momenta for the ensemble L48.} \label{Figure:em_Sigmac} \end{figure*} \begin{figure*}[tb] \includegraphics[width =0.4 \textwidth]{D_Dstar_dispersion.pdf}\includegraphics[width =0.4 \textwidth]{corr_Sigmac_pp_dispersion.pdf} \caption{Fits of the energies of $D$, $D^*$ and $\Sigma_c$ to the dispersion relation for the ensemble L48.
The values of $\chi^2$ of the fits are shown in the plots.} \label{Figure:Dispersion} \end{figure*} The finite volume two-particle energies are obtained from the matrix of the correlation functions of the five operators described in the last section. The charm quark annihilation diagrams are ignored in the calculation of the correlation functions. Solving the generalized eigenvalue problem (GEVP) \begin{equation} C(t) v^n(t) = \lambda^n(t) C(t_0) v^n(t), \end{equation} the energies are determined by fitting the eigenvalues $\lambda^n(t)$ to the form \begin{equation} \lambda^n(t) = (1-A_n)e^{-E_n(t-t_0)} + A_ne^{-E_n^\prime(t-t_0)}, \end{equation} where the fit parameters are $A_n$, $E_n$ and $E_n^\prime$. This form allows for a second exponential to capture the residual contamination from excited states. We tried four different values of $t_0$: 4, 6, 8 and 10, and did not observe differences in the fitted energies. The fits of the five eigenvalues for $t_0=4$ are shown in FIG.~\ref{Figure:Fits_Eigvals} for the ensemble L48. The fitted energies are collected in TABLE~\ref{Table:energies} for both ensembles. We also present the three energies extracted from the GEVP analysis using only the $\Sigma_c\bar{D}$ operators and the two energies using only the $\Sigma_c\bar{D}^*$ operators. They agree perfectly with the values using all five operators, indicating negligible mixing between the $\Sigma_c\bar{D}$ and $\Sigma_c\bar{D}^*$ operators. \begin{figure*}[tb] \includegraphics[width =0.33 \textwidth]{eigvals_0_t0_4_2exp_L48.pdf}\includegraphics[width =0.33 \textwidth]{eigvals_1_t0_4_2exp_L48.pdf}\includegraphics[width =0.33 \textwidth]{eigvals_2_t0_4_2exp_L48.pdf} \includegraphics[width =0.33 \textwidth]{eigvals_3_t0_4_2exp_L48.pdf}\includegraphics[width =0.33 \textwidth]{eigvals_4_t0_4_2exp_L48.pdf} \caption{Fits of the eigenvalues $\lambda_n(t)$. Plotted are the data $\lambda_n(t) e^{E_n(t-t_0)}$ and the fits. The blue points are those included in the fits.} \label{Figure:Fits_Eigvals} \end{figure*} \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & all ops. & $\mathcal{O}^{\Sigma_c \bar{D}}$ & $\mathcal{O}^{\Sigma_c \bar{D}^*}$ \\ \hline \hline \multirow{5}{*}{L48} &$aE_0$ &1.7738(09) &1.7738(09) &1.8160(10) \\ \cline{2-5} &$aE_1$ &1.7845(11) &1.7845(11) &1.8326(12) \\ \cline{2-5} &$aE_2$ &1.8051(11) &1.8051(11) & -- \\ \cline{2-5} &$aE_3$ &1.8160(10) & -- & -- \\ \cline{2-5} &$aE_4$ &1.8326(12) &-- & -- \\ \hline \hline \multirow{5}{*}{L32} &$aE_0$ &1.7747(12) &1.7747(12) &1.8167(20) \\ \cline{2-5} &$aE_1$ &1.8025(19) &1.8025(20) &1.8535(16) \\ \cline{2-5} &$aE_2$ &1.8166(20) &1.8389(21) & -- \\ \cline{2-5} &$aE_3$ &1.8389(21) &-- & -- \\ \cline{2-5} &$aE_4$ &1.8535(16) &-- &-- \\ \hline \end{tabular} \caption{The finite volume two-particle energies. For each ensemble, we list the five energies extracted from the GEVP analysis using all five operators (all ops.). The three energies using only the $\Sigma_c \bar{D}$ operators and the two energies using only the $\Sigma_c\bar{D}^*$ operators are also presented for comparison.} \label{Table:energies} \end{table}
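As a schematic illustration of the GEVP analysis described above (not the actual analysis code; the array layout, fit window and priors are placeholder assumptions), the eigenvalue extraction and two-exponential fit can be sketched in Python as follows:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import curve_fit

def gevp_energies(C, t0, tmin, tmax):
    # C[t] is the N x N correlator matrix at timeslice t.
    # Solve C(t) v = lambda(t) C(t0) v; eigh returns eigenvalues in
    # ascending order, so reverse to put the ground state at n = 0.
    ts = np.arange(tmin, tmax)
    lams = np.array([eigh(C[t], C[t0], eigvals_only=True)[::-1]
                     for t in ts])

    def two_exp(t, A, E, Ep):
        # lambda^n(t) = (1 - A_n) e^{-E_n (t - t0)} + A_n e^{-E'_n (t - t0)}
        return (1.0 - A) * np.exp(-E * (t - t0)) + A * np.exp(-Ep * (t - t0))

    energies = []
    for n in range(C.shape[1]):
        popt, _ = curve_fit(two_exp, ts, lams[:, n], p0=(0.1, 1.8, 2.5))
        energies.append(popt[1])  # E_n
    return energies
\end{verbatim}
In the actual analysis the statistical errors would additionally be propagated, e.g.\ by resampling over gauge configurations, which is omitted in this sketch.
\end{document}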
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A common difficulty in using deep learning for medical tasks is acquiring high-quality annotated datasets. There are several reasons for this: (1) using patient data requires ethical clearance, anonymisation and careful curation; (2) generating ground-truth labels may require expertise from clinicians whose time is limited and expensive; (3) clinical datasets are typically highly class-imbalanced with vastly more negative than positive examples. Thus acquiring sufficiently large datasets is often expensive, time-consuming, and frequently infeasible. As such, there is great interest in developing machine learning methods to use medical data and annotations efficiently. Examples of successful previous approaches include aggressive data augmentation~\cite{Ronneberger15} and generating synthetic images for training~\cite{Ghorbani20}. Alternatively, one can use {\em self-supervised pre-training} to learn useful representations of data, reducing annotation requirements for downstream learning tasks. This method has already shown much success in other areas of machine learning such as natural image classification~\cite{Chen20a, Henaff20, He20} and natural language processing~\cite{Mikolov13,Devlin19,Radford19,Brown20a}. In this paper, we develop a self-supervised learning approach for cases where pairs of different-modality images corresponding to the same subject are available. We introduce a novel pre-training task, where a model must match different-modality scans showing the same subject by comparing them in a joint, modality-invariant embedding space. If these modalities are substantially different in appearance, the network must learn semantic data representations to solve this problem. In itself, this is an important task. Embeddings obtained from the trained networks allow us to check if two different scans show the same subject in large anonymised datasets (by verifying that their embeddings match). It also defines a notion of similarity between scans that has applications in population studies. However, the main reward of our method is the semantic {\em spatial} representations of the data learnt during training, which can be leveraged for a range of downstream tasks. In this paper we demonstrate that the embeddings can be used for unsupervised rigid multi-modal scan registration, and cross-modal segmentation with opposite-modality annotations. The layout of this paper is as follows: Section~\ref{sec:matching-scans} describes the cross-modal matching task in detail, including the network architecture, loss function, and implementation details, as well as experimental results from a large, publicly available whole body scan dataset. Section~\ref{sec:unsupervised-registration} introduces algorithms using the embeddings learnt in Section~\ref{sec:matching-scans} for fast unsupervised multi-modal scan registration, which are shown to succeed in cases where conventional registration approaches fail. In Section~\ref{sec:segmentation}, we then use these registrations to transfer segmentation maps between modalities, showing that by using the proposed cross-modal registration technique, anatomical annotations in DXAs can be used to train a segmentation network in MR scans. \subsection{Related Work} Self-supervised representation-learning is currently a highly active area of research.
The currently dominant practice is to train models to perform challenging self-supervised learning tasks on a large dataset, and then fine-tune learnt representations for specific `downstream' tasks using smaller, annotated datasets. Major successes have been reported in image classification~\cite{Asano20a,Chen20a,Henaff20,Grill20,Caron20}, video understanding~\cite{Han20b,Qian2020} and NLP~\cite{Mikolov13, Howard18, Radford19}, with self-supervised approaches often matching or exceeding the performance of fully-supervised approaches. Due to the existence of a few large, publicly available datasets (such as~\cite{Johnson19}), yet the lack of large annotated datasets suitable for most medical tasks, self-supervised learning shows great promise in the medical domain. For example, previous work has shown it can be used to improve automated diagnosis of intervertebral disc degeneration~\cite{Jamaludin17a} and common segmentation tasks~\cite{Taleb20}. In~\cite{Taleb19}, it is also shown that using multiple MR sequences in self-supervised learning improves performance in brain tumour segmentation. Data with multiple modalities is a natural candidate for self-supervised approaches, as one can use information from one modality to predict information in the other. For example, previous work has shown self-supervised methods can benefit from fusion of the audio and visual streams available in natural video data~\cite{Alwassel20,Arandjelovic17,Arandjelovic18,Owens18,Korbar18}. In this paper we build on this multi-modal approach by extending it to explicit spatial registration across the modalities. \subsection{Dataset Information, Acquisition and Preparation} For the experiments in this paper we use data from the UK Biobank~\cite{Biobank15}, a large corpus of open-access medical data taken from over 500,000 volunteer participants. A wide variety of data is available, including data related to imaging, genetics and health-related outcomes. In this study we focus on two whole body imaging modalities collected by the Biobank: (1) 1.5T, 6-minute dual-echo Dixon protocol magnetic resonance (MR) scans showing the regions from approximately the neck to the knees of the participant, with variation due to the subject's height and position in the scanner; (2) Dual energy x-ray absorptiometry (DXA) scans showing the entire body. In total, at the time of data collection, the Biobank consisted of 41,211 DXA scans and 44,830 MR scans from unique participants. Our collected dataset consists of pairs of same-subject multi-sequence MR and DXA scans, examples of which can be seen in Figure~\ref{fig:example-scan-pairs}. In total we find 20,360 such pairs. These are separated into training, validation and test sets with an 80/10/10\% split (16,213, 2,027 and 2,028 scan pairs respectively). Scan pairs are constructed using (1) the fat-only and water-only sequences of the Dixon MR scans, and (2) the tissue and bone images from the DXA scans. For the purposes of this study, we synthesize 2D coronal images from the 3D MR scans by finding the mid-spinal coronal slice at each axial scan line using the method described in~\cite{Windsor20b}. All scans are resampled to be isotropic and cropped to a consistent size for ease of batch processing ($501 \times 224$ for MR scans and $800 \times 300$ for DXA scans). These dimensions maintain an equal pixel spacing of 2.2mm in both modalities.
The scans are geometrically related in that the MRI field of view (FoV) is a cropped, translated and slightly rotated transformation of the DXA scan's FoV. Both scans are acquired with the subjects in a supine position, and there can be some arm and leg movements between the scans. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/example-scans.eps} \caption{\textbf{Guess Who?} Example scan pairs from our dataset. The top row shows bone (left) and tissue (right) DXA scans from the dataset. The bottom row shows synthesized mid-coronal fat-only (left) and water-only (right) Dixon MR slices. In this paper, semantic spatial representations of the scans are learnt by matching corresponding DXA and MR scan pairs. Can you match these pairs?\protect\footnotemark{} } \label{fig:example-scan-pairs} \end{figure} \section{Matching Scans Across Modalities} \label{sec:matching-scans} This section describes the framework used to match same-subject scans across the DXA and MRI modalities. As shown in Figure~\ref{fig:example-scan-pairs}, this is hard to perform manually with only a few scans. Differences in tissue types visible in DXA and MRI mean many salient points in one modality are not visible at all in the other. Furthermore, the corresponding scans are not aligned, with variation in subject position, pose and rotation. To tackle this problem, we use the dual encoder framework shown in Figure~\ref{fig:contrastive-networks}, tasking it to determine the best alignment between the two scans such that similarity is higher for aligned same-subject scans than for aligned different-subject scans. Since both the DXA and MRI scans are coronal views and subject rotations relative to the scanner are very small, an approximate alignment requires determining a 2D translation between the scans. The similarity is then determined by a scalar product of the scans' spatial feature maps after alignment. In practice, this amounts to 2D convolution of the MRI's spatial feature map over the DXA's spatial feature map, and the maximum value of the resulting correlation map provides a similarity score. \footnotetext{$A\rightarrow 5$, $B\rightarrow 3$, $C\rightarrow4$, $D\rightarrow 2$, $E\rightarrow 1$, $F \rightarrow 6$} The network is trained end-to-end by Noise Contrastive Estimation~\cite{Gutmann12} over a batch of $N$ randomly sampled matching pairs. If $M_{ij}$ represents the similarity between the $i^{th}$ DXA and $j^{th}$ MRI, where $i=j$ is a matching pair and $i\neq j$ is non-matching, and $\tau$ is some temperature parameter, the total loss for the $k$-th matching pair, $\ell_k$, is given by \begin{equation} \ell_k = -\left(\log\frac{\exp(M_{kk}/\tau)}{\sum_{j=1}^N\exp(M_{kj}/\tau)} +\log\frac{\exp(M_{kk}/\tau)}{\sum_{j=1}^N\exp(M_{jk}/\tau)}\right) \end{equation} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/contrastive_networks_correlator.eps}% \caption{The dual encoding configuration used for contrastive training. Two CNNs ingest scans of the respective modalities, outputting coarse spatial feature maps (A). The feature maps of each DXA-MRI pair are normalised and correlated to find the best registration (B). Using this registration, the maximum correlation is recorded as the similarity between the two scans (C). The architecture used for both spatial encoders is shown in the appendix.} \label{fig:contrastive-networks} \end{figure} \subsection{Experiments} This section evaluates the performance of the proposed configuration on the cross-modal scan-matching task. 
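To make the similarity computation and training objective above concrete, the following is a minimal PyTorch sketch of the correlation-based similarity matrix and the symmetric noise-contrastive loss (the tensor shapes and function names are illustrative assumptions, not the released implementation):
\begin{verbatim}
import torch
import torch.nn.functional as F

def nce_loss(dxa_maps, mri_maps, tau=0.01):
    # dxa_maps: (N, C, Hd, Wd), mri_maps: (N, C, Hm, Wm) spatial embeddings,
    # with the MRI map smaller than the DXA map in both spatial dimensions.
    dxa_maps = F.normalize(dxa_maps, dim=1)
    mri_maps = F.normalize(mri_maps, dim=1)
    N = dxa_maps.shape[0]
    # Treat the N MRI maps as convolution kernels slid over all N DXA maps;
    # corr[i, j] is the 2D correlation map of MRI j over DXA i.
    corr = F.conv2d(dxa_maps, mri_maps)      # (N, N, H', W')
    M = corr.flatten(2).max(dim=2).values    # peak correlation, (N, N)
    targets = torch.arange(N, device=M.device)
    # Symmetric noise contrastive estimation over rows and columns of M.
    return (F.cross_entropy(M / tau, targets)
            + F.cross_entropy(M.t() / tau, targets))
\end{verbatim}
Here the maximum over the correlation map simultaneously performs the translation search and yields the similarity $M_{ij}$; rotations, being very small, are handled only later at registration time.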
To determine the relative importance of each MRI sequence and each DXA type, we train networks varying input channels to each modality's encoder (see Figure~\ref{tbl:retrieval-performance} for the configurations used). To demonstrate the value of comparing spatial feature maps of scans instead of a single global embedding vector, we compare to a baseline network that simply pools the spatial feature maps into a scan-level descriptor, and is trained by the same contrastive method. Details of this baseline are given in the appendix. \paragraph{Implementation.} Networks are trained with a batch size of 10 using an Adam optimizer with a learning rate of $10^{-5}$ and $\mathbf{\beta}=(0.9,0.999)$. A cross-entropy temperature of $\tau=0.01$ is used (a study of the effect of varying this is given in the appendix). Spatial embeddings are 128-dimensional. Training augmentation randomly translates both scans by $\pm5$ pixels along both axes and alters brightness and contrast by $\pm$20\%. Each model takes 3 days to train on a 24GB NVIDIA Tesla P40 GPU. Networks are implemented in PyTorch v.1.7.0. \paragraph{Evaluation measures.} We evaluate the quality of the learnt embeddings on the test set by assessing the ability of the system to: (1) retrieve the matching opposite modality scan for a given query scan based on similarity; (2) verify if a given scan pair is matching or not. In the latter case, positive matches to a scan are defined as those with similarities above a threshold, $\phi$, and negative matches have similarity $\leq \phi$. Considering all possible DXA-MRI scan pairs (matching \& non-matching), we can then generate an ROC curve by varying $\phi$ from -1 to 1. For the retrieval task, we report top-1 and top-10 recall based on similarity across all test set subjects, and the mean rank of matching pairs. For the verification task, we report the ROC area-under-curve (AUC) and the equal error rate (EER) (i.e. when $TPR=FPR$). \begin{figure}[ht] \centering \begin{subfigure}{.38\textwidth} \includegraphics[width=0.9\textwidth]{figures/scan_varying_roc_curves_zoomed.eps} \caption{ROC Curve} \label{fig:roc-curve} \end{subfigure} \begin{subfigure}{.54\textwidth} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ccccccc} \toprule \multicolumn{2}{c}{\multirow{2}{*}{Input}} & \multicolumn{2}{c}{Verification} & \multicolumn{3}{c}{Retrieval} \\ {} & {} & \multirow{2}{*}{AUC} & \multirow{2}{*}{\shortstack{EER\\(\%)}} & \multicolumn{2}{c}{\% Recall} & \multirow{2}{*}{\shortstack{Mean\\Rank}}\\ \multicolumn{1}{c}{DXA} & \multicolumn{1}{c}{MRI} & {} & {} & \multicolumn{1}{c}{@1} & \multicolumn{1}{c}{@10} & {} \\ \midrule \multicolumn{2}{c}{Baseline} & 0.9952 & 2.57 & 26.3 & 78.7 & 9.246 \\ \hline B & F & 0.9992 & 0.77 & 89.4 & 99.4 & 2.106 \\ B & F,W & \bfseries 0.9993 & 0.84 & 87.7 & 99.4 & 2.079 \\ T & F,W & 0.9986 & 1.14 & 83.1 & 98.4 & 3.013 \\ B,T & F & 0.9989 & 0.98 & 85.8 & 98.7 & 2.569 \\ B,T & W & \bfseries 0.9993 & 0.70 & 90.1 & 99.4 & \bfseries 1.920 \\ B,T & F,W & 0.9992 & \bfseries 0.60 & \bfseries 90.7 & \bfseries 99.5 & 2.526 \\ \midrule \end{tabular*} \caption{Retrieval and Verification Performance} \label{tbl:retrieval-performance} \end{subfigure} \caption{ Verification and retrieval performance on the 2,028 scan test dataset with varying inputs of bone DXA (B), tissue DXA (T), fat-only MR (F), and water-only MR (W). (\ref{sub@fig:roc-curve}) shows an ROC curve for the verification task.
Table (\ref{sub@tbl:retrieval-performance}) reports performance statistics for the models, including equal error rate (EER), area under curve (AUC), recall at ranks 1 \& 10 and the mean rank of matches. } \label{fig:all-contrastive-performance} \end{figure} \paragraph{Results.} Figure~\ref{fig:all-contrastive-performance} shows the ROC curve and performance measures for varying input channels. All configurations vastly exceed the baseline's performance, indicating the benefit of spatial scan embeddings as opposed to scan-level descriptor vectors. The full model achieves a top-1 recall of over 90\% from 2,028 test cases. The tissue DXA-only model performs worst of all configurations, suggesting bone DXAs are much more informative here. Extended results and recall at $K$ curves are given in the appendix. \paragraph{Discussion.} The strong performance of the proposed method on the retrieval task by matching spatial (as opposed to global) features is significant; it suggests the encoders learn useful semantic information about specific regions of both scans. This has several possible applications. For example, one could select a query ROI in a scan, perhaps containing unusual pathology, calculate its spatial embeddings and find similar examples across a large dataset (see \cite{Simonyan11} for a more detailed discussion of this application). More conventionally, the learnt features could also be used for network initialization in downstream tasks on other smaller datasets of the same modality, potentially increasing performance and data efficiency. As a demonstration of the usefulness of the learnt features, the next section of this paper explores using them to register scans in a completely unsupervised manner. \section{Unsupervised Registration Of Multi-Modal Scans} \label{sec:unsupervised-registration} A major advantage of this contrastive training method is that dense correspondences between multi-modal scans are learnt in a completely self-supervised manner. This is non-trivial; different tissues are shown in each modality, making intensity-based approaches for same- or similar-modality registration~\cite{Viola95a, Lowe04} ineffective. Here we explore this idea further, developing three methods for estimating rigid registrations between the modalities. Each method is assessed by measuring L2-distance error when transforming anatomical keypoints from the MRIs to the DXA scans. For each proposed registration method the aim is to estimate the three transformation parameters: a 2D translation and a rotation. \paragraph{1.\ Dense Correspondences}: During training, the contrastive framework attempts to align dense spatial feature maps before comparing them. We can use this to determine the registration translation by convolving the feature maps together and measuring the point of maximum response as the displacement between the images (as in Figure~\ref{fig:contrastive-networks}, stages A, B). The rotation between the scans is found by rotating the MRI scan across a small range of angles, convolving the feature maps, and recording the angle which induces the greatest alignment. \paragraph{2.\ Salient Point Matching}: The dense correspondence method is slow, especially on a CPU, as it requires multiple convolution operations with large kernels. To speed up registration, we need only use a few salient points between the feature maps.
These can be found by matching pairs of points based on correlations and then employing Lowe's second nearest neighbour ratio test~\cite{Lowe99} to remove ambiguous correspondences, followed by RANSAC estimation of the transformation. Details of this algorithm are given in the appendix. Example correspondences found by this method are shown in Figure~\ref{fig:lowes-corr}. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{figures/lowes_correspondances.eps} \caption{Salient point correspondences between scan pairs found by Lowe's ratio test \& RANSAC. The fat-only channel of the MRI source image is shown on the left, with the target DXA bone scan shown on the right.} \label{fig:lowes-corr} \setlength{\belowcaptionskip}{-1cm} \end{figure} \paragraph{3.\ Refinement Regressor:} The previous approaches generate robust approximate registrations between the two images but are limited by the resolution of the feature maps they compare ($8\times$ downsampled from the original $2.2$mm pixel spacing image). To rectify this issue, we use a small regression network that takes the almost-aligned feature maps produced by the aforementioned methods and outputs a small refinement transformation. High-precision training data for this task can be generated `for free' by taking aligned scan pairs from the salient point matching method, slightly misaligning them with a randomly sampled rotation and translation, and then training a network to regress this random transformation. The regression network is trained on 50 aligned pairs predicted by the previous salient point matching method and manually checked for accuracy. For each pair, several copies are generated with slight randomly sampled translations and rotations at the original pixel resolution. For each transformed pair, the DXA and MRI spatial feature maps are then concatenated together, and used as input to a small CNN followed by a fully-connected network that estimates the three transformation parameters. \paragraph{Experiments.} To measure the quality of these registrations, 5 keypoints were marked in 100 test scan pairs: the femoral head in both legs (hip joints), humerus head in both arms (shoulder joints) and the S1 vertebra (base of the spine). MRI keypoints are annotated in 3D and then projected into 2D. These keypoints provide the ground truth for assessing the predicted transformations. Example annotations are shown in Figure~\ref{fig:keypoint-transfer}. We then measure the mean L2-localisation error when transferring the keypoints between modalities using rigid body transforms predicted by the proposed methods. We compare to baselines of (i) no transformation (i.e.\ the identity); and (ii) rigid mutual information maximisation\footnote{Using MATLAB's \texttt{imregister} with \texttt{MattesMutualInformation}\cite{Mattes01} as an objective.}. To measure annotation consistency and error induced by change in subject pose, we also report the keypoint error of the `best-possible' rigid transformation, i.e.\ that which minimises the mean L2 transfer error. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/registration_qualitative.eps} \caption{Example results from each registration method. (a) \& (b) show keypoints for the MRI and the DXA.
The MRI \& keypoints are registered to the DXA by: (c) no transform; (d) mutual information maximisation; (e) Dense correspondences as in pre-training; (f) Salient point matching via Lowe's ratio test; (g) Applying the refinement regressor to (f). } \label{fig:keypoint-transfer} \end{figure} \paragraph{Results.} Table~\ref{tbl:keypoint-transfer-results} shows the localisation error achieved by each method. All methods yield accurate registrations between the images. The best method is found to be salient point matching followed by the refinement network, which is also shown to be fast on both a GPU and a CPU. We attempted to calculate SIFT and MIND features in both modalities and match them as proposed in~\cite{Toews13} and~\cite{Heinrich12} respectively; however, these approaches did not work in this case (see appendix). \paragraph{Discussion.} In this setting, our methods were found to outperform other approaches for multi-modal registration (mutual information, MIND and SIFT). We believe the reason for this is that DXA scans show mostly bony structures, whereas most visual content in MRI is due to soft tissues, which cannot be differentiated by DXA. As such, most pixels have no obvious intensity relation between scans. However, accurate registration between the scans is important as it allows collation of spatial information from both modalities. This can be exploited in at least two ways: (i) for joint features; registration allows shallow fusion of spatial features from both modalities. This could be useful in, for example, body composition analysis, conventionally done by DXA but which may benefit from the superior soft tissue contrast of MRI~\cite{Borga18}. (ii) for cross-modal supervision; registration allows prediction of dense labels from one modality which can then be used as a training target for the other. For example, one could diagnose osteoporosis/fracture risk at a given vertebral level from MR using labels extracted from DXA by conventional methods. \begin{table}[h!] \centering \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccccc} \toprule \multirow{2}{*}{Method} & \multicolumn{5}{c}{Keypoint Transfer Error (cm)} & \multicolumn{2}{c}{Time(s)}\\ {} & \multicolumn{1}{c}{HJ} & \multicolumn{1}{c}{S1} & \multicolumn{1}{c}{SJ} & \multicolumn{1}{c}{Median(all)} & \multicolumn{1}{c}{Mean(all)} & {GPU} & {CPU}\\ \toprule No Trans. & 22.1$\pm$5.0 & 21.7$\pm$5.3 & 22.3$\pm$5.0 & 21.9 & 22.01$\pm$5.1 & 0 & 0 \\ Mut. Inf. & 2.23$\pm$1.3 & 2.67$\pm$1.4 & 2.75$\pm$2.2 & 2.21 & 2.52$\pm$1.7 & - & 1.0\\ \hline Dense Corr. & 1.48$\pm$0.8 & 1.52$\pm$0.8 & 2.05$\pm$1.2 & 1.52 & 1.72$\pm$1.0 & 1.5 & 5.7 \\ Sal. Pt. Mt. & 1.34$\pm$0.9 & 1.37$\pm$1.0 & 2.04$\pm$1.4 & 1.46 & 1.63$\pm$1.3 & 0.4 & 1.1 \\ Regressor & 1.24$\pm$0.8 & 1.30$\pm$0.9 & 1.44$\pm$0.9 & 1.12 & 1.32$\pm$0.9 & 0.9 & 1.5 \\ \hline Best Poss. & 0.84$\pm$0.4 & 0.84$\pm$0.5 & 0.87$\pm$0.4 & 0.84 & 0.87$\pm$0.4 & - & - \\ \midrule \end{tabular*} \caption{ Keypoint transfer error for the proposed methods. We report the mean and median error for all keypoints combined and for the hip joints (HJ), shoulder joints (SJ) and S1 individually. Runtime on a GPU \& CPU is also shown. } \label{tbl:keypoint-transfer-results} \end{table} \subsection{Cross-Modal Annotation Transfer} \label{sec:segmentation} A benefit of the demonstrated cross-modal registrations is that they allow the transfer of segmentations between significantly different modalities, meaning segmentation networks can be trained in both modalities from a single annotation set.
This is useful in cases where a tissue is clearly visible in one modality but not the other. For example, here the pelvis is clearly visible in the DXA scan but not in the MRI slice. As an example of using cross-modal annotation transfer, the spine, pelvis and pelvic cavity are segmented in DXA scans using the method from~\cite{Jamaludin18a}. These segmentations are then transferred to the MRI scans by the refinement network from Section~\ref{sec:unsupervised-registration}, where they act as pixel-wise annotations to train a 2D U-Net~\cite{Ronneberger15} segmentation network. Compared to manual segmentation of the spine performed in 50 3D MR scans and projected into 2D, this network achieves good performance, with a mean Dice score of 0.927, demonstrating the quality of the transferred annotations. Examples are shown in the appendix. \section{Conclusion} \label{sec:conclusion} This paper explores a new self-supervised task of matching different-modality, same-subject whole-body scans. Our method achieves this by jointly aligning and comparing scan spatial embeddings via noise contrastive estimation. On a test dataset of 2,028 scan pairs, our method is shown to perform exceptionally well, with over 90\% top-1 recall. We then show the learnt spatial embeddings can be used for unsupervised multi-modal registration in cases where standard approaches fail. These registrations can then be used to perform cross-modal annotation transfer, using DXA segmentations to train an MRI-specific model to segment anatomical structures. Future work will explore using the learnt spatial embeddings for other downstream tasks and extend this method to 3D scans. \paragraph{Acknowledgements.} Rhydian Windsor is supported by Cancer Research UK as part of the EPSRC CDT in Autonomous Intelligent Machines and Systems (EP/L015897/1). We are also grateful for support from a Royal Society Research Professorship and EPSRC Programme Grant Visual AI (EP/T028572/1). \bibliographystyle{splncs04}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Formal grammars are a popular class of knowledge representation that is traditionally confined to the modeling of natural and computer languages. However, several extensions of grammars have been proposed over time to model other types of data such as images \cite{fu1982syntactic,Zhu06as,Jin06ca} and events \cite{Ivanov00ro,Ryoo06ro,Pei11pv}. One prominent type of extension is stochastic And-Or grammars (AOG) \cite{Zhu06as}. A stochastic AOG simultaneously models compositions (i.e., a large pattern is the composition of several small patterns arranged according to a certain configuration) and reconfigurations (i.e., a pattern may have several alternative configurations), and in this way it can compactly represent a probability distribution over a large number of patterns. Stochastic AOGs can be used to parse data samples into their compositional structures, which help solve multiple tasks (such as classification, annotation, and segmentation of the data samples) in a unified manner. In this paper we will focus on the context-free subclass of stochastic AOGs, which serves as the skeleton in building more advanced stochastic AOGs. Several variants of stochastic AOGs and their inference algorithms have been proposed in the literature to model different types of data and solve different problems, such as image scene parsing \cite{Zhao11ip} and video event parsing \cite{Pei11pv}. Our first contribution in this paper is that we provide \emph{a unified representation framework} of stochastic AOGs that is agnostic to the type of the data being modeled; in addition, based on this framework we propose \emph{a domain-independent inference algorithm} that is tractable under a reasonable assumption. The benefits of a unified framework of stochastic AOGs include the following. First, such a framework can help us generalize and improve existing ad hoc approaches for modeling, inference and learning with stochastic AOGs. Second, it also facilitates applications of stochastic AOGs to novel data types and problems and enables research on general-purpose inference and learning algorithms for stochastic AOGs. Further, a formal definition of stochastic AOGs as abstract probabilistic models makes it easier to theoretically examine their relation with other models such as constraint-based grammar formalisms \cite{shieber1992constraint} and sum-product networks \cite{Poon11}. In fact, we will show that many of these related models can be seen as special cases of stochastic AOGs. Stochastic AOGs model compositional structures based on the relations between sub-patterns. Such probabilistic modeling of relational structures is traditionally studied in the field of statistical relational learning \cite{getoor2007introduction}. Our second contribution is that we provide \emph{probabilistic logic interpretations} of the unified representation framework of stochastic AOGs and thus show that stochastic AOGs can be seen as a novel type of statistical relational model. The logic interpretations help clarify the relation between stochastic AOGs and a few existing statistical relational models and probabilistic logics that share certain features with stochastic AOGs (e.g., tractable Markov logic \cite{domingos2012tractable} and stochastic logic programs \cite{muggleton1996stochastic}).
It may also facilitate the incorporation of ideas from statistical relational learning into the study of stochastic AOGs and at the same time contribute to research on novel (tractable) statistical relational models. \section{Stochastic And-Or Grammars} \label{sec:aog} An AOG is an extension of a constituency grammar used in natural language parsing \cite{Manning99book}. Similar to a constituency grammar, an AOG defines a set of valid hierarchical compositions of atomic entities. However, an AOG differs from a constituency grammar in that it allows atomic entities other than words and compositional relations other than string concatenation. A stochastic AOG models the uncertainty in the composition by defining a probability distribution over the set of valid compositions. Stochastic AOGs were first proposed to model images \cite{Zhu06as,Zhao11ip,Wang13hs,rothrock2013integrating}, in particular the spatial composition of objects and scenes from atomic visual words (e.g., Gabor bases). They were later extended to model events, in particular the temporal and causal composition of events from atomic actions \cite{Pei11pv} and fluents \cite{Fire13uc}. More recently, these two types of AOGs were used jointly to model objects, scenes and events from the simultaneous input of video and text \cite{tu2014joint}. In each of these previous works using stochastic AOGs, a different type of data is modeled with domain-specific and problem-specific definitions of atomic entities and compositions. Tu et al. \cite{tu2013unsupervised} provided a first attempt towards a more unified definition of stochastic AOGs that is agnostic to the type of the data being modeled. We refine and extend their work by introducing parameterized patterns and relations in the unified definition, which allows us to reduce a wide range of related models to AOGs (as will be discussed in section \ref{sec:aog:related}). Based on the unified framework of stochastic AOGs, we also propose a domain-independent inference algorithm and study its tractability (section \ref{sec:aog:inf}). Below we start with the definition of stochastic context-free AOGs, which are the most basic form of stochastic AOGs and are used as the skeleton in building more advanced stochastic AOGs. A \emph{stochastic context-free AOG} is defined as a 5-tuple $\langle \Sigma, N, S, \theta, R \rangle$: \begin{description} \item[$\Sigma$] is a set of terminal nodes representing atomic patterns that are not decomposable; \item[$N$] is a set of nonterminal nodes representing high-level patterns, which is divided into two disjoint sets: And-nodes and Or-nodes; \item[$S \in N$] is a start symbol that represents a complete pattern; \item[$\theta$] is a function that maps an instance of a terminal or nonterminal node $x$ to a parameter $\theta_x$ (the parameter can take any form such as a vector or a complex data structure; denote the maximal parameter size by $m_\theta$); \item[$R$] is a set of grammar rules, each of which takes the form of $x \rightarrow C$ representing the generation from a nonterminal node $x$ to a set $C$ of nonterminal or terminal nodes (we say that the rule is ``headed'' by node $x$ and the nodes in $C$ are the ``child nodes'' of $x$). \end{description} The set of rules $R$ is further divided into two disjoint sets: And-rules and Or-rules. \begin{itemize} \item An And-rule, parameterized by a triple $\langle r, t, f \rangle$, represents the decomposition of a pattern into a configuration of non-overlapping sub-patterns.
The And-rule specifies a production $r: A \rightarrow \{x_1, x_2, \ldots, x_n\}$ for some $n \geq 2$, where $A$ is an And-node and $x_1, x_2, \ldots, x_n$ are a set of terminal or nonterminal nodes representing the sub-patterns. A relation between the parameters of the child nodes, $t(\theta_{x_1}, \theta_{x_2}, \ldots, \theta_{x_n})$, specifies valid configurations of the sub-patterns. This so-called \emph{parameter relation} is typically factorized into the conjunction of a set of binary relations. A \emph{parameter function} $f$ is also associated with the And-rule specifying how the parameter of the And-node $A$ is related to the parameters of the child nodes: $\theta_A = f(\theta_{x_1}, \theta_{x_2}, \ldots, \theta_{x_n})$. We require that both the parameter relation and the parameter function take time polynomial in $n$ and $m_\theta$ to compute. There is exactly one And-rule that is headed by each And-node. \item An Or-rule, parameterized by an ordered pair $\langle r, p \rangle$, represents an alternative configuration of a pattern. The Or-rule specifies a production $r: O \rightarrow x$, where $O$ is an Or-node and $x$ is either a terminal or a nonterminal node representing a possible configuration. A conditional probability $p$ is associated with the Or-rule specifying how likely the configuration represented by $x$ is selected given the Or-node $O$. The only constraint in the Or-rule is that the parameters of $O$ and $x$ must be the same: $\theta_O = \theta_x$. There typically exist multiple Or-rules headed by the same Or-node, and together they can be written as $O \rightarrow x_1 | x_2 | \ldots | x_n$. \end{itemize} Note that unlike in some previous work, in the definition above we assume deterministic And-rules for simplicity. In principle, any uncertainty in an And-rule can be equivalently represented by a set of Or-rules each invoking a different copy of the And-rule. Fig.\ \ref{fig:ex1}(a) shows an example stochastic context-free AOG of line drawings. Each terminal or nonterminal node represents an image patch and its parameter is a 2D vector representing the position of the patch in the image. Each terminal node denotes a line segment of a specific orientation while each nonterminal node denotes a class of line drawing patterns. The start symbol $S$ denotes a class of line drawing images (e.g., images of animal faces). In each And-rule, the parameter relation specifies the relative positions between the sub-patterns and the parameter function specifies the relative positions between the composite pattern and the sub-patterns. \begin{figure*}[t]\centering \subfigure[]{\includegraphics[scale=.4]{ex1g}} \hspace{1ex} \subfigure[]{\includegraphics[scale=.4]{ex1p}} \caption{(a) A graphical representation of an example stochastic AOG of line drawings of animal faces. Each And-rule is represented by an And-node and all of its child nodes in the graph. The spatial relations within each And-rule are not shown for clarity. Each Or-rule is represented by an Or-node and one of its child nodes, with its probability shown on the corresponding edge. (b) A line drawing image and its compositional structure generated from the example AOG. Again, the spatial relations between nodes are not shown for clarity.
The probability of the compositional structure is partially computed at the top right.} \label{fig:ex1} \end{figure*} With a stochastic context-free AOG, one can generate a compositional structure by starting from a data sample containing only the start symbol $S$ and recursively applying the grammar rules in $R$ to convert nonterminal nodes in the data sample until the data sample contains only terminal nodes. The resulting compositional structure is a tree in which the root node is $S$, each non-leaf node is a nonterminal node, and each leaf node is a terminal node; in addition, for each appearance of And-node $A$ in the tree, its set of child nodes in the tree conforms to the And-rule headed by $A$, and for each appearance of Or-node $O$ in the tree, it has exactly one child node in the tree which conforms to one of the Or-rules headed by $O$. The probability of the compositional structure is the product of the probabilities of all the Or-rules used in the generation process. Fig.\ \ref{fig:ex1}(b) shows an image and its compositional structure generated from the example AOG in Fig.\ \ref{fig:ex1}(a). Given a data sample consisting of only atomic patterns, one can also infer its compositional structure by parsing the data sample with the stochastic context-free AOG. We will discuss the parsing algorithm later. Our framework is flexible in that it allows different types of patterns and relations within the same grammar. Consider, for example, a stochastic AOG modeling visually grounded events (e.g., videos of people using vending machines). We would have two types of terminal or nonterminal nodes that model events and objects respectively. An event node represents a class of events or sub-events, whose parameter is the start/end time of an instance event. An object node represents a class of objects or sub-objects (possibly in a specific state or posture), whose parameter contains both the spatial information and the time interval information of an instance object. We specify temporal relations between event nodes to model the composition of an event from sub-events; we specify spatial relations between object nodes to model the composition of an object from its component sub-objects as well as the composition of an atomic event from its participant objects; we also specify temporal relations between related object nodes to enforce the alignment of their time intervals. Note that different nonterminal nodes in an AOG may share child nodes. For example, in Fig.\ref{fig:ex1} each terminal node representing a line segment may actually be shared by multiple parent nonterminal nodes representing different line drawing patterns. Furthermore, there could be recursive rules in an AOG, which means the direct or indirect production of a grammar rule may contain its left-hand side nonterminal. Recursive rules are useful in modeling languages and repetitive patterns. In some previous work, stochastic AOGs more expressive than stochastic context-free AOGs are employed. A typical augmentation over context-free AOGs is that, while in a context-free AOG a parameter relation can only be specified within an And-rule, in more advanced AOGs parameter relations can be specified between any two nodes in the grammar. This can be very useful in certain scenarios. For example, in an image AOG of indoor scenes, relations can be added between all pairs of 2D faces to discourage overlap \cite{Zhao11ip}. However, such relations make inference much more difficult.
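Before turning to such extensions, the top-down generation process described above can be made concrete with a minimal Python sketch (the grammar encoding and names are our own assumptions; parameter relations and parameter functions are omitted for brevity):
\begin{verbatim}
import random

def sample(node, and_rules, or_rules):
    # Recursively expand `node`; returns (parse_tree, probability).
    # and_rules: {And-node: [child, ...]}; or_rules: {Or-node: [(child, p), ...]}.
    # The probability of a parse is the product of the Or-rule probabilities used.
    if node in or_rules:
        rules = or_rules[node]
        idx = random.choices(range(len(rules)),
                             weights=[p for _, p in rules])[0]
        child, p_sel = rules[idx]
        subtree, p_sub = sample(child, and_rules, or_rules)
        return (node, [subtree]), p_sel * p_sub
    if node in and_rules:
        subtrees, p = [], 1.0
        for child in and_rules[node]:
            subtree, p_sub = sample(child, and_rules, or_rules)
            subtrees.append(subtree)
            p *= p_sub
        return (node, subtrees), p
    return (node, []), 1.0  # terminal node: an atomic pattern

# Toy grammar: S -> A (0.7) | a3 (0.3); A -> a1 a2.
tree, prob = sample('S', {'A': ['a1', 'a2']},
                    {'S': [('A', 0.7), ('a3', 0.3)]})
\end{verbatim}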
Another constraint in context-free AOGs that is sometimes removed in more advanced AOGs is the non-overlapping requirement between sub-patterns in an And-rule. For example, in an image AOG it may be more convenient to decompose a 3D cube into 2D faces that share edges \cite{Zhao11ip}. We will leave the formal definition and analysis of stochastic AOGs beyond context-freeness to future work. \subsection{Related Models and Special Cases}\label{sec:aog:related} Stochastic context-free AOGs subsume many existing models as special cases. Because of space limitations, here we informally describe these related models and their reduction to AOGs and leave the formal definitions and proofs to \ref{sm:sec:rmsc}. Stochastic context-free grammars (SCFG) are clearly a special case of stochastic context-free AOGs. Any SCFG can be converted into an And-Or normal form that matches the structure of a stochastic AOG \cite{Tu08}. In a stochastic AOG representing an SCFG, each node represents a string and the parameter of a node is the start/end positions of the string in the complete sentence; the parameter relation and parameter function in an And-rule specify string concatenation, i.e., the substrings must be adjacent and the concatenation of all the substrings forms the composite string represented by the parent And-node. There have been a variety of grammar formalisms developed in the natural language processing community that go beyond the concatenation relation of strings. For example, in some formalisms the substrings are interwoven to form the composite string \cite{pollard1984generalized,johnson1985parsing}. More generally, in a grammar rule a linear regular string function can be used to combine lists of substrings into a list of composite strings, as in a linear context-free rewriting system (LCFRS) \cite{weir1988characterizing}. All these grammar formalisms can be represented by context-free AOGs with each node representing a list of strings, the node parameter being a list of start/end positions, and in each And-rule the parameter relation and parameter function defining a linear regular string function. Since LCFRSs are known to generate the larger class of mildly context-sensitive languages, context-free AOGs, when instantiated to model languages, can be at least as expressive as mildly context-sensitive grammars. Constraint-based grammar formalisms \cite{shieber1992constraint} are another class of natural language grammars, which associate so-called feature structures to nonterminals and use them to specify constraints in the grammar rules. Such constraints can help model natural language phenomena such as English subject-verb agreement and underlie grammatical theories such as head-driven phrase structure grammars \cite{Pollard1988information}. It is straightforward to show that constraint-based grammar formalisms are also special cases of context-free AOGs (with a slight generalization to allow unary And-rules), by establishing equivalence between feature structures and node parameters and between constraints and parameter relations/functions. In computer vision and pattern recognition, stochastic AOGs have been applied to a variety of tasks as discussed in the previous section.
In addition, several other popular models, such as the deformable part model \cite{felzenszwalb2008discriminatively} and the flexible mixture-of-parts model \cite{yang2011articulated}, can essentially be seen as special cases of stochastic context-free AOGs in which the node parameters encode spatial information of image patches and the parameter relations/functions encode spatial relations between the patches. Sum-product networks (SPNs) \cite{Poon11} are a new type of deep probabilistic model that extends the ideas of arithmetic circuits \cite{darwiche2003differential} and AND/OR search spaces \cite{dechter2007and} and can compactly represent many probability distributions that traditional graphical models cannot tractably handle. It can be shown that any decomposable SPN has an equivalent stochastic context-free AOG: Or-nodes and And-nodes of the AOG can be used to represent sum nodes and product nodes in the SPN respectively, all the node parameters are set to null, parameter relations always return true, and parameter functions always return null. Because of this reduction, all the models that can reduce to decomposable SPNs can also be seen as special cases of stochastic context-free AOGs, such as thin junction trees \cite{bach2001thin}, mixtures of trees \cite{meila2001learning} and latent tree models \cite{choi2011learning}. \subsection{Inference} \label{sec:aog:inf} The main inference problem associated with stochastic AOGs is parsing, i.e., given a data sample consisting of only terminal nodes, infer its most likely compositional structure (parse). A related inference problem is to compute the marginal probability of a data sample. It can be shown that both problems are NP-hard (see \ref{sm:sec:np} for the proofs). Nevertheless, here we propose an exact inference algorithm for stochastic context-free AOGs that is tractable under a reasonable assumption on the number of valid compositions in a data sample. Our algorithm is based on bottom-up dynamic programming and can be seen as a generalization of several previous exact inference algorithms designed for special cases of stochastic AOGs (such as the CYK algorithm for text parsing). Algorithm \ref{alg:inf} shows the inference algorithm that returns the probability of the most likely parse. After the algorithm terminates, the most likely parse can be constructed by recursively backtracking the selected Or-rules from the start symbol to the terminals. To compute the marginal probability of a data sample, we simply replace the max operation with a sum in line \ref{alg:max2} of Algorithm \ref{alg:inf}. In Algorithm \ref{alg:inf} we assume the input AOG is in a generalized version of Chomsky normal form, i.e., (1) each And-node has exactly two child nodes, which must be Or-nodes, (2) the child nodes of Or-nodes must not be Or-nodes, and (3) the start symbol $S$ is an Or-node. By extending previous studies \cite{lange2009cnf}, it can be shown that any context-free AOG can be converted into this form and that both the time complexity of the conversion and the size of the new AOG are polynomial in the size of the original AOG. We give more details in \ref{sm:sec:cnf}.
\renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithm}[t] \begin{algorithmic}[1]\small \REQUIRE a data sample $X$ consisting of a set of non-duplicate instances of terminal nodes, a stochastic context-free AOG $G$ in Chomsky normal form \ENSURE the probability $p^*$ of the most likely parse of $X$ \STATE Create an empty map $M$ \quad \COMMENT{$M[i,O,\theta,T]$ stores the probability of a valid composition of size $i$ with root Or-node $O$, parameter $\theta$, and set $T$ of terminal instances.} \FORALL{$x \in X$} \label{alg:s1:s} \STATE $a \leftarrow$ the terminal node that $x$ is an instance of \STATE $\theta \leftarrow$ the parameter of $x$ \FORALL{Or-rule $\langle O \rightarrow a,\ p \rangle$ in $G$} \STATE $M[1,O,\theta,\{x\}] \leftarrow p$ \ENDFOR \ENDFOR \label{alg:s1:e} \FOR{$i=2$ \TO $|X|$} \label{alg:si:s} \FOR{$j=1$ \TO $i-1$} \FORALL{$\langle O_1, \theta_1, p_1 \rangle : M[j, O_1, \theta_1, T_1] = p_1$} \FORALL{$\langle O_2, \theta_2, p_2 \rangle : M[i-j, O_2, \theta_2, T_2] = p_2$} \FORALL{And-rule $\langle A \rightarrow O_1 O_2,\ t,\ f \rangle$ in $G$} \IF{$t(\theta_1, \theta_2) = True$ and $T_1 \bigcap T_2 = \emptyset$} \STATE $\phi \leftarrow f(\theta_1, \theta_2)$ \STATE $T \leftarrow T_1 \bigcup T_2$ \FORALL{Or-rule $\langle O \rightarrow A,\ p_O \rangle$ in $G$} \STATE $p \leftarrow p_O p_1 p_2$ \IF{$M[i,O,\phi,T]$ is null} \STATE $M[i,O,\phi,T] \leftarrow p$ \ELSE \STATE $M\![i,O,\phi,T]\!\!\leftarrow\!\max\{p,M\![i,O,\phi,T]\}$\label{alg:max2} \ENDIF \ENDFOR \ENDIF \ENDFOR \ENDFOR \ENDFOR \ENDFOR \ENDFOR \label{alg:si:e} \RETURN $\max_\theta M[|X|, S, \theta, X]$ \, \COMMENT{$S$ is the start symbol} \label{alg:ss} \end{algorithmic} \caption{Parsing with a stochastic context-free AOG}\label{alg:inf} \end{algorithm} The basic idea of Algorithm \ref{alg:inf} is to discover valid compositions of terminal instances of increasing sizes, where the size of a composition is defined as the number of terminal instances it contains. Size 1 compositions are simply the terminal instances (lines \ref{alg:s1:s}--\ref{alg:s1:e}). To discover compositions of size $i>1$, combinations of any two compositions of sizes $j$ and $i-j\ (j<i)$ are considered (lines \ref{alg:si:s}--\ref{alg:si:e}). A complete parse of the data sample is a composition of size $|X|$ with its root being the start symbol $S$ (line \ref{alg:ss}). The time complexity of Algorithm \ref{alg:inf} is $O(|X|^2 c^2 |G|(|X|+|G|))$ where $c = \max_i |C_i|$ and $C_i$ is the set of valid compositions of size $i$ in the data sample $X$. In the worst case, when all possible compositions of terminal instances from the data sample are valid, we have $c = \binom{|X|}{\left\lfloor|X|/2\right\rfloor}$, which is exponential in $|X|$. To make the algorithm tractable, we restrict the value of $c$ with the following assumption on the input data sample. \theoremstyle{plain} \newtheorem*{csa}{Composition Sparsity Assumption} \begin{csa} For any data sample $X$ and any positive integer $i \leq |X|$, the number of valid compositions of size $i$ in $X$ is polynomial in $|X|$. \end{csa} This assumption is reasonable in many scenarios. For text data, for a sentence of length $m$, a valid composition is a substring of the sentence and the number of substrings of size $i$ is $m-i+1$.
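For concreteness, Algorithm \ref{alg:inf} can be rendered in a few lines of Python. The sketch below is ours; the rule-table encoding and all identifiers are invented for illustration, and replacing the \texttt{max} at the marked line by a sum yields the marginal probability, mirroring line \ref{alg:max2}:

\begin{verbatim}
from itertools import product

def parse(X, term_rules, and_rules, or_rules, start='S'):
    """Probability of the most likely parse of X (Algorithm 1).

    X          {instance_id: (terminal_name, theta)}; theta hashable
    term_rules {terminal_name: [(O, p), ...]}  Or-rules w/ terminal body
    and_rules  {A: (O1, O2, t, f)}             binary And-rules
    or_rules   {A: [(O, p_O), ...]}            Or-rules w/ And-node body
    """
    M = {}  # keyed by (size, Or-node, parameter, frozenset of instances)
    for x, (a, theta) in X.items():                 # size-1 compositions
        for O, p in term_rules.get(a, []):
            M[(1, O, theta, frozenset([x]))] = p
    for i in range(2, len(X) + 1):                  # compositions of size i
        for j in range(1, i):
            left = [k for k in M if k[0] == j]
            right = [k for k in M if k[0] == i - j]
            for (_, O1, th1, T1), (_, O2, th2, T2) in product(left, right):
                for A, (P1, P2, t, f) in and_rules.items():
                    if (P1, P2) != (O1, O2):
                        continue
                    if not t(th1, th2) or (T1 & T2):   # relation, disjointness
                        continue
                    phi, T = f(th1, th2), T1 | T2
                    p12 = M[(j, O1, th1, T1)] * M[(i - j, O2, th2, T2)]
                    for O, pO in or_rules.get(A, []):
                        key = (i, O, phi, T)
                        M[key] = max(pO * p12, M.get(key, 0.0))  # sum -> marginal
    allX = frozenset(X)
    return max((v for (n, O, th, T), v in M.items()
                if n == len(X) and O == start and T == allX), default=0.0)
\end{verbatim}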
Returning to the composition sparsity assumption, it likewise holds for typical image data: if we restrict the compositions to be rectangular image patches (as in the hierarchical space tiling model \cite{Wang13hs}), then for an image of size $m = n \times n$ it is easy to show that the number of valid compositions of any specific size is no more than $n^3$. \section{Logic Perspective of Stochastic AOGs} \label{sec:logic} In a stochastic AOG, And-rules model the relations between terminal and nonterminal instances and Or-rules model the uncertainty in the compositional structure. By combining these two types of rules, stochastic AOGs can be seen as probabilistic models of relational structures and are hence related to the field of statistical relational learning \cite{getoor2007introduction}. In this section, we make this connection explicit by providing probabilistic logic interpretations of stochastic AOGs. By establishing this connection, we hope to facilitate the exchange of ideas and results between the two previously separate research areas. \subsection{Interpretation as Probabilistic Logic} \label{sec:logic:fol} We first discuss an interpretation of stochastic context-free AOGs as a subset of first-order probabilistic logic with a possible-world semantics. The intuition is that we interpret terminal and nonterminal nodes of an AOG as unary relations, use binary relations to connect the instances of terminal and nonterminal nodes to form the parse tree, and use material implication to represent grammar rules. We first describe the syntax of our logic interpretation of stochastic context-free AOGs. There are two types of formulas in the logic: And-rules and Or-rules. Each And-rule takes the following form (for some $n \geq 2$). \begin{multline*} \forall x \, \exists y_1,y_2,\ldots,y_n, A(x) \rightarrow \bigwedge_{i=1}^n \left( B_i (y_i) \land R_i (x,y_i) \right) \\ \land R_\theta (\theta(x),\theta(y_1),\theta(y_2),\ldots,\theta(y_n)) \end{multline*} The unary relation $A$ corresponds to the left-hand side And-node of an And-rule in the AOG; each unary relation $B_i$ corresponds to a child node of the And-rule. We require that for each unary relation $A$, there is at most one And-rule with $A(x)$ as the left-hand side. The binary relation $R_i$ is typically the \texttt{HasPart} relation between an object and one of its parts, but $R_i$ could also denote any other binary relation such as the \texttt{Agent} relation between an action and its initiator, or the \texttt{HasColor} relation between an object and its color. Note that these binary relations make explicit the nature of the composition represented by each And-rule of the AOG. $\theta$ is a function that maps an object to its parameter. $R_\theta$ is a relation that combines the parameter relation and parameter function in the And-rule of the AOG and is typically factorized to the conjunction of a set of binary relations. Each Or-rule takes the following form. \[ \forall x, A(x) \rightarrow B(x) \; : p \] The unary relation $A$ corresponds to the left-hand side Or-node and $B$ to the child node of an Or-rule in the AOG; $p$ is the conditional probability of $A(x) \rightarrow B(x)$ being true when the grounded left-hand side $A(x)$ is true. We require that for each true grounding of $A(x)$, among all the grounded Or-rules with $A(x)$ as the left-hand side, exactly one is true. This requirement can be represented by two additional sets of constraint rules.
First, Or-rules with the same left-hand side are mutually exclusive, i.e., for any two Or-rules $\forall x, A(x) \rightarrow B_i(x)$ and $\forall x, A(x) \rightarrow B_j(x)$, we have $\forall x, A(x) \rightarrow B_i(x) \uparrow B_j(x)$ where $\uparrow$ is the Sheffer stroke. Second, given a true grounding of $A(x)$, the Or-rules with $A(x)$ as the left-hand side cannot be all false, i.e., $\forall x, A(x) \rightarrow \bigvee_i B_i(x)$ where $i$ ranges over all such Or-rules. Further, to simplify inference and avoid potential inconsistency in the logic, we require that the right-hand side unary relation $B$ of an Or-rule cannot appear in the left-hand side of any Or-rule (i.e., the second requirement in the generalized Chomsky normal form of AOG described earlier). We can divide the set of unary relations into two categories: those that appear in the left-hand side of rules (corresponding to the nonterminal nodes of the AOG) and those that do not (corresponding to the terminal nodes). The first category is further divided into two sub-categories depending on whether the unary relation appears in the left-hand side of And-rules or Or-rules (corresponding to the And-nodes and Or-nodes of the AOG respectively). We require these two sub-categories to be disjoint. There is also a unique unary relation $S$ that does not appear in the right-hand side of any rule, which corresponds to the start symbol of the AOG. Now we describe the semantics of the logic. The interpretation of all the logical and non-logical symbols follows that of first-order logic. There are two types of objects in the universe of the logic: normal objects and parameters. There is a bijection between normal objects and parameters, and function $\theta$ maps a normal object to its corresponding parameter. A possible world is represented by a pair $\langle X,L \rangle$ where $X$ is a set of objects and $L$ is a set of literals that are true. We require that there exists exactly one normal object $s \in X$ such that $S(s) \in L$. In order for all the deterministic formulas (i.e., all the And-rules and the two sets of constraint rules of all the Or-rules) to be satisfied, the possible world must contain a tree structure in which: \begin{enumerate} \item each node denotes an object in $X$ with the root node being $s$; \item each edge denotes a binary relation defined in some And-rule; \item for each leaf node $x$, there is exactly one terminal unary relation $T$ such that $T(x) \in L$; \item for each non-leaf node $x$, there is exactly one And-node unary relation $A$ such that $A(x) \in L$, and for the child nodes $\{y_1,y_2,\ldots,y_n\}$ of $x$ in the tree, $\{B_i (y_i)\}_{i=1}^n \cup \{R_i (x,y_i)\}_{i=1}^n \cup \{R_\theta (\theta(x),\theta(y_1),\theta(y_2),\ldots,\theta(y_n))\} \subseteq L$ according to the And-rule associated with relation $A$; \item for each node $x$, if for some Or-node unary relation $A$ we have $A(x) \in L$, then among all the Or-rules with $A$ as the left-hand side, there is exactly one Or-rule such that $B(x) \in L$ where $B$ is the right-hand side unary relation of the Or-rule, and for the rest of the Or-rules we have $\lnot B(x) \in L$. \end{enumerate} We enforce the following additional requirements to ensure that the possible world contains no more and no less than the tree structure: \begin{enumerate} \item No two nodes in the tree denote the same object. \item $X$ and $L$ contain only the objects and relations specified above. 
\end{enumerate} The probability of a possible world $\langle X,L \rangle$ is defined as follows. Denote by $R^{Or}$ the set of Or-rules. For each Or-rule $r: \forall x, A(x) \rightarrow B(x)$, denote by $p_r$ the conditional probability associated with $r$ and define $g_r := \{x\in X | A(x)\in L\ \land\ B(x)\in L\}$. Then we have: \[ P(\langle X,L \rangle) = \prod_{r \in R^{Or}} {p_r}^{|g_r|} \] In this logic interpretation, parsing corresponds to the inference problem of identifying the most likely possible world in which the terminal relations and parameters of the leaf nodes of the tree structure match the atomic patterns in the input data sample. Computing the marginal probability of a data sample corresponds to computing the probability summation of the possible worlds that match the data sample. Our logic interpretation of stochastic context-free AOGs resembles tractable Markov logic (TML) \cite{domingos2012tractable,webb2013tractable} in many aspects, even though the two have very different motivations. Such similarity implies a deep connection between stochastic AOGs and TML and points to a potential research direction of investigating novel tractable statistical relational models by borrowing ideas from the stochastic grammar literature. There are a few minor differences between stochastic AOGs and TML, e.g., TML does not distinguish between And-nodes and Or-nodes, does not allow recursive rules, enforces that the right-hand side unary relation in each Or-rule is a sub-type of the left-hand side unary relation, and disallows a unary relation to appear in the right-hand side of more than one Or-rule. \subsection{Interpretation as a Stochastic Logic Program} Stochastic logic programs (SLP) \cite{muggleton1996stochastic} are a type of statistical relational models that, like stochastic context-free AOGs, are a generalization of stochastic context-free grammars. They are essentially equivalent to two other representations, independent choice logic \cite{poole1993probabilistic} and PRISM \cite{sato2001parameter}. Here we show how a stochastic context-free AOG can be represented by a pure normalized SLP \cite{cussens2001parameter}. Since several inference and learning algorithms have been developed for SLPs and PRISM, our reduction enables the application of these algorithms to stochastic AOGs. In our SLP program, we have one SLP clause for each And-rule and each Or-rule in the AOG. The overall structure is similar to the probabilistic logic interpretation discussed in section \ref{sec:logic:fol}. For each And-rule, the corresponding SLP clause takes the following form: \begin{align*} 1.0: & \; a(X,P) \textrm{ :- } b_1(X_1,P_1), b_2(X_2,P_2), \cdots, b_n(X_n,P_n), \\ & \hspace{2.2em} append([X_1,\ldots,X_n],X), r_1(X,X_1), r_2(X,X_2), \\ & \hspace{2.2em} \cdots, r_n(X,X_n), r_\theta(P,P_1,\ldots,P_n). \end{align*} The head $a(X,P)$ represents the left-hand side And-node of the And-rule, where $X$ represents the set of terminal instances generated from the And-node and $P$ is the parameters of the And-node. In the body of the clause, $b_i$ represents the $i$-th child node of the And-rule, $r_i$ represents the relation between the And-node and its $i$-th child node, $append(\ldots)$ states that the terminal instance set $X$ of the And-node is the union of the instance sets from all the child nodes, and $r_\theta$ represents a relation that combines the parameter relation and parameter function of the And-rule. 
For relations $r_i$ and $r_\theta$, we need to have additional clauses to define them according to the type of data being modeled. For each Or-rule in the AOG, if the right-hand side is a nonterminal, then we have: \[ p: \; a(X,P) \textrm{ :- } b(X,P). \] where $p$ is the conditional probability associated with the Or-rule, $a$ and $b$ represent the left-hand and right-hand sides of the Or-rule respectively, whose arguments $X$ and $P$ have the same meaning as explained above. If the right-hand side of the Or-rule is a terminal, then we have: \[ p: \; a([t],[\ldots]). \] where $t$ is the right-hand side terminal node and the second argument represents the parameters of the terminal node. Finally, the goal of the program is \[ \textrm{:-} \ s(X,P). \] which represents the start symbol of the AOG, whose arguments have the same meaning as explained above. \section{Conclusion} Stochastic And-Or grammars extend traditional stochastic grammars of language to model other types of data such as images and events. We have provided a unified representation framework of stochastic AOGs that can be instantiated for different data types. We have shown that many existing grammar formalisms and probabilistic models in natural language processing, computer vision, and machine learning can all be seen as special cases of stochastic context-free AOGs. We have also proposed an inference algorithm for parsing data samples using stochastic context-free AOGs and shown that the algorithm is tractable under the composition sparsity assumption. In the second part of the paper, we have provided interpretations of stochastic context-free AOGs as a subset of first-order probabilistic logic and stochastic logic programs. Our interpretations connect stochastic AOGs to the field of statistical relational learning and clarify their relation with a few existing statistical relational models.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Although gravity is the weakest force in nature and naive dimensional arguments suggest that its effect on high energy collisions is insignificant below energies of the Planck scale $M_P=G_N^{-1/2}=1.22\times 10^{19}~GeV$, recent advances in M-theory~\cite{mtheory} have motivated the consideration of Kaluza-Klein theories which allow gravitational interactions to become strong at relatively modest energies compared to the Planck scale, perhaps as low as $1~TeV$~\cite{add1,add2}. M-theory is a special case of Kaluza-Klein theories where there is a total of 11 dimensions. Four of these are, of course, the usual space-time dimensions, while the other 7 are compact. The possibility which emerges from~\cite{add1,add2} is that while some of the compact dimensions have lengths near the Planck scale, ${n}$ of these dimensions may be compactified with a distance scale $R$ which is much larger. This leads to an effective Planck mass $m_D$, perhaps of the order of $1~TeV$, which is related to the size $R$ of the new dimensions according to~\cite{add1}: \begin{eqnarray} 8\pi R^{n} m_D^{2+{n}}\sim M_P^2 ~. \label{Rsize} \end{eqnarray} \noindent In this scenario, at distances $d<R$ the Newtonian inverse square law will fail~\cite{add1}. If ${n}=1$ and $m_D=1~TeV$, then $R$ is of the order of $10^{8}~km$, large on the scale of the solar system, which is clearly ruled out by astronomical observations. However, if ${n}\geq 2$ then $R< 1~mm$; there are no experimental constraints on the behavior of gravitation at such scales~\cite{cavin}, so these models may not be inconsistent with experimental results. Astonishingly enough, if $m_D\sim 1~TeV$, then gravitons may be readily produced in accelerator experiments. This is because the extra dimensions give an increased phase space for graviton radiation. Another way of looking at this situation is to interpret gravitons which move parallel to the 4 dimensions of space-time as the usual gravitons giving rise to Newtonian gravity, while the gravitons with momentum components perpendicular to the brane are effectively a continuum of massive objects. The density of graviton states is given by~\cite{add1,add2,wells,taohan}: \begin{eqnarray} D(m^2)={dN\over d m^2}={1\over 2} S_{{n}-1} {\bar M_P^2 m^{{n}-2}\over m_D^{{n}+2}} ~, \end{eqnarray} \noindent where $m$ is the mass of the graviton, $\bar M_P=M_P/\sqrt{8\pi}$ and $S_k=2\pi^{(k+1)/2}/\Gamma[(k+1)/2]$. The probability of graviton emission may thus become large when the sum over the huge number of graviton modes is considered. Of course this distribution cannot increase in this way forever. At energies $> m_D$ the effects of the fundamental theory should become manifest, and so we will suppose that the distribution is cut off at $\sim m_D$. Gravitons with polarizations that lie entirely within the physical dimensions are effective spin 2 objects. Gravitons with polarizations partially or completely perpendicular to the physical brane are vector and scalar objects. Processes that are sensitive to the scalar states are of particular interest because the scalar couplings are proportional to mass and so are often weakly coupled to processes exclusively involving particles of low mass. The prospect that a realistic compactification of M-theory leads to processes that can be readily observed has led to considerable phenomenological activity~\cite{wells}-\cite{gammagamma_ZZ}. Broadly speaking, there are two kinds of processes where the effects due to this form of gravitation may be detected.
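Before turning to these processes, it is instructive to check the compactification radii implied by eq.~(\ref{Rsize}). The following short sketch (ours; an order-of-magnitude estimate that ignores any $O(1)$ factors left open by the $\sim$ in eq.~(\ref{Rsize})) solves for $R$ and converts it to meters:

\begin{verbatim}
import math

# Solve  8*pi * R**n * mD**(n+2) ~ MP**2  for R, then convert to meters.
MP = 1.22e19        # Planck mass [GeV]
hbarc = 1.9733e-16  # hbar*c [GeV*m], converts GeV^-1 to meters

def R_meters(n, mD=1000.0):
    R_invGeV = (MP**2 / (8 * math.pi * mD**(n + 2)))**(1.0 / n)
    return R_invGeV * hbarc

print(R_meters(1))  # ~1e12 m: solar-system scale, clearly excluded
print(R_meters(2))  # ~5e-4 m: below current tests of the inverse
                    # square law, hence not excluded
\end{verbatim}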
The first kind of process is one in which a real graviton is produced and leaves the detector, resulting in a signature involving missing mass and energy; see, e.g., \cite{real,ABS1}. One advantage of this class of reactions is that if a signal is seen, the missing mass spectrum would give strong evidence that these gravity theories are involved and indicate the value of ${n}$. The disadvantage is that the rates tend to be suppressed at large ${n}$ due to the lower density of states; hence the limits that may be set on $m_D$ tend to be less restrictive at larger ${n}$. Secondly, there are processes mediated by virtual gravitons, see e.g., \cite{virtual,4gammapaper,gammagamma_ZZ}. When a virtual graviton is exchanged, each of the graviton states adds to the amplitude coherently; thus, in the sum over gravitons, the density of states cancels the $1/M_P^2$ from the gravitational coupling. The disadvantage of these processes is that if a signal is seen, it is unlikely to be easy to prove that gravitational interaction is responsible as opposed to some other new physics. Of course if a limit is being set by the absence of a signal, this is not a problem. The advantage of these processes is that the whole tower of graviton states acts coherently, so the results are largely independent of ${n}$. Limits can thus be set on all values of ${n}$ simultaneously. In this paper we will consider processes of the latter type. In any virtual process, there will in general exist some Standard Model (SM) background. Clearly, it is beneficial to choose processes where the SM background is so small that it does not limit the bound that can be placed on $m_D$. The class of processes which we consider here is of the form $V_1V_2\to V_3 V_4$, where the $V_i$ are SM gauge bosons which may be distinct or identical. If the tree level coupling between these four specific bosons does not follow from the gauge theory, then the scattering proceeds only at fourth order in the gauge coupling and is therefore highly suppressed, as is the case, e.g., for $\gamma\gamma\to \gamma\gamma$, $ZZ\to ZZ$ or $gg \to \gamma \gamma$, $gg \to ZZ$. For each of the virtual processes of the form $V_1V_2\to V_3V_4$ which involve the exchange of gravitons, we need to sum the propagator over all of the possible graviton states. The amplitudes considered here can be factored into one of the following forms: \begin{itemize} \item[(a)] ${\cal M}=\left ( i/( s-m^2) \right )\kappa^2 \hat {\cal M}_s~,$ \item[(b)] ${\cal M}=\left ( i/ (t-m^2) \right )\kappa^2 \hat {\cal M}_t~,$ \item[(c)] ${\cal M}=\left ( i/ (u-m^2) \right )\kappa^2 \hat {\cal M}_u~,$ \end{itemize} \noindent where $m$ is the mass of the graviton and all the $m$ dependence is in the propagator factor. Also, $\kappa=\sqrt{16 \pi G_N}$ \cite{taohan}, where $G_N$ is the Newtonian gravitational constant. In case (a), for example, the total amplitude including all graviton exchanges is thus: \begin{eqnarray} {\cal M}^{tot}_s=\hat{\cal M}_s \sum_\nu {i\over s-m_\nu^2}~, \end{eqnarray} \noindent where $\nu$ indexes the graviton masses $m_\nu$. We write the sum: \begin{eqnarray} \sum_\nu {i\over s-m_\nu^2}=D(s) ~, \end{eqnarray} \noindent where the value of $D(s)$ calculated in~\cite{wells,taohan} is: \begin{eqnarray} \kappa^2 D(s)= -i{16\pi\over m_D^4}F + O({s\over m_D^2}) ~. \end{eqnarray} \noindent The constant $F$ contains all the dependence on $n$ and is given by: \begin{eqnarray} F=\left \{ \begin{array}{cl} \log(s/m_D^2) & ~~~{\rm for}~n=2\\ 2/(n-2) & ~~~{\rm for}~n>2 \end{array} \right . \label{eqf}~.
\end{eqnarray} \noindent Likewise for the $t$-channel, we can define: \begin{eqnarray} \sum_\nu {i\over t-m_\nu^2}=D_E(t) ~, \end{eqnarray} \noindent and similarly for the $u$-channel. In the case of a $2\to 2$ process, to lowest order in $s/m_D^2$, $D_E(t)=D_E(u)=D(s)$ \cite{taohan}. Thus, in general, the sum of all three channels is: \begin{eqnarray} {\cal M} \approx \kappa^2 D(s) ( \hat{\cal M}_s+ \hat{\cal M}_t+ \hat{\cal M}_u ) \approx -i{16\pi\over m_D^4}F ( \hat{\cal M}_s+ \hat{\cal M}_t+ \hat{\cal M}_u ) ~. \end{eqnarray} \noindent Defining $z=\cos\theta$, where $\theta$ is the angle between $V_1$ and $V_3$ in the cms frame, the differential cross section is thus given by: \begin{eqnarray} {d\sigma\over d z} = \left ( {8\pi F^2\over s m_D^8 B {\cal P}} \right ) \left ( {2 | \vec P_3 | \over\sqrt{s}} \right ) \sum_{polarization,color} \left | \hat{\cal M}_s+ \hat{\cal M}_t+ \hat{\cal M}_u \right |^2 ~, \end{eqnarray} \noindent where $B=2$ for identical final state particles and 1 otherwise, while ${\cal P}$ is the number of initial color times polarization states averaged over. \section{Gauge Boson Scattering} Let us first enumerate some of the instances of this kind of scattering which can be of interest. We will break it down into the following categories: \begin{itemize} \item $Z$ and $\gamma$ only: \begin{eqnarray} \begin{array}{ll} {\rm (a)}~~~\gamma\gamma\to\gamma\gamma ~~~~~~&~~~~~~ {\rm (b)}~~~\gamma\gamma\to ZZ \\ {\rm (c)}~~~ZZ\to\gamma\gamma ~~~~~~&~~~~~~ {\rm (d)}~~~\gamma Z\to\gamma Z \end{array} \end{eqnarray} \item 2 $W$'s with $Z$, $W$ and $\gamma$: \begin{eqnarray} \begin{array}{ll} {\rm (e)}~~~\gamma\gamma\to W^+W^- ;~ W^+W^-\to\gamma\gamma & {\rm (f)}~~~W\gamma\to W\gamma \\ {\rm (g)}~~~ZZ\to W^+W^- ;~ W^+W^-\to ZZ & {\rm (h)}~~~WZ\to WZ \\ {\rm (i)}~~~W^+W^-\to W^+W^- ;~ W^-W^-\to W^-W^- & \ \end{array} \end{eqnarray} \item Processes with 2 gluons: \begin{eqnarray} \begin{array}{ll} {\rm (j)}~~~gg\to\gamma\gamma;~\gamma\gamma\to gg ~~~~~~&~~~~~~ {\rm (k)}~~~g\gamma\to g\gamma \\ {\rm (l)}~~~gg\to ZZ;~ZZ \to gg ~~~~~~&~~~~~~ {\rm (m)}~~~gZ \to gZ \\ {\rm (n)}~~~gg\to WW;~WW \to gg ~~~~~~&~~~~~~ {\rm (o)}~~~gW \to gW \end{array} \end{eqnarray} \item Four gluon coupling: \begin{eqnarray} \begin{array}{ll} {\rm (p)}~~~gg\to gg ~~~~~~&~~~~~~ \ \end{array} \end{eqnarray} \item Four $Z$ coupling: \begin{eqnarray} \begin{array}{ll} {\rm (q)}~~~ZZ\to ZZ ~~~~~~&~~~~~~ \ \end{array} \end{eqnarray} \end{itemize} These processes will proceed through the Feynman diagrams shown in Fig.~1, where the dashed line represents a spin 2 or spin 0 graviton. The exchange may be in various combinations of the $s$, $t$ and $u$-channels depending on what the external bosons are. The spin 0 exchange is only operative if all of the bosons are massive, specifically in cases (g), (h), (i) and (q). Diagrams which contain 4 gluons as in case (p), at least 2 $W$'s as in cases (e)-(h), or 4 $Z$'s as in case (q) can proceed through tree level SM processes, and so the possible large backgrounds must be considered. The other processes will not have tree level SM backgrounds. In what follows we will not consider any process involving $W$-bosons such as in cases (e)-(h), (n) and (o). We will also not explicitly consider the processes $ZZ \to gg$ (case (l)) and $gZ \to gZ$ (case (m)). We note, however, that the formulae we give in the appendices can be easily generalized to include those processes for future use.
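For bookkeeping purposes, the master formula for $d\sigma/dz$ above can be implemented directly. The following schematic sketch is ours; it works in natural units and takes as input the polarization- and color-summed reduced amplitude $\sum|\hat{\cal M}_s+\hat{\cal M}_t+\hat{\cal M}_u|^2$ for the process at hand (cf.\ the helicity amplitudes collected in the appendices):

\begin{verbatim}
import math

def dsigma_dz(z, s, mD, F, B, P_avg, p3, Msq_hat):
    """Differential cross section d(sigma)/dz in GeV^-2.

    B       2 for identical final-state particles, else 1
    P_avg   number of initial color x polarization states averaged over
    p3      cms momentum of the final-state particle V3
    Msq_hat callable returning the summed reduced amplitude at z
    """
    return (8 * math.pi * F**2 / (s * mD**8 * B * P_avg)) \
           * (2 * p3 / math.sqrt(s)) * Msq_hat(z)

# Example: gamma gamma -> gamma gamma.  From appendix A the summed
# reduced amplitude is s^4 (z^2+3)^2 / 16, with B = 2, P_avg = 4 and
# p3 = sqrt(s)/2; this reproduces the d(sigma)/dz of appendix A.
s, mD = 1.0e6, 1000.0   # sqrt(s) = 1 TeV, mD = 1 TeV, in GeV units
print(dsigma_dz(0.0, s, mD, 1.0, 2, 4, math.sqrt(s) / 2,
                lambda z: s**4 * (z**2 + 3)**2 / 16))
\end{verbatim}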
\begin{figure}[htb] \psfull \begin{center} \leavevmode \epsfig{file=fig1.ps,height=6cm,width=6cm,bbllx=0cm,bblly=2cm,bburx=20cm,bbury=25cm,angle=0} \end{center} \caption{\emph{The three Feynman diagrams that give rise to gauge boson scattering through virtual graviton exchange. Dashed lines stand for the exchanges of a spin 2 or a spin 0 graviton.}} \label{fig1} \end{figure} \section{Electron-Positron Colliders} To experimentally study the scattering of gauge bosons it is usually necessary to consider reactions where the initial state consists of fermions, in particular $e^+$, $e^-$, $p$ or $\bar p$. At high energies virtual gauge bosons are then produced nearly on shell and collinear with the initial particles. If one uses the effective boson approximation~\cite{evba,collider}, one may regard the fermion beams as sources of gauge bosons and, at leading log, ignore the virtuality of these bosons in calculating the cross section. At electron-positron colliders a number of reactions of this type involving bosons may be studied. The simplest is perhaps $\gamma\gamma\to\gamma\gamma$. It should be noted, however, that an electron-positron collider can be converted into an almost monochromatic photon-photon collider using the laser back-scattering technique \cite{photoncollider}. An NLC running in such a photon mode has the advantage that the initial photons may be polarized. In fact, the reaction $\gamma\gamma\to\gamma\gamma$ has been considered before in~\cite{4gammapaper} in the context of a photon-photon collider. There it was shown that the discovery reach of low energy gravity through the process $\gamma\gamma\to\gamma\gamma$ may be considerably improved if the initial photons are polarized. The same sensitivity to the initial photon polarization was also found in \cite{gammagamma_ZZ} for the reactions $\gamma \gamma \to W^+W^-,~ZZ$. Here, we wish instead to explore yet another aspect of this type of $V_1 V_2 \to V_3 V_4$ scattering processes, namely, virtual gauge boson emission from the initial $e^+e^-$ of an NLC running in its ``simple'' mode\footnote{For definiteness we discuss gauge boson scattering in an electron-positron collider. Our results below, however, are clearly extendable to muon colliders as well.}. Clearly, in the case of $V_1=V_2=\gamma$, $\gamma\gamma\to VV$ will then lead to a signature of the form $e^+ e^- \to VV e^+ e^-$, which is different from the hard process $\gamma\gamma\to VV$ (directly observable in a photon-photon collider) and has its own kinematic characteristics. For instance, in the case of an electron-positron collider, two photon processes are calculated using the equivalent photon approximation, so that if a cross section $\sigma_{\gamma\gamma\to X}$ is known, the cross section for $e^+e^-\to e^+e^-+X$ via the two photon mechanism is given by: \begin{eqnarray} \sigma_{e^+e^-\to e^+e^-X} = \left ( {\alpha\over 2\pi} \log (s_0/4 \hat m_e^2) \right )^2 \int_0^1 f(\tau) \sigma_{\gamma\gamma\to X}(\tau s_0)~d\tau ~, \end{eqnarray} \noindent where $s_0$ is the square of the center of mass energy of the initial $e^+e^-$ and: \begin{eqnarray} f(\tau)={1\over \tau}((2+\tau)^2\log{1\over \tau} - 2(1-\tau)(3+\tau)) ~. \end{eqnarray} \noindent In this expression the total cross section for $e^+e^-\to e^+e^-+X$ is obtained by taking $\hat m_e=m_e$, the mass of the electron. If one wishes, however, to observe the $e^+e^-$ in the final state, experimental considerations suggest that a minimum cut on the transverse momentum of the final state electrons, $P_{Tmin}$, be used.
In this case the result is given by taking $\hat m_e=P_{Tmin}$. For instance, if one is considering the production of real gravitons as in~\cite{ABS1}, it is essential to observe these final state electrons since the graviton itself is undetectable. \begin{figure}[htb] \psfull \begin{center} \leavevmode \epsfig{file=fig2.ps,height=8cm,width=8cm,bbllx=0cm,bblly=2cm,bburx=20cm,bbury=25cm,angle=0} \end{center} \caption{\emph{The overall cross sections $\sigma(e^+e^- \to V_1 V_2 e^+e^-)$, where $V_i=\gamma,~Z$ or $g$, for various channels of effective gauge boson scattering sub-processes in an electron-positron collider, as a function of the center of mass energy of the collision, $\sqrt{s_0}$. In all the curves we take $m_D=1~TeV$ and $n=4$. The $\gamma\gamma\to\gamma\gamma$ sub-process is shown with a solid line, the $\gamma\gamma\to ZZ$ sub-process is shown with a short dashed line, the $\gamma\gamma\to gg$ sub-process is shown with a dotted line and the $\gamma Z\to \gamma Z$ sub-process is shown with a long dashed line.}} \label{fig2} \end{figure} It is well known~\cite{evba,collider} that at high energy colliders this expression can be generalized to cases where the photons are replaced with $W$ or $Z$ bosons; in those cases one must generally use helicity dependent structure functions for the initial state gauge bosons. \begin{figure}[htb] \psfull \begin{center} \leavevmode \epsfig{file=fig3.ps,height=8cm,width=8cm,bbllx=0cm,bblly=2cm,bburx=20cm,bbury=25cm,angle=0} \end{center} \caption{\emph{The normalized invariant mass distribution $(d\sigma/d\tau)/\sigma$ is shown for $e^+e^- \to \gamma \gamma e^+e^-$ via the sub-process $\gamma\gamma\to\gamma\gamma$ (solid line) and for $e^+e^- \to \gamma Z e^+e^-$ via the sub-process $\gamma Z\to \gamma Z$ (dashed line), where $\sqrt{s_0}=1~TeV$. The other model parameters are the same as in Fig.~\ref{fig2}.}} \label{fig3} \end{figure} The helicity amplitudes and cross section for $\gamma \gamma \to \gamma \gamma$ are given in appendix A (eqs.~(\ref{a1})--(\ref{a3})) and the corresponding full cross section, $\sigma(e^+e^- \to \gamma \gamma e^+e^-)$, is shown in Fig.~\ref{fig2} as a function of $\sqrt{s_0}$ for the case where $n=4$ (or equivalently $F=1$, see eq.~(\ref{eqf})) and $m_D=1~TeV$. In Fig.~\ref{fig3} we show the normalized distribution $(d\sigma/d\tau)/\sigma$, where $\tau=s/s_0$, from which it is clear that most of these events are concentrated at high invariant $\gamma \gamma$ mass. Thus a large portion of the events that make up this cross section would be quite distinctive. This follows from the fact that the cross section is proportional to $s^3$; hence, even though the effective luminosity distribution of the photons decreases at large $\tau$, the growth of the cross section with $\tau$ makes the average value of $\tau$ large. The same trend is also true for all of the other reactions considered, and so experimental tests for these types of processes should focus on final states of large invariant mass. Since $\sigma(\gamma \gamma \to \gamma \gamma)$ (and therefore $\sigma(e^+e^- \to \gamma \gamma e^+e^-)$) is proportional to $F^2/m_D^8$, the value of the cross section may be trivially adjusted for other values of these input parameters; the same is also true for all the other processes which we consider below.
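To illustrate the expected size of this signal, the following numerical sketch (ours) folds the hard cross section of eq.~(\ref{a3}) with the two-photon luminosity formula of the previous section; for $\sqrt{s_0}=1~TeV$ and $m_D=1~TeV$ it yields a few hundred fb, consistent at the leading-log level with the number quoted below:

\begin{verbatim}
import math
from scipy.integrate import quad

alpha, me = 1 / 137.036, 0.511e-3   # QED coupling, electron mass [GeV]
GEV2_TO_FB = 3.894e11               # converts GeV^-2 to femtobarns

def f(tau):
    return ((2 + tau)**2 * math.log(1 / tau)
            - 2 * (1 - tau) * (3 + tau)) / tau

def sigma_hard(s, mD=1000.0, F=1.0):
    # total gamma gamma -> gamma gamma cross section from appendix A
    return math.pi * F**2 * (s**4 / mD**8) * (7.0 / 5.0) / s

def sigma_ee(s0, mD=1000.0):
    lum = (alpha / (2 * math.pi) * math.log(s0 / (4 * me**2)))**2
    integral, _ = quad(lambda t: f(t) * sigma_hard(t * s0, mD), 1e-9, 1)
    return lum * integral * GEV2_TO_FB

print(sigma_ee(1.0e6))   # sqrt(s0) = 1 TeV: a few hundred fb
\end{verbatim}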
Thus, for instance, if $\sqrt{s_0}=1~TeV$, then the cross section is $\sim 270~fb$ with $m_D=1$ TeV, so at such an accelerator with a luminosity of $200~fb^{-1}$, the limit which could be placed on $m_D$ with a criterion of 10 events (i.e., assuming that such a signal is observable only if 10 or more events are seen) would be $m_D\sim 2.9~TeV$. This limit is weaker than the one achievable in a photon-photon collider using polarized initial photons by about a factor of two (see \cite{4gammapaper}), but is independently useful for an electron-positron collider. The amplitude for $\gamma\gamma\to gg$ is simply the $s$-channel of the $\gamma\gamma\to\gamma\gamma$ graph. The helicity amplitudes and cross section for this process are given in appendix B (eqs.~(\ref{b1})--(\ref{b3})). The cross section $\sigma(e^+e^- \to gg e^+e^-)$ is also shown in Fig.~\ref{fig2}; it is similar to the $\gamma\gamma$ final state and in principle leads to similar sensitivity to $m_D$. The events in this case would consist of two jets which would typically have a high invariant mass, as with the $\gamma\gamma$ events above. Another class of reactions which may be of interest at electron-positron colliders is reactions which contain $Z$-bosons in the initial and/or final states, for instance $\gamma\gamma\to ZZ$, which has been studied in~\cite{gammagamma_ZZ} in the context of a photon-photon collider. We note again that the NLC in its photon mode and with polarized initial photons can place stronger limits on $m_D$ \cite{gammagamma_ZZ} by studying $\gamma\gamma\to ZZ$. Nonetheless, as shown below, it is clearly important to analyze this process in an electron-positron collider as well. The helicity amplitudes and cross section for this process are given in appendix D (eqs.~(\ref{d1})--(\ref{d3})) and the cross section is plotted in Fig.~\ref{fig2}. This process is smaller than $\gamma\gamma\to \gamma\gamma$ since it only proceeds through one channel, and also smaller than the similar $\gamma\gamma\to gg$ by roughly the color factor of $8$. It does, however, have the prospect of providing information about the polarization of the $Z$'s from analysis of their decay distributions. In the limit of large $s\gg m_Z^2$, the ratio of longitudinal $Z$-pairs in comparison to transverse pairs approaches $1/12$ if indeed gravitational interactions are responsible for $\gamma \gamma \to ZZ$. In particular: \begin{eqnarray} \left ( {Z_L Z_L\over Z_T Z_T}\right )_{graviton} = {1\over 12}(1+4x_Z)^2 \label{ratio}~, \end{eqnarray} \noindent where $x_Z=m_Z^2/s$. This may be useful in distinguishing $Z$ pairs produced by gravitons from those produced by other new physics mechanisms. For instance, a Higgs boson or other massive scalar can be produced in the $s$-channel via $\gamma \gamma$ fusion and decay to a $ZZ$ pair. This would produce a predominance of longitudinally polarized $Z$'s given by~\cite{higgs_hunters}:\footnote{As discussed below, there is, however, a tree level SM background to the $ZZe^+e^-$ final state coming from exchanges of the SM Higgs boson in the $ZZ\to ZZ$ sub-process. Such Higgs exchanges in the $t$ and $u$-channel will have a different (from the one given in eq.~(\ref{ratio})) ratio of $Z_LZ_L/Z_TZ_T$. However, since the $ZZ$ luminosity functions are much smaller than the $\gamma \gamma$ ones, we expect the ratio $Z_LZ_L/Z_TZ_T$ to be dominated by the gravity mediated $\gamma \gamma \to ZZ$ sub-process if indeed the gravity scale is at the 1 TeV level.
Moreover, as further mentioned below, the processes $\gamma \gamma \to ZZ$ and $ZZ \to ZZ$ may be distinguished by studying the final state electrons.} \begin{eqnarray} \left ( {Z_L Z_L\over Z_T Z_T}\right )_{scalar} = {(2-x_Z)^2\over 2 x_Z^2} ~. \end{eqnarray} \noindent One could also consider the reverse process $ZZ\to \gamma\gamma$; however, this would probably not be of experimental interest since the smaller $ZZ$ luminosity function would cause it to be about 100 times smaller than the $\gamma\gamma\to \gamma\gamma$ process with the same final state. The crossed graph $\gamma Z\to \gamma Z$, however, could have a significant cross section similar to the case of $\gamma\gamma\to ZZ$.\footnote{We note that, for massive vector bosons, the effective vector boson approximation in leading log tends to overestimate the cross section, in particular the cross section coming from fusion of transversely polarized gauge bosons, see e.g., Johnson {\it et al.} in \cite{evba}. Therefore, the actual overall cross section from $\gamma Z \to \gamma Z$ may be slightly smaller than what is shown in Fig.~\ref{fig2}. For the point we are making, however, the leading log approximation suffices.} In this case the electron which emits the $Z$ would typically have transverse momentum $P_T\sim m_Z$ and hence would be easily detectable, providing additional constraints on the kinematics of the final state. Since both $\gamma\gamma\to ZZ$ and $\gamma Z\to \gamma Z$ are about 10 times smaller than $\gamma\gamma\to \gamma\gamma$, these modes would not be primarily useful in putting a bound on $m_D$. However, if a graviton signal were seen in the $\gamma\gamma\to\gamma\gamma$ channel, the ratio $\gamma\gamma:\gamma Z:ZZ$ would be useful in indicating that gravitational interactions were indeed the explanation, since this ratio would be independent of $m_D$ and $n$ in the propagator summation approximation discussed above. The process $ZZ\to ZZ$ potentially has the unique feature that the exchanged graviton can be either scalar or tensor. We find, however, that numerically the cross section in the context of an $e^+e^-$ collider is not greatly sensitive to the scalar sector. This is partly because there is a similar contribution to the scalar exchanges in Fig.~\ref{fig1} coming from the SM Higgs boson. The helicity amplitudes for the process $ZZ \to ZZ$, for both the graviton and the SM Higgs exchanges, are given in appendix G. In fact, even assuming for simplicity an infinitely heavy Higgs, i.e., disregarding the SM background to this process, the scalar graviton contribution is rather small as compared to the spin 2 graviton. In particular, in order to test the sensitivity to the scalar sector in $ZZ\to ZZ$ in the absence of the SM diagrams, we can multiply the scalar propagator by $(1+\epsilon)$, where $\epsilon$ is an arbitrary constant. Using this, the helicity amplitudes for this process, due to the spin 0 (and the spin 2) graviton exchanges, are given in appendix G. Note that the only term proportional to $R$ (hence sensitive to the scalar graviton) which is not suppressed at large $s$ by a factor of $x_Z=m_Z^2/s$ is that corresponding to the $0000$ helicity combination. This is because the coupling of the scalar graviton to a $ZZ$ pair is explicitly proportional to the mass-squared of the $Z$~\cite{taohan}. However, when this is coupled to a longitudinal state, this dependence is canceled by the explicit $1/m_Z$ mass dependence due to each of the four longitudinal polarization vectors.
To get an idea of the sensitivity of the graviton cross section to $\epsilon$, let us define: \begin{eqnarray} r(\epsilon)={\sigma(\epsilon)\over \sigma(\epsilon=0)}-1 ~. \end{eqnarray} \noindent Thus, if $n=4$, then for $\sqrt{s_0}=1~TeV$, $r(1)=0.009$, $r(5)=0.064$, and $r(10)=0.18$. On the other hand, if $\sqrt{s_0}=500~GeV$, then $r(1)=0.05$, $r(5)=0.31$ and $r(10)=0.80$. Therefore, a non-zero value of $\epsilon$ probably leads to unobservably small effects unless $\epsilon$ is fairly large, $\epsilon \gg 1$. The reason for this is that the contribution from the scalar exchange is dominated by the case where the initial bosons are longitudinal, but the kernel for $e\to eZ_L$ does not receive a logarithmic enhancement as in the case of transverse $Z$ emission. The scalar graviton contribution to the final cross section in the overall reaction is thus modest, even though the hard cross section for $Z_LZ_L\to Z_LZ_L$ via scalar exchange is comparable to the hard cross section for $Z_TZ_T\to Z_TZ_T$ via tensor exchange. If one measures the proportion of longitudinal and transverse $Z$ bosons in the final state, the sensitivity to the scalar exchange may be increased; however, in reality the SM Higgs contribution will probably dominate the scalar dynamics in this process. The final state in the case of $ZZ\to ZZ$ is the same as that of $\gamma\gamma\to ZZ$; however, the two processes may be separated by observation of the final state electrons. For instance, if we impose the cut that $P_T>m_Z$ for each of the final state electrons at $\sqrt{s_0}=1~TeV$, the $\gamma\gamma\to ZZ$ signal is reduced by a factor of about 70, while the signal due to $ZZ\to ZZ$ is reduced by a factor of about 1.3. Using this cut, the contribution of $ZZ\to ZZ$ may thus be enhanced relative to $\gamma\gamma\to ZZ$. \section{Hadronic Colliders} At hadronic colliders such as the LHC, gauge boson pairs may be produced via graviton exchange in gluon-gluon collisions. In particular, let us first consider $gg\to \gamma \gamma$ and $gg\to ZZ$. The parton cross section for $gg\to \gamma\gamma$ is the same as $\gamma\gamma\to gg$ reduced by a factor of 64 because of color averaging (see appendix B). Likewise, the parton cross section for $gg\to ZZ$ is the same as $\gamma\gamma\to ZZ$ reduced by 8 to take into account color (see appendix D). The differential cross section as a function of $\tau$ is shown in Fig.~\ref{fig4} in the case of $\gamma\gamma$ production and in Fig.~\ref{fig5} in the case of $ZZ$ production, where a cut of $|z|<0.7$ has been applied. \begin{figure}[htb] \psfull \begin{center} \leavevmode \epsfig{file=fig4.ps,height=8cm,width=8cm,bbllx=0cm,bblly=2cm,bburx=20cm,bbury=25cm,angle=0} \end{center} \caption{\emph{The distribution of $pp\to \gamma\gamma+X$ events as a function of $\tau$ at the LHC for various sub-processes, where in each case a cut of $|\cos\theta|<0.7$ is imposed. Shown are the total (solid line) $pp\to\gamma\gamma +X$ cross section from $gg$ and $q \bar q$ fusion processes including the graviton exchange and the SM contributions, only the SM cross section from $q\bar q\to \gamma\gamma$ (dashed line) and only the graviton exchange cross section from $gg\to \gamma\gamma$ (dot-dashed line).
The model parameters are the same as in Fig.~\ref{fig2}.}} \label{fig4} \end{figure} \begin{figure}[htb] \psfull \begin{center} \leavevmode \epsfig{file=fig5.ps,height=8cm,width=8cm,bbllx=0cm,bblly=2cm,bburx=20cm,bbury=25cm,angle=0} \end{center} \caption{\emph{The distribution of $pp\to ZZ+X$ events as a function of $\tau$ at the LHC for various sub-processes, where in each case a cut of $|\cos\theta|<0.7$ is imposed. Shown are the total (solid line) $pp\to ZZ +X$ cross section from $gg$ and $q \bar q$ fusion processes including the graviton exchange and the SM contributions, only the SM cross section from $q\bar q\to ZZ$ (dashed line) and only the graviton exchange cross section from $gg\to ZZ$ (dot-dashed line). The model parameters are the same as in Fig.~\ref{fig2}.}} \label{fig5} \end{figure} In both cases, there is also a contribution from $q\bar q\to \gamma\gamma$, $ZZ$ due to $s$-channel graviton exchange. These $q \bar q$ annihilation processes also have a SM contribution. The differential cross sections for $q\bar q\to \gamma\gamma$ and $q \bar q \to ZZ$, including the SM, graviton mediated and SM$\times$graviton interference terms, are given in appendix H. We note that the effects of graviton exchanges at the Tevatron via the processes $gg \to \gamma \gamma$ and $q\bar q\to \gamma\gamma$ were studied in detail by Cheung in \cite{4gammapaper}. Here we focus instead on the LHC, in which case the dominant contribution to $\gamma \gamma +X$ production comes from the gluon fusion sub-process, as opposed to the Tevatron, where $q\bar q\to \gamma\gamma$ is more important. As will be shown below, the attainable limit on $m_D$ at the LHC from $pp \to \gamma \gamma +X$ is much stronger than the one obtained by Cheung in \cite{4gammapaper}. In Figs.~\ref{fig4} and \ref{fig5} we plot the invariant mass distribution, $d\sigma/d\tau$, for the total cross section (i.e., including the SM and graviton contributions from $gg \to \gamma \gamma,~ZZ$ and $q\bar q \to \gamma \gamma,~ZZ$), the SM background from $q\bar q \to \gamma \gamma,~ZZ$ and the graviton cross section due to the gluon fusion process only. Here and in what follows, the corresponding overall cross sections of the colliding protons are calculated using the CTEQ4M parton distributions \cite{cteq4m}. We see that the gluon fusion process is the dominant production mechanism from graviton exchanges. Moreover, as in the case of the electron-positron collider, the cross section peaks at relatively large $\tau$, in marked contrast to the SM background, which falls off sharply with $\tau$. Clearly, events with such large $\tau$ would be a distinctive signature of new physics at the LHC with negligible SM background. In Fig.~\ref{fig7} we show the $3\sigma$ limits that can be placed on the scale of the gravitational interactions $m_D$ at the LHC, by measuring $pp \to \gamma \gamma,~ZZ ~+X$. The limits are obtained by requiring: \begin{eqnarray} \frac{\sigma^T_{M_{VV}^{min}} -\sigma^{SM}_{M_{VV}^{min}}} {\sqrt{\sigma^T_{M_{VV}^{min}}}} \times \sqrt L > 3 ~,\label{bound} \end{eqnarray} \noindent where $\sigma^T_{M_{VV}^{min}}$ is the total cross section for $\gamma \gamma$ or $ZZ$ production at the LHC integrated from a lower $VV$ invariant mass cut of $M_{VV}^{min}$ ($V=\gamma$ or $Z$), and $\sigma^{SM}_{M_{VV}^{min}}$ is the corresponding SM cross section for these processes.
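As a quick numerical illustration of the criterion in eq.~(\ref{bound}) (ours; using the integrated luminosity adopted below and representative cross sections of the size quoted below for $m_D=6$ TeV):

\begin{verbatim}
import math

def significance(sigma_tot, sigma_sm, lum):
    """Excess over the SM in units of the statistical error on the
    total rate; cross sections in fb, luminosity lum in fb^-1."""
    return (sigma_tot - sigma_sm) / math.sqrt(sigma_tot) * math.sqrt(lum)

# e.g. sigma_tot ~ 0.7 fb, sigma_SM ~ 0.1 fb, L = 30 fb^-1:
print(significance(0.7, 0.1, 30.0))    # ~3.9, i.e. above 3
print(0.7 * 30.0, (0.7 - 0.1) * 30.0)  # ~21 total and ~18 signal events
\end{verbatim}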
We take an integrated luminosity of $L=30$ fb$^{-1}$ and we require at least 10 such events above the SM background for the given value of $m_D^{min}$, the lower bound on $m_D$ (see \cite{footbound}). \begin{figure}[htb] \psfull \begin{center} \leavevmode \epsfig{file=fig6.ps,height=8cm,width=8cm,bbllx=0cm,bblly=2cm,bburx=20cm,bbury=25cm,angle=0} \end{center} \caption{\emph{The 2 jets invariant mass distribution of $pp\to 2 ~{\rm jets}~+X$ events as a function of $\tau$ at the LHC for various sub-processes, where in each case a cut of $|\cos\theta|<0.7$ is imposed. Shown are the total (solid line) $pp\to2~{\rm jets}~+X$ cross section from the $gg$ fusion processes $gg \to gg,~q \bar q$ including the graviton exchange and the SM contributions, only the SM cross section from $gg \to gg,~q \bar q$ (dashed line) and only the graviton exchange cross section from the four gluon scattering sub-process $gg \to gg$ (dot-dashed line). The model parameters are the same as in Fig.~\ref{fig2}.}} \label{fig6} \end{figure} As stated before, since the SM cross sections for these types of processes drop sharply with $M_{VV}$, it is advantageous to study these cross sections at high $M_{VV}$, in which case one is eliminating a large portion of the SM background. Clearly, as can be seen in Fig.~\ref{fig7}, there is an optimal lower cut, $M_{VV}^{min}$, that one can impose on these signals in order to place the best limits on $m_D$ in case no deviation from the SM is observed. This is because, as one goes to still higher $M_{VV}$ values, the gravitational signal falls sharply as well. For example, for $\gamma \gamma$ and $ZZ$ production, the optimal lower cuts to be considered are $M_{\gamma \gamma}^{min} \sim M_{ZZ}^{min} \approx 2$ TeV, in which case the obtainable bound at the LHC (with $L=30$ fb$^{-1}$) will be $m_D \gtrsim 6$ TeV. We note that, for $m_D=6$ TeV and $M_{\gamma \gamma}^{min} = 2$ TeV, $\sigma^T_{M_{\gamma \gamma}^{min}}(pp \to \gamma \gamma+X) \simeq 0.7$ fb and $\sigma^{SM}_{M_{\gamma \gamma}^{min}}(pp \to \gamma \gamma+X) \simeq 0.1$ fb, yielding about $\sim 20$ $\gamma \gamma$ events out of which $\sim 17$ are due to gravitational interactions. Also, this limit is more than four times larger than the one found by Cheung in \cite{4gammapaper} for the upgraded Tevatron run II case. The corresponding cross sections for $ZZ$ production with $m_D=6$ TeV and $M_{ZZ}^{min} = 2$ TeV are: $\sigma^T_{M_{ZZ}^{min}}(pp \to ZZ+X) \simeq 0.9$ fb and $\sigma^{SM}_{M_{ZZ}^{min}}(pp \to ZZ+X) \simeq 0.25$ fb, yielding about $\sim 30$ $ZZ$ events out of which $\sim 20$ are due to gravitational interactions. \begin{figure}[htb] \psfull \begin{center} \leavevmode \epsfig{file=fig7.ps,height=8cm,width=8cm,bbllx=0cm,bblly=2cm,bburx=20cm,bbury=25cm,angle=0} \end{center} \caption{\emph{$3\sigma$ bounds on the scale $m_D$ of the gravitational interactions derived from eq.~(\ref{bound}), as a function of the lower cut on the invariant mass $M_{VV}^{min}$ or $M_{jj}^{min}$ of the $VV$ or $jj$ system ($V=\gamma$ or $Z$ and $j$=jet), see text. Shown are the limits ($m_D^{min}$) that can be obtained by using the reactions $pp \to \gamma \gamma +X$ (solid line), $pp \to ZZ +X$ (dashed line) and $pp \to 2~{\rm jets} +X$ (dot-dashed line), where in each case a cut of $|\cos\theta|<0.7$ is imposed. The bounds are given for the LHC with an integrated luminosity of 30 fb$^{-1}$. The rest of the model parameters are the same as in Fig.~\ref{fig2}.
See also \cite{footbound}.}} \label{fig7} \end{figure} The process $gg\to W^+W^-$ would be about twice as large as $gg \to ZZ$ due to the Bose symmetry factor in the latter. The final state, however, would be more difficult to observe experimentally. Another process which may be studied at hadronic colliders is $pp~{\rm or}~p \bar p \to 2~{\rm jets}+X$. At the LHC, $gg\to gg$ and $gg \to q \bar q$ will be the dominant production mechanisms of 2 jets due to the large gluon content in the colliding protons. These processes can proceed via graviton exchanges where, again, the dominant graviton signal comes from the gluon--gluon scattering sub-process $gg\to gg$. There will of course be a large 2 jets QCD background from the SM. The SM and gravity mediated amplitudes and cross sections for $gg\to gg$ and $gg \to q \bar q$ are given in appendices A and H, respectively. In Fig.~\ref{fig6} we plot the invariant mass distribution, $d\sigma/d\tau$, of the total cross section for $pp \to 2~{\rm jets}+X$, of the SM QCD background and of the graviton cross section due only to the pure four gluon scattering process. Evidently, the QCD background is again peaked at low $\tau$, while the 2 jet events resulting from graviton exchange occur at large $\tau$. In Fig.~\ref{fig7} we also show the $3\sigma$ limits (obtained from eq.~(\ref{bound})) that can be placed on $m_D$ at the LHC by measuring the dijet events $pp \to 2~{\rm jets}~+X$. For reasons mentioned above, there is again an optimal value of $M_{jj}^{min}$ ($M_{jj}$ is the invariant mass of the 2 jets system) which gives the best limit on $m_D$. In particular, in this case, cutting the cross section from below by $M_{jj}^{min} \sim 5$ TeV will result in the $3 \sigma$ limit $m_D \gtrsim 6$ TeV, if no deviation from the SM cross section is observed. Again we note that for $m_D=6$ TeV and $M_{jj}^{min} = 5$ TeV, $\sigma^T_{M_{jj}^{min}}(pp \to 2~{\rm jets}+X) \simeq 1.5$ fb and $\sigma^{SM}_{M_{jj}^{min}}(pp \to 2~{\rm jets}+X) \simeq 0.8$ fb, yielding about $\sim 45$ $jj$ events out of which $\sim 20$ are due to gravitational interactions. \section{Summary and conclusion} We have studied gauge boson--gauge boson scattering processes of the form $V_1V_2 \to V_3V_4$, where the $V_i$ are SM gauge bosons, due to graviton exchanges. We have shown that these types of processes can lead to some very distinct signatures of gravitational interactions at future colliders, which have no SM background at lowest order in the relevant gauge couplings. For example, at a high energy electron-positron collider, vector bosons which are produced nearly on-shell and collinear with the initial particles can collide and exchange spin 2 and/or spin 0 gravitons, leading to appreciably large new signals such as $\gamma \gamma \to \gamma \gamma,~ gg,~ZZ$, $\gamma Z \to \gamma Z$, $ZZ\to ZZ$, etc. The latter is of particular interest since it has some sensitivity to scalar graviton excitations if, for some reason, these turn out to be enhanced. Similarly, gauge boson scattering due to graviton exchanges which involves two or four gluons, such as $gg \to \gamma \gamma,~ZZ,~WW,~gg$, can lead to significantly enhanced signals of $\gamma \gamma,~ZZ$, $WW$ and $jj$ ($j$=jet) pair production at the LHC.
A key feature of all these types of scattering processes is that the gravity mediated cross sections peak at high values of the invariant $VV$ mass, whereas their corresponding SM cross sections (which in some instances arise only at higher orders in the gauge couplings) are concentrated at low $\tau$ values. Thus, these types of low energy gravity signals will be quite distinctive. Alternatively, if no such new signals are observed, then some of these processes can be used to place a bound on the scale of gravitational interactions. For example, we find that a limit of $m_D \gtrsim 3$ TeV can be placed on the scale of the low energy gravity using the reaction $e^+e^- \to \gamma \gamma e^+e^-$, which proceeds predominantly through graviton exchanges in the $\gamma \gamma \to \gamma \gamma$ sub-process, at the $e^+e^-$ Next Linear Collider. In those cases which have a significant SM background, we utilized the growth of the gravitational cross sections with $\tau$ to derive the best (optimal) limits. For example, we found that $m_D \gtrsim 6$ TeV will be obtainable at the LHC by measuring the production rates of $pp \to \gamma \gamma,~ZZ,~WW~+X$ and $pp \to 2~{\rm jets}~+X$ which, at high $\tau$, are driven predominantly by graviton exchanges in the $gg \to \gamma \gamma,~ZZ,~WW,~gg$ sub-processes. \bigskip \bigskip S.B. thanks Jose Wudka for discussions. This research was supported in part by US DOE Contract Nos. DE-FG02-94ER40817 (ISU), DE-FG03-94ER40837 (UCR) and DE-AC02-98CH10886 (BNL). \newpage \begin{center} \noindent {\Large \bf Appendices: Helicity amplitudes and cross sections} \end{center} For each of the processes we discuss, we give the helicity amplitudes, the differential cross section and the total cross section. In all cases the scattering is of the general form $V_1V_2\to V_3V_4$, where $V_i$ is a vector boson with momentum $p_i$ and helicity $h_i$. The angle between $\vec p_1$ and $\vec p_3$ in the cms frame is $\theta$ and $z=\cos\theta$. We also define the quantity $s=(p_1+p_2)^2$.\\ \noindent{\large \bf A: $\gamma\gamma\to\gamma\gamma$ and $gg\to gg$} \setcounter{num}{1} \setcounter{equation}{0} \def\theequation{\Alph{num}.\arabic{equation}}\\ The helicity amplitudes for these processes, in which the graviton exchange is in all three $s$, $t$ and $u$-channels, are: \begin{eqnarray} &&{\cal M}_{4\gamma}(h1,h2,h3,h4) = {1\over 4}\kappa^2 D \times \nonumber \\ &&~~~~~~~~~~\left \{ \begin{array}{ll} 2Q_s s^2 &{\rm if}~h1=h2=h3=h4 \\ Q_u (1+z)^2s^2/2 &{\rm if}~h1=-h2=-h3=h4 \\ Q_t (1-z)^2s^2/2 &{\rm if}~h1=-h2=h3=-h4 \\ 0 & {\rm otherwise} \end{array} \right . ~, \label{a1} \end{eqnarray} \noindent where for $\gamma\gamma\to\gamma\gamma$, $Q_s=Q_t=Q_u=1$, while for $gg\to gg$, $Q_s=(\delta_{AC}\delta_{BD}+\delta_{AD}\delta_{BC})/2$, $Q_t=(\delta_{AB}\delta_{CD}+\delta_{AD}\delta_{BC})/2$ and $Q_u=(\delta_{AC}\delta_{BD}+\delta_{AB}\delta_{CD})/2$, where $A$, $B$, $C$ and $D$ are the color indices for $g_1$, $g_2$, $g_3$ and $g_4$, respectively. This leads to the following differential and total cross sections, previously derived in~\cite{4gammapaper}: \begin{eqnarray} {d\sigma_{2\gamma\to 2\gamma}\over dz} &=& {\pi F^2 Q_{tot}\over 16 s}\left ({s^4\over m_D^8}\right ) (z^2+3)^2 ~, \label{a2} \\ \sigma_{2\gamma\to 2\gamma}&=& {\pi F^2 Q_{tot}\over s} \left ( {s^4\over m_D^8} \right ) \left ( {7\over 5}\right ) ~, \label{a3} \end{eqnarray} \noindent where $Q_{tot}=1$ for $\gamma\gamma\to\gamma\gamma$ and $Q_{tot}=9/16$ for $gg\to gg$.
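As a quick numerical consistency check (ours) that the angular factor of eq.~(\ref{a2}) integrates to the coefficient appearing in eq.~(\ref{a3}):

\begin{verbatim}
from scipy.integrate import quad

# Integrate the angular factor of the differential cross section over
# z in [-1, 1]; dividing by the relative normalization 16 must give
# the 7/5 appearing in the total cross section.
val, _ = quad(lambda z: (z**2 + 3)**2, -1.0, 1.0)
print(val / 16.0)   # 1.4 = 7/5
\end{verbatim}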
In the case of $gg\to gg$ there is also an SM contribution, which is given, for example, in \cite{collider}. The graviton amplitude, therefore, interferes with the tree level SM diagrams. Denoting the interference term by $\sigma^I_{gg\to gg}$, this interference is given by: \begin{eqnarray} {d\sigma^I_{gg\to gg}\over dz}=- {5\over 2} \sqrt{{\pi \alpha_s^2\over s} \left ({d \sigma_{gg\to gg}\over d z }\right ) } ~~,~~ \sigma^I_{gg\to gg}=- {25\over 42}\sqrt{35{\pi \alpha_s^2\over s} \sigma_{gg\to gg} } ~,\label{a4} \end{eqnarray} \noindent where $\alpha_s=g_s^2/4 \pi$ and $g_s$ is the QCD coupling.\\ \noindent{\large \bf B: $\gamma\gamma\to gg$} \setcounter{num}{2} \setcounter{equation}{0} \def\theequation{\Alph{num}.\arabic{equation}}\\ In the case of $\gamma\gamma\to gg$, the graviton exchange is only in the $s$-channel and the helicity amplitudes are: \begin{eqnarray} &&{\cal M}_{2\gamma\to 2g}(h1,h2,h3,h4) = {1\over 16}\kappa^2 D \times \nonumber\\ &&~~~~~~~~~~\left \{ \begin{array}{ll} (1+z)^2s^2\delta_{AB} &{\rm if}~h1=-h2=h4=-h3 \\ (1-z)^2s^2\delta_{AB} &{\rm if}~h1=-h2=h3=-h4 \\ 0 & {\rm otherwise} \end{array} \right . ~, \label{b1} \end{eqnarray} \noindent where $A$ and $B$ are color indices. This leads to the differential and total cross sections: \begin{eqnarray} {d\sigma_{2\gamma\to 2 g} \over dz}&=&{\pi F^2\over 8 s}\left ( {s^4\over m_D^8}\right ) (z^4+6z^2+1)~,\label{b2}\\ \sigma_{2\gamma\to 2 g} &=& {4\over 5} {\pi F^2\over s}\left ( {s^4\over m_D^8}\right ) ~.\label{b3} \end{eqnarray} \noindent The cross sections in the reverse reaction $gg\to \gamma\gamma$ are smaller by a factor of $64$ due to color averaging and were also derived by Cheung in \cite{4gammapaper}, with whom we agree. \\ \noindent{\large \bf C: $g\gamma\to g\gamma$} \setcounter{num}{3} \setcounter{equation}{0} \def\theequation{\Alph{num}.\arabic{equation}}\\ In this case, the graviton exchange is only in the $t$-channel and the helicity amplitudes are: \begin{eqnarray} && {\cal M}_{g\gamma\to g\gamma}(h1,h2,h3,h4) = {1\over 4}\kappa^2 D \times \nonumber\\ &&~~~~~~~~~~\left \{ \begin{array}{ll} s^2 &{\rm if}~h1=h2=h3=h4 \\ (1+z)^2 s^2/4 &{\rm if}~h1=-h2=h3=-h4 \\ 0 & {\rm otherwise} \end{array} \right . ~, \label{c1} \end{eqnarray} \noindent which gives a differential and total cross section of: \begin{eqnarray} {d\sigma_{g\gamma\to g\gamma}\over dz} &=& {\pi F^2\over 64 s}\left ({s^4\over m_D^8}\right ) (17+4z+6z^2+4z^3+z^4) ~,\label{c2}\\ \sigma_{g\gamma\to g\gamma}&=& {\pi F^2\over s} \left ({s^4\over m_D^8}\right ) \left ( {3\over 5}\right ) ~.\label{c3} \end{eqnarray}\\ \noindent{\large \bf D: $\gamma\gamma\to ZZ$ and $gg\to ZZ$} \setcounter{num}{4} \setcounter{equation}{0} \def\theequation{\Alph{num}.\arabic{equation}}\\ In the case of $\gamma\gamma\to ZZ$ the graviton exchange is also only in the $s$-channel and the helicity amplitudes are given by: \begin{eqnarray} {\cal M}_{2\gamma\to 2Z}(h1,h2,h3,h4) = {1\over 4}\kappa^2 D s^2 \alpha_{h1,h2,h3,h4}(z,x_Z) ~,\label{d1} \end{eqnarray} \noindent where $x_Z=m_Z^2/s$ and we have defined the ``reduced'' helicity amplitude $\alpha_{h1,h2,h3,h4}$ which is given in Table 1. Note that the amplitude is only non-zero if $h1=-h2$, so only $\alpha_{+1,-1,h3,h4}$ is given.
The case where $h1=-1$ is clearly related by Parity.\\ $$ \begin{tabular}{|c|c|} \hline $h1~h2~h3~h4$ & $\alpha_{h1,h2,h3,h4}(z,x_Z)$ \\ \hline \hline $+-\pm\pm$ & $x_Z(1-z^2)$\\ \hline $+-\pm 0$; $+-0\mp$ & $(x_Z(1-z^2)/2)^{1/2}(1\pm z)$\\ \hline $+-\pm\mp$ & $(1\pm z)^2/4$\\ \hline $+-00$ & $-(1-z^2)(1+4x_Z)/4$ \\ \hline \end{tabular} $$ \bigskip \bigskip {\bf Table 1:} {\emph {The reduced helicity amplitude $\alpha_{h1,h2,h3,h4}$, as defined in eq.(\ref{d1}), for the process $\gamma \gamma \to ZZ$ or $gg \to ZZ$. Values not given are either zero or related to the given values by Parity.}} \bigskip \bigskip The differential and total cross sections for this process are thus given by: \begin{eqnarray} {d\sigma_{2\gamma\to 2Z}\over dz} &=& {\pi F^2\over 128 s} \left ({s^4\over m_D^8}\right ) \beta_Z (3+4x_Z-4x_Zz^2+z^2) \nonumber\\ && \times (1+12x_Z-12z^2x_Z+3z^2) ~,\label{d2}\\ \sigma_{2\gamma\to 2Z}&=& {\pi F^2\over 120 s} \left ({s^4\over m_D^8}\right ) \beta_Z (48 x_Z^2 +56 x_Z+13) ~,\label{d3} \end{eqnarray} \noindent where: \begin{eqnarray} \beta_Z=\sqrt{1-4x_Z} \label{betaz} ~. \end{eqnarray} \noindent In the case of $gg\to ZZ$ the cross sections are related to the above by $\sigma_{2g\to 2Z}=\sigma_{2\gamma\to 2Z}/8$ due to color. \\ \noindent{\large \bf E: $ZZ\to\gamma\gamma$} \setcounter{num}{5} \setcounter{equation}{0} \def\theequation{\Alph{num}.\arabic{equation}}\\ The process $ZZ\to\gamma\gamma$ is the reverse of $\gamma\gamma\to ZZ$, so the matrix elements are the same as those of the reverse process; however, in order to analyze the cross section using the effective boson approximation, we need to obtain the differential and total cross section for each initial helicity state. These are given by: \begin{eqnarray} &&\frac{d\sigma_{2Z\to 2\gamma}}{d z} (h_1,h_2) = {\pi F^2\over 4 s} \left ({s^4\over m_D^8}\right ) \beta_Z^{-1} \times \nonumber\\ &&~~~~~~~~\left \{ \begin{array}{ll} (1+6z^2+z^4)/8 & {\rm for }~h1,h2=\pm,\mp \\ 2x_Z^2(1-z^2)^2 & {\rm for }~h1,h2=\pm,\pm \\ x_Z(1-z^4) & {\rm for }~h1,h2=\pm, 0;~0,\pm\\ (1+4x_Z)^2 (1-z^2)^2/8 & {\rm for }~h1,h2=0,0 \end{array} \right. ~,\label{e1} \end{eqnarray} \begin{eqnarray} &&\sigma_{2Z\to 2\gamma}(h_1,h_2) = {\pi F^2\over 4 s} \left ({s^4\over m_D^8}\right ) \beta_Z^{-1} \times \nonumber\\ &&~~~~~~~~\left \{ \begin{array}{ll} 4/5 & {\rm for }~h1,h2=\pm,\mp \\ 32 x_Z^2/15 & {\rm for }~h1,h2=\pm,\pm \\ 8 x_Z/5 & {\rm for }~h1,h2=\pm, 0;~0,\pm\\ 2(1+4x_Z)^2/15 & {\rm for }~h1,h2=0,0 \end{array} \right. ~, \label{e2} \end{eqnarray} \noindent where $\beta_Z$ is defined in eq.~(\ref{betaz}) and again $x_Z=m_Z^2/s$.\\ \noindent{\large \bf F: $\gamma Z \to \gamma Z$} \setcounter{num}{6} \setcounter{equation}{0} \def\theequation{\Alph{num}.\arabic{equation}}\\ In the case of $\gamma Z\to \gamma Z$ the reaction proceeds only through the $t$-channel.
The helicity amplitudes for this process are: \begin{eqnarray} {\cal M}_{\gamma Z \to \gamma Z}(h1,h2,h3,h4) = {1\over 4}\kappa^2 D s^2 \beta_{h1,h2,h3,h4}(z,x_Z) ~,\label{f1} \end{eqnarray} \noindent where the reduced helicity amplitudes in this case, $\beta_{h1,h2,h3,h4}$, are given in Table 2.\\ $$ \begin{tabular}{|c|c|} \hline $h1~h2~h3~h4$ & $\beta_{h1,h2,h3,h4}(z,x_Z)$ \\ \hline \hline $++++$ &$ (2-x_Z+x_Zz)^2(1-x_Z)^2/4 $\\ \hline $+-++$ &$ x_Z(1-x_Z)^2(1-z^2)/4 $\\ \hline $+-+-$ &$ (1-x_Z)^2(1+z)^2/4 $\\ \hline $+++0$ &$ -\sqrt{2x_Z(1-z^2)}(1-x_Z)^2(2-x_Z+zx_Z)/4 $\\ \hline $+0+-$ &$ -\sqrt{2x_Z(1-z^2)}(1-x_Z)^2(1+z)/4 $\\ \hline $+0+0$ &$ (1+z)(1-x_Z)^2(1-x_Z+x_Zz)/2 $\\ \hline \end{tabular} $$ \bigskip \bigskip {\bf Table 2:} {\emph {The reduced helicity amplitude $\beta_{h1,h2,h3,h4}$, as defined in eq.(\ref{f1}), for the process $\gamma Z \to \gamma Z$. Values not given are either zero or related to the given values by Parity.}} \bigskip \bigskip The total cross section as a function of the initial $Z$ helicity and averaged over the initial photon polarizations is: \begin{eqnarray} &&\sigma_{\gamma Z\to \gamma Z}(h2)= {\pi F^2\over 60 s} \left ( {s^4\over m_D^8} \right ) \times \nonumber\\ &&\left \{ \begin{array}{cl} (1-x_Z)^4(36-47x_Z+52x_Z^2-27x_Z^3+6x_Z^4) &~~~{\rm for}~~h2=\pm\\ 2 (1-x_Z)^4(10+3x_Z-6x_Z^2+3x_Z^3) &~~~{\rm for}~~h2=0 \end{array} \right . ~, \label{f2} \nonumber\\ ~ \end{eqnarray}\\ \noindent{\large \bf G: $ZZ \to ZZ$} \setcounter{num}{7} \setcounter{equation}{0} \def\theequation{\Alph{num}.\arabic{equation}}\\ This process proceeds in all three channels via both spin 2 and spin 0 graviton exchanges and, in the SM, via neutral Higgs exchange. For the graviton exchange the helicity amplitudes for this process are: \begin{eqnarray} {\cal M}_{2Z \to 2Z}(h1,h2,h3,h4) = {1\over 4}\kappa^2 D s^2 \gamma_{h1,h2,h3,h4}(z,x_Z,R) ~,\label{g1} \end{eqnarray} \noindent where the reduced helicity amplitudes $\gamma_{h1,h2,h3,h4}$ are given in Table~3 and $R=2(1+\epsilon)(n-1)/(3n+6)$ is the factor associated with the scalar propagator as discussed in the text. $$ \begin{tabular}{|c|c|c|} \hline $h1~h2~h3~h4$ & $\gamma_{h1,h2,h3,h4}^2(z,x_Z)$ & $\gamma_{h1,h2,h3,h4}^0(z,x_Z)$ \\ \hline \hline $+-\pm\mp$ & $(3-6x_Z+8 x_Z^2)(1\pm z)^2/6$ & $2 x_Z^2 (1\pm z)^2$ \\ \hline $+-\pm\pm$ & $x_Z(9-4x_Z)(1-z^2)/6$ & $x_Z^2(1-z^2)$ \\ \hline $\pm\pm\pm\pm$ & $2(24x_Z^2+8x_Z^2 z^2-18x_Z+3)/3$ & $2x_Z^2(z^2+3)$ \\ \hline $\pm\pm\mp\mp$ & $16x_Z^2z^2/3 $ & $2x_Z^2(3+z^2)$ \\ \hline $0000$ & $ (3+6z^2x_Z-22x_Z+24x_Z^2z^2+24x_Z^2+z^2)/3$ & $(24x_Z^2-16 x_Z+3+z^2)$ \\ \hline $00 \pm\pm$ & $x_Z(15-28x_Z-7z^2+36z^2x_Z)/6$ & $x_Z(3-4x_Z-z^2)$ \\ \hline $00 \pm\mp$ & $-(1-z^2)(3+26 x_Z-24 x_Z^2)/12$ & $-x_Z(1-z^2)$ \\ \hline $000\pm$ & $\pm (7+36x_Z)z\sqrt{2x_Z(1-z^2)}/12$ & $\pm z\sqrt{2x_Z(1-z^2)}/2$ \\ \hline $0+0\pm;~0+\mp 0$ & $(1\pm z)(3-28 x_Z+32x_Z^2\pm 16x_Zz)/6$ & $-x_Z(1\pm z)(1-2x_Z\mp z)$ \\ \hline $0+\pm\pm$ & $\pm 8 z\sqrt{2x_Z^3(1-z^2)}$ & $\pm z\sqrt{2x_Z^3(1-z^2)}$ \\ \hline $0+\mp\pm$ & $-(1\pm z)(9-4x_Z)\sqrt{2x_Z(1-z^2)}/12$ & $-(1\pm z)x_Z \sqrt{2x_Z(1-z^2)} $ \\ \hline \end{tabular} $$ \bigskip \bigskip {\bf Table 3:} {\emph {The reduced helicity amplitude $\gamma_{h1,h2,h3,h4}$, as defined in eq.(\ref{g1}), for the process $ZZ \to ZZ$. We have defined $\gamma_{h1,h2,h3,h4}(z,x_Z,R)=\gamma_{h1,h2,h3,h4}^2(z,x_Z)+ R\gamma_{h1,h2,h3,h4}^0(z,x_Z)$, where $R=2(1+\epsilon)(n-1)/(3n+6)$. See also text.
Values not given are related to the given values by Parity.}} \bigskip \bigskip For the SM Higgs boson exchange the helicity amplitudes for this process are: \begin{eqnarray} {\cal M}_{2Z \to 2Z}^{SM}(h1,h2,h3,h4) = {1\over 8} g_W^2 \frac{m_Z^2}{m_W^2} s \gamma_{h1,h2,h3,h4}^{SM}(z,x_Z,\Pi_s,\Pi_t,\Pi_u) ~,\label{g1sm} \end{eqnarray} \noindent where $g_W=e/s_W$ is the weak coupling and $\Pi_x=(x - m_H^2 +i \Gamma_H m_H)^{-1}$, $x=s,~t,~u$, are the $s$, $t$ and $u$-channel factors associated with the corresponding SM Higgs propagators, where $t\,(u)=s(z \mp 1)(4x_Z-1)/2$, the upper (lower) sign corresponding to $t$ ($u$). The reduced SM helicity amplitudes $\gamma_{h1,h2,h3,h4}^{SM}$ are given in Table~4. $$ \begin{tabular}{|c|c|} \hline $h1~h2~h3~h4$ & $\gamma_{h1,h2,h3,h4}^{SM}(z,x_Z,\Pi_s,\Pi_t,\Pi_u)$ \\ \hline \hline $+-\pm\mp$ &$ 2 x_Z(1 \pm z)^2 (\Pi_t+\Pi_u)$\\ \hline $+-\pm \pm$ &$ 2 x_Z(1 - z^2) (\Pi_t+\Pi_u) $\\ \hline $++ \pm \pm$ &$ 2x_Z \left[ 4\Pi_s + (1 \pm z)^2 \Pi_t + (1 \mp z)^2 \Pi_u \right]$\\ \hline $0000$ &$ 2 \left[ 4\Pi_s (1-2x_Z)^2 + \Pi_t (z-1+4x_Z)^2 + \Pi_u (z+1-4x_Z)^2 \right]/x_Z$\\ \hline $00\pm \pm$ &$ \left[ 4\Pi_s (1-2x_Z) + (1-z^2)(\Pi_t+\Pi_u) \right] $\\ \hline $00 \pm \mp $ &$ -(1-z^2)(\Pi_t+\Pi_u) $\\ \hline $00 0\pm $ &$ \pm \sqrt{2x_Z(1-z^2)} \left[ \Pi_t(z-1+4x_Z) + \Pi_u(z+1-4x_Z)\right]/2x_Z $\\ \hline $0 + \pm 0 $ &$ -(1 \mp z) \left[ \Pi_t(z \pm 1) + \Pi_u(z+1-4x_Z)\right]$\\ \hline $0 + 0 \pm $ &$ (1 \pm z) \left[ \Pi_u(z \mp 1) + \Pi_t(z-1+4x_Z)\right]$\\ \hline $0 + \pm \pm $ &$ \sqrt{2x_Z(1-z^2)} \left[ \Pi_t(z \pm 1) + \Pi_u(z \mp 1)\right] $\\ \hline $0 + \mp \pm $ &$ -\sqrt{2x_Z(1-z^2)} (z \pm 1) (\Pi_t +\Pi_u)/2$\\ \hline \end{tabular} $$ \bigskip \bigskip {\bf Table 4:} {\emph {The reduced helicity amplitude $\gamma_{h1,h2,h3,h4}^{SM}$ for the SM Higgs exchange contribution to the process $Z Z \to Z Z$, as defined in eq.(\ref{g1sm}). Values not given are related to the given values by Parity. $\Pi_{s,t,u}$ are defined in the text above.}} \bigskip \bigskip \noindent{\large \bf H: $q \bar q \to \gamma \gamma$, $q \bar q \to ZZ$ and $gg\to q\bar q$} \setcounter{num}{8} \setcounter{equation}{0} \def\theequation{\Alph{num}.\arabic{equation}}\\ Though not gauge-gauge scattering processes, we require $q \bar q \to \gamma \gamma$, $q \bar q \to ZZ$ and $gg\to q\bar q$ for the processes $p p \to \gamma \gamma +X$, $p p \to ZZ +X$ and $pp \to 2~{\rm jets}~+X$, respectively. As in the case of $gg\to gg$, these processes have an SM contribution and, therefore, also an interference term of the graviton exchange with the SM diagrams.\footnote{The differential cross section for $q\bar q \to \gamma \gamma$ including the SM and graviton exchanges was also derived by Cheung in \cite{4gammapaper}.} The pure SM differential cross sections for these processes are: \begin{eqnarray} {d \sigma_{q\bar q \to \gamma \gamma}^{SM} \over d z} &=& \frac{e^4 Q_q^4}{48 \pi s} \left( \frac{1+z^2}{1-z^2} \right) ~,\label{h1} \\ {d \sigma_{q\bar q \to ZZ}^{SM} \over d z} &=& \frac{e^4}{24 \pi s}~ \frac{(g_L^q)^4+(g_R^q)^4}{s_W^4(1-s_W^2)^2}~\beta_Z \times \nonumber \\ &&~\frac{2+\beta_Z^2(3-\beta_Z^4)-z^2 \beta_Z^2(9-10 \beta_Z^2+\beta_Z^4)- 4z^4\beta_Z^4}{\left[ (1+\beta_Z^2)^2-4z^2\beta_Z^2 \right]^2} ~,\label{h2}\\ {d \sigma_{g g\to q\bar q}^{SM} \over d z} &=& \frac{g_s^4} {1536 \pi s} \left( \frac{7+16 z^2+9z^4}{1-z^2} \right) ~.\label{h3} \end{eqnarray} \noindent where $\beta_Z$ is defined in eq.~(\ref{betaz}) and $s_W=\sin\theta_W$, where $\theta_W$ is the weak mixing angle.
Also, $g_L^q$ and $g_R^q$ are the left- and right-handed couplings of the $Z$-boson to quarks; for $q=u$ (up quark) $g_L^u=1/2-2s_W^2/3$, $g_R^u=-2s_W^2/3$ and for $q=d$ (down quark) $g_L^d=-1/2+s_W^2/3$, $g_R^d=-s_W^2/3$. The pure gravity mediated differential cross sections for these processes are: \begin{eqnarray} {d \sigma_{q \bar q \to \gamma \gamma}^G \over d z} &=& {\pi F^2\over 192 s} \left ( {s^4\over m_D^8} \right ) (1-z^4) ~,\label{h4}\\ {d \sigma_{q \bar q \to ZZ}^G \over d z} &=& {\pi F^2\over 384 s} \left ( {s^4\over m_D^8} \right ) \beta_Z \times \nonumber\\ &&~~~~\left[ 4 +3\beta_Z^4 z^2(1-z^2) - 2 \beta_Z^2(1+z^2) \right] ~,\label{h5}\\ {d \sigma_{gg\to q\bar q}^G \over d z} &=& \frac{9}{4}{d \sigma_{q \bar q \to \gamma \gamma}^G \over d z} ~,\label{h6} \end{eqnarray} \noindent and the corresponding interference terms are: \begin{eqnarray} {d \sigma_{q \bar q \to \gamma \gamma}^I \over d z} &=& {e^2 Q_q^2 F\over 48 s} \left ( {s^2\over m_D^4} \right ) (1+z^2) ~,\label{h7}\\ {d \sigma_{q \bar q \to ZZ}^I \over d z} &=& \frac{e^2 F}{48 s}~ \frac{(g_L^q)^2+(g_R^q)^2}{s_W^2(1-s_W^2)} ~ \beta_Z \times \nonumber \\ &&~~~~\frac{-2-\beta_Z^2(1-\beta_Z^2)+5 z^2 \beta_Z^2 (1-\beta_Z^2) +2z^4\beta_Z^4}{(1+\beta_Z^2)^2-4z^2\beta_Z^2} ~,\label{h8}\\ {d \sigma_{gg\to q\bar q}^I \over d z} &=& \frac{3}{8}{d \sigma_{q \bar q \to \gamma \gamma}^I \over d z} ~.\label{h9} \end{eqnarray}
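The remaining angular integrals quoted in the appendices can be cross-checked in the same way. The sketch below (again our own illustration, assuming only sympy and the expressions in eqs.(\ref{b2})--(\ref{b3}) and (\ref{d2})--(\ref{d3}); it is not part of the original calculation) verifies two representative cases:

\begin{verbatim}
# Illustrative cross-check of two appendix integrals.  The common
# prefactor pi*F^2/s * (s^4/m_D^8) (and, for appendix D, the overall
# factor beta_Z) is dropped on both sides.
import sympy as sp

z, x = sp.symbols('z x_Z')

# Appendix B: integrating eq.(B.2) over z reproduces eq.(B.3).
b = sp.integrate((z**4 + 6*z**2 + 1) / 8, (z, -1, 1))
assert b == sp.Rational(4, 5)

# Appendix D: integrating eq.(D.2) over z reproduces eq.(D.3).
d2 = (3 + 4*x - 4*x*z**2 + z**2)*(1 + 12*x - 12*z**2*x + 3*z**2)/128
d = sp.integrate(d2, (z, -1, 1))
assert sp.simplify(d - (48*x**2 + 56*x + 13)/120) == 0
\end{verbatim}
\newpage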
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Although galaxies cover a wide range in properties, such as luminosity, Hubble type and environment, their globular cluster systems are remarkably similar in many respects. In particular, all globular cluster luminosity functions (GCLFs) can be crudely represented by a Gaussian with a peak located at M$_V$ $\sim$ --7.5 and a width of $\sigma$ $\sim$ 1.2. The peak, or turnover magnitude, has been used with some success as an independent distance estimator to galaxies (see Jacoby {\it et al. } 1992). Its usefulness relies on the assumption that the turnover magnitude is universal for all galaxies. It has been suggested that the GCLF is not universal, but rather varies systematically with GC metallicity (Ashman, Conti \& Zepf 1995), the galaxy Hubble type (Secker \& Harris 1993) or galaxy environment (Blakeslee \& Tonry 1996). If any of these `second parameter' effects are confirmed, then accurate distance estimates using the GCLF method will require an appropriate correction. In the case of GC metallicity, Ashman {\it et al. } (1995) have quantified the effects of metallicity on the turnover magnitude and show that once a metallicity correction ($\sim$ 0.1 mag) is applied, the GCLF gives distances in good agreement with other methods. To fully resolve these issues, a number of high quality GCLFs are required covering a range of Hubble types and environments. One approach is to study galaxies that belong to a nearby group. This has the benefit of allowing the GCLF--determined distances to be directly compared with other distance methods for galaxies in the same group. Fleming {\it et al. } (1995) aimed to use this approach on the nearby Coma I cloud by examining the GCLF in NGC 4494 (E1) and NGC 4565 (Sb). Their results will be discussed below. The Coma I cloud is a small group of galaxies in the foreground of the well--known Coma cluster of galaxies (A1656). It is dominated by spiral galaxies but also includes a few ellipticals (Gregory \& Thompson 1977). Here we examine the GCLFs of the two brightest ellipticals which may be members of the Coma I cloud, namely NGC 4278 and NGC 4494. Both galaxies contain a kinematically--distinct core which may have resulted from a merger. Using {\it Hubble Space Telescope} Wide Field and Planetary Camera 2 (WFPC2) data of Forbes {\it et al. } (1996), we can probe the GCLF in these galaxies to about 2 magnitudes beyond the turnover magnitude. The WFPC2 data have the additional advantages of very low background contamination with no blending of GCs in the central galaxy region. Our method is similar to that applied to the GCLF of NGC 4365 (Forbes 1996). In particular, we use the maximum likelihood code of Secker \& Harris (1993) to fit the GCLF and the absolute turnover magnitude determination of Sandage \& Tammann (1995), i.e. M$_V^0$ = --7.62 $\pm$ 0.08. We compare the GCLF distance modulus with other methods and estimate the Hubble constant. \section{Observations and Data Reduction} Details of the WFPC2 data for the GCs in NGC 4278 and NGC 4494 are presented, along with 12 other ellipticals, in Forbes {\it et al. } (1996). From the 2 $\times$ 500s F555W images, we use DAOPHOT (Stetson 1987) to detect GCs. Detection is based on fairly conservative limits of flux threshold, shape, sharpness and size. After checking against a list of hot pixels, the final contamination from cosmic rays, foreground stars, background galaxies and hot pixels is less than a few percent. We did not apply any color selection to the GC lists.
The GC magnitudes have been converted into Johnson V and corrected for Galactic extinction. When examining the GCLF of NGC 4365 (Forbes 1996), we showed that the fraction of detected GCs as a function of magnitude, once the threshold criterion is adjusted for a 0.3 mag sensitivity difference (Burrows {\it et al. } 1993), is similar for all four CCDs. For NGC 4494 we were able to use all CCDs, as the dust is confined to a small ($\sim$ 1$^{''}$) ring which does not appear to affect any GCs (Carollo {\it et al. } 1996). In the case of NGC 4278, dust covers much of the PC CCD. We have therefore decided not to include GCs from the PC in this analysis. \section{Modeling} A full description of the modeling processes, including completeness tests, photometric errors, background contamination and maximum likelihood fitting, is given in Forbes (1996). To summarize, we carried out simulations to test the ability of DAOPHOT to detect GCs on actual WFC images for both galaxies. These completeness tests, and the photometric error in measuring GC magnitudes, are given in Figure 1 for NGC 4278 and Figure 2 for NGC 4494. Both figures are similar to that found for NGC 4365 (Forbes 1996). The 50\% completeness level is V = 24.9 for NGC 4278 and V = 24.8 for NGC 4494. We ignore GCs with magnitudes fainter than these to avoid large incompleteness corrections. A small correction is made for background contamination based on similar exposure WFPC2 images from the Medium Deep Survey (Forbes {\it et al. } 1994). Finally we fit the background--corrected GCLF using the maximum likelihood code of Secker \& Harris (1993), which takes account of the completeness and photometric error variations with magnitude. We fit both Gaussian and $t_5$ distributions to the GCLF. \section{Results and Discussion} We have detected 241 GCs in NGC 4278 and 148 in NGC 4494 to the 50\% completeness level. The results of fitting the GCLF of each galaxy, with Gaussian and $t_5$ functions, are summarized in Table 1. In particular, the average turnover magnitudes are m$_V ^0$ = 23.23 $\pm$ 0.11 for NGC 4278 and m$_V ^0$ = 23.07 $\pm$ 0.13 for NGC 4494. As these magnitudes agree within the combined errors, this would indicate that both galaxies are at the same distance. The probability contours output from the maximum likelihood code for the Gaussian fit, over a range of 0.5--3 standard deviations, are shown in Figures 3 and 4. In Figures 5 and 6 we show the binned GCLF with our best--fit Gaussian superposed for each galaxy. Note that the fitting procedure does not use binned data but rather treats each data point individually. These figures clearly show that the 50\% completeness limit is almost 2 magnitudes fainter than the turnover magnitude, giving us additional confidence that the turnover is well determined. This makes our data among the most complete GCLFs published to date in terms of sampling the luminosity function. The number of `missing' GCs, i.e. those fainter than the limiting magnitude, is $\le$ 10\%. Fleming {\it et al. } (1995) have recently investigated the GCLF of NGC 4494 and the Sb spiral NGC 4565. The aim of their study was to derive a GCLF distance for galaxies of different Hubble types located in the same group. This would allow a direct test of whether or not the GCLF turnover magnitude depends on Hubble type. They used the same CFHT $\sim$1$^{''}$ V band images used by Simard \& Pritchet (1994) for a surface brightness fluctuation (SBF) study of these galaxies.
The data consisted of two pointings for each galaxy. Using DAOPHOT for GC detection and completeness tests, their 50\% completeness limits for the two pointings were V = 23.2, 24.0 for NGC 4494 and V = 24.5, 24.3 for NGC 4565. For a Gaussian fit to their GCLF data of NGC 4494 they quote $m_V ^0$ = 23.6 $\pm$ 0.4 with $\sigma$ fixed at 1.4. The results for NGC 4565 are $m_V ^0$ = 22.63 $\pm$ 0.2, $\sigma$ = 1.35 $\pm$ 0.22. In both cases there is considerable scatter in their faint magnitude bins (see their Figures 5 and 6). They concluded that NGC 4565 was in the Coma I cloud but that NGC 4494 lies in the background. We now consider whether their data, and hence inferred distances, are consistent with our dataset. We first note that all Fleming {\it et al. } magnitudes should be 0.05$^m$ brighter after applying a Galactic extinction correction. We start with NGC 4494, for which their 50\% completeness limit is comparable to their estimated turnover magnitude. For a Gaussian fit our data give $m_V ^0$ = 23.05 $\pm$ 0.13, which is {\it not} consistent with their result. However, an important distinction is that they fixed $\sigma$ to be 1.4 (their data did not warrant fits to both $\sigma$ and $m_V ^0$). It has been shown that uncertainties in $\sigma$ correspond to uncertainties in $m_V ^0$, so that a change from $\sigma$ = 1.4 (Fleming {\it et al. }) to 1.1 (us) would translate into a brighter $m_V ^0$ by about 0.5$^m$ (e.g. Hanes 1977; Secker \& Harris 1993). Thus the Fleming {\it et al. } GCLF would have a turnover of $m_V ^0$ $\sim$ 23.05. An alternative, and perhaps more straightforward, way to show this is given by Figure 7. This shows the Fleming {\it et al. } data for NGC 4494 and the best fit Gaussian to our data for NGC 4494. We include two Gaussians arbitrarily scaled up and down by a factor of two vertically (the turnover magnitude is held constant, but the dispersion is allowed to vary). This figure indicates that, within the error bars, the Fleming {\it et al. } data for NGC 4494 are consistent with a Gaussian that has a turnover of $m_V ^0$ = 23.05. As a further test, we re--fit their data but excluded the faintest bin and allowed the Gaussian dispersion $\sigma$ to be a free parameter (along with the turnover magnitude and the normalization). This gave m$_V ^0$ = 23.27 $\pm$ 0.4 (after extinction correction) and $\sigma$ = 1.3 $\pm$ 0.3. We conclude that the ground--based dataset of Fleming {\it et al. } (1995) for NGC 4494 and our WFPC2 data {\it are} consistent, albeit with some large scatter. Next we consider NGC 4565, in particular whether it is at the same distance as NGC 4494 (and NGC 4278). Figure 7 shows the Fleming {\it et al. } data for the spiral NGC 4565. The data lie between the best fit Gaussian to our NGC 4494 data and one scaled down by a factor of two. Thus, as with NGC 4494, the ground--based data for NGC 4565 are consistent with a Gaussian of $m_V ^0$ = 23.05. It is difficult to rule out the possibility that the turnover magnitude is brighter by $\sim$ 0.5$^m$ (this difference is too large to be explained as a metallicity effect; Ashman {\it et al. } 1995). We now calculate the distance modulus for our data from the apparent turnover magnitude. As noted in the introduction, Ashman {\it et al. } (1995) have advocated that a small correction be applied to the universal value based on GC metallicity. Such a metallicity correction has been shown by them to improve GCLF distance estimates.
In the absence of spectroscopic measures, we can estimate the mean metallicity of a GC system assuming [Fe/H] = 5.051 (V--I) -- 6.096 (Couture {\it et al. } 1990). Using the mean color from Forbes {\it et al. } (1996), we derive [Fe/H] = --0.79 for NGC 4278 and --0.84 for NGC 4494, giving metallicity corrections of $\Delta$M$_V$ = 0.16 and 0.14 respectively. Thus we make the universal value of M$_V ^0$ = --7.62 $\pm$ 0.08 (Sandage \& Tammann 1995) fainter by 0.16 or 0.14 magnitudes. In Table 2 we summarize distance determinations from other workers for galaxies in the Coma I cloud. We include the GCLF distance for NGC 4565 from Fleming {\it et al. } (1995), after making a 0.05 Galactic extinction correction and assuming M$_V ^0$ = --7.62 $\pm$ 0.08 with no metallicity correction. From V band surface brightness fluctuations, Simard \& Pritchet (1994) estimate (m -- M) = 30.88 $\pm$ 0.3 for NGC 4494, and 30.08 $\pm$ 0.07 for NGC 4565. Using the planetary nebula luminosity function (PNLF) method, Jacoby, Ciardullo \& Harris (1996) get 30.54 $\pm$ 0.05 for NGC 4494 and 30.21 $\pm$ 0.08 for NGC 4565. From a mass model of the Virgo region and an {\it assumption} about the `triple value ambiguity' in velocity, Tully \& Shaya (1984) give distances to NGC 4494 and NGC 4565 which correspond to 30.34 and 30.21 respectively. We will assume an error of $\pm$ 0.3$^m$ on these estimates. Finally, Aaronson \& Mould (1983) find (m -- M) = 30.42 $\pm$ 0.32 for several spiral galaxies in the direction of the Coma I cloud using the infrared Tully--Fisher relation. The distance modulus for NGC 4494 ranges from (m -- M) = 30.34 (mass models of Tully \& Shaya 1984) to 30.88 (SBF work of Simard \& Pritchet 1994) with a weighted mean of 30.54 $\pm$ 0.07. Our value for NGC 4278 (30.61 $\pm$ 0.14) is consistent with the NGC 4494 mean. Most recently, I band SBF measurements also indicate that NGC 4278 and NGC 4494 have essentially the same distance modulus, albeit at the upper range of values listed in Table 2 (Tonry 1996). For NGC 4565 the weighted mean distance modulus is 30.10 $\pm$ 0.05. This is 0.4$^m$ or 2 Mpc closer than NGC 4494. This suggests that NGC 4565 lies in the foreground relative to NGC 4494. It has been suggested that the Coma I cloud may consist of two sub--groups, one around NGC 4565 and the other associated with the S0 galaxy NGC 4274 and NGC 4278 (de Vaucouleurs 1976). This claim was questioned by Gregory \& Thompson (1977), who found no particular evidence that the Coma I cloud formed two sub--groups. However, they did note that the group formed a bar--like structure of dimensions 0.9 $\times$ 2.5 Mpc and that the galaxies with the {\it lower} velocities tend to be systematically fainter, i.e. located further from us. Table 2 shows that in each case, other workers found NGC 4494 to be more distant than NGC 4565 for a given distance method. Our preferred interpretation is that both NGC 4494 and NGC 4278 are at the same distance and are located at the far end of the Coma I cloud, whereas NGC 4565 is located 2 Mpc closer at the front end of the group. Aaronson \& Mould (1983) caution that the redshifts for galaxies in the Coma I cloud may not correlate well with distance, given the proximity of the group to the Virgo cluster. Nevertheless, we will attempt to estimate the Hubble constant from our measurements. If we calculate the mean velocity of the eight galaxies associated with the NGC 4274 sub--group (de Vaucouleurs 1976), and include NGC 4494, we get 880 km s$^{-1}$.
The correction for motion with respect to the Local Group using solution number 2 of Yahil, Tammann \& Sandage (1977) is $\sim$ --40 km s$^{-1}$, giving 840 km s$^{-1}$. The Virgocentric infall component from Tammann \& Sandage (1985) is $\sim$ 200 km s$^{-1}$, which gives a corrected recession velocity of 1020 km s$^{-1}$. Using a distance modulus of 30.54 and this corrected velocity we estimate a Hubble constant of $\sim$ 80 km s$^{-1}$ Mpc$^{-1}$. Finally, we have derived the local (within 100$^{''}$ radius) and total GC specific frequency ($S$) for each galaxy following the method described in Forbes (1996). The local and total values are similar, being $\sim$ 5 for NGC 4278 and $\sim$ 2 for NGC 4494. The richness of the GC system around NGC 4494 appears to be lower than that of a typical elliptical ($S$ $\sim$ 5; van den Bergh 1995). \section{Conclusions} Using WFPC2 data of Forbes {\it et al. } (1996) we have fit the globular cluster luminosity function (GCLF) of two ellipticals, NGC 4278 and NGC 4494. The first is generally thought to lie in the Coma I cloud, whereas the latter has been suggested to lie in the background. Both the Gaussian and $t_5$ profile fits give similar results, namely a turnover magnitude of m$_V^0$ = 23.23 $\pm$ 0.11 for NGC 4278 and m$_V^0$ = 23.07 $\pm$ 0.13 for NGC 4494. The fitted dispersions ($\sigma$ $\sim$ 1.1$^m$) are somewhat smaller than typical values for other ellipticals. The limiting magnitude, as determined by completeness tests, is about 2 magnitudes fainter than these values. We derive distance moduli of 30.61 $\pm$ 0.14 and 30.50 $\pm$ 0.15 for NGC 4278 and NGC 4494 respectively, assuming an absolute turnover magnitude of M$_V$ = --7.62 $\pm$ 0.08 from Sandage \& Tammann (1995) and a small metallicity correction based on the precepts of Ashman {\it et al. } (1995). We compare our distance measure with the ground--based GCLF study of Fleming {\it et al. } (1995) and other distance determinations for galaxies in the Coma I cloud. Our distance moduli lie within the range of published values. We conclude that both NGC 4278 and NGC 4494 {\it are} members of the Coma I cloud, and speculate that they lie at the far end of a bar structure, the near end of which is associated with NGC 4565. Finally, we make a rough estimate of the Hubble constant and globular cluster specific frequency from our data. \noindent {\bf Acknowledgments}\\ We are particularly grateful to J. Secker for the use of his maximum likelihood code and useful suggestions. We also thank J. Blakeslee, J. Brodie and C. Grillmair for helpful discussions. The referee is thanked for several suggestions that have improved the paper. This research was funded by the HST grant AR-05794.01-94A.\\ \noindent{\bf References} \noindent Aaronson, M., \& Mould, J. 1983, ApJ, 265, 1\\ Ashman, K. M., Conti, A., \& Zepf, S. E. 1995, AJ, 110, 1164\\ Blakeslee, J. P., \& Tonry, J. L. 1996, ApJ, in press\\ Burrows, C., {\it et al. } 1993, Hubble Space Telescope Wide Field and Planetary Camera 2 Instrument Handbook, STScI\\ Carollo, C. M., Franx, M., Illingworth, G. D., \& Forbes, D. A. 1996, ApJ, submitted\\ Couture, J., Harris, W. E., \& Allwright, J. W. B. 1990, ApJS, 73, 671\\ de Vaucouleurs, G. 1976, Stars and Stellar Systems, edited by A. Sandage, M. Sandage and J. Kristian (Chicago: University of Chicago Press) v9, p557\\ Forbes, D. A. 1996, AJ, in press\\ Forbes, D. A., Elson, R. A. W., Phillips, A. C., Illingworth, G. D. \& Koo, D. C. 1994, ApJ, 437, L17\\ Forbes, D. A., Franx, M., Illingworth, G.
D., \& Carollo, C. M. 1996, ApJ, in press\\ Fleming, D. E. B., Harris, W. E., Pritchet, C. J., \& Hanes, D. A. 1995, AJ, 109, 1044\\ Gregory, S. A., \& Thompson, L. A. 1977, ApJ, 213, 345\\ Hanes, D. A. 1977, MNRAS, 180, 309\\ Jacoby, G. H., {\it et al. } 1992, PASP, 104, 599\\ Jacoby, G. H., Ciardullo, R., \& Harris, W. E. 1996, ApJ, 462, 1\\ Sandage, A., \& Tammann, G. A. 1995, ApJ, 446, 1\\ Secker, J., \& Harris, W. E. 1993, AJ, 105, 1358\\ Simard, L., \& Pritchet, C. J. 1994, AJ, 107, 503\\ Stetson, P. B. 1987, PASP, 99, 191\\ Tammann, G. A., \& Sandage, A. 1985, ApJ, 294, 81\\ Tonry, J. L. 1996, The Extragalactic Distance Scale workshop held at Space Telescope Science Institute\\ Tully, R. B., \& Shaya, E. J. 1984, ApJ, 281, 31\\ van den Bergh, S. 1995, AJ, 110, 2700\\ Yahil, A., Tammann, G. A., \& Sandage, A. 1977, ApJ, 217, 903\\ \begin{figure*}[p] \centerline{\psfig{figure=fig1.epsi,width=300pt}} \caption{\label{fig1} {\bf a)} Completeness function for NGC 4278 from simulations. Circles show the fraction of simulated GCs detected in 0.1 magnitude bins. A typical error bar is shown in the lower left. {\bf b)} Photometric error for NGC 4278 as a function of GC V magnitude determined from DAOPHOT. Circles show the data points, and the dashed line an exponential fit to the data of the form p.e. = exp~[a~(V~--~b)]. } \end{figure*} \begin{figure*}[p] \centerline{\psfig{figure=fig2.epsi,width=300pt}} \caption{\label{fig2} {\bf a)} Completeness function for NGC 4494 from simulations. Circles show the fraction of simulated GCs detected in 0.1 magnitude bins. A typical error bar is shown in the lower left. {\bf b)} Photometric error for NGC 4494 as a function of GC V magnitude determined from DAOPHOT. Circles show the data points, and the dashed line an exponential fit to the data of the form p.e. = exp~[a~(V~--~b)]. } \end{figure*} \begin{figure*}[p] \centerline{\psfig{figure=fig3.epsi,width=300pt}} \caption{\label{fig3} Probability contours for a Gaussian fit to the globular cluster luminosity function of NGC 4278. Contours represent 0.5 to 3 standard deviations probability limits from the best estimate (see Table 1). } \end{figure*} \begin{figure*}[p] \centerline{\psfig{figure=fig4.epsi,width=300pt}} \caption{\label{fig4} Probability contours for a Gaussian fit to the globular cluster luminosity function of NGC 4494. Contours represent 0.5 to 3 standard deviations probability limits from the best estimate (see Table 1). } \end{figure*} \begin{figure*}[p] \centerline{\psfig{figure=fig5.epsi,width=300pt}} \caption{\label{fig5} Globular cluster luminosity function for NGC 4278. The raw data is shown by a dashed line, and by a thin solid line after completeness correction has been applied. The maximum likelihood best fit Gaussian profile, which includes the effects of photometric error and background contamination, is superposed as a thick solid line. Note that the fitting procedure does not use binned data. } \end{figure*} \begin{figure*}[p] \centerline{\psfig{figure=fig6.epsi,width=300pt}} \caption{\label{fig6} Globular cluster luminosity function for NGC 4494. The raw data is shown by a dashed line, and by a thin solid line after completeness correction has been applied. The maximum likelihood best fit Gaussian profile, which includes the effects of photometric error and background contamination, is superposed as a thick solid line. Note that the fitting procedure does not use binned data.
} \end{figure*} \begin{figure*}[p] \centerline{\psfig{figure=fig7.epsi,width=300pt}} \caption{\label{fig7} Comparison of the ground--based data from Fleming {\it et al. } (1995) with our data for NGC 4494. Open circles represent the Fleming {\it et al. } data for NGC 4494, and the filled circles for NGC 4565. The three solid lines represent our best fit Gaussian to the WFPC2 data on NGC 4494, together with versions scaled up and down arbitrarily by a factor of two (the turnover magnitude is held fixed but the dispersion is allowed to vary). Within the large scatter, the ground--based data for NGC 4494 and NGC 4565 could be consistent with a turnover magnitude of $m_V ^0$ = 23.05, as found from the WFPC2 data. } \end{figure*} \begin{figure*} \centerline{\psfig{figure=table1.epsi,width=300pt}} \end{figure*} \begin{figure*} \centerline{\psfig{figure=table2.epsi,width=300pt}} \end{figure*} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Graph neural networks (GNNs) are efficient approaches for learning the representations of structured graph data (e.g., molecules, citation networks) \cite{xu2018powerful}. In recent years, several types of GNNs have been proposed for predicting molecular properties, and they have achieved excellent results~\cite{duvenaud2015convolutional, kearnes2016molecular, wu2018moleculenet, gilmer2017neural, lamb2020bayesian, hao2020asgn}. Moreover, the application of GNNs has accelerated work in many related domains, including drug discovery \cite{wieder2020compact, feinberg2018potentialnet}, biology \cite{bove2020prediction}, and physics \cite{shlomi2020graph}, and these GNNs reduce computational cost compared to traditional first-principles methods such as Density Functional Theory \cite{hao2020asgn, lu2019molecular}. In practice, there are many GNN variants that can be employed for molecular property prediction. Each variant is proposed based upon a distinct idea for feature learning of molecules. For example, GC (graph convolutional network) \cite{duvenaud2015convolutional} exploits neural architectures to generalise the chemical operation of the circular fingerprint to extract molecular features. In contrast, Weave \cite{kearnes2016molecular} and MPNN (message passing neural network) \cite{gilmer2017neural} have been proposed to learn molecular features by taking readout operations \cite{wu2020comprehensive} on atomic features. To learn atomic features, Weave applies a global convolution operation, and MPNN uses the message passing process. However, hyperparameter selection is a direct impediment for GNNs to achieve excellent results. In general, the search for optimal hyperparameters is often a trial-and-error process. Traditionally, people used to adjust hyperparameters manually, but this requires domain experience and intuition. To extricate people from this predicament, random search (RS) has been employed for hyperparameter optimisation (HPO). In brief, RS draws hyperparameter values from uniform distributions within given ranges. The drawn hyperparameter values are evaluated on an objective function, and the one with the best performance is selected when the given computational budget is exhausted. Although very simple, RS has proved to be efficient for HPO of neural networks for many problems \cite{bergstra2011algorithms}. In recent years, an increasing number of strategies have been proposed for HPO. TPE \cite{bergstra2011algorithms} and CMA-ES \cite{hansen2016cma} are two state-of-the-art HPO algorithms, and they were proposed to improve the efficiency of the search for promising hyperparameters by utilising the experience of previous trials. In this paper, a trial denotes the process of evaluating a hyperparameter setting on an objective function \cite{bergstra2012random}. Research on HPO of GNNs for molecular property prediction is still in its infancy. For example, the pioneering work on GNNs presented in \cite{duvenaud2015convolutional, gilmer2017neural, wu2018moleculenet} did not discuss the problem of HPO in detail. Meanwhile, most HPO methods have not been explored in terms of their efficiency on GNNs when facing this type of problem. Their performance may need to be further investigated because the sizes of molecular datasets vary from hundreds to thousands, which are far smaller than those of the datasets used for typical deep learning applications (e.g., image recognition, natural language processing).
At the same time, predicting molecular properties requires more sophisticated neural architectures to process irregular molecular structures, which is different from image processing problems that have regular spatial patterns within image patches. Therefore, it has become necessary to explore the performance of existing HPO methods on GNNs in molecular domains. This motivates our research, and we conducted a methodology comparison and experimental analysis for RS, TPE, and CMA-ES to assess their effects on GNNs as HPO methods. We expect that our research can inform researchers in both molecular machine learning and the chemical and materials sciences. The contributions of our research are summarised as below: \begin{itemize} \item We conducted systematic experiments to compare and analyse HPO methods including RS, TPE, and CMA-ES for GNNs in the molecular domain in terms of their features, computational cost, and performance. \item Our research on HPO for GNNs can be applied to a wider range of domains such as the physical and social sciences. \item The outcomes of our research will contribute to the development of molecular machine learning \cite{pfluger2020molecular} as well as HPO for GNNs in general. \end{itemize} The rest of this paper is organized as follows. In Section \ref{sec: related work}, the related work of RS, TPE, and CMA-ES will be presented. In Section \ref{sec: theoretical analysis}, we will conduct a methodology comparison of RS, TPE, and CMA-ES. After that, the design of experiments and detailed experimental results will be described and discussed in Section \ref{sec: experiment investigation}. Finally, in Section \ref{sec: conclusion}, conclusions and future work will be given. \section{Related Work}\label{sec: related work} \subsection{Random Search} \begin{algorithm} $\operatorname{RS}\left(f, T, U\right)$ \tcp*[r]{\small $T$ the total number of trials, $U$ uniform distribution} \For{$t \leftarrow 1$ \KwTo T}{ $\lambda_{t} \leftarrow U$\; $ \mathcal{L}_{t} \leftarrow \text {Evaluate } f(x_{train}, x_{valid}, \lambda_{t})$\; $\mathcal{H} \leftarrow \mathcal{H} \cup(\mathcal{L}_{t}, \lambda_{t})$\ \tcp*[r]{\small $\mathcal{H}$ historical optimization records} } \Return{$\mathcal{H}$} \caption{Random Search (RS) \cite{bergstra2012random}} \label{alg:rs} \end{algorithm} Random Search (RS) \cite{bergstra2012random} is an approach that uses uniform probability in determining iterative procedures at the price of optimality \cite{zabinsky2010random}, and it is helpful for handling many ill-structured global optimization problems with continuous and/or discrete variables \cite{zabinsky2010random}, such as HPO. The process of applying RS for HPO is shown in Algorithm~\ref{alg:rs}. In \textbf{Line 3} of Algorithm \ref{alg:rs}, a solution $\lambda_{t}$ (i.e. a set of hyperparameter values) is sampled from a uniform distribution $U$, and then evaluated on the objective function $f(x_{train}, x_{valid}, \lambda_{t})$ in \textbf{Line 4}, which is normally the most expensive step. The evaluation result $\mathcal{L}_{t}$ and the solution $\lambda_{t}$ are paired and recorded in $\mathcal{H}$. The procedures from \textbf{Line 3} to \textbf{Line 5} are iteratively executed $T$ times. Finally, the best solution is obtained by sorting the historic solutions in $\mathcal{H}$ according to their corresponding $\mathcal{L}$ values. Furthermore, Bergstra et al. \cite{bergstra2012random} hold the opinion that RS is the natural baseline for sequential HPO methods. Meanwhile, it is noted that Zabinsky
\cite{zabinsky2010random} considered that RS is likely to be able to solve large-scale problems efficiently in a way that is not possible for deterministic algorithms. However, when using RS for HPO, the disadvantage is that its performance is accompanied by high variance, and it may not produce satisfactory results given a larger search space and limited computational resources. \subsection{TPE} \begin{algorithm} $\operatorname{TPE}\left(f, T, M_{0}, S\right)$ \tcp*[r]{\small $M$ surrogate model, $S$ sampling function} \For{$t \leftarrow 1$ \KwTo T}{ $\lambda^{*}_{t} \leftarrow \operatorname{argmax}_{\lambda} S\left(\lambda, M_{t-1}\right)$\; $ \mathcal{L}_{t} \leftarrow \text {Evaluate } f(x_{train}, x_{valid}, \lambda^{*}_{t})$\; $\mathcal{H} \leftarrow \mathcal{H} \cup(\mathcal{L}_{t}, \lambda^{*}_{t}$)\; $\text { Fit a new model } M_{t} \text { to } \mathcal{H}$ \tcp*[r]{update $\mathcal{M}$} } \Return{$\mathcal{H}$} \caption{TPE \cite{bergstra2011algorithms}} \label{alg:tpe} \end{algorithm} The problem of expensive evaluation of the fitness function can be addressed by sequential model-based global optimisation (SMBO) algorithms \cite{bergstra2011algorithms, hutter2009automated, hutter2011sequential}. In HPO, a challenge is that the fitness function $f: \mathcal{X} \rightarrow \mathbb{R}$ may be expensive to evaluate given a trial of hyperparameters. Using model-based algorithms with a surrogate to approximate $f$ can reduce the evaluation cost. Typically, the core of an SMBO algorithm is to optimise the surrogate for the real fitness function, or some transformation of the surrogate. The two key components of SMBO algorithms are (1) what criterion is defined and optimized to obtain promising solutions given a model (or surrogate) of $f$, and (2) how $f$ can be approximated via historical trials/solutions. Tree-structured Parzen Estimator (TPE) \cite{bergstra2011algorithms} is an approach based on SMBO, as shown in Algorithm~\ref{alg:tpe}. Compared with RS, the significant change is in \textbf{Line 3}, in which solutions are sampled by $S$ instead of the uniform distribution $U$. In $S$, many candidates $\lambda$ are drawn according to the surrogate model $M$, and the one ($\lambda^{*}$) with the most promising performance evaluated by Expected Improvement (EI, introduced later) is returned \cite{jones2001taxonomy}. In \textbf{Line 4}, $\lambda^{*}$ is then evaluated on the fitness function $f$ and recorded in $\mathcal{H}$ in \textbf{Line 5}. In \textbf{Line 6}, the surrogate $M$ is optimised to approximate the real fitness function using the updated $\mathcal{H}$. Finally, the best solution can be obtained by sorting $\mathcal{H}$ after $T$ iterations. In the following paragraphs, we will review the most important work in TPE based on SMBO in detail. In TPE, EI \cite{jones2001taxonomy} has been chosen as the criterion to guide the search for optimal solution(s), and it keeps the balance between exploitation and exploration during the search process. The utility function is defined as $u(\lambda)=\max \left(0, f^{\prime}-f(\lambda)\right)$, where $f^{\prime}$ denotes the output of the current best solution, and $\lambda$ denotes the solutions we want to find, whose $f(\lambda)$ is expected to be as small as possible compared with $f^{\prime}$. The value of the difference between $f^{\prime}$ and $f(\lambda)$ will be returned as a reward.
In each iteration, the optimal solution $\lambda^{*}$ is given by $\mathrm{EI}_{y^{*}}(\lambda):=\int_{-\infty}^{\infty} \max \left(y^{*}-y, 0\right) p_{M}(y \mid \lambda) d y$, where $p_{M}(y \mid \lambda)$ is the surrogate of the real fitness function, and $y^{*}$ represents some quantile of the observed $y$ values. Meanwhile, modelling $p_{M}(y \mid \lambda)$ directly is costly, and TPE proposed an indirect way of modelling it through $p(\lambda \mid y)$ and Bayes' rule (Eq.~\ref{eq:1}). $p(\lambda \mid y)$ is defined by Eq.~\ref{eq:2}, where $\ell(\lambda)$ and $g(\lambda)$ are two density functions modelled by the Parzen Estimator \cite{parzen1962estimation}, a non-parametric method to approximate the probability density function of a random variable. The collected observations are sorted by the loss of $f$ and are divided into two groups based on some quantile. $\ell(\lambda)$ is generated by using the observations $\left\{\lambda^{(i)}\right\}$ such that the corresponding loss $f(\lambda)$ was less than $y^{*}$, and the remaining observations are used to generate $g(\lambda)$. In practice, a number of hyperparameter settings are sampled according to $\ell$, evaluated in terms of $g(\lambda) / \ell(\lambda)$, and the one that yields the minimum value of $g(\lambda)/\ell(\lambda)$ (i.e., the maximum of $\ell(\lambda)/g(\lambda)$), corresponding to the greatest EI, is returned. This solution is then evaluated on the fitness function, and we call this process a trial. In this way, $g(\lambda)$ and $\ell(\lambda)$ are optimized according to the updated observation set, and thus the exploration of optimal solutions moves to more promising regions of the whole search space by increasing the densities. \begin{equation}\label{eq:1} p(y \mid \lambda)=\frac{p(\lambda \mid y) * p(y)}{p(\lambda)} \end{equation} \begin{equation}\label{eq:2} p(\lambda \mid y)=\left\{\begin{array}{ll} \ell(\lambda) & \text { if } y<y^{*} \\ g(\lambda) & \text { if } y \geq y^{*} \end{array}\right. \end{equation} In TPE, tree-structured means that the hyperparameter space is tree-like, and the value chosen for one hyperparameter determines what hyperparameter will be chosen next and what values are available for it. \subsection{CMA-ES} \begin{algorithm} $\operatorname{CMA-ES}(f, G, K, \mathcal{N})$ \tcp*[r]{\small $G$ the maximum number of generations, $K$ the size of population, $\mathcal{N}$ normal distribution} \For{$g \leftarrow 1$ \KwTo G}{ \For{$k \leftarrow 1$ \KwTo K}{ $\lambda^{g}_{k} \leftarrow \mathcal{N} $\;} \For{$k \leftarrow 1$ \KwTo K}{ $\mathcal{L}_{k}^{g} \leftarrow \text {Evaluate } f(x_{train}, x_{valid}, \lambda^{g}_{k})$\; $\mathcal{H} \leftarrow \mathcal{H} \cup(\mathcal{L}^{g}_{k}, \lambda^{g}_{k})$\; } $\text{Update } \mathcal{N}$ } \Return{$\mathcal{H}$} \caption{CMA-ES \cite{hansen2016cma}} \label{alg:cmaes} \end{algorithm} Covariance matrix adaptation evolution strategy (CMA-ES) \cite{hansen2016cma} is a derivative-free evolutionary algorithm for solving black-box optimization problems, and it has been applied to HPO with large-scale parallel GPU computational resources \cite{loshchilov2016cma, bergstra2011algorithms}. The pseudo-code of CMA-ES is shown in Algorithm~\ref{alg:cmaes}. In \textbf{Line 4}, a solution $\boldsymbol{\lambda}_{k}^{(g)}$ is generated by sampling from a multivariate normal distribution $\mathcal{N}$ until the size of the population is satisfied, where $k$ denotes the index of the offspring and $g$ the generation.
In \textbf{Line 10}, similar to TPE, it exploits the historical information $\mathcal{H}$ to optimise the search process. However, it is noted that CMA-ES optimises $\mathcal{N}$ rather than a surrogate, and we will discuss this in the following paragraphs. In CMA-ES, the multivariate distribution is re-defined. Specifically, a population of solutions $\lambda^{g+1}$ (i.e., individuals or offspring) is generated by sampling from a multivariate normal distribution $\mathcal{N}$ (Eq.~\ref{eq:multivar}). In Eq.~\ref{eq:multivar}, $\mathcal{N}\left(\mathbf{0}, \boldsymbol{C}^{(g)}\right)$ is a multivariate normal distribution with zero mean and covariance matrix $\boldsymbol{C}^{(g)}$. The latter decides the distribution shape and describes the correlations of the variables. Meanwhile, $\boldsymbol{m}^{(g)}$ represents the mean value, which is the centroid of the distribution, and it determines the search region within the whole search space in generation $g$. $\sigma^{(g)}$ represents the step size, which also decides the global variance; in other words, it controls the size of the region. \begin{equation}\label{eq:multivar} \boldsymbol{\lambda}_{k}^{(g+1)} \sim \boldsymbol{m}^{(g)}+\sigma^{(g)} \mathcal{N}\left(\mathbf{0}, \boldsymbol{C}^{(g)}\right) \quad \text { for } k=1, \ldots, K, \end{equation} To promote the efficiency of sampling, the key is to update $\boldsymbol{m}^{(g+1)}$, $C^{(g+1)}$, and $\sigma^{(g+1)}$ for the new generation. The mean is updated by the weighted average of $\mu$ selected individuals, $\boldsymbol{m}^{(g+1)}=\boldsymbol{m}^{(g)}+ \sum_{i=1}^{\mu} w_{i}\left(\boldsymbol{\lambda}_{i:K }^{(g+1)}-\boldsymbol{m}^{(g)}\right)$, where $w_{i}$ denotes the weight corresponding to $\lambda_i$. The selection is according to the performance of the individuals on the objective function. The novelty of CMA-ES is that it adapts the covariance matrix by combining the rank-$\mu$ update and the rank-one update \cite{akimoto2010bidirectional}. In this way, the rank-$\mu$ update can efficiently make use of the information from the entire population. At the same time, the rank-one update can be used to exploit the information on correlations among generations from the evolution path. This combination keeps the balance between fewer generations with a large population and more generations with a smaller population. Additionally, CMA-ES introduces a mechanism of step-size control based on the evolution path (cumulative step-size adaptation of the global step-size), which aims to approximate the optimal overall step length efficiently, because the covariance matrix adaptation alone may not be able to find the optimal overall step length efficiently. Generally, CMA-ES imitates biological evolution, assuming that no matter what kind of gene changes occur, the results (traits) always follow a Gaussian distribution with some variance and zero mean. Meanwhile, the generated population is evaluated on an objective function. A part of the well-performed individuals is selected to guide the evolution, moving to the area where better individuals would be sampled with higher probability. \section{Methodology Comparison}\label{sec: theoretical analysis} \subsection{Common Features} \begin{itemize} \item \textbf{Randomness} plays an important role in RS, TPE and CMA-ES. RS is supported by a number of independent uniform distributions with random sampling to explore the hyperparameter space and find an optimal solution.
TPE and CMA-ES both have exploitation and exploration mechanisms, which means they are given a more specific region of the search space to explore compared with RS. TPE draws samples with randomness over the space of the density function $\ell(\lambda)$. Meanwhile, the sampling in CMA-ES is backed by a multivariate distribution. \item \textbf{Derivative-free} denotes that the approach does not use derivative information to guide the search for optimal solutions. As described above, RS, TPE, and CMA-ES all search for optimal solutions by drawing samples with randomness, rather than using gradient information as in the training of neural networks. \item \textbf{Termination Condition} As Section 2 shows, RS, TPE, and CMA-ES all contain loops, which means the condition to stop the optimisation needs to be preset. However, this situation might cause a dilemma of balancing computational cost and performance. \end{itemize} \subsection{Specific Features} \begin{itemize} \item \textbf{Uniform Distribution vs Gaussian Mixture Model vs Multivariate Normal Distribution} The uniform distribution is a symmetric probability distribution in which each of a finite number of values has an equal probability of being drawn. Meanwhile, in RS, the dimension of the hyperparameter solutions corresponds to the required number of uniform distributions, and each uniform distribution is independent. In contrast, in CMA-ES, the multivariate distribution is a distribution used to approximately describe a set of correlated real-valued random variables, each of which clusters around a mean value. Furthermore, TPE makes use of the Gaussian mixture model, which assumes all the points are generated from a mixture of a number of Gaussian distributions with unknown parameters. \item \textbf{Model-based vs Model-free} These are two distinct approaches originally defined in reinforcement learning \cite{haith2013model}. RS is a very representative model-free approach that directly searches for better solutions via a process of trial-and-error. In contrast, TPE is a model-based approach that first uses the density functions $\ell(\lambda)$ and $g(\lambda)$ to model the hyperparameter space in terms of the surrogate, and then searches for solutions over the space of these functions. \item \textbf{Bayesian Optimisation vs Evolutionary Strategy} The main idea of applying evolution strategies to black-box optimisation is to search through iterative adjustment of a multivariate normal distribution. The distribution is controlled by the mean and covariance, which are adjusted and moved to the area where better solutions could be sampled with higher probability. The adjustment generally has four main steps: sampling, evaluation, selecting good individuals, and updating the mean and covariance by the selected individuals. In contrast, TPE starts from Bayesian optimisation to approximate the distribution of hyperparameters and the objective function. Instead of using the Gaussian Process to model the distribution, TPE makes use of the Parzen estimator (i.e., kernel density estimation). The posterior distribution is unceasingly updated to approximate the real situation, and an acquisition function (TPE uses EI as the acquisition function) is used to approach the optimal solution. \end{itemize} \section{Experimental Investigation}\label{sec: experiment investigation} In this section, we first describe the design of our systematic experiments.
We then conduct four sets of experiments, in Sections \ref{sec:computationalcost} and \ref{sec:performanceprioity}, to compare RS, TPE, and CMA-ES from the perspectives of performance and computational cost. \subsection{Experimental Settings} To investigate the performance of HPO methods for GNNs on molecular property prediction, three representative datasets from Deep\-Chem \cite{Ramsundar-et-al-2019} (MoleculeNet) \cite{wu2018moleculenet} were selected for our experiments: ESOL (1128 records), FreeSolv (642 records), and Lipophilicity (4200 records), which respectively correspond to the tasks of predicting the following molecular properties: water solubility, hydration free energy, and octanol/water distribution coefficient. These properties are crucial in many applications. For example, in drug discovery, lipophilicity is an important indicator of a molecule's affinity, and it affects both membrane permeability and solubility \cite{lobo2020there}. Furthermore, the research presented in \cite{duong2012molecular} analyses molecular solubility data for exploring organic semiconducting materials. The above three representative molecular property datasets are therefore worth investigating, and results on them will benefit research on many related problems. Using datasets of different sizes also allows us to conduct more comprehensive analyses. Among the many GNN variants, we chose the graph convolutional network (GC) \cite{duvenaud2015convolutional} because it was designed with molecular domain knowledge in mind: the GC architecture generalises the chemical operation of circular fingerprints \cite{glen2006circular} to extract molecular features. Four hyperparameters of GC are selected for HPO: batch size $s_b$, learning rate $l_r$, the size of the fully-connected layer $s_f$, and the size of the graph convolution layers $s_g$. This selection is motivated by the related benchmark work presented in \cite{wu2018moleculenet} and by molecular domain knowledge. By default, there are two graph convolution layers of equal size and one fully-connected layer. RS, TPE, and CMA-ES are implemented with Optuna \cite{akiba2019optuna}; their arguments are set empirically or left at the defaults offered by Optuna. Meanwhile, considering that most practitioners and researchers do not have access to large-scale GPU resources of the kind available in industry (e.g., DeepMind, FAIR), we assess the performance of HPO under limited resources: all our experiments are conducted on a single GPU (GeForce GTX 1070), and the MedianPruner technique \cite{akiba2019optuna} is used to speed up HPO. We expect our experimental outcomes to help others who face similar HPO problems under limited computational resources. In our experiments, every dataset is split into training, validation, and test sets in the ratio 80\%/10\%/10\%. The training set is used to fit GC given a hyperparameter setting, and the validation set provides an unbiased evaluation of the hyperparameters during the search. The test set is used to evaluate the performance of the HPO methods. The evaluation metric is the root mean square error (RMSE) of GC, and the loss function of GC defines the evaluation function.
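As a concrete illustration of this setup, the following Python sketch shows how the three methods and the pruner can be configured in Optuna, over the search space used in the general experiments described below. The function \texttt{train\_and\_validate} is a hypothetical placeholder standing in for fitting GC with DeepChem and returning the mean validation RMSE of three repeated evaluations; the synthetic score below merely makes the sketch self-contained and runnable.

\begin{verbatim}
import optuna

def train_and_validate(s_b, l_r, s_g, s_f):
    # hypothetical placeholder: in the real experiments this fits GC on the
    # training set and returns the mean validation RMSE over three runs
    return abs(l_r - 0.0015) + 1.0 / s_g + 1.0 / s_f + s_b * 1e-5

def objective(trial):
    s_b = trial.suggest_int("s_b", 32, 256, step=32)           # batch size
    l_r = trial.suggest_float("l_r", 0.0001, 0.0016, step=0.0001)
    s_g = trial.suggest_int("s_g", 32, 256, step=32)   # graph conv layer size
    s_f = trial.suggest_int("s_f", 64, 512, step=64)   # dense layer size
    return train_and_validate(s_b, l_r, s_g, s_f)

samplers = {
    "RS":     optuna.samplers.RandomSampler(seed=0),
    "TPE":    optuna.samplers.TPESampler(seed=0),
    "CMA-ES": optuna.samplers.CmaEsSampler(seed=0),  # needs the cmaes package
}
for name, sampler in samplers.items():
    study = optuna.create_study(direction="minimize", sampler=sampler,
                                pruner=optuna.pruners.MedianPruner())
    study.optimize(objective, n_trials=100)          # 100 trials per method
    print(name, study.best_params, study.best_value)
\end{verbatim}

Note that the MedianPruner only takes effect when the objective reports intermediate values via \texttt{trial.report} and checks \texttt{trial.should\_prune}; this detail is omitted from the sketch.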
To make our experiments more persuasive, the best hyperparameter setting found by each method is given to GC, and then GC is run 30 times independently to calculate the mean of the RMSEs on the training, validation, and test datasets. Meanwhile, to statistically analyse the difference between those means, we conduct corresponding $t$-tests, in which $t$ denotes the $t$-value and $h$ the outcome of the hypothesis test. All $t$-tests in our experiments use a significance level of $\alpha=5\%$. When $h = 1$, the equal-mean hypothesis is rejected: a positive $t$ means the latter method performs better than the former, and a negative $t$ means the former is better. When $h = 0$, the equal-mean hypothesis is accepted. Empirically, during HPO we found that the results of evaluating each trial often fluctuated; to minimise the effect of this on the evaluation of HPO performance, we use the mean of the RMSEs from three repeated evaluations of GC as a single evaluation value. \subsection{Computational Cost as a Primary Consideration}\label{sec:computationalcost} In this section, we assess the performance of RS, CMA-ES, and TPE while treating the computational cost as a priority. In HPO, any discussion of method performance that ignores computational cost is questionable, because naive RS can find optimal solutions if given sufficient time and computational budget, and it is highly amenable to parallel computing. Therefore, any comparison of HPO methods must be performed under an acceptable computational cost. \subsubsection{General Experiments} Different HPO methods employ different optimisation strategies. To compare them fairly, we first assign each of them a total of 100 trials, assuming that all HPO methods then have equal computational cost. In other words, RS, TPE, and CMA-ES each have 100 opportunities to evaluate hyperparameter settings on GC. Tables~\ref{tab:generalexp esol}$\sim$\ref{tab:generalexp lipo} summarise the experiments of RS, TPE, and CMA-ES on the three DeepChem datasets (ESOL, FreeSolv, and Lipophilicity) over the defined search space, where the batch size $s_b$ and the size of the graph convolution layers $s_g$ range from 32 to 256 with an incremental step of 32; the learning rate $l_r$ ranges from 0.0001 to 0.0016 with an incremental step of 0.0001; and the size of the fully-connected layer $s_f$ ranges from 64 to 512 with a step of 64. This search space has $2^{13}$ solutions in total. Meanwhile, a $t$-test with $\alpha=5\%$ is employed to determine whether there is a significant difference between the means of the RMSEs: $h=0$ denotes the null hypothesis that the performance of GC under the two hyperparameter settings does not differ significantly, while $h=1$ means the equal-mean hypothesis is rejected. The $t$-test results for Tables~\ref{tab:generalexp esol}$\sim$\ref{tab:generalexp lipo} are summarised in Tables~\ref{tab:ttestesol}$\sim$\ref{tab:ttestlipo}, respectively. In Tables~\ref{tab:generalexp esol} and~\ref{tab:generalexp freesolv}, TPE and CMA-ES show no significant difference in performance on the test set (see Tables~\ref{tab:ttestesol} and~\ref{tab:ttestfreesolv}), yet the hyperparameter settings they found differ in $s_g$ and $s_f$ on both datasets.
A negative value of $t$ means the former method is better: for example, in the RS--TPE row of Table \ref{tab:ttestfreesolv}, the $t$ on the test set is $-3.3167$, which means RS has a smaller RMSE than TPE (i.e., RS is better than TPE). A positive value of $t$ means the latter is better than the former: for example, in the RS--TPE row of Table \ref{tab:ttestesol}, the $t$ on the test set is $1.8355$, which means the performance of TPE is better than that of RS. A larger absolute value of $t$ indicates a bigger difference. Moreover, RS outperforms the other two methods in Table~\ref{tab:generalexp freesolv} with a significant difference (see Table \ref{tab:ttestfreesolv}). We conjecture that the FreeSolv problem is more complex (the work in \cite{asthagiri2003absolute} discusses deviations in the calculation of hydration free energies), and that TPE and CMA-ES, constrained by the size of the search space, the number of trials, and the size of the dataset, got stuck in local optima. In contrast, RS uses a completely random strategy, which helps in this kind of situation. However, we believe that CMA-ES and TPE would find better solutions if given more trials and a larger search space. In Table~\ref{tab:generalexp lipo}, TPE demonstrates better performance than CMA-ES and RS, with a significant difference (see Table~\ref{tab:ttestlipo}). The Lipophilicity dataset is the largest in our experiments; compared with smaller datasets, evaluation on its validation set returns results with smaller deviations, which helps TPE and CMA-ES update their strategies towards promising solutions. However, CMA-ES did not excel on any dataset in this group of experiments; since CMA-ES is based on an evolution strategy, it depends on continually generating new offspring to find solutions, so 100 trials might restrict its performance.
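The sign and decision conventions of these $t$-tests can be reproduced with a standard two-sample $t$-test over the 30 independent RMSEs of each method, as in the following Python sketch (assuming SciPy's \texttt{ttest\_ind}; the data below are synthetic stand-ins, not the RMSEs from our experiments):

\begin{verbatim}
import numpy as np
from scipy import stats

def compare(rmse_former, rmse_latter, alpha=0.05):
    # two-sample t-test on the 30 RMSEs of each method; since a lower RMSE
    # is better, a positive t (mean of former > mean of latter) means the
    # latter method is better, and a negative t means the former is better
    t, p = stats.ttest_ind(rmse_former, rmse_latter)
    h = int(p < alpha)        # h = 1 rejects the equal-mean hypothesis
    return t, h

rng = np.random.default_rng(0)
rs  = rng.normal(0.889, 0.041, size=30)   # synthetic stand-in for RS RMSEs
tpe = rng.normal(0.867, 0.048, size=30)   # synthetic stand-in for TPE RMSEs
print(compare(rs, tpe))   # a positive t with h = 1 would favour TPE
\end{verbatim}

Whether the equal-variance (Student) or unequal-variance (Welch) form of the test is used changes only the degrees of freedom; the sketch relies on SciPy's default.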
\begin{table} \caption{The general experimental settings on the ESOL Dataset \\ \small Search space: $s_b$:32$\sim$256, $step$=32; $l_r$:0.0001$\sim$0.0016, $step$=0.0001; $s_g$:32$\sim$256, $step$=32; $s_f$:64$\sim$512, $step$=64} \centering \label{tab:generalexp esol} \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|} \hline HPO Methods & Hyperparameters & Train & Validation & Test & \\ \hline \multirow{4}{*}{RS} & $s_g$=256 & \multirow{2}{*}{$\mathbf{0.2666}$} & \multirow{2}{*}{0.9067} & \multirow{2}{*}{0.8888} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=64 & & & & \\ \cline{2-6} & $l_r$=0.0016 & \multirow{2}{*}{0.0364} & \multirow{2}{*}{0.0542} & \multirow{2}{*}{0.0411} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=64 & & & & \\ \hline \multirow{4}{*}{TPE} & $s_g$=192 & \multirow{2}{*}{0.3083} & \multirow{2}{*}{$\mathbf{0.8739}$} & \multirow{2}{*}{$\mathbf{0.8667}$} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=192 & & & & \\ \cline{2-6} & $l_r$=0.0015 & \multirow{2}{*}{0.0534} & \multirow{2}{*}{0.0401} & \multirow{2}{*}{0.0476} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=32 & & & & \\ \hline \multirow{4}{*}{CMA-ES} & $s_g$=256 & \multirow{2}{*}{0.2939} & \multirow{2}{*}{$\mathbf{0.8739}$} & \multirow{2}{*}{0.8782} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=64 & & & & \\ \cline{2-6} & $l_r$=0.0016 & \multirow{2}{*}{0.0458} & \multirow{2}{*}{0.0424} & \multirow{2}{*}{0.0562} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=32 & & & & \\ \hline \end{tabular}} \end{table} \begin{table} \centering \caption{The general experimental settings on the FreeSolv Dataset \\ \small Search space: $s_b$:32$\sim$256, $step$=32; $l_r$:0.0001$\sim$0.0016, $step$=0.0001; $s_g$:32$\sim$256, $step$=32; $s_f$:64$\sim$512, $step$=64} \label{tab:generalexp freesolv} \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|} \hline HPO Methods & Hyperparameters & Train & Validation & Test & \\ \hline \multirow{4}{*}{RS} & $s_g$=256 & \multirow{2}{*}{0.6197} & \multirow{2}{*}{$\mathbf{1.2175}$} & \multirow{2}{*}{$\mathbf{1.1040}$} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=320 & & & & \\ \cline{2-6} & $l_r$=0.0015 & \multirow{2}{*}{0.1248} & \multirow{2}{*}{0.1055} & \multirow{2}{*}{0.0995} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=32 & & & & \\ \hline \multirow{4}{*}{TPE} & $s_g$=160 & \multirow{2}{*}{0.6875} & \multirow{2}{*}{1.3425} & \multirow{2}{*}{1.2006} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=448 & & & & \\ \cline{2-6} & $l_r$=0.0015 & \multirow{2}{*}{0.1854} & \multirow{2}{*}{0.1711} & \multirow{2}{*}{0.1212} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=32 & & & & \\ \hline \multirow{4}{*}{CMA-ES} & $s_g$=256 & \multirow{2}{*}{$\mathbf{0.5792}$} & \multirow{2}{*}{1.2721} & \multirow{2}{*}{1.1967} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=64 & & & & \\ \cline{2-6} & $l_r$=0.0016 & \multirow{2}{*}{0.2653} & \multirow{2}{*}{0.1907} & \multirow{2}{*}{0.2128} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=32 & & & & \\ \hline \end{tabular}} \end{table} \begin{table} \centering \caption{The general experimental settings on the Lipophilicity Dataset \\ \small Search space: $s_b$:32$\sim$256, $step$=32; $l_r$:0.0001$\sim$0.0016, $step$=0.0001; $s_g$:32$\sim$256, $step$=32; $s_f$:64$\sim$512, $step$=64} \label{tab:generalexp lipo} \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|} \hline HPO Methods & Hyperparameters & Train & Validation & Test & \\ \hline \multirow{4}{*}{RS} & $s_g$=96 & 
\multirow{2}{*}{0.2682} & \multirow{2}{*}{0.7024} & \multirow{2}{*}{0.6949} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=384 & & & & \\ \cline{2-6} & $l_r$=0.001 & \multirow{2}{*}{0.0444} & \multirow{2}{*}{0.0279} & \multirow{2}{*}{0.0248} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=64 & & & & \\ \hline \multirow{4}{*}{TPE} & $s_g$=224 & \multirow{2}{*}{$\mathbf{0.2475}$} & \multirow{2}{*}{$\mathbf{0.6914}$} & \multirow{2}{*}{$\mathbf{0.6655}$} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=192 & & & & \\ \cline{2-6} & $l_r$=0.0015 & \multirow{2}{*}{0.0328} & \multirow{2}{*}{0.0229} & \multirow{2}{*}{0.0219} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=32 & & & & \\ \hline \multirow{4}{*}{CMA-ES} & $s_g$=32 & \multirow{2}{*}{0.3496} & \multirow{2}{*}{0.7191} & \multirow{2}{*}{0.7183} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=64 & & & & \\ \cline{2-6} & $l_r$=0.0016 & \multirow{2}{*}{0.0425} & \multirow{2}{*}{0.0309} & \multirow{2}{*}{0.0245} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=32 & & & & \\ \hline \end{tabular}} \end{table} \begin{table} \centering \caption{$t$-Test on the ESOL} \label{tab:ttestesol} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $t$ & $h$ & $t$ & $h$ & $t$ & $h$ \\ \hline HPO Methods & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c|}{Valid} & \multicolumn{2}{c|}{Test} \\ \hline RS - TPE & -3.4671 & 1 & 2.6223 & 1 & 1.8355 & 1 \\ \hline RS - CMA-ES & -2.5080 & 1 & 2.5666 & 1 & 0.7706 & 0 \\ \hline TPE - CMA-ES & 1.0984 & 0 & -0.0007 & 0 & -0.8396 & 0 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{$t$-Test on the FreeSolv} \label{tab:ttestfreesolv} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $t$ & $h$ & $t$ & $h$ & $t$ & $h$ \\ \hline HPO Methods & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c|}{Valid} & \multicolumn{2}{c|}{Test} \\ \hline RS - TPE & -1.6328 & 0 & -3.3492 & 1 & -3.3167 & 1 \\ \hline RS - CMA-ES & 0.7435 & 0 & -1.3487 & 0 & -2.1245 & 1 \\ \hline TPE - CMA-ES & 1.8011 & 1 & 1.4804 & 0 & 0.0855 & 0 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{$t$-Test on the Lipophilicity} \label{tab:ttestlipo} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $t$ & $h$ & $t$ & $h$ & $t$ & $h$ \\ \hline HPO Methods & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c|}{Valid} & \multicolumn{2}{c|}{Test} \\ \hline RS - TPE & 2.0146 & 0 & 1.6454 & 0 & 4.7676 & 1 \\ \hline RS - CMA-ES & -7.1251 & 1 & -2.1564 & 1 & -3.6088 & 1 \\ \hline TPE - CMA-ES & -10.2330 & 1 & -3.8762 & 1 & -8.6244 & 1 \\ \hline \end{tabular} \end{table} \subsubsection{Experiments on One Hour Runtime} The same number of trials does not necessarily correspond to the same computational cost in practice, because different hyperparameter trials may incur different evaluation costs. For example, a larger value of $s_f$ or $s_g$ means more trainable parameters, so the corresponding trial consumes more computational resources. Therefore, in this section we design another set of experiments, in which we assign one hour of runtime on the same hardware configuration to each HPO method on the ESOL dataset, with the same search space as defined in Table~\ref{tab:generalexp esol}, to see which method can find the best solution. The best hyperparameter setting found by each method within the hour is then used to configure GC, which is run 30 times; the results and $t$-tests are shown in Tables~\ref{tab:limitedtime} and \ref{tab:ttes1hour}.
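In Optuna, this time-budgeted protocol can be expressed directly, since \texttt{study.optimize} accepts a wall-clock \texttt{timeout} (in seconds) in place of a fixed number of trials. The following sketch reuses the \texttt{objective} function from the earlier sketch and assumes, as in our experiments, the same single-GPU hardware for each method:

\begin{verbatim}
import optuna

# reusing objective() from the earlier sketch
for name, sampler in {
    "RS":     optuna.samplers.RandomSampler(seed=0),
    "TPE":    optuna.samplers.TPESampler(seed=0),
    "CMA-ES": optuna.samplers.CmaEsSampler(seed=0),
}.items():
    study = optuna.create_study(direction="minimize", sampler=sampler,
                                pruner=optuna.pruners.MedianPruner())
    study.optimize(objective, timeout=3600)   # one hour of wall-clock time
    print(name, len(study.trials), study.best_params, study.best_value)
\end{verbatim}

The number of completed trials then differs between methods, as Table~\ref{tab:limitedtime} shows.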
In Table~\ref{tab:limitedtime}, RS completed the largest number of trials, and its performance is approximately equal to that in Table~\ref{tab:generalexp esol}, as it completed almost 100 trials, similar to the previous experiment. We believe RS is efficient and stable in such a small search space. Furthermore, TPE showed surprisingly strong performance, needing only 54 trials to achieve almost the same performance as in Table~\ref{tab:generalexp esol}. Additionally, note that TPE found two different hyperparameter settings (shown in Tables~\ref{tab:generalexp esol} and \ref{tab:limitedtime}, respectively) with almost the same performance on the test dataset. Meanwhile, as shown in Table \ref{tab:ttes1hour}, the performances of the three HPO methods within one hour of runtime differ significantly, and TPE performs best. CMA-ES fell short of our expectation that it should at least maintain performance similar to RS, and we suspect that CMA-ES may not be suitable for our particular HPO problem given the limited computational budget and the relatively small search space. The under-performance of CMA-ES may be alleviated by further exploring its ``meta-parameters'', for example its population size; however, this is an even more challenging ``meta-HPO'' problem, which is beyond the scope of this research. We will explore it in our future work. \begin{table} \centering \caption{Experiments on the ESOL Dataset given one hour running time \\ \small Search Space: $s_b$:32$\sim$256, $step$=32; $l_r$:0.0001$\sim$0.0016, $step$=0.0001; \\ $s_g$:32$\sim$256, $step$=32; $s_f$:64$\sim$512, $step$=64} \label{tab:limitedtime} \scalebox{0.624}{ \begin{tabular}{|l|l|l|l|l|l|l|} \hline HPO Methods & Number of Trials & Hyperparameters & Train & Validation & Test & \\ \hline \multirow{4}{*}{RS} & \multirow{4}{*}{96} & $s_g$=224 & \multirow{2}{*}{0.3301} & \multirow{2}{*}{0.8817} & \multirow{2}{*}{0.8994} & \multirow{2}{*}{Mean RMSE} \\ \cline{3-3} & & $s_f$=448 & & & & \\ \cline{3-7} & & $l_r$=0.0008 & \multirow{2}{*}{0.0492} & \multirow{2}{*}{0.0457} & \multirow{2}{*}{0.0544} & \multirow{2}{*}{Mean STD} \\ \cline{3-3} & & $s_b$=32 & & & & \\ \hline \multirow{4}{*}{TPE} & \multirow{4}{*}{54} & $s_g$=256 & \multirow{2}{*}{$\mathbf{0.3193}$} & \multirow{2}{*}{$\mathbf{0.8605}$} & \multirow{2}{*}{$\mathbf{0.8634}$} & \multirow{2}{*}{Mean RMSE} \\ \cline{3-3} & & $s_f$=256 & & & & \\ \cline{3-7} & & $l_r$=0.0014 & \multirow{2}{*}{0.0462} & \multirow{2}{*}{0.0400} & \multirow{2}{*}{0.0408} & \multirow{2}{*}{Mean STD} \\ \cline{3-3} & & $s_b$=32 & & & & \\ \hline \multirow{4}{*}{CMA-ES} & \multirow{4}{*}{63} & $s_g$=32 & \multirow{2}{*}{0.4287} & \multirow{2}{*}{0.9231} & \multirow{2}{*}{0.9688} & \multirow{2}{*}{Mean RMSE} \\ \cline{3-3} & & $s_f$=512 & & & & \\ \cline{3-7} & & $l_r$=0.0016 & \multirow{2}{*}{0.0933} & \multirow{2}{*}{0.0706} & \multirow{2}{*}{0.0845} & \multirow{2}{*}{Mean STD} \\ \cline{3-3} & & $s_b$=32 & & & & \\ \hline \end{tabular}} \end{table} \begin{table} \centering \caption{$t$-Test on experiments on the ESOL Dataset given one hour running time} \label{tab:ttes1hour} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $t$ & $h$ & $t$ & $h$ & $t$ & $h$ \\ \hline HPO Methods & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c|}{Valid} & \multicolumn{2}{c|}{Test} \\ \hline RS - TPE & 0.8573 & 0 & 1.8774 & 1 & 2.8449 & 1 \\ \hline RS - CMA-ES & -5.0338 & 1 & -2.6530 & 1 & -3.7178 & 1 \\ \hline TPE - CMA-ES & -5.6546 & 0 & -4.1563 & 1 &
-6.0439 & 1 \\ \hline \end{tabular} \end{table} \subsection{Performance as a Primary Consideration}\label{sec:performanceprioity} In this section, we design another group of experiments to explore RS, TPE, and CMA-ES with performance as the primary consideration, by providing as much computational budget as possible. \subsubsection{Experiments on Repeated HPO Runs}\label{sec:exponrepeatedhporuns} To compare pure HPO performance, we run each of the three HPO methods on the ESOL dataset 10 times independently; each run is assigned 100 trials, and the search space is kept the same as that defined in Table~\ref{tab:generalexp esol}. Performance is evaluated by the mean of the RMSE values of the best trial from each set of 100 trials. We do not evaluate on the test set because the 10 RMSEs correspond to 10 different hyperparameter settings; our purpose is to assess the capability of these methods on the HPO problem itself. The results and $t$-tests are shown in Tables~\ref{tab:performance} and \ref{tab:tteshporepeat}. TPE again outperforms RS and CMA-ES, and it also shows more stable performance, with a smaller standard deviation. The $t$-test in Table~\ref{tab:tteshporepeat} shows the same outcome as the test-set comparison for Table~\ref{tab:generalexp esol}: RS and CMA-ES have similar performance on this problem and search space (no significant difference). We therefore believe that CMA-ES still has room for improvement in our experiments. \begin{table} \centering \caption{Experiments on the ESOL Dataset with performance as the primary consideration} \label{tab:performance} \begin{tabular}{|l|l|l|l|} \hline & RS & TPE & CMA-ES \\ \hline Mean RMSE & 0.8529 & $\mathbf{0.8190}$ & 0.8469 \\ \hline Std & 0.0169 & 0.0090 & 0.0169 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{$t$-Test on ESOL with performance as the primary consideration} \label{tab:tteshporepeat} \begin{tabular}{|c|c|c|} \hline HPO Methods & $t$ & $h$ \\ \hline RS - TPE & 5.2891 & 1 \\ \hline RS - CMA-ES & 0.7464 & 0 \\ \hline TPE - CMA-ES & -4.3625 & 1 \\ \hline \end{tabular} \end{table} \subsubsection{Experiments on a Larger Search Space} To further investigate the performance of the HPO methods, we enlarged the search space: $s_b$ and $s_g$ now range from 8 to 512 with a step size of 8; $l_r$ ranges from 0.0001 to 0.0032 with a step size of 0.0001; and $s_f$ ranges from 32 to 1024 with a step size of 32. The new search space has $2^{22}$ configurations: the steps are smaller and the value ranges are wider. The experimental details are shown in Tables~\ref{tab:searchspaceesol}, \ref{tab:searchspacefreesolv}, and \ref{tab:searchspacelipo}, while the corresponding $t$-tests are presented in Tables \ref{tab:ttestesollarger}, \ref{tab:ttestfreesolvlarger}, and \ref{tab:ttestlipolarger}. Across the three datasets, the RMSEs of RS, TPE, and CMA-ES on the test sets generally improve compared with the experiments in Section \ref{sec:computationalcost}, given the same number of trials. Meanwhile, comparing the results on the validation and test sets for all three datasets, we do not observe over-fitting. On ESOL, TPE and CMA-ES have almost the same performance, and both are better than RS, as indicated by the $t$-tests (see Table~\ref{tab:ttestesollarger}).
In addition, on FreeSolv the three HPO methods show no significant difference in performance, while TPE and CMA-ES improved compared with the previous experiments (see Table~\ref{tab:generalexp freesolv}). It is sensible that a potentially complex problem should be given a large search space in which to find the most suitable hyperparameters. In Table \ref{tab:searchspacelipo}, the performance ranking is unchanged: TPE is the best, and RS is better than CMA-ES. \begin{table} \centering \caption{Experiments in larger search space on the ESOL Dataset \\ \small $s_b$:8$\sim$512, $step$=8; $l_r$:0.0001$\sim$0.0032, $step$=0.0001; $s_g$:8$\sim$512, $step$=8; $s_f$:32$\sim$1024, $step$=32} \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|} \hline HPO Methods & Hyperparameters & Train & Validation & Test & \\ \hline \multirow{4}{*}{RS} & $s_g$=384 & \multirow{2}{*}{$\mathbf{0.3190}$} & \multirow{2}{*}{0.8727} & \multirow{2}{*}{0.8479} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=160 & & & & \\ \cline{2-6} & $l_r$=0.0016 & \multirow{2}{*}{0.0323} & \multirow{2}{*}{0.0310} & \multirow{2}{*}{0.0453} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=24 & & & & \\ \hline \multirow{4}{*}{TPE} & $s_g$=312 & \multirow{2}{*}{0.5089} & \multirow{2}{*}{$\mathbf{0.8203}$} & \multirow{2}{*}{0.781} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=224 & & & & \\ \cline{2-6} & $l_r$=0.003 & \multirow{2}{*}{0.1281} & \multirow{2}{*}{0.096} & \multirow{2}{*}{0.057} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=8 & & & & \\ \hline \multirow{4}{*}{CMA-ES} & $s_g$=512 & \multirow{2}{*}{0.5793} & \multirow{2}{*}{0.848} & \multirow{2}{*}{$\mathbf{0.7772}$} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=1024 & & & & \\ \cline{2-6} & $l_r$=0.0032 & \multirow{2}{*}{0.1529} & \multirow{2}{*}{0.1247} & \multirow{2}{*}{0.097} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=8 & & & & \\ \hline \end{tabular}} \label{tab:searchspaceesol} \end{table} \begin{table} \centering \caption{Experiments in larger search space on the FreeSolv Dataset \\ \small $s_b$:8$\sim$512, $step$=8; $l_r$:0.0001$\sim$0.0032, $step$=0.0001; $s_g$:8$\sim$512, $step$=8; $s_f$:32$\sim$1024, $step$=32} \label{tab:searchspacefreesolv} \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|} \hline HPO Methods & Hyperparameters & Train & Validation & Test & \\ \hline \multirow{4}{*}{RS} & $s_g$=200 & \multirow{2}{*}{$\mathbf{0.3747}$} & \multirow{2}{*}{1.2412} & \multirow{2}{*}{1.0880} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=64 & & & & \\ \cline{2-6} & $l_r$=0.0030 & \multirow{2}{*}{0.0684} & \multirow{2}{*}{0.1152} & \multirow{2}{*}{0.0990} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=48 & & & & \\ \hline \multirow{4}{*}{TPE} & $s_g$=424 & \multirow{2}{*}{0.6144} & \multirow{2}{*}{$\mathbf{1.1288}$} & \multirow{2}{*}{$\mathbf{1.0620}$} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=224 & & & & \\ \cline{2-6} & $l_r$=0.0008 & \multirow{2}{*}{0.0951} & \multirow{2}{*}{0.1163} & \multirow{2}{*}{0.1115} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=16 & & & & \\ \hline \multirow{4}{*}{CMA-ES} & $s_g$=512 & \multirow{2}{*}{0.6973} & \multirow{2}{*}{1.2329} & \multirow{2}{*}{1.0835} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=32 & & & & \\ \cline{2-6} & $l_r$=0.0032 & \multirow{2}{*}{0.07819} & \multirow{2}{*}{0.1306} & \multirow{2}{*}{0.1073} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=8 & & & & \\ \hline \end{tabular}} \end{table} \begin{table} \centering
\caption{Experiments in larger search space on the Lipophilicity Dataset\\ \small $s_b$:8$\sim$512, $step$=8; $l_r$:0.0001$\sim$0.0032, $step$=0.0001; $s_g$:8$\sim$512, $step$=8; $s_f$:32$\sim$1024, $step$=32} \label{tab:searchspacelipo} \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|} \hline HPO Methods & Hyperparameters & Train & Validation & Test & \\ \hline \multirow{4}{*}{RS} & $s_g$=312 & \multirow{2}{*}{0.2570} & \multirow{2}{*}{$\mathbf{0.6736}$} & \multirow{2}{*}{0.6552} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=32 & & & & \\ \cline{2-6} & $l_r$=0.0031 & \multirow{2}{*}{0.0240} & \multirow{2}{*}{0.0285} & \multirow{2}{*}{0.0223} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=32 & & & & \\ \hline \multirow{4}{*}{TPE} & $s_g$=496 & \multirow{2}{*}{$\mathbf{0.2413}$} & \multirow{2}{*}{0.6786} & \multirow{2}{*}{$\mathbf{0.6395}$} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=32 & & & & \\ \cline{2-6} & $l_r$=0.0022 & \multirow{2}{*}{0.0188} & \multirow{2}{*}{0.0195} & \multirow{2}{*}{0.0193} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=24 & & & & \\ \hline \multirow{4}{*}{CMA-ES} & $s_g$=248 & \multirow{2}{*}{0.2442} & \multirow{2}{*}{0.6931} & \multirow{2}{*}{0.6826} & \multirow{2}{*}{Mean RMSE} \\ \cline{2-2} & $s_f$=480 & & & & \\ \cline{2-6} & $l_r$=0.0015 & \multirow{2}{*}{0.0430} & \multirow{2}{*}{0.0194} & \multirow{2}{*}{0.0167} & \multirow{2}{*}{Mean STD} \\ \cline{2-2} & $s_b$=120 & & & & \\ \hline \end{tabular}} \end{table} \begin{table} \centering \caption{$t$-test on the ESOL in larger search space} \label{tab:ttestesollarger} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $t$ & $h$ & $t$ & $h$ & $t$ & $h$ \\ \hline HPO Methods & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c|}{Valid} & \multicolumn{2}{c|}{Test} \\ \hline RS - TPE & -7.7384 & 1 & 2.7832 & 1 & 4.9451 & 1 \\ \hline RS - CMA-ES & -10.7815 & 1 & 1.0330 & 0 & 3.5578 & 1 \\ \hline TPE - CMA-ES & -2.1105 & 1 & -0.9456 & 0 & 0.1855 & 0 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{$t$-test on the FreeSolv in larger search space} \label{tab:ttestfreesolvlarger} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $t$ & $h$ & $t$ & $h$ & $t$ & $h$ \\ \hline HPO Methods & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c|}{Valid} & \multicolumn{2}{c|}{Test} \\ \hline RS - TPE & -11.0149 & 1 & 3.6976 & 1 & 0.9385 & 0 \\ \hline RS - CMA-ES & -16.7183 & 1 & 0.2580 & 0 & 0.1661 & 0 \\ \hline TPE - CMA-ES & -3.6269 & 1 & -3.2042 & 1 & -0.7475 & 0 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{$t$-test on the Lipophilicity in larger search space} \label{tab:ttestlipolarger} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $t$ & $h$ & $t$ & $h$ & $t$ & $h$ \\ \hline HPO Methods & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c|}{Valid} & \multicolumn{2}{c|}{Test} \\ \hline RS - TPE & 2.7588 & 1 & -0.7815 & 0 & 2.7989 & 1 \\ \hline RS - CMA-ES & 1.3964 & 0 & 3.0423 & 1 & -5.2825 & 1 \\ \hline TPE - CMA-ES & -0.3299 & 0 & -2.8299 & 1 & -8.9802 & 1 \\ \hline \end{tabular} \end{table} \section{Conclusion and Future Work}\label{sec: conclusion} Overall, our experimental results indicate that TPE is the best-suited HPO method for GNNs applied to our molecular property prediction problems under limited computational resources. Meanwhile, RS, although the simplest method, can achieve performance comparable to that of TPE and CMA-ES.
In future work, the use of CMA-ES on molecular problems with small datasets deserves further investigation, and we believe that CMA-ES, RS, and TPE will have very similar performance given a larger computational budget. Furthermore, as mentioned in Section \ref{sec:exponrepeatedhporuns}, the selection of the ``meta-parameters'' of HPO methods deserves more research; we will investigate the impact of the meta-parameter values of HPO methods on their performance. Finally, we expect that our work will help people from various fields (e.g., machine learning, chemistry, materials science) when they face similar interdisciplinary problems. As the applications of GNNs have been explored in, and have indeed benefited, many areas, we believe that our research outcomes will give researchers in those areas valuable insights to facilitate their work. \section*{ACKNOWLEDGMENTS} This research is supported by the Engineering and Physical Sciences Research Council (EPSRC) funded Project on New Industrial Systems: Manufacturing Immortality (EP/R020957/1). The authors are also grateful to the Manufacturing Immortality consortium. \section*{Data Statement} All data used in our experiments are from MoleculeNet \cite{wu2018moleculenet} and are publicly available at \url{http://moleculenet.ai/datasets-1}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let $J_N(K;q)$ be the colored Jones polynomial of a knot $K$ in the three-sphere $S^3$ associated with the $N$-dimensional irreducible representation of $\mathfrak{sl}(2;\C)$. We normalize it so that $J_N(\text{unknot};q)=1$. Note that $J_2(K;q)$ is the celebrated Jones polynomial \cite{Jones:BULAM385} after a suitable change of variable. \par In 1995, Kashaev introduced a complex-valued knot invariant for each natural number $N$ by using the quantum dilogarithm \cite{Kashaev:MODPLA95} and observed that its asymptotic behavior for large $N$ determines the hyperbolic volume of several hyperbolic knots \cite{Kashaev:LETMP97}. He conjectured that this is true for any hyperbolic knot. Here a knot is called hyperbolic if its complement has a unique complete hyperbolic structure with finite volume. In 2001, J.~Murakami and the author proved that his invariant coincides with $J_N\bigl(K;\exp(2\pi\sqrt{-1}/N)\bigr)$ \cite{Murakami/Murakami:ACTAM12001}. We also generalized Kashaev's conjecture to the following volume conjecture. \begin{conj}[Volume Conjecture \cite{Kashaev:LETMP97,Murakami/Murakami:ACTAM12001}] Let $K$ be a knot in $S^3$ and $\Vol(K)$ denote the simplicial volume of $S^3\setminus{K}$. Then the following equality would hold: \begin{equation}\label{eq:VC} \lim_{N\to\infty} \frac{\log\left|J_N\bigl(K;\exp(2\pi\sqrt{-1}/N)\bigr)\right|}{N} = \frac{\Vol(K)}{2\pi}. \end{equation} \end{conj} See, for example, \cite{Murakami:Columbia} for recent developments of the conjecture and its generalizations. \par As one of these generalizations, Yokota and the author \cite{Murakami/Yokota:JREIA2007} proved that for the figure-eight knot the colored Jones polynomial knows much more. Actually, we showed that if we perturb the parameter $2\pi\sqrt{-1}$ a little, the corresponding limit determines the $\SL(2;\C)$ Chern--Simons invariant associated with an irreducible representation of $\pi_1(S^3\setminus{K})$ to $\SL(2;\C)$ in the sense of Kirk and Klassen \cite{Kirk/Klassen:COMMP93}. In fact we showed the following theorem. \begin{thm}[\cite{Murakami/Yokota:JREIA2007}]\label{thm:Murakami/Yokota} Let $E$ be the figure-eight knot. There exists a neighborhood $U\subset\C$ of $0$ such that if $u\in(U\setminus{\pi\sqrt{-1}\Q})\cup\{0\}$ then the following limit exists: \begin{equation*} \lim_{N\to\infty} \frac{\log J_N\bigl(E;\exp((u+2\pi\sqrt{-1})/N)\bigr)}{N}. \end{equation*} Moreover the limit determines the $\SL(2;\C)$ Chern--Simons invariant associated with an irreducible representation of $\pi_1(S^3\setminus{E})$ to $\SL(2;\C)$ which is determined by the parameter $u$. \end{thm} On the other hand, Andersen and Hansen \cite[Theorem~1]{Andersen/Hansen:JKNOT2006} refined the volume conjecture for the figure-eight knot as follows. \begin{thm}[\cite{Andersen/Hansen:JKNOT2006}]\label{thm:Andersen/Hansen} The following asymptotic equivalence holds: \begin{equation*} \begin{split} &J_N\bigl(E;\exp(2\pi\sqrt{-1}/N)\bigr) \\ \underset{N\to\infty}{\sim}& \frac{1}{3^{1/4}} N^{3/2} \exp\left(\frac{N\Vol(E)}{2\pi}\right) \\ =\hspace{3mm}& 2\pi^{3/2} \left(\frac{2}{\sqrt{-3}}\right)^{1/2} \left(\frac{N}{2\pi\sqrt{-1}}\right)^{3/2} \exp\left(\frac{N}{2\pi\sqrt{-1}}\times\sqrt{-1}\Vol(E)\right). \end{split} \end{equation*} Note that the twisted Reidemeister torsion and the Chern--Simons invariant associated with the unique complete hyperbolic structure of $S^3\setminus{E}$ are $2/\sqrt{-3}$ and $\sqrt{-1}\Vol(E)$ respectively.
\end{thm} Note that we write $f(N)\underset{N\to\infty}{\sim}g(N)$ if and only if $\lim_{N\to\infty}f(N)/g(N)=1$ and that \eqref{eq:VC} follows from the equivalence relation above when $K$ is the figure-eight knot. \par In this paper we refine Theorem~\ref{thm:Murakami/Yokota} as Theorem~\ref{thm:Andersen/Hansen} for the case where $u$ is real. \begin{thm}\label{thm:main} Let $u$ be a real number with $0<u<\log((3+\sqrt{5})/2)=0.9624\dots$ and put $\xi:=2\pi\sqrt{-1}+u$. Then we have the following asymptotic equivalence of the colored Jones polynomial of the figure-eight knot $E$: \begin{equation}\label{eq:main} J_N(E;\exp(\xi/N)) \underset{N\to\infty}{\sim} \frac{\sqrt{-\pi}}{2\sinh(u/2)} T(u)^{1/2} \left(\frac{N}{\xi}\right)^{1/2} \exp\left(\frac{N}{\xi}S(u)\right), \end{equation} where \begin{equation*} S(u) := \Li_2(e^{u-\varphi(u)})-\Li_2(e^{u+\varphi(u)})-u\varphi(u) \end{equation*} and \begin{equation*} T(u) := \frac{2}{\sqrt{(e^u+e^{-u}+1)(e^u+e^{-u}-3)}}. \end{equation*} Here $\varphi(u):=\arccosh(\cosh(u)-1/2)$ and \begin{equation*} \Li_2(z) := -\int_{0}^{z}\frac{\log(1-x)}{x}\,dx \end{equation*} is the dilogarithm function. \end{thm} Note that $S(u)$ defines the $\SL(2;\C)$ Chern--Simons invariant and $T(u)$ is the cohomological twisted Reidemeister torsion, both of which are associated with an irreducible representation of $\pi_1(S^3\setminus{E})$ into $\SL(2;\C)$ sending the meridian to an element with eigenvalues $\exp(u/2)$ and $\exp(-u/2)$. See Section~\ref{sec:interpretation} for details. \begin{rem} Since the figure-eight knot is amphicheiral, we have $J_N(E;q^{-1})=J_N(E;q)$. Thus if $u<0$ we have \begin{equation*} \begin{split} J_N\bigl(E;\exp((u+2\pi\sqrt{-1})/N)\bigr) &= J_N\bigl(E;\exp((-u-2\pi\sqrt{-1})/N)\bigr) \\ &= \overline{J_N\bigl(E;\exp((-u+2\pi\sqrt{-1})/N)\bigr)}, \end{split} \end{equation*} where $\overline{z}$ denotes the complex conjugate of $z$. So if we prove Theorem~\ref{thm:main} for $u>0$, we have a similar asymptotic equivalence for $u<0$. Details are left to the readers. \end{rem} \par Theorem~\ref{thm:main} confirms the following conjecture in the case of the figure-eight knot for real $u$ with $0<u<\log((3+\sqrt{5})/2)$. \begin{conj}[\cite{Gukov/Murakami:FIC2008,Dimofte/Gukov:Columbia}] Let $K$ be a hyperbolic knot. Then there exists a neighborhood $U\subset\C$ of $0$ such that if $u\in U\setminus\pi\sqrt{-1}\Q$, we have the following asymptotic equivalence: \begin{equation*} J_N(K;\exp(\xi/N)) \underset{N\to\infty}{\sim} \frac{\sqrt{-\pi}}{2\sinh(u/2)} T(K;u)^{1/2} \left(\frac{N}{\xi}\right)^{1/2} \exp\left(\frac{N}{\xi}S(K;u)\right), \end{equation*} where $\xi:=2\pi\sqrt{-1}+u$, $T(K;u)$ is the cohomological twisted Reidemeister torsion and $S(K;u)$ is the $\SL(2;\C)$ Chern--Simons invariant, both of which are associated with an irreducible representation of $\pi_1(S^3\setminus{K})$ into $\SL(2;\C)$ sending the meridian to an element with eigenvalues $\exp(u/2)$ and $\exp(-u/2)$. \end{conj} For physical interpretations of this conjecture, see for example \cite{Gukov:COMMP2005,Dimofte/Gukov:Columbia}. \par For torus knots, the following results are known. Let $T(a,b)$ be the torus knot of type $(a,b)$ for positive coprime integers $a$ and $b$. It is known that the $\SL(2;\C)$ character variety of $\pi_1(S^3\setminus{T(a,b)})$ has $(a-1)(b-1)/2$ components \cite{Klassen:TRAAM1991} (see also \cite{Munoz:REVMC2009}). Such components are indexed by a positive integer $k$ that is not a multiple of $a$ or $b$.
See \cite[\S~2]{Hikami/Murakami:Bonn} for details. Let $\rho_k$ be an irreducible representation in the component indexed by $k$, $S_k(u)$ be the Chern--Simons invariant associated with $\rho_k$ with $\exp(\pm u/2)$ the eigenvalues of the image of the meridian by $\rho_k$, and $T_k$ be the cohomological twisted Reidemeister torsion associated with $\rho_k$. Then we have the following formulas \cite{Hikami/Murakami:Bonn}: \begin{align*} S_k(u) &:= \frac{-\bigl(2k\pi\sqrt{-1}-ab(2\pi\sqrt{-1}+u)\bigr)^2}{4ab} \\ \intertext{and} T_k &:= \frac{16\sin^2(k\pi/a)\sin^2(k\pi/b)}{ab}. \end{align*} Dubois and Kashaev \cite{Dubois/Kashaev:MATHA2007}, and Hikami and the author \cite{Hikami/Murakami:Bonn} obtain the following asymptotic equivalences. \begin{thm}[\cite{Dubois/Kashaev:MATHA2007}] For $u=0$ we have \begin{multline*} J_N\bigl(T(a,b);\exp(2\pi\sqrt{-1}/N)\bigr) \\ \underset{N\to\infty}{\sim} \frac{\pi^{3/2}}{2ab} \left(\frac{N}{2\pi\sqrt{-1}}\right)^{3/2} \sum_{k=1}^{ab-1} (-1)^{k+1}k^2 T_k^{1/2} \exp\left(\frac{N}{\xi}S_k(0)\right). \end{multline*} Note that since $T_k$ vanishes if $a$ or $b$ divides $k$, the summation is for all the irreducible components of the character variety. \end{thm} \begin{thm}[\cite{Hikami/Murakami:Bonn}] Let $u$ be a complex number with $0<|u|<2\pi/(ab)$. Then we have \begin{equation*} J_N\bigl(T(a,b);\exp(\xi/N)\bigr) \underset{N\to\infty}{\sim} \frac{1}{\Delta(T(a,b);e^{u})} \end{equation*} when $\Re{u}>0$ and \begin{multline*} J_N\bigl(T(a,b);\exp(\xi/N)\bigr) \\ \underset{N\to\infty}{\sim} \frac{1}{\Delta(T(a,b);e^u)} + \frac{ \sqrt{-\pi}}{2\sinh(u/2)} \sum_{k=1}^{ab-1} (-1)^{k} T_k^{1/2} \left(\frac{N}{\xi}\right)^{1/2} \exp\left(\frac{N}{\xi}S_k(u)\right) \end{multline*} when $\Re{u}<0$, where $\xi:=u+2\pi\sqrt{-1}$ and $\Delta(T(a,b);t)$ is the Alexander polynomial. \par See \cite{Hikami/Murakami:Bonn} for more details. \end{thm} \par The paper is organized as follows. In Section~\ref{sec:integral} we give an integral formula for the colored Jones polynomial using the quantum dilogarithm. We study its asymptotic behavior by using the saddle point method to give a proof of Theorem~\ref{thm:main} in Section~\ref{sec:approximation}. In Section~\ref{sec:interpretation} we give topological interpretations of $S(u)$ and $T(u)$. Sections~\ref{sec:S_gamma} to \ref{sec:Phi_0} are devoted to miscellaneous calculations. \section{Integral formula for the colored Jones polynomial} \label{sec:integral} In this section we use the quantum dilogarithm function to express the colored Jones polynomial of the figure-eight knot as an integral. We mainly follow \cite{Andersen/Hansen:JKNOT2006}. \par First of all we recall the following formula due to Habiro and Le (see for example \cite{Masbaum:ALGGT12003}). \begin{equation}\label{eq:Habiro_Le} \begin{split} J_N(E;q) &= \sum_{k=0}^{N-1} \prod_{l=1}^{k} \left(q^{(N-l)/2}-q^{-(N-l)/2}\right) \left(q^{(N+l)/2}-q^{-(N+l)/2}\right) \\ &= \sum_{k=0}^{N-1} q^{-kN} \prod_{l=1}^{k} \left(1-q^{N-l}\right) \left(1-q^{N+l}\right). \end{split} \end{equation} \par For a complex number $\gamma$ with $\Re(\gamma)>0$, define the quantum dilogarithm $S_{\gamma}(z)$ as follows \cite{Faddeev:LETMP1995}: \begin{equation*} S_{\gamma}(z) := \exp \left( \frac{1}{4} \int_{C_R} \frac{e^{zt}}{\sinh(\pi t)\sinh(\gamma t)} \frac{dt}{t} \right), \end{equation*} where $|\Re(z)|<\pi+\Re(\gamma)$ and $C_R$ is $(-\infty,-R]\cup\Omega_R\cup[R,\infty)$ with $\Omega_R:=\{R\exp(\sqrt{-1}(\pi-s))\mid0\le s\le\pi\}$ for $0<R<\min\{\pi/|\gamma|,1\}$. 
Note that the poles of the integrand are $0,\pm\sqrt{-1},\pm2\sqrt{-1},\dots$ and $\pm\pi\sqrt{-1}/\gamma,\pm2\pi\sqrt{-1}/\gamma,\dots$. \begin{rem} Note that in \cite{Andersen/Hansen:JKNOT2006} it is assumed that $\gamma$ is real and $0<\gamma<1$ but we can define $S_{\gamma}(z)$ when $\Re(\gamma)>0$. See \cite[(3.21)]{Dimofte/Gukov/Lenells/Zagier:CNTP2010}. (Our quantum dilogarithm $S_{\gamma}(z)$ is equal to $\Phi(z/(2\pi);\gamma/\pi)$ in \cite{Dimofte/Gukov/Lenells/Zagier:CNTP2010}.) We give a proof of the analyticity of $S_{\gamma}$ in Lemma~\ref{lem:analyticity}. \end{rem} The following formula is well known and its proof can be found in \cite[p.~530]{Andersen/Hansen:JKNOT2006}. Note that they assume that $\gamma$ is real but their proof is also valid in our case. \begin{lem} If $|\Re(z)|<\pi$, then we have \begin{equation}\label{eq:S_gamma} \left(1+e^{\sqrt{-1}z}\right)S_{\gamma}(z+\gamma) = S_{\gamma}(z-\gamma). \end{equation} \end{lem} \begin{comment} Then the set of the poles of (the analytic continuation of) $S_{\gamma}$ is $\{(2k+1)\gamma+\pi(2l+1)\mid k\in\Z_{\ge0},l\in\Z_{\ge0}\}$ and the set of zeroes is $\{-(2k+1)\gamma+\pi(2l+1)\mid k\in\Z_{\ge0},l\in\Z_{\ge0}\}$. \end{comment} Putting $\gamma:=(2\pi-\sqrt{-1}u)/(2N)$ and $z:=\pi-\sqrt{-1}u-2l\gamma$ ($l=1,2,\dots,N-1$) in \eqref{eq:S_gamma}, we have \begin{equation*} \left(1+e^{\sqrt{-1}(\pi-\sqrt{-1}u-2l\gamma)}\right) S_{\gamma}(\pi-\sqrt{-1}u-2l\gamma+\gamma) = S_{\gamma}(\pi-\sqrt{-1}u-2l\gamma-\gamma). \end{equation*} So for $k=1,2,\dots,N-1$ we have \begin{equation*} \begin{split} \prod_{l=1}^{k}\bigl(1-\exp((N-l)\xi/N)\bigr) &= \prod_{l=1}^{k} \left(1+e^{\sqrt{-1}(\pi-\sqrt{-1}u-2\pi l/N+\sqrt{-1}ul/N)}\right) \\ &= \prod_{l=1}^{k} \frac{S_{\gamma}(\pi-\sqrt{-1}u-(2l+1)\gamma)} {S_{\gamma}(\pi-\sqrt{-1}u-(2l-1)\gamma)} \\ &= \frac{S_{\gamma}(\pi-\sqrt{-1}u-(2k+1)\gamma)} {S_{\gamma}(\pi-\sqrt{-1}u-\gamma)}. \end{split} \end{equation*} Similarly we have \begin{equation*} \prod_{l=1}^{k}\bigl(1-\exp((N+l)\xi/N)\bigr) = \frac{S_{\gamma}(-\pi-\sqrt{-1}u+\gamma)} {S_{\gamma}(-\pi-\sqrt{-1}u+(2k+1)\gamma)}. \end{equation*} Therefore we have \begin{multline*} J_N\bigl(E;\exp(\xi/N)\bigr) \\ = \frac{S_{\gamma}(-\pi-\sqrt{-1}u+\gamma)}{S_{\gamma}(\pi-\sqrt{-1}u-\gamma)} \sum_{k=0}^{N-1} \exp(-ku) \frac{S_{\gamma}(\pi-\sqrt{-1}u-(2k+1)\gamma)} {S_{\gamma}(-\pi-\sqrt{-1}u+(2k+1)\gamma)}. \end{multline*} \par Using $S_{\gamma}$ we define \begin{equation}\label{eq:g_N_def} g_N(w) := \exp(-Nuw) \frac{S_{\gamma}(\pi-\sqrt{-1}u+\sqrt{-1}\xi w)} {S_{\gamma}(-\pi-\sqrt{-1}u-\sqrt{-1}\xi w)}. \end{equation} Since $S_{\gamma}(z)$ is defined for $|\Re(z)|<\pi+\Re(\gamma)$, the function $g_N(w)$ is defined for $w$ with $|\pi-\Im(\xi w)|<\pi+\Re(\gamma)=\pi+\pi/N$, that is, $w$ is in the strip $-(2\pi/u)\Re(w)-\pi/(Nu)<\Im(w)<-(2\pi/u)\Re(w)+2\pi/u+\pi/(Nu)$ (Figure~\ref{fig:contour}). \begin{figure}[h] \includegraphics[scale=1]{contour.eps} \caption{The function $g_N$ ($\Phi$, respectively) is defined between the two dotted (thick, respectively) lines. The dashed parallelogram indicates the contour $C_{+}(\varepsilon)\cup C_{-}(\varepsilon)$.} \label{fig:contour} \end{figure} \par For $0<\varepsilon<1/(4N)$, let $C_{+}(\varepsilon)$ be the polygonal line that connects $1-\varepsilon$, $1-u/(2\pi)-\varepsilon+\sqrt{-1}$, $-u/(2\pi)+\varepsilon+\sqrt{-1}$, and $\varepsilon$, and $C_{-}(\varepsilon)$ be the polygonal line that connects $\varepsilon$, $u/(2\pi)-\sqrt{-1}$, $1+u/(2\pi)-\sqrt{-1}$, and $1-\varepsilon$. 
See Figure~\ref{fig:contour}. Put $C(\varepsilon):=C_{-}(\varepsilon)\cup C_{+}(\varepsilon)$. Note that the domain of $g_N(w)$ contains $C(\varepsilon)$. \par By the residue theorem we have \begin{multline*} J_N\bigl(E;\exp(\xi/N)\bigr) \\ = \frac{S_{\gamma}(-\pi-\sqrt{-1}u+\gamma)}{S_{\gamma}(\pi-\sqrt{-1}u-\gamma)} \frac{\sqrt{-1}\exp(u/2)N}{2} \int_{C(\varepsilon)} \tan(N\pi w) g_N(w) dw \end{multline*} since the set of the poles of $\tan(N\pi w)$ inside $C(\varepsilon)$ is $\{(2k+1)/(2N)\mid k=0,1,2,\dots,N-1\}$ and the residue of each pole is $-1/(N\pi)$. \par Putting \begin{equation*} G_{\pm}(N,\varepsilon) := \int_{C_{\pm}(\varepsilon)}\tan(N\pi w)g_N(w)\,dw, \end{equation*} we have \begin{equation}\label{eq:G} J_N\bigl(E;\exp(\xi/N)\bigr) = \frac{S_{\gamma}(-\pi-\sqrt{-1}u+\gamma)}{S_{\gamma}(\pi-\sqrt{-1}u-\gamma)} \frac{\sqrt{-1}\exp(u/2)N}{2} \bigl(G_{+}(N,\varepsilon)+G_{-}(N,\varepsilon)\bigr). \end{equation} \section{Approximating the integral formula} \label{sec:approximation} In this section we approximate the integral formula for the colored Jones polynomial obtained in the previous section. \par Since $\tan(N\pi w)$ is close to $\sqrt{-1}$ ($-\sqrt{-1}$, respectively) when $\Im(w)$ is positive and large (negative and $|\Im(w)|$ is large, respectively), we can approximate $G_{\pm}(N,\varepsilon)$ by the integral of $g_N(w)$ on $C_{\pm}(\varepsilon)$. In fact if we write \begin{equation*} G_{\pm}(N,\varepsilon) = \pm\sqrt{-1}\int_{C_{\pm}(\varepsilon)}g_N(w)\,dw + \int_{C_{\pm}(\varepsilon)}(\tan(N\pi w)\mp\sqrt{-1})g_N(w)\,dw, \end{equation*} then we have the following proposition. \begin{prop}[see Equation (4.7) in \cite{Andersen/Hansen:JKNOT2006}]\label{prop:4.7} There exists a positive constant $K_{1,\pm}$ independent of $N$ and $\varepsilon$ such that the following inequality holds: \begin{equation*} \left| \int_{C_{\pm}(\varepsilon)}(\tan(N\pi w)\mp\sqrt{-1})g_N(w)\,dw \right| < \frac{K_{1,\pm}}{N}. \end{equation*} \end{prop} A proof is given in Section~\ref{sec:prop:4.7}. \par Now we approximate the integral of $g_N(w)$ along $C_{\pm}(\varepsilon)$. Define \begin{equation*} \Phi(w) := \frac{1}{\xi} \bigl( \Li_2\left(e^{u-\xi w}\right)-\Li_2\left(e^{u+\xi w}\right) \bigr) -uw. \end{equation*} Since $\Li_2$ is analytic in the region $\C\setminus(1,\infty)$, the function $\Phi$ is analytic in the region $\{w\in\C\mid-\frac{2\pi}{u}\Re(w)<\Im(w)<-\frac{2\pi}{u}(\Re(w)-1)\}$ (Figure~\ref{fig:contour}). \par \begin{comment} \begin{rem} Note that $\Li_2$ is analytic on $\C\setminus\{(1,\infty)\}$ and so $\Phi$ is analytic on $\C\setminus\left\{\dfrac{t+2\pi\sqrt{-1}k}{2\pi\sqrt{-1}+u}\Biggm| k\in\Z,t\in\R\right\}$. Note also that the set $\left\{\dfrac{t+2\pi\sqrt{-1}k}{2\pi\sqrt{-1}+u}\Biggm| k\in\Z,t\in\R\right\}$ consists of straight lines that cross the points $k$ (on the real axis) and $2k\pi\sqrt{-1}/u$ (on the imaginary axis) for $k\in\Z$. \end{rem} \end{comment} \begin{prop}[see Equation (4.9) in \cite{Andersen/Hansen:JKNOT2006}] \label{prop:4.9} Let $p(\varepsilon)$ be any contour in the parallelogram bounded by $C(\varepsilon)$ connecting $\varepsilon$ and $1-\varepsilon$. Then there exists a positive constant $K_2$ independent of $N$ and $\varepsilon$ such that the following inequality holds: \begin{equation*} \left| \int_{p(\varepsilon)} g_N(w)\,dw - \int_{p(\varepsilon)} \exp(N\Phi(w))\,dw \right| \le \frac{K_2\log(N)}{N} \max_{w\in p(\varepsilon)}\left\{\exp\bigl(N\Re\Phi(w)\bigr)\right\}. \end{equation*} \end{prop} A proof is given in Section~\ref{sec:prop:4.9}.
\par We will study the asymptotic behavior of $\int_{C_{\pm}(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw$ for large $N$. \par Since $\Phi(w)$ is analytic in the region $\{w\in\C\mid-\frac{2\pi}{u}\Re(w)<\Im(w)<-\frac{2\pi}{u}(\Re(w)-1)\}$, we have \begin{equation*} \int_{C_{+}(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw = \int_{C_{-}(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw \end{equation*} by Cauchy's integral theorem. \par We will apply the saddle point method (see for example \cite[\S~7.2]{Marsden/Hoffman:Complex_Analysis}) to approximate the integral $\int_{C_{-}(\varepsilon)}\exp(N\Phi(w))\,dw$. \par First we find a solution to the equation $d\,\Phi(w)/d\,w=0$. Since we have \begin{equation*} \frac{d\,\Phi(w)}{d\,w} = \log(1-e^{u-\xi w})+\log(1-e^{u+\xi w})-u, \end{equation*} a solution to the equation \begin{equation*} e^{\xi w}+e^{-\xi w} = e^u+e^{-u}-1 \end{equation*} can be a saddle point that we need. Put \begin{align}\label{eq:varphi} \varphi(u) &:= \arccosh(\cosh(u)-1/2)\notag \\ &= \log \left( \frac{1}{2} \left( e^u+e^{-u}-1-\sqrt{(e^u+e^{-u}+1)(e^u+e^{-u}-3)} \right) \right), \\ \tilde{\varphi}(u) &:= \varphi(u)+2\pi\sqrt{-1},\notag \\ \intertext{and} w_0 &:= \frac{\tilde{\varphi}(u)}{\xi},\notag \end{align} where we choose the square root of $(e^u+e^{-u}+1)(e^u+e^{-u}-3)$ as a positive multiple of $\sqrt{-1}$ and the branch of $\log$ so that $-\pi/3<\Im\varphi(u)<0$. \begin{rem}\label{rem:varphi} Note that $\varphi(u)$ and $\tilde{\varphi}(u)$ are purely imaginary since $\left|e^u+e^{-u}-1-\sqrt{(e^u+e^{-u}+1)(e^u+e^{-u}-3)}\right|=4$. \end{rem} It is easy to see that $d\,\Phi(w_0)/d\,w=0$. Since we have \begin{equation*} \Im(w_0)+\frac{2\pi}{u}\Re(w_0) = \frac{\Im\tilde{\varphi}(u)}{u}, \end{equation*} $w_0$ is in the domain of $\Phi$. \par \begin{comment} \begin{figure}[h] \includegraphics{Mathematica/11_26_contour.tex_gr4.eps} \caption{A path keeping $\Im\Phi(w)$ constant that passes through the saddle point is indicated by a red curve when $u=0$.} \end{figure} \begin{figure}[h] \includegraphics{Mathematica/11_26_contour.tex_gr8.eps} \caption{A path keeping $\Im\Phi(w)$ constant that passes through the saddle point is indicated by a red curve when $u=0.1$.} \label{fig:contour01} \end{figure} \begin{figure}[h] \includegraphics{Mathematica/11_26_contour.tex_gr12.eps} \caption{The path $P$ passing through the saddle point $w_0$, indicated by a black disk, is indicated by the thick curve for $u=0.5$.} \label{fig:contour05} \end{figure} \end{comment} \begin{figure}[h] \includegraphics{saddle.eps} \caption{A contour plot of $\Re\Phi(w)$ on the complex plane for $u=0.9$. A brighter part is higher than a darker part. 
The path $P$ is indicated by a thick curve and the saddle point $w_0$ is marked by a circle.} \label{fig:contour09} \end{figure} \begin{comment} \begin{figure}[h] \includegraphics{Mathematica/11_26_contour.tex_gr20.eps} \caption{A path keeping $\Im\Phi(w)$ constant that passes through the saddle point is indicated by a red curve when $u=\log((3+\sqrt{5})/2)$.} \end{figure} \begin{figure}[h] \includegraphics{Mathematica/11_26_contour.tex_gr24.eps} \caption{A path keeping $\Im\Phi(w)$ constant that passes through the saddle point is indicated by a red curve when $u=1$.} \end{figure} \begin{figure}[h] \includegraphics{Mathematica/11_26_contour.tex_gr28.eps} \caption{A path keeping $\Im\Phi(w)$ constant that passes through the saddle point is indicated by a red curve when $u=2$.} \end{figure} \end{comment} We choose a path $P$ from $\varepsilon$ to $1-\varepsilon$ that passes through $w_0$ so that near $w_0$ it keeps $\Im\Phi(w)$ constant and that $\Re\Phi(w)$ takes its maximum (over all $w$ on $P$) at $w_0$, as indicated by the thick curve in Figure~\ref{fig:contour09}. Then the integral $\int_{C_-(\varepsilon)}\exp(N\Phi(w))\,dw$ is approximated by the integral near $w_0$ along the path we choose. More precisely we have \begin{equation}\label{eq:Phi_saddle} \int_{C_-(\varepsilon)}\exp(N\Phi(w))\,dw \underset{N\to\infty}{\sim} \frac{\sqrt{2\pi}\exp(N\Phi(w_0))}{\sqrt{N}\sqrt{-d^2\,\Phi(w_0)/d\,w^2}} \end{equation} from \cite[Theorem~7.2.8]{Marsden/Hoffman:Complex_Analysis}, where the sign of the square root of $-d^2\,\Phi(w_0)/d\,w^2$ is chosen so that \begin{equation}\label{eq:argument} \sqrt{-d^2\,\Phi(w_0)/d\,w^2}\times\text{(tangent of $P$ at $w_0$)}>0. \end{equation} Note that \begin{equation}\label{eq:Phi_0} \Phi(w_0) = \frac{1}{\xi} \left( \Li_2\left(e^{u-\varphi(u)}\right) - \Li_2\left(e^{u+\varphi(u)}\right) -u\tilde{\varphi}(u) \right). \end{equation} \par From Propositions~\ref{prop:4.7} and \ref{prop:4.9}, choosing $P$ as a contour in Proposition~\ref{prop:4.9} we have \begin{equation*} \begin{split} &\left| G_{\pm}(N,\varepsilon) \mp \sqrt{-1} \int_{C_{\pm}(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw \right| \\ \le& \left| \int_{C_{\pm}(\varepsilon)}(\tan(N\pi w)\mp\sqrt{-1})g_N(w)\,dw \pm\sqrt{-1} \int_{C_{\pm}(\varepsilon)}\left(g_N(w)-\exp\bigl(N\Phi(w)\bigr)\right)\,dw \right| \\ \le& \frac{K_{1,\pm}}{N} + \frac{K_2\log(N)}{N} \exp\bigl(N\Re\Phi(w_0)\bigr). \end{split} \end{equation*} From \eqref{eq:Phi_saddle} we have \begin{equation*} \begin{split} &\lim_{N\to\infty} \left| \frac{G_{\pm}(N,\varepsilon)} {\pm\sqrt{-1} \int_{C_{\pm}(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw} -1 \right| \\ \le& \frac{K_{1,\pm}} {N \left| \int_{C_{\pm}(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw \right|} + \frac{K_2\log(N)}{N}\times \frac{\exp\bigl(N\Re\Phi(w_0)\bigr)} {\left| \int_{C_{\pm}(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw \right|} \\ \underset{N\to\infty}{\longrightarrow}&0. \end{split} \end{equation*} Here we use the following lemma, which will be proved in Section~\ref{sec:Phi_0}. \begin{lem}\label{lem:Phi_0} The real part of $\Phi(w_0)$ is positive for $0<u<\log\bigl((3+\sqrt{5})/2\bigr)$. Therefore from \eqref{eq:Phi_saddle} we see that $\int_{C_{\pm}(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw$ grows exponentially. \end{lem} So we have \begin{equation*} G_{\pm}(N,\varepsilon) \underset{N\to\infty}{\sim} \pm\sqrt{-1} \int_{C_{\pm}(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw.
\end{equation*} \par Therefore from \eqref{eq:G} we have \begin{equation*} \begin{split} &J_N\bigl(E;\exp(\xi/N)\bigr) \\ \underset{N\to\infty}{\sim}& \frac{e^{2\pi\sqrt{-1}u N/\xi}}{e^u-1} \frac{N e^{u/2}}{2} \left( \int_{C_{-}(\varepsilon)}\exp(N\Phi(w))\,dw - \int_{C_{+}(\varepsilon)}\exp(N\Phi(w))\,dw \right) \\ =& \frac{N e^{2\pi\sqrt{-1}u N/\xi}}{2\sinh(u/2)} \left( \int_{C_{-}(\varepsilon)}\exp(N\Phi(w))\,dw \right) \end{split} \end{equation*} from the lemma below. \begin{lem}\label{lem:S_gamma} For $\gamma=(2\pi-\sqrt{-1}u)/(2N)$ with positive $u$, we have \begin{equation*} \frac{S_{\gamma}(-\pi-\sqrt{-1}u+\gamma)}{S_{\gamma}(\pi-\sqrt{-1}u-\gamma)} = \frac{e^{\pi u/\gamma}-1}{e^u-1}. \end{equation*} Therefore we have the following asymptotic equivalence: \begin{equation*} \frac{S_{\gamma}(-\pi-\sqrt{-1}u+\gamma)}{S_{\gamma}(\pi-\sqrt{-1}u-\gamma)} \underset{N\to\infty}{\sim} \frac{e^{2\pi\sqrt{-1}u N/\xi}}{e^u-1}. \end{equation*} \end{lem} A proof of the lemma is given in Section~\ref{sec:S_gamma}. \begin{rem}\label{rem:S} When $u=0$, we have \begin{equation*} \frac{S_{\gamma}(-\pi+\gamma)}{S_{\gamma}(\pi-\gamma)} = N \end{equation*} from \cite[P.~492]{Andersen/Hansen:JKNOT2006}. \end{rem} \par Since \begin{equation*} \frac{d^2\,\Phi(w_0)}{d\,w^2} = \xi\sqrt{(e^u+e^{-u}+1)(e^u+e^{-u}-3)}, \end{equation*} we have the following asymptotic equivalence from \eqref{eq:Phi_saddle}: \begin{equation*} \begin{split} \int_{C_{-}(\varepsilon)}\exp(N\Phi(w))\,dw \underset{N\to\infty}{\sim} \frac{\sqrt{2\pi}\exp(N\Phi(w_0))} {\sqrt{N}\sqrt{-\xi\sqrt{(e^u+e^{-u}+1)(e^u+e^{-u}-3)}}}, \end{split} \end{equation*} where we choose the square root $\sqrt{-\xi\sqrt{(e^u+e^{-u}+1)(e^u+e^{-u}-3)}}$ so that it lies in the fourth quadrant, in accordance with \eqref{eq:argument}. \par Therefore we finally have \begin{equation*} \begin{split} &J_N\bigl(E;\exp(\xi/N)\bigr) \\ \underset{N\to\infty}{\sim}& \frac{N e^{2\pi\sqrt{-1}u N/\xi}}{2\sinh(u/2)} \frac{\sqrt{2\pi}}{\sqrt{N}\sqrt{-\xi\sqrt{(e^u+e^{-u}+1)(e^u+e^{-u}-3)}}} \\ & \times \exp \left( \frac{N}{\xi} \left( \Li_2\left(e^{u-\varphi(u)}\right) - \Li_2\left(e^{u+\varphi(u)}\right) - u\tilde{\varphi}(u) \right) \right) \\ =& \frac{\sqrt{\pi}}{2\sinh(u/2)} \sqrt{ \frac{-2}{\sqrt{(e^u+e^{-u}+1)(e^u+e^{-u}-3)}} } \sqrt{\frac{N}{\xi}} \exp\left(\frac{N}{\xi}S(u)\right). \end{split} \end{equation*} Here we put \begin{equation*} S(u) := \Li_2\left(e^{u-\varphi(u)}\right) - \Li_2\left(e^{u+\varphi(u)}\right) - u\varphi(u). \end{equation*} \begin{rem} When $u=0$, we have \begin{equation*} \begin{split} \int_{C_{-}(\varepsilon)}\exp(N\Phi(w))\,dw \underset{N\to\infty}{\sim} \frac{\exp(N\Phi(w_0))} {\sqrt{N}3^{1/4}}. \end{split} \end{equation*} Since $w_0=5/6$ in this case we have \begin{equation*} \begin{split} \Phi(w_0) &= \frac{1}{2\pi\sqrt{-1}} \left( \Li_2(e^{-5\pi\sqrt{-1}/3})-\Li_2(e^{5\pi\sqrt{-1}/3}) \right) \\ &= \frac{6\sqrt{-1}\Lambda(\pi/3)}{2\pi\sqrt{-1}} = \frac{\Vol(E)}{2\pi}. \end{split} \end{equation*} Therefore from Remark~\ref{rem:S}, we have Theorem~\ref{thm:Andersen/Hansen}. \end{rem} \begin{comment} \begin{rem} When $u=\log\bigl((3+\sqrt{5})/2\bigr)$, $d^2\,\Phi(w_0)/d\,w^2$ also vanishes.
Since $\varphi(u)=0$ in this case, we have $w_0=2\pi\sqrt{-1}/\xi$ and \begin{equation*} \begin{split} \Phi(w) &= \Phi(w_0) + \frac{1}{3!} \frac{d^3\,\Phi(w_0)}{d\,w^3}(w-w_0)^3 +\dots \\ &= \frac{-\xi^2e^{u+\xi w_0}(1+e^{2u}+e^{2\xi w_0}-4e^{u+\xi w_0}+e^{2(u+\xi w_0)})} {6(e^{u}-e^{\xi w_0})^2(e^{u+\xi w_0}-1)^2} (w-w_0)^3 \\ &\quad+\text{higher order terms of $(w-w_0)$} \\ &= \frac{-\xi^2e^{u}} {3(e^{u}-1)^2} (w-w_0)^3 +\text{higher order terms of $(w-w_0)$} \\ &= -\frac{\xi^2}{3}(w-w_0)^3 +\text{higher order terms of $(w-w_0)$}. \end{split} \end{equation*} Therefore putting $z:=(N\xi^2/3)^{1/3}(w-w_0)$ we have \begin{equation*} \int_{C_{-}(\varepsilon)}\exp(N\Phi(w))\,dw = \left(\frac{3}{N\xi^2}\right)^{1/3} \int_{C'}\exp[-z^3+\text{higher order terms of $z$}]\,dz \end{equation*} \end{rem} \end{comment} \section{Topological interpretations of $S(u)$ and $T(u)$} \label{sec:interpretation} In this section we describe topological interpretations of $S(u)$ and $T(u)$. \subsection{Representation} Let $x$ and $y$ be the Wirtinger generators of the fundamental group $\pi_1(S^3\setminus{E})$ (with a base point above the paper) of the complement of the figure-eight knot $E$ depicted in Figure~\ref{fig:fig8_pi1}. \begin{figure}[h] \includegraphics[scale=0.3]{fig8_pi1.eps} \caption{Generators of the fundamental group of the complement of the figure-eight knot} \label{fig:fig8_pi1} \end{figure} The group $\pi_1(S^3\setminus{E})$ has the following presentation: \begin{equation*} \pi_1(S^3\setminus{E}) = \langle x,y \mid xy^{-1}x^{-1}yx=yxy^{-1}x^{-1}y \rangle. \end{equation*} Due to \cite{Riley:QUAJM31984} any non-abelian representation $\rho$ of $\pi_1(S^3\setminus{E})$ into $SL(2;\C)$ is, up to conjugation, given as follows: \begin{align*} \rho(x)& := \begin{pmatrix} m^{1/2} & 1 \\ 0 & m^{-1/2} \end{pmatrix}, \\ \rho(y) &:= \begin{pmatrix} m^{1/2} & 0 \\ -d & m^{-1/2} \end{pmatrix}, \end{align*} where \begin{equation*} d= \frac{1}{2} \left( m+m^{-1}-3\ \pm \sqrt{(m+m^{-1}+1)(m+m^{-1}-3)} \right). \end{equation*} Since the longitude $\lambda$ is given by $xy^{-1}xyx^{-2}yxy^{-1}x^{-1}$ if we read it off from the top right, we have \begin{equation*} \rho(\lambda) = \begin{pmatrix} \ell(m)^{\pm1}& (m^{1/2}+m^{-1/2})\sqrt{(m+m^{-1}+1)(m+m^{-1}-3)} \\ 0 & \ell(m)^{\mp1} \end{pmatrix}, \end{equation*} where \begin{equation*} \ell(m) := \frac{m^2-m-2-m^{-1}+m^{-2}}{2} + \frac{(m-m^{-1})}{2}\sqrt{(m+m^{-1}+1)(m+m^{-1}-3)}. \end{equation*} See also \cite[Section~3.1]{Murakami:ACTMV2008}. \par Let $\rho_u$ be the representation given by putting $m:=e^u$. We introduce a parameter $v$ so that $\ell(e^u)=e^{v/2}$. Since we assume that $0<u<\log\bigl((3+\sqrt{5})/2\bigr)$, we have $2<e^u+e^{-u}<3$. Therefore we have \begin{equation*} |\ell(e^u)|^2 = \frac{1}{4} (e^{2u}-e^u-2-e^{-u}+e^{-2u})^2 - \frac{1}{4} (e^u-e^{-u})^2(e^u+e^{-u}+1)(e^u+e^{-u}-3) = 1 \end{equation*} and so $v$ is purely imaginary. \par The representation $\rho_u$ gives an incomplete hyperbolic structure to $S^3\setminus{E}$ and its completion is the generalized Dehn surgery \cite{Thurston:GT3M} with parameter $(p,q)$ with $pu+qv=2\pi\sqrt{-1}$. Since $v$ is purely imaginary $p=0$ and $q=2\pi\sqrt{-1}/v$. Therefore the completion is a cone manifold whose underlying space is the $0$-surgery of $S^3$ along the figure-eight knot with singularity the core of the surgery and with cone angle $\alpha=\Im(v)=v/\sqrt{-1}$. Note that when $u=0$, the cone angle is $2\pi$ and when $u=\log\bigl((3+\sqrt{5})/2\bigr)$, the cone angle is $0$. 
See \cite{Hilden/Lozano/Montesinos:JMASU1995} for more details about the geometric structure of this manifold. \par In the following two subsections we will calculate the Reidemeister torsion and the Chern--Simons invariant associated with $\rho_u$. \subsection{Reidemeister torsion} From \cite[P.~113]{Porti:MAMCAU1997} (see also \cite[\S~6.3]{Dubois:CANMB2006}) the cohomological Reidemeister torsion $T^{E}_{\lambda}(\rho_u)$ associated with the {\em longitude} $\lambda$ twisted by the adjoint action of the representation $\rho_u$ is given by \begin{equation*} T^{E}_{\lambda}(\rho_u) = \frac{1}{\sqrt{17+4\Tr\bigl(\rho_u(\lambda)\bigr)}} = \frac{1}{2m+2m^{-1}-1} \end{equation*} up to sign, where $\Tr$ means the trace. Note that since in \cite{Porti:MAMCAU1997} Porti uses homological Reidemeister torsion, we need to take the inverse. \par From \cite[Th{\'e}or{\`e}me~4.1]{Porti:MAMCAU1997} the Reidemeister torsion $T^{E}_{\mu}(\rho_u)$ associated with the {\em meridian} $\mu$ is given by \begin{equation*} T^{E}_{\mu}(\rho_u) = \pm\frac{\partial\, v}{\partial\, u}T^{E}_{\lambda}(\rho_u). \end{equation*} Since $\ell(e^u)=e^{v/2}$, we have \begin{equation*} \pm T^{E}_{\mu}(\rho_u) = \frac{\partial\,\bigl(2\log\ell(e^{u})\bigr)}{\partial\, u} \times \frac{1}{2e^u+2e^{-u}-1} = \frac{2}{\sqrt{(e^u+e^{-u}+1)(e^{u}+e^{-u}-3)}}. \end{equation*} Therefore $T(u)$ that appears in Theorem~\ref{thm:main} coincides with $T^{E}_{\mu}(\rho_u)$ up to sign. \subsection{Chern--Simons invariant} Let $M$ be a closed three-manifold and $\rho\colon\pi_1(M)\to\SL(2;\C)$ a representation. Then the Chern--Simons invariant $\cs_{M}(\rho)$ is defined as \begin{equation*} \frac{1}{8\pi^2}\int_{M}\Tr(A\wedge dA+\frac{2}{3}A\wedge A\wedge A) \in\C/\Z, \end{equation*} where $A$ is the $\mathfrak{sl}(2;\C)$-valued $1$-form on $M$ with $dA+A\wedge A=0$ such that $\rho$ is given as the holonomy representation induced by the flat connection on $M\times\SL(2;\C)$ defined by $A$ . \par In \cite{Kirk/Klassen:COMMP93} Kirk and Klassen defined the $\SL(2;\C)$ Chern--Simons invariant $\cs_M(\rho)$ for a three-manifold with boundary. It is a triple $[\alpha,\beta;z]$ of complex numbers modulo the following relation: \begin{equation*} [\alpha,\beta;z] = [\alpha+1,\beta;z\exp(-8\pi\sqrt{-1}\beta)] = [\alpha,\beta+1;z\exp(8\pi\sqrt{-1}\alpha)] = [-\alpha,-\beta;z]. \end{equation*} For $i=1,2$, let $M_i$ be a three-manifold with boundary $\partial{M_i}$ a torus and put $M:=M_1\bigcup_{\partial{M_1}=-\partial{M_2}}M_2$. For a representation $\rho\colon\pi_1(M)\to SL(2;\C)$, put $\rho_i:=\rho\bigr|_{M_i}$. If $\cs_{M_i}(\rho_i)=[\alpha,\beta;z_i]$ then $\cs_M(\rho)$ is given by $z_1z_2$. \par Define the Chern--Simons invariant $\CS_u(K)$ for a knot $K$ to be \begin{equation*} \cs_{S^3\setminus{E}}(\rho_u) = \left[ \frac{u}{4\pi\sqrt{-1}}, \frac{v}{4\pi\sqrt{-1}}; \exp \left( \frac{2}{\pi\sqrt{-1}}\CS_{u}(K) \right) \right]. \end{equation*} Then as described in \cite{Hikami/Murakami:Bonn} we have \begin{equation*} \CS_{u}(E) = S(u)-\pi\sqrt{-1}u-\frac{uv}{4}. \end{equation*} Note that we are using the $\operatorname{PSL}(2;\C)$ normalization of the Chern--Simons invariant \cite[P.~543]{Kirk/Klassen:COMMP93}. So the function $f(u)$ in \cite{Neumann/Zagier:TOPOL85} is $-\CS_u(E)/4$ (up to a constant) and the function $f(u)$ in \cite[P.~543]{Kirk/Klassen:COMMP93} and \cite{Yoshida:INVEM85} is $-2\sqrt{-1}\CS_u(E)/\pi$. 
\section{Calculation of $S_{\gamma}$} \label{sec:S_gamma} In this section we first show the analyticity of $S_{\gamma}$ and then calculate its special values. \begin{lem}\label{lem:analyticity} If a complex number $\gamma$ has the positive real part, then $S_{\gamma}(z)$ is an analytic function in $\{z\in\C\mid|\Re(z)|<\pi+\Re(\gamma)\}$. \end{lem} \begin{proof} Put \begin{equation*} L_{\gamma}(t) := \frac{e^{zt}}{t\sinh(\pi t)\sinh(\gamma t)}. \end{equation*} We will show that the improper integrals $\int_{R}^{\infty}L_{\gamma}t\,dt$ and $\int_{-\infty}^{R}L_{\gamma}t\,dt$ converge. \par Putting $z:=x+\sqrt{-1}y$ and $\gamma:=a+\sqrt{-1}b$ for real numbers $x,y,a,b$ with $a>0$ we have \begin{equation*} \Re(L_{\gamma}(t)) = \frac {-2e^{xt}\bigl(\cosh(at)\sin(bt)\sin(ty)+\sinh(at)\cos(bt)\cos(yt)\bigr)} {t\sinh(\pi t)\bigl(\cos(2bt)-\cosh(2at)\bigr)} \end{equation*} and \begin{equation*} \Im(L_{\gamma}(t)) = \frac {2e^{xt}\bigl(\cosh(at)\sin(bt)\cos(ty)-\sinh(at)\cos(bt)\sin(yt)\bigr)} {t\sinh(\pi t)\bigl(\cos(2bt)-\cosh(2at)\bigr)} \end{equation*} for $t\in\R$. \par If $t$ is positive and sufficiently large we have \begin{equation*} \begin{split} |\Re(L_{\gamma}(t))| &\le \frac{2e^{xt}e^{at}}{t\sinh(\pi t)(\cosh(2at)-1)} \\ &= \frac{8e^{(x-\pi-a)t}}{t(1-e^{-2\pi t})(1+e^{-4at}-2e^{-2at})} \end{split} \end{equation*} and similarly we have \begin{equation*} |\Im(L_{\gamma}(t))| \le \frac{8e^{(x-\pi-a)t}}{t(1-e^{-2\pi t})(1+e^{-4at}-2e^{-2at})}. \end{equation*} Therefore the integral $\int_{R}^{\infty}L_{\gamma}t\,dt$ converges since $x<\pi+a$. \par If $t$ is negative and $|t|$ is sufficiently large we have \begin{equation*} \begin{split} |\Re(L_{\gamma}(t))| &\le \frac{2e^{(x-a)t}}{t\sinh(\pi t)(\cosh(2at)-1)} \\ &= \frac{8e^{(x+\pi+a)t}}{t(e^{2\pi t}-1)(e^{4at}+1-2e^{2at})} \end{split} \end{equation*} and similarly we have \begin{equation*} |\Im(L_{\gamma}(t))| \le \frac{8e^{(x+\pi+a)t}}{t(e^{2\pi t}-1)(e^{4at}+1-2e^{2at})}. \end{equation*} Therefore the integral $\int_{-\infty}^{R}L_{\gamma}t\,dt$ converges since $x>-\pi-a$. \end{proof} Next we prove Lemma~\ref{lem:S_gamma}. \begin{proof}[Proof of Lemma~\ref{lem:S_gamma}] By the definition we have \begin{equation*} \begin{split} &\frac{S_{\gamma}(-\pi-\sqrt{-1}u+\gamma)}{S_{\gamma}(\pi-\sqrt{-1}u-\gamma)} \\ =& \exp \left( \frac{1}{4} \int_{C_R} \left( \frac{e^{(-\pi-\sqrt{-1}u+\gamma)t}}{\sinh(\pi t)\sinh(\gamma t)} - \frac{e^{(\pi-\sqrt{-1}u-\gamma)t}}{\sinh(\pi t)\sinh(\gamma t)} \right) \frac{dt}{t} \right) \\ =& \exp \left( \frac{1}{2} \int_{C_R} e^{-\sqrt{-1}ut} \frac{\sinh(-\pi t+\gamma t)}{\sinh(\pi t)\sinh(\gamma t)} \frac{dt}{t} \right) \\ =& \exp \left( \frac{1}{2} \int_{C_R} \frac{e^{-\sqrt{-1}ut}\coth(\pi t)}{t} \,dt - \frac{1}{2} \int_{C_R} \frac{e^{-\sqrt{-1}ut}\coth(\gamma t)}{t} \,dt \right). \end{split} \end{equation*} We will calculate the integral $\int_{C_R}e^{-\sqrt{-1}ut}\coth(\kappa t)/t\,dt$ for a complex number $\kappa$ with $\Re(\kappa)>0$ and $\Im(\kappa)\le0$. \par Put $\kappa:=\alpha-\beta\sqrt{-1}$ with $\alpha>0$ and $\beta\ge0$. For a positive number $r$, let $U_1$ be the segment connecting $r$ and $r+r'\beta/\alpha-r'\sqrt{-1}$, $U_2$ be the segment connecting $r+r'\beta/\alpha-r'\sqrt{-1}$ and $-r+r'\beta/\alpha-r'\sqrt{-1}$, and $U_3$ be the segment connecting $-r+r'\beta/\alpha-r'\sqrt{-1}$ and $-r$ with $r':=(n+1/2)\pi\alpha/|\kappa|^2$, where $n:=\lfloor{r|\kappa|^2/(\pi\alpha)}\rfloor$. Here $\lfloor{x}\rfloor$ is the largest integer that does not exceed $x$. 
Note that $r-\pi\alpha/(2|\kappa|^2)<r'\le r+\pi\alpha/(2|\kappa|^2)$. We use $r'$ instead of $r$ just to avoid the poles of $\coth(\kappa t)$. Then we have \begin{equation*} \begin{split} &\left| \int_{U_1} \frac{e^{-\sqrt{-1}ut}}{t} \coth(\kappa t) \,dt \right| \\ \le& \int_{0}^{r'} \left| \frac{e^{-\sqrt{-1}u(r+s\beta/\alpha-s\sqrt{-1})}} {r+s\beta/\alpha-s\sqrt{-1}} \right| \left| \coth\bigl((\alpha-\beta\sqrt{-1})(r+s\beta/\alpha-s\sqrt{-1})\bigr) \right| \,ds \\ \le& \frac{1}{r} \int_{0}^{r'} e^{-us} \left| \coth\bigl(r\alpha-\sqrt{-1}(s\alpha+r\beta+s\beta^2/\alpha)\bigr) \right| \,ds \\ \le& \frac{1}{r} \int_{0}^{r'} e^{-us} \,ds \\ =& \frac{1}{ur} (1-e^{-ur'}) \xrightarrow{r\to\infty}0. \end{split} \end{equation*} Similarly we have \begin{equation*} \begin{split} &\left| \int_{U_3} \frac{e^{-\sqrt{-1}ut}}{t} \coth(\kappa t) \,dt \right| \\ \le& \int_{0}^{r'} \left| \frac{e^{-\sqrt{-1}u(-r+s\beta/\alpha-s\sqrt{-1})}} {-r+s\beta/\alpha-s\sqrt{-1}} \right| \left| \cosh\bigl((\alpha-\beta\sqrt{-1})(-r+s\beta/\alpha-s\sqrt{-1})\bigr) \right| \,ds \\ \le& \frac{|\kappa|}{r\alpha} \int_{0}^{r'} e^{-us} \left| \cosh\bigl(-r\alpha-\sqrt{-1}(s\alpha-r\beta+s\beta^2/\alpha)\bigr) \right| \,ds \\ \le& \frac{|\kappa|}{ur\alpha} (1-e^{-ur'}) \xrightarrow{r\to\infty}0. \end{split} \end{equation*} We also have \begin{equation*} \begin{split} &\left| \int_{U_2} \frac{e^{-\sqrt{-1}ut}}{t} \coth(\kappa t) \,dt \right| \\ \le& \int_{-r}^{r} \left| \frac{e^{-\sqrt{-1}u(s+r'\beta/\alpha-r'\sqrt{-1})}} {s+r'\beta/\alpha-r'\sqrt{-1}} \right| \left| \coth\bigl((\alpha-\beta\sqrt{-1})(s+r'\beta/\alpha-r'\sqrt{-1})\bigr) \right| \,ds \\ \le& \frac{e^{-ur'}}{r'} \int_{-r}^{r} \left| \coth\bigl(s\alpha-\sqrt{-1}(r'\alpha+s\beta+r'\beta^2/\alpha)\bigr) \right| \,ds \\ =& \frac{e^{-ur'}}{r'} \int_{-r}^{r} \left| \coth \left( s\alpha-\sqrt{-1}\bigl((n+1/2)\pi+s\beta\bigr) \right) \right| \,ds \\ =& \frac{e^{-ur'}}{r'} \int_{-r}^{r} \left| \tanh\bigl(\kappa s\bigr) \right| \,ds \\ &\text{($\delta:=\max_{-1\le s\le1}|\tanh(\kappa s)|>0$)} \\ \le& \frac{e^{-ur'}}{r'} \left( 2\delta + \int_{-r}^{-1} \frac{|e^{\kappa s}-e^{-\kappa s}|}{|e^{\kappa s}+e^{-\kappa s}|} \,ds + \int_{1}^{r} \frac{|e^{\kappa s}-e^{-\kappa s}|}{|e^{\kappa s}+e^{-\kappa s}|} \,ds \right) \\ \le& \frac{e^{-ur'}}{r'} \left( 2\delta + \int_{-r}^{-1} \frac{|e^{\kappa s}|+|e^{-\kappa s}|} {\bigl||e^{\kappa s}|-|e^{-\kappa s}|\bigr|} \,ds + \int_{1}^{r} \frac{|e^{\kappa s}|+|e^{-\kappa s}|} {\bigl||e^{\kappa s}|-|e^{-\kappa s}|\bigr|} \,ds \right) \\ =& \frac{2e^{-ur'}}{r'} \left( \delta + \int_{1}^{r}|\coth(\alpha s)|\,ds \right) \\ =& \frac{2e^{-ur'}}{r'} \left( \delta + \frac{\log(\sinh(\alpha r))-\log(\sinh(\alpha))}{\alpha} \right) \xrightarrow{r\to\infty}0. \end{split} \end{equation*} Therefore we have \begin{equation*} \begin{split} \int_{C_R}\frac{e^{-\sqrt{-1}ut}}{t}\coth(\kappa t)\,dt &= 2\pi\sqrt{-1} \sum_{l=1}^{\infty} \Res \left( \frac{e^{-\sqrt{-1}ut}}{t}\coth(\kappa t); t=\frac{l\pi\sqrt{-1}}{\kappa} \right) \\ &= 2\pi\sqrt{-1} \sum_{l=1}^{\infty} \frac{e^{lu\pi/\kappa}}{l\pi\sqrt{-1}} \\ &= -2\log(1-e^{u\pi/\kappa}) \end{split} \end{equation*} and so we have \begin{equation*} \frac{S_{\gamma}(-\pi-\sqrt{-1}u+\gamma)}{S_{\gamma}(\pi-\sqrt{-1}u-\gamma)} = \frac{1-e^{u\pi/\gamma}}{1-e^u}. \end{equation*} \end{proof} \section{Proof of Proposition~\ref{prop:4.7}} \label{sec:prop:4.7} In this section we follow \cite[Appendix~A]{Andersen/Hansen:JKNOT2006} to show Proposition~\ref{prop:4.7}. 
\par From \cite[\S~4.1]{Andersen/Hansen:JKNOT2006} we have the following integral expression for $|\Re(z)|<\pi$, or $|\Re(z)|=\pi$ and $\Im(z)\ge0$: \begin{equation*} \frac{1}{2\sqrt{-1}} \Li_2(-e^{\sqrt{-1}z}) = \frac{1}{4} \int_{C_R}\frac{e^{zt}}{t^2\sinh(\pi t)}\,dt. \end{equation*} Therefore we have \begin{equation*} \begin{split} S_{\gamma}(z) &= \exp \left( \frac{1}{2\sqrt{-1}\gamma} \Li_2(-e^{\sqrt{-1}z}) + I_{\gamma}(z) \right) \\ &= \exp \left( \frac{N}{\xi} \Li_2(-e^{\sqrt{-1}z}) + I_{\gamma}(z) \right), \end{split} \end{equation*} where \begin{equation*} I_{\gamma}(z) := \frac{1}{4} \int_{C_R}\frac{e^{zt}}{t\sinh(\pi t)} \left( \frac{1}{\sinh(\gamma t)}-\frac{1}{\gamma t} \right) \,dt \end{equation*} (see \cite[Equation~(4.2)]{Andersen/Hansen:JKNOT2006}). Note that $I_{\gamma}(z)$ is defined for $z$ with $|\Re(z)|\le\pi$ (\cite[Appendix~A]{Andersen/Hansen:JKNOT2006}). \par Then we have from \eqref{eq:g_N_def} \begin{equation}\label{eq:g_N} \begin{split} &g_N(z) \\ =& \exp \left( \frac{N}{\xi} \left( \Li_2(-e^{\sqrt{-1}\pi+u-\xi z}) - \Li_2(-e^{-\sqrt{-1}\pi+u+\xi z}) \right) -Nuz \right) \\ &\times \exp \left( I_{\gamma}(\pi-\sqrt{-1}u+\sqrt{-1}\xi z) - I_{\gamma}(-\pi-\sqrt{-1}u-\sqrt{-1}\xi z) \right) \\ =& \exp\bigl(N\Phi(z)\bigr) \exp \bigl( I_{\gamma}(\pi-\sqrt{-1}u+\sqrt{-1}\xi z) - I_{\gamma}(-\pi-\sqrt{-1}u-\sqrt{-1}\xi z) \bigr). \end{split} \end{equation} \par We first give an estimation for $|I_{\gamma}(z)|$. \begin{lem}[see Lemma~3 in \cite{Andersen/Hansen:JKNOT2006}]\label{lem:Lemma3} If $|\Re(z)|<\pi$, then we have \begin{equation*} |I_{\gamma}(z)| \le A \left( \frac{1}{\pi-\Re(z)}+\frac{1}{\pi+\Re(z)} \right) |\gamma| + B \left( 1+e^{-\Im(z)R} \right) |\gamma| \end{equation*} If $|\Re(z)|\le\pi$, then we have \begin{equation*} |I_{\gamma}(z)| \le 2A+B(1+e^{-\Im(z)R})|\gamma|. \end{equation*} \end{lem} \begin{proof} We follow \cite[Appendix~A]{Andersen/Hansen:JKNOT2006}. \par Put \begin{equation*} \psi(w) := \frac{1}{\sinh(w)}-\frac{1}{w}, \end{equation*} which is holomorphic in the open disk $D_0(\pi)$ with center $0$ and radius $\pi$. Note that \begin{equation}\label{eq:psi} \psi(w) = \frac{w-\sinh(w)}{w\sinh(w)} = -w \frac{h(w)}{k(w)} \end{equation} for entire functions $h(w)$ and $k(w)$ with \begin{align*} h(w)&=\sum_{j=0}^{\infty}\frac{w^{2j}}{(2j+3)!} \intertext{and} k(w)&=\sum_{j=0}^{\infty}\frac{w^{2j}}{(2j+1)!} \end{align*} when $|w|<\pi$. Therefore there exists $\delta>0$ such that $\min_{|w|\le\delta}|\psi(w)/w|=D>0$ since $\lim_{w\to0}\psi(w)/w=1/6$. \par Put $C_{\delta}:=[-\delta/|\gamma|,-R]\cup\Omega_R\cup[R,\delta/|\gamma|]$. Consider the following integrals $I_0(z)$ and $I_1(z)$ so that $I_{\gamma}(z)=I_0(z)+I_1(z)$: \begin{align*} I_0(z) &:= \frac{1}{4} \int_{C_{\delta}} \frac{\exp(zt)}{t\sinh(\pi t)}\psi(\gamma t) \,dt, \\ I_1(z) &:= \frac{1}{4} \int_{-\infty}^{-\delta/|\gamma|} \frac{\exp(zt)}{t\sinh(\pi t)}\psi(\gamma t) \,dt + \frac{1}{4} \int_{\delta/|\gamma|}^{\infty} \frac{\exp(zt)}{t\sinh(\pi t)}\psi(\gamma t) \,dt. \end{align*} Since $\lim_{w\to\infty}w\psi(w)=0$, $\psi(w)$ has poles at $w=m\pi\sqrt{-1}$ ($m=\pm1,\pm2,\dots$), and $\Im(\gamma)\ne0$, we have $|\gamma t\psi(\gamma t)|\le E$ for a positive number $E$. Note that $E$ depends only on the argument of $\gamma$ and so only on $\xi=\gamma\times 2\sqrt{-1}N$. So we have \begin{equation*} |\psi(\gamma t)| \le \frac{E}{|\gamma t|} \end{equation*} for any $t$. 
Therefore we can apply the argument (replacing $\gamma$ there with $|\gamma|/E$ and $a$ with $\delta/|\gamma|$) in \cite[Page~532]{Andersen/Hansen:JKNOT2006} to conclude \begin{equation*} |I_1(z)| \le \frac{E}{\delta\left(1-e^{-2\pi|\delta|/|\gamma|}\right)} \end{equation*} if $|\Re(z)|\le\pi$ and \begin{equation*} |I_1(z)| \le \frac{E|\gamma|} {2\delta^2\left(1-e^{-2\pi\delta/|\gamma|}\right)} \left( \frac{e^{-(\pi-\Re(z))\delta/|\gamma|)}}{\pi-\Re(z)} + \frac{e^{-(\pi+\Re(z))\delta/|\gamma|)}}{\pi+\Re(z)} \right) \end{equation*} if $|\Re(z)|<\pi$. \par \begin{comment} If $t\in {C'}_{R}$, then $|\gamma t|\le\delta$ and so we have $|\psi(\gamma t)|\le D|\gamma||t|$. So we have \begin{equation*} |I_0(z)| \le \frac{|\gamma|}{4} \int_{C_{\delta}} \left| \frac{\exp(zt)}{\sinh(\pi t)} \right|\,dt. \end{equation*} \par \end{comment} Next we estimate $I_0(z)$. Let $M(z,R)$ be the maximum of $\left|\frac{e^{z t}}{t\sinh(\pi t)}\psi(\gamma t)\right|$ for $t\in\Omega_R$. Then we have \begin{equation*} \left| \int_{\Omega_R} \frac{\exp(zt)}{t\sinh(\pi t)}\psi(\gamma t) \,dt \right| \le \pi R M(z,R). \end{equation*} \par From \eqref{eq:psi} we have \begin{equation*} \begin{split} M(z,R) &= \max_{t\in\Omega_R} \left|\frac{e^{z t}}{t\sinh(\pi t)}\psi(\gamma t)\right| \\ &= \max_{t\in\Omega_R} |\gamma| \left|\frac{e^{z t}}{\sinh(\pi t)}\right| \left|\frac{h(\gamma t)}{k(\gamma t)}\right| \end{split} \end{equation*} since $|\gamma t|=|\gamma|R<\pi$ when $t\in\Omega_R$ and $R<1$. Then putting $L(R):=\max_{t\in\Omega_R}|h(z)/k(z)|$ and $N(z,R):=\max_{t\in\Omega_R}\left|e^{zt}/(e^{\pi t}-e^{-\pi t})\right|$ we can apply the argument in \cite[Page~533]{Andersen/Hansen:JKNOT2006} to have the following estimation. \begin{equation*} \left| \frac{1}{4} \int_{\Omega_R} \frac{e^{zt}}{t\sinh(\pi t)}\psi(\gamma t)\,dt \right| \le |\gamma|B(1+e^{-\Im(z)R}) \end{equation*} for a constant $B$ (depending only on $R$). \par Now we will estimate the rest of $I_0(z)$. We have \begin{equation*} \begin{split} &\left| \int_{-\delta/|\gamma|}^{-R}\frac{e^{zt}}{t\sinh(\pi t)}\psi(\gamma t)\,dt + \int_{R}^{\delta/|\gamma|}\frac{e^{zt}}{t\sinh(\pi t)}\psi(\gamma t)\,dt \right| \\ \le& \int_{-\delta/|\gamma}^{-R} \frac{e^{\Re(z)t}}{|t||\sinh(\pi t)|}|\psi(\gamma t)|\,dt + \int_{R}^{\delta/|\gamma} \frac{e^{\Re(z)t}}{|t||\sinh(\pi t)|}|\psi(\gamma t)|\,dt \\ =& \int_{R}^{\delta/|\gamma} \frac{e^{\Re(z)t}+e^{-\Re(z)t}}{t\sinh(\pi t)} |\psi(\gamma t)|\,dt \\ =& 2|\gamma| \int_{R}^{\delta/|\gamma} \frac{e^{\Re(z)t}+e^{-\Re(z)t}}{e^{\pi t}-e^{-\pi t}} \left|\frac{\psi(\gamma t)}{\gamma t}\right|\,dt. \end{split} \end{equation*} Since $\psi(w)/w$ is continuous in $D_0(\pi)$ and $\lim_{w\to0}\psi(w)/w=1/6$, there exists $\delta>0$ such that $\min_{|w|<\delta}|\psi(w)/w|=D>0$. Therefore if $a<\delta/|\gamma|$, then $|\gamma t|\le|\gamma|a<\delta$ in the integral and so we have \begin{equation*} \begin{split} &\left| \int_{-\delta/|\gamma|}^{-R} \frac{e^{zt}}{t\sinh(\pi t)}\psi(\gamma t)\,dt + \int_{R}^{\delta/|\gamma|} \frac{e^{zt}}{t\sinh(\pi t)}\psi(\gamma t)\,dt \right| \\ \le& 2|\gamma|D \int_{R}^{\delta/|\gamma|} \frac{e^{\Re(z)t}+e^{-\Re(z)t}}{e^{\pi t}-e^{-\pi t}} \,dt \\ \le& \frac{2|\gamma|D}{1-e^{-2\pi R}} \int_{R}^{a}(e^{-(\pi-\Re(z))t}+e^{-(\pi+\Re(z))t}) \,dt. 
\end{split} \end{equation*} Then from the argument in \cite[Page~533]{Andersen/Hansen:JKNOT2006} we have \begin{equation*} \begin{split} &\left| \int_{-\delta/|\gamma|}^{-R} \frac{e^{zt}}{t\sinh(\pi t)}\psi(\gamma t)\,dt + \int_{R}^{\delta/|\gamma|} \frac{e^{zt}}{t\sinh(\pi t)}\psi(\gamma t)\,dt \right| \\ \le& \frac{4\delta D}{1-e^{-2\pi R}} \end{split} \end{equation*} when $|\Re(z)|\le\pi$, and \begin{equation*} \begin{split} &\left| \int_{-\delta/|\gamma|}^{-R} \frac{e^{zt}}{t\sinh(\pi t)}\psi(\gamma t)\,dt + \int_{R}^{\delta/|\gamma|} \frac{e^{zt}}{t\sinh(\pi t)}\psi(\gamma t)\,dt \right| \\ \le& \frac{2|\gamma|D}{(1-e^{-2\pi R})} \left( \frac{1-e^{-(\pi-\Re(z))\delta/|\gamma|}}{\pi-\Re(z)} + \frac{1-e^{-(\pi+\Re(z))\delta/|\gamma|}}{\pi+\Re(z)} \right) \end{split} \end{equation*} when $|\Re(z)|<\pi$. \par Therefore if $|\Re(z)|\le\pi$, we have \begin{equation*} \begin{split} I_{\gamma}(z) &\le \frac{E}{\delta\left(1-e^{-2\pi|\delta|/|\gamma|}\right)} + |\gamma|B\left(1+e^{\Im(z)R}\right) + \frac{\delta D}{1-e^{-2\pi R}} \\ &\le \frac{1}{1-e^{-2\pi R}}\left(\frac{E}{\delta}+\delta D\right) + |\gamma|B\left(1+e^{\Im(z)R}\right) \end{split} \end{equation*} since $\delta/|\gamma|\ge R$. If $|\Re(z)|<\pi$ we also have \begin{equation*} \begin{split} &|I_{\gamma}(z)| \\ \le& \frac{E|\gamma|} {2\delta^2\left(1-e^{-2\pi\delta/|\gamma|}\right)} \left( \frac{e^{-(\pi-\Re(z))\delta/|\gamma|)}}{\pi-\Re(z)} + \frac{e^{-(\pi+\Re(z))\delta/|\gamma|)}}{\pi+\Re(z)} \right) \\ &+ \frac{|\gamma|D}{2(1-e^{-2\pi R})} \left( \frac{1-e^{-(\pi-\Re(z))\delta/|\gamma|}}{\pi-\Re(z)} + \frac{1-e^{-(\pi+\Re(z))\delta/|\gamma|}}{\pi+\Re(z)} \right) \\ &+ |\gamma|B\left(1+e^{\Im(z)R}\right) \\ \le& |\gamma| \left( \frac{1}{\pi-\Re(z)}+\frac{1}{\pi+\Re(z)} \right) \frac{1}{1-e^{-2\pi R}} \left(\frac{E}{2\delta^2}+\frac{D}{2}\right)+ |\gamma|B\left(1+e^{\Im(z)R}\right). \end{split} \end{equation*} \par The lemma follows by putting \begin{equation*} A := \frac{1}{1-e^{-2\pi R}} \times \max \left\{ \left(\frac{E}{2\delta}+\frac{\delta D}{2}\right), \left(\frac{E}{2\delta^2}+\frac{D}{2}\right) \right\}. \end{equation*} \end{proof} Now we prove Proposition~\ref{prop:4.7}. \par First note that since $g_N(x)$ has no poles inside $C_{+}(\varepsilon)\cup C_{-}(\varepsilon)$, we can assume that $\varepsilon=0$ without changing the sum, that is, \begin{equation*} \begin{split} &\int_{C_{+}(\varepsilon)}(\tan(N\pi x)-\sqrt{-1})g_N(x)\,dx + \int_{C_{-}(\varepsilon)}(\tan(N\pi x)+\sqrt{-1})g_N(x)\,dx \\ =& \int_{C_{+}(0)}(\tan(N\pi x)-\sqrt{-1})g_N(x)\,dx + \int_{C_{-}(0)}(\tan(N\pi x)+\sqrt{-1})g_N(x)\,dx \end{split} \end{equation*} We decompose $C_{+}(0)$ into $C_{+,1}\cup C_{+,2}\cup C_{+,3}$, where $C_{+,1}$ connects $0$ and $-u/(2\pi)+\sqrt{-1}$, $C_{+,2}$ connects $-u/(2\pi)+\sqrt{-1}$ and $1-u/(2\pi)+\sqrt{-1}$, and $C_{+,3}$ connects $1-u/(2\pi)+\sqrt{-1}$ and $1$. Similarly we also decompose $C_{-}(0)$ in to $C_{-,1}\cup C_{-,2}\cup C_{-,3}$, where $C_{-,1}$ connects $0$ and $u/(2\pi)-\sqrt{-1}$, $C_{-,2}$ connects $u/(2\pi)-\sqrt{-1}$ and $1+u/(2\pi)-\sqrt{-1}$, and $C_{-,3}$ connects $1+u/(2\pi)-\sqrt{-1}$ and $-1$. Let $I_{\pm,i}(N)$ be the integral of $\bigl(\tan(N\pi x)-\sqrt{-1}\bigr)g_N(x)$ along $C_{\pm,i}$ ($i=1,2,3$). We will show that $|I_{\pm,i}(N)|$ is bounded from above by $K_{\pm,i}/N$ for a positive constant $K_{\pm,i}$ independent of $N$. \par We will give the following estimations for $I_{\pm,i}(N)$. \begin{equation}\label{eq:I_{+,1}} |I_{+,1}(N)|<\frac{K_{+,1}}{N}. 
\end{equation} \begin{equation}\label{eq:I_{+,2}} |I_{+,2}(N)|<\frac{K_{+,2}}{N}. \end{equation} \begin{equation}\label{eq:I_{+,3}} |I_{+,3}(N)|<\frac{K_{+,3}}{N}. \end{equation} \begin{equation}\label{eq:I_{-,1}} |I_{-,1}(N)|<\frac{K_{-,1}}{N}. \end{equation} \begin{equation}\label{eq:I_{-,2}} |I_{-,2}(N)|<\frac{K_{-,2}}{N}. \end{equation} \begin{equation}\label{eq:I_{-,3}} |I_{-,3}(N)|<\frac{K_{-,3}}{N}. \end{equation} \begin{proof}[Proof of \eqref{eq:I_{+,1}}] We first estimate $|\tan(N\pi (-u/(2\pi)+\sqrt{-1})t)-\sqrt{-1}|$. We have \begin{equation*} \begin{split} & |\tan(N\pi (-u/(2\pi)+\sqrt{-1})t))-\sqrt{-1}| \\ =& \left| \frac{2e^{-Nut\sqrt{-1}/2-N\pi t)}} {e^{-Nut\sqrt{-1}/2-N\pi t)}+e^{Nut\sqrt{-1}/2+N\pi t)}} \right| \\ =& \frac{2e^{-2N\pi t}} {\left|e^{-Nut\sqrt{-1}-2N\pi t}+1\right|}. \end{split} \end{equation*} Since the denominator is bigger than $1$ if $Nut<\pi/2$ we have \begin{equation*} |\tan(N\pi (-u/(2\pi)+\sqrt{-1})t))-\sqrt{-1}| < 2e^{-2N\pi t} \end{equation*} when $t<\pi/(2Nu)$. If $t\ge\pi/(2Nu)$, then the denominator is bigger than equal to $1-e^{-2N\pi t}$. So we have \begin{equation*} |\tan(N\pi (-u/(2\pi)+\sqrt{-1})t))-\sqrt{-1}| \le \frac{2e^{-2N\pi t}}{1-e^{-2N\pi t}} \le \frac{2e^{-2N\pi t}}{1-e^{-\pi^2/u}} \end{equation*} when $t\ge\pi/(2Nu)$. Therefore for any $0\le t\le 1$ we have \begin{equation}\label{eq:I_{+,1}tan} |\tan(N\pi (-u/(2\pi)+\sqrt{-1})t)-\sqrt{-1}| < \frac{2e^{-2N\pi t}}{1-e^{-\pi^2/u}}. \end{equation} \par So we have \begin{equation}\label{eq:I_{+,1}integral} \left|I_{+,1}(N)\right| \le \frac{2}{1-e^{-\pi^2/u}} \int_{0}^{1} e^{-2N\pi t} \left| g_N\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t\right) \right| \,dt. \end{equation} \par We estimate $|g_N((-u/(2\pi)+\sqrt{-1})t)|$. \par Since the function $g_N$ is not well-defined on the segment $(-u/(2\pi)+\sqrt{-1})t$ (Figure~\ref{fig:contour}), we need to consider the segment $(-u/(2\pi)+\sqrt{-1})t+\varepsilon)$ ($0\le t\le1$) instead for small $\varepsilon$. (See the argument in \cite[Page~534]{Andersen/Hansen:JKNOT2006}.) \par From \eqref{eq:g_N} we have \begin{equation*} \begin{split} &g_{N}\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \\ =& \exp \left[ N\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \right] \\ & \times \exp \left[ I_{\gamma} \left( \pi-\sqrt{-1}u-\sqrt{-1}\frac{|\xi|^2t}{2\pi}+\sqrt{-1}\varepsilon\xi \right) \right. \\ &\phantom{\times\exp\bigl[} \left. - I_{\gamma} \left( -\pi-\sqrt{-1}u+\sqrt{-1}\frac{|\xi|^2t}{2\pi}-\sqrt{-1}\varepsilon\xi \right) \right]. \end{split} \end{equation*} \par From Lemma~\ref{lem:Lemma3}, there exist $A>0$ and $B>0$ such that $|I_{\gamma}(z)|\le 2A+B|\gamma|(1+e^{-\Im(z)R})$. So we have \begin{equation}\label{eq:I_{+,1}Phi} \begin{split} & \left| g_{N}\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \right| \\ \le& \exp \left[ N\Re\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \right] \frac{\exp\left(2A+B|\gamma|(1+e^{(u+|\xi|^2t/(2\pi)-\varepsilon u)R}\right)} {\exp\left(2A+B|\gamma|(1+e^{(u-|\xi|^2t/(2\pi)+\varepsilon u)R}\right)} \\ \le& \exp \left[ N\Re\Phi\left(\left(-\frac{ut}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \right] \exp\left(2A+B|\gamma|(1+e^{(u+|\xi|^2/(2\pi)-\varepsilon u)R}\right) \end{split} \end{equation} for $0\le t\le1$. \par Now we want to estimate $\Re\Phi\bigl((-ut/(2\pi)+\sqrt{-1})t+\varepsilon\bigr)$. 
\par From the definition we have \begin{multline*} \Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \\ = \frac{1}{\xi} \left( \Li_2(e^{u+|\xi|^2t/(2\pi)-\varepsilon\xi}) - \Li_2(e^{u-|\xi|^2t/(2\pi)+\varepsilon\xi}) \right) +\frac{u^2t}{2\pi}-\sqrt{-1}ut-\varepsilon u. \end{multline*} Since we may assume that $\Re(u+|\xi|^2t/(2\pi)-\xi\varepsilon)=(1-\varepsilon)u+|\xi|^2t/(2\pi)>0$, there are two cases to consider; the case where $u-|\xi|^2t/(2\pi)<0$ and the case where $u-|\xi|^2t/(2\pi)\ge0$. \par If $|z|>1$, it is convenient to replace $\Li_2(z)$ with $\Li_2(z^{-1})$ using the following well-known formula. \begin{equation}\label{eq:dilog} \Li_2(z)+\Li_2(z^{-1}) = -\frac{\pi^2}{6}-\frac{1}{2}\bigl(\log(-z)\bigr)^2, \end{equation} where we choose a branch of $\log(-z)$ so that $-\pi<\Im\log(-z)<\pi$. \begin{itemize} \item The case where $u-|\xi|^2t/(2\pi)<0$. We choose $\varepsilon$ small enough so that $u-|\xi|^2t/(2\pi)+\varepsilon u<0$. From \eqref{eq:dilog} we have \begin{equation*} \begin{split} &\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \\ =& \frac{1}{\xi} \left(\vphantom{ \left(u+\frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\sqrt{-1}\pi\right)^2} - \Li_2(e^{-u-|\xi|^2t/(2\pi)+\varepsilon\xi}) - \Li_2(e^{u-|\xi|^2t/(2\pi)+\varepsilon\xi}) \right. \\ & \left.\quad -\frac{\pi^2}{6} -\frac{1}{2} \left( u+\frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\sqrt{-1}\pi \right)^2 \right) \\ &+\frac{u^2t}{2\pi}-\sqrt{-1}ut-\varepsilon u \\ =& \frac{1}{\xi} \left( - \Li_2(e^{u-|\xi|^2t/(2\pi)+\xi\varepsilon}) - \Li_2(e^{u-|\xi|^2t/(2\pi)+\xi\varepsilon}) \right) \\ &-\frac{\pi^2}{6\xi} -\frac{1}{2\xi} \left( u+\frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\sqrt{-1}\pi \right)^2 \\ &+\frac{u^2t}{2\pi}-\sqrt{-1}ut-\varepsilon u, \end{split} \end{equation*} where in the first equality we choose the sign of $\sqrt{-1}\pi$ so that $\Im(u+|\xi|^2t/(2\pi)-\varepsilon\xi+\pi\sqrt{-1})=-\varepsilon u+\pi$ is between $-\pi$ and $\pi$. Since the dilogarithm function $\Li(z)$ is analytic when $\Re(z)<1$, we have \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \\ =& \frac{1}{\xi} \left( - \Li_2(e^{-u-|\xi|^2t/(2\pi)}) - \Li_2(e^{u-|\xi|^2t/(2\pi)}) \right) \\ &-\frac{\pi^2}{6\xi} -\frac{1}{2\xi} \left( u+\frac{|\xi|^2t}{2\pi}+\sqrt{-1}\pi \right)^2 +\frac{u^2t}{2\pi}-\sqrt{-1}ut. \end{split} \end{equation*} Since $\Li_2(z)$ is real when $z$ is real and $z<1$, we have \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Re\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \\ =& -\frac{u}{|\xi|^2} \left( \Li_2(e^{-u-|\xi|^2t/(2\pi)}) + \Li_2(e^{u-|\xi|^2t/(2\pi)}) \right) \\ &-\frac{u\pi^2}{6|\xi|^2} -\frac{u}{2|\xi|^2} \left( u+\frac{|\xi|^2t}{2\pi} \right)^2 + \frac{u\pi^2}{2|\xi|^2} - \frac{4\pi(u+|\xi|^2t/(2\pi))\pi}{2|\xi|^2} +\frac{u^2t}{2\pi} \\ =& -\frac{u}{|\xi|^2} \left( \Li_2(e^{-u-|\xi|^2t/(2\pi)}) + \Li_2(e^{u-|\xi|^2t/(2\pi)}) \right) \\ &-\frac{5u\pi^2}{6|\xi|^2} -\frac{u^3}{2|\xi|^2} -\frac{u|\xi|^2t^2}{8\pi^2} -\pi t \\ <&0 \end{split} \end{equation*} if $u>0$. \item The case where $u-|\xi|^2t/(2\pi)\ge0$. In this case we have \begin{equation*} \begin{split} &\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \\ =& \frac{1}{\xi} \left( \vphantom{\left(\frac{|\xi|^2t}{2\pi}\right)^2} - \Li_2(e^{-u-|\xi|^2t/(2\pi)+\varepsilon\xi}) + \Li_2(e^{-u+|\xi|^2t/(2\pi)-\varepsilon\xi}) \right. 
\\ &\left.\quad -\frac{1}{2} \left( u+\frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\pi\sqrt{-1} \right)^2 +\frac{1}{2} \left( u-\frac{|\xi|^2t}{2\pi}+\varepsilon\xi-\pi\sqrt{-1} \right)^2 \right) \\ &+ \frac{u^2t}{2\pi}-\sqrt{-1}ut-u\varepsilon \\ =& \frac{1}{\xi} \left( \vphantom{\left(\frac{|\xi|^2t}{2\pi}\right)^2} - \Li_2(e^{-u-|\xi|^2t/(2\pi)+\varepsilon\xi}) + \Li_2(e^{-u+|\xi|^2t/(2\pi)-\varepsilon\xi}) \right) \\ & -\frac{2u}{\xi} \left( \frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\pi\sqrt{-1} \right) + \frac{u^2t}{2\pi}-\sqrt{-1}ut-u\varepsilon \end{split} \end{equation*} and so we have \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Re\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+\varepsilon\right) \\ =& \frac{u}{|\xi|^2} \left( - \Li_2(e^{-u-|\xi|^2t/(2\pi)}) + \Li_2(e^{-u+|\xi|^2t/(2\pi)}) \right) \\ & -\Re \left( \frac{2u}{\xi} \left( \frac{|\xi|^2t}{2\pi}+\pi\sqrt{-1} \right) \right) + \frac{u^2t}{2\pi} \\ =& \frac{u}{|\xi|^2} \left( - \Li_2(e^{-u-|\xi|^2t/(2\pi)}) + \Li_2(e^{-u+|\xi|^2t/(2\pi)}) \right) -\frac{4\pi^2u}{|\xi|^2}-\frac{u^2t}{2\pi} \\ <&0 \end{split} \end{equation*} if $u>0$. \end{itemize} Therefore for any $t$ we have $\Re\Phi\bigl((-u/(2\pi)+\sqrt{-1})t+\varepsilon\bigr)\le0$ for small $\varepsilon>0$. \par So from \eqref{eq:I_{+,1}integral} and \eqref{eq:I_{+,1}Phi} we have \begin{equation*} \begin{split} \left|I_{+,1}(N)\right| &\le \frac{2}{1-e^{-\pi^2/u}} \exp\left(2A+B|\gamma|(1+e^{(u+|\xi|^2/(2\pi)-\varepsilon u)R}\right) \int_{0}^{1}e^{-2N\pi t}\,dt \\ &= \frac{1-e^{-2N\pi}}{N\pi(1-e^{-\pi^2/u})} \exp\left(2A+B|\gamma|(1+e^{(u+|\xi|^2/(2\pi))R}\right) \\ &< \frac{K_{+,1}}{N} \end{split} \end{equation*} for a positive constant $K_{+,1}$. \end{proof \begin{proof}[Proof of \eqref{eq:I_{-,1}}] Since $\tan$ is an odd function we have from \eqref{eq:I_{+,1}tan} \begin{equation}\label{eq:I_{-,1}tan} |\tan\bigl(N\pi(u/(2\pi)-\sqrt{-1})t\bigr)+\sqrt{-1}| < \frac{2e^{-2N\pi t}}{1-e^{-\pi^2/u}} \end{equation} for any $0\le t\le 1$. \par So we have \begin{equation}\label{eq:I_{-,1}integral} \left|I_{-,1}(N)\right| \le \frac{2}{1-e^{-\pi^2/u}} \int_{0}^{1} e^{-2N\pi t} \left| g_N\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t\right) \right| \,dt. \end{equation} \par We estimate $|g_N((u/(2\pi)-\sqrt{-1})t)|$. \par As in the case of $I_{+,1}$ we need to calculate $g_N(g_N((u/(2\pi)-\sqrt{-1})t)+\varepsilon)$. \par From \eqref{eq:g_N} we have \begin{equation*} \begin{split} &g_{N}\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \\ =& \exp \left[ N\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \right] \\ & \times \exp \left[ I_{\gamma} \left( \pi-\sqrt{-1}u+\sqrt{-1}\frac{|\xi|^2t}{2\pi}+\sqrt{-1}\varepsilon\xi \right) \right. \\ &\phantom{\times\exp\bigl[} \left. - I_{\gamma} \left( -\pi-\sqrt{-1}u-\sqrt{-1}\frac{|\xi|^2t}{2\pi}-\sqrt{-1}\varepsilon\xi \right) \right]. \end{split} \end{equation*} \par From Lemma~\ref{lem:Lemma3}, there exist $A>0$ and $B>0$ such that $|I_{\gamma}(z)|\le 2A+B|\gamma|(1+e^{-\Im(z)R})$. 
So we have \begin{equation}\label{eq:I_{-,1}Phi} \begin{split} & \left| g_{N}\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \right| \\ \le& \exp \left[ N\Re\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \right] \frac{\exp\left(2A+B|\gamma|(1+e^{(u-|\xi|^2t/(2\pi)-\varepsilon u)R}\right)} {\exp\left(2A+B|\gamma|(1+e^{(u+|\xi|^2t/(2\pi)+\varepsilon u)R}\right)} \\ \le& \exp \left[ N\Re\Phi\left(\left(\frac{ut}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \right] \exp\left(2A+B|\gamma|(1+e^{(u-|\xi|^2/(2\pi)-\varepsilon u)R}\right) \end{split} \end{equation} for $0\le t\le1$. \par Now we want to estimate $\Re\Phi\bigl((ut/(2\pi)-\sqrt{-1})t+\varepsilon\bigr)$. \par From the definition we have \begin{multline*} \Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \\ = \frac{1}{\xi} \left( \Li_2(e^{u-|\xi|^2t/(2\pi)-\varepsilon\xi}) - \Li_2(e^{u+|\xi|^2t/(2\pi)+\varepsilon\xi}) \right) -\frac{u^2t}{2\pi}+\sqrt{-1}ut-\varepsilon u. \end{multline*} Since $\Re(u+|\xi|^2t/(2\pi)+\varepsilon\xi)=(1+\varepsilon)u+|\xi|^2t/(2\pi)>0$, there are two cases to consider; the case where $u-|\xi|^2t/(2\pi)<0$ and the case where $u-|\xi|^2t/(2\pi)\ge0$. \begin{itemize} \item The case where $u-|\xi|^2t/(2\pi)\le0$. In this case we have $u-|\xi|^2t/(2\pi)-\varepsilon u<0$. From \eqref{eq:dilog} we have \begin{equation*} \begin{split} &\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \\ =& \frac{1}{\xi} \left(\vphantom{ \left(u+\frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\sqrt{-1}\pi\right)^2} \Li_2(e^{u-|\xi|^2t/(2\pi)-\varepsilon\xi}) + \Li_2(e^{-u-|\xi|^2t/(2\pi)-\varepsilon\xi}) \right. \\ & \left.\quad +\frac{\pi^2}{6} +\frac{1}{2} \left( u+\frac{|\xi|^2t}{2\pi}+\varepsilon\xi-\sqrt{-1}\pi \right)^2 \right) \\ &-\frac{u^2t}{2\pi}+\sqrt{-1}ut-\varepsilon u \\ =& \frac{1}{\xi} \left( \Li_2(e^{u-|\xi|^2t/(2\pi)-\xi\varepsilon}) + \Li_2(e^{-u-|\xi|^2t/(2\pi)-\xi\varepsilon}) \right) \\ &+\frac{\pi^2}{6\xi} +\frac{1}{2\xi} \left( u+\frac{|\xi|^2t}{2\pi}+\varepsilon\xi-\sqrt{-1}\pi \right)^2 \\ &-\frac{u^2t}{2\pi}+\sqrt{-1}ut-\varepsilon u. \end{split} \end{equation*} Therefore we have \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \\ =& \frac{1}{\xi} \left( \Li_2(e^{u-|\xi|^2t/(2\pi)}) + \Li_2(e^{-u-|\xi|^2t/(2\pi)}) \right) \\ &+\frac{\pi^2}{6\xi} +\frac{1}{2\xi} \left( u+\frac{|\xi|^2t}{2\pi}-\sqrt{-1}\pi \right)^2 \\ &-\frac{u^2t}{2\pi}+\sqrt{-1}ut \end{split} \end{equation*} and so \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Re\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \\ =& \frac{u}{|\xi|^2} \left( \Li_2(e^{u-|\xi|^2t/(2\pi)}) + \Li_2(e^{-u-|\xi|^2t/(2\pi)}) \right) \\ &+\frac{u\pi^2}{6|\xi|^2} +\frac{u}{2|\xi|^2} \left( u+\frac{|\xi|^2t}{2\pi} \right)^2 - \frac{u\pi^2}{2|\xi|^2} - \frac{4\pi(u+|\xi|^2t/(2\pi))\pi}{2|\xi|^2} -\frac{u^2t}{2\pi} \\ =& \frac{u}{|\xi|^2} \left( \Li_2(e^{u-|\xi|^2t/(2\pi)}) + \Li_2(e^{-u-|\xi|^2t/(2\pi)}) \right) \\ &-\frac{7u\pi^2}{3|\xi|^2} +\frac{u^3}{2|\xi|^2} +\frac{u|\xi|^2t^2}{8\pi^2} -\pi t \\ =& -2\frac{u\pi^2}{|\xi|^2} +\frac{u^3}{2|\xi|^2} +\frac{u|\xi|^2t^2}{8\pi^2} -\pi t \end{split} \end{equation*} since $\Li_2(z)\le\pi^2/6$ for $0<z\le1$. We can easily prove \begin{equation*} -2\frac{u\pi^2}{|\xi|^2} +\frac{u^3}{2|\xi|^2} +\frac{u|\xi|^2t^2}{8\pi^2} -\pi t < \pi t \end{equation*} when $0<u<1$ and $2u\pi/|\xi|^2\le t\le1$. 
So we have \begin{equation*} \lim_{\varepsilon\searrow0} \Re\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) <\pi t \end{equation*} in this case. \item The case where $u-|\xi|^2t/(2\pi)>0$. We choose $\varepsilon$ so that $u-|\xi|^2t/(2\pi)-\varepsilon u>0$. In this case we have \begin{equation*} \begin{split} &\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \\ =& \frac{1}{\xi} \left( \vphantom{\left(\frac{|\xi|^2t}{2\pi}\right)^2} - \Li_2(e^{-u+|\xi|^2t/(2\pi)+\varepsilon\xi}) + \Li_2(e^{-u-|\xi|^2t/(2\pi)-\varepsilon\xi}) \right. \\ &\left.\quad -\frac{1}{2} \left( u-\frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\pi\sqrt{-1} \right)^2 +\frac{1}{2} \left( -u-\frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\pi\sqrt{-1} \right)^2 \right) \\ &- \frac{u^2t}{2\pi}+\sqrt{-1}ut-\varepsilon u \end{split} \end{equation*} and so we have \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Re\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+\varepsilon\right) \\ =& \frac{u}{|\xi|^2} \left( - \Li_2(e^{-u+|\xi|^2t/(2\pi)}) + \Li_2(e^{-u-|\xi|^2t/(2\pi)}) \right) +\frac{u^2t}{2\pi}-\frac{4u\pi^2}{|\xi|^2} \\ <& \frac{u^2t}{2\pi}-\frac{23u\pi^2}{6|\xi|^2} <0 \end{split} \end{equation*} if $0<u<1$ and $0\le t<2u\pi/|\xi|^2$. \end{itemize} Therefore for any $0\le t\le1$ we have $\Re\Phi\bigl((u/(2\pi)-\sqrt{-1})t+\varepsilon\bigr)\le\pi t$ for small $\varepsilon>0$. \par So from \eqref{eq:I_{-,1}integral} and \eqref{eq:I_{-,1}Phi} we have \begin{equation*} \begin{split} \left|I_{-,1}(N)\right| &\le \frac{2}{1-e^{-\pi^2/u}} \exp\left(2A+B|\gamma|(1+e^{(u-|\xi|^2/(2\pi))R}\right) \int_{0}^{1}e^{-N\pi t}\,dt \\ &= \frac{2(1-e^{-N\pi})}{N(1-e^{-\pi^2/u})} \exp\left(2A+B|\gamma|(1+e^{(u-|\xi|^2/(2\pi))R}\right) \\ &< \frac{K_{-,1}}{N} \end{split} \end{equation*} for a positive constant $K_{-,1}$. \end{proof \begin{proof}[Proof of \eqref{eq:I_{+,3}}] Since $\tan$ has period $\pi$ we have \begin{equation}\label{eq:tan_I+3} |\tan(N\pi (-u/(2\pi)+\sqrt{-1})t)-\sqrt{-1}| < \frac{2e^{-2N\pi t}}{1-e^{-\pi^2/u}} \end{equation} from \eqref{eq:I_{+,1}tan} \par From \eqref{eq:tan_I+3} we have \begin{equation}\label{eq:I_{+,3}integral} \left|I_{+,3}(N)\right| \le \frac{2}{1-e^{-\pi^2/u}} \int_{0}^{1} e^{-2N\pi t} \left| g_N\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+1\right) \right| \,dt. \end{equation} \par As in the case of $I_{+,1}(N)$ we consider the integral on the segment $(-u/(2\pi)+\sqrt{-1})t+1-\varepsilon$ ($0\le t\le1$) for small $\varepsilon$. \par We estimate $\Phi\bigl((-u/(2\pi)+\sqrt{-1})t+1-\varepsilon\bigr)$. We have \begin{equation*} \begin{split} &\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& \frac{1}{\xi} \left( \Li_2(e^{u+|\xi|^2t/(2\pi)-(1-\varepsilon)\xi}) - \Li_2(e^{u-|\xi|^2t/(2\pi)+(1-\varepsilon)\xi}) \right) \\ &+\frac{u^2t}{2\pi}-\sqrt{-1}ut-(1-\varepsilon)u. \end{split} \end{equation*} Since we may assume that $\Re(u+|\xi|^2t/(2\pi)-(1-\varepsilon)\xi)=\varepsilon u+|\xi|^2t/(2\pi)>0$, there are two cases to consider; the case where $2u-|\xi|^2t/(2\pi)\le0$ and the case where $2u-|\xi|^2t/(2\pi)>0$. \begin{itemize} \item $2u-|\xi|^2t/(2\pi)\le0$. 
In this case, since $u-|\xi|^2t/(2\pi)+(1-\varepsilon)u<0$, from \eqref{eq:dilog} we have \begin{equation*} \begin{split} &\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& \frac{1}{\xi} \left( \vphantom{ \left( u+\frac{|\xi|^2t}{2\pi}-(1-\varepsilon)\xi+\pi\sqrt{-1} \right)^2} - \Li_2(e^{-u-|\xi|^2t/(2\pi)+(1-\varepsilon)\xi}) - \Li_2(e^{u-|\xi|^2t/(2\pi)+(1-\varepsilon)\xi}) \right. \\ & \quad \left. -\frac{\pi^2}{6} -\frac{1}{2} \left( u+\frac{|\xi|^2t}{2\pi}-(1-\varepsilon)\xi+\pi\sqrt{-1} \right)^2 \right) \\ &+\frac{u^2t}{2\pi}-\sqrt{-1}ut-(1-\varepsilon)u \end{split} \end{equation*} Since the dilogarithm function $\Li(z)$ is analytic when $\Re(z)<1$, we have \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& \frac{1}{\xi} \left( - \Li_2(e^{-|\xi|^2t/(2\pi)}) - \Li_2(e^{2u-|\xi|^2t/(2\pi)}) \right) \\ &-\frac{\pi^2}{6\xi} -\frac{1}{2\xi} \left( \frac{|\xi|^2t}{2\pi}-\pi\sqrt{-1} \right)^2 \\ &+\frac{u^2t}{2\pi}-\sqrt{-1}ut-u. \end{split} \end{equation*} So we have \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Re\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& -\frac{u}{|\xi|^2} \left( \Re\Li_2(e^{-|\xi|^2t/(2\pi)}) + \Re\Li_2(e^{2u-|\xi|^2t/(2\pi)}) \right) \\ &-\frac{u\pi^2}{6|\xi|^2} -\frac{u}{2|\xi|^2} \frac{|\xi|^4t^2}{4\pi^2} + \frac{u\pi^2}{2|\xi|^2} - \frac{4\pi|\xi|^2t/(2\pi)\pi}{2|\xi|^2} + \frac{u^2t}{2\pi} -u \\ =& -\frac{u}{|\xi|^2} \left( \Re\Li_2(e^{-|\xi|^2t/(2\pi)}) + \Re\Li_2(e^{2u-|\xi|^2t/(2\pi)}) \right) \\ &+\frac{u\pi^2}{3|\xi|^2} -\frac{u|\xi|^2t^2}{8\pi^2} -\pi t +\frac{u^2t}{2\pi} -u \le0. \end{split} \end{equation*} The last inequality follows since $\frac{u\pi^2}{3|\xi|^2} -\frac{u|\xi|^2t^2}{8\pi^2} -\pi t +\frac{u^2t}{2\pi} -u$, is a quadratic function with respect to $u$ with non-positive maximum. \item $2u-|\xi|^2t/(2\pi)>0$. In this case we may choose $\varepsilon$ small so that $u-|\xi|^2t/(2\pi)+(1-\varepsilon)u>0$. Then we have \begin{equation*} \begin{split} &\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& \frac{1}{\xi} \left( \vphantom{\left(\frac{|\xi|^2t}{2\pi}\right)^2} - \Li_2(e^{-u-|\xi|^2t/(2\pi)+(1-\varepsilon)\xi}) + \Li_2(e^{-u+|\xi|^2t/(2\pi)-(1-\varepsilon)\xi}) \right. \\ &\left. -\frac{1}{2} \left( u+\frac{|\xi|^2t}{2\pi}-(1-\varepsilon)\xi+\pi\sqrt{-1} \right)^2 +\frac{1}{2} \left( u-\frac{|\xi|^2t}{2\pi}+(1-\varepsilon)\xi-\pi\sqrt{-1} \right)^2 \right) \\ &+ \frac{u^2t}{2\pi}-\sqrt{-1}ut-(1-\varepsilon)u \end{split} \end{equation*} and so we have \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Re\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& \frac{u}{|\xi|^2} \left( - \Li_2(e^{-|\xi|^2t/(2\pi)}) + \Li_2(e^{-2u+|\xi|^2t/(2\pi)}) \right) +\frac{2u^3}{|\xi|^2} -\frac{u^2t}{2\pi} +\frac{4\pi^2u}{|\xi|^2} -u. \end{split} \end{equation*} Putting $s:=|\xi|^2t/(2\pi)$ and consider the function \begin{equation*} \begin{split} f(u,s) &:= - \Li_2(e^{-s}) + \Li_2(e^{-2u+s}) +2u^2 -us +4\pi^2 -|\xi|^2 -\frac{2\pi^2s}{u} \\ &= - \Li_2(e^{-s}) + \Li_2(e^{-2u+s}) +u^2 -us-\frac{2\pi^2s}{u} \end{split} \end{equation*} so that \begin{equation*} \frac{u}{|\xi|^2}f\left(u,\frac{|\xi|^2t}{2\pi}\right) = \lim_{\varepsilon\searrow0} \Re\Phi\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+1-\varepsilon\right) -\pi t. \end{equation*} \end{itemize} \par We will show $f(u,s)\le0$ for $0\le s<2u\le2$. 
\par We have \begin{equation*} \exp\left[-\frac{\partial\,f(u,t)}{\partial\,t}\right] = 2e^{2\pi^2/u} \left( \cosh(u)-\cosh(s-u) \right). \end{equation*} Therefore it can be shown that for fixed $u$, $f(u,s)$ is increasing for $0\le s<u-\arccosh(\cosh(u)-\exp(-2\pi^2/u)/2)$, decreasing for $u-\arccosh(\cosh(u)-\exp(-2\pi^2/u)/2)<s<u+\arccosh(\cosh(u)-\exp(-2\pi^2/u)/2)$, and increasing for $u+\arccosh(\cosh(u)-\exp(-2\pi^2/u)/2)<s<u$. Since a graph of $f\bigl(u,u-\arccosh(\cosh(u)-\exp(-2\pi^2/u)/2)\bigr)$ looks as Figure~\ref{fig:graph} and \begin{equation*} f(u,2u) = -\frac{23\pi^2}{6} -u^2 -\Li_2(e^{-2u}) <0, \end{equation*} we see $f(u,s)\le0$. \begin{figure}[h] \includegraphics[scale=0.7]{graph.eps} \caption{A graph of $f\bigl(u,u-\arccosh(\cosh(u)-\exp(-4\pi^2/u)/2)\bigr)$.} \label{fig:graph} \end{figure} \par Therefore we finally have \begin{equation*} \begin{split} |I_{+,3}(N)| &< \frac{2}{1-e^{-\pi^2/u}} \exp\left(2A+B(1+e^{(u+|\xi|^2/(2\pi))R})|\gamma|\right) \int_{4\pi u/|\xi|^2}^{1} e^{-2N\pi t}\,dt \\ &= \frac{e^{-8N\pi^2u/|\xi|^2}-e^{-2N\pi}}{N\pi(1-e^{-\pi^2/u})} \exp\left(2A+B(1+e^{(u+|\xi|^2/(2\pi))R})|\gamma|\right) \\ &< \frac{K_{+,3}}{N} \end{split} \end{equation*} for a positive constant $K_{+,3}$. \end{proof \begin{proof}[Proof of \eqref{eq:I_{-,3}}] Since $\tan$ has period $\pi$ we have \begin{equation}\label{eq:tan_I-3} |\tan(N\pi((u/(2\pi)-\sqrt{-1})t+1)+\sqrt{-1}| < \frac{2e^{-2N\pi t}}{1-e^{-\pi^2/u}} \end{equation} from \eqref{eq:I_{+,1}tan} \par From \eqref{eq:tan_I-3} we have \begin{equation}\label{eq:I_{-,3}integral} \left|I_{-,3}(N)\right| \le \frac{2}{1-e^{-\pi^2/u}} \int_{0}^{1} e^{-2N\pi t} \left| g_N\left(\left(-\frac{u}{2\pi}+\sqrt{-1}\right)t+1\right) \right| \,dt. \end{equation} \par As in the case of $I_{+,1}(N)$ we consider the integral on the segment $(u/(2\pi)-\sqrt{-1})t+1-\varepsilon$ ($0\le t\le1$) for small $\varepsilon$. \par We estimate $\Phi\bigl((u/(2\pi)-\sqrt{-1})t+1-\varepsilon\bigr)$. We have \begin{equation*} \begin{split} &\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& \frac{1}{\xi} \left( \Li_2(e^{u-|\xi|^2t/(2\pi)-(1-\varepsilon)\xi}) - \Li_2(e^{u+|\xi|^2t/(2\pi)+(1-\varepsilon)\xi}) \right) \\ &-\frac{u^2t}{2\pi}+\sqrt{-1}ut-(1-\varepsilon)u. \end{split} \end{equation*} We will calculate $\lim_{\varepsilon\searrow0}\Phi\bigl((u/(2\pi)-\sqrt{-1})t+1-\varepsilon\bigr)$. \par Note that $\Re(u+|\xi|^2t/(2\pi)+(1-\varepsilon)\xi)=2u+|\xi|^2t/(2\pi)-\varepsilon u>0$ for small $\varepsilon$. Since $\Re(u-|\xi|^2t/(2\pi)-(1-\varepsilon)\xi)=-|\xi|^2t/(2\pi)+\varepsilon u$, if $t>0$ we assume that $\Re(u-|\xi|^2t/(2\pi)-(1-\varepsilon)\xi)<0$. Therefore we assume that $t>0$. 
\par In this case from \eqref{eq:dilog} we have \begin{equation*} \begin{split} &\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& \frac{1}{\xi} \left( \Li_2(e^{-|\xi|^2t/(2\pi)+\varepsilon\xi}) + \Li_2(e^{-2u-|\xi|^2t/(2\pi)+\varepsilon\xi}) \right) \\ & +\frac{\pi^2}{6\xi} +\frac{1}{2\xi} \left( 2u+\frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\pi\sqrt{-1} \right)^2 \\ &-\frac{u^2t}{2\pi}+\sqrt{-1}ut-(1-\varepsilon)u \end{split} \end{equation*} and \begin{equation*} \begin{split} &\Re\Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& \frac{1}{\xi} \left( \Li_2(e^{-|\xi|^2t/(2\pi)+\varepsilon\xi}) + \Li_2(e^{-2u-|\xi|^2t/(2\pi)+\varepsilon\xi}) \right) \\ & +\frac{\pi^2}{6\xi} +\frac{1}{2\xi} \left( 2u+\frac{|\xi|^2t}{2\pi}-\varepsilon\xi+\pi\sqrt{-1} \right)^2 \\ &-\frac{u^2t}{2\pi}+\sqrt{-1}ut-(1-\varepsilon)u. \end{split} \end{equation*} So we have \begin{equation*} \begin{split} &\lim_{\varepsilon\searrow0} \Phi\left(\left(\frac{u}{2\pi}-\sqrt{-1}\right)t+1-\varepsilon\right) \\ =& \frac{1}{\xi} \left( \Li_2(e^{-|\xi|^2t/(2\pi)}) + \Li_2(e^{-2u-|\xi|^2t/(2\pi)}) \right) \\ & +\frac{\pi^2}{6\xi} +\frac{1}{2\xi} \left( 2u+\frac{|\xi|^2t}{2\pi}+\pi\sqrt{-1} \right)^2 \\ &-\frac{u^2t}{2\pi}+\sqrt{-1}ut-u \\ =& \frac{u}{|\xi|^2} \left( \Li_2(e^{-|\xi|^2t/(2\pi)}) + \Li_2(e^{-2u-|\xi|^2t/(2\pi)}) \right) \\ &+\frac{u\pi^2}{6|\xi|^2} + \frac{u}{2|\xi|^2} \left( 2u+\frac{|\xi|^2t}{2\pi} \right)^2 - \frac{u\pi^2}{2|\xi|^2} + \frac{4\pi^2}{2|\xi|^2} \left( 2u+\frac{|\xi|^2t}{2\pi} \right) - \frac{u^2t}{2\pi} -u \\ \le &\frac{11u\pi^2}{3|\xi|^2} +\frac{2u^3}{|\xi|^2} +\frac{u|\xi|^2t^2}{8\pi^2} +\pi t +\frac{u^2t}{2\pi} -u <\frac{3}{2}\pi t \end{split} \end{equation*} if $0<u<1$ and $0<t\le1$. \par Therefore we finally have \begin{equation*} \begin{split} |I_{-,3}(N)| &< \frac{2}{1-e^{-\pi^2/u}} \exp\left(2A+B(1+e^{(u+|\xi|^2/(2\pi))R})|\gamma|\right) \int_{0}^{1} e^{-N\pi t/2} \,dt \\ &= \frac{4(1-e^{-N\pi/2})}{N\pi(1-e^{-\pi^2/u)}} \exp\left(2A+B(1+e^{(u+|\xi|^2/(2\pi))R})|\gamma|\right) \\ &< \frac{K_{-,3}}{N} \end{split} \end{equation*} for a positive constant $K_{-,3}$. \end{proof \begin{proof}[Proof of \eqref{eq:I_{+,2}}] From \cite[Equation~(4.6)]{Andersen/Hansen:JKNOT2006} we have \begin{equation*} |I_{+,2}(N)| \le 4e^{-2\pi N} \int_{\varepsilon}^{1-\varepsilon}|g_N(-u/(2\pi)+\sqrt{-1}+t)|\,dt. \end{equation*} From \eqref{eq:g_N} we have \begin{multline*} |g_N(-u/(2\pi)+\sqrt{-1}+t)| \\ \le \exp\bigl(N\Re(\Phi(-u/(2\pi)+\sqrt{-1}+t))\bigr) \exp\bigl(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})\bigr). \end{multline*} \par Now estimate $\Re(\Phi(-u/(2\pi)+\sqrt{-1}+t))$. We have \begin{equation*} \begin{split} &\Phi\left(-\frac{u}{2\pi}+\sqrt{-1}+t\right) \\ =& \frac{1}{\xi} \left( \Li_2(e^{u+|\xi|^2/(2\pi)-\xi t}) - \Li_2(e^{u-|\xi|^2/(2\pi)+\xi t}) \right) +\frac{u^2}{2\pi}-\sqrt{-1}u-ut. \end{split} \end{equation*} Since $\Re(u+|\xi|^2/(2\pi)-\xi t)=u(1-t)+|\xi|^2/(2\pi)>0$ and $\Re(u-|\xi|^2/(2\pi)+\xi t)=u(1+t)-|\xi|^2/(2\pi)\le0$, we have \begin{equation*} \begin{split} &\Phi\left(-\frac{u}{2\pi}+\sqrt{-1}+t\right) \\ =& -\frac{1}{\xi} \left( \Li_2(e^{u+|\xi|^2/(2\pi)-\xi t}) + \Li_2(e^{u-|\xi|^2/(2\pi)+\xi t}) \right) \\ & -\frac{\pi^2}{6\xi} -\frac{1}{2\xi} \left( u+\frac{|\xi|^2}{2\pi}-(u+2\pi\sqrt{-1})t+\pi\sqrt{-1} \right)^2 +\frac{u^2}{2\pi}-\sqrt{-1}u-ut \end{split} \end{equation*} from \eqref{eq:dilog}. 
Therefore we have \begin{equation*} \begin{split} &\Re\Phi\left(-\frac{u}{2\pi}+\sqrt{-1}+t\right) \\ =& -\frac{u}{|\xi|^2} \left( \Re\Li_2(e^{u+|\xi|^2/(2\pi)-\xi t}) + \Re\Li_2(e^{u-|\xi|^2/(2\pi)+\xi t}) \right) \\ &-\frac{2\pi}{|\xi|^2} \left( \Im\Li_2(e^{u+|\xi|^2/(2\pi)-\xi t}) + \Im\Li_2(e^{u-|\xi|^2/(2\pi)+\xi t}) \right) \\ & +(2t-1)\pi -\frac{1}{2}(t^2+1)u +\frac{u^2t}{2\pi} -\frac{u^3}{8\pi^2} +\frac{\pi^2u}{3|\xi|^2} \\ \le& -\frac{u}{|\xi|^2} \left( \Re\Li_2(e^{u+|\xi|^2/(2\pi)-\xi t}) + \Re\Li_2(e^{u-|\xi|^2/(2\pi)+\xi t}) \right) \\ &-\frac{2\pi}{|\xi|^2} \left( \Im\Li_2(e^{u+|\xi|^2/(2\pi)-\xi t}) + \Im\Li_2(e^{u-|\xi|^2/(2\pi)+\xi t}) \right) \\ &+(2t-1)\pi. \end{split} \end{equation*} For $0<r<1$ and $0<\theta<2\pi$ we have \begin{equation*} \begin{split} \Re\Li_2(re^{\sqrt{-1}\theta}) &= -\frac{1}{2} \int_{0}^{r} \frac{\log(1-2s\cos\theta+s^2)}{s}\,ds \ge -\frac{1}{2} \int_{0}^{r} \frac{\log(1+2s+s^2)}{s}\,ds \\ &= -\int_{0}^{r}\frac{\log(1+s)}{s}\,ds \ge -\int_{0}^{1}\frac{\log(1+s)}{s}\,ds = \Li_2(-1) = -\frac{\pi^2}{12}. \end{split} \end{equation*} We also have \begin{equation*} \Im\Li_2(re^{\sqrt{-1}\theta}) = -\int_{0}^{r} \frac{\arg(1-se^{\sqrt{-1}\theta})}{s}\,ds. \end{equation*} If $0<\theta\le\pi$ the right hand side is non-negative. If $\pi<\theta<2\pi$ we have \begin{equation*} \begin{split} \Im\Li_2(re^{\sqrt{-1}\theta}) &= -\int_{0}^{1}\frac{\arg(1-se^{\sqrt{-1}\theta})}{s}\,ds + \int_{r}^{1} \frac{\arg(1-se^{\sqrt{-1}\theta})}{s}\,ds \\ &= \Im\Li_2(e^{\sqrt{-1}\theta}) + \int_{r}^{1} \frac{\arg(1-se^{\sqrt{-1}\theta})}{s}\,ds. \end{split} \end{equation*} The second integral is positive and $\Im\Li_2(e^{\sqrt{-1}\theta})$ is bigger than or equal to $-\Im\Li_2(\exp(\sqrt{-1}\pi/3))=-1.01494\ldots$. \par Therefore we have \begin{equation*} \Re\Phi\left(-\frac{u}{2\pi}+\sqrt{-1}+t\right) \le \frac{\pi^2u}{6|\xi|^2} + \frac{2\pi\Im\Li_2(e^{\sqrt{-1}\pi/3})}{|\xi|^2} +(2t-1)\pi \end{equation*} So we have \begin{equation*} \begin{split} &|I_{+,2}(N)| \\ \le& 4\exp(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})) e^{-3\pi N} \\ &\times \exp \left[ N \left( \frac{\pi^2u}{6|\xi|^2} + \frac{2\pi\Im\Li_2(e^{\sqrt{-1}\pi/3})}{|\xi|^2} \right) \right] \int_{0}^{1}e^{2N\pi t}\,dt \\ =& \frac{2(1-e^{-2\pi N})}{\pi N} \exp(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})) \\ &\times \exp \left[ \pi N \left( \frac{\pi u}{6|\xi|^2} + \frac{2\Im\Li_2(e^{\sqrt{-1}\pi/3})}{|\xi|^2} -1 \right) \right] \\ <& \frac{2}{\pi N} \exp(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})) \\ <& \frac{K_{+,2}}{N} \end{split} \end{equation*} for a positive constant $K_{+,2}$. Here we use the inequality \begin{equation*} \frac{\pi u}{6|\xi|^2} + \frac{2\Im\Li_2(e^{\sqrt{-1}\pi/3})}{|\xi|^2} -1 < \frac{\pi}{6} +\frac{1.01494\ldots}{2\pi^2} -1 <0. \end{equation*} \end{proof \begin{proof}[Proof of \eqref{eq:I_{-,2}}] From \cite[Equation~(4.6)]{Andersen/Hansen:JKNOT2006} we have \begin{equation*} |I_{-,2}(N)| \le 4e^{-2\pi N} \int_{\varepsilon}^{1-\varepsilon}|g_N(u/(2\pi)-\sqrt{-1}+t)|\,dt. \end{equation*} From \eqref{eq:g_N} we have \begin{multline*} |g_N(u/(2\pi)-\sqrt{-1}+t)| \\ \le \exp\bigl(N\Re(\Phi(u/(2\pi)-\sqrt{-1}+t))\bigr) \exp\bigl(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})\bigr). \end{multline*} \par Now estimate $\Re(\Phi(u/(2\pi)-\sqrt{-1}+t))$. We have \begin{equation*} \begin{split} &\Phi\left(\frac{u}{2\pi}-\sqrt{-1}+t\right) \\ =& \frac{1}{\xi} \left( \Li_2(e^{u-|\xi|^2/(2\pi)-\xi t}) - \Li_2(e^{u+|\xi|^2/(2\pi)+\xi t}) \right) -\frac{u^2}{2\pi}+\sqrt{-1}u-ut. 
\end{split} \end{equation*} Since $\Re(u-|\xi|^2/(2\pi)-\xi t)=u(1-t)-|\xi|^2/(2\pi)<0$ and $\Re(u+|\xi|^2/(2\pi)+\xi t)=u(1+t)+|\xi|^2/(2\pi)>0$, we have \begin{equation*} \begin{split} &\Phi\left(\frac{u}{2\pi}-\sqrt{-1}+t\right) \\ =& \frac{1}{\xi} \left( \Li_2(e^{u-|\xi|^2/(2\pi)-\xi t}) + \Li_2(e^{-u-|\xi|^2/(2\pi)-\xi t}) \right) \\ & +\frac{\pi^2}{6\xi} +\frac{1}{2\xi} \left( u+\frac{|\xi|^2}{2\pi}+(u+2\pi\sqrt{-1})t-\pi\sqrt{-1} \right)^2 -\frac{u^2}{2\pi}+\sqrt{-1}u-ut \end{split} \end{equation*} from \eqref{eq:dilog}. Therefore we have \begin{equation*} \begin{split} &\Re\Phi\left(\frac{u}{2\pi}-\sqrt{-1}+t\right) \\ =& \frac{u}{|\xi|^2} \left( \Re\Li_2(e^{u-|\xi|^2/(2\pi)-\xi t}) + \Re\Li_2(e^{-u-|\xi|^2/(2\pi)-\xi t}) \right) \\ & + \frac{2\pi}{|\xi|^2} \left( \Im\Li_2(e^{u-|\xi|^2/(2\pi)-\xi t}) + \Im\Li_2(e^{-u-|\xi|^2/(2\pi)-\xi t}) \right) \\ & +\frac{u\pi^2}{6|\xi|^2}-\frac{u^2}{2\pi}-ut \\ & +\frac{u}{2|\xi|^2} \left( \left( u+\frac{|\xi|^2}{2\pi}+ut \right)^2 - \pi^2(2t-1)^2 \right) \\ & + \frac{4\pi}{2|\xi|^2} \pi(2t-1) \left( u+\frac{|\xi|^2}{2\pi}+ut \right) \\ =& \frac{u}{|\xi|^2} \left( \Re\Li_2(e^{u-|\xi|^2/(2\pi)-\xi t}) + \Re\Li_2(e^{-u-|\xi|^2/(2\pi)-\xi t}) \right) \\ & + \frac{2\pi}{|\xi|^2} \left( \Im\Li_2(e^{u-|\xi|^2/(2\pi)-\xi t}) + \Im\Li_2(e^{-u-|\xi|^2/(2\pi)-\xi t}) \right) \\ & +\frac{ut^2}{2} +\frac{|\xi|^2t}{2\pi} -\frac{7u\pi^2}{3|\xi|^2} +\frac{u^3}{2|\xi|^2} +\frac{u|\xi|^2}{8\pi^2} -\pi. \end{split} \end{equation*} For $0<r<1$ and $0<\theta<2\pi$ we have \begin{equation*} \begin{split} \Re\Li_2(re^{\sqrt{-1}\theta}) &= -\frac{1}{2} \int_{0}^{r} \frac{\log(1-2s\cos\theta+s^2)}{s}\,ds \le -\frac{1}{2} \int_{0}^{r} \frac{\log(1-2s+s^2)}{s}\,ds \\ &= -\int_{0}^{r}\frac{\log(1-s)}{s}\,ds \le -\int_{0}^{1}\frac{\log(1-s)}{s}\,ds = \Li_2(1) = \frac{\pi^2}{6}. \end{split} \end{equation*} We also have \begin{equation*} \begin{split} \Im\Li_2(re^{\sqrt{-1}\theta}) &= -\int_{0}^{r} \frac{\arg(1-se^{\sqrt{-1}\theta})}{s}\,ds \\ &= -\int_{0}^{1}\frac{\arg(1-se^{\sqrt{-1}\theta})}{s}\,ds + \int_{r}^{1} \frac{\arg(1-se^{\sqrt{-1}\theta})}{s}\,ds \\ &\le \Im\Li_2(e^{\sqrt{-1}\theta}) \le \Im\Li_2(e^{\sqrt{-1}\pi/3}) = 1.01494\dots. \end{split} \end{equation*} when $0\le\theta\le\pi$. Since $\Im\Li_2(re^{\sqrt{-1}\theta})=-\Im\Li_2(re^{\sqrt{-1}(\theta-\pi)})$ when $\pi<\theta\le2\pi$, we have \begin{equation*} \begin{split} &\Re\Phi\left(\frac{u}{2\pi}-\sqrt{-1}+t\right) \\ =& \frac{4\pi}{|\xi|^2}\Im\Li_2(e^{\sqrt{-1}\pi/3}) +\frac{ut^2}{2} +\frac{|\xi|^2t}{2\pi} -2\frac{u\pi^2}{|\xi|^2} +\frac{u^3}{2|\xi|^2} +\frac{u|\xi|^2}{8\pi^2} -\pi \\ &\quad\text{(since $0<t<1$ and $0<u<\pi/2$)}. \\ <& \frac{|\xi|^2t}{2\pi} +\frac{1}{\pi}\Im\Li_2(e^{\sqrt{-1}\pi/3}) +\frac{\pi}{2} -\frac{4\pi}{17} +\frac{\pi}{64} +\frac{17\pi}{64} -\pi \\ \le& \frac{|\xi|^2t}{2\pi} + \frac{1}{\pi}\Im\Li_2(e^{\sqrt{-1}\pi/3}) -\frac{247}{544}\pi. \end{split} \end{equation*} when $u<\pi/2$. 
\par So we have \begin{equation*} \begin{split} &|I_{-,2}(N)| \\ <& 4e^{(\Im\Li_2(e^{\sqrt{-1}\pi/3})/\pi-247\pi/544-2\pi)N} \exp\bigl(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})\bigr) \\ &\times \lim_{\varepsilon\searrow0} \int_{\varepsilon}^{1-\varepsilon}e^{N|\xi|^2t/(2\pi)}\,dt \\ =& 4e^{(\Im\Li_2(e^{\sqrt{-1}\pi/3})/\pi-247\pi/544-2\pi)N} \exp\bigl(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})\bigr) \\ &\times \frac{2\pi}{N|\xi|^2}(e^{N|\xi|^2/(2\pi)}-1) \\ =& \frac{8\pi\exp\bigl(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})\bigr)}{N|\xi|^2} \\ &\times \left( e^{(\Im\Li_2(e^{\sqrt{-1}\pi/3})/\pi-247\pi/544-2\pi+|\xi|^2/(2\pi))N} \right. \\ &\phantom{\times}\qquad \left. - e^{(\Im\Li_2(e^{\sqrt{-1}\pi/3})/\pi-247\pi/544-2\pi)N} \right) \\ <& \frac{8\pi\exp\bigl(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})\bigr)}{N|\xi|^2} \\ &\times \left( e^{(\Im\Li_2(e^{\sqrt{-1}\pi/3})/\pi-179\pi/544)N} - e^{(\Im\Li_2(e^{\sqrt{-1}\pi/3})/\pi-247\pi/544-2\pi)N} \right) \\ <& \frac{8\pi\exp\bigl(4A+2|\gamma|(1+e^{R(u+u^2/(2\pi)+2\pi)})\bigr)}{N|\xi|^2} \\ <& \frac{K_{-,2}}{N} \end{split} \end{equation*} for a positive constant $K_{-,2}$, since $\Im\Li_2(e^{\sqrt{-1}\pi/3})/\pi-179\pi/544=-0.710657\dots$. \end{proof} \section{Proof of Proposition~\ref{prop:4.9}}\label{sec:prop:4.9} In this section we again follow \cite{Andersen/Hansen:JKNOT2006} to prove Proposition~\ref{prop:4.9}. From \eqref{eq:g_N} we have \begin{equation*} \int_{p(\varepsilon)}g_N(w)\,dw = \int_{p(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw + \int_{p(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)h_{\gamma}(w)\,dw \end{equation*} for any path in the parallelogram bounded by $C_+(\varepsilon)\cup C_-(\varepsilon)$ connecting $\varepsilon$ and $1-\varepsilon$, where \begin{equation*} h_{\gamma}(w) := \exp \left( I_{\gamma}(\pi-\sqrt{-1}u+\sqrt{-1}\xi w) - I_{\gamma}(-\pi-\sqrt{-1}u-\sqrt{-1}\xi w) \right) -1. \end{equation*} Note that $h_{\gamma}(w)$ is defined for $w$ with $0<\Im(\xi w)<2\pi$, that is, for $w$ between the two parallel thick lines depicted in Figure~\ref{fig:contour}. \par Then we have \begin{equation}\label{eq:max} \begin{split} &\left| \int_{p(\varepsilon)}g_N(w)\,dw - \int_{p(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw \right| \\ \le& \int_{p(\varepsilon)}\left|\exp\bigl(N\Phi(w)\bigr)h_{\gamma}(w)\right|\,dw \\ \le& \max_{w\in p(\varepsilon)}\left\{\exp\bigl(N\Re\Phi(w)\bigr)\right\} \int_{p(\varepsilon)}|h_{\gamma}(w)|\,dw \\ =& \max_{w\in p(\varepsilon)}\left\{\exp\bigl(N\Re\Phi(w)\bigr)\right\} \int_{\varepsilon}^{1-\varepsilon}|h_{\gamma}(w)|\,dw \\ \le& \max_{w\in p(\varepsilon)}\left\{\exp\bigl(N\Re\Phi(w)\bigr)\right\} \int_{0}^{1}|h_{\gamma}(w)|\,dw. \end{split} \end{equation} Here we use the analyticity of $h_{\gamma}$ to obtain the equality. \par By the definition of $h_{\gamma}$ we have \begin{equation}\label{eq:h_gamma} h_{\gamma}(t) = \sum_{n=1}^{\infty} \frac{1}{n!} \left( I_{\gamma}(\pi-\sqrt{-1}u+\sqrt{-1}\xi t) - I_{\gamma}(-\pi-\sqrt{-1}u-\sqrt{-1}\xi t) \right)^n.
\end{equation} From Lemma~\ref{lem:Lemma3}, for $0<t<1$ we have \begin{equation*} \left| I_{\gamma}(\pi-\sqrt{-1}u+\sqrt{-1}\xi t) \right| \le A|\gamma| \left( \frac{1}{2\pi t}+\frac{1}{2\pi-2\pi t} \right) + B|\gamma| \left( 1+e^{(u-ut)R} \right) \end{equation*} and \begin{equation*} \left| I_{\gamma}(-\pi-\sqrt{-1}u-\sqrt{-1}\xi t) \right| \le A|\gamma| \left( \frac{1}{2\pi-2\pi t}+\frac{1}{2\pi t} \right) + B|\gamma| \left( 1+e^{(u+ut)R} \right), \end{equation*} and so we have \begin{equation*} \begin{split} &\left| I_{\gamma}(\pi-\sqrt{-1}u+\sqrt{-1}\xi t) - I_{\gamma}(-\pi-\sqrt{-1}u-\sqrt{-1}\xi t) \right| \\ \le& \frac{A|\gamma|}{\pi} \left( \frac{1}{1-t}+\frac{1}{t} \right) + B|\gamma| \left( 2+e^{(1+t)uR}+e^{(1-t)uR} \right) \\ \le& |\gamma|\left(A'f(t)+B'\right) \end{split} \end{equation*} for some positive constants $A'$ and $B'$, where we put $f(t):=1/t+1/(1-t)$. Since $f(t)\ge4$ for $0<t<1$ we have \begin{equation}\label{eq:I_gamma} \begin{split} &\left| I_{\gamma}(\pi-\sqrt{-1}u+\sqrt{-1}\xi t) - I_{\gamma}(-\pi-\sqrt{-1}u-\sqrt{-1}\xi t) \right| \\ \le& |\gamma|\left(A'f(t)+B'\frac{f(t)}{4}\right) \\ =& A''|\gamma|f(t), \end{split} \end{equation} where $A'':=A'+B'/4$. \par From the argument in \cite[P.~537]{Andersen/Hansen:JKNOT2006} we have \begin{equation*} \int_{|\gamma|}^{1-|\gamma|}f(t)^n\,dt \le 2^{2n+1}\int_{|\gamma|}^{1/2}\frac{dt}{t^n} \end{equation*} for $n\ge1$. Since $|\gamma|=|\xi|/(2N)$ we have \begin{equation*} \int_{|\gamma|}^{1/2}\frac{dt}{t} = \log{N}-\log|\xi| \le \log{N} \end{equation*} and \begin{equation*} \int_{|\gamma|}^{1/2}\frac{dt}{t^n} = \frac{1}{n-1} \left( \frac{1}{|\gamma|^{n-1}}-2^{n-1} \right) \le \frac{1}{(n-1)|\gamma|^{n-1}} \end{equation*} for $n\ge2$. \par Therefore from \eqref{eq:h_gamma} and \eqref{eq:I_gamma} we have \begin{equation*} \begin{split} \int_{|\gamma|}^{1-|\gamma|}|h_{\gamma}(t)|\,dt &\le \sum_{n=1}^{\infty} \frac{1}{n!}(A'')^n|\gamma|^n \int_{|\gamma|}^{1-|\gamma|}f(t)^n\,dt \\ &\le 2|\gamma| \left( 4A''\log{N} + \sum_{n=2}^{\infty} \frac{(4A'')^n}{(n-1)n!} \right) \\ &\le \frac{|\xi|}{N} \left( 4A''\log{N} + \exp(4A'')-4A''-1 \right) \\ &\le \frac{K'\log{N}}{N} \end{split} \end{equation*} for a positive constant $K'$ if $N$ is sufficiently large. \par For $0\le t\le1$, we also have from Lemma~\ref{lem:Lemma3} \begin{equation*} \begin{split} &\left| I_{\gamma}(\pi-\sqrt{-1}u+\sqrt{-1}\xi t) - I_{\gamma}(-\pi-\sqrt{-1}u-\sqrt{-1}\xi t) \right| \\ \le& 4A + B|\gamma| \left( 2+e^{(1+t)uR}+e^{(1-t)uR} \right). \end{split} \end{equation*} Since $|\gamma|=|\xi|/(2N)$, we have \begin{equation*} |h_{\gamma}(t)| \le \exp \left( 4A + \frac{B'|\xi|}{2N} \right). \end{equation*} So we have \begin{equation*} \int_{0}^{|\gamma|}|h_{\gamma}(t)|\,dt \le \frac{|\xi|}{2N} \exp \left( 4A + \frac{B'|\xi|}{2N} \right) \le \frac{K''}{N} \end{equation*} and \begin{equation*} \int_{1-|\gamma|}^{1}|h_{\gamma}(t)|\,dt \le \frac{|\xi|}{2N} \exp \left( 4A + \frac{B'|\xi|}{2N} \right) \le \frac{K'''}{N} \end{equation*} for positive constants $K''$ and $K'''$. \par Therefore we have \begin{equation*} \int_{0}^{1}|h_{\gamma}(t)|\,dt \le \frac{K'\log{N}}{N}+\frac{K''}{N}+\frac{K'''}{N} \le \frac{K_2\log{N}}{N} \end{equation*} for a positive constant $K_2$. Now from \eqref{eq:max} we finally have \begin{equation*} \left| \int_{p(\varepsilon)}g_N(w)\,dw - \int_{p(\varepsilon)}\exp\bigl(N\Phi(w)\bigr)\,dw \right| \le \frac{K_2\log{N}}{N} \max_{w\in p(\varepsilon)}\left\{\exp\bigl(N\Re\Phi(w)\bigr)\right\}, \end{equation*} proving Proposition~\ref{prop:4.9}.
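\par We remark in passing that the integral bound for $f(t)^n$ quoted above from \cite[P.~537]{Andersen/Hansen:JKNOT2006} also follows directly from the symmetry $f(1-t)=f(t)$ together with the elementary estimate $f(t)\le 2/t$ for $0<t\le 1/2$ (valid since $1/(1-t)\le 1/t$ there): \begin{equation*} \int_{|\gamma|}^{1-|\gamma|}f(t)^n\,dt = 2\int_{|\gamma|}^{1/2}f(t)^n\,dt \le 2^{n+1}\int_{|\gamma|}^{1/2}\frac{dt}{t^n} \le 2^{2n+1}\int_{|\gamma|}^{1/2}\frac{dt}{t^n}. \end{equation*}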
\section{Proof of Lemma~\ref{lem:Phi_0}} \label{sec:Phi_0} We will show that $\xi\Phi(w_0)$ is purely imaginary with positive imaginary part. Then since $\xi$ is in the first quadrant, $\Phi(w_0)$ is also in the first quadrant and so $\Re\Phi(w_0)>0$. \par Since $\varphi(u)$ is purely imaginary (Remark~\ref{rem:varphi}), we have $\Li_2(e^{u-\varphi(u)})=\overline{\Li_2(e^{u+\varphi(u)})}$. Therefore from \eqref{eq:Phi_0} and \eqref{eq:dilog} we have \begin{equation*} \begin{split} \Im(\xi\Phi(w_0)) &= -2\Im\Li_2(e^{u+\varphi(u)})-u\Im\tilde{\varphi}(u) \\ &= 2\Im\Li_2(e^{-u-\varphi(u)}) +\Im\bigl(u+\varphi(u)+\pi\sqrt{-1}\bigr)^2 -u\Im\tilde{\varphi}(u) \\ &= 2\Im\Li_2(e^{-u-\varphi(u)}) +u\Im\varphi(u). \end{split} \end{equation*} So we have \begin{equation*} \begin{split} &\frac{d\,\Im(\xi\Phi(w_0))}{d\,u} \\ =& 2\Im \left( \log(1-e^{-u-\varphi(u)}) \left( 1 + \sqrt{-1}\frac{d\,\Im\varphi(u)}{d\,u} \right) \right) +\Im\varphi(u) +u\frac{d\,\Im\varphi(u)}{d\,u} \\ =& \frac{d\,\Im\varphi(u)}{d\,u} \left( 2\log|1-e^{-u-\varphi(u)}|+u \right) +2\arg(1-e^{-u-\varphi(u)}) +\Im\varphi(u) \\ =& \frac{d\,\Im\varphi(u)}{d\,u} \log\Bigl((1-e^{-u-\varphi(u)})(1-e^{-u+\varphi(u)})e^u\Bigr) +2\arg(1-e^{-u-\varphi(u)}) +\Im\varphi(u) \\ =& \frac{d\,\Im\varphi(u)}{d\,u} \log(e^u+e^{-u}-e^{\varphi(u)}-e^{-\varphi(u)}) +2\arg(1-e^{-u-\varphi(u)}) +\Im\varphi(u) \\ =& 2\arg(1-e^{-u-\varphi(u)}) +\Im\varphi(u) <0, \end{split} \end{equation*} where the logarithm vanishes since $e^u+e^{-u}-e^{\varphi(u)}-e^{-\varphi(u)} = 2\bigl(\cosh u-\cosh\varphi(u)\bigr) = 1$ by the definition of $\varphi(u)$, and the final inequality holds since $-\pi/3<\Im\varphi(u)<0$, which also forces $\arg(1-e^{-u-\varphi(u)})<0$. \par Since $\varphi(0)=-\pi\sqrt{-1}/3$ and $\varphi\Bigl(\log\bigl((3+\sqrt{5})/2\bigr)\Bigr)=0$, we have \begin{equation*} \Im(\xi\Phi(w_0)) = \begin{cases} 2\Im\Li_2\left(e^{\pi\sqrt{-1}/3}\right)=2.02988\ldots &\quad\text{($u=0$)}, \\ 2\Im\Li_2\left(\frac{2}{3+\sqrt{5}}\right)=0 &\quad\text{($u=\log\bigl((3+\sqrt{5})/2\bigr)$)}. \end{cases} \end{equation*} Therefore $\Im(\xi\Phi(w_0))>0$ for $0<u<\log\bigl((3+\sqrt{5})/2\bigr)$, completing the proof of Lemma~\ref{lem:Phi_0}.
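\par We also record, for reference, how the constant $\Im\Li_2(e^{\sqrt{-1}\pi/3})=1.01494\ldots$ used throughout these estimates can be evaluated. For $0\le\theta\le2\pi$ we have \begin{equation*} \Im\Li_2(e^{\sqrt{-1}\theta}) = \sum_{n=1}^{\infty}\frac{\sin(n\theta)}{n^2}, \end{equation*} which is the Clausen function $\mathrm{Cl}_2(\theta)$. Its derivative equals $-\log\bigl(2\sin(\theta/2)\bigr)$, which vanishes exactly when $2\sin(\theta/2)=1$; hence the maximum on $[0,2\pi]$ is attained at $\theta=\pi/3$, with $\mathrm{Cl}_2(\pi/3)=1.014941\ldots$.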
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec1} The very first ALICE physics results from the $\sqrt{s_{NN}}=2.76$~TeV Pb-Pb collisions at the LHC were the elliptic flow \cite{Aamodt:2010pa}, multiplicities \cite{ALICE-M}, as well as the transverse momentum ($P_T$) spectrum of charged particles \cite{ALICE-RAA}. In the measured charged-particle $P_T$ spectrum, which covers the range $P_T<20$~GeV and extends over three orders of magnitude, two quite distinct regions can be seen for the centralmost collisions: at $P_T\lesssim 4-5$~GeV the spectrum is exponential, while at $P_T\gtrsim 5$~GeV it shows a power-law like behaviour. Both of these carry very important information on the QCD dynamics of the system: the low-$P_T$ spectrum, responsible for the hadronic bulk multiplicity, reflects the collective transverse motion (flow) developed in the system during its entire spacetime evolution, while the high-$P_T$ spectrum, and its suppression relative to the yield in proton-proton collisions, tells us about energy losses of high transverse momentum ($p_T$) partons on their way out of the dense bulk matter. In this paper, our goal is to make a simultaneous analysis of the high- and low-$P_T$ parts of the charged-hadron $P_T$ spectrum measured by the ALICE collaboration \cite{ALICE-RAA} in Pb-Pb collisions, based on models which have been successfully applied and constrained in Au-Au collisions at RHIC. While the extrapolation of the pQCD parts from $\sqrt{s_{NN}}=200$~GeV to 2.76~TeV is straightforward, the nonperturbative features in bulk dynamics and its connection to partonic energy loss contain considerable uncertainties. A generally accepted framework for describing the bulk QCD-matter evolution in $A$-$A$ collisions is relativistic hydrodynamics, which at the same time is the only framework which can incorporate the effects of the phase transition predicted by lattice QCD. There has been considerable progress in the hydrodynamical modeling of ultrarelativistic heavy-ion collisions over the last years, and into many directions, too: one has moved from solving 1+2 dimensional boost symmetric ideal hydrodynamical equations \cite{Kolb:2000sd} to genuinely 1+3 D ideal hydrodynamics \cite{Hirano:2001eu,Nonaka:2006yn,Schenke:2010nt}, and to 1+2 D \cite{Heinz:2005bw,Song:2007ux,Dusling:2007gi,Romatschke:2007mq,Niemi:2011ix} and even 1+3 D dissipative hydrodynamics \cite{Schenke:2010rr}. Furthermore, various hybrid models have recently been developed, where the hydrodynamical evolution is coupled with a hadronic cascade afterburner \cite{Teaney:2000cw,Teaney:2001av,Hirano:2005xf,Nonaka:2006yn,Song:2011hk}. Also genuine event-by-event hydrodynamic studies have been performed, both in ideal hydrodynamics \cite{Andrade:2006yh,Andrade:2008xh,Petersen:2009vx,Werner:2010aa,Holopainen:2010gz} as well as in the viscous case \cite{Schenke:2010rr}. Hydrodynamics itself does not provide any prescription for the extrapolation in $\sqrt{s_{NN}}$. The $\sqrt{s_{NN}}$ dependence of bulk dynamics enters mainly via the initial conditions (IC), which are a crucial element in these studies. One can try to fit the IC using the available data (multiplicities, $P_T$ spectra, elliptic flow) as constraints. Then, however, the more fitting is done the more one loses in predictive power. To improve upon this, the IC may instead be computed using a dynamical model for primary production of QCD quanta, which provides the needed $\sqrt{s_{NN}}$ dependence.
One such initial-state model, which we will employ in this paper, is the pQCD + saturation approach, the ``EKRT model'' \cite{Eskola:1999fc}. When combined with ideal hydrodynamics, this model correctly predicted the charged-hadron multiplicities in $\sqrt{s_{NN}}=56$, 130 and 200 GeV Au-Au collisions at RHIC, and within 7 \% also at the LHC \cite{Eskola:2001bf}. Also the centrality dependence of the multiplicity is consistent with the data, see Fig.~23(a) in \cite{:2008ez} and Fig.~4 in \cite{Eskola:2000xq}, and the measured $P_T$ spectra of the bulk of the identified hadrons at RHIC have been reproduced well in this framework \cite{Eskola:2005ue}. Elliptic flow has also been successfully addressed \cite{Niemi:2008ta,Kolb:2001qz}, and in particular, the similarity of the differential elliptic flow at RHIC and LHC as well as the increase of integrated elliptic flow from RHIC to LHC was predicted in \cite{Niemi:2008ta}. Furthermore, the emergence of the pQCD tail from the hydrodynamic spectrum at $P_T\sim 4...5$~GeV in central Pb-Pb collisions at the LHC was predicted in \cite{Eskola:2005ue}. High-$p_T$ partons are created in hard pQCD subprocesses along with bulk multiplicity production in ultrarelativistic heavy-ion (A-A) collisions. The idea that the final state interaction of such partons with the surrounding matter reflects properties of the bulk, and that hence a measurement of high $P_T$ hadrons can be used as a probe of the QCD medium, is known as ``jet tomography'' \cite{Jet1,Jet2,Jet3,Jet4,Jet5,Jet6}. One of the expected signatures of this final state interaction is the suppression of high $P_T$ hadron production in A-A collisions as compared to the scaled expectation from proton-proton (p-p) collisions, measured by the nuclear suppression factor $R_{AA}$. This phenomenon is often called ``jet quenching'' (although this is slightly misleading since the observable is not a jet of hadrons but the inclusive single-hadron spectrum). Experimentally, jet quenching has been measured at RHIC with great precision as a function of collision centrality and orientation with respect to the reaction plane \cite{PHENIX-RAA-RP}. A number of parton-medium interaction models have been proposed and tested against the experimental results for $R_{AA}$ together with a well-constrained hydrodynamical description of the bulk medium \cite{JetHyd1,JetHyd2,JetHyd3}. However, in systematic comparisons of models with the data, even including observables such as high $P_T$ back to back correlations, ambiguities remain \cite{SysJet1,SysJet2,SysJet3} which do not allow firm answers as to what the correct dynamics of parton-medium interaction is in nature. A partial answer was obtained with regard to models proposing elastic (or, more generally, incoherent) energy loss of leading partons \cite{Elastic1,Elastic2,Elastic3,Elastic4,Elastic5}. Such models fail to reproduce pathlength dependent observables such as the spread between in-plane and out of plane emission of hard hadrons \cite{ElRP1,ElRP2}. This result establishes that quantum coherence is a crucial ingredient of the dynamics, but it does not discriminate among the various QCD radiative energy loss models to a degree that would make a quantitative extraction of medium parameters possible with good accuracy \cite{qhat}. The underlying reason for this failure is that the nuclear suppression factor is largely independent of the functional form of the energy loss probability distribution $P(\Delta E)$ for a given leading parton \cite{gamma-h}.
While different models predict different forms for $P(\Delta E)$, the pQCD parton spectrum at RHIC kinematics is so steeply falling that even a small shift in parton energy due to energy loss to the medium effectively acts like a complete suppression of the parton. Thus, only a small part of $P(\Delta E)$ close to zero energy loss is actually probed in observables. The vastly larger kinematic range of the LHC, leading to a significantly harder pQCD parton spectrum, is expected to change this situation and to allow a deeper probe of $P(\Delta E)$. While leading-parton energy loss models are sufficient to describe single inclusive high $P_T$ hadron production, there is a second class of models which treat the whole medium modification of a parton shower in the medium \cite{JEWEL,YaJEM1,YaJEM2,Q-PYTHIA,Martini} with the aim of eventually describing fully reconstructed jets in heavy-ion collisions. These models are often Monte-Carlo (MC) codes extending vacuum shower codes such as PYTHIA \cite{PYTHIA} or HERWIG \cite{HERWIG} to include the interaction with a medium. A systematic comparison of these codes with pathlength dependent observables is so far largely absent. In this paper, we make a simultaneous analysis of the high- and low-$P_T$ parts of the charged-hadron $P_T$ spectrum measured by the ALICE collaboration \cite{ALICE-RAA} in $\sqrt{s_{NN}}=2.76$~TeV central Pb-Pb collisions at the LHC. We first study the compatibility of the pQCD+saturation+hydrodynamics framework with the measured low-$P_T$ spectrum, systematically charting the model uncertainties. Then for the high-$P_T$ part, using the obtained hydrodynamical evolution of the bulk matter as background, we investigate what discriminating power the first measurement of $R_{AA}$ at the LHC \cite{ALICE-RAA} offers for models which are tuned to describe the observed nuclear suppression at RHIC. We consider both leading-parton radiative \cite{QuenchingWeights} and elastic \cite{ElRP1} energy losses as well as showers \cite{YaJEM1,YaJEM2} modified by the QCD medium. For other recent works discussing $R_{AA}$ at the LHC, see Refs.~\cite{Che:2011vt,Majumder:2011uk}. For the $R_{AA}$ study, our strategy is as follows: First, in moving from RHIC to LHC using our default hydrodynamical set-up, we compute $R_{AA}$ for different models of parton-medium interaction without any re-tuning of parameters (straight extrapolation). Since some amount of uncertainty is expected to originate from the uncertainties in the hydrodynamical initial state, and due to the fact that energy-loss model parameters are known to differ for the same parton-medium interaction model even among constrained hydrodynamical models \cite{SysJet3}, we re-tune in a second step the model parameters to the best fit of the $\sqrt{s_{NN}}=2.76$~TeV data and quote the difference to the tune at $\sqrt{s_{NN}}=200$~GeV. This allows us to gauge how the change in $\sqrt{s_{NN}}$ acts to constrain models. Using the best fits which we obtain for the low-$P_T$ and high-$P_T$ parts, we investigate to what extent the measured charged-hadron $P_T$ spectrum can be reproduced.
\section{The hydrodynamical bulk description} \subsection{pQCD+saturation+hydrodynamics framework} For obtaining the bulk hadron $p_T$ spectra in a hydrodynamical framework, we need to define the QCD matter initial conditions, solve the hydrodynamical equations numerically with a given Equation of State (EoS), compute the thermal particle spectra at freeze-out, and account for the strong and electromagnetic decays of unstable particles. We compute the QCD matter initial densities using the EKRT saturation model \cite{Eskola:1999fc}, which is based on collinearly factorized pQCD minijet production and the conjecture of gluon saturation in the transverse plane. In this model, which has been quite successful in predicting the multiplicities from RHIC to LHC, saturation of primary parton (gluon) production is assumed to take place when the minijet production vertices, each of which occupies a geometric uncertainty area $\sim \pi/p_T^2$, start to overlap. For the central $A$-$A$ collisions studied here, the following criterion is fulfilled at saturation: \begin{equation} N(p_0) \pi/p_0^2 = \pi R_A^2, \label{saturation} \end{equation} where the number of produced minijets at $p_T\ge p_0$ in the mid-rapidity unit $\Delta y=1$ can be written in terms of the standard nuclear overlap function $T_{AA}$ and leading-order (LO) pQCD cross sections as $N(p_0) = T_{AA}({\bf 0})\sigma \langle N \rangle$, with (see Ref. \cite{Eskola:2005ue} for details) \begin{eqnarray} \nonumber \sigma \langle N \rangle &=& \int_{p_0^2}dp_T^2 \bigg[ \int_{\Delta y}dy_1\int dy_2 + \int dy_1\int_{\Delta y} dy_2 \bigg] \\ && \sum_{\langle kl\rangle}\frac{1}{1+\delta_{kl}} \frac{d\sigma^{AA\rightarrow kl+X}}{dp_T^2dy_1dy_2}. \end{eqnarray} Above, $T_{AA}({\bf b}) = \int d^2 {\bf s}\, T_A({\bf s}) T_A({\bf s}-{\bf b})$, where $T_A(r)=\int dz\, n_A\bigl(\sqrt{r^2+z^2}\bigr)$ is the nuclear thickness function computed from the Woods-Saxon nuclear densities $n_A$ with $n_0=0.17$~fm$^{-3}$, $d=0.54$~fm. The inclusive cross section for producing partons of flavours $k$ and $l$ above is, as usual, \begin{equation} \frac{d\sigma^{AA\rightarrow kl+X}}{dp_T^2dy_1dy_2} = K \sum_{ij} x_1 f_{i/A}(x_1,Q^2) x_2f_{j/A}(x_2,Q^2) \frac{d\hat\sigma^{ij\rightarrow kl}}{d\hat t}. \label{LOminijets} \end{equation} For the nuclear parton distribution functions (nPDFs), we use the CTEQ6L1 PDFs \cite{Pumplin:2002vw} together with the EPS09 nuclear effects \cite{Eskola:2009uj}. For transverse energy ($E_T$) production, the pQCD minijet calculation can be extended to next-to-leading order (NLO) in an infra-red and collinear singularity safe manner (whereas the number of produced minijets is not well defined beyond LO) \cite{Eskola:2000my}. The corresponding updated $K$ factors are, however, not yet available \cite{EHPT_soon}, which is why we simply perform a LO pQCD calculation here and fit the $K$ parameter in Eq.~(\ref{LOminijets}) so that the minijet production at saturation leads to (after hydrodynamic evolution and resonance decays) the measured charged hadron multiplicity in the LHC Pb-Pb collisions at $\sqrt{s_{NN}}=2.76$~TeV in the 0-5\% centrality class, $dN_{\rm ch}/d\eta = 1584$ \cite{ALICE-M}. The obtained $K$ thus also accounts for all higher-order contributions. The centrality selection is simulated by considering a central $A_{\rm eff}$-$A_{\rm eff}$ collision of an effective nucleus $A_{\rm eff} = 192 < A$, as explained in Ref. \cite{Eskola:2001bf}.
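To make the $K$ dependence of the saturation scale transparent, one may use a schematic power-law estimate (not used in the actual numerics): writing $\sigma\langle N\rangle \approx \mathcal{C}K/p_0^2$ in the scaling limit, with $\mathcal{C}$ a slowly varying factor that we introduce here only for illustration, the saturation criterion of Eq.~(\ref{saturation}) becomes \begin{equation*} T_{AA}({\bf 0})\,\frac{\mathcal{C}K}{p_{\rm sat}^2}\,\frac{\pi}{p_{\rm sat}^2} = \pi R_A^2 \qquad\Longrightarrow\qquad p_{\rm sat} = \left(\frac{\mathcal{C}K\,T_{AA}({\bf 0})}{R_A^2}\right)^{1/4} \propto K^{1/4}, \end{equation*} so that the formation time discussed below scales as $\tau_i \approx 1/p_{\rm sat} \propto K^{-1/4}$.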
Most importantly, with the EKRT-based modeling above, we can also estimate the formation time of the system from the pQCD dynamics: according to the uncertainty principle, $\tau_i\approx 1/p_{\text{sat}}$, where $p_{\text{sat}}$ is the solution of Eq.~(\ref{saturation}). With $K=1.54$, obtained by an iterative fit to the measured multiplicity, we have $p_{\text{sat}}=1.58$~GeV and $\tau_i=0.12$~fm (converting with $\hbar c \simeq 0.197$~GeV\,fm). In the EKRT framework, where the computation of the minijet $E_T$ production is possible in NLO, the initial conditions should be given in terms of the energy density instead of the entropy density. Once the saturation scale is known, we compute the local initial energy density by distributing the minijet $E_T$ over the transverse plane at the time $\tau_i$ according to (for more details, see \cite{Eskola:2005ue}) \begin{equation} \epsilon(r,\tau_i) = T_A(r)T_A(r) \frac{\sigma\langle E_T\rangle}{\tau_i\Delta y}, \label{eBC} \end{equation} where \begin{equation} \sigma\langle E_T\rangle = \int_{p_0^2} dp_T^2 p_T \frac{d(\sigma\langle N \rangle)}{dp_T^2}. \end{equation} Above, we distribute the energy density into the transverse plane according to the binary collision profile; thus our default set-up is the ``eBC'' initialization. It should be emphasized, however, that since we do not attempt to make the saturation condition (\ref{saturation}) strictly local in the transverse plane (which would lead to a varying $p_{\rm sat}$, see \cite{Eskola:2000xq,Kolb:2001qz} for such discussion), the transverse profile is not uniquely fixed here. We use ideal hydrodynamics to describe the space-time evolution of the bulk matter. Since we will consider only mid-rapidity observables here, longitudinal boost invariance and the neglect of the net-baryon number are well-justified approximations. We solve the 2+1 dimensional relativistic hydrodynamical equations, $\partial_\mu T^{\mu\nu} = 0$, using the SHASTA algorithm \cite{Boris,Zalesak}. For the EoS which closes the set of dynamical equations, we choose the recently developed EoS s95p-v1 from Ref.~\cite{Huovinen:2009yb}. We assume a very rapid thermalization here, taking the formation time $\tau_i$ as the starting time $\tau_0$ for the hydrodynamical evolution. Furthermore, a full chemical equilibrium and zero initial transverse flow are assumed. Regarding these initial conditions, we should emphasize three points here. First, the early initialization of the hydrodynamical evolution, $\tau_0\propto 1/p_{\rm sat}$, is quite essential, since, as pointed out long ago \cite{Eskola:1988yh}, pQCD minijets do produce a large amount of $E_T$, and only by doing $PdV$ work over a long enough time early on can the hydrodynamically evolving system sufficiently degrade its transverse energy: as shown in \cite{Eskola:2001bf}, the amount of the measured final-state $E_T$ is only a third of the initially produced $E_T$. Second, although the system may not be fully thermal at early times, the early start accounts for the buildup of flow and pressure as well as $PdV$ work during the thermalization stage. Third, as discussed e.g. in \cite{Chatterjee:2008tp}, thermal photon production is very sensitive to the hydro initialization time. In order to explain the photon production measured at $p_T\sim 2$ GeV in $\sqrt{s_{NN}}=200$~GeV Au-Au collisions, one needs a substantial thermal photon production component from the QGP. This can be obtained only if the initialization time is small enough.
For these reasons, we believe that to start the hydrodynamical evolution at an early time with zero transverse flow is physically a well-motivated approximation. Finally, hadron spectra in the hydrodynamical model are calculated from a constant temperature freeze-out hypersurface using the Cooper-Frye formula \cite{Cooper}, and accounting for the strong and electromagnetic decays of unstable particles. The freeze-out temperature $T_F$ is fixed so that we can describe the measured positive pion spectra in $\sqrt{s_{NN}} = 200$ GeV Au-Au collisions at RHIC. The value of $T_F$ is found to be 165 MeV for the eBC profile. As shown in \cite{Eskola:2007zc}, for computing the $P_T$ spectra of hadrons, keeping the $T_F$ unchanged from RHIC to LHC is a good approximation to a more dynamical decoupling treatment where the scattering rates are compared with the expansion rate of the system. \subsection{The results: Hydrodynamical $p_T$ spectra and their systematics} Figure~\ref{F-Spectra} shows the hydrodynamically obtained $p_T$ spectrum of charged hadrons in 0-5\% central $\sqrt{s_{NN}}= 2.76$~TeV Pb-Pb collisions at the LHC, and its comparison with the ALICE data \cite{ALICE-RAA}. Also shown is the comparison with the computed pQCD-part of the spectrum, which is subjected to the energy losses of high-energy partons discussed in Sec.~\ref{sec:Elosses} below. As seen in the figure, the agreement of the hydrodynamically obtained spectrum is quite good up to 4-5 GeV, where the pQCD tail takes over. The observed behaviour, the change from the hydro-dominated to the pQCD-dominated spectrum at $p_T= 4-5$ GeV, is indeed very similar to what was predicted in \cite{Eskola:2005ue} (see Fig. 15 there). \begin{figure}[htb] \begin{center} \epsfig{file=fullpT.eps, width=8.5cm} \end{center} \caption{\label{F-Spectra}(Color online) The transverse momentum $P_T$ spectrum of charged hadrons in 0-5\% central $\sqrt{s_{NN}}=2.76$~TeV Pb-Pb collisions as measured by the ALICE collaboration \cite{ALICE-RAA} and compared with theoretical calculations using a two-component picture: the low $P_T$ region is described by pQCD+saturation+hydrodynamics whereas in the high $P_T$ region we apply a pQCD + jet quenching picture, here in the ASW formalism (see text). Shown for comparison is also the pQCD result without jet quenching. Due to the $P_T$ dependence of the jet quenching, this has a different shape which would not agree with the data.} \end{figure} To study the sensitivity of the hydrodynamic $p_T$ spectrum to the model uncertainties, we perform the following systematics shown in Fig.~\ref{Hydrosystematics}. \begin{figure*}[htb] \hspace{-.7cm} \epsfig{file=spectra_all.eps, width=14.5cm} \caption{\label{Hydrosystematics}(Color online) Systematics of the hydrodynamic $P_T$ spectrum in 0-5\% central $\sqrt{s_{NN}}=2.76$~TeV Pb-Pb collisions. Keeping the initial energy fixed, we show the sensitivity of the computed spectrum to the freeze-out temperature $T_F$ (upper left panel), to the hydrodynamic initialization time $\tau_0$ (upper right), and to the initial transverse profile (lower right). Allowing for a change of 20\% for the computed multiplicity, the sensitivity of the results to the value of $K$ (affecting both the initial $E_T$ and $\tau_0$) is shown in the lower left panel. The data points are from the ALICE measurement \cite{ALICE-RAA}. } \end{figure*} First, in the upper left panel, we vary the freeze-out temperature between 120 MeV and 165 MeV, keeping the initial conditions fixed to the default set-up.
With the single-$T_F$ scenario we use here (i.e. no partial chemical equilibrium or coupling to a hadronic afterburner), a lower $T_F$ enables more transverse flow to develop, which in turn leads to a flatter $P_T$ spectrum that easily overshoots the data both at RHIC and at the LHC. Thus, with the eBC and small-$\tau_0$ set-up we need a high value of $T_F$. Second, the upper right panel shows the sensitivity of the results to the initial time: here $\tau_0$ has been varied in the range $0.5/p_{\rm sat}$ -- $2/p_{\rm sat}$, keeping the initial minijet $E_T$ and the final $T_F$ unchanged. We observe that the eBC set-up favours a small hydro initialization time but that at the same time -- since the initial $E_T$ is conserved and not the initial entropy ($S_0$) -- the multiplicity decreases slightly with decreasing $\tau_0$: $N_{\rm ch}\sim S_0 \sim \epsilon^{3/4}\tau_0\sim (E_T/\tau_0)^{3/4}\tau_0 \sim E_T^{3/4}\tau_0^{1/4}$. Thus, changing $\tau_0$ by a factor of 2 causes a change of 19 \% in the multiplicity -- a change which is already well beyond the 5 \% error bar in the data \cite{ALICE-M}. Third, the lower right panel shows the sensitivity of the hadronic $P_T$ spectrum to the choice of the energy density transverse profile. In addition to the binary profile $T_A(r)T_A(r)$ in Eq.~(\ref{eBC}), we have distributed the minijet $E_T$ also over a wounded nucleon density profile (eWN), keeping however $\tau_0$ fixed. With this change, the charged particle multiplicity increases from 1580 to 1640. Based on fitting the measured charged-particle $P_T$ spectrum in $\sqrt{s_{NN}}= 200$~GeV Au-Au collisions at RHIC, we use $T_F=160$~MeV for the eWN case. Due to the slower build-up of transverse flow in the eWN case, the eWN spectrum becomes slightly steeper than our default eBC case with $T_F=165$ MeV. From this, we can deduce that fine-tuning the transverse profile towards an eWN profile would be certainly possible but also that an early initial time $\tau_0\approx 1/p_{\rm sat}$ is still required. Fourth, the lower left panel shows the sensitivity of the spectrum to the fit parameter $K$. For this figure, we allow a change of 20 \% (ca. 300) in the multiplicity, and compute the $E_T$ and $\tau_0$ separately in each case but keep $T_F$ constant. We can see how an increase (decrease) in the multiplicity corresponds to a larger increase (decrease) in the value of $K$ but a smaller decrease (increase) in $\tau_0$. These scalings can be deduced from Eqs.~(1)--(5), keeping track of the powers of $p_0$ and $K$: in the scaling limit $\sigma(p_0)\sim K/p_0^2$, and Eq.~(\ref{saturation}) then gives $p_{\rm sat}\sim K^{1/4}$. The energy density scales as $\epsilon\sim \sigma\langle E_T\rangle p_{\rm sat} \sim (K/p_{\rm sat})p_{\rm sat}\sim K$, and the charged-particle multiplicity then scales as $N_{\rm ch}\sim S_0 \sim \epsilon^{3/4}\tau_0\sim K^{1/2}$. Thus, a 20\% increase in the multiplicity corresponds to a 40 \% increase in the value of $K$ and a 10 \% decrease in $\tau_0$. We see, on the one hand, that our initial-state modeling is fairly robust against small variations in $K$ and, on the other hand, that fine-tuning of $K$ would depend on the transverse profile and also on $T_F$. Such fine-tuning, however, we do not consider very meaningful before more data, on e.g. identified hadron spectra, are available.
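To spell out the numbers quoted above, with $N_{\rm ch}\propto K^{1/2}$ and $\tau_0\propto p_{\rm sat}^{-1}\propto K^{-1/4}$ we have \begin{equation*} \frac{N_{\rm ch}'}{N_{\rm ch}}=1.2 \;\Longrightarrow\; \frac{K'}{K}=1.2^{2}=1.44, \qquad \frac{\tau_0'}{\tau_0}=1.44^{-1/4}\approx0.91, \end{equation*} i.e.\ a 20\% increase in the multiplicity indeed corresponds to a ca.\ 40\% increase in $K$ and a ca.\ 10\% decrease in $\tau_0$.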
From Fig.~\ref{Hydrosystematics}, we conclude on the one hand that with our pQCD+saturation+(ideal)hydrodynamics framework we are committed to a fairly narrow window of the parameters $\tau_0$ and $T_F$, and on the other hand that the profile uncertainty can be considered the main uncertainty from the jet quenching viewpoint. Further fine-tuning of the hydrodynamical description is left for future work, in which we will study the centrality dependence of the hadronic $P_T$ spectrum. \section{Parton-medium interaction models} \label{sec:Elosses} The starting point for a computation of the high $P_T$ hadron yield in an A-A collision is the initial spectrum of hard partons. In our framework, the differential cross section $d\sigma_{vac}^{AA \rightarrow f +X}$ for the production of a parton $f$ in an A-A collision is calculated in LO pQCD by integrating out the unobserved parton kinematics in Eq.~(\ref{LOminijets}) (explicit expressions are given in \cite{SysJet1} and references therein). Uncertainty relation arguments indicate that the medium cannot modify the hard process itself, but rather influences the fragmentation pattern of the outgoing highly virtual partons, in particular their development into a parton shower. Thus, for in-medium shower models, the expression to evaluate is the convolution of the partonic production cross section with the medium-modified fragmentation function (MMFF), \begin{equation} \label{E-Conv} d\sigma_{\rm med}^{AA\rightarrow h+X} \negthickspace \negthickspace = \sum_f d\sigma_{vac}^{AA \rightarrow f +X} \otimes \langle D_{MM}^{f \rightarrow h}(z,\mu^2)\rangle_{T_{AA}} \end{equation} where $\langle D_{MM}^{f \rightarrow h}(z,\mu^2)\rangle_{T_{AA}}$ is the MMFF, averaged over the geometry of the medium, $z$ is the fractional momentum of produced hadrons given a parton $f$, and $\mu^2$ is the hadronic momentum scale. To evaluate this expression requires knowledge of both the geometry of the medium (e.g. in terms of a spacetime description of medium density) and of the MMFF $D_{MM}(z, \mu_p^2,\zeta)$ for any given path $\zeta$ through the medium. If the angle between the outgoing parton and the reaction plane is $\phi$, the path of a given parton through the medium, i.e.\ its trajectory $\zeta(\tau)$ as a function of proper medium evolution time $\tau$, is determined in an eikonal approximation by its initial position ${\bf r_0}$ and the angle $\phi$ as $\zeta(\tau) = \left(x_0 + \tau \cos(\phi), y_0 + \tau \sin(\phi)\right)$, where the parton is assumed to move with the speed of light $c=1$ and the $x$-direction is chosen to lie in the reaction plane. How $D_{MM}(z, \mu_p^2,\zeta)$ is obtained once a medium is specified is characteristic of a given model of parton-medium interaction and will be discussed for the code YaJEM later. Once the MMFF for a given path is known, the averaging over the medium geometry is given by \begin{equation} \label{E-D_TAA} \begin{split} \langle D_{MM}&(z,\mu_p^2)\rangle_{T_{AA}} \negthickspace =\\ &\negthickspace \frac{1}{2\pi} \int_0^{2\pi} \negthickspace \negthickspace \negthickspace d\phi \int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dx_0 \int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dy_0 P(x_0,y_0) D_{MM}(z, \mu_p^2,\zeta).
\end{split} \end{equation} Here, the initial distribution of hard vertices in the transverse $(x,y)$ plane is assumed to be calculable as \begin{equation} \label{E-Profile} P(x_0,y_0) = \frac{T_{A}({\bf r_0 + b/2}) T_A({\bf r_0 - b/2})}{T_{AA}({\bf b})}. \end{equation} In the leading-parton energy loss approximation, the medium-modified production of high-$P_T$ hadrons can be computed from the convolution \begin{equation} d\sigma_{\rm med}^{AA\rightarrow h+X} \negthickspace \negthickspace = \sum_f d\sigma_{\rm vac}^{AA \rightarrow f +X} \otimes \langle P(\Delta E)\rangle_{T_{AA}} \otimes D^{f \rightarrow h}(z, \mu^2) \end{equation} where $\langle P(\Delta E)\rangle_{T_{AA}}$ is the medium-induced energy loss probability, averaged over the medium geometry, and $D^{f \rightarrow h}(z, \mu^2)$ is the vacuum fragmentation function for the production of a hadron $h$ from a parton $f$, at fractional momentum $z$ and hadronic momentum scale $\mu^2$. The underlying assumption is that the dynamics of parton-medium interactions can largely be cast in terms of a shift in leading parton energy and that hence the MMFF can be approximated by the convolution of an energy loss probability with the vacuum fragmentation function, $\langle D_{MM}^{f \rightarrow h}(z,\mu^2)\rangle_{T_{AA}} = \langle P(\Delta E)\rangle_{T_{AA}} \otimes D^{f \rightarrow h}(z, \mu^2)$. In this case, the energy-loss probability for a given path $\zeta$ of a parton through the medium, $P(\Delta E, \zeta)$, is the ingredient to be computed within a specific model of parton-medium interaction. If $P(\Delta E, \zeta)$ is known, the geometrical averaging involves, as above, integrating over all possible initial vertices $(x_0,y_0)$ with weight $P(x_0,y_0)$ and over all possible orientations $\phi$ as \begin{equation} \label{E-P_TAA} \langle P(\Delta E)\rangle_{T_{AA}} \negthickspace = \negthickspace \frac{1}{2\pi} \int_0^{2\pi} \negthickspace \negthickspace \negthickspace d\phi \int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dx_0 \int_{-\infty}^{\infty} \negthickspace \negthickspace \negthickspace \negthickspace dy_0 P(x_0,y_0) P(\Delta E,\zeta). \end{equation} In all cases, the nuclear modification factor is computed from the medium-modified yield of hard hadron production as \begin{equation} \label{E-RAA} R_{AA}(p_T,y) = \frac{dN^h_{AA}/dp_Tdy }{T_{AA}({\bf b}) d\sigma^{pp}/dp_Tdy}. \end{equation} The details of the parton-medium interaction model are thus encoded in either the energy loss probability distribution $P(\Delta E,\zeta)$ for leading parton energy loss models or the MMFF $D_{MM}(z, \mu_p^2,\zeta)$ for in-medium shower models, given a specific path through the medium. In the following, we formulate three different types of models which we apply to the ALICE data. \subsection{Armesto-Salgado-Wiedemann (ASW) formalism} The detailed calculation of $P(\Delta E, \zeta)$ follows the Baier-Dokshitzer-Mueller-Peigne-Schiff (BDMPS) formalism for radiative energy loss \cite{Jet2} using quenching weights as introduced by Salgado and Wiedemann \cite{QuenchingWeights}.
In this framework, the energy loss probability $P(\Delta E,\zeta)$ for a path can be obtained by evaluating the line integrals along $\zeta(\tau)$ as \begin{equation} \label{E-omega} \omega_c({\bf r_0}, \phi) = \int_0^\infty \negthickspace d \zeta \zeta \hat{q}(\zeta) \quad \text{and} \quad \langle\hat{q}L\rangle ({\bf r_0}, \phi) = \int_0^\infty \negthickspace d \zeta \hat{q}(\zeta) \end{equation} with the relation \begin{equation} \label{E-qhat} \hat{q}(\zeta) = K_{\rm med} \cdot 2 \cdot \epsilon^{3/4}(\zeta) (\cosh \rho - \sinh \rho \cos\alpha) \end{equation} assumed between the local transport coefficient $\hat{q}(\zeta)$ (specifying the quenching power of the medium), the energy density $\epsilon$ and the local flow rapidity $\rho$ with angle $\alpha$ between flow and parton trajectory \cite{Flow1,Flow2}. $K_{\rm med}$ is the adjustable parameter in this framework. It is naturally expected to be $O(1)$, but in fits to data at $\sqrt{s_{NN}}=200$~GeV the parameter takes (dependent on the precise hydrodynamical model) values ranging between 3 and 10 (the latter number occurs for viscous hydrodynamics where the initial entropy density is lower than in the ideal case, see \cite{SysJet3}). Using the numerical results of \cite{QuenchingWeights} and the definitions above, the energy loss probability distribution given a parton trajectory can now be obtained as a function of the initial vertex and direction $({\bf r_0},\phi)$ as $P(\Delta E; \omega_c({\bf r},\phi), R({\bf r},\phi)) \equiv P(\Delta E,\zeta)$ for $\omega_c(\zeta)$ and $R=2\omega_c(\zeta)^2/\langle\hat{q}L(\zeta)\rangle$. In practical terms, $\langle P(\Delta E) \rangle_{T_{AA}}$ is characterized by a fairly large discrete escape probability without energy loss and a very broad distribution of energy loss ranging up to $O(100)$ GeV at RHIC conditions (for explicit figures, see e.g. \cite{SysJet1}). \subsection{YaJEM (Yet another Jet Energy-loss Model)} The MC code YaJEM is based on the PYSHOW code \cite{PYSHOW}, which is part of PYTHIA \cite{PYTHIA}. It simulates the evolution from a highly virtual initial parton to a shower of partons at lower virtuality in the presence of a medium. A detailed description of the model can be found in \cite{YaJEM1,YaJEM2}. The parton shower developing from a highly virtual initial hard parton in this model is described as a series of $1\rightarrow 2$ splittings $a \rightarrow bc$, where the virtuality scale decreases in each splitting, i.e. $Q_a > Q_b,Q_c$, and the energy is shared among the daughter partons $b,c$ as $E_b = z E_a$ and $E_c = (1-z) E_a$. The splitting probabilities for a parton $a$ in terms of $Q_a, E_a$ are calculable in pQCD and the resulting shower is computed event by event in a MC framework. In the presence of a medium, the main assumption of YaJEM is that the parton kinematics or the splitting probability is modified. In the RAD (radiative energy loss) scenario, the relevant modification is a virtuality gain \begin{equation} \label{E-Qgain} \Delta Q_a^2 = \int_{\tau_a^0}^{\tau_a^0 + \tau_a} d\zeta \hat{q}(\zeta) \end{equation} through the interaction with the medium during the parton lifetime $\tau_a$. This modification leads to an increase in radiation. In order to evaluate Eq.~(\ref{E-Qgain}) during the shower evolution, the momentum space variables of the shower evolution equations need to be linked with a spacetime position in the medium.
This is done via the uncertainty relation for the average formation time as \begin{equation} \label{E-Lifetime} \langle \tau_b \rangle = \frac{E_b}{Q_b^2} - \frac{E_b}{Q_a^2} \end{equation} and randomized, splitting by splitting, by sampling $\tau_b$ from the distribution \begin{equation} \label{E-RLifetime} P(\tau_b) = \exp\left[- \frac{\tau_b}{\langle \tau_b \rangle} \right]. \end{equation} The evolution of any given parton in the shower is terminated as soon as the parton reaches a minimum virtuality scale $Q_0$. The result of the partonic evolution in terms of a shower of low virtuality partons is then passed on to the Lund model \cite{Lund} to hadronize. The fractional longitudinal momentum distribution of the resulting hadron distribution corresponds to the MMFF of the various hadron species. In the default version of YaJEM, the minimum virtuality scale is fixed at $Q_0 = 0.7$ GeV. In the version YaJEM-D (dynamical computation of $Q_0$) \cite{YaJEM-D}, the formation length of the in-medium shower is forced to be within the medium length. This corresponds to the choice \begin{equation} \label{E-Q0} Q_0 = \sqrt{E/L} \end{equation} which depends on both in-medium pathlength $L$ and shower-initiating parton energy $E$. The original motivation for this prescription was to introduce a pathlength dependence that can account for the experimentally observed split between in-plane and out of plane emission of high $P_T$ hadrons in non-central collisions \cite{PHENIX-RAA-RP}. However, together with the stronger pathlength dependence, YaJEM-D also predicts a strong rise of $R_{AA}$ with $P_T$ in angular-averaged observables, which we aim to test against the ALICE data. In principle, the MMFF could depend on the full functional form of $\hat{q}(\zeta)$, which would be computationally very expensive, as a full MC simulation would be needed for every possible path in the medium. However, due to an approximate scaling law identified in \cite{YaJEM1}, it is sufficient to compute the line integral \begin{equation} \label{E-Qsq} \Delta Q^2_{\rm tot} = \int d \zeta \hat{q}(\zeta) \end{equation} in the medium to obtain the full MMFF $D_{MM}(z, \mu_p^2,\zeta)$ from a YaJEM simulation for a given eikonal path of the shower-initiating parton, where $\mu_p^2$ is the momentum scale of the shower-initiating parton. The scaling law implies that the MC simulation has to be run only for a finite set of paths and makes a numerical solution of the geometry averaging possible. Each YaJEM run determines the MMFF for a fixed partonic scale $\mu_p$. To account for the scale evolution of the MMFF, runs for different $\mu_p$ need to be done. For technical reasons having to do with numerical performance, we prefer to evolve the MMFF for a given hadronic scale, as indicated in Eq.~(\ref{E-Conv}). For matching a partonic scale for which the MMFF is computed to a hadronic scale, we use the following procedure: For each available partonic scale, $\langle D_{MM}(z,\mu_p^2)\rangle_{T_{AA}}$ is computed, and the exponent $n$ of a power law fit to the parton spectrum at scale $\mu_p$ is determined. The maximum of $z^{n-1} \langle D_{MM}(z,\mu_p^2)\rangle_{T_{AA}}$ corresponds to the most likely value $\tilde{z}$ in the fragmentation process, and thus the partonic scale choice is best for a corresponding hadronic scale $P_T = \tilde{z}\mu_p$. The $P_T$ dependence of the hadronic $R_{AA}$ is then computed by interpolation between runs with different scale choices for the MMFF.
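As an illustration of the scale-matching procedure just described, the following minimal sketch (our own, not part of the YaJEM code; all tabulated inputs and function names are hypothetical placeholders) determines, for each partonic scale $\mu_p$, the most likely momentum fraction $\tilde z$ and the associated hadronic scale $P_T=\tilde z\mu_p$, and then interpolates the MMFF between runs: \begin{verbatim}
import numpy as np

z = np.linspace(0.01, 1.0, 200)          # momentum-fraction grid

def most_likely_z(D, n):
    # z~ maximizing z^(n-1) D(z) for a parton spectrum ~ p_T^(-n)
    return z[np.argmax(z**(n - 1) * D)]

# hypothetical MMFFs tabulated at three partonic scales mu_p (GeV)
mu_p  = [20.0, 40.0, 80.0]
D_MM  = [np.exp(-8.0 * z), np.exp(-7.0 * z), np.exp(-6.0 * z)]
n_fit = [7.0, 6.5, 6.0]                  # fitted spectrum exponents

# hadronic scale assigned to each run: P_T = z~ * mu_p
P_T_runs = [most_likely_z(D, n) * m
            for D, n, m in zip(D_MM, n_fit, mu_p)]

def D_interp(PT, zval):
    # MMFF at hadronic scale PT, interpolated linearly between runs
    vals = [float(np.interp(zval, z, D)) for D in D_MM]
    return float(np.interp(PT, P_T_runs, vals))

print(P_T_runs)
print(D_interp(30.0, 0.5))
\end{verbatim}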
As in the previous case, $K_{\rm med}$ in Eq.~(\ref{E-qhat}) serves as the adjustable parameter of the model once $Q_0$ is chosen. YaJEM requires, dependent on the underlying hydrodynamical model, rather natural values for $K_{\rm med}$ ranging from 0.6 to 2. \subsection{Parametrized elastic energy loss} In \cite{ElRP1}, a phenomenological model for elastic energy loss, consisting of a discrete parton escape probability and a Gaussian parametrization for the energy loss probability, was introduced to explore the pathlength dependence of incoherent energy loss. While the model itself is rather simplistic, its main findings with regard to pathlength dependence have later been confirmed in a detailed MC simulation of elastic energy loss \cite{ElRP2}. It needs to be stressed that unlike the previous models, the parametrized elastic energy loss is not meant as a serious QCD-based model for the underlying dynamics of parton-medium interaction. The reason that it is presented here is rather that the simple and adjustable form of $P(\Delta E,\zeta)$ allows insight into how the observed rise of $R_{AA}$ with $P_T$ depends on the underlying functional form of the energy loss probability density, which is much less transparent in the context of ASW or YaJEM. In the model, the escape probability of a parton $i$ without any medium interaction is computed as \begin{equation} P_{\rm esc}^i = \exp\left[- const. \cdot \sigma_{el}^i \int \tilde{\rho}_M(\zeta)d\zeta \right] = \exp[- \gamma_i \cdot \kappa] \end{equation} where it is assumed that $\sigma_{el}$ is approximately independent of $\zeta$, and $\kappa$ is defined as \begin{equation} \label{E-kappa} \kappa = \int d\xi \epsilon^{3/4}(\xi) (\cosh \rho(\xi) - \sinh \rho(\xi) \cos\alpha) \end{equation} taking into account the flow corrections to the probed density. Here $\gamma_i$ is a parameter with dimensions of a cross section measuring the interaction strength, and hence $\gamma_g = (9/4)\,\gamma_q$ must hold to account for the different color factors of quarks and gluons. If the parton does not escape without energy loss, it must undergo a shift in energy (there is also the possibility that a strong shift into a thermal regime occurs, which is equivalent to an absorption of the parton). It is assumed that the mean value of the shift in energy will grow linearly in the number of scatterings $N$ as \begin{equation} d\Delta E = \Delta E_{1} \sigma_{el}^i \rho_M d\xi \end{equation} with $\Delta E_1$ the mean energy loss per scattering, whereas the fluctuations around the mean will grow like $\sqrt{N}$. Assuming a Gaussian distribution, this leads to the ansatz \begin{equation} \label{E-Elastic} P^i(\Delta E,\zeta) = P_{\rm esc}^i \delta(\Delta E) + \mathcal{N}_i \exp\left[ -\frac{(\Delta E - \alpha_i \kappa)^2}{\beta_i \kappa} \right] \end{equation} where $\mathcal{N}_i$ is a normalization such that $\int_0^\infty d (\Delta E) P^i(\Delta E, \zeta) = 1$ and (\ref{E-Elastic}) has to hold for quarks and gluons separately due to the different color factors. $\alpha_i$ is a parameter with the dimensions of a cross section times the energy shift per reaction. The model is thus characterized by three parameters: \begin{itemize} \item $\alpha_i$ controls the mean shift in energy per expected scattering \item $\beta_i$ governs the strength of fluctuations around this mean shift.
If $\beta_i$ is small, the model will have a strong correlation between path (and hence initial vertex) and shift in energy; if the parameter is large, this correlation is lessened \item $\gamma_i$ finally determines the magnitude of the escape probability. \end{itemize} In \cite{ElRP1} it was discussed that the space of all possible $(\alpha_i, \beta_i, \gamma_i)$ which can describe the measured $R_{AA}$ for $\sqrt{s_{NN}}=200$~GeV central Au-Au collisions is triangular and ordered by $\gamma$ --- if $P_{\rm esc}$ is of the order of the measured $R_{AA}$ already, the space of allowed shifts in energy is very constrained. On the other hand, if $P_{\rm esc}$ is small, more possibilities for a shift arise. Here, we investigate two distinct scenarios, one with large escape probabilities for quarks and gluons close to the allowed limit where $P_{\rm esc}^q = 0.218, P_{\rm esc}^g = 0.054$ and one with about half these values where $P_{\rm esc}^q = 0.12, P_{\rm esc}^g = 0.027$. The parameters are found in Table \ref{T-Params}. Note that the parameters here do not correspond to the sets given in \cite{ElRP1} since we use a different hydrodynamical model to do the extrapolation from $\sqrt{s_{NN}}=200$~GeV to 2.76~TeV here. \begin{table}[htb] \begin{tabular}{ccc} \hline &small $P_{\rm esc}$ & large $P_{\rm esc}$ \\ \hline \hline $\alpha_q$ [GeV$^{-1}$] & 0.5 & 2.0 \\ $\beta_q$ & 20.0 & 80.0 \\ $\gamma_q$ [GeV$^{-2}$] & 0.4 & 0.28 \\ \hline \end{tabular} \caption{\label{T-Params}Parameters for the two different elastic energy loss scenarios described in the text.} \end{table} \section{Results} \subsection{$P_T$ spectra} In Fig.~\ref{F-Spectra}, we show, in addition to the good description of the low $P_T$ bulk matter by hydrodynamics, the quality of the description of the high $P_T$ part above 6 GeV by pQCD + jet quenching. We conclude from this comparison that a LO pQCD calculation, supplemented by a $K$ factor, is sufficient for the accuracy required to compute $R_{AA}$ reliably enough. The residual small deviations are not a crucial issue, since, due to the nature of $R_{AA}$ as a ratio of $P_T$ spectra, deviations in the spectral shape between computation and data cancel to first order and only lead to subleading corrections. We also note at this point that the effect of jet quenching cannot be cast into the form of a constant downward shift of the unmodified spectrum, but that rather its $P_T$ dependence is crucial to describe the spectral shape correctly. As an interesting side remark, the fact that pQCD and jet quenching offer a good description of the spectrum from 6 to 20 GeV suggests that there is fairly little room for another $P_T$-dependent component of hadron production in this region, as suggested by e.g.\ certain recombination models (see e.g.\ \cite{Coalescence}). On the other hand, e.g. the sudden recombination model \cite{Reco,RecoLHC} expects to see effects largely at lower momenta. However, since recombination models generically expect quite different effects for mesons and baryons \cite{Coalescence,Reco,Greco}, the charged hadron spectrum is not a suitable observable to gauge the importance of recombination in this momentum window and a detailed discussion should be based on identified hadron spectra.
\subsection{Direct extrapolation} In Fig.~\ref{F-RAA} we show $R_{AA}$ computed in the various models for parton-medium interaction discussed above in comparison with the ALICE data \cite{ALICE-RAA}, with their parameters adjusted to 0-10\% central $\sqrt{s_{NN}}=200$~GeV Au-Au collisions at RHIC. The assumption underlying this extrapolation is that the hydrodynamical model for the bulk matter can be extended from RHIC to LHC in a well-controlled manner. This is a non-trivial issue, as it is known that changing the hydrodynamical model at the same energy may amount to a 50\% change in model parameters if the underlying dynamics is sufficiently different, even if both models are constrained by bulk observables. \begin{figure}[htb] \begin{center} \epsfig{file=RAA-LHC_f.eps, width=8.5cm} \end{center} \caption{\label{F-RAA}(Color online) The nuclear suppression factor $R_{AA}$ in 0-5\% central $\sqrt{s_{NN}}=2.76$~TeV Pb-Pb collisions computed in various models for the parton-medium interaction (see text) with model parameters adjusted to describe 0-5\% central $\sqrt{s_{NN}}=200$~GeV Au-Au collisions compared with the ALICE data \cite{ALICE-RAA} .} \end{figure} The predictions for $R_{AA}$ from the various models at $\sqrt{s_{NN}}=2.76$~TeV turn out to be quite dramatically different in normalization, shape and their expectation for even larger $P_T$. In particular, the differences between models are significantly larger than the statistical errors of the measurement, thus allowing for a clean discrimination if the systematic errors (such as those associated with the p-p baseline) can be understood. ASW and YaJEM in the default mode predict a rather slow rise of $R_{AA}$ with $P_T$ along with a strong suppression. This is not in agreement with the data shown for the default p-p reference, but would agree better with an alternative NLO scaled p-p reference (see \cite{ALICE-RAA} for details). Both parametrized elastic scenarios reproduce the shape of $R_{AA}$ with the default reference well; they mainly differ in the normalization. The difference between the parametrized elastic and the ASW model can be understood as follows: At RHIC conditions, only a narrow region of $P(\Delta E)$ around zero is effectively probed due to the steeply falling spectrum, as even a small shift in parton energy is equivalent to a massive suppression. In the parametrized elastic models, most of the weight of $\langle P(\Delta E)\rangle_{T_{AA}}$ is contained in the region between zero and $\sim 30$ GeV (see e.g. Fig.~1 right in \cite{ElRP1}), for small $P_{\rm esc}$ even more weight is contained close to the origin. This is very different for $\langle P(\Delta E)\rangle_{T_{AA}}$ computed in the ASW model --- here (see e.g. Fig.~5 in \cite{SysJet1}) the distribution is much flatter and contains a sizeable weight out to 100 GeV. To illustrate this in more detail: in a schematic model neglecting (among other things) hadronization, $R_{AA}$ can be understood from the ratio of modified over unmodified parton spectrum, where the modified parton spectrum at a given $p_T$ is determined by the number of partons escaping without energy loss plus the number of partons available in the spectrum at $p_T + \Delta E$ times the probability $P(\Delta E)$ of a shift by $\Delta E$. If we assume a power law $p_T^{-n}$ for the parton spectrum, \begin{equation} R_{AA} \approx \int_0^{E_{\rm max}} d\Delta E \langle P(\Delta E) \rangle_{T_{AA}} \left(1+\frac{\Delta E}{p_T}\right)^{-n}.
\end{equation} It is evident from the expression that $R_{AA}$ at a given $p_T$ is equal to the transmission term of zero energy loss plus a contribution which is proportional to the integral of $\langle P(\Delta E) \rangle_{T_{AA}}$ from zero up to the energy scale $E_{\rm max}$ of the parton (since a parton cannot lose more energy than it originally has), {\it seen through the filter} of the steeply falling spectrum. $R_{AA}$ then generically grows with $p_T$ since $E_{\rm max}$ grows linearly with $p_T$, and the speed of growth depends on the weight of $\langle P(\Delta E) \rangle_{T_{AA}}$ in the region from zero to $E_{\rm max}$ and on the power $n$ of the parton spectrum. At the LHC, more of $\langle P(\Delta E) \rangle_{T_{AA}}$ is accessible compared with RHIC due to the harder underlying parton spectrum, i.e. the power $n$ of the `filter' by which large $\Delta E$ in $\langle P(\Delta E) \rangle_{T_{AA}}$ are suppressed is reduced. This translates into a stronger rise of $R_{AA}$ with $P_T$ \cite{ElRP1} as compared to RHIC kinematics, and this rise is most pronounced for models where there is substantial weight of the energy loss probability density close to the origin. Thus, among these models the parametrized elastic model with small $P_{\rm esc}$ shows the strongest rise, while ASW shows very weak $P_T$ dependence. Finally, YaJEM-D predicts the strongest rise of $R_{AA}$ with $P_T$ overall, as a consequence of Eq.~(\ref{E-Q0}). In addition to the rise expected from the way the effective energy loss probability is probed as outlined above, YaJEM-D thus contains an explicit mechanism which introduces a strong energy dependence into the MMFF itself. While the normalization of the curve falls below the default baseline data, the shape is well reproduced. \subsection{Refit to data} As discussed in the context of Fig.~\ref{Hydrosystematics} above, there can be residual uncertainties in extrapolating the QCD matter fluid dynamics description from RHIC to the LHC. Therefore, in the second step, we allow a refit of parton-medium interaction model parameters to the ALICE data. We introduce a parameter $R$ to quantify the amount of refitting which is needed, where $R$ stands for the ratio of modified over unmodified parameter $K_{\rm med}$. Here we would consider, say, a 25\% change in the model parameter reasonable within the uncertainties of the hydrodynamical extrapolation. The results are shown in Fig.~\ref{F-RAArefit}. \begin{figure}[htb] \begin{center} \epsfig{file=RAA-LHCrefit_f.eps, width=8.5cm} \end{center} \caption{\label{F-RAArefit}(Color online) The nuclear suppression factor $R_{AA}$ in 0-5\% central $\sqrt{s_{NN}}=2.76$~TeV Pb-Pb collisions computed in various models for the parton-medium interaction (see text) with model parameters refit to ALICE data \cite{ALICE-RAA}. To indicate the magnitude of the uncertainty, the alternative p-p reference result, which is not shown with errors by ALICE, has been given a 10\% error band.} \end{figure} Following this procedure, we find that the ASW model can be brought into a rough agreement with the alternative p-p reference data, but even allowing for a substantial parameter readjustment it does not agree with the default reference data --- the $P_T$ dependence is too weak. YaJEM likewise follows the trend of the alternative p-p reference data well. YaJEM-D, on the other hand, can be brought into good agreement in both shape and normalization with the default p-p reference data.
Unfortunately, even if we are willing to discard a model for deviations $R < 0.75$ or $R > 1.25$ in the refitting procedure, at present no such statement can be made from the ALICE data due to the uncertainty in the p-p baseline. This stresses the importance of having a measured baseline. However, looking at Fig.~\ref{F-RAA}, we see reason to conclude that a larger range in $P_T$ alone will already allow some models to be ruled out decisively based on the wrong shape of the $P_T$ dependence. \section{Discussion} In this work, we have simultaneously analyzed the low-$P_T$ and high-$P_T$ spectrum of charged hadrons in central Pb-Pb collisions at $\sqrt{s_{NN}}=2.76$~TeV. For the hydrodynamical modeling describing the low-$P_T$ spectrum, we have computed the initial conditions based on pQCD minijet production and saturation \cite{Eskola:1999fc,Eskola:2005ue}. To account for the NLO and higher-order corrections in minijet production, we have made an iterative fit to the measured LHC charged-hadron multiplicity. The main outcomes of this computation are the produced transverse energy and the early formation time, $\tau_i\approx 0.12$~fm, of the pQCD minijet system. We assume that $\tau_i$ is also the starting time $\tau_0$ for the hydrodynamical evolution, but in charting the uncertainties of our modeling we have shown the sensitivity of the computed $P_T$ spectrum to $\tau_0$. The sensitivity to the decoupling temperature, the multiplicity fitting, and the transverse profile of the initial energy density is also shown explicitly. In our hydrodynamic framework, the uncertainty related to the initial transverse profile can be considered the main uncertainty in the extrapolation from RHIC to LHC. We have for simplicity considered two possibilities here, the eBC and the eWN profiles, thus essentially ignoring the QCD dynamics that may cause the profile to change from RHIC to LHC. The profile should, however, be computable in the pQCD+saturation approach \cite{Eskola:2000xq}, but since we do not yet have the needed NLO pQCD elements fully at hand \cite{EHPT_soon}, we leave this as interesting future work. We can nevertheless see that, within the uncertainties charted, the pQCD + saturation + (ideal) hydrodynamics framework works quite well in reproducing the charged-hadron $P_T$ spectrum up to $P_T\sim 4$~GeV, and that we obtain this agreement essentially without further tuning of the hydrodynamic parameters from RHIC to LHC. Interestingly, we have also observed (see Fig.~\ref{F-Spectra}) that once parton energy losses have been accounted for, the computed pQCD tail of high-$P_T$ hadron production starts to dominate over the hydrodynamic component at $P_T\gtrsim 5$~GeV. Comparison with the LHC data shows that the matching of these two components is very efficient, in that it leaves fairly little room for hadron production components beyond hydrodynamics and pQCD+energy loss in the cross-over region $P_T=4-5$~GeV. After bringing the hydrodynamical evolution of the background QCD matter under control, we have proceeded to analyze the high-$P_T$ part of the charged-hadron $P_T$ spectrum. We have applied several models of parton-medium interactions, tuned to the nuclear suppression factor $R_{AA}$ in $\sqrt{s_{NN}}=200$~GeV Au-Au collisions, to $\sqrt{s_{NN}}=2.76$~TeV Pb-Pb collisions, where the primary parton spectra are significantly harder.
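To make the role of the spectral slope concrete, the following minimal numerical sketch evaluates the schematic relation for $R_{AA}$ given above. The exponential form of $\langle P(\Delta E)\rangle_{T_{AA}}$, the escape probability, and all parameter values are purely illustrative assumptions of ours and do not correspond to any of the fitted models:
\begin{verbatim}
import numpy as np

def r_aa(p_t, n, mean_loss=15.0, p_escape=0.2, n_steps=2000):
    """Schematic R_AA: escape term plus the energy-shifted continuum,
    filtered by a power-law parton spectrum p_T^{-n}.  Toy energy loss:
    P(Delta E) = p_escape * delta(Delta E)
                 + (1 - p_escape) * exp(-Delta E/mean_loss)/mean_loss."""
    de = np.linspace(0.0, p_t, n_steps)        # a parton cannot lose more than p_t
    pdf = np.exp(-de / mean_loss) / mean_loss  # continuous part of P(Delta E)
    weight = (1.0 + de / p_t) ** (-n)          # 'filter' of the falling spectrum
    return p_escape + (1.0 - p_escape) * np.trapz(pdf * weight, de)

for n, label in [(8, "RHIC-like n=8"), (6, "LHC-like  n=6")]:
    print(label, [round(r_aa(pt, n), 3) for pt in (10, 30, 100)])
\end{verbatim}
For a fixed energy loss distribution, lowering the spectral power $n$ from a RHIC-like to an LHC-like value both raises $R_{AA}$ and strengthens its rise with $p_T$, in line with the discussion above.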
We found that, in principle, the large kinematic lever-arm at the LHC translates into a significant power to distinguish the various models, even given the uncertainties in extrapolating the bulk medium model to larger energy. Of particular importance here is the rise of $R_{AA}$ with $P_T$ observed at $\sqrt{s_{NN}}=2.76$~TeV, which is intimately connected with the slope of the pQCD parton spectrum and reflects the way the energy loss probability distribution is probed. If there were no systematic uncertainty in the data at this point, two of the models we tested (ASW and YaJEM) could be ruled out already by the ALICE data obtained with the default p-p baseline. On the other hand, these models agree fairly well with the alternative p-p reference data. However, in addition to the angular-averaged suppression factor in central collisions considered here, other observables need to be studied. If we require that a model should also account for the observed spread between in-plane and out-of-plane hard hadron emission at $\sqrt{s_{NN}}=200$~GeV, then YaJEM would be strongly disfavoured \cite{YaJEM-D} and YaJEM-D would be more consistent with the data than ASW. The fact that the simple parametrized elastic energy loss model, which is known to fail for pathlength-dependent observables, is able to describe the scaling in $\sqrt{s}$ from RHIC to LHC rather well should be a stern warning that the $\sqrt{s}$ and $P_T$ dependences only probe particular aspects of energy loss models, and that agreement with a subset of the available observables may not be enough to judge the validity of a model. As shown in Fig.~\ref{F-RAA}, extending the measurements of $R_{AA}$ out to larger values of $P_T$ at the LHC will provide strong constraints and viability tests for the parton-medium interaction models. A combined analysis of $R_{AA}$ as a function of $P_T$, $\sqrt{s_{NN}}$, impact parameter $b$ and reaction plane angle $\phi$ will be highly discriminating between the available models, even without having to resort to other high-$P_T$ observables such as triggered correlation measurements or fully reconstructed jets. Constraining the nature of the parton-medium interaction by leading hadron production in this way is thus the first step towards tackling the more difficult task of understanding the complete dynamics of a parton shower in the medium. \begin{acknowledgments} We thank Pasi Huovinen for providing us with the EoS used in this work. T.R. is supported by the Academy Researcher program of the Academy of Finland, Project No. 130472. Additional support comes from K.J.E.'s Academy Project 133005. H.H. gratefully acknowledges financial support from the national Graduate School of Particle and Nuclear Physics, and computing time from the CSC IT Center for Science in Espoo, Finland. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Electromagnetic wave (EW) propagation in metamaterials, artificially prepared media composed of networks of interacting lumped electromagnetic circuits, has recently attracted enormous attention due to the variety of physical phenomena occurring in such systems, e.g. electromagnetically induced transparency (reflectivity) \cite{liao2016electromagnetically,shulga2018magnetically,chaldyshev2011optical}, "left-handed" metamaterials \cite{smith2004metamaterials,zharov2003nonlinear}, and dynamically induced metastable states \cite{lazarides2015chimeras,jung2014multistability}, just to name a few. These networks have been fabricated from metallic, semiconducting, magnetic or superconducting materials. The latter case of networks based on superconducting elementary circuits is of special interest because of an extremely low dissipation, a great tunability of the microwave resonances, and a strong nonlinearity \cite{jung2014multistability,ricci2005superconducting,anlage2010physics}. In most systems studied so far, these superconducting electromagnetic circuits, containing one or a few Josephson junctions, can be precisely described as classical nonlinear oscillators, and the interaction of propagating EWs with a network of such superconducting lumped circuits is determined by a set of classical nonlinear dynamic equations \cite{jung2014multistability,miroshnichenko2001breathers,filatrella2000high}. However, it has been well known for many years that small superconducting circuits can be properly biased in the \textit{coherent macroscopic quantum regime}, and in the simplest case the dynamics of such circuits is equivalent to the quantum dynamics of two-level systems, i.e. qubits \cite{pashkin2003quantum,houck2009life,chiorescu2004coherent,majer2007coupling,fink2009dressed,macha2014implementation,shulga2017observation}. A variety of different types of superconducting qubits has been realized, e.g. dc-voltage-biased charge qubits (Fig. 1A) \cite{pashkin2003quantum}, flux qubits weakly \cite{macha2014implementation} (Fig. 1B) and strongly (Fig. 1C) \cite{shulga2018magnetically} interacting with a low-dissipative transmission line, transmons \cite{houck2009life,shulga2017observation}, etc. As a next step, these qubits are organized in different arrays or lattices forming \textit{quantum electromagnetic networks}, and an inductive or capacitive coupling of such networks to an external low-dissipative transmission line allows one to experimentally access the frequency-dependent transmission coefficient of EWs, $D(\omega)$. The interaction of EWs with quantum networks of qubits results in a large number of coherent quantum phenomena on a macroscopic scale: e.g. collective quantum states \cite{fink2009dressed,macha2014implementation,shulga2017observation,fistul2017quantum} and magnetically induced transparency \cite{shulga2018magnetically} have been observed, and coherent electromagnetic pulse propagation \cite{asai2015effects,ivic2016qubit} and nonclassical states of photons \cite{iontsev2016double} have been theoretically predicted and studied. Therefore, in this quickly developing field a natural question arises \cite{rakhmanov2008quantum,asai2015effects,ivic2016qubit,fistul2017quantum,iontsev2016double}: how does the coherent quantum dynamics of a network of superconducting qubits influence the EW propagation? In this paper we present a systematic study of EW propagation through an array of qubits embedded in a low-dissipative transmission line (see Fig. 1).
We focus on the resonant case, i.e. $\omega \simeq \omega_q$, where $\omega_q$ is the qubit frequency, and theoretically analyze the transmission coefficient $D(\omega)$. To obtain $D(\omega)$ we derive an effective nonlinear EW equation taking into account the coherent quantum dynamics of qubits exposed to the electromagnetic field. This allows us to address both limits of low and high power of the applied microwave radiation. In the linear regime, and in a relatively wide frequency region near the resonance, we obtain a strong suppression of $D(\omega)$, both for a single qubit and for chains composed of a large number of densely arranged qubits. However, in a narrow frequency region for chains of qubits we obtain resonant transmission of EWs with a greatly enhanced $D(\omega)$. As we turn to the nonlinear regime, realized for a moderate power of the applied microwave radiation, we predict and analyze various transitions between states characterized by high and low values of $D(\omega)$. We argue that these transitions are fingerprints of nonequilibrium steady states of an array of qubits. \begin{figure} \includegraphics[width=1\columnwidth]{Schematic-a.png} \includegraphics[width=1\columnwidth]{Schematic-b.png} \includegraphics[width=1\columnwidth]{Schematic-c.png} \caption{(color online) Schematic of qubit arrays coupled to a low-dissipative transmission line: voltage-biased charge qubits (A), weakly coupled flux qubits (B), and strongly coupled flux qubits (C). Josephson junctions are illustrated by crosses; the input and output of EWs are shown by arrows. The classical, $Q$, and quantum, $\varphi_n$, dynamic variables are indicated. The properties of the transmission line are characterized by two parameters: the capacitance $C_0$ and the inductance $L_0$ per unit length. $D$ is the transmission coefficient of propagating EWs.} \label{fig:schematic} \end{figure} The paper is organized as follows: in Section II we present our model for a qubit array embedded in a low-dissipative transmission line, introduce the Lagrangian, and derive the effective nonlinear wave equation for the electromagnetic field interacting with an array of qubits. In Sec. III we analyze the coherent quantum dynamics of a single qubit subject to an applied electromagnetic field in both limits of low and high power. In Sec. IV we apply the effective nonlinear wave equation derived in Sec. II to a study of the frequency-dependent transmission coefficient, $D(\omega)$, for a chain of densely arranged qubits. We address both the linear and the nonlinear regime. Section V provides conclusions. \section{Model, Lagrangian and Dynamic Equations} \subsection{Model} Let us consider a regular one-dimensional array of $N$ lumped superconducting quantum circuits embedded in a low-dissipative, nondispersive transmission line (see Fig. 1). As long as the amplitude of the propagating EWs is not too low, i.e. in the regime of a large number of photons, the electromagnetic field in the transmission line is characterized by a coordinate- and time-dependent classical variable: the charge distribution $Q(x,t)$. Different types of lumped superconducting quantum circuits have been realized (the schematics of arrays composed of charge (Fig. 1(A)) and flux (Fig. 1(B),(C)) qubits are shown), and the quantum dynamics of such circuits is characterized by quantum variables, the Josephson phases $\varphi_n$. An artificially prepared potential $U(\varphi_n)$ allows one to vary the circuit resonant frequencies over a wide range.
The dynamics of the whole system in the classical regime is described by the total Lagrangian, which consists of three parts: the Lagrangian of the electromagnetic field $L_{EF}$, the Lagrangian of an array of lumped superconducting quantum circuits (qubits) $L_{qb}$, and the interaction Lagrangian $L_{int}$ describing the interaction between the qubits and the electromagnetic field: \begin{equation} \label{Lagrangian} L=L_{EF}+L_{qb}+L_{int}. \end{equation} \subsection{Lagrangian and dynamic equation} The electromagnetic field Lagrangian $L_{EF}$ is written as \begin{equation} \label{Lagph} \ L_{EF}=\frac{L_0 \ell}{2} \left \{ \left [\frac{\partial Q}{\partial t} \right ]^2-c_0^2 \left [\frac{\partial Q}{\partial x} \right]^2 \right \}, \end{equation} where $c_0=1/\sqrt{L_0C_0}$ is the velocity of EWs propagating in the transmission line, $L_0$ and $C_0$ are the inductance and capacitance of the transmission line per unit length, respectively, and $\ell$ is the length of the system. The Lagrangian of an array of lumped quantum circuits is written as \begin{equation} \label{Lagj} \ L_{qb}= \sum_{n=1}^N \left[ \frac{E_J}{2\omega_p^2}(\dot{\varphi}_n-\dot{\phi}_{0n})^2-U(\varphi_n) \right], \end{equation} where $E_J$ and $\omega_p$ are the Josephson energy and the plasma frequency, respectively; the parameter $\dot{\phi}_{0n}$ is proportional to the gate voltage, and it allows one to vary the frequency of charge qubits (such gate circuits are shown by dashed arrows in Fig. 1(A)). For charge qubits the potential $U(\varphi_n)$ is written explicitly as $U(\varphi_n)=E_J (1-\cos \varphi_n)$, whereas for flux qubits (see Fig. 1(B),(C)) the double-well potential reads $U(\varphi_n)=-E_J [2\cos \varphi_n-\eta \cos (2\varphi_n)]$. The interaction part of the Lagrangian is written as \begin{equation} \label{Lagint} \ L_{int}= - \frac{\hbar\alpha w }{2e}\sum_{n=1}^N \delta (na-x) Q(t,x)\dot{\varphi}_n, \end{equation} where the coupling coefficient $\alpha$ varies by around five orders of magnitude, from $10^{-2}$ for weak coupling between qubits and the transmission line \cite{macha2014implementation} (Fig. 1(B)) up to $4 \times 10^2$ for strongly coupled qubits \cite{shulga2018magnetically} (Fig. 1(C)). Here $w$ is the geometrical size of the lumped quantum circuits (qubits), $w \ll \ell$. As we turn to the coherent quantum regime of networks of qubits, the dynamics of EWs is described by the wave equation \begin{equation} \label{dynequation} \frac{\partial^2 Q}{\partial t^2} -\gamma \frac{\partial Q}{\partial t} -c_0^2 \frac{\partial^2 Q}{\partial x^2} = -\frac{ \hbar}{2e} \frac{ \alpha w}{ L_0 \ell}\sum_{n=1}^N \delta (na-x) <\dot{\varphi}_n>_{eq}, \end{equation} where we take into account the dissipation effects in the transmission line, characterized by the phenomenological parameter $\gamma \ll 1$. Here, $<...>_{eq}$ denotes the quantum-mechanical averaging over the equilibrium state of the quantum network. \subsection{Quantum dynamics of a single qubit and effective wave equation} Since we neglect the direct coupling between elementary circuits, the coherent quantum dynamics of the network reduces to that of independent lumped electromagnetic circuits exposed to the applied electromagnetic field.
The quantum dynamics of a single element is determined by the time-dependent Hamiltonian $\hat H_{qb}=\hat H_0+\hat H_{t}$, where the equilibrium Hamiltonian $\hat H_0$ is \begin{equation} \label{eqHamil} \hat H_0= \frac{\omega_p^2}{2E_J}(\hat p_\varphi-p_0)^2+U(\varphi) \end{equation} and the nonequilibrium part of the total Hamiltonian, explicitly depending on time, is \begin{equation} \label{neqHamil} \hat H_t= \frac{\hbar \alpha \omega_p^2}{2e E_J} Q(t,x) \hat p_\varphi. \end{equation} In the resonant regime, where the EW frequency $\omega \simeq \omega_q$, we truncate the full Hamiltonian (see Eqs.~(\ref{eqHamil}) and (\ref{neqHamil})) to the Hamiltonian of a two-level system. These two levels can be fine-tuned to resonance with the frequency of the EW propagating in the transmission line. In particular, for the charge qubit case shown in Fig. 1(A), at the avoided-crossing point such a truncation leads to the effective Hamiltonian \cite{pashkin2003quantum} \begin{equation} \label{effHamiltonian} \hat H_{eff}=\frac{E_J}{2} \hat \sigma_z+\frac{(\hbar \omega_p)^2 \alpha}{4e E_J} Q(x,t)\hat \sigma_x, \end{equation} where $\hat \sigma_{x,z}$ are the Pauli matrices. In this case the qubit frequency is expressed as $\omega_q=E_J$. The time-dependent wave function of a charge qubit is written as \begin{equation} \Psi (t)=C_-(t)f_-+C_+(t)f_+, \end{equation} where $f_{\pm}=\frac{1}{\sqrt{4\pi}}\left(1\pm e^{i\varphi} \right)$ are the stationary wave functions of the two states. The corresponding quantum-mechanical average of the operator $\dot \varphi$ on the right-hand side of Eq. (\ref{dynequation}) reads \begin{equation} \label{Average} <\dot \varphi>_{eq}= \frac{\hbar \omega_p^2}{E_J}\Re e [C_-(t)C_+(t)]. \end{equation} Taking into account the initial conditions $C_{-}(0)=1$ and $C_{+}(0)=0$, and using the resonant condition $\omega_q \simeq \omega$, we obtain in the non-dissipative (\textit{nd}) regime $$ S^{nd}_n(\omega)= \int dt e^{i\omega t} \Re e [C_-(t)C_+(t)]= $$ $$ =\eta q(x_n,\omega)\frac{1-\omega/\omega_q}{(1-\omega/\omega_q)^2+\eta^2 | q(x_n,\omega)|^2}, $$ where we introduce the dimensionless strength of interaction, $\eta=\alpha [\hbar \omega_p/(2E_J)]^2$, and the dimensionless charge distribution, $q(x,t)=Q(x,t)/e$. In the low-dissipative regime we introduce the relaxation time $T$, and by solving the dynamic equations for the density matrix, the time-dependent correlation function of the $n$-th qubit is written in the following form: \begin{equation} \label{CoorFunction} S_n(\omega)= \eta \frac{1-\omega/\omega_q+i/(\omega_q T)}{(1-\omega/\omega_q)^2+1/(\omega_q T)^2+\eta^2 |q(x_n,\omega)|^2}q(x_n,\omega). \end{equation} Substituting (\ref{Average}) and (\ref{CoorFunction}) in (\ref{dynequation}), we obtain the effective equation that allows one to analyze the transmission coefficient $D(\omega)$ for propagating EWs of frequency $\omega$: $$ c_0^2 \frac{d^2 q}{d x^2}+\left[\omega^2 +i\gamma \omega\right] q(x) =\frac{2 w\hbar \omega_q}{e^2 L_0 \ell} \eta^2 $$ \begin{equation} \label{Propagationequation} \sum_{n=1}^N \delta (na-x) \frac{1-\omega/\omega_q+i/(T\omega_q)}{\eta^2 |q(x,\omega)|^2+(1-\omega/\omega_q)^2+1/(T \omega_q)^2}q(x,\omega). \end{equation} \section{EW transmission: a single qubit} In this Section we consider the EW transmission through a single qubit.
The charge distribution $q(x,\omega)$ satisfies the effective equation $$ c_0^2 \frac{d^2 q}{d x^2}+[\omega^2 +i\gamma \omega ] q(x) =\frac{2\hbar w\omega_q}{e^2 L_0 \ell} \eta^2 $$ \begin{equation} \label{Propagationequation-SQ} \delta (x) \frac{1-\omega/\omega_q+i/(T\omega_q)}{\eta^2 |q(x,\omega)|^2+(1-\omega/\omega_q)^2+1/(T \omega_q)^2}q(x,\omega). \end{equation} As long as the power of the EW is small, i.e. $|Q(x)/e|\ll(\eta \omega_q T)^{-1}$, the transmission coefficient $D(\omega)$ reads \begin{equation} \label{TQ-SQ-LC} \ D(\omega) =\left[ 1+ \frac{g}{4}\frac{g+4\Gamma}{(\omega/\omega_q-1)^2+\Gamma^2} \right]^{-1}. \end{equation} Here, we introduce the dimensionless relaxation rate of a single qubit, $\Gamma=(T\omega_q)^{-1}$, and the interaction strength $g=2\eta^2 \frac{\hbar w}{e^2 c_0 L_0 \ell}$, where $w$ is the geometrical size of the lumped quantum circuit (qubit), $w \ll \ell$. The dependencies of $D(\omega)$ in the linear regime for different values of $\Gamma$ and $g$ are presented in Fig. 2. The most important effect is a strong resonant suppression of EW propagation in the limit $g/\Gamma \gg 1$. The width of the $D(\omega)$ curve diminishes as the relaxation rate $\Gamma$ decreases. As we turn to the high-power regime of the applied microwave radiation, i.e. $|q(-\infty)| \simeq \sqrt{P_0}\gg \Gamma/\eta$, we obtain the transmission coefficient as the solution of the transcendental equation \begin{widetext} \begin{equation} \label{TC-SQ-NLR} D(\omega) = \left \{ 1+\frac{g}{4}\frac{(4\Gamma +g )[(\omega/\omega_q-1)^2+\Gamma^2] +4\Gamma \eta^2 P_0 D(\omega) }{\left[(\omega/\omega_q-1)^2+\eta^2 P_0 D(\omega)+\Gamma^2 \right ]^2}\right \}^{-1} \end{equation} \end{widetext} Here, $P_0$ is the power of the applied microwave radiation far away from the qubit. An analysis of Eq. (\ref{TC-SQ-NLR}) shows that in the nonlinear regime the transmission coefficient $D$ is strongly determined by the ratio of the two parameters $g$ and $\sqrt{(\omega-\omega_q)^2+\Gamma^2}$. Indeed, if $g/ \sqrt{(\omega-\omega_q)^2+\Gamma^2} \leq 1$ the transmission coefficient just monotonically increases with $P_0$, but in the opposite case, $g/\sqrt{(\omega-\omega_q)^2+\Gamma^2} \gg 1$, there is a particular range of power $P_0$ where two dynamic states of EWs, characterized by large and small transmission coefficients, are obtained. The numerically calculated dependencies of the transmission coefficient on the power $P_0$ are shown in Fig. 3 (see also the numerical sketch below). \begin{figure} \includegraphics[width=1.1\columnwidth]{Transmission-SQLR.PNG} \caption{(color online) The transmission coefficient of EWs, $D(\omega)$, in the linear regime for a single qubit embedded in the transmission line. The parameters were chosen as $\Gamma=10^{-2}, g=0.06$ (red solid line), $\Gamma=10^{-2}, g=0.008$ (blue solid line), and $\Gamma=10^{-1}, g=0.06$ (black dashed line).} \label{fig:2} \end{figure} \begin{figure} \includegraphics[width=1.1\columnwidth]{Fig3.JPG} \caption{(color online) The transmission coefficient $D$ as a function of the power $P_0$ of the applied microwave radiation. The parameters were chosen as $\omega=\omega_q$ and $g/\Gamma=9$ (blue line), $g/\Gamma=16$ (magenta line), $g/\Gamma=34.6$ (red line). } \label{fig:3} \end{figure}
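To illustrate the two branches, the following minimal sketch iterates Eq.~(\ref{TC-SQ-NLR}) at resonance. The damped fixed-point scheme and all parameter values are our own illustrative choices and are not part of the model itself:
\begin{verbatim}
import numpy as np

def transmission(delta, gamma, g, eta2P0, d0, n_iter=2000, mix=0.3):
    """Damped fixed-point iteration of the transcendental equation for D;
    delta = omega/omega_q - 1, eta2P0 = eta^2 * P0.  Different initial
    guesses d0 select different stable branches in the bistable regime."""
    d = d0
    for _ in range(n_iter):
        num = (4.0*gamma + g) * (delta**2 + gamma**2) + 4.0*gamma*eta2P0*d
        den = (delta**2 + eta2P0*d + gamma**2) ** 2
        d_new = 1.0 / (1.0 + 0.25 * g * num / den)
        d = (1.0 - mix) * d + mix * d_new   # damping stabilises the iteration
    return d

gamma = 1e-2
g = 0.346                                   # g/Gamma = 34.6, cf. Fig. 3
for eta2P0 in (1e-5, 1e-4, 1e-3, 1e-2, 1e-1):
    low = transmission(0.0, gamma, g, eta2P0, d0=1e-4)
    high = transmission(0.0, gamma, g, eta2P0, d0=1.0)
    print(f"eta^2*P0={eta2P0:.0e}  D(low)={low:.4f}  D(high)={high:.4f}")
\end{verbatim}
In the bistable power range, starting the iteration from the low- and high-transmission sides converges to two distinct values of $D$, corresponding to the two dynamic states discussed above.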
\section{EW Transmission: a periodic array of superconducting qubits} In this Section we consider the propagation of EWs through a periodic array of $N$ qubits. In this case the charge distribution $q(x,t)$ is determined by Eq. (\ref{Propagationequation}). By making use of the method elaborated for the solution of the Schr\"odinger equation with the Kronig-Penney potential \cite{lifshitz1988introduction}, we present the charge distribution $q(x)$ in the following form: $$ q(x)=-\frac{i }{2 k c_0^2} \sum_n \beta\{q_n\} q_n \exp{(ik|x-x_n|)}~, $$ \begin{equation} \label{Presentation-Q} \beta=\eta^2 \frac{2\hbar}{e^2} \frac{w \omega_q}{ L_0 \ell} \frac{1-\omega/\omega_q+i/(T\omega_q)}{(1-\omega/\omega_q)^2+1/(T \omega_q)^2+\eta^2 |q(x,\omega)|^2}, \end{equation} where the wave vector is $k=\sqrt{\omega^2+i\gamma \omega}/c_0$, and $q_n=q(x_n)$ is the amplitude of the propagating charge distribution at the point $x_n$. By making use of the properties of the $\delta$-function, we obtain the set of discrete equations for $q_n$: \begin{equation} \label{DiscreteEquation} q_{n+1}+q_{n-1}-\left[ 2\cos ka-\frac{\beta\{q_n\}}{k c_0^2}\sin(ka)\right] q_n=0~. \end{equation} Here, $a=\ell/N$ is the distance between adjacent qubits in the array. Next, we study the EW propagation through an array of densely arranged qubits, i.e. when the condition $ka \ll 1$ holds. In this case one can transform the difference equation (\ref{DiscreteEquation}) into the differential equation \begin{equation} \label{GenNonlEquation} c_0^2 \frac{d^2 q(x)}{dx^2}+\left[\omega^2+i\gamma \omega +\frac{\beta\{q(x)\}}{a} \right] q(x)=0~. \end{equation} The transmission coefficient is determined as $D(\omega)=|q(\ell)/q(0)|^2$. \subsection{Low power regime} As long as the power of the applied microwave radiation is low, one can neglect the nonlinear dependence of $\beta(q)$ on $q$, and by making use of the well-known result for quantum tunneling through a rectangular barrier the transmission coefficient is written as \begin{equation} \label{TrCoeff-Array-Lin} \ D = \left | \cos(k \ell \sqrt{K (\omega)})+\frac{i}{2}\sqrt{ K(\omega)} \sin(k \ell \sqrt{ K(\omega)}) \right | ^{-2}, \end{equation} where $K(\omega)=\frac{g c_0}{\omega_q a} \frac{(1-\omega/\omega_q)+i \Gamma}{(1-\omega/\omega_q)^2+\Gamma^2}$. Here, we consider the regime most relevant to current experiments, in which the total length of the system is smaller than the wavelength of the EWs, i.e. $\ell \ll \lambda=c_0/\omega$, and the effective strength of interaction between a single qubit and EWs is large, i.e. $\beta/a \gg \omega_q^2$. With these assumptions, the dependencies of $D(\omega)$ for different values of the effective strength of interaction $g$ are presented in Fig. \ref{fig:xi1TAlpha}. Beyond the standard resonant suppression of $D(\omega)$ observed for moderately large values of $g$ (see Fig.\ref{fig:xi1TAlpha}, red line), we obtain a great enhancement of $D(\omega)$ in an extremely narrow region of frequencies (see Fig.\ref{fig:xi1TAlpha}, blue line). This effect of resonant transparency of EWs propagating through an array of qubits occurs for extremely large values of the effective strength of interaction $g$ (a numerical sketch of Eq.~(\ref{TrCoeff-Array-Lin}) is given below). We note that such resonant propagation of EWs through a chain of densely arranged superconducting qubits has been experimentally observed in \cite{shulga2018magnetically}, where an extremely large value of the coupling was achieved by direct incorporation of the qubits' Josephson junctions in the superconducting transmission line. Moreover, as the size of the array increases we obtain a large set of peaks in the dependence of $D(\omega)$, as shown in Fig. \ref{Fig5}.
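The transparency window can be examined directly from Eq.~(\ref{TrCoeff-Array-Lin}); the short sketch below scans $D(\omega)$ for a moderate and an extremely large effective coupling. The parameter values follow Fig.~\ref{fig:xi1TAlpha}, while the implementation choices (frequency grid, printed summary) are ours:
\begin{verbatim}
import numpy as np

def d_array_linear(omega_ratio, gamma, g_eff, kl):
    """Transmission through a dense qubit array in the linear regime;
    omega_ratio = omega/omega_q, g_eff = g*c0/(omega_q*a), kl = k*ell."""
    detune = 1.0 - omega_ratio
    K = g_eff * (detune + 1j * gamma) / (detune**2 + gamma**2)
    x = kl * np.sqrt(K)
    amp = np.cos(x) + 0.5j * np.sqrt(K) * np.sin(x)
    return 1.0 / np.abs(amp) ** 2

w = np.linspace(0.95, 1.05, 200001)
for g_eff in (9.0, 900.0):
    D = d_array_linear(w, gamma=3e-3, g_eff=g_eff, kl=0.01)
    print(f"g_eff={g_eff}: min D = {D.min():.3e}, max D = {D.max():.3f}")
\end{verbatim}
For the large-coupling case the scan should exhibit a narrow transmission peak on top of the broad suppression dip, in line with the blue curve of Fig.~\ref{fig:xi1TAlpha}.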
\begin{figure} \includegraphics[width=1\columnwidth]{Transmission-LR-Array-1.PNG} \caption{(color online) The transmission coefficient of EWs, $D(\omega)$, in the linear regime for a moderate-size ($k \ell =0.01$) array of qubits embedded in a low-dissipative transmission line. The parameters were chosen as $\Gamma=3 \cdot 10^{-3}$, with $g c_0/(\omega_q a)=9$ (red solid line) and $g c_0/(\omega_q a)=900$ (blue solid line). } \label{fig:xi1TAlpha} \end{figure} \begin{figure} \includegraphics[width=1\columnwidth]{Transmission-LR-Array-2.PNG} \caption{(color online) The transmission coefficient of EWs, $D(\omega)$, in the linear regime for large-size arrays of qubits embedded in a low-dissipative transmission line. The parameters were chosen as $\Gamma=3 \cdot 10^{-3}$ and $g c_0/(\omega_q a)=9$, for different values of $k\ell=0.08$ (blue thick line) and $k \ell=0.32$ (red thin line). } \label{Fig5} \end{figure} \subsection{High power regime} In the regime of high-power applied microwave radiation the dynamics of EWs is determined by the generic Eq. (\ref{GenNonlEquation}), written as \begin{equation} \label{NonlEq-Arrays} \frac{d^2 q}{dx^2}+\left [k^2+\frac{\chi}{|q|^2+\xi^2} \right]q(x)=0, \end{equation} where $\chi=[g c_0/(\eta^2 \omega_q a)](\omega_q-\omega)$ and $\xi=(1/\eta^2)[(\omega_q-\omega)^2+\Gamma^2]$. Here, we neglect the small absorption of EWs, i.e. the imaginary parts of $k$ and $\chi$. We solve this intrinsically nonlinear wave equation by making use of an analogy with the famous Kepler problem of classical mechanics \cite{landau1960course}. To this end we introduce the spatially dependent amplitude $r(x)$ and phase $\phi(x)$ of the EWs as $q(x)=r(x)e^{i\phi(x)}$, with $|q|=r$. The spatial distributions of the amplitude $r(x)$ and phase $\phi(x)$ of the electromagnetic field are determined by the following equations: $$ r^2 \frac{d\phi}{dx}=C $$ \begin{equation} \label{NonlEquation-ArrQubit} \frac{d}{dx} \left [(r^\prime)^2+\frac{C^2}{r^2} \right]+(r^2)^\prime \left [k^2+R(r) \right]=0, \end{equation} where we introduce the nonlinear function $R(r)=\frac{\chi}{r^2+\xi^2}$, and $C$ is a constant that has to be found from the boundary conditions. The boundary conditions are derived from the continuity of the electric and magnetic fields of the EW at the boundaries, $x=0$ and $x=\ell$, of the system. They are explicitly written as \begin{equation} \label{BoundaryConditions} \begin{cases} \frac{d}{dx}\ln q(\ell)=ik\\ A+B=q(0) \\ A-B=q^\prime(0)/ik, \\ \end{cases} \end{equation} where $A \propto \sqrt{P}$ is the amplitude of the incident EW and $P$ is its power. The transmission coefficient of propagating EWs is determined as $D=|q(\ell)/A|^2$. The solution of Eq. (\ref{NonlEquation-ArrQubit}) is obtained as \begin{equation} \label{SolutionNonlEquation-ArrQubit} \int\limits_{r(0)}^{r(\ell)}\frac{dr}{ \sqrt{E-\frac{C^2}{r^2}-r^2k^2-\chi \ln(r^2+\xi^2) } }=\ell, \end{equation} where the constant $E$ is the effective energy of the system. The constant $C$ is determined as $C=k[r(\ell)]^2$. In the limit of not extremely large coupling $g$ and large system size, $k\ell \gg 1$, using the condition $|r(\ell)-r(0)| \ll r(0)$ we write down the expression for the transmission coefficient $D(\omega)$ as \begin{equation} \label{Transmission-Array} D=\frac{1}{1+\frac{\chi}{2k^2\xi^2}(1-z)}, \end{equation} where the variable $z=r(0)/r(\ell)$ is close to one. The parameter $z$ is determined by the transcendental equation derived from Eq.
(\ref{SolutionNonlEquation-ArrQubit}) as \begin{equation} \label{ZParameter} \sqrt{\frac{[r^2(\ell)+\xi^2]}{2\chi}}\int\limits_z^1\frac{dy}{\sqrt{1-y}}=\ell~. \end{equation} Thus, the parameter $z$ is obtained explicitly as \begin{equation} \label{ZParameter-2} 1-z=\frac{\chi \ell^2}{2[r^2(\ell)+\xi^2]}. \end{equation} Substituting (\ref{ZParameter-2}) in (\ref{Transmission-Array}), we obtain in the strongly nonlinear regime the transmission coefficient $D(\omega)$ as \begin{equation} \frac{1}{D}=1+\left [ \frac{\chi \ell}{2k \xi^2 (PD+1)} \right ]^2. \end{equation} For various values of the parameters $\omega_q-\omega$, $\chi$ and $\Gamma$, the dependencies of $D(P)$ are presented in Fig. \ref{fig:xi1TAlpha-LP}. \begin{figure} \includegraphics[width=1\columnwidth]{Transmission-Array-NL.PNG} \caption{(color online) The EW transmission through an array of qubits in the high-power regime. The different values of the parameter $\chi \ell/(4k\xi^2)$ are chosen as $4$ (blue line) and $17$ (red line).} \label{fig:xi1TAlpha-LP} \end{figure} Thus, the main result of this Section is that, while for low-power EWs the transmission $D$ is strongly suppressed ($D \ll 1$), in the high-power limit the transmission recovers to $D \simeq 1$. The origin of this effect is the equalization of the populations of the qubit states in the limit of large EW power, which, in turn, strongly suppresses the ac response of the qubits to the applied electromagnetic field. \section{Conclusion} In conclusion, we have theoretically studied the propagation of EWs through a one-dimensional array of densely arranged superconducting qubits, i.e. coherent two-level systems, embedded in a low-dissipative transmission line (see Fig. 1). The particular near-resonant case $\omega \simeq \omega_q$ has been studied. We derived an effective nonlinear wave equation taking into account the non-equilibrium state of the qubits, i.e. Eq. (\ref{Propagationequation}). The dependencies of the transmission coefficient $D(\omega, P)$ on the frequency $\omega$ and power $P$ of the applied microwave radiation were obtained. In particular, for both a single qubit and large arrays of qubits, resonant suppression of $D(\omega)$ was found in the limit of small power $P$ and for $|\omega-\omega_q| \ll \omega_q$ (see Figs. 2 and 4). However, resonant transmission with $D \simeq 1$ was found in large arrays of qubits, in a narrow band of frequencies, for an extremely large coupling of the qubits to EWs (see Figs. 4 and 5). We note that the effect of resonant transmission of EWs through an array of qubits has been observed in Ref. \cite{shulga2018magnetically}. In the limit of high applied EW power, large transmission $D \simeq 1$ was recovered both for a single qubit and for an array of qubits. We anticipate that the strong variations of the transmission coefficient $D$ with the frequency and power of EWs can be exploited in quantum electronic devices. \begin{acknowledgments} The authors thank S. Mukhin and S. Flach for useful discussions. The authors acknowledge partial financial support from the Ministry of Science and Higher Education of the Russian Federation in the framework of the Increase Competitiveness Program of NUST 'MISiS' (K2-2017-085) and the State Program 3.3360.2017. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\subsection{PdBI observations and data reduction} The Plateau de~Bure Interferometer (PdBI) was used to image molecular lines at $1.3$ to $3$~mm, using a single pointing towards the central $50\arcsec$ of \ngc, for a total of 132\,h from 2002 to 2009. Eight high-resolution spectral windows were used in each observation, each offering a bandwidth of $160~$MHz and a channel width of $2.5~$MHz (i.e. $9$~km~s$^{-1}$ at $87$~GHz). The data, acquired from multiple observations (project codes: Q059, R02C, R09F, S02A, S08E, P069 and M032; PI: Schinnerer), were split between five spectral setups at reference frequencies of ${\sim}87$~GHz (setup~A), ${\sim}97$~GHz (setup~B), ${\sim}92$~GHz (setup~C), ${\sim}115$~GHz (setup~D) and ${\sim}230$~GHz (setup~1mm). We identified in spectral setup~A: \cch, \hco\ and \hcn; in setup~B: \chohtwo, \cstwo, \chohthree, \hcnsixt, \htwoco\ and \csthree; in setup~C: \hnc, \hcnten\ and \ntwoh; in setup~D: \co; and in setup~1mm: \cotwo\ (see \autoref{tab:Obs-properties}). The primary beam FWHM ranges from $\sim57\arcsec$ at $87$~GHz to $\sim43\arcsec$ at $115$~GHz and $\sim22\arcsec$ at $230$~GHz. The spectral window of each molecular line was centred on the redshifted frequency of the line, assuming a systemic velocity of $50$~km\,s$^{-1}$ for \ngc\ \citep{schinnerer2006molecular}. The imaged width in each spectral window is the velocity width at zero level of the \chem{CO}{10} line ($300$~km\,s$^{-1}$, matching the Full Width at Zero Intensity of the HI line seen by THINGS in \citealt{Walter2008}) plus $50$~km\,s$^{-1}$ on each side of the line centre. These data were all calibrated and imaged using \texttt{GILDAS}\footnote{\url{http://www.iram.fr/IRAMFR/GILDAS}}. For the bandpass calibration, observations of the bright quasars 3C454.3, 2145+067, 0923+392 and 3C273 were used. The phase and amplitude calibrators were either both 2037+511 and 1928+738 or one of them; these calibrators were observed every 20 minutes. Most of the observations used MWC\,349, compared with its IRAM flux model, to calibrate the absolute flux scale. We expect a flux calibration accuracy of about 5$\%$ at 3\,mm, and typical antenna aperture efficiencies of about $25$~Jy\,K$^{-1}$ at $3$~mm and $28$~Jy\,K$^{-1}$ at $2$~mm. The continuum emission has been combined with the \texttt{UV MERGE} task after excluding the line channels. The spectral lines were resampled to $10$~km\,s$^{-1}$ during the creation of the {$u{-}v$\,} tables in the \texttt{GILDAS CLIC} environment. The continuum has been subtracted in the {$u{-}v$\,} plane, using the continuum {$u{-}v$\,} table extracted from all line-free channels, with the \texttt{UV SUBTRACT} task before imaging. Imaging of the visibilities used natural weighting, where each visibility is weighted by the inverse of the noise variance, as this maximises the sensitivity. The field of view of each image is twice the primary beam, with pixel sizes of $1/4$ of the synthesised beam major-axis FWHM (ranging from $0.4\arcsec~\times~0.29\arcsec$ to $2.0\arcsec~\times~1.7\arcsec$ and $3.6\arcsec~\times~2.9\arcsec$; see \autoref{tab:PdBIonly-Characteristics}). \texttt{Clean}ing was performed with a preconstructed clean mask, produced following a signal-identification methodology similar to that used in \texttt{CPROPS} \citep{Rosolowsky2006,Rosolowsky2021,Leroy2021}, applied to the high-resolution (${\sim}1\arcsec$) \chem{CO}{10} PdBI data without short-spacing correction \citep{schinnerer2006molecular}.
Following \citet{Leroy2021} (for the PHANGS-ALMA survey), the clean-mask creation includes the following steps: (a) convolving the \chem{CO}{10} beam from $\sim1\arcsec$ to a coarser resolution of $33\arcsec$ and smoothing along the spectral axis with a boxcar function with a width of $20$~km~s$^{-1}$; (b) selecting peak pixels in each channel with a S/N of at least 4; (c) expanding each peak down to a S/N of 2; (d) binary-dilating by 4 channels along the spectral axis to account for edge effects (steps (b)--(d) are illustrated in the sketch at the end of this subsection). This produces a clean mask that covers all possible CO signal regions and ensures that the cleaning of the other molecular gas tracers is less affected by noisy, signal-free regions outside the mask. The typical rms per channel is $\approx2$~mJy\,beam$^{-1}$. Before being used in the deconvolution, the clean mask was regridded to match the astrometry of the dirty cube of each line. After imaging and deconvolution we corrected for the primary beam attenuation. We provide the observational characteristics of the \textit{PdBI only} data in \autoref{tab:PdBIonly-Characteristics} (and show the channel maps in \autoref{fig:app-hcn_cm}--\ref{app:channelmap_last}). Given that these data have different {$u{-}v$\,} coverages, we built a homogenised product by imaging only the visibilities that are within the {$u{-}v$\,} coverage of the \chem{CO}{10} PdBI data, that is, $11.5$ to $153.8$~k$\lambda$. We do this to match the spatial scales of the CO data. This was done using a \texttt{GILDAS} script in which we loop through all the visibilities and flag the ones outside the \chem{CO}{10} {$u{-}v$\,} range by assigning negative weights to them. The \texttt{UV TRIM} task was then used to discard the flagged data. Interferometric observations are insensitive to emission at low spatial frequencies due to the lack of the shortest baselines. For the short-spacing correction (SSC), we utilise IRAM 30\,m observations. We use data from the EMPIRE survey \citep[EMIR Multiline Probe of the ISM Regulating Galaxy Evolution;][]{Jimenez-Donaire2019EMPIRE}, which observed in the $3{-}4$~mm regime, in particular HCN, across the whole disc of nine nearby spiral galaxies, among them \ngc. For the purpose of the SSC we use their $33\arcsec$ resolution \hcn, \hco\ and \hnc\ maps. For the \chem{CO}{21} observations, we use data from the HERA CO-Line Extragalactic Survey \citep[HERACLES;][]{Leroy2009Heracles}. For the remaining molecules detected with the PdBI we did not find public single-dish data with significant \SN\footnote{We note that the 3\,mm emission lines of C$_2$H, N$_2$H$^{+}$, and HC$_3$N were covered by EMPIRE, but not included in the public release.}; therefore, no SSC was feasible for them. The SSC has been performed in the \texttt{MAPPING} environment of \texttt{GILDAS}. Before performing the SSC on the available data, we ensured that the 30\,m data used the same projection centre and spectral grid as the interferometric data, before applying the \texttt{UVSHORT} task. This produces pseudo-visibilities that fill the short spacings before imaging and deconvolution (see \citealt{Rodriguez2008,Pety2013} for details). We summarise the observational characteristics of the \textit{SSC + {$u{-}v$\,} trim} data in \autoref{tab:SSC-Characteristics}. In summary, within this work we make use of the following data sets: \textit{PdBI only}, which includes 14 molecular emission lines, and \textit{SSC + {$u{-}v$\,} trim}, which includes five molecular emission lines.
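For reference, the mask-construction steps (b)--(d) described above can be sketched as follows. This is a minimal illustration using \texttt{scipy}; the function name and the assumed cube layout (spectral axis first) are ours, and the sketch does not reproduce the exact \texttt{CPROPS}/PHANGS-ALMA pipeline:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def make_clean_mask(cube, rms, hi=4.0, lo=2.0, n_dilate=4):
    """Steps (b)-(d): select S/N >= hi peaks, expand them into the
    connected S/N >= lo envelope, then binary-dilate along the spectral
    (first) axis.  `cube` is assumed to be already convolved and
    spectrally smoothed as in step (a); `rms` is the per-channel noise."""
    snr = cube / rms
    core = snr >= hi
    envelope = snr >= lo
    # keep only envelope regions that contain at least one core voxel
    labels, _ = ndimage.label(envelope)
    keep = np.unique(labels[core])
    mask = np.isin(labels, keep[keep > 0])
    # dilate by n_dilate channels along the velocity axis (edge effects)
    structure = np.zeros((2 * n_dilate + 1, 1, 1), dtype=bool)
    structure[:, 0, 0] = True
    return ndimage.binary_dilation(mask, structure=structure)
\end{verbatim}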
\subsection{Integrated intensity maps} \label{sec:Sampling} As a next step, we converted our \textit{PdBI only} and \textit{SSC + {$u{-}v$\,} trim} data cubes from units of Jy\,beam$^{-1}$ to brightness temperature, $T_\mathrm{b}$, measured in Kelvin. Our observations include emission lines over a wavelength range of 1 to 3~mm, which results in various spatial resolutions. We convolved our data to a common beam size of $4\arcsec$ (corresponding to $150$~pc at a distance of $7.72$~Mpc) and sampled the integrated intensities onto a hexagonal grid. The grid points are spaced by half a beam size to approximately Nyquist-sample the maps. This approach has the advantage that the emission lines can be directly compared. To improve the signal-to-noise ratio (\SN) we applied a masking routine for the determination of the integrated intensity maps. We used the bright \chem{CO}{10} emission line as a prior for masking and produced two \SN\ cuts: a low \SN\ mask ($\SN > 2$) and a high \SN\ mask ($\SN > 4$). Subsequently, the high \SN\ mask is expanded into connected voxels in the low \SN\ mask, and the integrated intensity is determined by integrating along the velocity axis for each individual sight line, multiplied by the channel width, $\Delta v_\mathrm{chan}$, of $10$~km\,s$^{-1}$: \begin{equation} I = \sum_{n_\mathrm{chan}} T_\mathrm{b} \times \Delta v_\mathrm{chan}~. \end{equation} The uncertainty is calculated by taking the square root of the number of masked channels ($n_\mathrm{chan}$) along a line of sight, multiplied by the $1\sigma$ root mean square ($\sigma_\mathrm{rms}$) value of the noise and the channel width: \begin{equation} \sigma_{I} = \sqrt{n_\mathrm{chan}} \times \sigma_\mathrm{rms} \times \Delta v_\mathrm{chan}~. \label{eq:unc} \end{equation} We calculate $\sigma_\mathrm{rms}$ over the signal-free part of the spectrum using the \texttt{astropy} \citep{astropy:2013,astropy:2018} function \texttt{mad\_std}, which calculates the median absolute deviation and scales it by a factor of $1.4826$. This factor results from the assumption that the noise follows a Gaussian distribution. We show the integrated intensity maps of the \textit{PdBI only} data in \autoref{fig:intensities}.
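For a single sight line, the two equations above translate into the following minimal sketch (the function name is ours; a brightness-temperature spectrum and a boolean channel mask from the \chem{CO}{10} prior are assumed as inputs):
\begin{verbatim}
import numpy as np
from astropy.stats import mad_std

def integrated_intensity(spec, mask, dv=10.0):
    """Moment-0 and its uncertainty for one sight line:
    I = sum(T_b) * dv over masked channels,
    sigma_I = sqrt(n_chan) * sigma_rms * dv.
    spec: brightness-temperature spectrum [K];
    mask: boolean channel mask; dv: channel width [km/s]."""
    rms = mad_std(spec[~mask])     # noise from the signal-free channels
    n_chan = mask.sum()
    intensity = spec[mask].sum() * dv
    sigma = np.sqrt(n_chan) * rms * dv
    return intensity, sigma
\end{verbatim}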
\subsection{Ancillary data} \label{sec:Ancillary data} We include ancillary data for investigating the central regions of \ngc\ and for analysing the relationship of our detected molecules to star formation rate (SFR) tracers. We also compare \ngc\ with other galaxy centres. Therefore, we include (1) $33$~GHz continuum emission, (2) EMPIRE dense gas observations of eight additional galaxy centres (angular resolution of 33$\arcsec\approx1$~kpc), (3) high-resolution dense gas observations of M~51 ($\sim$4$\arcsec\approx166$~pc) and NGC~3627 ($\sim$2$\arcsec\approx100$~pc), and (4) $0.5{-}7.0$~keV X-ray observations of \ngc. \subsubsection{SFR tracers} \label{sec:2-SFRtracers} Tracers of the number of ionizing photons, specifically free-free radio continuum emission and hydrogen recombination lines, are often regarded as good indicators of massive star formation. The radio continuum emission at low frequencies consists of two components: (1)~thermal free-free emission directly related to the production rate of ionizing photons in \HII\ regions, and (2)~non-thermal emission arising from cosmic ray electrons or positrons which propagate through the magnetised ISM after being accelerated by supernova remnants or from AGN. Concentrating on the first component, only stars more massive than ${\sim}8$~M$_\odot$ are able to produce a measurable ionising photon flux (see e.g.\ \citealt{Murphy2012, KennicuttEvans2012}). For \ngc, recently published $33$~GHz continuum observations best match the angular resolution of our molecular data set ($\sim$2.1$\arcsec$ angular resolution; see \autoref{fig:color}g and \autoref{fig:SFR-and-mask}). These data are from Very Large Array (VLA) observations within the Star Formation in Radio Survey (SFRS;~\citealt{Murphy2018SFRS,Linden2020SFRS}). In this work, we use the thermal free-free component of the $33$~GHz continuum emission as a SFR tracer (see below). We discuss the reasons why we prefer $33$~GHz and check for consistency in \autoref{appendix:sfr}. \subsubsection{Calibration of the SFR} \label{sec:2-SFRCalibration} The calibration of a variety of SFR indicators has been described in detail by \cite{Murphy2011}. They used \texttt{Starburst99} \citep{LEitherer1999Starburst99} together with a \citet{Kroupa2001IMF} initial mass function (IMF). This type of IMF has a slope of $-1.3$ for stellar masses between $0.1$ and $0.5$~M$_{\odot}$, and a slope of $-2.3$ for stellar masses ranging between $0.5$ and $100$~M$_{\odot}$. Together with their assumptions of a continuous, constant SFR over ${\sim}100$~Myr, their \texttt{Starburst99} stellar population models yield a relation between the SFR and the production rate of ionizing photons, $Q(H^{0})$: \begin{equation} \frac{\mathrm{SFR}}{[\mathrm{M}_{\odot}~\mathrm{yr}^{-1}]} = 7.29 \times 10^{-54} \, \frac{Q(H^{0})}{[\mathrm{s}^{-1}]}~. \end{equation} For the $33$~GHz continuum map, we need to separate the (1) thermal free-free and (2) non-thermal (synchrotron) parts of the emission. The thermal emission (denoted by~$^\mathrm{T}$) scales as $S_{\nu}^\mathrm{T} \propto \nu^{-\alpha^\mathrm{T}}~$, where $\nu$ refers to the frequency in GHz. To convert the thermal flux into a SFR, we follow Eq.\,(11) from \citet{Murphy2011}: \begin{equation}\label{eq:SFR} \begin{split} \frac{\mathrm{SFR}_{\nu}^{\mathrm T}}{[\mathrm{M_{\odot}~yr^{-1}}]} = 4.6~\times~10^{-28} \, \left( \frac{T_\mathrm{e}}{[10^4~\mathrm{K}]} \right) ^{-0.45}\\ \enspace \enspace \times~\left( \frac{\nu}{[\mathrm{GHz}]}\right) ^{{\alpha^\mathrm{T}}} \, \frac{L^\mathrm{T}_{\nu}}{[\mathrm{erg\,s^{-1}\,Hz^{-1}}]}~, \end{split} \end{equation} where $T_\mathrm{e}$ is the electron temperature in units of $10^4$~K, $\nu$ refers to the frequency in GHz, and $L^\mathrm{T}_{\nu}$ is the luminosity of the free-free emission at frequency~$\nu$ in units of $\mathrm{erg\,s^{-1}\,Hz^{-1}}$. We adopt $\alpha^\mathrm{T} = 0.1$ and a thermal fraction of $f^{\mathrm{T}}_{33\,\mathrm{GHz}} = 0.62$ (as in \citealt{Murphy2011} for the nucleus of \ngc) and calculate the luminosity of the thermal free-free emission at frequency~$\nu$: \begin{equation} \label{eq:lumthermal} \frac{L^\mathrm{T}_{{\nu}}}{[\mathrm{erg\,s^{-1}\,Hz^{-1}}]} = \frac{L}{[\mathrm{erg\,s^{-1}\,Hz^{-1}}]} \times f_{33\,\mathrm{GHz}}^{\mathrm{T}}~. \end{equation} Together with $T_\mathrm{e} = 0.42$ in units of $10^4$~K (again as in \citealt{Murphy2011} for the nucleus of \ngc) and Eq.~\eqref{eq:lumthermal}, we can solve Eq.~\eqref{eq:SFR} and obtain a SFR$_\mathrm{33\,GHz}^\mathrm{T}$ map in units of ${\mathrm{M_{\odot}\,yr^{-1}}}$. We discuss the mean values within 150~pc-sized apertures (i.e.
the $4\arcsec$ working resolution) containing the NUC, SBE, or NBE regions in \autoref{sec:3-SFRindicators-compare}. In this work, we also use SFR surface densities ($\Sigma_\mathrm{SFR}$) in units of M$_{\odot}\,$yr$^{-1}\,$kpc$^{-2}$ for the scaling relations in \autoref{sec:comp-moleculesandSFR}. We define $\Sigma_\mathrm{SFR}$ as: \begin{equation} \frac{\Sigma_{\rm SFR}}{[\mathrm{M}_{\odot}~\mathrm{yr}^{-1} \mathrm{kpc}^{-2}]} = \frac{\mathrm{SFR_{33\,GHz}}}{\mathrm{[M_{\odot}~yr^{-1}]}} \left(\frac{\Omega}{\rm [kpc^{2}]} \right)^{-1}, \end{equation} where $\Omega$ is the Gaussian beam area expressed as a physical area: \begin{equation} \frac{\Omega}{[\mathrm{kpc^{2}}]} = \frac{\pi}{4\ln(2)}\, \frac{\theta_{\mathrm{bmaj}}}{\mathrm{[arcsec]}}\, \frac{\theta_{\mathrm{bmin}}}{\mathrm{[arcsec]}} \left(\frac{d}{\mathrm{[kpc]}}~\psi^{-1}\right)^2~. \end{equation} Here, $\theta_{\mathrm{bmaj}}$ and $\theta_{\mathrm{bmin}}$ are the major and minor axes of the beam in arcsec, $d$ is the distance to \ngc\ in kpc, and $\psi$ is the factor converting from radians to arcsec (i.e. ($3600\times180$)/$\pi$). \subsubsection{Molecular gas mass and depletion time} \label{sec:SigmaMolandDepltime} The molecular gas mass surface density can be estimated from the \chem{CO}{10} line emission in our data set. Given that H$_2$ is the most abundant molecule, the conversion of CO emission to molecular gas mass surface density relies on the CO-to-H$_2$ conversion factor $\alpha_\mathrm{CO}$. We adopt a fixed conversion factor $\alpha_\mathrm{CO} = 0.39$ M$_{\odot}$\,pc$^{-2}$ (K\,km\,s$^{-1}$)$^{-1}$, corrected for helium (\citealt{Sandstrom2013}; from their Table~6 for the centre of \ngc). This low central $\alpha_\mathrm{CO}$ value of \ngc\ (a factor of ${\sim}10$ lower than the canonical Milky Way value of 4.36 M$_{\odot}$\,pc$^{-2}$ (K\,km\,s$^{-1}$)$^{-1}$; see \citealt{Bolatto2013}) agrees with other studies \citep[e.g.][]{Cormier2018, Bigiel2020} and does not affect the main results of this paper. We then convert the \chem{CO}{10} integrated intensity, $I_\mathrm{CO}^{1-0}$, to the molecular gas mass surface density, $\Sigma_\mathrm{mol}$, via: \begin{equation} \label{eq:sigmamol} \frac{\Sigma_\mathrm{mol}}{\mathrm{[M_{\odot}\,pc^{-2}]}} = \alpha_\mathrm{CO} \, \frac{I_\mathrm{CO,\,150\,pc}^{1-0}}{\mathrm{[K\, km\,s^{-1}]}}~\cos{(i)}~. \end{equation} Here, $I_\mathrm{CO,\,150\,pc}^{1-0}$ stands for $I_\mathrm{CO}^{1-0}$ convolved to $150$~pc ($4\arcsec$) FWHM, and the $\cos(i)$ factor corrects for inclination. We note that the conversion from observed to physical quantities is subject to uncertainties; for example, the low-$J$ transitions of \chem{^{12}CO} are known to be optically thick, and the emission does not necessarily peak in the $1{-}0$ transition in many environments. Together, these effects may make the CO-to-H$_2$ conversion less accurate, particularly towards starburst or AGN regions, yet this is most likely still secondary compared to the uncertainty on $\alpha_\mathrm{CO}$ (e.g. due to metallicity).
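As a compact reference, the conversions above can be collected in the following sketch; the function names are ours, and the parameter defaults follow the values quoted in the text (thermal fraction of $0.62$, $T_\mathrm{e}=0.42\times10^4$~K, $4\arcsec$ beam, $d=7.72$~Mpc):
\begin{verbatim}
import numpy as np

def sfr_33ghz(lum_33, f_thermal=0.62, t_e4=0.42, nu=33.0, alpha_t=0.1):
    """SFR in M_sun/yr from the thermal part of the 33 GHz continuum,
    following Eq. (11) of Murphy et al. (2011); lum_33 in erg/s/Hz."""
    lum_thermal = lum_33 * f_thermal
    return 4.6e-28 * t_e4 ** (-0.45) * nu ** alpha_t * lum_thermal

def sigma_sfr(sfr, bmaj=4.0, bmin=4.0, d_kpc=7.72e3):
    """SFR surface density in M_sun/yr/kpc^2: divide by the Gaussian
    beam area Omega in kpc^2 (bmaj, bmin in arcsec, d_kpc in kpc)."""
    psi = 3600.0 * 180.0 / np.pi            # rad -> arcsec
    omega = np.pi * bmaj * bmin * (d_kpc / psi) ** 2 / (4.0 * np.log(2.0))
    return sfr / omega

def sigma_mol(i_co, alpha_co=0.39, incl_deg=0.0):
    """Molecular gas surface density in M_sun/pc^2 from the CO(1-0)
    integrated intensity i_co in K km/s, with inclination correction."""
    return alpha_co * i_co * np.cos(np.radians(incl_deg))
\end{verbatim}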
The ratio of the two surface densities, $\Sigma_\mathrm{mol}/$\sigsfr, is the molecular gas depletion time, the time it would take present-day star formation to deplete the current supply of molecular gas: \begin{equation}\label{eq:depl} \centering \frac{\tau_\mathrm{depl,\,150\,pc}^\mathrm{mol}}{[\mathrm{yr}]} = \frac{\Sigma_\mathrm{mol}}{\mathrm{[M_{\odot}\,pc^{-2}]}} \left( \frac{\Sigma_\mathrm{SFR}}{{[\mathrm{M_{\odot}~yr^{-1}\,kpc^{-2}}]}} \times \frac{\gamma}{\mathrm{[kpc^{2}/pc^{2}]}} \right)^{-1}~. \end{equation} Here, $\gamma$ stands for the conversion factor from kpc$^{-2}$ to pc$^{-2}$. \autoref{eq:depl} implicitly assumes that all the molecular gas traced by CO will turn into star formation fuel, which is an overestimate. We use CO to define \tdepl\ because we later compare to literature values that used the same method to calculate $\tau_\mathrm{depl}^\mathrm{mol}$; this does not affect the main results of this work. \subsubsection{EMPIRE dense gas data, high-resolution M~51 and NGC~3627 data} In \autoref{sec:Discussion} we discuss line-ratio diagnostic plots of the dense gas tracers HCN, HCO$^{+}$ and HNC. To compare \ngc\ with other galaxy centres, we use available observations of eight additional galaxy centres (see legend in \autoref{fig:XDR-PDR}d) covered by the EMPIRE survey \citep{Jimenez-Donaire2019EMPIRE}. Those data products have a common angular resolution of 33$\arcsec$ ($\approx1$~kpc). For the analysis in \autoref{Disc:HCO/HCN} we also include high-resolution observations of the dense gas tracers for (i) M~51 and (ii) NGC~3627. The M~51 short-spacing-corrected observations have an angular resolution of $\sim$4$\arcsec$ ($\approx166$~pc); their reduction is described in \citet{Querejeta2016}. We used the same technique as for the \ngc\ observations, described in \autoref{sec:Sampling}, to sample the data on a hexagonal grid and produce integrated intensity maps at a common angular resolution of 4$\arcsec$. We have also done this for the NGC~3627 observations, which have an angular resolution of $\sim$2$\arcsec$ ($\approx100$~pc); their short-spacing correction and reduction are described in \citet{Beslic2021}. \subsubsection{X-ray} We also compare our observations to \textit{Chandra} \mbox{ACIS-S} observations (ObsIDs 1054 and 13435) from 2001 (PI: S.~Holt; \citealt{Holt2003}) and 2012 (PI: C.~Kochanek), totalling $79$~ks. We reduced the data using the \textit{Chandra} Interactive Analysis of Observations (\texttt{ciao}) Version 4.9 and produced exposure-corrected images using the \texttt{ciao} command \textit{merge\_obs}. Point sources were identified using \textit{wavdetect} on the merged, broad-band ($0.5{-}7.0$~keV) image and were removed with \textit{dmfilth} to create the X-ray image shown in \autoref{fig:XDR-PDR}. We also show a version of the diffuse, hot X-ray gas that has been smoothed with a Gaussian kernel (FWHM of 3) in \autoref{fig:color}\,(m). \subsection{Multi-wavelength gallery of the central region of NGC~6946} \label{sec:story-about-central-region} \autoref{fig:color} shows a compilation of the various observations towards the inner $50\arcsec ~\approx1.86$~kpc of \ngc\ (see \autoref{tab:fig1-ref} for details on the observations). The centre appears to have several branches that connect to the outside environment.
The two more pronounced spiral-like structures, running north and south (extending $50\arcsec$), are most apparent in \chem{^{12}CO}{10} (published in \citealt{schinnerer2006molecular}) (c) and, to some extent, also in the infrared emission (h)--(j). A third spiral-like structure may be surmised from these observations. Preceding studies even assumed four spiral features \citep{Regan1995}, which are not apparent in this \chem{CO} map. The sketch on the top right shows the two prominent spiral-like structures and the third presumable one as blue coloured arcs. The red coloured ellipse in the sketch (extending $20\arcsec \approx750$~pc) denotes the radio continuum emission (e)--(g). In the $70~\mu$m, $24~\mu$m, $8~\mu$m, Pa$\beta$, H$\alpha$ and X-ray emission (h)--(m), we likewise find emission in this region. In maps (e)--(m), however, no substructures are visible within the red ellipse due to high saturation or limited angular resolution. In contrast, \chem{CO}{21} reveals the nuclear region and two additional features to the north and south, illustrated as green ellipses in the sketch (central ${\sim}10\arcsec ~\approx372$~pc). \cite{schinnerer2006molecular} showed with dynamical models that the structures seen in \chem{CO}{21} can be explained by an inner bar (first observed and proposed via FIR by \citealt{Elmegreen1998}). The two inner bar ends (NBE and SBE) are connected to the nuclear region by straight gas lanes running along a position angle of $\mathrm{P.A.} {\sim}40\degr$. Those regions are also bright in \chem{HCN}, \chem{HCO^+} and \chem{HNC} emission (see \autoref{sec:Results}). The four additional red ellipses in the sketch, most visible in the continuum emission (e)--(f), were identified by \cite{Linden2020SFRS}. The southern two are associated with star formation (SF) regions, and the northern two are suggested to be anomalous microwave emission (AME) candidates \citep{Linden2020SFRS}. The exact mechanism causing AMEs is not entirely understood, but the most promising explanation is that they arise from small rotating dust grains in the ISM (see the review on AMEs by \citealt{Dickinson2018AMEReview}). One of the techniques to identify AMEs is to investigate the $33~\mathrm{GHz}/\ha$ flux ratio \citep{Murphy2018SFRS}. Larger ratios of $33$~GHz flux to \ha\ line flux would arise from an excess of non-thermal radio emission (see fig.~4 in \citealt{Murphy2018SFRS}; ratios of ${\sim}10^9$). Using apertures of $7\arcsec$ in diameter for the two AME candidates, we find $33~\mathrm{GHz}/\ha$ ratios of ${\sim}10^9$, as expected for AME. This supports their identification as AME candidates. Within the southern SF region, at the end of the \chem{CO}{10} southern spiral structure, \citet{Gorski2018} found a water maser (\chem{H_2O}\,($6_{16}{-}5_{23}$) at $22.235$~GHz), and within the southern inner bar end two methanol masers (\chem{CH_3OH}\,($4_{-1}{-}3_0$) at $36.169$~GHz), marked as blue and orange stars in the sketch. The water maser is associated with one of the SF regions identified by \citet{Linden2020SFRS}. Water masers are in general variable on timescales of a few weeks and can indicate YSOs or AGB stars \citep{Palagi1993}. \citet{Billington2020} demonstrated the relationship between dense gas clumps and water masers within the Milky Way. The location of the water maser matches well with the SF region seen in the \ha\ and $33$~GHz maps.
\subsection{The substructures in the inner kiloparsec of NGC 6946}\label{sec:Mol_struc} In every map of \autoref{fig:intensities} we detect significant emission in the inner $10\arcsec \approx 375~\mathrm{pc}$. We note that we detect ethynyl (\cch) and cyanoacetylene (\hcnten\ and \hcnsixt) for the first time in the inner kiloparsec of \ngc. All other molecules have been detected previously, mostly at lower resolution (e.g. ${\sim}30\arcsec$ \hnc\ and \ntwoh: \citealt{Jimenez-Donaire2019EMPIRE}; $25\arcsec$ and $17\arcsec$ \chohtwo, \chohthree, \htwoco: \citealt{Kelly2015MappingCSinNGC6946}; $8\arcsec$ \hco: \citealt{Levine2008}; $1\arcsec$ \hcn: \citealt{Schinnerer2007}). \textbf{The spiral structures:} We see several branches that appear to connect to the outside environment. The two more prominent spiral-like structures are best visible in the integrated intensity map of \co, which traces the bulk molecular gas. For comparison with the other molecular species, we plot red contours of CO\,(1-0) at \SN\ levels of $30$, $60$ and $90$, showing the spirals to the north and south, in all of the 14~maps. The white \SN\ contours of \hcn, \hco\ and \hnc\ overlap with the northern spiral structure (labelled `s1' in the sketch in \autoref{fig:color}). The southern spiral (s2) is visible in \hcn, \hco, \hnc\ and \ntwoh. We denote a third spiral (s3), which is present in \hcn, \hco, \hnc, \ntwoh, \cstwo\ and \chohtwo. \textbf{The inner bar and nuclear region:} From the native-resolution \cotwo\ data, already used for the sketch in \autoref{fig:color}, we know the locations of the inner bar ends (NBE and SBE) and of the nuclear region (NUC), and we can now investigate them with our other molecular emission lines. Considering only the integrated intensities with $\mathrm{S/N} > 5$ in each of the maps in \autoref{fig:intensities}, we see that the small-scale distributions of the molecular species vary between these environments and that they do not necessarily peak at the same locations (see \autoref{tab:ticks}). While all these regions are evident in the strongest lines, \co, \cotwo, \hcn, \hco\ and \hnc, whose integrated intensities peak in the NUC, the situation is different for molecular species with higher effective critical densities.\footnote{The effective critical densities, $n_\mathrm{eff}$, are defined in \autoref{tab:Obs-properties}.} In particular, \chem{CS}{32} and \cch\ show emission concentrated within the inner $5\arcsec$ in radius. In contrast, the integrated intensities of \hcnten, \ntwoh, \chohtwo\ and \chohthree\ do not peak at the very centre; their emission peaks lie in the southern inner bar end. Interestingly, the CS maps do not show a peak in the SBE, even though their $n_\mathrm{eff}$ is higher than that of \chem{N_2H^+}, which does peak in the SBE. Of course, the different distributions of the molecular emission need not be a direct result of density differences alone; temperature (e.g. driven by the embedded star formation) could also drive both excitation and abundance variations. \autoref{tab:ticks} compares the spatial distributions of the various molecular species. \subsection{Molecular line profiles towards the nuclear region and what they indicate}\label{sec:Mol_indicating} \autoref{fig:central-sightline-spectra} shows the spectra of all the detected molecules for the central sight line ($4\arcsec \approx 150$~pc aperture). For visualisation purposes, each spectrum is normalised to its maximum value.
We investigate what the molecular species in our data set reveal about the physical state of the molecular gas at the centre of \ngc. CO traces the bulk molecular gas content, and its line brightness is up to a factor of $14$ higher than that of the other molecular lines in this work (see \autoref{tab:PdBIonly-Characteristics}). Relative to CO, molecules such as HCN trace denser gas, as they are excited at effective densities of $n \gtrsim 10^{3}$~cm$^{-3}$. Moving up in $n_\mathrm{eff}$, HNC, N$_2$H$^+$, CS and HC$_3$N potentially trace even denser molecular gas, with HC$_3$N tracing the densest molecular gas in our data set. The spectrum in \autoref{fig:central-sightline-spectra}, however, shows that HC$_3$N has the lowest \SN\ among all our molecular emission lines in the central $150$~pc (\SN\ of~$6$). As seen in \autoref{tab:ticks}, the maximum integrated intensities of the HC$_3$N transitions are instead located in the SBE, where we find \SN\ values of~$11$ and~$36$. Among our molecular species, there are some that reveal more about the chemistry of the gas. The formation path of ethynyl (C$_2$H) is favoured in PDRs by the reaction of C$^+$ with small hydrocarbons and, additionally, through photodissociation of C$_2$H$_2$ \citep[][and references therein]{Meier2005CenterofIC342}. Enhanced C$_2$H emission in PDRs associated with massive hot cores therefore indicates hidden star formation \citep{Martin2015, Harada2018}. Recently, \citet{Holdship2021} showed that, in the nucleus of NGC\,253, high abundances of C$_2$H can be caused by a high cosmic ray ionization rate. We detect \cch\ towards the nuclear region of \ngc; similar to CO and the dense gas tracers, the line shows a broad (FWHM${\sim}200$~km\,s$^{-1}$), double-peaked profile. Strong methanol (CH$_3$OH) emission is considered a tracer of shocks \citep[e.g.][]{Saito2017}. The reason is that the gas-phase formation of CH$_3$OH is not efficient enough to produce large amounts of it \citep{Lee1996}. Instead, intense CH$_3$OH emission is believed to arise from successive hydrogenation of CO on dust grain surfaces under low-temperature conditions \citep{Watanabe2003}. After production on dust, energetic heating mechanisms such as shocks \citep[e.g.][]{Viti2011, James2020} are needed to heat the dust and sublimate CH$_3$OH into the gas phase. However, methanol emission can also be enhanced in non-shocked environments, for example near massive stars or sources of cosmic rays or $X$-rays that heat the dust to ${\sim}100$~K and allow methanol to evaporate into the gas phase. In our data set, both methanol transitions are brighter than the faintest dense gas tracers, N$_2$H$^+$ and HC$_3$N. Also, both methanol tracers have their maximum integrated intensity peaks in the SBE, where their ratios with CO ($I_{\chem{CH_3OH}} / I_{\chem{CO}{21}}$) are higher. Methanol is also known to be a good kinetic temperature probe \citep[e.g.][]{Beuther2005}. Furthermore, para-H$_2$CO transitions can also be used as a temperature indicator that is sensitive to warmer ($T > 20$~K) and denser ($n {\sim}10^{4-5}$~cm$^{-3}$) gas \citep[e.g.][]{Ginsburg2016, Mangum2019}. H$_2$CO can form in the gas phase as well as on the surfaces of dust grains (e.g. \citealt{TerwisschavanScheltinga2021}). In our data set, \htwoco\ is one of the faintest lines, with $\SN\ {\sim}7$ towards the nuclear region.
We note, however, that no molecule is a unique tracer of a particular process or set of physical conditions in a galaxy. To highlight this point: in the Milky Way, \cch\ is a tracer of PDRs, yet in NGC~1068 and NGC~253 it traces a completely different type of gas (\citealt{Garcia-Burillo2017,Holdship2021}). In NGC~1068 it appears to trace a turbulent extended interface between outflows and the ISM, while in NGC~253 it appears to trace dense gas that is subject to enhanced cosmic-ray ionization rates. Modelling can shed some light on the interpretation and will be the subject of future work. \cite{schinnerer2006molecular} found that the profiles of \co\ and \cotwo\ are double peaked in \ngc. In \autoref{fig:central-sightline-spectra} we also notice double-peaked profiles in the spectra of \hcn, \hco, \hnc, \cstwo, \csthree, \hcnten, \ntwoh, \cch, \chohtwo\ and \chohthree. These could be due to galactic orbital motions or in/outflows in the central $4\arcsec \approx 150$~pc. A more detailed analysis of the kinematics and dynamics of these spectral features is beyond the scope of this paper and is planned for a future publication. On the right-hand side of \autoref{fig:central-sightline-spectra} we denote the peak \SN\ and the condition that each molecule can potentially indicate. \subsection{Ratios of integrated intensities} \label{Sec:Res-LineRatios} \input{Figures/01caption/figure5} In \autoref{fig:RatioMaps-uvtrimmed-SSC} we show line ratio maps for our \textit{SSC + {$u{-}v$\,} trim data} set (see \autoref{tab:SSC-Characteristics}). We specify the line ratios such that the generally brighter line is in the denominator and the overall weaker line is in the numerator. The line ratio maps were calculated by only taking integrated intensities with $\SN > 5$ for the fainter dense gas tracers (DGTs: \chem{HCN}, \chem{HCO^+} and \chem{HNC}) and $\SN > 15$ for the bulk molecular gas tracers (CO); non-detections were discarded from the line ratio maps: \begin{equation} \mathrm{Ratio\,map} = \frac{I_\mathrm{line_1} \left[ I_\mathrm{line_1}/\sigma_\mathrm{line_1} > \epsilon \right]}{I_\mathrm{line_2} \left[ I_\mathrm{line_2}/\sigma_\mathrm{line_2} > \epsilon \right]} \ \begin{cases} \epsilon = 5 & \text{for DGTs}\,,\\ \epsilon = 15 & \text{for CO}\,. \label{eq:Line ratios} \end{cases} \end{equation} In the next section we include non-detections in our analysis: where a line is not significantly detected, we replace its value with the upper limit ($2\sigma$ in \autoref{eq:unc}) and include the propagated uncertainty ($\sigma_\mathrm{prop}$), which we derive as \begin{equation} \sigma_\mathrm{prop} = \frac{|I_\mathrm{line_1}|}{|I_\mathrm{line_2}|} \sqrt{{\left(\frac{\sigma_\mathrm{line_1}}{I_\mathrm{line_1}} \right)}^2 + {\left(\frac{\sigma_\mathrm{line_2}}{I_\mathrm{line_2}}\right)}^2}~. \end{equation} The uncertainties are expressed on a logarithmic scale of base~10 as \begin{equation} \sigma_\mathrm{log} = \frac{1}{\ln(10)} \times \left( \sigma_\mathrm{prop} / I_\mathrm{ratio} \right) \approx 0.434 \times \left( \sigma_\mathrm{prop} / I_\mathrm{ratio} \right)~. \end{equation} In \autoref{fig:RatioMaps-uvtrimmed-SSC}, the line ratio maps for the central $20\arcsec \approx 750$~pc show differences in the line ratios between the environments NUC, SBE and NBE. In the following we report straight mean values over these regions, applying the mask in \autoref{fig:SFR-and-mask}.
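A minimal numpy sketch of the masking and error propagation defined by the three expressions above (the input maps below are placeholders; \texttt{eps1} and \texttt{eps2} are the \SN\ cuts for the fainter and brighter line, respectively):
\begin{verbatim}
import numpy as np

def ratio_map(I1, sig1, I2, sig2, eps1=5.0, eps2=15.0):
    """Line-ratio map with S/N masking and log10 error propagation."""
    valid = (I1 / sig1 > eps1) & (I2 / sig2 > eps2)
    ratio = np.where(valid, I1 / I2, np.nan)
    sig_prop = np.abs(I1 / I2) * np.sqrt((sig1 / I1)**2 + (sig2 / I2)**2)
    sig_log = sig_prop / (np.log(10.0) * ratio)  # = 0.434 * sig_prop / ratio
    return ratio, sig_log                        # NaN where masked

# Placeholder maps: a faint dense gas tracer over a bright CO line.
I_dgt, s_dgt = np.random.rand(64, 64), np.full((64, 64), 0.05)
I_co,  s_co  = 10.0 * np.random.rand(64, 64), np.full((64, 64), 0.1)
r, r_err = ratio_map(I_dgt, s_dgt, I_co, s_co)
\end{verbatim}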
The ratios $\chem{HCO^+}/\chem{HCN}$ and $\chem{HNC}/\chem{HCN}$ both show values below unity in all three regions, and indeed over the entire field of view. In both cases, the ratio to HCN is greater in the NUC than in the SBE or NBE. The NUC shows values of $0.81\pm0.01$ and $0.37\pm0.01$ for $\chem{HCO^+}/\chem{HCN}$ and $\chem{HNC}/\chem{HCN}$, respectively. Ratios to CO are higher in the SBE than in the NBE, best visible in the case of $\chem{HNC}/\chem{CO}{21}$. We discuss the implications of these line ratios in \autoref{Disc:HNC/HCN}, \ref{Disc:HCO/HCN} and \ref{Disc:lineratiopattern}. \subsection{The relationship between molecular lines and SFR surface density} \label{sec:comp-moleculesandSFR} \input{Figures/01caption/figure6} \input{Table/table6} In this section we investigate how our molecular species correlate with \sigsfr\ and how the dense gas fraction traced by $\chem{HCN}/\chem{CO}$ (we also investigate $\chem{HCO^+}/\chem{CO}$ and $\chem{HNC}/\chem{CO}$) responds to the integrated intensity of CO (an indicator of the mean volume density, see below). We study how these quantities relate to conditions in the centre of \ngc. \autoref{fig:Corr} shows the relationships we investigate. We characterise the scaling relations, including upper limits and measurement errors, using the hierarchical Bayesian method described in \cite{Kelly2007}. This approach is available as a python package: \texttt{linmix}\footnote{\url{https://linmix.readthedocs.io/en/latest/index.html}}. It performs a linear regression of $y$ on $x$, allowing for measurement errors in both variables and accounting for non-detections (upper limits) in~$y$. The regression assumes a linear relationship of the form \begin{equation} \log(y) = \beta \times \log(x) + \alpha~, \end{equation} where $\beta$ is the slope and $\alpha$ is the $y$-intercept\footnote{We specify the covariance between the measurement errors in $x$ and $y$ (\texttt{xycov} parameter) and set $K=2$.}. For each fitted relationship we compute Pearson's correlation coefficient $\rho$ of the data set; $3\sigma$ confidence intervals are estimated via Markov chain Monte Carlo (MCMC). For a detailed description we refer to \citet{Kelly2007}. We provide all the correlations in \autoref{tab:Corr}.\\ \\ Dense molecular gas traced by, e.g., HCN emission has been observed to correlate with the SFR (e.g. \citealt{gao2004hcn,lada2010star,lada2012star}). \citet{Kauffmann2017} showed that HCN traces more extended gas and can therefore affect the observed SF trends in galaxies (see also e.g. \citealp{Pety2017,Barnes2020}). \citet{Krumholz2007} showed that star formation correlates with any line whose critical density is comparable to the median molecular cloud density. Therefore, we expect to see positive correlations between \sigsfr\ and the surface density of \chem{HCN}, \chem{HCO^+} and \chem{HNC}. Our observations confirm this, with the additional characteristic that the molecular line with the lowest effective critical density ($n_\mathrm{eff}$) shows the strongest correlation ($n_\mathrm{eff}$ ordered as $\chem{HCO^+} < \chem{HNC} < \chem{HCN}$; see \autoref{tab:Obs-properties}). HCO$^+$ shows the strongest correlation ($\rho~{\sim}0.95$; see \autoref{tab:Corr} for the uncertainties of $\rho$), with a slope of $\beta = 1.55$ and a small intrinsic scatter: \begin{equation} \log(\sigsfr) = 1.55\pm0.11 \times \log(I_{\chem{HCO^+}}) - 1.91\pm0.15~.
\end{equation} We note that our slopes are all higher ($\beta>1$) than those found at global scales ($\beta<1$; e.g. \citealt{gao2004hcn,Krumholz2007}). We speculate that this is due to two contributing factors. Firstly, in this work we focus on the centre of \ngc\ (central $20\arcsec\approx745$~pc) and not on the whole galaxy disc. Hence, the steeper slopes could be due to the limited dynamic range in environmental conditions included in the analysis, that is, we focus on the densest and most actively star-forming gas within the galaxy. Secondly, this could be a result of the resolution of our PdBI observations. At around $150$~pc, we are close to resolving individual discrete star-forming and/or quiescent regions, which could result in a different slope compared to lower-resolution studies that average over small-scale conditions within each sample point; this is somewhat akin to following a branch of the tuning fork within the recent `uncertainty principle for star formation' framework \citep{kruijssen18,kruijssen19a,chevance20,Kim2021}. The dense gas fraction, \fdense\footnote{In this work, we take the \chem{CO}{21} data for our \fdense\ estimates because they have a higher S/N and better quality than \chem{CO}{10}.}, usually traced by the integrated intensity of \chem{HCN}{10} over \chem{CO}, has been observed to increase towards the centres of galaxies (e.g. \citealt{usero2015variations,bigiel2016empire,gallagher2018dense,Jimenez-Donaire2019EMPIRE,Jiang2020,Beslic2021}). In turbulent cloud models, increasing the mean volume density of a molecular cloud shifts the gas density distribution to higher densities (e.g. \citealt{Federrath2013}). In addition, the velocity dispersion (or Mach number) widens the density PDF, which places a larger fraction of the mass at higher gas densities. Combined, these effects increase the fraction of gas above a fixed density (e.g. the effective critical density of HCN), and consequently these models predict a positive correlation between the volume density and \fdense\ (see \citealt{Padoan2014} for a review). Such trends are particularly interesting to study in galaxy centre environments thanks to their higher average densities and broader (cloud-scale) line widths compared to typical disc star-forming regions (e.g. \citealp{Henshaw2016,Krieger2020}). In the following we use the integrated intensity of \chem{CO}{21} as an indicator of the mean volume density (\citealt{Leroy2016,Sun2018,gallagher2018spectroscopic}; see also \autoref{Disc:lineratiopattern}) and explore the dense gas fraction using \chem{HCN}, \chem{HCO^+} and \chem{HNC}. All three \fdense\ versus \chem{CO}{21} fits in \autoref{fig:Corr} present sub-linear power-law indices (i.e. $\beta < 1.0$). We find that the correlations are weakest for the dense gas fraction using HCN ($\rho~{\sim}0.39$) and strongest for $\chem{HNC}/\chem{CO}{21}$ ($\rho~{\sim}0.66$): \begin{equation} \log \left( \frac{I_{\chem{HNC}}}{I_{\chem{CO}{21}}} \right) = 0.76\pm0.13 \times \log \left( I_{\chem{CO}{21}} \right) - 1.63\pm0.16~. \end{equation} The particular order of the slopes and correlation coefficients does not follow the order of $n_\mathrm{eff}$ as given in \citet{Shirley2015}. Instead, they follow the order $\beta_\mathrm{HNC} > \beta_\mathrm{HCO^+} > \beta_\mathrm{HCN}$ (see second row in \autoref{fig:Corr}).
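For reproducibility, the fitting procedure can be sketched with the \texttt{linmix} package referenced above; the data below are synthetic stand-ins for our measured intensities, and \texttt{delta} flags detections versus upper limits in $y$ (when the errors in $x$ and $y$ are correlated, the \texttt{xycov} argument can be supplied in addition):
\begin{verbatim}
import numpy as np
import linmix  # python implementation of the Kelly (2007) method

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.5, 40)                 # log10 I_HCO+ (synthetic)
y = 1.55 * x - 1.91 + rng.normal(0, 0.1, 40)  # log10 Sigma_SFR (synthetic)
xsig = np.full(40, 0.05)                      # log-scale measurement errors
ysig = np.full(40, 0.08)
delta = np.ones(40, dtype=int)                # 1 = detection, 0 = upper limit

lm = linmix.LinMix(x, y, xsig=xsig, ysig=ysig, delta=delta, K=2)
lm.run_mcmc(silent=True)
print(np.median(lm.chain['beta']),            # slope: expect ~1.55
      np.median(lm.chain['alpha']))           # intercept: expect ~-1.91
\end{verbatim}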
Possible explanations for the slope ordering noted above are anomalous excitation of one of the species, for example IR pumping of the HCN and HNC levels, and/or peculiar filling factors. The star formation efficiency of dense gas (\sfedense) -- the ratio of the SFR to the integrated intensity of \chem{HCN}{10} -- has been observed to decrease towards galaxy centres (same studies as above). This is because the critical overdensity (relative to the mean density) is higher due to the higher Mach number, and also because the absolute critical density (i.e. that obtained from the above line ratios) is higher due to the higher (1) mean density and (2) Mach number (in the context of the CMZ of the Milky Way, see e.g. \citealt{Kruijssen2014a}). Hence, as the mean density of a molecular cloud approaches the $n_\mathrm{eff}$ of the molecular line, the line's intensity will increasingly trace the bulk mass of the cloud and not exclusively the overdense, star-forming gas, in effect reducing the apparent SFE. From this we would expect \sfedense\ to drop with $I_{\chem{CO}{21}}$ ($\propto$~volume density), as has been observed over the disc regions of galaxies (e.g. \citealt{gallagher2018dense,Jimenez-Donaire2019EMPIRE}). We use the ratio of \sigsfr\ to the integrated intensities of the dense gas tracers as \sfedense, and find a different picture of \sfedense\ with galactocentric radius: it increases towards the NUC, showing higher efficiencies in the NUC than in the bar ends (see third row in \autoref{fig:Corr}). We find a moderate\footnote{Here we refer to a moderate correlation if $\rho$ lies in the range of $0.5{-}0.7$.}, approximately linear correlation between $\sigsfr / I_{\chem{HCN}}$ and $I_{\chem{CO}{21}}$. The other two \sfedense\ measurements show weak sub-linear relationships, with $\rho~{\sim}0.46$ and $\rho~{\sim}0.19$ for the ratios using \chem{HCO^+} and \chem{HNC}, respectively. We find that \sfedense\ increases with increasing $I_{\chem{CO}{21}}$, contrary to the trends found by \citet{gallagher2018dense, Jimenez-Donaire2019EMPIRE}. The reason could be the special environment of \ngc\ (inner bar) and/or the different SFR tracers used (we use the free-free emission of the 33~GHz continuum, compared to H$\alpha$+$24\,\mu$m and total infrared in those studies). Moreover, the above studies either could not resolve the inner bar of \ngc\ (\citealt{Jimenez-Donaire2019EMPIRE}, angular resolution of $33\arcsec\approx1$~kpc) or did not include \ngc\ in their nearby disc galaxy sample \citep{gallagher2018dense}. However, we notice, as expected, that stronger correlations in \fdense\ correspond to weaker correlations in \sfedense\ and vice versa (e.g. $I_{\chem{HNC}} / I_{\chem{CO}{21}}$ with $\beta = 0.76\pm0.13$ and $\sigsfr / I_{\chem{HNC}}$ with $\beta = 0.51\pm0.24$; see \autoref{tab:Corr}). \subsection{\texorpdfstring{$\bm{R_{21}}$}{R21} variations in the nuclear region and inner bar ends} \label{sec:R21} The \chem{CO}{21}-to-\chem{CO}{10} line ratio, $R_{21}$, is widely used to convert \chem{CO}{21} emission to \chem{CO}{10}, which can then be further converted, via the $\alpha_\mathrm{CO}$ conversion factor, to the molecular gas mass. It has been shown that higher $R_{21}$ values are expected within the central kpc of individual galaxies \citep{Leroy2009Heracles, leroy2013, koda2020}. The same trend has been found by \citet{denBrok21} and \citet{yajima2021} in galaxy samples; \ngc\ is included in both of the aforementioned studies.
They find, within the central kpc region, $R_{21}~{\sim}0.7$ (at an angular resolution of $33\arcsec~{\sim}1$~kpc; \citealt{denBrok21}, using the EMPIRE sample) and $R_{21}~{\sim}1.1$ (angular resolution of $17\arcsec~{\sim}0.6$~kpc; \citealt{yajima2021}). However, these studies are not able to resolve variations between different sub-features within the centre. From a physical point of view, $R_{21}$ should depend on the temperature and density of the gas, as well as on the optical depths of the lines (e.g. \citealt{Penaloza2018}). Therefore, understanding how $R_{21}$ varies in response to the local environment also has the prospect of providing information about the physical conditions of the molecular gas. With our higher-resolution ($4\arcsec \approx 150$~pc) observations we are able to resolve smaller-scale structures and thus have the opportunity to investigate variations of $R_{21}$ within the centre. In \autoref{fig:R21} we show $R_{21}$ against galactocentric radius, highlighting the NUC, NBE and SBE in different colours. Along the galactocentric radius we see higher $R_{21}$ values in the centre, followed by an initial steady decrease and then a gradual increase. We find an average $R_{21}$ of $0.65\pm0.002$ within the NUC and even lower values at the southern end of the bar, $R_{21}=0.62\pm0.03$ (see \autoref{tab:regions-charac}). Interestingly, however, we observe higher $R_{21}$ values towards the northern end of the bar ($R_{21}=0.70\pm0.03$) compared to the NUC (a factor of~${\sim}1.08$), despite the higher \sigsfr\ in the SBE compared to the NBE (see \autoref{sec:3-SFRindicators-compare}). The reason for a higher $R_{21}$ value could be related to denser and/or warmer gas. We colour-code the points in \autoref{fig:R21} by their $\chem{HCN}{10}/\chem{CO}{21}$ line ratio ($\propto \fdense$) and find higher \fdense\ towards the NUC. If denser gas were driving the elevated $R_{21}$, we would also expect to observe higher HCN-to-CO ratios towards the NBE. However, we do not see an increase in the denser gas in the NBE, suggesting a different physical driver for the increased $R_{21}$ there. We analyse the molecular gas density of the three regions in more detail in \autoref{Disc:lineratiopattern}. In summary, we find higher $R_{21}$ values towards one of the inner bar ends of \ngc\ compared to the nuclear region. When substructures such as small-scale bar ends are observed and analysed, $R_{21}$ may thus deviate slightly from the kpc-scale $R_{21}$ values in the literature. \input{Figures/01caption/figure7} \subsection{\texorpdfstring{${\mathrm{HNC}/\mathrm{HCN}}$}{HNC/HCN}: sensitive to kinetic temperatures in extragalactic environments?} \label{Disc:HNC/HCN} In interstellar space, isomers do not necessarily share similar chemical or physical properties. The isomers HNC (hydrogen isocyanide) and HCN (hydrogen cyanide) are both abundant in cold clouds and exhibit an abundance ratio of unity at low temperatures, but at temperatures exceeding ${\sim}30$~K, HNC begins to be converted to HCN by reactions with atomic~H \citep{Schilke1992,Graninger2014}. A major study of this ratio, focused on the Galactic SF region Orion Molecular Cloud 1 (OMC-1), was carried out by \citet{Schilke1992}. They found that the {$\chem{HNC}/\chem{HCN}$}\ ratio is ${\sim}1/80$ in the direction of Orion Kleinmann-Low (Orion-KL) but increases to $1/5$ in regions with lower temperatures near Orion-KL. In the coldest OMC-1 regions, the ratio rises further to~1.
The temperature dependence suggests that the ratio must be kinetically controlled \citep{Herbst2000}, so that the integrated intensity line ratio {$\chem{HNC}/\chem{HCN}$}\ should decrease at higher temperatures \citep{Pety2017}. Whether this ratio is sensitive to temperatures in extragalactic sources is uncertain (e.g. \citealt{Aalto2002, Meier2005CenterofIC342, 2012MeierChemistryinMaffei2}). For example, using the {$\chem{HCN}/\chem{HNC}$}\ ratio and the empirical relation of \cite{Hirota1998}, \cite{Meier2005CenterofIC342} found a kinetic temperature for the centre of IC~342 that is a factor of~2 lower than both the dust temperature and the gas kinetic temperature derived from \chem{CO}{21} and ammonia (\chem{NH_3}). They suggested that there might be an abundant dense component in IC~342 that is significantly cooler and more uniform than the more diffuse CO, but this was not consistent with the similar distributions of CO, HNC and HCN, unless such a dense component directly follows the diffuse gas. Alternatively, they concluded that this line ratio might not capture the temperature in the nuclear region of IC~342. Similarly, \cite{Aalto2002} found overluminous HNC in many of the most extreme (and presumably warm) (U)LIRGs and suggested that its bright emission cannot be explained by the cool temperatures that the ratio would demand. Recently, however, this line ratio has come back into focus. \cite{Hacar2020} demonstrated the strong sensitivity of the {$\chem{HNC}/\chem{HCN}$}\ ratio to the gas kinetic temperature, $T_\mathrm{k}$, again towards the Orion star-forming region. They compared the line ratio with \chem{NH_3} observations \citep{Friesen2017} and derived $T_\mathrm{k}$ from the lower inversion transition ratio $\chem{NH_3~(1,1)} / \chem{NH_3~(2,2)}$. In particular, they found that $T_\mathrm{k}$ can be described by a two-part linear function; we show only Eq.~(3) of \citet{Hacar2020}, valid in the low-ratio regime: \begin{equation} \label{eq:hcn-hnc} T_\mathrm{k} ~ [\mathrm{K}] = 10 \times \frac{I_{\chem{HCN}}}{I_{\chem{HNC}}} \quad \text{for} \quad \frac{I_{\chem{HCN}}}{I_{\chem{HNC}}} \leq 4~. \end{equation} However, since the $\chem{NH_3~(1,1)} / \chem{NH_3~(2,2)}$ transitions are only sensitive to $T_\mathrm{k} \lesssim 50$~K (see e.g. Fig.~1 of \citealt{Mangum2013}), the calibration shown above only represents the low-temperature regime. The challenge of applying such concepts to nearby galaxies is that the concentrations of dense gas studied by \citet{Hacar2020} in the local Milky Way environment (i.e. Orion) are very compact (${\sim}0.1{-}1$~pc; \citealt{Lada2003}) and representative of solar neighbourhood environmental conditions (e.g. chemistry, average densities, Mach numbers and kinetic temperatures). Achieving such a resolution is currently extremely difficult in an extragalactic context (e.g. $1$\,pc $= 0.025\arcsec$ at the distance of NGC~6946), potentially limiting our capability to determine kinetic temperatures (e.g. when using \chem{H_2CO}; see \citealt{Mangum2019} and below). That said, galaxy centres present an ideal regime in which to focus our efforts, as gas at densities similar to those of the concentrations observed within local star-forming regions is not confined to compact structures, but can span (nearly) the entire CMZ (up to ${\sim}100$\,pc), leading to luminous and, importantly, extended HCN and HNC emission (\citealt{Longmore2013,Rathborne2015,Krieger2017,Petkova2021}).
Hence, testing this temperature probe within galaxy centres overcomes the requirement for such extremely high-resolution observations, and should be possible with the ${\sim}100$~pc scale measurements presented in this work. In the nuclear region of \ngc\ we find a mean {$\chem{HNC}/\chem{HCN}$}\ ratio of ${\sim}0.37$. For the bar ends we find lower ratios: ${\sim}0.31$ for the NBE and ${\sim}0.32$ for the SBE (see \autoref{tab:regions-charac}). \cite{Jimenez-Donaire2019EMPIRE} reported a ratio of ${\sim}0.31$ on kpc scales. There are no other HNC-to-HCN ratios in the literature for comparison, since HNC has hardly been observed towards \ngc. If we assume that the ratio of HCN and HNC traces the kinetic temperature and adopt the \cite{Hacar2020} relation\footnote{The equation in \citet{Hacar2020} uses HCN over HNC ($\chem{HCN}/\chem{HNC}$); therefore, for e.g. the NUC we have $1/0.37 = 2.7$.}, then we would infer a $T_\mathrm{k} (\chem{HCN}/\chem{HNC})$ of ${\sim}27$~K for the NUC. For the bar ends we calculate slightly higher temperatures of ${\sim}31$~K and ${\sim}32$~K (on $4\arcsec \approx 150$~pc scales). \cite{Meier2004Nucleusof6946CO} predict $T_\mathrm{k}~{\sim}20{-}40$~K (for $n_{\chem{H_2}} = 10^{3}$~cm$^{-3}$) based on CO and its isotopologues, using a large velocity gradient (LVG) radiative transfer model (building on the models presented in \citealp{Meier2000}). Including HCN in their LVG models favoured higher $T_\mathrm{k}$ (${\sim}90$~K) and $n_{\chem{H_2}}$ (${\sim}10^{4} - 10^{4.5}$~cm$^{-3}$), but these numbers are sensitive to whether \chem{^{13}CO} and HCN trace the same gas component. With the inversion transitions of \chem{NH_3}, \cite{Mangum2013} found $T_\mathrm{k}~{\sim}47{\pm}8$~K (using the $\chem{NH_3~(1,1)} / \chem{NH_3~(2,2)}$ ratio), which is a factor of ${\sim}2$ higher than our inferred $T_\mathrm{k}$~({$\chem{HNC}/\chem{HCN}$}). The higher excitation $\chem{NH_3~(2,2)} / \chem{NH_3~(4,4)}$ ratio, which monitors $T_\mathrm{k} \lesssim 150$~K, has not yet been detected towards \ngc\ \citep{Mangum2013, Gorski2018}. Those studies already showed that an unambiguous determination of the kinetic temperature is challenging. Comparing our obtained $T_\mathrm{k}$~({$\chem{HNC}/\chem{HCN}$}) to typical $T_\mathrm{k}$ measurements towards the central molecular zone (CMZ) of the Milky Way reveals higher gas temperatures in the CMZ ($T_\mathrm{k} > 40$~K; \citealt{Ao2013,Ott2014,Ginsburg2016,Krieger2017}). Investigating individual CMZ clouds, \cite{Ginsburg2016} used \chem{para{-}H_2CO} transitions as a temperature tracer, which is sensitive to warmer ($T_\mathrm{k} > 20$~K) and denser ($n~{\sim}10^{4-5}$~cm$^{-3}$) gas. They determined gas temperatures ranging from ${\sim}60$ to ${>}100$~K. We know from extragalactic studies that high kinetic temperatures ($50$ to ${>}250$~K) can be produced by both cosmic ray and mechanical (turbulent) heating processes \citep{Mangum2013,Gorski2018}. The CMZ of our own Galaxy seems to be different in this respect: there, the mismatch between dust and gas temperatures at moderately high density ($n~{\sim}10^{4-5}$~cm$^{-3}$) is better explained by mechanical heating \citep{Ginsburg2016}. However, it is not clear what might be the reason for observing low $T_\mathrm{k}$ in an environment where we expect mechanical heating processes.
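As a cross-check of the numbers quoted above, the low-ratio branch of the \cite{Hacar2020} calibration (Eq.~\ref{eq:hcn-hnc}) can be applied directly to our mean region ratios; the second branch of their two-part function, required for $I_{\chem{HCN}}/I_{\chem{HNC}} > 4$, is not reproduced here:
\begin{verbatim}
import numpy as np

def tk_from_hcn_hnc(hcn_over_hnc):
    """T_k = 10 * (I_HCN / I_HNC), valid for ratios <= 4 (Hacar et al. 2020).
    Ratios above 4 require the second branch of their calibration."""
    r = np.asarray(hcn_over_hnc, dtype=float)
    if np.any(r > 4.0):
        raise ValueError("ratio > 4: high-ratio branch required")
    return 10.0 * r

hnc_hcn = np.array([0.37, 0.31, 0.32])   # NUC, NBE, SBE (this work)
print(tk_from_hcn_hnc(1.0 / hnc_hcn))    # ~[27.0, 32.3, 31.3] K
\end{verbatim}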
In the better-studied extragalactic nuclear source NGC~253, \cite{Mangum2019} found kinetic temperatures of $T_\mathrm{k} > 50$~K on $5\arcsec\approx85$~pc scales using ten transitions of \chem{H_2CO}, while on scales ${<}1\arcsec$ (${\sim}17$~pc) they measure $T_\mathrm{k} > 300$~K. Using \chem{NH_3} as a thermometer indicates the presence of a warm and a hot component with $T_\mathrm{k} = 75$~K and $T_\mathrm{k} > 150$~K, respectively \citep{Gorski2018, Perez-Beaupuits2018}. The reported $\chem{HCN}/\chem{HNC}$ ratio over the whole nucleus is ${\sim}1$, which, if the \cite{Hacar2020} calibration applied, would imply $T_\mathrm{k} (\chem{HNC}/\chem{HCN})~{\sim}10$~K. This is a factor of ${\sim}7$ lower than the aforementioned warm component. It indicates that this ratio provides no reliable information about $T_\mathrm{k}$ in the nuclear region of NGC~253. Using Eq.~\eqref{eq:hcn-hnc}, we find $T_\mathrm{k}$~({$\chem{HNC}/\chem{HCN}$}) lower than $50$~K for almost all of the eight kpc-sized galaxy centres in the EMPIRE sample\footnote{For NGC~3627, NGC~4254 and NGC~5055 we had to use the second part of the two-part linear function in \cite{Hacar2020} for the calibration.}. NGC~3627 and NGC~5055 exhibit higher kinetic temperatures, $58$~K and $61$~K, respectively. Towards NGC~3627, \cite{Beslic2021} found a lower $T_\mathrm{k}$~({$\chem{HNC}/\chem{HCN}$}) of ${\sim}34$~K on $100$~pc scales (using the same framework). In summary, the {$\chem{HNC}/\chem{HCN}$}\ ratio results in low inferred kinetic temperatures in galaxy centres ($T_\mathrm{k} < 50$~K) if the \cite{Hacar2020} prescription can be applied. However, in the absence of other accurate kinetic temperature measurements (with \chem{NH_3} or \chem{H_2CO}) towards \ngc\ and the EMPIRE galaxies, we speculate that the isomer ratio is not a suitable $T_\mathrm{k}$ probe for large extragalactic regions, in particular towards galaxy centres, which can also have high optical depths, complex chemistry, AGN activity and prominent additional excitation mechanisms. A comparison with similarly high-resolution observations towards a galaxy centre, using kinetic temperatures derived from ammonia emission, would be worthwhile to further investigate the {$\chem{HNC}/\chem{HCN}$}\ temperature sensitivity framework of \cite{Hacar2020}. \subsection{Examining ratios among HCN, \texorpdfstring{HCO$\bm{^{+}}$}{HCO+} and HNC as a diagnostic of AGN state} \label{Disc:HCO/HCN} Clouds of gas in the inner kpc of galaxies are exposed to intense radiation, which can emanate from an active galactic nucleus (AGN), seen as hard $X$-rays with $E > 1$~keV; from starburst regions, dominated by the radiation of O~and B~stars; or from both. An excess of $X$-ray emission affects the thermal and chemical balance of the surrounding ISM, which in turn can influence the molecular line emission (see below). The centre of \ngc\ exhibits no clear indication of the presence of an AGN. \citet{Holt2003} studied the distribution of the $X$-ray emission over the full disc of \ngc\ and found several low-luminosity point-like sources, one of which coincides with the dynamical centre determined by \cite{schinnerer2006molecular}. Theoretical modelling of the ratios between \chem{HCO^+} and HCN suggested them as a diagnostic tool to distinguish between photon-dominated regions (PDRs) and $X$-ray-dominated regions (XDRs) for a given column density of $N~{\sim}10^{23}$~cm$^{-2}$ in the presence of ionizing radiation (\citealt{Meijerink2005,Meijerink2007}).
In their models, the $\chem{HCO^+}/\chem{HCN}$ ratio varies systematically with the gas density and with the incident ultraviolet and infrared radiation fields. Mechanical heating and cosmic ray ionization could also be sources of variations in $\chem{HCO^+}/\chem{HCN}$ (\citealt{Bayet2010,Meijerink2011}). Adding HNC to the analysis, \cite{Loenen2008} claim that in XDRs the HNC line is always stronger than the HCN line, whereas the inverse trend is seen in PDRs (resulting in $\chem{HNC}/\chem{HCN}$ ratios below unity). For their analyses they used IRAM \mbox{30-m} observations of the HCN, HNC and \chem{HCO^+} line emission of 37 infrared-luminous galaxies \citep{Baan2008} and 80 additional sources from the literature (see \citealt{Loenen2008} and references therein), all unresolved measurements. They then compared the observational data with the predictions of PDR and XDR models \citep{Meijerink2007} with varying volume densities, ranging from $10^{4.5}$ to $10^{6.0}$~cm$^{-3}$. A low $\chem{HCO^+}/\chem{HCN}$ ratio was proposed as the signature of an AGN, since studies of galaxies hosting an AGN have found evidence for enhanced emission from HCN relative to \chem{HCO^+} (e.g. \citealt{Kohno2001DenseMolecularGas,Imanishi2007,Davies2012}). Recently, however, this statement has been called into question, for example by \cite{Privon2020}. They investigated \textit{NuSTAR} hard $X$-ray emission together with literature \chem{HCO^+} and HCN observations and found no correlation between the $\chem{HCO^+}/\chem{HCN}$ ratio and the $X$-ray to IR luminosity ratio or the AGN luminosity. Thus, observing enhanced HCN relative to \chem{HCO^+} towards a galaxy centre is not convincingly linked to currently observed AGN activity. We now apply the PDR versus XDR framework of \cite{Baan2008} and \cite{Loenen2008} to our observed dense gas tracers towards the centre of \ngc\ and investigate \textit{Chandra} $X$-ray observations. \input{Figures/01caption/figure8} We see from the top panels of \autoref{fig:RatioMaps-uvtrimmed-SSC} that the $\chem{HCO^+}/\chem{HCN}$ and $\chem{HNC}/\chem{HCN}$ ratios exhibit values lower than unity in all three regions (see \autoref{tab:regions-charac}). In \autoref{fig:XDR-PDR} we investigate the diagnostic plots proposed by \cite{Baan2008} and \cite{Loenen2008} to visually discriminate between XDRs and PDRs by comparing the line ratios between \chem{HCO^+}, HCN and HNC in the central $20\arcsec\approx745$~pc of \ngc. In all these diagnostic plots (panels a--d), \ngc\ lies in the PDR regime. Panel~(d) could indicate a linear relationship between $\chem{HNC}/\chem{HCN}$ and $\chem{HNC}/\chem{HCO^+}$. We see $\log_{10}$($\chem{HNC}/\chem{HCN}$) in the range of $-0.80$ to $-0.25$ and $\log_{10}$($\chem{HNC}/\chem{HCO^+}$) between $-0.70$ and $+0.10$. We investigate which mechanism could cause the observed ranges of line ratios. For this purpose we run the \cite{Meijerink2005} models for the PDR case. In particular, we investigate two scenarios: (i) fixing the radiation field ($G_0 = 10^2$) with densities varying over $n = 10^5 - 10^6$~cm$^{-3}$, and (ii) fixing the density ($n = 10^{5.5}$~cm$^{-3}$) with the radiation field\footnote{$G_0 = 10^2$ and $G_0 = 10^5$ are the default minimum and maximum $G_0$ in their model outputs.} varying over $G_0 = 10^2 - 10^5$ (see panels (i) and (ii) in \autoref{fig:XDR-PDR}).
For the first scenario, we find consistent ratios for $n~{\sim}10^{5.25} - 10^{5.75}$~cm$^{-3}$, but the predicted ratios span a narrow range ($\log_{10}(\chem{HNC}/\chem{HCO^+})$ from $-0.25$ to $+0.10$) compared to the observations (red shaded areas), while $\log_{10}$($\chem{HNC}/\chem{HCN}$) remains rather constant. In the second case, $\log_{10}$($\chem{HNC}/\chem{HCO^+}$) decreases roughly linearly with $G_0$, from $-0.2$ to $-0.7$. This corresponds quite well to our observed ranges. From this we conclude that the scatter we observe in $\log_{10}$($\chem{HNC}/\chem{HCO^+}$) (panel~d) is mainly due to variations in the radiation field strength, with smaller ratios for stronger radiation fields. From all these diagnostic plots (panels a--d), it seems that the centre of \ngc\ is dominated by photons (PDR) rather than $X$-rays (XDR). For \ngc, as mentioned above, there is no clear evidence for an AGN. In the following we investigate whether there are $X$-ray sources strong enough to power an XDR, even though the diagnostic diagrams favour a PDR. The \textit{Chandra} $X$-ray map ($0.5{-}7.0$~keV, in counts) towards the central $20\arcsec$ of \ngc\ shows that most of the diffuse $X$-ray emission comes from a region not associated with the NUC, SBE or NBE (see \autoref{fig:XDR-PDR}). The stronger of the two $X$-ray sources detected near the NUC by \cite{Holt2003} (shown as black crosses) is ${\sim}2\arcsec$ away from the dynamical centre position (shown as a red circle). They found a flux of $2.8\times10^{-13}$~erg~s$^{-1}$~cm$^{-2}$ and classified its hardness as `medium' (see their Table~2 for SourceID~45). Scaling this flux to our working resolution results in $4.53\times10^{-3}$~erg~s$^{-1}$~cm$^{-2}$.\footnote{Assuming the source is point-like, we scale the flux by $(D/d)^2$, where $D$ is the distance to \ngc\ and $d$ is the distance from the $X$-ray source, here $2\arcsec$.} This is similar to the lowest values in \cite{Meijerink2007}. This suggests that the brightest $X$-ray source might affect the gas within a beam scale, but hardly beyond that. Interestingly, all the kpc-sized central regions of the EMPIRE galaxies in \autoref{fig:XDR-PDR} lie in the PDR regime. Their $\log_{10}(\chem{HNC}/\chem{HCN})$ ratios are in the same range as those observed for \ngc. On the other hand, $\log_{10}(\chem{HNC}/\chem{HCO^+})$ varies only from $-0.60$ to $-0.20$. The galaxies NGC~3627, NGC~5055 and M~51 are known to host AGN classified as LINERs \citep{Goulding2009} and are still located in the PDR regime in \autoref{fig:XDR-PDR}. There is no strongly enhanced HCN emission relative to \chem{HCO^+} towards these three galaxy centres that would `move' them into the XDR regime of the diagnostic plots. The fact that they do not lie within the XDR region of \cite{Loenen2008} could be because a) their models are not quantitatively accurate; b) the EMPIRE AGNs are faint and their effects are diluted when averaging over $1$~kpc regions; or c) the line ratios are better described by a different model that does not require their significant variations to be driven by a PDR or XDR (e.g. \citealt{Viti2017}). We test the dilution effect (option b) by including the available high-resolution dense gas observations of M~51 and NGC~3627 (${\sim}4\arcsec\approx166$~pc from \citealt{Querejeta2016} and ${\sim}2\arcsec\approx100$~pc from \citealt{Beslic2021}, respectively; see \autoref{sec:Ancillary data}). For both, we show the line ratios for the central beam; they all fall in the PDR regime.
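Returning briefly to the $X$-ray source near the NUC: the flux scaling described in the footnote above is simple to verify. A sketch, assuming the small-angle conversion of $2\arcsec$ at the adopted distance of $7.72$~Mpc (the result agrees with the quoted number to order of magnitude; the exact value depends on this conversion):
\begin{verbatim}
# F_local = F_obs * (D/d)^2: observed flux at Earth rescaled to the flux
# incident on gas a projected 2 arcsec away from the point source.
F_obs = 2.8e-13                  # erg s^-1 cm^-2 (Holt et al. 2003)
D_pc  = 7.72e6                   # adopted distance to NGC 6946 in pc
d_pc  = 2.0 * 4.848e-6 * D_pc    # 2 arcsec in pc (small-angle approx.)

F_local = F_obs * (D_pc / d_pc)**2
print(d_pc, F_local)             # ~75 pc, ~3e-3 erg s^-1 cm^-2,
                                 # same order as the 4.53e-3 in the text
\end{verbatim}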
From all of the above we speculate that the diagnostic plots shown in \autoref{fig:XDR-PDR}, and in particular the $\chem{HCO^+}/\chem{HCN}$ line ratio, might not be a unique indicator for diagnosing the presence of an AGN in galaxies on kpc/sub-kpc scales. This finding is consistent with the previous studies by \citet{Privon2020} and \citet{Li2021}. \subsection{Density variations at the inner bar ends and nuclear region} \label{Disc:lineratiopattern} \input{Figures/01caption/figure9} Already in \autoref{fig:intensities} we saw that the \co\ and \cotwo\ emission is spatially more extended than that of \hcn, \hco\ or \hnc, which could be a sign that the CO lines trace a lower-density regime. That said, line intensities depend not only on density, but also on optical depth, elemental abundance variations and IR pumping (see e.g. \citealt{Shirley2015, Barnes2020b}), and all of these effects can thus drive the relative line intensity ratios. \cite{Leroy2017} analysed how changes in the sub-beam density distribution affect the beam-averaged line emissivity, applying non-LTE radiative transfer models coupled with a parametrised density probability distribution. They found that the emission of dense gas tracers is more sensitive to changes in gas density than that of, for example, CO. More precisely, a line can still be emitted at densities below its critical density, but with lower efficiency; as a result, a small increase in gas density can significantly increase the efficiency of the emission. This is not the case for lines with a lower critical density (e.g. the bulk molecular gas tracer CO): there, the gas density already exceeds the critical density, so varying the gas density does not significantly affect the efficiency of the emission. We investigate the scenario of \cite{Leroy2017} that line ratios can reflect changes in the density distribution. In Figure~\ref{fig:regions} we investigate the environmental dependence of the observed molecular line ratios. The colour bar shows the integrated intensity of CO (an indicator of the volume density at cloud scales; \citealt{Leroy2016,Sun2018}) towards the $150$~pc sized NUC, NBE and SBE (see \autoref{fig:SFR-and-mask}; we take the mean over the 7 hexagonal points). The upper panel shows on the $y$-axis the logarithmic molecular line ratios with \cotwo. At first glance, within the NUC we find line ratios enhanced by at least $20\%$ compared to the inner bar ends. Identifying ratio variations between the two bar ends by eye is challenging. The lower panel therefore shows the relative differences: here we divide, for example, $\chem{HCN}/\chem{CO}$ by the mean $\chem{HCN}/\chem{CO}$ of all three regions. We order them by their flaring appearance. We see that the trend is not monotonic with $n_\mathrm{eff}$; instead, the ratios follow the order $\chem{HCN} < \chem{HCO^+} < \chem{HNC}$, which is the order of decreasing intensity. We find that the highest ratios are associated with the NUC (purple marker), except for the $\chem{CO}{10}/\chem{CO}{21}$ ratio. Comparing the two bar ends, we find (i) higher ratio values towards the SBE (orange marker) for ratios including \co\ and \chem{HCN}, (ii) higher ratio values towards the NBE (brown markers) for ratios including \chem{HCO^+} and \chem{HNC}, and (iii) that the largest differences between the SBE and NBE, of ${\sim}10\%$, occur for ratios including \co\ and \chem{HCO^+}. The \chem{HCO^+} emission in the SBE seems to be under-luminous compared to the other two environments.
The \chem{HNC} intensity provides the largest dynamic range, with the highest value seen in the NUC and the lowest in the SBE. We compare the observed line ratios of CO, HCN, \chem{HCO^+} and HNC in the three regions using radiative transfer models to estimate the mass-weighted mean gas density. From the temperature analyses (see \autoref{Disc:HNC/HCN}) we would expect potentially higher densities in the bar ends, whereas from the $R_{21}$ examination (see \autoref{sec:R21}) we would anticipate higher molecular densities in the NBE than in the SBE. Furthermore, from our derived \sigsfr\ (see \autoref{sec:3-SFRindicators-compare} and \autoref{tab:regions-charac}) we would expect higher molecular gas densities in the SBE than in the NBE. For this purpose we use the radiative transfer code \texttt{Dense Gas Toolbox} \citep{Puschnig2020}, which builds on the approach of \cite{Leroy2017}: emission lines are assumed to emerge from an isothermal ensemble of gas densities that follows a log-normal distribution (with or without a power-law tail), in combination with \texttt{RADEX} calculations \citep{vanderTak2007}. It works as follows: (i) using Bayesian inference, model parameters (i.e. temperature and density) are inferred from a number of integrated input line intensities; (ii) the assumed fixed line optical depths and abundances are calibrated through observations of the EMPIRE survey (\citealt{Jimenez-Donaire2019EMPIRE}, which includes \ngc); (iii) line emissivities in each density bin are calculated using expanding-sphere escape probabilities (the large velocity gradient approximation) as implemented in \texttt{RADEX} \citep{vanderTak2007}; (iv) given the kinetic temperatures estimated in \autoref{Disc:HNC/HCN}, we assume a fixed temperature of $30$~K. A more detailed description of the \texttt{Dense Gas Toolbox} will be provided in J.~Puschnig et al. (in prep.). The models suggest that the density in the NUC is the highest, with a mass-weighted mean density of ${\sim}10^{4.0}$~cm$^{-3}$, while the lowest is found in the NBE, with a value of ${\sim}10^{3.7}$~cm$^{-3}$. In summary, these models broadly agree with our empirical findings in \autoref{fig:regions}. The higher mass-weighted mean density in the SBE (compared to the NBE) agrees with the enhanced \sigsfr\ there. However, the model results do not explain the elevated $R_{21}$ in the NBE (see \autoref{fig:R21}); therefore, an additional physical mechanism has to be responsible for it. A note of caution is appropriate: the mass-weighted mean density differences found among the three regions are small and, given the model assumptions, might not be significant (see \autoref{tab:regions-charac}). For example, the kinetic temperature in the model is fixed and the same for the two bar ends and the nuclear region. An accurate measurement of kinetic temperatures on parsec scales could unveil the differing nature of the two bar ends. \input{Table/table7} \subsection{Correcting for distance} \label{app:SFRdistance} \cite{Schinnerer2007} found within a $3\arcsec \times 3\arcsec$ region a SFR of ${\sim}0.1$~\sfrunit, adopting a distance of $5.5$~Mpc. Using $33$~GHz as a SFR tracer and taking the same distance and region, we get $0.06$~\sfrunit. Adopting the updated distance to \ngc\ of $7.72$~Mpc results in $0.11$~\sfrunit. This is only a factor of ${\sim}1.8$ higher than our value at the old distance.
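The correction follows from the inverse-square dependence of luminosity-based SFRs on distance; explicitly,
\begin{equation*}
\frac{\mathrm{SFR}(7.72~\mathrm{Mpc})}{\mathrm{SFR}(5.5~\mathrm{Mpc})} = \left(\frac{7.72}{5.5}\right)^{2} \approx 1.97\,,
\end{equation*}
consistent, within rounding of the input value, with the quoted increase from $0.06$ to $0.11$~\sfrunit.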
\section{Introduction} \input{Sections/01-Introduction} \section{Observations and ancillary data} \label{sec:Observation} \input{Sections/02-Observation} \section{Results -- Molecules in different environmental regions in the centre of NGC 6946} \label{sec:Results} \input{Sections/03-Results_Molecules_in_different_environmental_regions_in_the_centre_of_ngc6946} \section{The environmental variability of the star formation rate in NGC 6946} \input{Sections/04-The_environmental_variability_in_SFR} \section{Results -- Line ratios and relationships among molecular species} \label{sec:ResultsB} \input{Sections/05-Results_Line_Ratio_Diagnostics_and_relationships_among_molecular_species} \section{Implications for spectroscopic studies of other galaxies -- from bulk molecular gas to dense gas} \label{sec:Discussion} \input{Sections/06-Implications_for_spectroscopic_studies_of_other_galaxies} \section{Conclusion} \label{sec:Conclusions} \input{Sections/07-Conclusion} \begin{acknowledgements} We would like to thank the anonymous referee for their insightful comments that helped improve the quality of the paper. CE gratefully acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) Sachbeihilfe, grant number BI1546/3-1. FB, AB, IB, JP and JdB acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.~726384/Empire). ES, DL, HAP, TS and TGW acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.~694343). IL acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) Sachbeihilfe, grant number SCHI 536/11-1. MC and JMDK gratefully acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG) in the form of an Emmy Noether Research Group (grant number KR4801/1-1) and the DFG Sachbeihilfe (grant number KR4801/2-1), and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme via the ERC Starting Grant MUSTANG (grant agreement number 714907). HH acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number RGPIN-2017-03987, and the Canadian Space Agency, funding reference 21EXPUVI3. SCOG and RSK acknowledge financial support from the German Research Foundation (DFG) via the collaborative research center (SFB 881, Project-ID 138713538) `The Milky Way System' (subprojects A1, B1, B2, and B8). They also acknowledge funding from the Heidelberg Cluster of Excellence `STRUCTURES' in the framework of Germany's Excellence Strategy (grant EXC-2181/1, Project-ID 390900948) and from the European Research Council via the ERC Synergy Grant `ECOGAL' (grant 855130). The work of AKL is partially supported by the National Science Foundation under Grants No. 1615105, 1615109, and 1653300. JP acknowledges support from the Programme National ``Physique et Chimie du Milieu Interstellaire'' (PCMI) of CNRS/INSU with INC/INP co-funded by CEA and CNES. MQ acknowledges support from the Spanish grant PID2019-106027GA-C44, funded by MCIN/AEI/10.13039/501100011033. ER acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number RGPIN-2017-03987. TS acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.~694343).
MCS acknowledges financial support from the German Research Foundation (DFG) via the collaborative research center (SFB 881, Project-ID 138713538) `The Milky Way System' (subprojects A1, B1, B2, and B8). AU acknowledges support from the Spanish grants PGC2018-094671-B-I00, funded by MCIN/AEI/10.13039/501100011033 and by ``ERDF A way of making Europe'', and PID2019-108765GB-I00, funded by MCIN/AEI/10.13039/501100011033. Y-HT acknowledges funding support from NRAO Student Observing Support Grant SOSPADA-012 and from the National Science Foundation (NSF) under grant No. 2108081. \end{acknowledgements} \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Consider a centered stationary Gaussian family of random variables $X=\{ X_n , n\in \mathbb{Z}\}$ with unit variance. For all $k \in \mathbb{Z}$, set $\rho(k) = \mathbb{E}(X_0 X_k)$, so $\rho(0)=1$ and $\rho(k) = \rho(-k)$. We say that a function $g \in L^2( \mathbb{R}, \gamma)$, where $\gamma$ is the standard Gaussian measure, has {\it Hermite rank} $d\ge 1$ if \begin{equation}\label{hexp} g(x)= \sum_{m=d} ^\infty c_m H_m(x), \end{equation} where $c_d \not =0$ and $H_m$ is the $m$th Hermite polynomial. We will make use of the following condition that relates the covariance function $\rho$ to the Hermite rank of a function $g$: \begin{equation} \label{h1} \sum_{j \in \mathbb{Z}} |\rho(j)|^d < \infty. \end{equation} Let us recall the celebrated Breuer-Major theorem for functionals of the stationary Gaussian sequence $X$ (see \cite{bm}). \begin{thm}[Breuer-Major theorem]\label{bm} Consider a centered stationary Gaussian family of random variables $X=\{ X_n , n\in \mathbb{Z}\}$ with unit variance and covariance function $\rho$. Let $g \in L^2( \mathbb{R}, \gamma)$ be a function with Hermite rank $d\ge 1$ and expansion (\ref{hexp}). Suppose that (\ref{h1}) holds true. Set \begin{equation}\label{bm.sig} \sigma^2 = \sum_{m=d}^\infty m! c_m^2 \sum_{k \in \mathbb{Z}} \rho(k)^m. \end{equation} Then the sequence \begin{equation}\label{yn} Y_n := \frac{1}{\sqrt{n}} \sum_{j=1}^n g(X_j) \end{equation} converges in law to the normal distribution $N(0, \sigma^2)$. \end{thm} The purpose of this paper is to show that, under suitable regularity assumptions on the function $g$, the sequence $Y_n /\sigma_n$, where $\sigma_n^2 = \mathbb{E}(Y_n^2)$, converges in the total variation distance to the standard normal law $N(0,1)$, and we can estimate the rate of convergence in terms of the covariance function $\rho$. To show these results we will apply a combination of Stein's method for normal approximations and techniques of Malliavin calculus. The combination of Stein's method with Malliavin calculus to study normal approximations was first developed by Nourdin and Peccati (see the pioneering work \cite{np-ptrf} and the monograph \cite{np-book}). For random variables on a fixed Wiener chaos, these techniques provide a quantitative version of the {\it Fourth Moment Theorem} proved by Nualart and Peccati in \cite{nunugio}. A basic result in this direction is the following proposition. Throughout the paper, $Z$ will denote a $N(0,1)$ random variable. \begin{prop}\label{bd.4m-1} Let $F$ be a random variable in the $q$th ($q \geq 2$) Wiener chaos with unit variance. Then \begin{equation} \label{equ80} d_{\rm TV}(F, Z) \leq 2 \sqrt{{\rm Var} \left(\frac{1}{q}\|DF\|_{\mathfrak{H}}^2\right)} \leq 2\sqrt{\frac{q-1}{3q}(\mathbb{E}(F^4)-3)}\,, \end{equation} where $D$ denotes the derivative in the sense of Malliavin calculus and $d_{\rm TV}$ is the total variation distance. \end{prop} In the context of the Breuer-Major theorem, this result can be applied to obtain a rate of convergence for the total variation distance $d_{\rm TV} (Y_n/ \sigma_n, Z)$, provided $g=H_d$ and condition (\ref{h1}) holds (see \cite{np-ptrf}). Later on, the rate of convergence was improved in \cite{bbl} using an approach based on the spectral density. In the reference \cite{np-15}, with an intensive application of Stein's method combined with Malliavin calculus, Nourdin and Peccati improved the estimate (\ref{equ80}), obtaining the following matching upper and lower bounds for the total variation distance.
\begin{prop}\label{bd.4m-2} Let $F$ be a random variable in the $q$th ($q \geq 2$) Wiener chaos with unit variance. Then, there exist constants $C_1, C_2>0$, depending on $q$, such that \[ C_1 \max\{ |\mathbb{E}(F^3)|, \mathbb{E}(F^4)-3\} \le d_{\rm TV}(F, Z) \leq C_2 \max\{ |\mathbb{E}(F^3)|, \mathbb{E}(F^4)-3\} \,. \] \end{prop} In the paper \cite{bbnp}, it is proved that $|\mathbb{E}(F^3)| \leq C \sqrt{\mathbb{E}(F^4)-3}$, which trivially indicates that the bound in Proposition \ref{bd.4m-2} is better than (\ref{equ80}). Furthermore, using an analytic characterization of cumulants and Edgeworth-type expansions, the authors of \cite{bbnp} proved that, for a normalized sequence $F_n$ which belongs to the $q$th Wiener chaos and converges to $Z$ in distribution as $n \to \infty$, the rate of convergence of the total variation distance is characterized by the third and fourth cumulants. The literature on the rate of convergence for normal approximations is focused on random variables on a fixed Wiener chaos. The goal of this paper is to provide an answer to the following question: \medskip \noindent {\bf Question:} To what extent can Propositions \ref{bd.4m-1} and \ref{bd.4m-2} be generalized to random variables that are not in a fixed chaos, and how can this approach be applied in the context of the Breuer-Major theorem? \medskip We cannot expect that, in this more general framework, the convergence to a normal distribution is characterized by the third and fourth cumulants, and new functionals will appear. In the first part of the paper, we consider random variables that can be written as divergences, that is, $F=\delta(u)$, where $\delta$ is the adjoint of the derivative operator in the Malliavin calculus. We will use Stein's method and Malliavin calculus to provide three different bounds (see Propositions \ref{stein.hd0}, \ref{stein.hd} and \ref{stein.hd2}) for $d_{\rm TV} (F,Z)$. If $F$ is in some fixed chaos, the bound in Proposition \ref{stein.hd0} should be the same as that of Proposition \ref{bd.4m-1} and the bound in Proposition \ref{stein.hd} should coincide with that of Proposition \ref{bd.4m-2}. Actually, the proof of Proposition \ref{stein.hd} has been inspired by the approach used to derive the upper bound in Proposition \ref{bd.4m-2}. The second part of the paper is devoted to deriving upper bounds for the total variation distance in the context of the Breuer-Major theorem, applying the estimates provided by Propositions \ref{stein.hd0}, \ref{stein.hd} and \ref{stein.hd2}. To do this, we need to represent $g(X_j)$ as a divergence $\delta(u)$. A basic ingredient for this representation is the shift operator $T_1$ (see formula (\ref{t1a}) below) defined using the expansion of $g$ into a series of Hermite polynomials. It turns out that the representation obtained through $T_1$ coincides with the classical representation $F= \delta (-DL^{-1} F)$, introduced in \cite{nuaort}, that plays a fundamental role in normal approximations by Stein's method and Malliavin calculus. The representation of $g(X_j)$ as a divergence (or an iterated divergence) allows us to apply integration by parts in the context of Malliavin calculus (that is, the duality between the derivative and divergence operators), which leads to estimates of the expectation of products of random variables of the form $g^{(k)}(X_j)$.
For this approach to work, we are going to assume that the function $g$ belongs to the Sobolev space $\mathbb{D}^{k,p}(\mathbb{R},\gamma)$, for some $k$ and $p$, of functions that have $k$ weak derivatives with moments of order $p$ with respect to $\gamma$. In this way we have been able to obtain the following results in the framework of Theorem \ref{bm}, for functions of Hermite rank one or two. \begin{itemize} \item[(i)] For functions $g$ of Hermite rank $d=1$, assuming $g\in \mathbb{D}^{2,4} (\mathbb{R},\gamma)$, we have (see Theorem \ref{main.d1} below) \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq C n^{-\frac{1}{2}}. \] \item[(ii)] For functions $g$ of Hermite rank $d=2$, assuming $g\in \mathbb{D}^{6,8} (\mathbb{R},\gamma)$, we have (see Theorem \ref{main.d12} below) \begin{equation} \label{rate1} d_{\rm TV}(Y_n / \sigma_n, Z) \leq C n^{-\frac{1}{2}} \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{3}{2}}\right)^{2}. \end{equation} \end{itemize} It is worth noticing that the upper bound (\ref{rate1}) coincides with the optimal rate for the Hermite polynomial $g(x)= x^2-1$ obtained in \cite{bbnp}. Furthermore, in Theorem \ref{main.d12}, rates worse than (\ref{rate1}) are established under less smoothness on the function $g$. For functions $g$ of Hermite rank $d\ge 3$ and assuming $g\in \mathbb{D}^{3d-2,4} (\mathbb{R},\gamma)$, we have established in Theorem \ref{thm3.4} an upper bound for the total variation distance $d_{\rm TV}(Y_n / \sigma_n, Z)$ based on Proposition \ref{stein.hd0}, which is a slight modification of the rate derived for the Hermite polynomial $H_d$. Due to the complexity of the computations, the application of Proposition \ref{stein.hd} in the case $d\ge 3$ has not been considered in this paper. The paper is organized as follows. Section 2 contains some preliminaries on Malliavin calculus and Stein's method, including the definition and properties of the shift operator $T_1$. In Section 3, we derive the three basic estimates for the total variation distance between a divergence $\delta(u)$ and a $N(0,1)$ random variable. Section 4 contains the main results of the paper. First we thoroughly analyze the cases $d=1$ and $d=2$ and establish bounds for the total variation distance in the framework of the Breuer-Major theorem and later we consider the case $d\ge 3$, applying Proposition \ref{stein.hd0}. As an application, in Section 5 we give the convergence rates for the fractional Gaussian case. We also discuss some applications to the asymptotic behavior of power variations of the fractional Brownian motions and to the consistency of the estimator of the Hurst parameter using power variations. The Appendix contains some technical lemmas used in the proof of the main results and some inequalities, obtained as an application of the rank-one Brascamp-Lieb inequality and H\"older's inequality, which play an important role in the proofs. \section{Preliminaries} In this section, we briefly recall some notions of Malliavin calculus, Stein's method and the Brascamp-Lieb inequality. The shift operator $T_1$ mentioned above is also introduced here. \subsection{Gaussian analysis} Let $\mathfrak{H}$ be a real separable Hilbert space. For any integer $m \geq 1$, we use $\mathfrak{H}^{\otimes m}$ and $\mathfrak{H}^{\odot m}$ to denote the $m$-th tensor product and the $m$-th symmetric tensor product of $\mathfrak{H}$, respectively. Let $\mathbb{X} = \{\mathbb{X}(\phi): \phi \in \mathfrak{H}\}$ denote an isonormal Gaussian process over the Hilbert space $\mathfrak{H}$. 
That means, $\mathbb{X}$ is a centered Gaussian family of random variables, defined on some probability space $(\Omega, \mathcal{F}, P)$, with covariance $$\mathbb{E}\left(\mathbb{X}(\phi)\mathbb{X}(\psi)\right) = \langle \phi, \psi \rangle_{\mathfrak{H}}, \qquad \phi, \psi \in \mathfrak{H}.$$ We assume that $\mathcal{F}$ is generated by $\mathbb{X}$. We denote by $\mathcal{H}_m$ the closed linear subspace of $L^2(\Omega)$ generated by the random variables $\{H_m(\mathbb{X}(\varphi)): \varphi \in \mathfrak{H}, \|\varphi\|_{\mathfrak{H}}=1\}$, where $H_m$ is the $m$-th Hermite polynomial defined by \[ H_m(x)=(-1)^me^{\frac{x^2}{2}}\frac{d^m}{dx^m}e^{-\frac{x^2}{2}},\quad m \geq 1, \] and $H_0(x)=1$. The space $\mathcal{H}_m$ is called the Wiener chaos of order $m$. The $m$-th multiple integral of $\phi^{\otimes m} \in \mathfrak{H}^{\odot m}$ is defined by the identity $ I_m(\phi^{\otimes m}) = H_m(\mathbb{X}(\phi))$ for any $\phi\in \mathfrak{H}$. The map $I_m$ provides a linear isometry between $\mathfrak{H}^{\odot m}$ (equipped with the norm $\sqrt{m!}\|\cdot\|_{\mathfrak{H}^{\otimes m}}$) and $\mathcal{H}_m$ (equipped with $L^2(\Omega)$ norm). By convention, $\mathcal{H}_0 = \mathbb{R}$ and $I_0(x)=x$. The space $L^2(\Omega)$ can be decomposed into the infinite orthogonal sum of the spaces $\mathcal{H}_m$, which is known as the Wiener chaos expansion. Thus, any square integrable random variable $F \in L^2(\Omega)$ has the following expansion, \[ F = \sum_{m=0}^{\infty} I_m (f_m) , \] where $f_0 = \mathbb{E}(F)$, and $f_m \in \mathfrak{H}^{\odot m}$ are uniquely determined by $F$. We denote by $J_m$ the orthogonal projection onto the $m$-th Wiener chaos $\mathcal{H}_m$. This means that $I_m (f_m) = J_m (F)$ for every $m \geq 0$. \subsection{Malliavin calculus} In this subsection we present some background of Malliavin calculus with respect to an isonormal Gaussian process $\mathbb{X}$. We refer the reader to \cite{np-book,nualartbook} for a detailed account on this topic. For a smooth and cylindrical random variable $F= f(\mathbb{X}(\varphi_1), \dots , \mathbb{X}(\varphi_n))$, with $\varphi_i \in \mathfrak{H}$ and $f \in C_b^{\infty}(\mathbb{R}^n)$ ($f$ and its partial derivatives are bounded), we define its Malliavin derivative as the $\mathfrak{H}$-valued random variable given by \[ DF = \sum_{i=1}^n \frac{\partial f}{\partial x_i} (\mathbb{X}(\varphi_1), \dots, \mathbb{X}(\varphi_n))\varphi_i. \] By iteration, one can define the $k$-th derivative $D^k F$ as an element of $L^2(\Omega; \mathfrak{H}^{\otimes k})$. For any natural number $k$ and any real number $ p \geq 1$, we define the Sobolev space $\mathbb{D}^{k,p}$ as the closure of the space of smooth and cylindrical random variables with respect to the norm $\|\cdot\|_{k,p}$ defined by \[ \|F\|^p_{k,p} = \mathbb{E}(|F|^p) + \sum_{i=1}^k \mathbb{E}(\|D^i F\|^p_{\mathfrak{H}^{\otimes i}}). \] The divergence operator $\delta$ is defined as the adjoint of the derivative operator $D$ in the following manner. An element $u \in L^2(\Omega; \mathfrak{H})$ belongs to the domain of $\delta$, denoted by $\rm Dom\, \delta$, if there is a constant $c_u$ depending on $u$ such that \[ |\mathbb{E} (\langle DF, u \rangle_{\mathfrak{H}})| \leq c_u \|F\|_{L^2(\Omega)} \] for any $F \in \mathbb{D}^{1,2}$. 
If $u \in \rm Dom \,\delta$, then the random variable $\delta(u)$ is defined by the duality relationship \begin{equation} \label{dua} \mathbb{E}(F\delta(u)) = \mathbb{E} (\langle DF, u \rangle_{\mathfrak{H}}) \, , \end{equation} which holds for any $F \in \mathbb{D}^{1,2}$. In a similar way we can introduce the iterated divergence operator $\delta^k$ for each integer $k\ge 2$, defined by the duality relationship \begin{equation} \label{dua2} \mathbb{E}(F\delta^k(u)) = \mathbb{E} \left(\langle D^kF, u \rangle_{\mathfrak{H}^{\otimes k}} \right) \, , \end{equation} for any $F \in \mathbb{D}^{k,2}$, where $u\in {\rm Dom}\, \delta^k \subset L^2(\Omega; \mathfrak{H}^{\otimes k})$. The Ornstein-Uhlenbeck semigroup $(P_t)_{t \geq 0}$ is the semigroup of operators on $L^2(\Omega)$ defined by \[ P_t F = \sum_{m=0}^\infty e^{-mt} I_m(f_m), \] if $F$ admits the Wiener chaos expansion $F = \sum_{m=0}^\infty I_m(f_m)$. Denote by $L = \frac{d}{dt}|_{t=0} P_t$ the infinitesimal generator of $(P_t)_{t \geq 0}$ in $L^2(\Omega)$. Then we have $LF=-\sum_{m=1}^\infty m J_m(F)$ for any $F \in {\rm Dom}\, L=\mathbb{D}^{2,2} $. We define the pseudo-inverse of $L$ as $L^{-1} F = -\sum_{m=1}^\infty \frac{1}{m} J_m F$. We recall the following formula for any centered and square integrable random variable $F$, \begin{equation} \label{equ5} L^{-1} F = -\int_0 ^\infty P_t F dt. \end{equation} The basic operators $D$, $\delta$ and $L$ satisfy the relation $ LF =-\delta DF $, for any random variable $F\in\mathbb{D}^{2,2}$. As a consequence, any centered random variable $F\in L^2(\Omega)$ can be expressed as a divergence: \begin{equation} \label{deltad} F= \delta (-DL^{-1}F). \end{equation} This representation has intensively been used in normal approximations (see \cite{nuaort,nunugio}). We denote by $\gamma$ the standard Gaussian measure on $\mathbb{R}$. The Hermite polynomials $\{H_m(x), m\ge 0\}$ form a complete orthonormal system in $L^2(\mathbb{R},\gamma)$ and any function $g\in L^2(\mathbb{R},\gamma)$ admits an orthogonal expansion of the form \begin{equation}\label{hexp1} g(x)= \sum_{m=0} ^\infty c_m H_m(x). \end{equation} If $g\in L^2(\mathbb{R},\gamma)$ has the expansion (\ref{hexp1}), we define the operator $T_1$ by \begin{equation} \label{t1a} T_1(g)(x) = \sum_{m=1}^\infty c_m H_{m-1}(x) \,. \end{equation} To simplify the notation we will write $T_1(g) =g_1$. Suppose that $F$ is a random variable in the first Wiener chaos of $\mathbb{X}$ of the form $F= I_1(\varphi)$, where $\varphi \in \mathfrak{H}$ has norm one. In view of the relation between Hermite polynomials and multiple stochastic integrals, it follows that for any $g\in L^2(\mathbb{R}, \gamma)$ of the form (\ref{hexp1}), the random variable $g(F)$ admits the Wiener chaos expansion \begin{equation} \label{expg} g(F)= \sum_{m=0} ^\infty c_m I_m(\varphi^{\otimes m}). \end{equation} Next we establish the connection between the shift operator $T_1$ defined in (\ref{t1a}) and the representation of a centered and square integrable random variable as divergence given in (\ref{deltad}). \begin{lemma} \label{lem2.1} Let $F$ be a random variable in the first Wiener chaos of $\mathbb{X}$ of the form $F= I_1(\varphi)$, where $\| \varphi \|_{\mathfrak{H}} =1$. Suppose that $g\in L^2(\mathbb{R}, \gamma)$ is centered. Then \[ g_1(F) \varphi = -DL^{-1} g(F). \] As a consequence, $g(F) = \delta( g_1(F) \varphi)$. 
\end{lemma} \begin{proof} Using the Wiener chaos expansion (\ref{expg}), we can write \[ L^{-1} g(F)=- \sum_{m=1} ^\infty \frac { c_m} m H_m(F), \] which implies, taking into account that $H_m' =mH_{m-1}$, that \[ -DL^{-1} g(F)= \sum_{m=1} ^\infty c_mH_{m-1}(F) \varphi= g_1(F)\varphi. \] Property $g(F) = \delta( g_1(F) \varphi)$ is a consequence of (\ref{deltad}). This completes the proof. \end{proof} For any $k\ge 2$, we can define the iterated operator $T_k= T_1\circ \stackrel{k} { \cdots} \circ T_1$ by \begin{equation} \label{t1} T_k(g)(x) = \sum_{m=k}^\infty c_m H_{m-k}(x) \,. \end{equation} We will write $T_k(g)= g_k$ and we have the representation \begin{equation}\label{g.intrep} g(F) = \delta^k( g_k(X) \varphi^{\otimes k}) \,, \end{equation} provided $F$ is a random variable in the first Wiener chaos of $\mathbb{X}$ of the form $F= I_1(\varphi)$, with $\| \varphi \|_{\mathfrak{H}} =1$, and $g$ has Hermite rank $k$. \begin{lemma} Let $F$ be a random variable in the first Wiener chaos of $\mathbb{X}$ of the form $F= I_1(\varphi)$, with $\| \varphi \|_{\mathfrak{H}} =1$. Suppose that $g\in L^2(\mathbb{R}, \gamma)$ is centered. Then for any $p \geq 1$, \begin{equation} \| g_1(F) \|_{L^p(\Omega)} \leq \sqrt{ \pi}\|g(F)\|_{L^p(\Omega)} \,. \end{equation} \end{lemma} \begin{proof} Observe that, using Lemma \ref{lem2.1}, we can write \[ \| g_1(F)\|_{L^p(\Omega)} = \|-DL^{-1} g(F)\|_{L^p(\Omega; \mathfrak{H})} \,. \] Then, using (\ref{equ5}), Minkowski's inequality and Propositions 3.2.4 and 3.2.5 of \cite{CBMS}, we can write \begin{eqnarray*} \|-DL^{-1} g(F)\|_{L^p(\Omega; \mathfrak{H})} & \leq & \left\| \int_0^\infty D P_t g(F) dt \right\|_{L^p(\Omega; \mathfrak{H})} \\ & \leq & \int_0^\infty \| D P_t g(F) \|_{L^p(\Omega; \mathfrak{H})} dt \\ & \leq & \int_0^\infty t^{-\frac{1}{2}} e^{-t} \| g(F) \|_{L^p(\Omega)} dt \,, \end{eqnarray*} which allows us to complete the proof. \end{proof} By iteration, we obtain \begin{equation} \| g_k(F) \|_{L^p(\Omega)} \leq \pi^{\frac k2} \|g(F)\|_{L^p(\Omega)} \,, \end{equation} for any $k\ge 2$, provided $g$ has Hermite rank $k$ and $F= I_1(\varphi)$, with $\| \varphi \|_{\mathfrak{H}} =1$. If $g$ has Hermite rank strictly less than $k$, we can write \[ T_kg(x)= T_k \widetilde{g}(x), \] where $\widetilde{g}(x)= \sum_{m=k} ^\infty c_m H_m(x)$. Then, \[ \| T_k g(F)\|_{ L^p(\Omega)} \le \pi^{\frac k2} \| g(F) \|_{ L^p(\Omega)} + \pi^{\frac k2} \left \| \sum_{m=0 }^{k-1} c_m H_m(F) \right\|_{ L^p(\Omega)} \le \pi^{\frac k2} \|g(F) \|_{ L^p(\Omega) } + C_{k,p}. \] Consider $\mathfrak{H}=\mathbb{R}$, the probability space $(\Omega, \mathcal{F}, P)= (\mathbb{R}, \mathcal{B}(\mathbb{R}), \gamma)$ and the isonornal Gaussian process $\mathbb{X}(h)=h$. For any $k\ge0$ and $p\ge 1$, denote by $\mathbb{D}^{k,p}(\mathbb{R},\gamma)$ the corresponding Sobolev spaces of functions. Notice that if $g\in \mathbb{D}^{k,p}(\mathbb{R},\gamma)$, and $F=I_1(\varphi)$ is an element in the first Wiener chaos of a general isonormal Gaussian process $\mathbb{X}$, then $g(F)\in \mathbb{D}^{k,p}$. The next lemma provides a regularizing property of the operator $T_k$. \begin{lemma} \label{lem2.4} Suppose that $g\in \mathbb{D}^{j,p}(\mathbb{R},\gamma)$ for some $j\ge 0$ and $p>1$. Then $T_kg \in \mathbb{D}^{j+k}(\mathbb{R},\gamma)$ for all $k\ge 1$. \end{lemma} \begin{proof} We can assume that $g$ has Hermite rank $k$, otherwise, we just subtract the first $k$ terms in its expansion. 
Then, the result is an immediate consequence of the fact that $T_k=(-DL^{-1})^k$ and the equivalence in $L^p(\mathbb{R},\gamma)$ of the operators $D$ and $(-L)^{1/2}$, which follows from Meyer's inequalities (see, for instance, \cite{nualartbook}). \end{proof} Notice that $T_1$ and the derivative operator do not commute. We will write $(g_1)' = g'_1$, which is different from $T_1(g')$. Indeed, for any $g\in L^2(\mathbb{R}, \gamma)$, we have \[ g'_1 = T_1(g') - g_2, \] because if $g$ has the expansion (\ref{hexp1}), we obtain \[ g'_1(x) = \sum_{m=2} ^\infty c_m (m-1) H_{m-2} (x), \] \[ T_1(g')(x) = \sum_{m=2} ^\infty c_m m H_{m-2} (x) \] and \[ g_2(x) = \sum_{m=2} ^\infty c_m H_{m-2} (x). \] More generally we can show that for any $k,l \ge 1$, \[ g_k^{(l)} = \sum_{i=0}^l \binom{l}{i}\alpha_{k,i} T_{k+i}(g^{(l-i)}), \] where $\alpha_{k,i}= (-1)^i k(k+1)\cdots(k+i-1)$, with the convention $\alpha_{k,i}=1$ if $i=0$. \subsection{Brascamp-Lieb inequality} In this subsection we recall a version of the rank-one Brascamp-Lieb inequality that will be intensively used through this paper (see \cite{barthe,bcct,bl} and the references therein). This inequality constitutes a generalization of both H\"older's and Young's convolution inequalities. \begin{prop} \label{prop2.10} Let $2\le M\le N$ be fixed integers. Consider nonnegative measurable functions $f_j: \mathbb{R} \rightarrow \mathbb{R}_+$, $1\le j\le N$, and fix nonzero vectors ${\bf v_j} \in \mathbb{R}^M$. Fix positive numbers $p_j$, $1\le j\le N$, verifying the following conditions: \begin{itemize} \item[(i)] $\sum_{j=1}^N p_j = M$, \item[(ii)] For any subset $I\subset \{1,\dots, M\}$, we have $\sum_{j\in I} p_j \le {\rm dim} \left( {\rm Span} \{ {\bf v}_j, j\in I \} \right)$. \end{itemize} Then, there exists a finite constant $C$, depending on $N, M$ and the $p_j$'s such that \begin{equation} \label{BL} \sum_{ {\bf k} \in \mathbb{Z}^M} \prod_{j=1} ^N f_j({\bf k} \cdot {\bf v} _j) \le C \prod_{j=1} ^N \left( \sum_{ k \in \mathbb{Z}} f_j (k)^{ 1/p_j} \right) ^{p_j}. \end{equation} \end{prop} \subsection{Stein's method} Let $h: \mathbb{R} \to \mathbb{R}$ be a Borel function such that $h \in L^1(\mathbb{R}, \gamma)$. The ordinary differential equation \begin{equation} \label{stein} f'(x) - xf(x) = h(x) - \mathbb{E}(h(Z)) \end{equation} is called Stein's equation associated with $h$. The function \[ f_h(x):= e^{x^2/2}\int_{-\infty}^x (h(y) - \mathbb{E}(h(Z)))e^{-y^2/2} dy \] is the unique solution to the Stein's equation satisfying $\lim_{|x| \to \infty} e^{-x^2/2} f_h(x) = 0$. Moreover, if $h$ is bounded, $f_h$ satisfies \begin{equation} \label{equ81} \|f_h \|_\infty \leq \sqrt{\frac{\pi}{2}} \|h - \mathbb{E}(h(Z)) \|_\infty \end{equation} and \begin{equation} \label{equ82} \|f_h' \|_\infty \leq 2\|h - \mathbb{E}(h(Z)) \|_\infty \end{equation} (see \cite{np-book} and the references therein). We recall that the total variation distance between the laws of two random variables $F,G$ is defined by \[ d_{\rm TV}(F,G) = \sup_{B \in \mathcal{B}(\mathbb{R})}|P(F \in B) - P(G \in B)| \,, \] where the supremum runs over all Borel sets $B \subset \mathbb{R}$. Substituting $x$ by $F$ in Stein's equation (\ref{stein}) and using the inequalities (\ref{equ81}) and (\ref{equ82}) lead to the fundamental estimate \begin{equation} \label{equ83} d_{\rm TV}(F,Z) = \sup_{f\in \mathcal{C}^1(\mathbb{R}), \| f\|_\infty \le \sqrt{ \pi/2}, \| f' \|_\infty \le 2 } | \mathbb{E} (f'(F)- Ff(F)) | \,. 
\end{equation} \section{Basic estimates for the total variation distance} In the framework of an isonormal Gaussian process $\mathbb{X}$, we can use Stein's equation to estimate the total variation distance between a random variable $F = \delta(u)$ and $Z$. First let us recall the following basic result (see \cite{np-book}), which is an easy consequence of (\ref{equ83}) and the duality relationship (\ref{dua}). \begin{prop}\label{stein.hd0} Assume that $u\in {\rm Dom} \,\delta$, $F=\delta(u) \in \mathbb{D}^{1,2}$ and $\mathbb{E}(F^2)=1$. Then, \begin{eqnarray*} d_{\rm TV} (F,Z) \le 2 \mathbb{E}( |1-\langle DF, u \rangle_{\mathfrak{H}}|) \,. \end{eqnarray*} \end{prop} Notice that, applying the duality relationship (\ref{dua}), we can write \[ \mathbb{E}(\langle DF, u \rangle_{\mathfrak{H}})= \mathbb{E}( F \delta (u))= \mathbb{E}(F^2) = 1. \] As a consequence, if $F \in \mathbb{D}^{2,2}$, we apply Cauchy-Schwarz and Poincar\'e inequalities to derive the following estimate \begin{equation} d_{\rm TV} (F,Z) \leq 2 \sqrt{\mathbb{E}(1-\langle DF, u \rangle_{\mathfrak{H}})^2} = 2 \sqrt{{\rm Var}(D_uF)} \leq 2 \sqrt{\mathbb{E}( \|D(D_uF)\|^2_{\mathfrak{H}})} \label{2bound} \,, \end{equation} where we have used the notation $D_uF=\langle u, DF \rangle_{\mathfrak{H}}$. We will also write $ D_u^{i+1} F = \langle u, D(D_u^i F) \rangle_{\mathfrak{H}} $ for $i \ge 1$. Furthermore, if the random variable $F$ admits higher order derivatives, iterating the integration by parts argument we can improve the bound (\ref{2bound}) as follows. \begin{prop}\label{stein.hd} Assume that $u\in {\rm Dom} \,\delta$, $F=\delta(u) \in \mathbb{D}^{3,2}$ and $\mathbb{E}(F^2)=1$. Then \begin{eqnarray*} d_{\rm TV} (F,Z) & \leq & (8+\sqrt{32\pi}) \, \mathbb{E}(\|D(D_uF)\|^2_{\mathfrak{H}}) + \sqrt{2\pi} \, |\mathbb{E} (F^3)| \\ & & + \ \sqrt{32\pi}\,\mathbb{E}(|D_u^2F|^2 )+ 4\pi \,\mathbb{E}(|D_u^3 F|) \,. \end{eqnarray*} \end{prop} \begin{proof} Fix a continuous function $h: \mathbb{R} \rightarrow [0,1]$. Using Stein's equation (\ref{stein}), there exists a function $f_h\in \mathcal{C}^1(\mathbb{R})$ such that $\|f_h\|_\infty \leq \sqrt{\frac{\pi}{2}}$ and $\|f_h'\|_\infty \leq 2$, satisfying \[ I:=|\mathbb{E}(h(F)) - \mathbb{E}(h(Z))| = |\mathbb{E}(f_h'(F) - Ff_h(F))| \,. \] Applying the duality relationship (\ref{dua}), yields $$I=|\mathbb{E}(f_h'(F)(1-\langle DF, u \rangle_{\mathfrak{H}}))| \,.$$ Taking into account that $\mathbb{E}(\langle DF, u \rangle_{\mathfrak{H}} )= \mathbb{E}(F^2) = 1$, we have \[ I=|\mathbb{E}\left((f_h'(F) - \mathbb{E}(f_h'(Z)))(1-\langle DF, u \rangle_{\mathfrak{H}})\right)| \,. \] Let $f_\varphi$ be the solution to Stein's equation associated with the function $\varphi= f'_h$. Then, we have \[ I= \left|\mathbb{E}\left((f_{\varphi}'(F) - Ff_{\varphi}(F))(1-\langle DF, u \rangle_{\mathfrak{H}})\right)\right| \] where $\|f_{\varphi}\|_\infty \leq 4\sqrt{\pi/2}$ and $\|f_{\varphi}'\|_\infty \leq 8$. Substituting $F$ by $\delta (u)$ and applying again the duality relationship (\ref{dua}), yields \begin{eqnarray} I & = &\left|\mathbb{E}\left(f_{\varphi}'(F)(1-D_uF) - \langle u, D(f_{\varphi}(F)(1-D_uF))\rangle_{\mathfrak{H}}\right)\right| \nonumber\\ & = & \left|\mathbb{E}\left(f_{\varphi}'(F)(1-D_uF)^2\right) + \mathbb{E}\left(f_{\varphi}(F) D_u^2F\right)\right| \label{mid.ineq}\\ & \leq & 8 \mathbb{E}((1-D_uF)^2) + |\mathbb{E}\left((f_{\varphi}(F) - \mathbb{E}( f_{\varphi}(Z)))D_u^2 F \right)| + |\mathbb{E} (f_{\varphi}(Z) )\mathbb{E}(D_u^2 F)| \nonumber \\ & =: &I_1 + I_2 + I_3 \,. 
\nonumber \end{eqnarray} For the term $I_1$, we apply Poincar\'e inequality to get $$I_1 \leq 8 \mathbb{E}(\|D(D_uF)\|^2_{\mathfrak{H}})\,.$$ For the term $I_3$, taking into account that $$\mathbb{E} (D_u^2 F) = \mathbb{E}(\langle u, DF \rangle_{\mathfrak{H}} \delta(u)) = \frac{1}{2} \mathbb{E} (\langle u, DF^2 \rangle_{\mathfrak{H}} )= \frac{1}{2} \mathbb{E}(F^3 ),$$ we obtain $$I_3 \leq 2 \sqrt{\pi/2} |\mathbb{E}( F^3)| \,.$$ For the term $I_2$, applying Stein's equation associated with $\psi =f_{\varphi}$ yields \begin{eqnarray*} I_2 &=& \left|\mathbb{E}\left((f_{\psi}'(F) - Ff_{\psi}(F)) D_u^2F\right)\right| \\ & \leq & \left|\mathbb{E} \left(f_{\psi}'(F)(D^2_uF - D_uF D_u^2F)\right)\right| + \left|\mathbb{E} \left(f_{\psi}(F) D_u^3 F\right)\right|, \end{eqnarray*} where $f_{\psi}$ satisfies $\|f_{\psi}\|_\infty \leq 4\pi $ and $\|f_{\psi}'\|_\infty \leq 16\sqrt{\pi/2}$. Finally, $$\mathbb{E}(|D^2_uF - D_uF D_u^2F| )\leq \frac{1}{2}\left(\mathbb{E}(|D_u^2 F|^2) + \mathbb{E}(|1-D_uF|^2)\right) \leq \frac{1}{2}\left(\mathbb{E}(|D_u^2 F|^2) + \mathbb{E}(\|DD_uF\|^2_{\mathfrak{H}})\right) \,. $$ This concludes the proof of the proposition. \end{proof} If we bound \eqref{mid.ineq} in a different way, we would get the following estimate. \begin{prop}\label{stein.hd2} Assume that $u\in {\rm Dom} \,\delta$, $F=\delta(u) \in \mathbb{D}^{2,2}$ and $\mathbb{E}(F^2)=1$. Then \[ d_{\rm TV} (F,Z) \leq 8 \mathbb{E}((1-D_uF)^2 )+ \sqrt{8\pi} \mathbb{E}(|D_u^2 F|)\,. \] \end{prop} \section{Main results}\label{main.results} Consider a centered stationary Gaussian family of random variables $X=\{ X_n , n\in \mathbb{Z}\}$ with unit variance and covariance $\rho(k) = \mathbb{E}(X_0 X_k)$ for $k \in \mathbb{Z}$. Define the Hilbert space $\mathfrak{H}$ as the closure of the linear span of $\mathbb{Z}$ under the inner product $ \langle j,k \rangle_{\mathfrak{H}} = \rho(j-k)$. The mapping $k \rightarrow X_k$ can be extended to a linear isometry from $\mathfrak{H}$ to the closed linear subspace $L^2(\Omega)$ spanned by $X$. Then $\{X_{\varphi}, \varphi \in \mathfrak{H}\}$ is an isonormal Gaussian process. Consider the sequence $ Y_n := \frac{1}{\sqrt{n}} \sum_{j=1}^n g(X_j) $ introduced in (\ref{yn}), where $g\in L^2(\mathbb{R}, \gamma)$ has Hermite rank $d\ge 1$ and let $\sigma_n^2= \mathbb{E} (Y_n^2)$. Under condition (\ref{h1}), it is well known that as $n\to \infty$, $\sigma_n^2 \to \sigma^2$, where $\sigma^2$ has been defined in (\ref{bm.sig}). Along the paper, we will denote by $C$ a generic constant, whose value can be different from one formula to another one. Our aim is to establish estimates on the total variation distance between $Y_n/\sigma_n$ and $Z$. We will make use of the representation $Y_n = \delta (u_n)$, where \begin{equation} \label{un} u_n = \frac{1}{\sqrt{n}} \sum_{j=1}^n g_1(X_j) j, \end{equation} given by Lemma \ref{lem2.1}. Then, if $g\in \mathbb{D}^{2,2} (\mathbb{R},\gamma)$, by inequality \eqref{2bound} and taking into account that $\sigma_ n \rightarrow \sigma>0$, we have the estimate \begin{eqnarray} d_{TV} (Y_n / \sigma_n ,Z) &\leq& \frac{1}{\sigma_n^2}\sqrt{\mathbb{E} ( | \langle DY_n, u_n \rangle_{\mathfrak{H}} -\sigma_n^2| ^2 ) } \nonumber \\ &\leq & C\sqrt{\mathbb{E} \left( \left\| D (\langle DY_n, u_n \rangle_{\mathfrak{H}})\right \| ^2_{\mathfrak{H}} \right)}= C\sqrt{A_1}, \label{yn2.est} \end{eqnarray} where $A_1=\mathbb{E} (\|DD_{u_n} Y_n\|^2_{\mathfrak{H}})$. 
Furthermore, using Proposition \ref{stein.hd2}, we can write \begin{eqnarray} d_{TV} (Y_n / \sigma_n ,Z) & \le & \frac{8}{\sigma_n^4} \mathbb{E}(\|DD_{u_n} Y_n\|^2_{\mathfrak{H}}) + \frac{\sqrt{8\pi}}{\sigma_n^3} \sqrt{\mathbb{E}(|D_{u_n}^2 Y_n|^2)} \nonumber \\ & & \le C(A_1 + \sqrt{A_2}) \,, \label{dist.ynz2} \end{eqnarray} where $A_2=\mathbb{E}(|D_{u_n}^2 Y_n|^2)$ and where we recall that $D_{u_n} Y_n = \langle u_n, DY_n \rangle_{\mathfrak{H}}$ and $D^i_{u_n}Y_n = \langle u_n, D^{i-1}_{u_n}Y_n \rangle_{\mathfrak{H}}$ for $i\ge 2$. If $g\in \mathbb{D}^{3,2} (\mathbb{R},\gamma)$, using Proposition \ref{stein.hd}, we obtain \begin{eqnarray} d_{TV} (Y_n / \sigma_n ,Z) & \le & \frac{8+\sqrt{32\pi}}{\sigma_n^4} \mathbb{E}(\|DD_{u_n} Y_n\|^2_{\mathfrak{H}}) + \ \frac{\sqrt{32\pi}}{\sigma_n^6} \mathbb{E}(|D^2_{u_n}Y_n|^2 ) \nonumber \\ & & + \frac{\sqrt{2\pi}}{\sigma_n^3} |\mathbb{E}(Y_n^3)| + \frac{4\pi}{\sigma_n^4} \sqrt{\mathbb{E}(|D^3_{u_n} Y_n|^2)} \nonumber \\ & &\le C(A_1 + A_2 + A_3 + A_4) \,,\label{dist.ynz} \end{eqnarray} where $A_3=|\mathbb{E}(Y_n^3)|$ and $A_4=\sqrt{\mathbb{E}(|D^3_{u_n} Y_n|^2)}$. In the sequel we will derive estimates on the terms $A_i$, $i=1, \dots, 4$ in terms of the covariance function $\rho(k)$. We use the notation $A_i \prec A_j$ if $A_i$'s bound has a better convergence rate to zero than that of $A_j$. To get the best possible rate, we use the following strategy. If $g$ is just twice differentiable, we can use the estimates (\ref{yn2.est}) and (\ref{dist.ynz2}). Then we will compare the rates of the terms $A_1$ and $A_2$. If $A_1 \prec A_2$, we just use the bound \eqref{yn2.est}. Otherwise, \eqref{dist.ynz2} would be used. If $g$ has higher order derivatives, we would use the bound \eqref{dist.ynz} if $A_2 \prec \sqrt{A_1}$ and the rates of $A_3$ and $A_4$ are better than those of $\sqrt{A_2}$ and $ \sqrt{A_1}$. Otherwise, if the rate of either $A_3$ or $A_4$ is worse than that of $\sqrt{A_1}$ or $\sqrt{A_2}$, we consider the bound \eqref{dist.ynz2} or \eqref{yn2.est} depending on the comparison between $A_2$ and $A_1$. Before presenting the main results, we will derive some expressions and estimates for the terms $A_i$, $i=1,2,4$. To simplify the notation, we will write $\rho_{ij} = \rho(l_i-l_j)$ for any $1 \le i,j \le n$. \begin{lemma}\label{var.yn} Suppose that $g\in \mathbb{D}^{2,4} (\mathbb{R},\gamma)$. Then, \[ A_1 \leq \frac{2}{n^2} \sum_{i=1}^2 \sum_{l_1,l_2,l_3,l_4=1}^n \left| \mathbb{E}(I_i) \rho(l_1-l_2) \rho(l_3-l_4) \rho(l_2-l_4) \right|, \] where \begin{equation} I_1 = g''(X_{l_2}) g''(X_{l_4}) g_1(X_{l_1}) g_1(X_{l_3}) \,, \label{I1} \end{equation} and \begin{equation} I_2 = g'(X_{l_1}) g'(X_{l_3}) g'_1(X_{l_2}) g'_1(X_{l_4}) \label{I2} \,. \end{equation} \end{lemma} \begin{proof} First, we have \begin{eqnarray} \mathbb{E} \left( \left\| D (\langle DY_n, u_n \rangle_{\mathfrak{H}})\right \| ^2_{\mathfrak{H}} \right) \notag & \le& 2 \mathbb{E} \left( \left\| D^2Y_n\otimes _1 u_n \right \| ^2_{\mathfrak{H}} \right) +2\mathbb{E} \left( \left\| \langle D_*Y_n, Du_n(*) \rangle_{\mathfrak{H}}\right \| ^2_{\mathfrak{H}} \right), \label{A1} \end{eqnarray} where $D^2Y_n\otimes _1 u_n$ denotes the contraction of one variable between $D^2Y_n$ and $u_n$ and \[ \langle D_*Y_n, Du_n(*) \rangle_{\mathfrak{H}} =\sum_{i=1} ^\infty \langle DY_n ,e_i \rangle_{\mathfrak{H}} D( \langle u_n, e_i \rangle_{\mathfrak{H}}), \] with $\{e_i, i\ge 1\}$ being a complete orthonormal system in $\mathfrak{H}$. 
This implies, taking into account (\ref{un}), that \[ D^2Y_n \otimes _1 u_n =\frac 1n \sum_{j,k=1}^n g''(X_k) g_1(X_j) \rho(j-k) k \] and \[ \langle D_*Y_n, Du_n(*) \rangle_{\mathfrak{H}} =\frac 1n \sum_{j,k=1}^n g'(X_j) g_1'(X_k)\rho(j-k) k \,. \] As a consequence, \[ \left\| D^2Y_n \otimes _1 u_n\right \| ^2_{\mathfrak{H}} = \frac 1{n^2} \sum_{l_1,l_2,l_3,l_4=1}^n I_1 \rho(l_1-l_2) \rho(l_3-l_4) \rho(l_2-l_4), \] and \[ \left\|\langle D_*Y_n, Du_n(*) \rangle_{\mathfrak{H}}\right \| ^2_{\mathfrak{H}} = \frac 1{n^2} \sum_{l_1,l_2,l_3,l_4=1}^n I_2 \rho(l_1-l_2) \rho(l_3-l_4) \rho(l_2-l_4), \] which implies the desired result. \end{proof} Next we derive a simple estimate for the term $A_2$, assuming again that $g\in \mathbb{D}^{2,6} (\mathbb{R},\gamma)$. Notice that \[ D_{u_n}Y_n = \frac{1}{n} \sum_{l_1, l_2=1}^n g_1(X_{l_1})g'(X_{l_2}) \rho(l_1 - l_2) \,. \] Denote \begin{equation} \label{f1} f_{1}(l_1, l_2, l_3) = g_1'(X_{l_1})g'(X_{l_2}) g_1(X_{l_3}) \, \end{equation} and \begin{equation} \label{f2} f_{2}(l_1, l_2, l_3)=g_1(X_{l_1})g''(X_{l_2}) g_1(X_{l_3}) \,. \end{equation} Correspondingly, using the notation $\rho_{ij} = \rho(l_i-l_j)$, we can write \[ D^2_{u_n} Y_n = \frac{1}{\sqrt{n^3}} \sum_{l_1, l_2, l_3=1}^n \Big (f_{1}(l_1, l_2, l_3) \rho_{12} \rho_{13} + \ f_{2}(l_1, l_2, l_3) \rho_{12} \rho_{23}\Big) \,. \] Thus, \begin{eqnarray} A_2 = \mathbb{E}((D^2_{u_n}Y_n)^2) &\leq& \frac{2}{n^3} \sum_{l_1, \ldots, l_6=1}^n \Big( \mathbb{E} (f_{1}(l_1, l_2, l_3) f_{1}(l_4, l_5, l_6)) \rho_{12} \rho_{13} \rho_{45} \rho_{46} \nonumber\\ & & \qquad \quad + \ \mathbb{E} (f_{2}(l_1, l_2, l_3) f_{2}(l_4, l_5, l_6)) \rho_{12} \rho_{23} \rho_{45} \rho_{56} \Big) \,.\label{est.a2} \end{eqnarray} Finally, let us compute the term $A_4$, assuming $g\in \mathbb{D}^{3,8} (\mathbb{R},\gamma)$. We have \begin{eqnarray*} D^3_{u_n} Y_n &= & \frac{1}{n^2} \sum_{l_1, l_2, l_3,l_4=1}^n \sum_{i=1}^3 \Big (f_{1}^{(i)}(l_1, l_2, l_3) g_1(X_{l_4}) \rho_{12} \rho_{13}\rho_{i4} \\ && + f_{2}^{(i)}(l_1, l_2, l_3) g_1(X_{l_4}) \rho_{12} \rho_{23}\rho_{i4}\Big) \,, \end{eqnarray*} where \begin{eqnarray*} f_{1}^{(1)}(l_1, l_2, l_3) &= & g_1''(X_{l_1})g'(X_{l_2}) g_1(X_{l_3}) \,, \\ f_{1}^{(2)}(l_1, l_2, l_3) &= & g_1'(X_{l_1})g''(X_{l_2}) g_1(X_{l_3}) \,, \\ f_{1}^{(3)}(l_1, l_2, l_3) &= & g_1'(X_{l_1})g'(X_{l_2}) g'_1(X_{l_3}) \end{eqnarray*} and \begin{eqnarray*} f_{2}^{(1)}(l_1, l_2, l_3) &= & g_1'(X_{l_1})g''(X_{l_2}) g_1(X_{l_3}) \,, \\ f_{2}^{(2)}(l_1, l_2, l_3) &= & g_1(X_{l_1})g'''(X_{l_2}) g_1(X_{l_3}) \,, \\ f_{2}^{(3)}(l_1, l_2, l_3) &= & g_1(X_{l_1})g''(X_{l_2}) g'_1(X_{l_3}). \end{eqnarray*} Therefore, \begin{eqnarray} \nonumber A_4^2&=&\mathbb{E}((D^3_{u_n}Y_n)^2)\\ \nonumber & \leq& \frac{2}{n^4} \sum_{i=1}^3 \sum_{ j=1, \ldots, 8} \sum_{l_j=1}^n \mathbb{E} \left(f_{1}^{(i)}(l_1, l_2, l_3) g_1(X_{l_4}) f_{1}^{(i+4)}(l_5, l_6, l_7) g_1(X_{l_8})\right) \\ \nonumber && \times \rho_{12} \rho_{13}\rho_{i4} \rho_{56} \rho_{57}\rho_{(i+4) 8} \\ \nonumber &&+ \frac{2}{n^4} \sum_{i=1}^3 \sum_{ j=1, \ldots, 8} \sum_{l_j=1}^n \mathbb{E} \left(f_{2}^{(i)}(l_1, l_2, l_3) g_1(X_{l_4}) f_{2}^{(i+4)}(l_5, l_6, l_7) g_1(X_{l_8})\right) \\ \label{equ90} && \times \rho_{12} \rho_{23}\rho_{i4} \rho_{56} \rho_{67}\rho_{(i+4) 8}. \end{eqnarray} We are now ready to state and prove the main results of this paper. The notation is that of Theorem \ref{bm}. \subsection{Case $d=1$} \begin{thm}\label{main.d1} Let $d=1$ and $g \in \mathbb{D}^{2,4} (\mathbb{R},\gamma)$. Suppose that (\ref{h1}) holds true. 
Then \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq C n^{-\frac{1}{2}} \,. \] \end{thm} \begin{proof} We use the inequality \eqref{yn2.est} and we need to estimate the term $A_1$. By Lemma \ref{lem2.4}, H\"older's inequality and the fact that $g \in \mathbb{D}^{2,4} (\mathbb{R},\gamma)$, the quantities $I_1 $ and $ I_2$ have finite expectation. Then \[ A_1 \leq \frac{C}{n^2} \sum_{l_1, l_2, l_3, l_4 = 1}^n |\rho(l_1 - l_2) \rho(l_3 - l_4) \rho(l_2 - l_4)| \,. \] Making the change of variables $k_1= l_1 -l_2$, $k_2= l_3-l_4$, $ k_3= l_2-l_4$ and using condition (\ref{h1}) with $d=1$, we obtain \[ A_1 \leq \frac{C}{n} \sum_{|k_i| \leq n, 1\le i\le 3} |\rho(k_1)\rho(k_2)\rho(k_3)| \leq \frac{C}{n} \,, \] which provides the desired estimate. \end{proof} \subsection{Case of $d=2$} \begin{thm}\label{main.d12} Let $d=2$ and suppose that (\ref{h1}) holds true. \begin{itemize} \item[(i)] If $g \in \mathbb{D}^{2,4} (\mathbb{R},\gamma)$, we have \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq C n^{-\frac{1}{2}} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{\frac{3}{2}} \,. \] \item[(ii)] If $g \in \mathbb{D}^{3,4} (\mathbb{R},\gamma)$, we have \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq C n^{-\frac{1}{2}} \sum_{|k| \leq n} |\rho(k)| \,. \] \item[(iii)] If $g \in \mathbb{D}^{4,4} (\mathbb{R},\gamma)$, we have \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq C n^{-\frac{1}{2}} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{\frac{1}{2}} + C n^{-\frac{1}{2}} \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{4}{3}}\right)^{\frac 32} \,. \] \item[(iv)] If $g \in \mathbb{D}^{5,6} (\mathbb{R},\gamma)$, we have \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq C n^{-\frac{1}{2}} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{\frac{1}{2}} + C n^{-\frac{1}{2}} \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{3}{2}}\right)^{2} \,. \] \item[(v)] If $g \in \mathbb{D}^{6,8} (\mathbb{R},\gamma)$, we have \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq C n^{-\frac{1}{2}} \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{3}{2}}\right)^{2} . \] \end{itemize} \end{thm} \begin{remark} For $g\in \mathbb{D}^{6,8} (\mathbb{R},\gamma)$ the rate estalbished in point (v) coincides with the rate for the Hermite polynomial $g(x)=x^2-1$, obtained by Bierm\'e, Bonami, Nourdin and Peccati in \cite{bbnp} using the optimal bound for the total variation distance in the case of random variables in a fixed Wiener chaos derived by Nourdin and Peccati in \cite{np-15} (see Proposition \ref{bd.4m-2}). When the function $g $ belongs to $ \mathbb{D}^{i,4}(\mathbb{R},\gamma)$, $2\le i \le 4$ or $ g\in \mathbb{D}^{5,6}(\mathbb{R},\gamma)$, the rates we have obtained are worse than the rate for $g\in \mathbb{D}^{6,8} (\mathbb{R},\gamma)$. For $ g\in \mathbb{D}^{i,4}(\mathbb{R},\gamma)$, $i=2,3,4$, the estimates in points (i), (ii) and (iii) will be established using Proposition \ref{stein.hd0}, whereas, for $ g\in \mathbb{D}^{5,6}(\mathbb{R},\gamma)$ we will use Proposition \ref{stein.hd2} to derive the estimate in point (iv) and for $g\in \mathbb{D}^{6,8} (\mathbb{R},\gamma)$ we apply Proposition \ref{stein.hd}. \end{remark} \begin{proof}[Proof of Theorem \ref{main.d12}] The proof will be done in several steps. \medskip \noindent {\it Case $g \in \mathbb{D}^{2,4} (\mathbb{R},\gamma)$}. \quad We apply Lemma \ref{var.yn} to derive the rate of convergence of $A_1$. 
Using arguments similar to those in the case $d=1$ yields \begin{equation} \label{equ2} A_1 \leq \frac{C}{n} \sum_{|k_i| \leq n, 1\le i\le 3} |\rho(k_1)\rho(k_2)\rho(k_3) |= \frac{C}{n} \left(\sum_{|k| \leq n} |\rho(k)| \right)^3\,, \end{equation} which gives the desired estimate in view of \eqref{yn2.est}. We claim that, even if we impose more integrability conditions on the function $g$, that is, $g \in \mathbb{D}^{2,6} (\mathbb{R},\gamma)$, the estimate (\ref{dist.ynz2}) does not give a rate better than (\ref{equ2}). In fact, let us estimate the term $A_2$, which is bounded by the inequality \eqref{est.a2}, where $f_1$ and $f_2$ are defined in (\ref{f1}) and (\ref{f2}). The term $\mathbb{E} (f_{2}(l_1, l_2, l_3) f_{2}(l_4, l_5, l_6))$ cannot be integrated by parts because it involves $g''$ and $g$ is only twice weakly differentiable. Therefore, if $g\in \mathbb{D}^{2,6} (\mathbb{R},\gamma)$, using Lemma \ref{lem2.4} together with H\"older's inequality, and making a change of variables, we obtain \begin{eqnarray*} A_2 & \le & \frac C{n^3} \sum_{l_1, \ldots, l_6=1}^n \Big( | \rho_{12} \rho_{13} \rho_{45}\rho_{46} |+ |\rho_{12}\rho_{23}\rho_{45}\rho_{56}| \Big) \\ &\le & \frac Cn \sum_{ |k_i | \le n , 1\le i\le 4} \prod_{i=1}^4 |\rho(k_i) | = \frac Cn \left(\sum_{|k| \le n} | \rho(k)|\right)^4. \end{eqnarray*} Thus, $A_1 \prec A_2$, so we use (\ref{yn2.est}) and (\ref{equ2}) gives the best rate. \medskip \noindent {\it Case $g \in \mathbb{D}^{3,4}(\mathbb{R},\gamma)$}. \quad Let us first estimate the term $A_1$. Because $g$ has three derivatives, using Lemma \ref{var.yn} and Lemma \ref{cov.pth1}, we obtain \[ A_1 \leq \frac{C}{n^2} \sum_{l_1, l_2, l_3, l_4 = 1}^n |\rho(l_1 - l_2) \rho(l_3 - l_4) \rho(l_2 - l_4)| \sum_{j \neq 1} |\rho(l_1 - l_j)| \,. \] Making the change of variables $l_1-l_2=k_1$, $l_2-l_4=k_2$ and $l_3-l_4=k_3$, yields \begin{eqnarray*} A_1 & \leq & \frac{C}{n} \sum_{|k_i| \leq n} \Big( |\rho^2(k_1) \rho(k_2) \rho(k_3)| + |\rho(k_1) \rho(k_2) \rho(k_3) \rho(k_1 + k_2)| \\ & & \ \ + \ |\rho(k_1) \rho(k_2) \rho(k_3) \rho(k_1 + k_2 - k_3)| \Big). \end{eqnarray*} Taking into account condition (\ref{h1}) and applying (\ref{equ21}) with $M=3$, yields \begin{equation} \label{equ1} A_1 \leq \frac Cn \left(\sum_{|k| \leq n} |\rho(k)|\right)^2, \end{equation} which gives the desired estimate in view of \eqref{yn2.est}. Again, we claim that imposing more integrability conditions and using either (\ref{dist.ynz2}) or the more refined estimate (\ref{dist.ynz}) does not improve the above rate. Indeed, let us first estimate the term $A_2$, assuming $g\in \mathbb{D}^{3,6}(\mathbb{R},\gamma)$. Because $g$ is three times weakly differentiable, we can integrate by parts once in the expectations appearing in \eqref{est.a2}. The two summands in \eqref{est.a2} are similar, thus it suffices to consider the first one. Recall that $f_{1}(l_1, l_2, l_3) = g_1'(X_{l_1})g'(X_{l_2}) g_1(X_{l_3}) $ has been defined in (\ref{f1}). 
Using the representation $g'(X_{l_2})= \delta(T_1(g')(X_{l_2}) l_2)$, applying the duality relationship (\ref{dua}), and making a change of variables, we obtain \begin{eqnarray*} A_2 &\leq& \frac{C}{n^3} \sum_{l_1, \ldots, l_6=1}^n \Big( \rho_{12}^2 |\rho_{13} \rho_{45}\rho_{46} |+ |\rho_{12}\rho_{13}\rho_{45}\rho_{46}\rho_{23}| +| \rho_{12}\rho_{13}\rho_{45}\rho_{46}| \sum_{i=4}^6 |\rho_{2i}| \Big) \\ &\leq& \frac{C}{n^2} \sum_{ |k_i | \le n \atop 1\le i\le 5} \Big(\rho(k_1)^2 \prod_{i=2}^4 |\rho(k_i)| + |\rho(k_1 - k_2)|\prod_{i=1}^4 |\rho(k_i) | + \prod_{i=1}^5| \rho(k_i)| \Big) \,. \end{eqnarray*} This implies, using (\ref{equ21}) with $M=4$ for the second summand, that \[ A_2 \le \frac Cn\left(\sum_{|k|\leq n} |\rho(k)|\right)^3 + \frac C {n^2}\left(\sum_{|k|\leq n} |\rho(k)|\right)^5 \le \frac Cn\left(\sum_{|k|\leq n} |\rho(k)|\right)^3, \] where we have used the fact that $\sum_{|k|\leq n} |\rho(k)| \le C \sqrt{n}$ in the second inequality. Clearly, $A_1 \prec A_2$. So the estimate (\ref{yn2.est}) is better than (\ref{dist.ynz2}). On the other hand, the estimate (\ref{dist.ynz}) does not provide a rate better than (\ref{yn2.est}), because $ \sqrt{A_1} \prec A_3$. Indeed, let us estimate the term $A_3$. We know that \[ A_3 = |\mathbb{E} (Y_n^3)| = n^{-\frac{3}{2}}\left|\sum_{l_1, l_2, l_3=1}^n \mathbb{E} \left( \prod_{i=1}^3 g(X_{l_i}) \right) \right| . \] Using the representation $g(X_{l_1}) =\delta ^2( g_2(X_{l_1}) l_1^{\otimes 2})$ and applying twice the duality relationship (\ref{dua}), we obtain \begin{eqnarray*} A_3& \leq & C n^{-\frac{3}{2}} \sum_{l_1, l_2, l_3=1}^n \Big( | \mathbb{E} ( g_2(X_{l_1}) g''(X_{l_2}) g(X_{l_3}) )| \rho_{12}^2 \\ && +2 |\mathbb{E} ( g_2(X_{l_1}) g'(X_{l_2}) g'(X_{l_3}) ) \rho_{12} \rho_{13} |+ | \mathbb{E} ( g_2(X_{l_1}) g(X_{l_2}) g''(X_{l_3}) )| \rho_{13}^2 \Big). \end{eqnarray*} Because $g$ is three times differentiable, we can still use the representations $g(X_{l_3}) =\delta ( g_1( X_{l_3}) l_3)$, $g'(X_{l_2})=\delta ( T_1(g')( X_{l_2}) l_2)$ and $g(X_{l_2}) =\delta ( g_1( X_{l_2}) l_2)$, and apply the duality relationship (\ref{dua}) again to produce an additional factor of the form $|\rho_{13}|+| \rho_{23}|$ for the first term and $|\rho_{12}|+| \rho_{23}|$ for the second and third terms. In this way, we obtain \[ A_3 \le C n^{-\frac{3}{2}} \sum_{l_1, l_2, l_3=1}^n \Big( |\rho_{12}^2 \rho_{13}| + |\rho_{12}\rho_{13}\rho_{23}|\Big) \,. \] We make the change of variables $\rho_{12}= \rho(k_1) $, $ \rho_{13} = \rho(k_2)$ and apply (\ref{equ6}) with $M=2$ to the second summand to obtain $$A_3 \leq C n^{-\frac{1}{2}} \sum_{|k|\leq n} |\rho(k)| + C n^{-\frac{1}{2}} \left(\sum_{|k|\leq n} |\rho(k)|^{\frac{3}{2}}\right)^2 \,.$$ Clearly, by \eqref{equ7}, this bound is not better than the bound we have previously obtained for $\sqrt{A_1}$, and (\ref{equ1}) gives the result in this case. \medskip \noindent{\it Case $g \in \mathbb{D}^{4,4}(\mathbb{R},\gamma)$}. \quad As before, let us first estimate the term $A_1$. Taking into account that $g$ has four derivatives, by the results of Lemma \ref{var.yn} and Lemma \ref{cov.pth1} and using the notation $\rho(l_i - l_j) = \rho_{ij}$, we have \[ A_1 \leq \frac{C}{n^2} \sum_{l_1, l_2, l_3, l_4 = 1}^n |\rho_{12} \rho_{34} \rho_{24}| \left((|\rho_{12}|+ |\rho_{14}| )\sum_{j \neq 3}| \rho_{j3}| + |\rho_{13}| \right) \,. 
\] We further write \begin{eqnarray} A_1 & \leq & \frac{C}{n^2} \sum_{1\le l_i \le n, 1\le i \le 4} \Big( \rho_{12}^2 \rho_{34}^2 |\rho_{24} |+ \rho_{12}^2 |\rho_{34} \rho_{24} \rho_{13}| + \rho_{12}^2| \rho_{34} \rho_{24} \rho_{23}| +| \rho_{12} \rho_{34}^2\rho_{24} \rho_{14}| \nonumber \\ & & \ + | \rho_{12} \rho_{34} \rho_{24} \rho_{14} \rho_{23}| +| \rho_{12} \rho_{34} \rho_{24} \rho_{14} \rho_{13}| + |\rho_{12}\rho_{34}\rho_{24}\rho_{13}| \Big) \label{a1.d2}\\ & \leq & \frac{C}{n^2} \sum_{1\le l_i \le n, 1\le i \le 4} \rho_{12}^2 \rho_{34}^2 |\rho_{24}| + \rho_{12}^2 |\rho_{34} \rho_{24} \rho_{23}| + |\rho_{12}\rho_{34}\rho_{24}\rho_{13}| \nonumber \,. \end{eqnarray} For the second inequality in \eqref{a1.d2}, we have used that the third and fourth summands are equal and the fact that $|\rho_{ij}|\le 1$. By a change of variables, we obtain \begin{eqnarray} A_1 & \leq & \frac{C}{n} \sum_{|k_i| \leq n , 1\le i \le 3} \Big( \rho^2(k_1) \rho^2(k_2)| \rho(k_3) |+ \rho^2(k_1) |\rho(k_2) \rho(k_3) \rho(k_2 - k_3) |\nonumber\\ & & \ \ + | \rho(k_1) \rho(k_2) \rho(k_3) \rho(k_1 - k_2 + k_3)| \Big) \label{a1.c4f}\,. \end{eqnarray} Using condition (\ref{h1}) and applying inequality (\ref{equ21}) with $M=2$ to handle the second summand and inequality (\ref{equ6}) with $M=3$ for the third summand, yields \begin{equation}\label{a1.d24} A_1 \leq \frac Cn \sum_{|k| \leq n} |\rho(k)| + \frac Cn \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{4}{3}}\right)^3 \,. \end{equation} This gives the desired estimate in view of \eqref{yn2.est}. As in the previous cases, we will show that, even with stronger integrability assumptions, using either (\ref{dist.ynz2}) or (\ref{dist.ynz}) does not improve the above rate. For this, consider first the term $A_2$, assuming $g\in \mathbb{D}^{4,6} (\mathbb{R},\gamma)$. Because $g$ has four derivatives, we can apply twice the duality relationship (\ref{dua}). Recall that the term $A_2$ is bounded by \eqref{est.a2} and it suffices to consider the first summand in the right-hand side of this inequality. We write it here for convenience \begin{equation}\label{est.a2.exp} A_{21}:= \frac{2}{n^3} \sum_{l_1, \ldots, l_6=1}^n \mathbb{E} (f_{1}(l_1, l_2, l_3) f_{1}(l_4, l_5, l_6)) \rho_{12} \rho_{13} \rho_{45} \rho_{46}, \end{equation} where $f_{1}(l_1, l_2, l_3) $ has been defined in (\ref{f1}). Notice that the functions $g'$ and $g_1$ have Hermite rank $1$. We first write $g'(X_{l_2})=\delta( T_1(g')(X_{l_2}) l_2)$ and apply duality with respect to this divergence producing factors of the form $\rho_{2i}$, $i\not =2$, $1\le i \le 6$. Next we choose another function that has Hermite rank $1$ among the factors $g_1(X_{l_3})$, $g'(X_{l_5})$ and $ g_1(X_{l_6})$, write it as a divergence integral and apply duality again to obtain: \begin{eqnarray}\label{est.a22_4} |\mathbb{E} (f_{1}(l_1, l_2, l_3) f_{1}(l_4, l_5, l_6))| \leq C \sum_{i=1 \atop i \not =2}^6 \sum_{s\in\{3,5,6\} \atop s\not =i} \sum_{j=1 \atop j\not =s }^6 |\rho_{2i} \rho_{sj}|. \end{eqnarray} Applying inequality (\ref{equ9}) in Lemma \ref{a3.c4} yields \begin{equation} \label{equ40} A_2 \le 2 A_{21} \leq \frac Cn \left(\sum_{|k| \leq n} |\rho(k)|\right)^2 \,. \end{equation} By the inequality (\ref{equ7}) with $M=3$, we get that $A_1 \prec A_2$. Next we will compare this estimate with the bound we can obtain for the term $A_3$ using the fact that $g$ has four derivatives. 
We can write \begin{eqnarray} A_3 &= & | \mathbb{E}( Y_n^3)| = C n^{-\frac{3}{2}}\left|\sum_{l_1, l_2, l_3=1}^n \mathbb{E} \left( \prod_{i=1}^3 g(X_{l_i}) \right) \right| \nonumber\\ & \leq & C n^{-\frac{3}{2}} \sum_{l_1, l_2, l_3=1}^n \Big( \rho_{12}^2(|\rho_{13}|+|\rho_{23} |)^2+ |\rho_{12}\rho_{13}|(|\rho_{23}|+|\rho_{12}| (|\rho_{13}| + |\rho_{23}|)) \nonumber \\ &&+ \rho_{13}^2( |\rho_{12}| +|\rho_{23}| ) ^2 \Big) \nonumber\\ & \leq & C n^{-\frac{3}{2}} \sum_{l_1, l_2, l_3=1}^n \Big( |\rho_{12}^2 \rho^2_{13}| + |\rho_{12}\rho_{13}\rho_{23}| \Big) \label{a3.c4f} \,. \end{eqnarray} Note that $n^{-\frac{3}{2}} \sum_{l_1, l_2, l_3=1}^n |\rho_{12}^2 \rho^2_{13}| = C n^{-\frac{1}{2}}$. We make the change of variables $\rho_{12}\to \rho(k_1), \rho_{13} \to \rho(k_2)$ and apply (\ref{equ6}) to the second summand, to obtain \begin{equation}\label{a3.d24} A_3 \leq C n^{-\frac{1}{2}} \left(\sum_{|k|\leq n} |\rho(k)|^{\frac{3}{2}}\right)^2 \,. \end{equation} By (\ref{ho1}) with $M=3$ and (\ref{ho2}), we obtain that $A_1\prec A_3$. By \eqref{ho3}, we have $A_3 \prec \sqrt{A_1}$. However, we cannot use the bound \eqref{dist.ynz} since the relationship between $\sqrt{A_1}$ and $A_2$ is not clear, because the sequences $n^{-\frac{1}{2}} (\sum_{|k| \leq n} |\rho(k)|)^{\frac{1}{2}}$ and $n^{-1} (\sum_{|k| \leq n}|\rho(k)|)^2$ are not comparable. An example could be $\rho(k) \sim k^{-\alpha}$ for $\alpha \in (\frac 12, \frac 23)$. So, we use the bound \eqref{yn2.est} that is given by (\ref{a1.d24}). \medskip \noindent{\it Case $g \in \mathbb{D}^{5,6} (\mathbb{R},\gamma)$}. \quad For the terms $A_1$ and $A_3$ we still have the estimates \eqref{a1.d24} and \eqref{a3.d24}. For the term $A_2$, we continue with the inequalities \eqref{est.a2.exp} and \eqref{est.a22_4}, and apply the duality for the third time to $\mathbb{E} (f_{1}(l_1, l_2, l_3) f_{1}(l_4, l_5, l_6))$ when there is a factor with Hermite rank $1$, to obtain \[ |\mathbb{E} (f_{1}(l_1, l_2, l_3) f_{1}(l_4, l_5, l_6))| \leq C \sum_{\substack{i \neq s \neq j \\ i,s,j \in \{3,5,6\}}}|\rho_{2i} \rho_{sj}| + \ C \sum_{(i,s,j,t,h) \in D_3} |\rho_{2i} \rho_{sj} \rho_{th}| \,, \] where \begin{equation} \label{d3} D_3= \{ (i,s,j,t,h): j,h \in \{1,\dots, 6\}; s,t \in \{3,5,6\}; i\not =2, s\not \in \{ i,j\}; t \not \in \{i,j,h\} \}. \end{equation} By inequality (\ref{equ10}) in Lemma \ref{a3.c4}, \begin{equation} \label{equ3} A_2 \leq \frac Cn \sum_{|k| \leq n} |\rho(k)| + \frac Cn \left(\sum_{|k|\leq n} |\rho(k)|^{\frac{3}{2}}\right)^4 \,. \end{equation} From (\ref{a1.d24}), (\ref{equ3}) and (\ref{ho3}) we deduce that $A_2 \prec A_1$ and, therefore, $ A_1+ \sqrt{A_2} \prec \sqrt{A_1}$. Therefore, (\ref{dist.ynz2}) gives a better rate than (\ref{yn2.est}), which is given by \begin{equation} \label{equ4} A_1 + \sqrt{A_2} \le C n^{-\frac 12 } \left(\sum_{|k| \leq n} |\rho(k)| \right)^{\frac 12} + C n^{-\frac 12} \left(\sum_{|k|\leq n} |\rho(k)|^{\frac{3}{2}}\right)^2. \end{equation} Clearly, $A_3 \prec A_1 + \sqrt{A_2} $. Whether we choose \eqref{dist.ynz2} or \eqref{dist.ynz} depends on the computation of $A_4$, where we need to assume $g\in \mathbb{D}^{5,8}(\mathbb{R},\gamma)$. Consider the second summand in the expression (\ref{equ90}) denoted by \begin{eqnarray} (A_{42})^2 & :=& \frac{2}{n^4} \sum_{l_j=1, j=1, \ldots, 8}^n \sum_{i=1}^3 \mathbb{E} \left(f_{2}^{(i)}(l_1, l_2, l_3) g_1(X_{l_4}) f_{2}^{(i+4)}(l_5, l_6, l_7) g_1(X_{l_8})\right) \nonumber \\ && \times \rho_{12} \rho_{23}\rho_{i4} \rho_{56} \rho_{67}\rho_{(i+4) 8} \,. 
\label{est.a4} \end{eqnarray} Taking into account that $g$ has five derivatives and the terms $f_2^{(2)}$ and $f_2^{(6)}$ involve $g'''$, we can apply duality twice using the factors that have Hermite rank $1$. In this way, we get the following item in the bound of $A_{42}$: \[ \sqrt{\frac{C}{n^4} \sum_{|l_j|=1, j=1, \ldots, 8}^n \rho_{12}^2 \rho_{13} \rho_{24} \rho_{56}^2 \rho_{67} \rho_{68}}, \] which gives the rate $\frac{1}{n} \left(\sum_{|k| \leq n} |\rho(k)|\right)^2$. This rate cannot always be better than that of $A_1+\sqrt{A_2}$ bound since the sequences $\frac{1}{n} \left(\sum_{|k| \leq n} |\rho(k)|\right)^2$ and $n^{-\frac{1}{2}} \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{3}{2}} \right)^2$ are not comparable. An example could be $\rho(k) \sim k^{-\alpha}$ for $\alpha \in (\frac{1}{2}, \frac{2}{3})$. This suggests us using the bound \eqref{dist.ynz2} that is given by (\ref{equ4}). \medskip \noindent{\it Case $g \in \mathbb{D}^{6,8}(\mathbb{R},\gamma)$}. \quad For the terms $A_1$, $A_2$ and $A_3$, we still have the estimates \eqref{a1.d24}, \eqref{equ3} and \eqref{a3.d24}. Let us now study the term $A_4$ given by (\ref{equ90}). The terms $f_2^{(2)}$ and $f_2^{(6)}$ involve $g'''$ and they can be integrated by parts three times. Therefore, we are going to use only three integration by parts. On the other hand, the terms $f_2^{(2)}$, $f_2^{(6)}$ , $f_1^{(1)}$ and $f_1^{(4)}$ have two factors with Hermite rank one that can be represented as divergences, but the other terms have only one. All these terms are similar, with the only difference being the number of factors with Hermite rank one. We will handle only the term $f_1^{(1)}$ that has two factors with Hermite rank one and the term $f_1^{(2)}$ that has only one. The other terms could be treated in a similar way. In this way, for the term $f_1^{(1)}$, we obtain, after integrating by parts three times, \[ \left| \mathbb{E} \left(f_{1}^{(1)}(l_1, l_2, l_3) g_1(X_{l_4}) f_{1}^{(5)}(l_5, l_6, l_7) g_1(X_{l_8})\right) \right| \leq C \sum_{ (i,s,j,t,h) \in D_4} |\rho_{2i} \rho_{sj} \rho_{th}|, \] where \begin{equation} \label{d4} D_4= \left\{ (i,s,j,t,h): 1\le i,j,h \le 8; s,t \in \{ 3,4,6,7,8\}; i \not =2; s \not \in \{i,j\}; t \not \in \{i,s,j,h\} \right\}. \end{equation} On the other hand, for the term $f_1^{(2)}$, we obtain, after integrating by parts three times, \begin{eqnarray*} && \left| \mathbb{E} \left(f_{1}^{(2)}(l_1, l_2, l_3) g_1(X_{l_4}) f_{1}^{(6)}(l_5, l_6, l_7) g_1(X_{l_8})\right) \right| \\ &\leq& C \sum_{\substack{i \neq s \neq j\\ i, s, j\in\{4,7,8\}}} |\rho_{3i} \rho_{sj}| + C \sum_{ (i,s,j,t,h) \in D_5} |\rho_{3i} \rho_{sj} \rho_{th}|, \end{eqnarray*} where \begin{equation} \label{d5} D_5= \left\{ (i,s,j,t,h): 1\le i,j,h \le 8; s,t \in \{ 4,7,8\}; i \not =3; s \not \in \{i,j\}; t \not \in \{i,s,j,h\} \right\}. \end{equation} By Lemma \ref{a4.c6} and Lemma \ref{a4.c6-2}, we obtain \[ A_4 \le \frac{C}{n} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{\frac 32} +\frac{C}{n} \left(\sum_{|k|\leq n}|\rho(k)|^{\frac{4}{3}}\right)^3. \] Then, from (\ref{ho1}) with $M=3$ and (\ref{ho2}), we deduce $A_4 \prec A_3$. We already know that $A_2 \prec A_1 \prec A_3 \prec \sqrt{A_2}$. Also using (\ref{ho3}) it follows that $A_3 \prec \sqrt{A_1}$. Thus, we use \eqref{dist.ynz} for the bound of $d_{\rm TV}(Y_n/\sigma_n, Z)$ which is given by the estimate \eqref{a3.d24} of the term $A_3$. 
\\ \end{proof} \subsection{Case $d \geq 3$} \begin{thm} \label{thm3.4} Assume $g \in \mathbb{D}^{3d-2,4}(\mathbb{R},\gamma)$ has Hermite rank $d\ge 3$ and suppose that (\ref{h1}) holds true. Then we have the following estimate \begin{eqnarray} d_{\rm TV} (Y_n / \sigma_n ,Z) \le C n^{-\frac{1}{2}} \sum_{|k| \leq n} |\rho(k)|^{d-1} \left(\sum_{|k| \leq n} |\rho(k)|^{2}\right)^{\frac{1}{2}} \nonumber\\ + C n^{-\frac{1}{2}} \left(\sum_{|k|\leq n}|\rho(k)|^2\right)^{\frac{1}{2}} \left(\sum_{|k|\leq n}|\rho(k)|\right)^{\frac{1}{2}}\label{equ70a}\,. \end{eqnarray} \end{thm} \begin{proof}[Proof of Theorem \ref{thm3.4}] Inequality (\ref{equ70a}) will be established using Proposition \ref{stein.hd0} that is specifically expressed as \eqref{yn2.est}. The proof will be done in two steps. \medskip \noindent{\it Step 1:} First, we consider the case when $g$ is the Hermite polynomial $H_d$. By Lemma \ref{var.yn} and Lemma \ref{cov.pth2}, we have \[ A_1\le \frac{C}{n^2} \sum_{l_1, l_2, l_3, l_4=1}^n |\rho(l_1 - l_2)^{\beta_1} \rho(l_3 - l_4)^{\beta_2} \rho(l_2 - l_4)^{\beta_3} \rho(l_1 - l_3)^{\beta_4} \rho(l_1 - l_4)^{\beta_5} \rho(l_2 - l_3)^{\beta_6} |, \] where the $\beta_i$'s satisfy $\sum_{i=1}^6 \beta_i = 2d$, $\beta_2 + \beta_3 + \beta_5 = d$, $\beta_1 + \beta_3 + \beta_6 = d$, $\beta_1 + \beta_4 + \beta_5 = d$, $\beta_2 + \beta_4 + \beta_6 = d$ and $\beta_j \geq 1$ for $j=1,2,3$. Making the change of variables, $l_i - l_4 \to k_i$, $i=1,2,3$ yields \[ A_1 \le \frac{C}{n} \sum_{k_1, k_2, k_3=1}^n |\rho(k_1 - k_2)^{\beta_1} \rho(k_3)^{\beta_2} \rho(k_2)^{\beta_3} \rho(k_1 - k_3)^{\beta_4} \rho(k_1)^{\beta_5} \rho(k_2 - k_3)^{\beta_6} |\,. \] Applying the Brascamp-Lieb inequality (\ref{BL}), we can write \[ A_1 \leq \frac Cn \prod_{i=1}^6 \left(\sum_{|k_i| \leq n} |\rho(k_i)|^{\frac{\beta_i}{p_i}}\right)^{p_i}, \] where the $p_i$'s satisfy $ \sum_{i=1}^6 p_i = 3$, $ p_i \leq 1$, $p_1 + p_3 + p_5 \leq 2$, $ p_2 + p_3 + p_6 \leq 2$, $ p_2 + p_4 + p_5 \leq 2$ and $p_1 + p_4 + p_6 \leq 2$. The restriction of $\beta_i$ could be further simplified as $$\beta_1 = \beta_2, \beta_3 = \beta_4, \beta_5 = \beta_6, \beta_1 + \beta_3 + \beta_5 = d, \ {\rm and} \ \beta_1, \beta_3 \geq 1 \,.$$ Then we choose $p_1 = p_2, p_3 = p_4, p_5 = p_6$ to obtain \begin{equation}\label{a1d} A_1 \leq \frac Cn \left(\prod_{i=1,3,5} \left(\sum_{|k_i| \leq n} |\rho(k_i)|^{\frac{\beta_i}{p_i}}\right)^{p_i} \right)^2 \,. \end{equation} We are going to choose $p_i = \frac{\beta_i}{d-1} + \epsilon_i$ for $i=1,3,5$, where the $\epsilon_i$'s satisfy $\epsilon _i \ge 0$ and $\frac{d}{d-1} + \sum_{i=1,3,5} \epsilon_i = \frac 32$. To choose the values of the $\epsilon _i$'s we consider two cases. Set $\delta = \frac{1}{2} - \frac{1}{d-1}$. \begin{itemize} \item[(i)] Suppose that $\delta \le 1- \frac {\beta_1} {d-1}$. Then, we take $\epsilon_1=\delta$ and $\epsilon_3 = \epsilon_5 = 0$ and we obtain $p_1= \frac {\beta_1} {d-1} +\frac 12 -\frac 1{d-1}$, $p_3=\frac{\beta_3} {d-1}$ and $p_5=\frac{\beta_5} {d-1}$. \item[(ii)] Suppose that $\delta \ge 1- \frac {\beta_1} {d-1}$. Then, we take $\epsilon_1= 1-\frac {\beta_1} {d-1} $ and $\epsilon_3 = \delta - \epsilon_1$ and $\epsilon_5 = 0$ and we obtain $p_1=1$, $p_3=\frac{\beta_3} {d-1}+\frac{\beta_1} {d-1} -\frac 12 -\frac 1{d-1}$ and $p_5=\frac{\beta_5} {d-1}$. \end{itemize} It is easy to show that these $p_i$'s satisfy the desired conditions and, furthermore, $\beta_i \geq 2 p_i$ for $i=1,3,5$. 
This allows us to choose the pairs $(\alpha_i, \gamma_i)$ that satisfy the following equations \begin{equation}\label{ab.eqs}\frac{\alpha_i}{2} + \frac{\gamma_i}{d-1} = 1, \ {\rm and} \ \alpha_i + \gamma_i = \frac{\beta_i}{p_i} \,. \end{equation} Then H\"older inequality implies $$\sum_{|k| \leq n} |\rho(k)|^{\frac{\beta_i}{p_i}} \leq \left(\sum_{|k| \leq n} |\rho(k)|^2\right)^{\frac{\alpha_i}{2}} \left(\sum_{|k| \leq n} |\rho(k)|^{d-1}\right)^{\frac{\gamma_i}{d-1}} \,.$$ Then we plug this inequality into \eqref{a1d} and solve $\alpha_i, \gamma_i$ from \eqref{ab.eqs}. In this way, we obtain the inequality \begin{equation} \label{a1.equ4} A_1 \le \frac Cn \left(\sum_{|k| \leq n} |\rho(k)|^{d-1}\right)^2 \sum_{|k| \leq n} |\rho(k)|^{2} \,. \end{equation} \noindent {\it Step 2:} We consider the case $g \in \mathbb{D}^{3d-2}(\mathbb{R},\gamma)$. By Lemma \ref{var.yn} and Lemma \ref{cov.pth2}, we have \begin{equation} \label{equ96} A_1 \leq \frac{C}{n^2} \sum_{l_1, l_2, l_3, l_4=1}^n |\rho(l_1 - l_2)^{\beta_1} \rho(l_3 - l_4)^{\beta_2} \rho(l_2 - l_4)^{\beta_3} \rho(l_1 - l_3)^{\beta_4} \rho(l_1 - l_4)^{\beta_5} \rho(l_2 - l_3)^{\beta_6}| , \end{equation} where the $\beta_i$'s satisfy $\beta_i \leq d$, $\beta_j \geq 1$ for $j=1,2,3$, $ \sum_{i=1}^6 \beta_i \leq 3d-1$ and the lower bounds \begin{eqnarray*} \beta_2 + \beta_3 + \beta_5 &\geq &d, \\ \beta_1 + \beta_3 + \beta_6 &\geq &d, \\ \beta_1 + \beta_4 + \beta_5 &\geq &d, \\ \beta_2 + \beta_4 + \beta_6 &\geq &d. \end{eqnarray*} When all the above $\beta_i$'s inequalities attain the lower bound $d$, the right hand-side of (\ref{equ96}) coincides with the case when $g$ is the Hermite polynomial $H_d$. This case has been discussed in Step 1. On the other hand, if $\beta_1 \wedge \beta_2 + \beta_3 \wedge \beta_4 + \beta_5 \wedge \beta_6 \geq d$ and $\beta_3 \wedge \beta_4 \geq 1$, taking into account that $|\rho|\leq 1$, the right-hand side of (\ref{equ96}) is actually dominated by the case where all the $\beta_i$'s inequalities attain the lower bound $d$. Now we need to consider the all the other possible cases. In each case, we make the change of variables $l_1 - l_2=k_1, l_3 - l_4 = k_2, l_2 - l_4 = k_3 $. \medskip \noindent {\it (i) Case $\beta_4 = \beta_5 = \beta_6=0$}. \quad Then $\beta_1 = \beta_2 =d$, $\beta_3 = 1$. For these values of the $\beta_i$'s we can write the right hand-side of (\ref{equ96}) as \begin{eqnarray*} \frac{1}{n^2}\sum_{l_1, l_2, l_3, l_4=1}^n |\rho(l_1 - l_2)^d \rho(l_3 - l_4)^d \rho(l_2 - l_4)| &=& \frac{1}{n} \sum_{|k_i|\leq n, 1\le i\le 3}|\rho(k_1)|^d |\rho(k_2)|^d |\rho(k_3)| \\ &\leq &\frac{C}{n} \sum_{|k|\leq n} |\rho(k)| \,. \end{eqnarray*} \medskip \noindent {\it (ii) Case $\beta_4 = \beta_5 = 0$, $\beta_6 > 0$.} \quad Then $\beta_1 = d, \beta_2<d$, $\beta_2 + \beta_3 \geq d$ and $\beta_2 + \beta_6 \geq d$. Using \eqref{h1}, we can write \begin{eqnarray*} A_1 &\leq& \frac Cn \sum_{|k_i| \leq n, i = 2,3} |\rho(k_2)|^{\beta_2} |\rho(k_3)|^{\beta_3} | \rho(k_3 - k_2)|^{\beta_6} \\ &\leq& \frac Cn \sum_{|k_i| \leq n, i = 2,3} |\rho(k_2)|^{\beta_2} |\rho(k_3)|^{d - \beta_2} | \rho(k_3 - k_2)|^{d-\beta_2} \\ & \leq & \frac Cn \sum_{|k| \leq n} |\rho(k)|^{d - \beta_2} \leq \frac Cn \sum_{|k| \leq n} |\rho(k)|, \end{eqnarray*} where in the third inequality we have used \eqref{BL} with $p_1 = \frac{\beta_2}{d}$, $p_2 = 1$ and $p_3 = \frac{d-\beta_2}{d}$. \medskip \noindent {\it (iii) Case $\beta_4 = \beta_6 = 0, \beta_5 > 0$.} \quad This case is similar to (ii). 
\medskip \noindent {\it (iv) Case $\beta_5 = \beta_6 = 0, \beta_4 > 0$. } \quad Then $\beta_2 + \beta_3 \geq d$, $\beta_1 + \beta_3 \geq d$, $\beta_1 + \beta_4 \geq d$, $\beta_2 + \beta_4 \geq d$. It is easy to see that $\beta_1 \wedge \beta_2 + \beta_3 \wedge \beta_4 + \beta_5 \wedge \beta_6 \geq d$ and, furthermore, $\beta_3 \wedge \beta_4 \ge 1$. This situation has been discussed before and $A_1$ is dominated by the bound in the case where $g$ is the Hermite polynomial. \medskip \noindent {\it (v) $\beta_4 = 0, \beta_5 > 0, \beta_6 > 0$.} \quad Then $\beta_1<d$, $\beta_2<d$, $\beta_1 + \beta_5 \geq d, \beta_2 + \beta_6 \geq d$. As a consequence, we obtain \begin{eqnarray*} A_1 & \leq & \frac Cn \sum_{|k_i| \leq n, 1 \leq i \leq 3} |\rho(k_1)|^{\beta_1} |\rho(k_2)|^{\beta_2} |\rho(k_3)|^{\beta_3} |\rho(k_1 + k_3)|^{d- \beta_1} |\rho(k_3 - k_2)|^{d-\beta_2} \\ & \leq & \frac Cn \sum_{|k| \leq n} |\rho(k)|, \end{eqnarray*} where we have used \eqref{BL} with $p_i = \frac{\beta_i}{d}$ for $i=1,2$, $p_3 = 1$ and $p_{i+3} = \frac{d-\beta_i}{d}$ for $i=1,2$. \medskip \noindent {\it (vi) $\beta_5=0, \beta_4> 0, \beta_6 > 0$.} \quad Then $\beta_2 + \beta_3 \geq d, \beta_1 + \beta_4 \geq d$. This case is similar to (v). \medskip \noindent {\it (vii) $\beta_6=0, \beta_4> 0, \beta_5 > 0$.} \quad This case is similar to (v) and (vi). \medskip \noindent {\it (viii) $\beta_i > 0$ for all $1 \leq i \leq 6$, and $\beta_1 \wedge \beta_2 + \beta_3 \wedge \beta_4 + \beta_5 \wedge \beta_6 < d$. } \quad Without loss of generality, we may assume that $\beta_1 \leq \beta_2$. Taking into account that $\beta_1 + \beta_4 + \beta_5 \geq d$ and $\beta_1 + \beta_3 + \beta_6 \geq d$, there are only two cases: $\beta_3 \leq \beta_4, \beta_5 \leq \beta_6$; and $\beta_4 \leq \beta_3, \beta_6 \leq \beta_5$. These two cases are actually equivalent, because in the second case we can make the change of variable $l_3 - l_1 \to k_3$, instead of $l_2 - l_4 \to k_3$ as in the first case. Thus it suffices to consider the first case, i.e., \begin{eqnarray*} A_1 \leq \frac Cn \sum_{\substack{|k_i| \leq n, \\ 1 \leq i \leq 3}} |\rho(k_1)|^{\beta_1} |\rho(k_2)|^{\beta_2} |\rho(k_3)|^{\beta_3} |\rho(k_1 - k_2 + k_3)|^{\beta_4} |\rho(k_1 + k_3)|^{\beta_5} |\rho(k_3 - k_2)|^{\beta_6}, \end{eqnarray*} where $\beta_1 + \beta_3 + \beta_5 < d$ and $\beta_2 + \beta_4 + \beta_6 > d$, since $\sum_{i=1}^6 \beta_i \ge 2d$. Next we will apply the Brascamp-Lieb inequality \eqref{BL} according to several different subcases. \begin{itemize} \item [(1)] Suppose $\beta_1 \wedge \beta_3 \wedge \beta_5 = \beta_1$. Then, if $\sum_{i=2}^6 \beta_i \geq 2d$, the right-hand side of the above inequality is bounded by the case $\sum_{i=2}^6 \beta_i = 2d$ after decreasing the $\beta_i$'s, $i=2,4,6$, appropriately. We use \eqref{BL} with $p_1 = 1$, $p_i = \frac{\beta_i}{d}$ for $i\geq 2$, taking into account that $|\rho| \leq 1$, to obtain \begin{eqnarray*} A_1 \leq \frac Cn \sum_{|k| \leq n} |\rho(k)|^{\beta_1} \leq \frac Cn \sum_{|k| \leq n} |\rho(k)|\,.
\end{eqnarray*} If $\sum_{i=2}^6 \beta_i < 2d$, for which an example could be $\beta_1 = 2, \beta_3 = 2, \beta_5 = d-5, \beta_2 = 3, \beta_4 = 3, \beta_6 = d-4$, then taking into account $|\rho| \leq 1$, we obtain \begin{eqnarray*} A_1 &\leq& \frac Cn \sum_{\substack{|k_i| \leq n, \\ 1 \leq i \leq 3}} |\rho(k_1)|^{\beta_1} |\rho(k_2)|^{\beta_2} |\rho(k_3)|^{\beta_3} \\ & & \qquad \qquad |\rho(k_1 - k_2 + k_3)|^{\beta_4} |\rho(k_1 + k_3)|^{\beta_5} |\rho(k_3 - k_2)|^{\beta_6} \,, \end{eqnarray*} where $\beta_2 + \beta_4 + \beta_6 = d$ and also $\beta_1 + \beta_3 + \beta_5 < d$. Applying \eqref{BL} with $p_i = \frac{\beta_i}{\beta_1 + \beta_3}$ for $i=1,3$, $p_5 = 1$ and $p_i = \frac{\beta_i}{d}$ for $i=2, 4, 6$, we obtain \begin{eqnarray*} A_1 \leq \frac Cn \sum_{|k| \leq n} |\rho(k)|^{\beta_1 + \beta_3} \sum_{|k| \leq n}|\rho(k)|^{\beta_5} \leq \frac Cn \sum_{|k| \leq n} |\rho(k)|^2 \sum_{|k| \leq n}|\rho(k)|. \end{eqnarray*} \item [(2)] $\beta_1 \wedge \beta_3 \wedge \beta_5 = \beta_5$. We use the same approach as for the subcase (1). \item [(3)] $\beta_1 \wedge \beta_3 \wedge \beta_5 = \beta_3$. We follow the same methodology. When $\sum_{i \neq 3} \beta_i < 2d$, the arguments are the same. When $\sum_{i \neq 3} \beta_i \geq 2d$, since $d \leq \beta_1 + \beta_4 + \beta_5 < 2d$, we can decrease $\beta_2, \beta_6$ appropriately such that $\sum_{i \neq 3} \beta_i = 2d$, and at the same time this implies $\beta_2 + \beta_6 \leq d$. Then we use \eqref{BL} with $p_3 = 1$, $p_i = \frac{\beta_i}{d}$ for $i\neq 3$ to obtain \begin{eqnarray*} A_1 \leq \frac Cn \sum_{|k| \leq n} |\rho(k)|^{\beta_3} \leq \frac Cn \sum_{|k| \leq n} |\rho(k)|\,. \end{eqnarray*} \end{itemize} This completes the proof of the theorem. \end{proof} \begin{remark} In the case of the Hermite polynomial $g=H_d$, $d\ge 3$, the proof of Theorem \ref{thm3.4}, based on Proposition \ref{stein.hd0}, yields \begin{equation} \label{equ71} d_{\rm TV} (Y_n / \sigma_n ,Z) \le C n^{-\frac{1}{2}} \sum_{|k| \leq n} |\rho(k)|^{d-1} \left(\sum_{|k| \leq n} |\rho(k)|^{2}\right)^{\frac{1}{2}} \,. \end{equation} In this case, Proposition \ref{stein.hd} reduces to the computation of the third and fourth cumulants, and one can derive the following bound (see \cite{bbnp}), which is better than (\ref{equ71}): \[ d_{\rm TV} (Y_n / \sigma_n ,Z) \le \frac Cn \left( \sum_{|k| \leq n} |\rho(k)|^{d-1} \right)^2 \sum_{|k| \leq n} |\rho(k)|^2 + \frac{C}{\sqrt{n}} \left( \sum_{|k| \leq n} |\rho(k)|^{\frac {3d} 4} \right)^2 \mathbf{1}_{\{d \,\, {\rm even}\}} \,. \] However, applying Proposition \ref{stein.hd} to the case of a general function $g$ is a much harder problem and it will not be dealt with in this paper. \end{remark} Consider the particular case where $\rho(k) \sim k^{-\alpha}$, as $k$ tends to infinity, for some $\alpha>0$. Then, condition (\ref{h1}) is satisfied provided $\alpha d >1$. In this case, Theorems \ref{main.d1}, \ref{main.d12} and \ref{thm3.4} imply the following results. \begin{cor} \label{cor1} Suppose that $\rho(k) \sim k^{-\alpha}$, as $k$ tends to infinity, where $\alpha>0$ is such that $\alpha d >1$. Then, the following estimates hold true in the context of Theorem \ref{bm}: \begin{itemize} \item[(i)] If $g \in \mathbb{D}^{2,4} (\mathbb{R},\gamma)$ has Hermite rank $1$ and $\alpha > 1$, \[d_{\rm TV}(Y_n/\sigma_n, Z) \leq Cn^{-\frac{1}{2}} \,.
\] \item[(ii)] If $g \in \mathbb{D}^{2,4} (\mathbb{R},\gamma)$ has Hermite rank $2$ and $\alpha > \frac 23$, \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq \begin{cases} Cn^{-\frac{1}{2}} & {\rm if} \ \alpha > 1\,, \\ Cn^{-\frac{1}{2}} (\log n)^{\frac 32} & {\rm if} \ \alpha =1\,, \\ Cn^{1-\frac 32 \alpha} & {\rm if} \ \alpha \in (\frac 23, 1). \end{cases} \] \item[(iii)] If $g \in \mathbb{D}^{3,4} (\mathbb{R},\gamma)$ has Hermite rank $2$, \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq \begin{cases} Cn^{-\frac{1}{2}} & {\rm if} \ \alpha > 1\,, \\ Cn^{-\frac{1}{2}} \log n & {\rm if} \ \alpha =1\,, \\ Cn^{\frac{1}{2} -\alpha} & {\rm if} \ \alpha \in(\frac 12, 1). \end{cases} \] \item[(iv)] If $g \in \mathbb{D}^{4,4} (\mathbb{R},\gamma)$ has Hermite rank $2$, \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq \begin{cases} Cn^{-\frac{1}{2}} & {\rm if} \ \alpha > 1\,, \\ Cn^{-\frac{1}{2}}( \log n )^{\frac 12} & {\rm if} \ \alpha = 1\,, \\ Cn^{ -\frac \alpha 2} & {\rm if} \ \alpha \in(\frac 23, 1) \, ,\\ Cn^{ 1-2\alpha} & {\rm if} \ \alpha \in(\frac 12, \frac 23] \, . \end{cases} \] \item[(v)] If $g \in \mathbb{D}^{5,6} (\mathbb{R},\gamma)$ has Hermite rank $2$, \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq \begin{cases} Cn^{-\frac{1}{2}} & {\rm if} \ \alpha > 1\,, \\ Cn^{-\frac{1}{2}}( \log n )^{\frac 12} & {\rm if} \ \alpha = 1\,, \\ Cn^{-\frac \alpha 2} & {\rm if} \ \alpha \in(\frac 35, 1) \, ,\\ Cn^{ \frac 32-3\alpha} & {\rm if} \ \alpha \in(\frac 12, \frac 35] \, . \end{cases} \] \item[(vi)] If $g \in \mathbb{D}^{6,8} (\mathbb{R},\gamma)$ has Hermite rank $2$, \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq \begin{cases} Cn^{-\frac{1}{2}} & {\rm if} \ \alpha > \frac 23\,, \\ Cn^{-\frac{1}{2}}( \log n )^2 & {\rm if} \ \alpha = \frac 23\,, \\ Cn^{ \frac 32-3 \alpha} & {\rm if} \ \alpha \in(\frac 12, \frac 23) \, . \end{cases} \] \item[(vii)] If $g \in \mathbb{D}^{3d-2,4}(\mathbb{R},\gamma)$ has Hermite rank $d\ge 3$, \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq \begin{cases} Cn^{-\frac{1}{2}} & {\rm if} \ \alpha > 1\,, \\ Cn^{-\frac{1}{2}}( \log n )^{\frac 12} & {\rm if} \ \alpha = 1\,, \\ Cn^ { -\frac \alpha 2} & {\rm if} \ \alpha \in( \frac 12, 1) \,, \\ Cn^ { -\frac \alpha 2} \sqrt{\log n} & {\rm if} \ \alpha = \frac 12 \,, \\ Cn^{\frac 12 - \frac 32 \alpha} & {\rm if} \ \alpha \in (\frac{1}{2d-3}, \frac{1}{2}) \,, \\ Cn^ {1 - \alpha d} & {\rm if} \ \alpha \in( \frac 1d, \frac 1 {2d-3}] \,.\\ \end{cases} \] \item[(viii)] When $g=H_d$, $d\ge 3$, the bound \eqref{yn2.est} combined with the estimate \eqref{a1.equ4} yields \[ d_{\rm TV}(Y_n / \sigma_n, Z) \leq \begin{cases} Cn^{-\frac{1}{2}} & {\rm if} \ \alpha > \frac 12\,, \\ Cn^{-\frac{1}{2}}( \log n )^{\frac 12} & {\rm if} \ \alpha = \frac 12\,, \\ Cn^ { - \alpha } & {\rm if} \ \alpha \in( \frac 1 {d-1}, \frac 12) \,, \\ Cn^ { - \alpha } \log n & {\rm if} \ \alpha = \frac 1 {d-1} \,, \\ Cn^ {1 - \alpha d} & {\rm if} \ \alpha \in( \frac 1d, \frac 1{d-1}) \,.\\ \end{cases} \] \end{itemize} \end{cor} We remark that the bounds derived in point (viii) coincide with the estimates obtained by Bierm\'e, Bonami and Le\'on in \cite{bbl} using techniques of Fourier analysis. Corollary \ref{cor1} can be applied to any function $g$ with an expansion $g(x)= \sum_{m=d} ^ {d+k} c_m H_m(x) $ for any $k \geq 0$.
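For quick reference, the piecewise rates of Corollary \ref{cor1} are easy to encode programmatically. The following Python sketch implements the exponents of point (viii), the case $g=H_d$; the function name and interface are ours, introduced purely for illustration.
\begin{verbatim}
def tv_rate_hermite(alpha, d):
    """Exponents (e, m) such that d_TV(Y_n/sigma_n, Z) <= C n**e (log n)**m
    when g = H_d, d >= 3, and rho(k) ~ k**(-alpha), following point (viii)
    of Corollary 1. Returns None when alpha*d <= 1, in which case condition
    (h1) fails and the bound does not apply."""
    if d < 3 or alpha * d <= 1.0:
        return None
    if alpha > 0.5:
        return (-0.5, 0.0)
    if alpha == 0.5:
        return (-0.5, 0.5)
    if alpha > 1.0 / (d - 1):
        return (-alpha, 0.0)
    if alpha == 1.0 / (d - 1):
        return (-alpha, 1.0)
    return (1.0 - alpha * d, 0.0)  # here 1/d < alpha < 1/(d-1)

print(tv_rate_hermite(0.4, 4))  # (-0.4, 0.0): the bound is C * n**(-0.4)
\end{verbatim}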
\section{Application to fractional Brownian motion} Recall that the fractional Brownian motion (fBm) $B=\{B_t, t \in \mathbb{R}\}$ with Hurst parameter $H \in (0,1)$ is a zero mean Gaussian process, defined on a complete probability space $(\Omega, \mathcal{F},P)$, with the covariance function \[ \mathbb{E}(B_s B_t) = \frac{1}{2} (|s|^{2H} + |t|^{2H} - |s-t|^{2H}) \,. \] The fractional noise defined by $X_j = B_{j+1} - B_j$, $j \in \mathbb{Z}$, is an example of a Gaussian stationary sequence with unit variance. The covariance function is given by \[ \rho_H(j) = \frac 12 \left( |j+1|^{2H} + |j-1|^{2H} -2|j|^{2H} \right). \] Notice that, for $H \neq \frac 12$, $\rho_H(j)$ behaves as $H(2H-1) j^{2H-2}$ as $j\rightarrow \infty$. Thus, this covariance function has a power decay at infinity with $\alpha = 2-2H$. Consider the sequence $Y_n$ defined by \[ Y_n = \frac{1}{\sqrt{n}} \sum_{j=1}^n g(B_{j+1} - B_j)\,, \] where $g\in L^2(\mathbb{R}, \gamma)$ has Hermite rank $d\ge 1$. As a consequence, the estimates obtained in Corollary \ref{cor1} hold with $\alpha =2-2H$. \subsection{Application to the asymptotic behavior of power variations} For any $p\ge 1$, the power variation of the fBm on the time interval $[0,1]$ is given by \[ V_n^p(B)= \sum_{j=0}^{n-1} \left|B_{\frac{j+1}{n}} - B_{\frac{j}{n}}\right|^p \,. \] By the self-similarity property of fBm, the sequence $\{ n^{H} (B_{\frac{j+1}{n}} - B_{\frac{j}{n}}), j\ge 0\}$ has the same distribution as $\{B_{j+1} - B_j, j\ge 0\}$, which is stationary and ergodic. By the Ergodic Theorem, we have, as $n \to \infty$, \[ n^{pH-1} V_n^p(B) \to c_p \] almost surely and in $L^q(\Omega)$ for any $q\ge 1$, where $c_p = \mathbb{E}( |Z|^p)$. Moreover, when $H \in (0, \frac{3}{4})$, using the fact that the function $g(x)= |x|^p -c_p$ has Hermite rank $2$, the Breuer-Major theorem leads to the following central limit theorem \begin{equation} \label{equ70} S_n:= \sqrt{n}\left(n^{pH-1} V_n^p(B) - c_p\right) \to N(0, \sigma^2_{H,p}), \end{equation} where $\sigma^2_{H,p} = \sum_{m=2} ^\infty c_m^2 m! \sum_{k\in \mathbb{Z}} \rho_H(k) ^m$, with $|x|^p -c_p =\sum_{m=2} ^\infty c_m H_m(x)$. A functional version of this central limit theorem can also be proved (see \cite{cnw}). We can apply the results obtained in Section 3 to derive the rate of convergence for the total variation distance in (\ref{equ70}). Indeed, the sequence $S_n$ has the same distribution as \[ Y_n = \sqrt{n}\left(\frac{1}{n}\sum_{j=1}^n \left|B_{j+1} - B_{j}\right|^p - c_p\right) \,, \] and it suffices to consider the fractional noise $X_j = B_{j+1} - B_{j}$ and the function $g(x) = |x|^p - c_p$, which has Hermite rank $2$. More precisely, if $N \leq p < N+1$, where $N \geq 2$ is an integer, then the function $g$ belongs to $\mathcal{D}^N:= \cap_{q\ge 1}\mathbb{D}^{N,q} (\mathbb{R},\gamma)$ and Corollary \ref{cor1} gives the rate of convergence to zero of $ d_{\rm TV}(S_n/\sigma_n, Z)$ with $\alpha =2-2H$. Here are some examples. \\ \noindent {\it Example 1:} Let $p=2.5$ and $\sigma_n^2 = \mathbb{E}(S_n^2) = \mathbb{E}(Y_n^2)$. Then $g \in \mathcal{D}^2$ and $$d_{\rm TV}(S_n/\sigma_n, Z) \leq \begin{cases} C n^{-\frac{1}{2}} & {\rm if} \ H \in (0, \frac{1}{2}) \,,\\ C n^{-\frac{1}{2}} (\log n)^\frac{3}{2} & {\rm if} \ H=\frac{1}{2} \,,\\ C n^{3H-2} & {\rm if} \ H \in (\frac{1}{2}, \frac{2}{3}) \,. \end{cases}$$ \noindent {\it Example 2:} Let $p=3$ and $\sigma_n^2 = \mathbb{E}(S_n^2) = \mathbb{E}(Y_n^2)$.
Then $g \in \mathcal{D}^3$ and $$d_{\rm TV}(S_n/\sigma_n, Z) \leq \begin{cases} C n^{-\frac{1}{2}} & {\rm if} \ H \in (0, \frac{1}{2}) \,,\\ C n^{-\frac{1}{2}}\log n & {\rm if} \ H=\frac{1}{2} \,,\\ C n^{2H - \frac{3}{2}} & {\rm if} \ H \in (\frac{1}{2}, \frac{3}{4}) \,. \end{cases}$$ \noindent {\it Example 3:} Let $p=4$ and $\sigma_n^2 = \mathbb{E}(S_n^2) = \mathbb{E}(Y_n^2)$. Then $g \in \mathcal{D}^4$ and $$d_{\rm TV}(S_n/\sigma_n, Z) \leq \begin{cases} C n^{-\frac{1}{2}} & {\rm if} \ H \in (0, \frac{1}{2}) \,,\\ C n^{-\frac{1}{2}} \sqrt{\log n} & {\rm if} \ H=\frac{1}{2} \,,\\ C n^{H-1} & {\rm if} \ H \in (\frac{1}{2}, \frac{2}{3}] \,,\\ C n^{4H-3} & {\rm if} \ H \in (\frac{2}{3}, \frac{3}{4}) \,. \end{cases}$$ \subsection{Application to the estimation of the Hurst parameter} As an application of the convergence rates of power variations, we establish the consistency of the estimator of the Hurst parameter $H$ for the fBm, defined by means of $p$-power variations. This problem has been studied for $H>\frac{1}{2}$ using quadratic variations in the papers \cite{bcij, il97,km12,tv09} and the references therein. In the paper \cite{co10}, a consistent estimator based on the $p$-power variation is introduced, defined as $$\tilde H = \frac{\log C_p - \log (n^{-1} V_n^p(B))}{p\log n} \,,$$ where the specific constant $C_p$ depends on $p$. In the paper \cite{co10}, the author also discusses other filters to define the power variation and obtains the convergence rate $n^{-\frac 12}\log n$. Here we construct another estimator based on the $p$-power variation, which is motivated by the papers \cite{bcij,km12}, where the quadratic variation is used. Let $\lambda \in \mathbb{N}$, $\lambda > 1$, be a scaling parameter. Fix $p\ge 2$, and consider the statistic $T_{\lambda,n}$ defined by \[ T_{\lambda, n} := \frac{V^p_{\lambda n} (B)}{V^{p}_n (B)} = \frac{\sum_{j=0}^{\lambda n-1}\left|B_{\frac{j+1}{\lambda n}} - B_{\frac{j}{\lambda n}}\right|^p}{\sum_{j=0}^{n-1}\left|B_{\frac{j+1}{n}} - B_{\frac{j}{n}}\right|^p} \,. \] Then we propose the following estimator for the Hurst parameter $H$: \begin{equation}\label{H.est} \hat H_{\lambda, n} = \frac{1}{p}\left(1 - \frac{\log T_{\lambda, n}}{\log \lambda}\right) \,. \end{equation} In the next proposition we show the consistency of this estimator; a numerical sketch of its computation is given at the end of the Appendix. Although the consistency could also be obtained directly from the ergodic theorem, we will apply the main results of this paper to prove the consistency together with a rate of convergence. \begin{prop} When $H \in (0, \frac{3}{4})$, for $p \in \{2\} \cup [3, \infty) $, $$\lim_{n \to \infty} \sqrt{\frac{n}{\log n}}\left(\hat H_{\lambda, n} - H\right) = 0\,,$$ in probability. \end{prop} \begin{proof} Denote $\alpha_n = n^{-1+pH} V^{p}_n (B)$. Then $$\log \alpha_{\lambda n} - \log \alpha_n = (-1+pH)\log \lambda + \log T_{\lambda, n} \,.$$ Thus \begin{equation}\label{h.est} \hat H_{\lambda, n} - H = - \frac{\log \alpha_{\lambda n} - \log \alpha_n}{p \log \lambda} \,. \end{equation} Let $\sigma_n^2 = \mathbb{E}[(\sqrt{n}(\alpha_n - c_p))^2]$. By the previous results, we know that $\sqrt{n}(\alpha_n - c_p) \to \sigma_{H,p} Z$ in distribution, that $\sigma_n^2 \to \sigma^2_{H,p}$, and that $$d_{\rm TV}(\frac{\sqrt{n}(\alpha_n - c_p)}{\sigma_n}, Z) < n^{-a}$$ for some $a>0$.
Then for any $\epsilon > 0$, \begin{eqnarray*} P\left(\left|\frac{\sqrt{n}(\alpha_n - c_p)}{\sigma_n}\right| > \epsilon \sqrt{\log n}\right) \leq P(|Z| > \epsilon \sqrt{\log n}) + n^{-a} \\ \leq \frac{C_\epsilon}{n^{\frac{\epsilon^2}{2}}\sqrt{\log n}} + n^{-a}, \end{eqnarray*} where we have used the estimate for the tail of a standard Gaussian random variable, i.e., $P(Z > x) \leq \frac{e^{-x^2/2}}{x\sqrt{2\pi}}$. This implies that $\frac{\sqrt{n}(\alpha_n - c_p)}{\sqrt{\log n}} \to 0$ in probability as $n \to \infty$. Going back to equation \eqref{h.est}, note that $\log \alpha_n - \log c_p = \frac{1}{\alpha_n^*} (\alpha_n - c_p)$ for some $\alpha_n^*$ between $\alpha_n$ and $c_p$. These results are true for $\alpha_{\lambda n}$ as well, so we conclude that $\sqrt{\frac{n}{\log n}}\left(\hat H_{\lambda, n} - H\right) \to 0$ in probability. \end{proof} \section{Appendix} In this section we show some technical lemmas that play a crucial role in the proofs of our main results. \begin{lemma}\label{cov.pth1} Under the notation and assumptions of Theorem \ref{bm}, let $I_1$ and $I_2$ be the random variables defined in (\ref{I1}) and (\ref{I2}), respectively. Suppose $d=2$. Then we have the following estimates. \begin{enumerate} \item If $g \in \mathbb{D}^{3,4} (\mathbb{R},\gamma)$, then for $i=1,2$, we have $$|\mathbb{E}(I_i)| \leq C \sum_{j \neq 1} |\rho(l_1 - l_j)| \,.$$ \item If $g \in \mathbb{D}^{4,4} (\mathbb{R},\gamma)$, then for $i=1,2$, we have \begin{equation} |\mathbb{E}(I_i)| \leq C |(\rho(l_1 - l_2) + \rho(l_1 - l_4)) \sum_{j \neq 3} \rho (l_j - l_3) + \rho(l_1 - l_3)| \,. \label{cov.pth1.eq1} \end{equation} \item If $g$ is the Hermite polynomial $x^2-1$, then $$|\mathbb{E}(I_i)| \leq C |\rho(l_1 - l_3)| \,.$$ \end{enumerate} \end{lemma} \begin{proof} We first consider the term $I_1$. Observe that \[ g_1(X_{l_1})= \delta (g_2(X_{l_1})l_1). \] Applying the duality relationship (\ref{dua}), we obtain \begin{eqnarray*} \mathbb{E}(I_1) &=& \sum_{a+b+c=1}\mathbb{E}(g^{(a+2)}(X_{l_2}) g^{(b+2)}(X_{l_4}) (g_1)^{(c)} (X_{l_3}) g_2(X_{l_1})) \\ & & \ \times \langle l_1,l_2^{\otimes a} \otimes l_4^{\otimes b} \otimes l_3^{\otimes c} \rangle_{\mathfrak{H}} \,. \end{eqnarray*} When $g$ is the Hermite polynomial $x^2-1$, we just need to consider the case $a=0$, $ b=0$ and $c=1$. In this way we get \[ |\mathbb{E}( I_1)| \leq C |\rho(l_1 - l_3)| \,. \] When $g \in \mathbb{D}^{3,4} (\mathbb{R},\gamma)$, we obtain \[ |\mathbb{E} (I_1)| \leq C \sum_{j \neq 1}| \rho(l_1 - l_j)| \,. \] When $g \in \mathbb{D}^{4,4} (\mathbb{R}, \gamma)$, in the case of $c=0$, we apply duality again to obtain \begin{eqnarray*} \mathbb{E} (I_1) &=& \sum_{a+b=1} \sum_{a'+b'+c'=1} \mathbb{E}(g^{(a+a'+2)}(X_{l_2}) g^{(b+b'+2)}(X_{l_4}) g_2(X_{l_3}) g_2^{(c')}(X_{l_1})) \\ & & \times \langle l_1, l_2^{\otimes a} \otimes l_4^{\otimes b} \rangle_{\mathfrak{H}} \langle l_3, l_2^{\otimes a'} \otimes l_4^{\otimes b'} \otimes l_1^{\otimes c'} \rangle_{\mathfrak{H}} \,. \end{eqnarray*} Then the inequality \eqref{cov.pth1.eq1} for $i=1$ is derived by expanding the above identities. Similarly, for the term $I_2$, since $g'(X)$ has Hermite rank $1$, we can write \[ g'(X_{l_i}) = \delta\left( (g')_1(X_{l_i}) l_i\right) \,. \] Using this representation, we have \[ \mathbb{E}(I_2) = \mathbb{E}\left(\delta\left( (g')_1(X_{l_1}) l_1\right)\delta\left( (g')_1(X_{l_3}) l_3\right) g_1'(X_{l_2}) g_1'(X_{l_4})\right) \,. \] We use similar arguments as for the term $I_1$ to obtain the inequality \eqref{cov.pth1.eq1} for $i=2$.
\end{proof} \begin{lemma}\label{cov.pth2} Under the notation and assumptions of Theorem \ref{bm}, let $I_1$ and $I_2$ be the random variables defined in (\ref{I1}) and (\ref{I2}), respectively. Suppose $d \ge 3$. Then for $i=1, 2$, \begin{eqnarray*} |\mathbb{E}(I_i)| \leq C \sum_{\beta \in \mathcal{I}_1} |\rho(l_1-l_2)^{\beta_1} \rho(l_1-l_3)^{\beta_2} \rho(l_1-l_4)^{\beta_3} \rho(l_3-l_2)^{\beta_4} \rho(l_2-l_4)^{\beta_5} \rho(l_3-l_4)^{\beta_6}| \,, \end{eqnarray*} where $\beta=(\beta_1, \ldots, \beta_6)$, $\mathbb{N}_0 = \mathbb{N}\cup \{0\}$ and \begin{eqnarray} \nonumber \mathcal{I}_1 = \{\beta \in \mathbb{N}_0^6 &: & d-1 \leq \beta_1 + \beta_2 + \beta_3, \ d-1 \leq \beta_2 + \beta_4 + \beta_6 , \ d-2 \leq \beta_1 + \beta_4 + \beta_5, \\ \label{equ60} & & d-2 \leq \beta_3 + \beta_5 + \beta_6, \ \sum_{i=1}^6 \beta_i \leq 3d-4 \}. \end{eqnarray} Moreover, if $g$ is the Hermite polynomial $H_d$, we obtain \begin{equation} \label{equ94} |\mathbb{E}(I_i)| \le C \sum_{\beta \in \mathcal{I}_3} |\rho(l_1 - l_2)^{\beta_1} \rho(l_1 - l_3)^{\beta_2} \rho(l_1 - l_4)^{\beta_3} \rho(l_3 - l_2)^{\beta_3} \rho(l_2 - l_4)^{\beta_2-1} \rho(l_3 - l_4)^{\beta_1}| \,, \end{equation} where \[ \mathcal{I}_3= \{\beta=(\beta_1, \beta_2, \beta_3) \in \mathbb{N}^3 : \beta_1 + \beta_2 + \beta_3 = d-1 \}. \] \end{lemma} \begin{proof} We can represent the factor $g_1(X_{l_1})$ appearing in $I_1$ as $g_1(X_{l_1}) = \delta ^{d-1} ( g_d(X_{l_1}) l_1^{\otimes (d-1)})$. Then applying the duality relationship (\ref{dua2}) and Leibniz's rule yields \begin{eqnarray*} \mathbb{E}(I_1) & = & \sum_{a+b+c=d-1} \mathbb{E}\left(g^{(a+2)}(X_{l_2}) g^{(b+2)}(X_{l_4}) g_d(X_{l_1}) g_1 ^{(c)} (X_{l_3}) \right) \\ && \qquad \times \rho(l_1 - l_2)^a \rho(l_1 - l_4)^b \rho( l_1-l_3)^c \,. \end{eqnarray*} We write \[ g_1 ^{(c)} (X_{l_3}) = \delta^{d-1-c} ( T_{d-1-c} (g^{(c)}_1) (X_{l_3}) l_3 ^{\otimes (d-1-c)}). \] Then, applying again the duality relationship (\ref{dua2}) and Leibniz's rule, we obtain \begin{eqnarray*} \mathbb{E}(I_1) &=& \sum_{a+b+c=d-1} \sum_{a'+b'+c' = d-1-c} \mathbb{E} \Big(g^{(a+a'+2)}(X_{l_2}) g^{(b+b'+2)}(X_{l_4}) \\ && \qquad \times g_d^{(c')} (X_{l_1})T_{d-1-c} (g_1^{(c)})(X_{l_3})\Big) \\ && \qquad \times \rho(l_1 - l_2)^a \rho(l_1 - l_4)^b \rho(l_1 - l_3)^{c+c'} \rho(l_3 - l_2)^{a'}\rho(l_3 - l_4)^{b'}. \end{eqnarray*} We can still represent the factors $g^{(a+a'+2)}(X_{l_2})$ and $g^{(b+b'+2)}(X_{l_4})$ as divergences: \[ g^{(a+a'+2)}(X_{l_2}) = \delta^{d-(a+a'+2)}(T_{d-(a+a'+2)}(g^{(a+a'+2)})(X_{l_2}) l_2 ^{\otimes (d-(a+a'+2))} ) \] and \[ g^{(b+b'+2)}(X_{l_4}) = \delta^{d-(b+b'+2)}(T_{d-(b+b'+2)}(g^{(b+b'+2)})(X_{l_4}) l_4 ^{\otimes (d-(b+b'+2))} ). 
\] Then, we repeat the above process to obtain, using the fact that $g\in \mathcal{D}^{3d-2}$, \begin{eqnarray} \nonumber |\mathbb{E}(I_1)| &\leq& C \sum |\rho(l_1 - l_2)^{a+b''} \rho(l_1 - l_4)^{b+b'''} \rho(l_1 - l_3)^{c+c'} \\ && \qquad \times \rho(l_3 - l_2)^{a'+c''}\rho(l_3 - l_4)^{b'+c'''} \rho(l_2 - l_4)^{a''+a'''}|, \label{equ50} \end{eqnarray} where the sum runs over all nonnegative integers $a,b,c, a', b',c', a'', b'', c'', a''', b''', c'''$ satisfying \begin{eqnarray*} a+b+c &=& d-1 \\ a' +b'+c' &=& d-1-c \\ a'' +b''+c'' &=& (d-a'-a-2) \vee 0 \\ a'''+b'''+c''' &=& (d-b-b' -a'' -2) \vee 0 \,. \end{eqnarray*} Inequality (\ref{equ50}) can be equivalently written as \begin{eqnarray*} |\mathbb{E}(I_1)| \leq C \sum_{\beta \in \mathcal{I}_1} |\rho(l_1 - l_2)^{\beta_1} \rho(l_1 - l_3)^{\beta_2} \rho(l_1 - l_4)^{\beta_3} \rho(l_3 - l_2)^{\beta_4} \rho(l_2 - l_4)^{\beta_5} \rho(l_3 - l_4)^{\beta_6}| \,, \end{eqnarray*} where $\beta=(\beta_1, \ldots, \beta_6)$ and $\mathcal{I}_1 $ is the set defined in (\ref{equ60}). Notice that we have the lower bound $\sum_{i=1}^6 \beta_i \ge 2d-3$. On the other hand, the upper bound $\sum_{i=1}^6 \beta_i \le 3d-4$ is attained when $a=d-1$, $a'=d-1$, $a'''=d-2$ and the other numbers vanish. Taking into account that in this case the function $g''$ might be differentiated $3d-4$ times, we need $g\in \mathcal{D}^{3d-2}$. When $g$ is the Hermite polynomial $H_d$, $g_d = 1$ and $g_1 = H_{d-1}$, so we have $T_{d-1-c} (g_1^{(c)} )= (d-1)(d-2)\cdots (d-c)$. In this case, taking into account the orthogonality of Hermite polynomials of different orders, we obtain \begin{eqnarray*} |\mathbb{E}(I_1)| & \leq & C\sum_{a+b+c=d-1, a'+b'= d-1-c, a+a'= b+b'=\tilde c} |\rho(l_1 - l_2)^a \rho(l_1 - l_4)^b \rho(l_1 - l_3)^{c} \\ & & \ \times \rho(l_3 - l_2)^{a'}\rho(l_3 - l_4)^{b'} \rho(l_2 - l_4)^{d-2-\tilde c}| \,. \end{eqnarray*} Again this can be written as \begin{eqnarray*} |\mathbb{E}(I_1)| \leq C \sum_{\beta \in \mathcal{I}_2} |\rho(l_1 - l_2)^{\beta_1} \rho(l_1 - l_3)^{\beta_2} \rho(l_1 - l_4)^{\beta_3} \rho(l_3 - l_2)^{\beta_4} \rho(l_2 - l_4)^{\beta_5} \rho(l_3 - l_4)^{\beta_6}| \,, \end{eqnarray*} where $\mathcal{I}_2$ is the set of $\beta \in \mathbb{N}_0^6$ such that $\beta_1 + \beta_2 +\beta_3=d-1$, $\beta_4+ \beta_6 + \beta_2=d-1$ and $\beta_1 + \beta _4= \beta_3 + \beta_6 =d- 2-\beta_5$. This implies $\beta_1 =\beta_6 $, $\beta_3 =\beta_4$, $\beta_5 = \beta_2 -1$ and $\beta_1+ \beta_2 +\beta_3 =d-1$, and this completes the proof of (\ref{equ94}). Similar arguments can be applied to handle the term $I_2$. \end{proof} \begin{lemma}\label{a3.c4} Assume condition (\ref{h1}). Define \[ J_1= \frac 1{n^3} \sum_{l_1, \ldots, l_6=1}^n \sum_{i=1 \atop i \not =2}^6 \sum_{s\in\{3,5,6\} \atop s\not =i} \sum_{j=1 \atop j\not =s }^6 |\rho_{2i} \rho_{sj} \rho_{12}\rho_{13}\rho_{45}\rho_{46}| \] and \begin{eqnarray*} J_2 &:=& \frac 1{n^3} \sum_{l_1, \ldots, l_6=1}^n \Big(\sum_{\substack{i \neq s \neq j \\ i,s,j \in \{3,5,6\}}}|\rho_{2i} \rho_{sj} \rho_{12}\rho_{13}\rho_{45}\rho_{46}| \\ && + \sum_{ (i,s,j,t,h) \in D_3}\ |\rho_{2i} \rho_{sj} \rho_{th} \rho_{12}\rho_{13}\rho_{45}\rho_{46}| \Big),\\ \end{eqnarray*} where the set $D_3$ has been defined in (\ref{d3}) and we recall that $\rho_{ij} = \rho(l_i - l_j)$. Then, \begin{equation} \label{equ9} J_1 \le \frac Cn \left(\sum_{|k| \leq n} |\rho(k)|\right)^2 \end{equation} and \begin{equation} \label{equ10} J_2 \le \frac C n \sum_{|k| \leq n}|\rho(k)| + \frac Cn \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{3}{2}}\right)^4 \,.
\end{equation} \end{lemma} \begin{proof} {\it Step 1: } We show first the inequality (\ref{equ9}). We make the change of variables $l_1 - l_2 = k_1$, $l_1 - l_3 = k_2$, $l_4 - l_5 = k_3$, $l_4 - l_6=k_4$. We first consider the term $\rho_{2i}$, which has three possibilities: $\rho(k_1)$, $\rho(k_1 - k_2)$, or a new factor $\rho(k_5)$, where $k_5 = l_2 - l_i$ is linearly independent of $k_t, t=1, \ldots, 4$. If $\rho_{2i}$ falls into one of the first two cases, $\rho_{sj}$ has three possibilities: $\rho(k_m)$ for $m=2,3,4$; $\rho(k_1-k_2)$ or $\rho(k_3 - k_4)$; or a new factor $\rho(k_5)$, where $k_5 = l_j - l_s$ is linearly independent of $k_t$, $1\le t \le 4$. If $\rho_{2i}$ falls into the third case, i.e.\ it is a new factor, then $\rho_{sj}$ has several possibilities: $\rho(k_m)$ for $m=2,3,4$; or $\rho({\bf k} \cdot {\bf v})$, where ${\bf k} \cdot {\bf v}$ is a linear combination of two, three, four or five of the $k_t$'s, $1\le t \le 5$. Through this analysis, by taking advantage of the symmetry, we obtain \[ J_1 \leq \frac C{n^2} \sum_{i=1}^9 \sum_{|k_j| \leq n, 1\le j \le 5} |J_{1i}|, \] where \begin{eqnarray*} && J_{11} = \rho(k_1)^2 \rho(k_2)^2 \rho(k_3) \rho(k_4), \\ && J_{12} = \rho(k_1)^2 \rho(k_2) \rho(k_1 - k_2)\rho(k_3) \rho(k_4), \\ && J_{13} = \rho(k_1)^2 \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_3 - k_4), \\ && J_{14} = \rho(k_1)^2 \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_5),\\ && J_{15} = \rho(k_1) \rho(k_2) \rho(k_1-k_2) \rho(k_3) \rho(k_4) \rho(k_3-k_4), \\ && J_{16} = \rho(k_1) \rho(k_2) \rho(k_1-k_2) \rho(k_3) \rho(k_4) \rho(k_5),\\ && J_{17} = \rho(k_1) \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_5)\rho(k_1 - k_5 - k_2) ,\\ && J_{18} = \rho(k_1) \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_5) \rho(k_1 - k_2 + k_3 - k_4) ,\\ && J_{19} = \rho(k_1) \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_5) \rho(k_1 - k_2 + k_3 - k_4 + k_5). \end{eqnarray*} We claim that for $i=1,\dots, 9$, the following estimate holds true \begin{equation} \label{equ11} \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{1i}| \leq \frac{C}{n} \left(\sum_{|k| \leq n} |\rho(k)|\right)^2. \end{equation} The estimate (\ref{equ11}) holds clearly for $i=1$ and $i=4$ due to condition (\ref{h1}). By the Cauchy-Schwarz inequality we have \[ \sum_{|k_1|, |k_2| \leq n} \rho(k_1)^2 |\rho(k_2) \rho(k_1 - k_2)| <\infty \] and (\ref{equ11}) is true for $i=2$. For $i=3, 5, 6$, the estimate (\ref{equ11}) follows from (\ref{equ6}) and (\ref{equ7}) with $M=2$, and for $i=7,8,9$ we use these inequalities with $M=3,4,5$, respectively. \medskip \noindent {\it Step 2:} We proceed to prove the inequality (\ref{equ10}). Note that for the first summand in $J_2$, the product $\rho_{2i} \rho_{sj}$ can only be one of the following terms: $\rho_{23} \rho_{56}$, $\rho_{26} \rho_{35}$, or $\rho_{25} \rho_{36}$. In the first case, we obtain the term $J_{15}$, for which we have, by (\ref{equ6}) with $M=2$, \[ \frac 1{n^2} \sum_{|k_j| \leq n, 1 \le j\le 5} |J_{15}| \le \frac Cn \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{3}{2}}\right)^4. \] In the second and third cases, we obtain the term $J_{19}$, for which we have, by (\ref{equ6}) with $M=5$, \begin{equation} \label{equ13} \frac 1{n^2} \sum_{|k_j| \leq n, 1 \le j\le 5} |J_{19}| \le \frac C{n^2} \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{6}{5}}\right)^5. \end{equation} By H\"older's inequality, \begin{equation} \label{equ14} \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{6}{5}}\right)^5 \le n\left(\sum_{|k| \leq n} |\rho(k)|^{\frac{3}{2}}\right)^4, \end{equation} and we obtain the desired bound. Let us now consider the second summand in the expression of $J_2$.
This summand consists of terms of the form $ J_{1i} \rho_{th}$ for $i=1,\ldots, 4, 6, \ldots, 9$, where the argument of $\rho_{th}$ can be written as a linear combination of $k_1, \ldots, k_5$. For $i=6, \dots, 8$, we estimate the factor $|\rho_{th}|$ by one and apply the estimate (\ref{equ6}) with $M=2,3,4$ to obtain \begin{equation} \label{equ41} \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{16}| \le \frac C {n^2} \left(\sum_{ |k| \le n} |\rho(k)| \right)^3 \left(\sum_{ |k| \le n} |\rho(k)| ^{\frac 32}\right)^2, \end{equation} \begin{equation} \label{equ42} \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{17}| \le \frac C {n^2} \left(\sum_{ |k| \le n} |\rho(k)| \right)^2 \left(\sum_{ |k| \le n} |\rho(k)| ^{\frac 43}\right)^3, \end{equation} and \begin{equation} \label{equ43} \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{18}| \le \frac C {n^2} \left(\sum_{ |k| \le n} |\rho(k)| \right) \left(\sum_{ |k| \le n} |\rho(k)| ^{\frac 54}\right)^4. \end{equation} Then, from (\ref{equ41}) and \eqref{ho2}, we get \[ \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{16}| \le \frac C {n}\left(\sum_{ |k| \le n} |\rho(k)| ^{\frac 32}\right)^4. \] From (\ref{equ42}), (\ref{ho1}) with $M=3$ and (\ref{ho2}), \[ \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{17}| \le \frac C {n}\left(\sum_{ |k| \le n} |\rho(k)| ^{\frac 32}\right)^4. \] Finally, from (\ref{equ43}), (\ref{ho1}) with $M=4$ and the above inequality for $J_{17}$, \[ \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{18}| \le \frac C {n}\left(\sum_{ |k| \le n} |\rho(k)| ^{\frac 32}\right)^4. \] The term $J_{19}$ can be handled applying (\ref{equ13}) and (\ref{equ14}). For $J_{11}$ and $J_{12}$, $t$ can only be chosen from the set $\{5, 6\}$ and the possible values of the factor $\rho_{th}$ (after a change of variables) can be $\rho(k_3), \rho(k_4), \rho(k_3 - k_4)$ or $\rho(k_5)$, where $k_5$ is linearly independent of $k_1, \ldots, k_4$. Then we first sum up the variables $k_1$ and $k_2$, and this part produces a constant. The sum with respect to $k_3, k_4, k_5$ is as follows: \[ \sum_{|k_3|, |k_4| \leq n} |\rho(k_3)^2 \rho(k_4)| \leq C \sum_{|k| \leq n} |\rho(k)|\,, \ \sum_{|k_3|, |k_4|, |k_5| \leq n} |\rho(k_3) \rho(k_4) \rho(k_5)| = \Big(\sum_{|k| \leq n} |\rho(k)|\Big)^3 \leq C n \sum_{|k| \leq n} |\rho(k)| \] and \[ \sum_{|k_3|, |k_4| \leq n} |\rho(k_3) \rho(k_4) \rho(k_3 - k_4)| \leq C \sum_{|k| \leq n} |\rho(k)|\,, \] where we have used \eqref{equ6} and (\ref{equ7}) with $M=2$. Therefore, \[ \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{1i} \rho_{th}| \leq \frac{C}{n} \sum_{|k| \leq n} |\rho(k)| \,, \ i=1,2 \,. \] For $J_{13}$, $t=3$ and the possible values of $\rho_{th}$ can be $\rho(k_2), \rho(k_2 - k_1)$ or $\rho(k_5)$, where $k_5$ is linearly independent of $k_1, \ldots, k_4$. The first two cases have been considered above in the discussion of the terms $J_{11} \rho_{th}$ and $J_{12} \rho_{th}$. For the third case, observe that \begin{eqnarray*} \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |\rho(k_1)^2 \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_3 - k_4) \rho(k_5)| \\ \leq \frac C{n^2} \left(\sum_{|k| \leq n} |\rho(k)| \right)^3 \leq \frac Cn \sum_{|k| \leq n} |\rho(k)|, \end{eqnarray*} where we have used \eqref{equ6} and \eqref{equ7} with $M=2$. Thus, \[ \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{13} \rho_{th}| \leq \frac{C}{n} \sum_{|k| \leq n} |\rho(k)| .
\] Finally, for $J_{14}$, the term $\rho_{th}$ could be $\rho(k_m)$, $m=2,\ldots, 4$, or $\rho(\star)$, where $\star$ is a linear combination of the $k_i$'s which involves at least two different terms $k_{h_1}$ and $k_{h_2}$ with $h_1, h_2 \in \{2,3,4,5\}$. The first case has been considered above in the discussion of the terms $J_{1i} \rho_{th}, i = 1, 2,3$. For the second case, we apply inequalities \eqref{equ6} and \eqref{equ7} with $M=2,3,4$ and we get \begin{eqnarray*} \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |\rho(k_1)^2 \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_5) \rho(\star)| \\ \leq \frac C {n^2} \left(\sum_{|k| \leq n} |\rho(k)| \right)^3 \leq \frac Cn \sum_{|k| \leq n} |\rho(k)|. \end{eqnarray*} Therefore, $ \frac 1{n^2} \sum_{|k_j| \leq n, 1\le j\le 5} |J_{14} \rho_{th}| \leq \frac{C}{n} \sum_{|k| \leq n} |\rho(k)|$, and this finishes the proof. \end{proof} \begin{lemma}\label{a4.c6} Define \[ \mathcal{L}_1:= n^{-4} \sum_{l_1, \ldots, l_8=1}^n \sum_{ (i,s,j,t,h) \in D_4} |\rho_{12} \rho_{13} \rho_{14} \rho_{56} \rho_{57} \rho_{58}\rho_{2i} \rho_{sj} \rho_{th} |, \] where the set $D_4$ has been defined in (\ref{d4}). Then \begin{equation} \label{equ20} \mathcal{L}_1 \le \frac{C}{n^2} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{3} . \end{equation} \end{lemma} \begin{proof} We make the change of variables $l_1-l_2= k_1$, $l_1-l_3 =k_2$, $l_1-l_4 =k_3$, $l_5-l_6= k_4$, $l_5-l_7 =k_5$, $l_5-l_8 =k_6$. The factors $ \rho_{2i} $, $\rho_{sj} $ and $\rho_{th}$ can take one of the following two forms: \begin{itemize} \item[(i)] $\rho_{\alpha \beta}$, where $\alpha, \beta \in \{1,2,3,4\}$ or $ \alpha, \beta \in \{5,6,7,8\}$. \item[(ii)] $\rho_{\alpha \beta}$, where $\alpha \in \{1,2,3,4\}$ and $\beta \in \{5,6,7,8\}$ or $\beta \in \{1,2,3,4\}$ and $\alpha \in \{5,6,7,8\}$. \end{itemize} For factors of the form (i), we have $\rho_{\alpha \beta} = \rho( {\bf k} \cdot {\bf v})$, where ${\bf k}$ is one of the vectors $(k_1,k_2,k_3) $ or $(k_4,k_5,k_6) $ and ${\bf v}$ is a vector in $\mathbb{R}^3$ whose components are $0$, $1$ or $-1$. For the first factor of the form (ii), we write $\rho_{\alpha \beta} = \rho(k_7)$, where $k_7$ is a new variable independent of the $k_i$'s, $1\le i \le 6$. If there is more than one factor of the form (ii), then the extra factor(s) can be written as $\rho( {\bf k} \cdot {\bf v})$, where ${\bf k}=(k_1,k_2,k_3,k_4,k_5,k_6,k_7) $ and ${\bf v}$ is a vector in $\mathbb{R}^7$ whose components are $0$, $1$ or $-1$. Then we decompose $\mathcal{L}_1$ as the sum of several terms $\mathcal{L}_{1j}$, according to the following cases:\\ \noindent {\it Case 1:} There are three factors that have power $2$. We denote the corresponding term by $\mathcal{L}_{11}$. For this term we have \[ \mathcal{L}_{11} = \frac 1 {n^2} \sum_{\substack{|k_i| \leq n \\ i=1,\ldots, 6}} \rho(k_1)^2 \rho(k_2)^2\rho(k_3)^2| \rho(k_4) \rho(k_5) \rho(k_6)| \le \frac{C}{n^2} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{3}. \] \\ {\it Case 2:} Two factors have power $2$. Then we have the following possibilities, taking into account the symmetry: \[ \mathcal{L}_{12}:= \frac 1 {n^3} \sum_{\substack{|k_i| \leq n \\ i=1,\ldots, 7}} |\rho^2(k_1) \rho^2(k_2) \rho(k_3) \rho(k_4) \rho(k_5) \rho(k_6) \rho(k_7) | \] and \[ \mathcal{L}_{13}:= \frac 1 {n^2} \sum_{\substack{|k_i| \leq n \\ i=1,\ldots, 6}} |\rho^2(k_1) \rho^2(k_2) \rho(k_3) \rho(k_4) \rho(k_5) \rho(k_6)\rho({\bf k} \cdot {\bf v}) | , \] where ${\bf k} =(k_1, k_2, k_3, k_4, k_5, k_6)$ and ${\bf v}$ is a vector in $\mathbb{R}^6$ whose components are $0$, $1$ or $-1$.
Clearly, \begin{eqnarray*} \mathcal{L}_{12} \leq \frac C {n^3} \left(\sum_{|k| \leq n} |\rho(k)|\right)^5 \leq \frac C {n^2} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{3} \,. \end{eqnarray*} For $\mathcal{L}_{13}$, $ {\bf k} \cdot {\bf v}$ involves at least two factors $k_{j}, k_{j'}$, but $ {\bf k} \cdot {\bf v}$ cannot be a linear combination of only $k_1 $ and $k_2$. Applying inequality (\ref{equ22}) with $M=5$ yields \begin{eqnarray*} \mathcal{L}_{13} \leq \frac{C}{n^{2}} \left(\sum_{|k| \leq n} |\rho(k)|\right)^3 \,. \end{eqnarray*} \medskip \noindent {\it Case 3:} Only one factor has power $2$. Then we have the following two possibilities, taking into account the symmetry. The first one is \[ \mathcal{L}_{14} = \frac 1 {n^3} \sum_{\substack{|k_i| \leq n \\ i=1,\ldots, 7}}|\rho^2(k_1) \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_5) \rho(k_6) \rho(k_7) \rho({\bf k} \cdot {\bf v})| \,, \] where ${\bf k} =(k_1, k_2, k_3, k_4, k_5, k_6, k_7)$ and ${\bf v}$ is a vector in $\mathbb{R}^7$ whose components are $0$, $1$ or $-1$ and which has at least two nonzero components. By (\ref{equ22}) with $M=7$, we can write \[ \mathcal{L}_{14} \leq \frac C { n^3} \left(\sum_{|k| \leq n} |\rho(k)|\right)^5 \leq \frac C { n^2} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{3} \,. \] The second possibility is \[ \mathcal{L}_{15}:= \frac 1{ n^2} \sum_{\substack{|k_i| \leq n \\ i=1,\ldots, 6 }}|\rho^2(k_1) \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_5) \rho(k_6) \rho({\bf k} \cdot {\bf v} ) \rho({\bf k} \cdot {\bf w})| \,, \] where ${\bf k} =(k_1, k_2, k_3, k_4, k_5, k_6)$ and ${\bf v}$, ${\bf w} $ are vectors in $\mathbb{R}^6$ such that ${\bf k} \cdot {\bf v} $ and ${\bf k} \cdot {\bf w}$ are linear combinations of $k_1, k_2, k_3$ or $k_4, k_5, k_6$ with exactly two nonzero components, equal to $1$ and $-1$, satisfying some additional restrictions due to the definition of the set $D_4$. There are several combinations: \begin{itemize} \item[(i)] ${\bf k} \cdot {\bf v}= k_1-k_2$ and ${\bf k} \cdot {\bf w}$ is either $k_2-k_3$ or $k_1-k_3$. In this case, by H\"older's inequality, we have \[ \sum_{ | k_i | \le n, 1\le i \le 3} |\rho^2(k_1) \rho(k_2) \rho(k_3) \rho({\bf k} \cdot {\bf v} ) \rho({\bf k} \cdot {\bf w})| \le C, \] and we obtain \begin{equation} \label{equ30} \mathcal{L}_{15} \leq \frac C { n^2} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{3} \,. \end{equation} \item[(ii)] ${\bf k} \cdot {\bf v}$ and ${\bf k} \cdot {\bf w}$ are two different linear combinations chosen among $\{ k_4-k_5, k_4-k_6, k_5-k_6\}$. Then, the inequality (\ref{equ23}) with $M=3$ yields \[ \sum_{ | k_i | \le n, i=4,5,6} |\rho(k_4) \rho(k_5) \rho(k_6) \rho({\bf k} \cdot {\bf v} ) \rho({\bf k} \cdot {\bf w})| \le C \sum_{|k| \leq n} |\rho(k)|, \] which implies (\ref{equ30}). \item[(iii)] If ${\bf k} \cdot {\bf v}= k_1-k_2$ and ${\bf k} \cdot {\bf w}$ is $ k_4-k_5$, $ k_4-k_6$ or $ k_5-k_6$, then (\ref{equ30}) follows from \[ \sum_{ | k_1 | \le n, |k_2| \le n} |\rho^2(k_1) \rho(k_2) \rho(k_1- k_2) | \le C \] and (\ref{equ21}) with $M=3$. \end{itemize} \medskip \noindent {\it Case 4:} All factors have power $1$, $i \in \{3, 4\}$, and $\rho_{sj}=\rho({\bf k} \cdot {\bf v}), \rho_{th}=\rho({\bf k} \cdot {\bf w})$, where ${\bf k} \cdot {\bf v}$ is a linear combination of $k_1,k_2, k_3$ and ${\bf k} \cdot {\bf w}$ is a linear combination of $k_4, k_5, k_6$, or vice versa. We denote the corresponding term by $\mathcal{L}_{16}$.
Then the estimate \[ \mathcal{L}_{16} \leq \frac{C}{n^{2}} \left(\sum_{|k| \leq n} |\rho(k)|\right)^3 \] follows from (\ref{equ23}) with $M=3$ and (\ref{equ21}) with $M=3$. \medskip \noindent {\it Case 5:} All factors have power $1$, and one of the differences $l_i-l_2$, $l_j- l_s$ or $l_h-l_t$ is linearly independent of $k_1, \ldots, k_6$. We denote this difference by $k_7$. The other two factors are of the form $\rho({\bf k} \cdot {\bf v})$ and $\rho({\bf k} \cdot {\bf w})$, where ${\bf k} \cdot {\bf v}$ and ${\bf k} \cdot {\bf w}$ are linear combinations of $k_1, \ldots, k_6, k_7$. In this case, the desired estimate follows from the inequality (\ref{equ23}) with $M=7$. In fact, if we denote the corresponding term by $\mathcal{L}_{17}$, we obtain \[ \mathcal{L}_{17} \leq \frac C { n^3} \left(\sum_{|k| \leq n} |\rho(k)|\right)^5 \leq \frac C { n^2} \left(\sum_{|k| \leq n} |\rho(k)|\right)^{3} \,. \] This finishes the proof of the lemma. \end{proof} \begin{lemma} \label{a4.c6-2} Define \[ \mathcal{L}_2:= n^{-4} \sum_{l_1, \ldots, l_8=1}^n \sum_{\substack{i \neq s \neq j\\ i, s, j\in\{4,7,8\}}} |\rho_{12} \rho_{13} \rho_{24} \rho_{56} \rho_{57} \rho_{68}\rho_{3i} \rho_{sj}| \] and \[ \mathcal{L}_3:= n^{-4} \sum_{l_1, \ldots, l_8=1}^n \sum_{ (i,s,j,t,h)\in D_5} |\rho_{12} \rho_{13} \rho_{24} \rho_{56} \rho_{57} \rho_{68}\rho_{3i} \rho_{sj} \rho_{th}|, \] where the set $D_5$ has been defined in (\ref{d5}). Then \begin{equation} \mathcal{L}_2 \leq \frac{C}{n^2} \left(\sum_{|k|\leq n}|\rho(k)|^{\frac{4}{3}}\right)^6 \label{ineq.tL} \end{equation} and \begin{equation} \mathcal{L}_3 \leq \frac{C}{n^2} \left(\sum_{|k| \leq n} |\rho(k)|\right)^3. \label{ineq.tL2} \end{equation} \end{lemma} \begin{proof} Let us first show (\ref{ineq.tL}). We make the change of variables $l_1-l_2= k_1$, $l_1-l_3 =k_2$, $l_2-l_4 =k_3$, $l_5-l_6= k_4$, $l_5-l_7 =k_5$, $l_6-l_8 =k_6$. By symmetry, it suffices to analyze the cases $i=4$ and $i=7$. If $i=4$, then $\rho_{34}= \rho(k_1-k_2+k_3)$ and $s=8, j=7$ or $s=7,j=8$, which gives $\rho_{sj}= \rho(k_4-k_5+k_6)$. In this case, we obtain a term of the form \[ \mathcal{L}_{21}:= n^{-2} \sum_{|k_i| \leq n, i=1,\ldots, 6} |\rho(k_1)\rho(k_2)\rho(k_3)\rho(k_1-k_2+k_3) \rho(k_4)\rho(k_5)\rho(k_6)\rho( k_4-k_5+k_6)|. \] Applying inequality (\ref{equ6}) with $M=3$ yields \[ \mathcal{L}_{21} \le \frac{C}{n^2} \left(\sum_{|k|\leq n}|\rho(k)|^{\frac{4}{3}}\right)^6. \] In the case $i=7$, we set $\rho_{37} = \rho(k_7)$ and have two possibilities for $sj$, namely $48$ and $84$, which produce the following term: \begin{eqnarray*} \mathcal{L}_{22} &:=& n^{-3} \sum_{|k_i| \leq n \atop i=1,\ldots, 7} |\rho(k_1)\rho(k_2)\rho(k_3)\rho(k_4)\rho(k_5)\rho(k_6)\rho(k_7) \\ && \times \rho(k_2 + k_7 -k_3 - k_1-k_5+k_4+k_6 )| \,. \end{eqnarray*} Applying the inequality (\ref{equ6}) with $M=7$ and H\"older's inequality, we obtain \begin{eqnarray*} \mathcal{L}_{22} &\le & \frac C {n^3} \left(\sum_{|k|\leq n}|\rho(k)|^{\frac{8}{7}}\right)^7 \le \frac C {n^2} \left(\sum_{|k|\leq n}|\rho(k)|^{\frac{4}{3}}\right)^6. \\ \end{eqnarray*} This finishes the proof of (\ref{ineq.tL}). The proof of (\ref{ineq.tL2}) is analogous to that of (\ref{equ20}). Namely, we can make the change of variables $l_1-l_2= k_1$, $l_1-l_3 =k_2$, $l_2-l_4 =k_3$, $l_5-l_6= k_4$, $l_5-l_7 =k_5$, $l_6-l_8 =k_6$, and follow the arguments of (\ref{equ20}). A subtle difference is the verification of \eqref{equ30}.
That is, the estimation of \[ \mathcal{L}_{15}:= \frac 1{ n^2} \sum_{\substack{|k_i| \leq n \\ i=1,\ldots, 6 }}|\rho^2(k_1) \rho(k_2) \rho(k_3) \rho(k_4) \rho(k_5) \rho(k_6) \rho({\bf k} \cdot {\bf v} ) \rho({\bf k} \cdot {\bf w})| \,, \] where ${\bf k} \cdot {\bf v}$ and ${\bf k} \cdot {\bf w}$ fall into one of the following two cases: \begin{itemize} \item [(i)] They are linear combinations of $k_4, k_5, k_6$. \item [(ii)] ${\bf k} \cdot {\bf v}$ is a linear combination of $k_1, k_2, k_3$ ($k_1 - k_2$ corresponding to $i=2$, or $k_2-k_1-k_3$ corresponding to $i=4$), and ${\bf k} \cdot {\bf w}$ is a linear combination of $k_4, k_5, k_6$. \end{itemize} In the case (i), we apply the inequality \eqref{equ23} with $M=3$ to obtain \begin{equation}\label{equ31} \mathcal{L}_{15} \leq \frac{C}{n^2} \left(\sum_{|k| \leq n} |\rho(k)|\right)^3\,. \end{equation} In the case (ii), we apply \eqref{equ22} with $M=3$ and \eqref{equ21} with $M=3$ to obtain the desired inequality \eqref{equ31}. \end{proof} The next lemma contains several inequalities that are used throughout the paper. \begin{lemma} Fix an integer $M\ge 2$. We have \begin{equation} \label{equ6} \sum_{ |k_j| \le n \atop 1\le j \le M} |\rho( {\bf k} \cdot {\bf v} )| \prod _{j=1} ^M | \rho(k_j) | \le C \left(\sum_{|k| \leq n} |\rho(k)|^{1+ \frac 1M}\right)^M, \end{equation} where ${\bf k} = (k_1, \dots, k_M)$ and ${\bf v} \in \mathbb{R}^M$ is a fixed vector whose components are $1$ or $-1$. Furthermore, if $\sum_{k \in \mathbb{Z}} \rho(k)^2<\infty$, then \begin{equation} \label{equ7} \left(\sum_{|k| \leq n} |\rho(k)|^{1+ \frac 1M}\right)^M\le C\left( \sum_{|k| \leq n} |\rho(k)|\right)^{M-1} \end{equation} and, if ${\bf v} \in \mathbb{R}^M$ is a nonzero vector whose components are $0$, $1$ or $-1$, \begin{equation} \label{equ21} \sum_{ |k_j| \le n \atop 1\le j \le M} |\rho( {\bf k} \cdot {\bf v} )| \prod _{j=1} ^M | \rho(k_j) | \le C\left( \sum_{|k| \leq n} |\rho(k)|\right)^{M-1}. \end{equation} \end{lemma} \begin{proof} Applying the Brascamp-Lieb inequality (\ref{BL}), we have \[ \sum_{ |k_j| \le n \atop 1\le j \le M} \prod _{j=1} ^M | \rho(k_j) | |\rho( {\bf k} \cdot {\bf v} )| \leq C \prod_{i=1} ^{M+1} \left(\sum_{|k| \leq n} |\rho(k)|^{\frac{1}{p_i}}\right)^{p_i} \,, \] where $p_i \le 1$ and $\sum_{i=1}^{M+1} p_i = M$. Choosing $p_i = M/ (M+1) $ for $i=1,\dots, M+1$, we get inequality (\ref{equ6}). To show (\ref{equ7}), we make the decomposition $ |\rho(k)|^{1+ \frac 1M}= |\rho(k)|^{1- \frac 1M} |\rho(k)|^{ \frac 2M}$ and apply H\"older's inequality with exponents $p= \frac M {M-1}$ and $q=M$. Finally, to show (\ref{equ21}), we decompose the sum into the product of the sum with respect to the $k_i$'s that appear in ${\bf k } \cdot {\bf v}$ and the sum of the remaining terms. \end{proof} \begin{lemma} Fix an integer $M\ge 3$ and assume $\sum_{k \in \mathbb{Z}} \rho(k)^2<\infty$. We have \begin{equation} \label{equ22} \sum_{|k_j| \le n \atop 1\le j \le M} \rho(k_1)^2 |\rho( {\bf k} \cdot {\bf v} )| \prod _{j=2} ^{M} | \rho(k_j) | \le C \left(\sum_{|k| \leq n} |\rho(k)|\right)^{M-2}, \end{equation} where ${\bf k} = (k_1, \dots, k_M)$ and ${\bf v} \in \mathbb{R}^M$ is a fixed vector whose components are $0$, $1$ or $-1$, with at least two nonzero components. \end{lemma} \begin{proof} It suffices to assume that all the components of ${\bf v}$ are nonzero. In this case, we apply the Brascamp-Lieb inequality (\ref{BL}) with exponents $p_1=1$ and $p_2 = \cdots =p_{M+1} = \frac {M-1}M$ and inequality (\ref{equ7}) with $M$ replaced by $M-1$.
\end{proof} \begin{lemma} Fix an integer $M\ge 3$ and assume $\sum_{k \in \mathbb{Z}} \rho(k)^2<\infty$. We have \begin{equation} \label{equ23} \sum_{ |k_j| \le n \atop 1\le j \le M} |\rho( {\bf k} \cdot {\bf v} ) \rho( {\bf k} \cdot {\bf w} )| \prod _{j=1} ^{M} | \rho(k_j) | \le C \left(\sum_{|k| \leq n} |\rho(k)|\right)^{M-2}, \end{equation} where ${\bf k} = (k_1, \dots, k_M)$ and ${\bf v}, {\bf w} \in \mathbb{R}^M$ are linearly independent vectors whose components are $0$, $1$ or $-1$, each with at least two nonzero components. \end{lemma} \begin{proof} Suppose first that $ \rho( {\bf k} \cdot {\bf v} ) \rho( {\bf k} \cdot {\bf w} )$ involves only three $k_i$'s, for instance, $k_1, k_2, k_3$. In this case, applying the Brascamp-Lieb inequality (\ref{BL}) with exponents $p_i=3/5$, $1\le i\le 5$, yields \[ \sum_{ |k_i| \le n \atop 1\le i \le 3} |\rho(k_1)\rho(k_2)\rho(k_3)\rho( {\bf k} \cdot {\bf v} ) \rho( {\bf k} \cdot {\bf w} )| \le C \left( \sum_{|k|\le n} |\rho(k)| ^{\frac 53} \right)^3. \] Notice that assumption (ii) in Proposition \ref{prop2.10} is satisfied because three of the vectors $(1,0,0)$, $(0,1,0)$, $(0,0,1)$, ${\bf v}$, ${\bf w}$ may span a subspace of dimension $2$, and we have $3\times 3/5 = 9/5 \le 2$. Then, making the decomposition $|\rho(k)| ^{\frac 53}=|\rho(k)| ^{\frac 13}|\rho(k)| ^{\frac 43}$ and using H\"older's inequality with exponents $p=3$ and $q= \frac 32$, yields \[ \left( \sum_{|k|\le n} |\rho(k)| ^{\frac 53} \right)^3 \le C \sum_{|k|\le n} |\rho(k)|, \] which gives the desired estimate. If $ \rho( {\bf k} \cdot {\bf v} ) \rho( {\bf k} \cdot {\bf w} )$ involves four $k_i$'s, for instance, $k_1, k_2, k_3, k_4$, we apply the Brascamp-Lieb inequality (\ref{BL}) with exponents $p_i=2/3$, $1\le i\le 6$, and we obtain \[ \sum_{ |k_i| \le n \atop 1\le i \le 4} |\rho(k_1)\rho(k_2)\rho(k_3) \rho(k_4) \rho( {\bf k} \cdot {\bf v} ) \rho( {\bf k} \cdot {\bf w} )| \le C \left( \sum_{|k|\le n} |\rho(k)| ^{\frac 32} \right)^4. \] Then, using (\ref{equ7}) with $M=2$, we obtain \[ \left( \sum_{|k|\le n} |\rho(k)| ^{\frac 32} \right)^4 \le C \left( \sum_{|k|\le n} |\rho(k)|\right)^2, \] which gives the desired estimate. Finally, if $ \rho( {\bf k} \cdot {\bf v} ) \rho( {\bf k} \cdot {\bf w} )$ involves more than four $k_i$'s, the result follows again from the Brascamp-Lieb inequality (\ref{BL}), where we choose $p_i=2/3$ for the factors $ \rho( {\bf k} \cdot {\bf v} ) $, $ \rho( {\bf k} \cdot {\bf w})$ and for the four factors $\rho(k_i)$ such that $k_i$ appears in the linear combination with fewer factors, and we choose $p_i=1$ for all the remaining factors $\rho(k_i)$ appearing in the linear combinations $ \rho( {\bf k} \cdot {\bf v} ) $ or $ \rho( {\bf k} \cdot {\bf w} )$. \end{proof} The last lemma summarizes some inequalities derived from the application of H\"older's inequality. \begin{lemma} For any $M\ge 2$, we have \begin{equation} \label{ho1} \left(\sum_{|k| \leq n} |\rho(k)|^{1+\frac 1M} \right)^M \le \left( \sum_{|k| \le n} |\rho(k)| \right) \left( \sum_{|k| \leq n} |\rho(k)|^{\frac M{M-1}} \right)^{M-1} \end{equation} and \begin{equation} \label{ho2} \left( \sum_{|k| \leq n} |\rho(k)| \right)^3 \le C n \left(\sum_{|k| \leq n} |\rho(k)|^\frac{3}{2} \right) ^{2}. \end{equation} Furthermore, if $\sum_{k \in \mathbb{Z}} \rho(k)^2 < \infty$, then \begin{equation} \label{ho3} \sum_{|k| \leq n} |\rho(k)|^\frac{3}{2} \leq C \left(\sum_{|k| \leq n} |\rho(k)|^\frac{4}{3}\right)^{\frac 34}.
\end{equation} \end{lemma} \begin{proof} To show (\ref{ho1}) we make use of the decomposition $|\rho(k)|^{ 1+\frac 1M}= |\rho(k)| |\rho(k)|^\frac{1}{M}$ and apply H\"older's inequality with exponents $p= \frac M {M-1}$ and $q=M$. For (\ref{ho3}) we use the decomposition $|\rho(k)|^\frac{3}{2}= |\rho(k)| |\rho(k)|^\frac{1}{2}$ and apply H\"older's inequality with exponents $p= \frac 43$ and $q=4$. Finally, for (\ref{ho2}) we again use H\"older's inequality. \end{proof}
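As a complement to the theoretical rates, the following Python sketch illustrates the estimator $\hat H_{\lambda, n}$ of \eqref{H.est} on simulated data (this is the numerical sketch announced in Section 4.2). The exact simulation of fractional Gaussian noise by Cholesky factorization of the covariance matrix built from $\rho_H$ is only one possible choice, made here for simplicity; any exact sampler would do, and all function names are ours.
\begin{verbatim}
import numpy as np

def fgn_covariance(H, N):
    """Covariance matrix rho_H(|i-j|) of fractional Gaussian noise."""
    j = np.abs(np.subtract.outer(np.arange(N), np.arange(N))).astype(float)
    return 0.5 * ((j + 1) ** (2 * H) + np.abs(j - 1) ** (2 * H)
                  - 2 * j ** (2 * H))

def fbm_path(H, N, rng):
    """Sample B_0, B_{1/N}, ..., B_1 via Cholesky factorization,
    using the self-similarity scaling N**(-H)."""
    L = np.linalg.cholesky(fgn_covariance(H, N) + 1e-12 * np.eye(N))
    increments = N ** (-H) * (L @ rng.standard_normal(N))
    return np.concatenate(([0.0], np.cumsum(increments)))

def hurst_estimate(H, n, lam, p, rng):
    """Compute hat H_{lam,n} = (1/p) * (1 - log T_{lam,n} / log lam)."""
    B = fbm_path(H, lam * n, rng)                       # fine grid, mesh 1/(lam n)
    V_fine = np.sum(np.abs(np.diff(B)) ** p)            # V^p_{lam n}(B)
    V_coarse = np.sum(np.abs(np.diff(B[::lam])) ** p)   # V^p_n(B), mesh 1/n
    T = V_fine / V_coarse
    return (1.0 - np.log(T) / np.log(lam)) / p

rng = np.random.default_rng(0)
print(hurst_estimate(H=0.3, n=500, lam=2, p=2, rng=rng))  # close to 0.3
\end{verbatim}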
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} Dynamical billiards are an indispensable model system in the field of Hamiltonian mechanics. The amenability of billiard systems to analytical and numerical study has allowed for detailed analyses of chaotic dynamics \cite{sinai1970,bunimovich1979}, diffusion and particle transport \cite{boldrighinietal1983,moranetal1987,chernovetal1993,chernovetal2013}, the semiclassical limit \cite{grafetal1992,tomsovicheller1993,zelditchzworski1996}, and energy absorption and dissipation \cite{ulam1961,jarzynski1993,barnettetal2001,karlisetal2006,gelfreichturaev2008,karlisetal2012,dettmannleonel2013,batistic2014,demersjarzynski2015}. This last topic, the problem of energy absorption in driven billiards, was first explored by Enrico Fermi to explain the acceleration of cosmic rays \cite{fermi1949}. Since then, this ``Fermi acceleration'' and related mechanisms have been studied in contexts such as nuclear dissipation \cite{blockietal1978}, plasma physics and astrophysics \cite{kobayakawaetal2002,veltricarbone2004,biankontar2013}, and atomic optics \cite{saifetal1998}. In this paper, we investigate energy absorption in chaotic, ergodic billiard systems, subject to a rapidly varying, time-periodic force. The system of interest is defined in Section \ref{setup}. In Section \ref{diffusion}, we argue that the evolution of the billiard particle's energy will be a diffusive process in energy space. It follows that the probability distribution for the particle energy obeys a Fokker-Planck equation in energy space, with drift and diffusion coefficients that characterize the rate at which this distribution shifts and spreads. In Section \ref{g1g2} we obtain expressions for these rates, which are found to scale like $\omega^{-2}$ for large $\omega$. In Section \ref{numerical}, we present exact (up to machine precision) numerical results that demonstrate the validity of the Fokker-Planck equation in the rapid driving regime, for the special case where the driving force is independent of position. Finally, we offer concluding remarks in Section \ref{discussion}. Our results constitute a detailed case study of the process of \textit{Floquet prethermalization}, in which a periodically driven system relaxes to a thermal state with respect to an effective Hamiltonian at short to intermediate times, before ultimately gaining energy on long timescales \cite{elseetal2017,abaninetal2017,herrmannetal2017,mori2018,morietal2018,mallayyaetal2019,howelletal2019,machadoetal2019,rajaketal2019,machadoetal2020,rubio-abadal2020,pengetal2021,hodsonjarzynski2021}. Floquet prethermalization has been widely studied as a mechanism for engineering stable, long-lived steady states of both classical and quantum systems. Within the energy diffusion framework, we obtain a comprehensive, quantitative picture of how prethermalization and its breakdown emerge from the Hamiltonian dynamics of a chaotic billiard particle. With these results, billiard systems emerge as a valuable model system in the study of energy absorption, prethermalization, and related phenomena in periodically driven systems. \section{Setup} \label{setup} We now define our system of interest. 
We consider a point particle of mass $m$, with position $\mathbf{x} \equiv \mathbf{x}_t$ and velocity $\mathbf{v} \equiv \mathbf{v}_t$, confined to the inside of a cavity or ``billiard.'' Precisely, the billiard is a bounded, connected subset of $d$-dimensional Euclidean space ($d \geq 2$), with a boundary or ``wall'' consisting of one or more $(d-1)$-dimensional surfaces. When strictly inside the billiard, the particle evolves smoothly according to Newton's laws. Whenever the particle reaches the billiard boundary, it undergoes an instantaneous elastic collision with the wall. Specifically, we assume that between collisions, the particle is subject to two forces. First, the particle experiences a conservative force $-\nabla U (\mathbf{x})$, generated by a static potential $U(\mathbf{x})$. Second, we apply a time-periodic driving force $\mathbf{F}(\mathbf{x}) \cos (\omega t) = - \nabla V (\mathbf{x}) \cos (\omega t)$, with period $T = 2 \pi /\omega$, where $V (\mathbf{x})$ is some potential. Therefore, the equations of motion for $\mathbf{x}$ and $\mathbf{v}$ are given by: \begin{equation} \label{newton} \frac{d \mathbf{x}}{d t} = \mathbf{v}, \quad m \frac{d \mathbf{v}}{d t} = -\nabla U (\mathbf{x}) + \mathbf{F}(\mathbf{x}) \cos (\omega t). \end{equation} When the particle reaches the billiard boundary, an instantaneous elastic collision occurs. This collision leaves the position of the particle unchanged, but the component of the velocity perpendicular to the wall is instantly reversed. That is, the velocity of the particle is updated from $\mathbf{v}$ to $\mathbf{v}'$ according to the reflection law \begin{equation} \label{reflection} \mathbf{v}' = \mathbf{v} - 2 (\mathbf{v} \cdot \hat{\mathbf{n}}(\mathbf{x})) \hat{\mathbf{n}}(\mathbf{x}) , \end{equation} \noindent where $\hat{\mathbf{n}}(\mathbf{x})$ is the outward-facing unit vector normal to the billiard boundary at $\mathbf{x}$, the point of collision. The equations \eqref{newton} and \eqref{reflection} fully define the dynamics of the driven particle. We note here that our use of the term ``billiard'' is more general than the typical usage: The word ``billiard'' often simply refers to a free particle in a cavity, corresponding to the case of vanishing $U(\mathbf{x})$ and $\mathbf{F}(\mathbf{x})$. In light of this, we will use the term ``standard billiard'' to refer to the special case of $U(\mathbf{x})=0$. For a driven standard billiard, the associated \textit{undriven} billiard (obtained by additionally setting $\mathbf{F} (\mathbf{x}) = \mathbf{0}$) corresponds to a billiard in the more common sense of the word. In our analysis, we are most interested in the evolution of the particle's energy, defined as $\mathcal{E} \equiv \mathcal{E}(\mathbf{x},\mathbf{v}) \equiv \frac{1}{2} m |\mathbf{v}|^2 + U(\mathbf{x})$. In the absence of driving, $\mathcal{E}$ is a constant of the motion: $\mathcal{E}$ is conserved under the equations of motion \eqref{newton} for $\mathbf{F} (\mathbf{x}) = \mathbf{0}$, and is also unchanged under the reflection law \eqref{reflection}. For nonzero $\mathbf{F} (\mathbf{x})$, the collisions are still energy-conserving, but \eqref{newton} implies that the particle's energy between collisions changes according to: \begin{equation} \label{power} \frac{d \mathcal{E}}{d t} = \mathbf{F}(\mathbf{x}) \cdot \mathbf{v} \cos (\omega t). \end{equation} \noindent In particular, we will consider the energy dynamics for large $\omega$, in the rapid driving regime.
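As a concrete illustration of the dynamics defined by equations \eqref{newton} and \eqref{reflection}, the Python sketch below advances a driven particle inside the unit disk. The disk geometry, the explicit Euler integrator, the naive collision test, and the uniform force are simplifying assumptions made for readability only; in particular, the circular billiard is not chaotic, and a position-independent force corresponds to the special case studied numerically in Section \ref{numerical}.
\begin{verbatim}
import numpy as np

def reflect(v, n_hat):
    """Elastic reflection law (2): v' = v - 2 (v . n_hat) n_hat."""
    return v - 2.0 * np.dot(v, n_hat) * n_hat

def flight_step(x, v, t, dt, m, grad_U, F, omega):
    """One explicit Euler step of Newton's equations (1) between
    collisions; grad_U and F are user-supplied callables."""
    a = (-grad_U(x) + F(x) * np.cos(omega * t)) / m
    return x + dt * v, v + dt * a

# Example: standard billiard (U = 0) in the unit disk, uniform drive.
grad_U = lambda x: np.zeros(2)
F = lambda x: np.array([0.5, 0.0])
x, v = np.array([0.1, 0.2]), np.array([1.0, 0.3])
t, dt, m, omega = 0.0, 1e-4, 1.0, 200.0
for _ in range(100000):
    x, v = flight_step(x, v, t, dt, m, grad_U, F, omega)
    t += dt
    r = np.linalg.norm(x)
    if r >= 1.0:                 # crude collision test with the wall
        v = reflect(v, x / r)    # outward normal of the unit disk
        x = x / r                # project back onto the boundary
\end{verbatim}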
So far, we have considered a single trajectory of the particle in the billiard. However, in our analysis, it will also be useful to consider a statistical ensemble of particles, and averages over that ensemble. Each particle trajectory in such an ensemble is determined by an initial condition $(\mathbf{x}_0,\mathbf{v}_0)$ at $t=0$, which is sampled according to some probability distribution $\rho_0(\mathbf{x}_0,\mathbf{v}_0)$ on phase space (that is, the $2d$-dimensional space of particle positions and velocities). The ensemble is then evolved in time by evolving each initial condition according to \eqref{newton} and \eqref{reflection}, yielding $\mathbf{x}_t$ and $\mathbf{v}_t$. Any statistical property of this ensemble may be computed as an appropriate average over initial conditions, with respect to the distribution $\rho_0(\mathbf{x}_0,\mathbf{v}_0)$. In particular, since we are interested in the evolution of the system's energy $\mathcal{E}$, our analysis will focus on $\eta \equiv \eta (E,t)$, the time-dependent probability distribution for the energy. For small $dE$, $ \eta (E,t) dE$ gives the fraction of particles in the ensemble at time $t$ with energy between $E$ and $E + dE$. We may express $\eta$ as \begin{equation} \label{eta} \eta (E,t) = \int d^d \mathbf{x}_0 d^d \mathbf{v}_0 \, \rho_0(\mathbf{x}_0,\mathbf{v}_0) \, \delta \Big( \mathcal{E}(\mathbf{x}_t,\mathbf{v}_t) - E \Big). \end{equation} \noindent Here, $d^d \mathbf{x}_0 d^d \mathbf{v}_0$ is a $2d$-dimensional infinitesimal ``hyper-volume'' element in phase space, and $(\mathbf{x}_t,\mathbf{v}_t)$ is the phase space location, at time $t$, of the trajectory with initial conditions $(\mathbf{x}_0,\mathbf{v}_0)$. For this integral, and for similar integrals in this paper unless otherwise stated, the integration over $\mathbf{x}_0$ is performed over the interior of the billiard, and the integration over $\mathbf{v}_0$ runs over all $\mathbf{v}_0 \in \mathbb{R}^d$. The last essential assumption in our analysis is that the undriven system exhibits chaotic and ergodic motion at each energy $E$. For certain classes of undriven billiards, it has been rigorously proven that the particle motion is chaotic and ergodic \cite{sinai1970,bunimovich1979,wojtkowski1986,donnay1991}. Although these results were derived for undriven \textit{standard} billiards, in some cases they may be extended, at least approximately, to the $U(\mathbf{x}) \neq 0$ case, e.g. by considering weak forces \cite{chernov2001}, or by invoking the correspondence between motion in a potential and free motion in non-Euclidean space \cite{beletsky1999}. Finally, we note that if the driving force is generated by a more general time-periodic potential $V(\mathbf{x},t)$, then it is straightforward to extend our analysis by decomposing this potential as a Fourier series with fundamental frequency $\omega$. However, in order to keep the calculations relatively simple, we restrict our attention to the monochromatic driving force $\mathbf{F}(\mathbf{x}) \cos (\omega t)$. \section{Energy diffusion} \label{diffusion} We now describe the evolution of the particle's energy $E$, in the limit of large $\omega$. We argue that the energy of the particle evolves diffusively in this limit. A more general and detailed version of this argument may be found in our previous paper \cite{hodsonjarzynski2021}, wherein it is shown that a \textit{generic} chaotic, ergodic Hamiltonian system will exhibit energy diffusion when subject to rapid periodic driving. 
Energy diffusion in chaotic billiards under rapid periodic driving is a special case of this result.

We first note that, for sufficiently large $\omega$, the effect of the driving force on the particle between collisions nearly averages to zero over a single period. This averaging effect may be rigorously demonstrated using tools such as multi-scale perturbation theory \cite{murdock1999,rahavetal2003}. However, it is also intuitively reasonable: For a very short driving period, the particle's position and velocity will remain nearly constant over the period (as long as a collision with the wall does not occur), because of the particle's inertia and finite speed. Under this approximation, integrating \eqref{newton} over a period reveals that the resulting changes in position and velocity are the same as if the system were not being driven, since the term $\mathbf{F}(\mathbf{x}) \cos (\omega t)$ integrates to zero. This approximation improves as the period shortens: as $\omega$ goes to infinity, the driven evolution of the system approaches the undriven dynamics. Notably, this conclusion holds regardless of the magnitude of $\mathbf{F} (\mathbf{x})$. So for sufficiently large $\omega$, the drive acts as a small perturbation on the undriven dynamics.

Let us choose $\omega$ large enough such that driven and undriven trajectories closely resemble one another on timescales of order $\tau_C$, the characteristic correlation time set by the undriven particle's chaotic dynamics. To show that the driven particle's energy evolves diffusively, we now consider the evolution of the particle at discrete times $t=0,\, \delta t,\, 2\delta t, \ldots$, for some $\delta t \geq \tau_C$. Over each timestep, the particle's energy changes by a small amount $\delta E_i$, $i=1,2,3,\ldots$. Since $\delta t \geq \tau_C$, these individual energy increments will be approximately uncorrelated. That is, the particle performs a random walk along the energy axis, where each energy increment $\delta E_i$ is statistically independent of the others. On timescales much longer than $\tau_C$, after many ``steps'' in this process have occurred, such a random walk will be well-described as a process of diffusion in energy space.

If the energy of the driven particle evolves diffusively, then the particle's energy probability distribution $\eta$ (defined in \eqref{eta}) will evolve according to a Fokker-Planck equation in energy space \cite{gardiner1985}: \begin{equation} \label{fp} \frac{\partial \eta}{\partial t} = -\frac{\partial}{\partial E} \left( g_1 \eta \right)+ \frac{1}{2} \frac{\partial ^2}{\partial E^2} \left( g_2 \eta \right). \end{equation} \noindent Here, the drift coefficient $g_1 \equiv g_1 (E,\omega)$ and the diffusion coefficient $g_2 \equiv g_2 (E,\omega)$ characterize the diffusive process: $g_1$ gives the rate at which $\eta$ shifts along the energy axis, and $g_2$ gives the rate of diffusive spreading in energy space. In the next section, we obtain explicit expressions for $g_1$ and $g_2$ in the high frequency driving limit. As we will see, these drift and diffusion rates are suppressed by a factor of $\omega^{-2}$ for large $\omega$.
Moreover, we show that $g_1$ is always nonnegative, which then implies that the system undergoes \textit{Fermi acceleration} \cite{fermi1949,ulam1961,barnettetal2001,karlisetal2006,gelfreichturaev2008,karlisetal2012,batistic2014} on average: The driven dynamics exhibit a statistical bias towards gaining energy, and the mean energy of the ensemble never decreases. How large must $\omega$ be for the energy diffusion picture, and the associated Fokker-Planck equation, to be approximately valid? Recall that the driving force must act as a small perturbation on the undriven dynamics. This suggests two conditions. First, we assume that over the course of a single period, the forces $- \nabla U (\mathbf{x})$ and $\mathbf{F} (\mathbf{x}) \cos (\omega t)$ produce a very small change in the particle's velocity. If the typical magnitude of these forces is denoted by $F$, then from \eqref{newton} we can estimate that the velocity will change by an amount of order $ F/ (m \omega)$ during a period (provided a collision does not occur). We assume that this change is much smaller than $v$, the typical speed of the particle: \begin{equation} \label{condition1resub} \frac{F }{m \omega} \ll v. \end{equation} This is our first condition. Importantly, this ensures that when a collision occurs, the outgoing trajectory of the particle will only be slightly altered relative to the undriven motion. If \eqref{condition1resub} is not satisfied, then the particle's direction of motion will oscillate wildly back and forth due to the force $\mathbf{F}(\mathbf{x}) \cos (\omega t)$. As a result, the drive may cause the particle to collide with the wall at a substantially different angle relative to a corresponding undriven particle. The associated driven and undriven trajectories would then rapidly diverge, contrary to our requirement that the drive act as a small perturbation. Second, we assume that the distance travelled by the particle over a typical period is very small, much smaller than any other relevant length scale associated with the system. Since \eqref{condition1resub} ensures that the particle's velocity changes little during a period, this distance travelled will be of order $ v T \sim v/\omega$. So we may write our second condition as \begin{equation} \label{condition2resub} \frac{v}{\omega} \ll l , \end{equation} \noindent where $l$ is the shortest length scale in the system. $l$ may be the mean free path for the particle, or a length scale characterizing the roughness of the billiard wall, or the typical distance over which the forces $-\nabla U (\mathbf{x})$ and $\mathbf{F} (\mathbf{x})$ vary by a significant amount. With the condition \eqref{condition2resub} satisfied, a large number of periods will occur between successive collisions with the billiard wall. Moreover, over a single period, the quantity $\mathbf{F} (\mathbf{x})$ will be nearly constant, since the particle will hardly move during this short time interval. As a result, during any period without a collision (the great majority of periods), integrating \eqref{newton} reveals that the driving force perturbs the particle's velocity by an amount $\approx \mathbf{F}(\mathbf{x}) \sin (\omega t)/(m \omega)$, and its position by $\approx -\mathbf{F}(\mathbf{x}) \left[\cos (\omega t) - 1 \right]/(m \omega^2)$. Thus, the cumulative effect of the force essentially integrates to zero as $\omega\rightarrow\infty$. 
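In numerical work, one can verify these two conditions directly before trusting the theory, as in the trivial Python sketch below; the numerical factor standing in for ``much less than'' is an arbitrary choice of ours.

\begin{verbatim}
def rapid_driving_ok(F, m, omega, v, l, margin=20.0):
    """Check the two rapid-driving conditions with an ad hoc margin."""
    cond1 = F / (m * omega) < v / margin  # velocity kick per period is small
    cond2 = v / omega < l / margin        # distance per period is small
    return cond1 and cond2
\end{verbatim}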
Taken together, we see that if the conditions \eqref{condition1resub} and \eqref{condition2resub} are satisfied, the drive acts as a weak perturbation during periods both with and without collisions. When subject to such a drive, the particle will typically experience several collisions with the wall before its trajectory is significantly altered relative to the undriven motion. For any given energy $E$, which determines the typical particle speed $v$, the conditions \eqref{condition1resub} and \eqref{condition2resub} may always be satisfied for sufficiently large $\omega$. Thus, in this rapid driving regime, we expect that the energy diffusion description will be valid over a certain range of particle energies $[ E_{min},E_{max}]$ for which these conditions hold. For a given $\omega$, the energy distribution $\eta$ for a statistical ensemble with particle energies in $[ E_{min},E_{max}]$ will evolve according to the Fokker-Planck equation \eqref{fp}. Of course, under the Fokker-Planck dynamics, this distribution will shift and spread in energy space, ultimately spreading outside of the interval $[ E_{min},E_{max}]$. At this point, the conditions \eqref{condition1resub} and \eqref{condition2resub} are not satisfied for all particles in the ensemble. In particular, we expect condition \eqref{condition2resub} to generally break down for sufficiently high energy particles, which are fast enough to travel a significant distance over a single period. What happens in this high energy regime, when particle speeds have increased so that $v/\omega \sim l$? As before, condition \eqref{condition1resub} (which remains valid at high energies) tells us that the forces $- \nabla U(\mathbf{x})$ and $\mathbf{F} (\mathbf{x}) \cos (\omega t)$ only weakly perturb the particle's velocity over a given period. However, the increased speed of the particle now means that the particle travels a distance of order $v/\omega \sim l$ during this period. Assuming for simplicity that $l$ is comparable to the particle's mean free path, we see that the velocity is only slightly altered between successive collisions: Many collisions with the wall must occur before the drive significantly perturbs the particle's velocity relative to the undriven motion. Similarly, the drive will only weakly affect the particle's position: We can estimate from \eqref{newton} that the drive will perturb the particle's position by an amount of order $F/(m \omega^2)$ during a period, which by \eqref{condition1resub} and $v/\omega \sim l$ is much smaller than $l$. Therefore, the energy diffusion description may still be valid at energies greater than $E_{max}$, since we can potentially treat the drive as a small perturbation on the undriven dynamics. With that said, our main focus in this paper is rapidly driven particles, for which the conditions \eqref{condition1resub} and \eqref{condition2resub} are \textit{both} satisfied. In particular, the expressions for $g_1$ and $g_2$ obtained in the next section are only valid in this regime. This process of energy diffusion may be understood in terms of the phenomenon of Floquet prethermalization \cite{elseetal2017,abaninetal2017,herrmannetal2017,mori2018,morietal2018,mallayyaetal2019,howelletal2019,machadoetal2019,rajaketal2019,machadoetal2020,rubio-abadal2020,pengetal2021,hodsonjarzynski2021}. Consider an ensemble of driven particles with initial energy $E_0$, for which the conditions \eqref{condition1resub} and \eqref{condition2resub} are satisfied. 
The energy evolution of this ensemble may be divided into three stages. First, since the system is only weakly perturbed by the drive, the particles in the ensemble will exhibit chaotic, ergodic motion at nearly constant energy $E_0$. These dynamics lead to a process of chaotic mixing, in which the ensemble is distributed microcanonically (see \eqref{micro}) over a surface of constant energy in phase space \cite{dorfman1999}. That is, the system thermalizes at energy $E_0$: This is the prethermal stage. Second, the particle's energy distribution $\eta$ will slowly shift and broaden, as energy diffusion occurs according to the Fokker-Planck equation \eqref{fp}. As a result of this energy spreading, conditions \eqref{condition1resub} and \eqref{condition2resub} will eventually not hold for a significant fraction of particles in the ensemble, as $\eta$ spreads outside the interval $[ E_{min},E_{max}]$. At this point, although the energy evolution may still be diffusive, the energy drift and diffusion rates will no longer be given by the expressions \eqref{g1} and \eqref{g2} in the next section. In this third and final stage, general plausibility arguments for the existence of Fermi acceleration in driven billiards (see, e.g., Fermi's original work \cite{fermi1949}) lead us to speculate that on-average energy growth will continue, at least for certain choices of billiards and driving forces. In particular, since the particle's displacement during a period can be comparable to the typical distance travelled between collisions, resonances between the particle motion and the drive may result in especially rapid energy growth. Because the phase space of a billiard particle is unbounded, this energy absorption may potentially continue without limit. \section{Drift and diffusion coefficients} \label{g1g2} We now derive expressions for the drift and diffusion coefficients $g_1$ and $g_2$, in the limit of large $\omega$. We calculate these quantities in terms of powers of the small parameter $\omega^{-1}$, and ultimately only retain terms of order $O(\omega^{-2})$, the lowest non-zero order. We compute $g_2$ in terms of the variance in energy acquired by an ensemble of particles, initialized in a microcanonical ensemble at $t=0$ and then subject to the rapid drive. Then, we use the fluctuation-dissipation relation \eqref{fd}, established in \cite{hodsonjarzynski2021}, which allows us to calculate $g_1$ from our knowledge of $g_2$. To compute $g_2$, suppose that the initial conditions of the particle at $t=0$ are sampled according to a microcanonical distribution at energy $\mathcal{E} = E_0$. In the microcanonical ensemble, particles are confined to a single energy shell in phase space (a surface of constant energy), and the distribution of particles on this shell is uniform with respect to the Liouville measure. The initial distribution $\rho_0 (\mathbf{x}_0,\mathbf{v}_0) = \rho_{E_0} (\mathbf{x}_0,\mathbf{v}_0)$ corresponding to this ensemble is given by \begin{equation} \label{micro} \rho_{E_0} (\mathbf{x}_0,\mathbf{v}_0) \equiv \frac{1}{\Sigma (E_0)} \delta \Big( \mathcal{E}(\mathbf{x}_0,\mathbf{v}_0) - E_0 \Big), \end{equation} \begin{equation} \label{dos} \Sigma(E) \equiv \int d^d \mathbf{x} d^d \mathbf{v} \, \delta \Big( \mathcal{E}(\mathbf{x},\mathbf{v}) - E \Big), \end{equation} \noindent where $\Sigma (E)$ is the density of states for the undriven system. 
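For a standard billiard ($U = 0$), sampling the microcanonical distribution \eqref{micro} is simple: positions are uniform over the billiard's interior, and velocity directions are isotropic at the fixed speed $\sqrt{2 E_0 / m}$. The following Python sketch does this for $d=2$ by rejection sampling; the indicator function \texttt{inside} and the bounding box are user-supplied assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_microcanonical(E0, m, inside, bbox, n):
    """Draw n phase-space points (x, v) from Eq. (micro) with U = 0, d = 2.

    bbox = (xmin, xmax, ymin, ymax) encloses the billiard; inside(x)
    returns True when x lies in the billiard's interior.
    """
    v_E = np.sqrt(2.0 * E0 / m)
    samples = []
    while len(samples) < n:
        x = rng.uniform([bbox[0], bbox[2]], [bbox[1], bbox[3]])
        if inside(x):                        # rejection step
            theta = rng.uniform(0.0, 2.0 * np.pi)
            v = v_E * np.array([np.cos(theta), np.sin(theta)])
            samples.append((x, v))
    return samples
\end{verbatim}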
Since all particles in this ensemble have energy $E_0$ at $t=0$, this initial distribution corresponds to an initial condition $\eta (E,0) = \delta (E - E_0)$ for the Fokker-Planck equation \eqref{fp}. We now allow the driven system to evolve for a time $\Delta t$, where $\Delta t$ is long enough that the energy evolution is diffusive (i.e., $\Delta t \gg \tau_C$), but short enough that the change in the particle's energy is still small. By the end of this time interval, the ensemble of particles will have acquired a variance in energy $ \mathrm{Var} (\mathcal{E}) \equiv \langle \mathcal{E}^2 \rangle - \langle \mathcal{E} \rangle^2 =\langle \left(\Delta \mathcal{E}\right)^2 \rangle - \langle \Delta \mathcal{E} \rangle^2 $, where $\langle ... \rangle$ denotes an average over the ensemble at $t = \Delta t$, and $\Delta \mathcal{E} \equiv \mathcal{E} - E_0$ is the energy change of the particle from $t=0$ to $t= \Delta t$. From the Fokker-Planck equation \eqref{fp}, given the initial condition $\eta(E,0)=\delta(E-E_0)$, it follows that \cite{gardiner1985} \begin{equation} \label{fpvar} \mathrm{Var} (\mathcal{E}) \approx g_2 (E_0,\omega) \Delta t. \end{equation} \noindent Therefore, to determine $g_2 (E_0,\omega)$ for any particular $E_0$, it is sufficient to calculate $\mathrm{Var} (\mathcal{E})$, with trajectories sampled according to the appropriate microcanonical distribution $\rho_{E_0} (\mathbf{x}_0,\mathbf{v}_0)$. This calculation may be summarized as follows, with details given below and in Appendix A. First, for a given trajectory in the ensemble over the time $\Delta t$, we evaluate the associated energy change $\Delta \mathcal{E}$. From the fact that the drive acts as a small perturbation for large $\omega$, it follows that the dominant contribution to $\Delta \mathcal{E}$ is associated with driving periods during which a collision occurs. These $O(\omega^{-1})$ contributions are given by \eqref{dEicollision2}. We then average over the ensemble to obtain $\mathrm{Var} (\mathcal{E})$. Since the energy changes associated with different collisions become uncorrelated in the rapid driving limit, this average simplifies to \eqref{varE2}, as shown in Appendix A. Finally, we express this result in terms of an integral over the billiard boundary, leading to the expression \eqref{g2} for $g_2$. To begin, let us consider $\Delta \mathcal{E}$ for a particular particle in the ensemble. We may view this energy change as a sum of the $M= \Delta t/T$ small energy changes that occur over each period of the drive (assuming, for simplicity, that $\Delta t$ is an integer multiple of the period $T$). For sufficiently small $T$, \textit{at most} one collision will occur over each driving period. This property is guaranteed for a typical trajectory by condition \eqref{condition2resub}. Therefore, in this regime, $\Delta \mathcal{E}$ is a sum of two contributions: Energy changes from periods with no collisions, and energy changes from periods with a single collision. We will examine these two possibilities in turn. First, suppose that no collision occurs during the $i^{\mathrm{th}}$ period, from $t=(i-1)T$ to $t = i T$, with associated energy change $\Delta \mathcal{E}_i$. 
If we integrate \eqref{power} over this period and perform an integration by parts, we find that the boundary terms vanish, and the resulting expression for $\Delta \mathcal{E}_i$ is: \begin{equation} \begin{split} \label{dEinocollision} \Delta \mathcal{E}_i &= -\omega^{-1} \int_{(i-1)T}^{i T} dt \, \frac{d}{d t} \left[ \mathbf{F} (\mathbf{x}_t) \cdot \mathbf{v}_t \right] \sin (\omega t) \\ &= -\omega^{-1} \int_{(i-1)T}^{i T} dt \, \Bigg[ \mathbf{v}_t \cdot D\mathbf{F} (\mathbf{x}_t) \mathbf{v}_t \Bigg. \\ &\quad \Bigg. - \frac{\nabla U (\mathbf{x}_t) \cdot \mathbf{F}(\mathbf{x}_t)}{m} + \frac{|\mathbf{F}(\mathbf{x}_t)|^2}{m} \cos (\omega t) \Bigg] \sin (\omega t) . \end{split} \end{equation} In moving from the first line to the second line, we have used the equations of motion \eqref{newton} to evaluate the derivative $d \left[ \mathbf{F} (\mathbf{x}_t) \cdot \mathbf{v}_t \right] /dt$. The symbol $D \mathbf{F} (\mathbf{x})$ denotes the Jacobian matrix for the function $\mathbf{F} (\mathbf{x})$, with matrix elements $[D\mathbf{F} (\mathbf{x})]_{ij} \equiv \partial F_i / \partial x_j$, where $x_i$ and $F_i$ are the $i^{\mathrm{th}}$ components of $\mathbf{x}$ and $\mathbf{F} (\mathbf{x})$. So far, this is exact.

Let us estimate the size of this quantity, in terms of orders of the small parameter $\omega^{-1}$. Since there is a factor of $\omega^{-1}$ in front of the integral in \eqref{dEinocollision}, and since we are integrating over a single period of duration $T = O(\omega^{-1})$, $\Delta \mathcal{E}_i$ is at most an $O(\omega^{-2})$ quantity. To approximate $\Delta \mathcal{E}_i$, we may replace $\mathbf{x}_t$ and $\mathbf{v}_t$ in the integrand by their values at the beginning of the period. Since $\mathbf{x}_t$ and $\mathbf{v}_t$ change over a period by an amount of order $O(\omega^{-1})$, the resulting expression for $\Delta \mathcal{E}_i$ is valid up to corrections of order $O(\omega^{-3})$. After this replacement, we are simply integrating the functions $\sin (\omega t)$ and $\cos (\omega t)\sin (\omega t)$ over a single period, and both integrals vanish. Thus, $\Delta \mathcal{E}_i$ is an $O(\omega^{-3})$ quantity. Of course, the number of periods in which no collision occurs will scale like $\omega$; thus, the total energy change associated with collisionless periods is of order $O(\omega^{-2})$.

The periods during which a collision takes place are more interesting. Suppose that the particle experiences $N$ collisions between $t=0$ and $t=\Delta t$, at times $t_1, t_2, \ldots, t_N$. If the $k^{\mathrm{th}}$ collision occurs during the $i^{\mathrm{th}}$ period, then integrating \eqref{power} over this period yields the associated energy change $\Delta \mathcal{E}_i$: \begin{equation} \begin{split} \label{dEicollision} \Delta \mathcal{E}_i &= \int_{(i-1)T}^{t_k} dt \, \mathbf{F} (\mathbf{x}_t) \cdot \mathbf{v}_t \cos (\omega t) \\ &\quad + \int_{t_k}^{i T} dt \, \mathbf{F} (\mathbf{x}_t) \cdot \mathbf{v}_t \cos (\omega t) . \end{split} \end{equation} Each integral above is over a fraction of the period, and is therefore of order $O(\omega^{-1})$. By the same logic that we used for the collisionless case, we may approximate $\mathbf{F}(\mathbf{x}_t)$ and $\mathbf{v}_t$ in the first integral by $\mathbf{F}_k$ and $\mathbf{v}_k$, their values instantaneously prior to the $k^{\mathrm{th}}$ collision.
Similarly, $\mathbf{F}(\mathbf{x}_t)$ and $\mathbf{v}_t$ in the second integral can be approximated by $\mathbf{F}_k$ and $\mathbf{v}_k^+$, where $\mathbf{v}_k^+$ is the particle's velocity immediately after the collision. The reflection law \eqref{reflection} tells us that $\mathbf{v}_k^+ = \mathbf{v}_k - 2 \left( \mathbf{v}_k \cdot \hat{\mathbf{n}}_k \right) \hat{\mathbf{n}}_k$, where $\hat{\mathbf{n}}_k$ is the normal to the wall at the point of collision. Upon making these substitutions, the resulting approximation for $\Delta \mathcal{E}_i$ is valid up to corrections of order $O(\omega^{-2})$. The integrals over $\cos (\omega t)$ are easily evaluated, and we obtain: \begin{equation} \label{dEicollision2} \Delta \mathcal{E}_i = 2 \omega^{-1}\left( \mathbf{F}_k \cdot \hat{\mathbf{n}}_k \right) \left( \mathbf{v}_k \cdot \hat{\mathbf{n}}_k \right) \sin (\omega t_k) + O(\omega^{-2}). \end{equation} \noindent Therefore, each collision that occurs is accompanied by a corresponding energy change of order $O(\omega^{-1})$ over the associated period, given by the above expression. Moreover, since the total energy change associated with \textit{collisionless} periods is of order $O(\omega^{-2})$, the energy changes corresponding to collisions are the dominant contribution to $\Delta \mathcal{E}$ for large $\omega$. After summing over all $N$ collisions to obtain $\Delta \mathcal{E}$, we can substitute this result into $ \mathrm{Var} (\mathcal{E}) = \langle \left(\Delta \mathcal{E}\right)^2 \rangle - \langle \Delta \mathcal{E} \rangle^2 $: \begin{widetext} \begin{equation} \label{varE0} \mathrm{Var} (\mathcal{E}) = 4 \omega^{-2} \Biggl< \left[\sum_{k=1}^N \left( \mathbf{F}_k \cdot \hat{\mathbf{n}}_k \right) \left( \mathbf{v}_k \cdot \hat{\mathbf{n}}_k \right) \sin (\omega t_k) \right]^2 \Biggr> - 4 \omega^{-2} \Biggl< \sum_{k=1}^N \left( \mathbf{F}_k \cdot \hat{\mathbf{n}}_k \right) \left( \mathbf{v}_k \cdot \hat{\mathbf{n}}_k \right) \sin (\omega t_k) \Biggr>^2 + O(\omega^{-3}). \end{equation} \end{widetext} This expression is computed in Appendix A. In this calculation, we find that the oscillating factors $\sin (\omega t_k)$ are uncorrelated with one another, and with the quantities $\left( \mathbf{F}_k \cdot \hat{\mathbf{n}}_k \right) \left( \mathbf{v}_k \cdot \hat{\mathbf{n}}_k \right)$, for large $\omega$. The phases $\omega t_k \, \mathrm{mod} \, 2 \pi$ may be thought of as effectively independent random variables, uniformly distributed on $[0,2 \pi )$. Intuitively, this lack of correlation arises because otherwise similar trajectories in the ensemble may have totally different values of $\sin (\omega t_k)$: Two nearby trajectories with even a small difference between the associated collision times $t_k$ will have a huge $O(\omega)$ difference in the value of $\omega t_k$, for large $\omega$. As a result, averages over the oscillating factors $\sin(\omega t_k)$ are found to vanish. The only non-vanishing terms in \eqref{varE0} are the ``diagonal'' terms in $ \langle \left(\Delta \mathcal{E}\right)^2 \rangle $, which include a factor of $\sin^2 (\omega t_k)$ that averages to $1/2$. We are left with: \begin{equation} \label{varE2} \mathrm{Var} (\mathcal{E}) = 2 \omega^{-2} \Biggl< \sum_{k=1}^N \left( \mathbf{F}_k \cdot \hat{\mathbf{n}}_k \right)^2 \left( \mathbf{v}_k \cdot \hat{\mathbf{n}}_k \right)^2 \Biggr>_0 + O(\omega^{-3}). 
\end{equation} \noindent Here, the subscript $0$ denotes that the average is now taken over an ensemble of \textit{undriven} trajectories, evolved with $\mathbf{F} (\mathbf{x}) = \mathbf{0}$. The error accrued by replacing the true driven trajectories with their undriven counterparts is of order $O(\omega^{-3})$, so we neglect it. Then, using standard techniques for evaluating ensemble averages in billiard systems, we may express this average as an integral over the billiard boundary. We simply present the results here; the details of this calculation are also found in Appendix A. Let $d S $ denote an infinitesimal $(d-1)$-dimensional patch of ``surface'' or ``hyper-area'' of the billiard wall, surrounding a location $\mathbf{x}$ on the wall. Such a patch has an associated outward-facing normal vector $\hat{\mathbf{n}} \equiv \hat{\mathbf{n}} (\mathbf{x})$, defined as in \eqref{reflection}, and an associated value of $\mathbf{F} \equiv \mathbf{F} (\mathbf{x})$. We may express $\mathrm{Var} (\mathcal{E})$ as an integral over all such patches: \begin{equation} \label{varE3} \mathrm{Var} (\mathcal{E}) = \frac{4 \omega^{-2} \Delta t}{d+1} \int dS \, \gamma_{E_0} v_{E_0}^2 \left( \mathbf{F} \cdot \hat{\mathbf{n}}\right)^2 + O (\omega^{-3}). \end{equation} Here, we define $v_E \equiv v_E(\mathbf{x})$ as \begin{equation} \label{defvE} v_E(\mathbf{x}) \equiv \begin{cases} \left[ 2 \left( E - U(\mathbf{x})\right)/m \right]^{1/2} & \text{if $U(\mathbf{x}) \leq E$} \\ 0 & \text{otherwise} \end{cases} \end{equation} which for $U(\mathbf{x}) \leq E$ is the speed of an undriven particle at position $\mathbf{x}$ with energy $E$. $\gamma_E \equiv \gamma_E (\mathbf{x})$ is the average collision rate per unit hyper-area of the billiard boundary for particles at position $\mathbf{x}$, averaged over undriven particles in the microcanonical ensemble at energy $E$. As explained in Appendix A, an explicit expression for $\gamma_E (\mathbf{x}) $ is given by \begin{equation} \label{gamma} \gamma_E (\mathbf{x}) = \frac{B_{d-1}}{m} \frac{v_E (\mathbf{x})^{d-1}}{\Sigma (E)}, \end{equation} \noindent where $B_n = \pi^{n/2} / \Gamma \left(\frac{n}{2} + 1 \right)$ is the hyper-volume of the unit ball in $n$-dimensional space, and where $\Sigma (E)$ is the density of states defined in \eqref{dos}. $\Gamma (s)$ is the gamma function, which coincides with the factorial $(s-1)!$ for positive integers $s$. Upon comparing \eqref{varE3} with \eqref{fpvar}, and relabelling $E_0$ as $E$, we obtain our final expression for $g_2$: \begin{equation} \label{g2} g_2 (E,\omega) = \frac{4 \omega^{-2}}{d+1} \, \int dS \, \gamma_E v_E^2 \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 . \end{equation} In this equation and in the remainder of this section, we suppress the $O(\omega^{-3})$ corrections. Notably, the above expression can be computed without any knowledge of the particle trajectories, and depends on $\mathbf{F} (\mathbf{x}) $ only via the value of this force at the boundary of the billiard. This special dependence on $\mathbf{F} (\mathbf{x}) $ is sensible, since we know that the dominant changes in the particle's energy are associated with collisions with the wall. Also, we emphasize that while the potential $U(\mathbf{x})$ does not appear explicitly in \eqref{g2}, $g_2$ does depend on $U(\mathbf{x})$ via the quantities $v_E (\mathbf{x})$ and $\gamma_E (\mathbf{x})$. 
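As an aside, \eqref{defvE}--\eqref{gamma} are straightforward to evaluate numerically once $\Sigma(E)$ is in hand. A small Python sketch, in which the density of states \texttt{Sigma\_E} is assumed to be supplied (e.g., by quadrature of \eqref{dos}):

\begin{verbatim}
import numpy as np
from scipy.special import gamma as gamma_fn

def unit_ball_volume(n):
    """B_n = pi^(n/2) / Gamma(n/2 + 1)."""
    return np.pi ** (n / 2.0) / gamma_fn(n / 2.0 + 1.0)

def v_E(x, E, m, U):
    """Local speed at energy E (Eq. defvE); zero where U(x) > E."""
    arg = 2.0 * (E - U(x)) / m
    return np.sqrt(arg) if arg > 0.0 else 0.0

def gamma_E(x, E, m, U, d, Sigma_E):
    """Collision rate per unit boundary hyper-area (Eq. gamma)."""
    return unit_ball_volume(d - 1) * v_E(x, E, m, U) ** (d - 1) / (m * Sigma_E)
\end{verbatim}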
To calculate the drift coefficient $g_1$, we use the following fluctuation-dissipation relation derived in \cite{hodsonjarzynski2021} for general chaotic Hamiltonian systems: \begin{equation} \label{fd} g_1(E,\omega) = \frac{1}{2 \Sigma (E)} \frac{\partial}{\partial E} \Big[ g_2(E,\omega) \Sigma(E) \Big]. \end{equation} This relation emerges as a consequence of Liouville's theorem. Substituting \eqref{defvE}--\eqref{g2} into \eqref{fd}, we arrive at our final expression for the drift coefficient: \begin{equation} \label{g1} g_1 (E,\omega) = \frac{2 \omega^{-2}}{m} \int dS \, \gamma_E \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 . \end{equation} \noindent This result implies that $g_1$ is always nonnegative (up to the $O(\omega^{-3})$ corrections), since $\gamma_E (\mathbf{x}) \geq 0$ for all $\mathbf{x}$ on the billiard boundary. From the Fokker-Planck equation \eqref{fp}, it follows that $d \langle \mathcal{E} \rangle / d t = \langle g_1 (\mathcal{E},\omega) \rangle$, where the ensemble average $\langle ... \rangle$ is given by $\langle f(\mathcal{E}) \rangle = \int dE \, \eta (E,t) f(E)$ for any function $f(\mathcal{E})$. Therefore, \eqref{g1} implies that the average energy of particles in the ensemble never decreases; that is, the system exhibits \textit{Fermi acceleration} on average.

Combined with the expressions \eqref{g1} and \eqref{g2} for $g_1$ and $g_2$, the Fokker-Planck equation \eqref{fp} now fully characterizes the diffusive dynamics of the particle's energy under high frequency driving. Note that these expressions are only valid for energies in the range $[ E_{min}, E_{max}]$, for which conditions \eqref{condition1resub} and \eqref{condition2resub} both hold. For energies above $E_{max}$, the condition \eqref{condition2resub} breaks down, and the $O(\omega^{-3})$ corrections can no longer be ignored. Also, as mentioned previously, all of the above arguments and calculations may also be generalized to a broader class of periodic driving forces.

In the remainder of this section, we set $U(\mathbf{x})=0$ in order to evaluate $g_1$ and $g_2$ for a standard billiard. In this case, the undriven particle maintains a constant speed $v_E=\sqrt{2E/m}$, independent of position. This greatly simplifies the calculation of both the density of states $\Sigma(E)$ and the collision rate $\gamma_E (\mathbf{x})$: note that the integral \eqref{dos} factorizes into two $d$-dimensional integrals, over position and velocity. We obtain \begin{equation} \Sigma (E) = d B_d \dfrac{V v_E^{d-2}}{m},\quad \gamma_E = \dfrac{1}{d} \dfrac{B_{d-1}}{B_d} \dfrac{v_E}{V}, \end{equation} where $V$ is the $d$-dimensional hyper-volume of space enclosed by the billiard. Our expressions for the drift and diffusion coefficients now become \begin{align} \label{g1noU} g_1 (E,\omega) &= \frac{2 \omega^{-2} v_E}{m \lambda} \, \frac{1}{S} \int dS \, \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 \qquad ( U = 0) \\ \label{g2noU} g_2 (E,\omega) &= \frac{4 \omega^{-2} v_E^3}{(d+1)\lambda} \, \frac{1}{S} \int dS \, \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 \qquad ( U = 0) \end{align} where $S$ denotes the $(d-1)$-dimensional hyper-area of the billiard boundary, and $\lambda \equiv d \dfrac{B_d}{B_{d-1}} \dfrac{V}{S}$ is the mean free path (the average distance between collisions) of the undriven billiard particle \cite{chernov1997}. In \eqref{g1noU} and \eqref{g2noU}, the dependence of $g_1$ and $g_2$ on the particle energy $E$ enters only through the quantity $v_E=\sqrt{2E/m}$.
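For concreteness, the boundary average $(1/S) \int dS \, \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2$ appearing in \eqref{g1noU} and \eqref{g2noU} can be approximated by discretizing a two-dimensional boundary into short segments. A sketch under our own conventions, with the counterclockwise polyline \texttt{boundary\_pts} and the field \texttt{F\_of\_x} supplied by the user:

\begin{verbatim}
import numpy as np

def coefficients_U0(E, omega, m, F_of_x, boundary_pts, lam, d=2):
    """Evaluate Eqs. (g1noU) and (g2noU) for a 2D standard billiard.

    boundary_pts: (n+1, 2) array tracing the wall counterclockwise;
    lam: mean free path of the undriven billiard.
    """
    v_E = np.sqrt(2.0 * E / m)
    seg = np.diff(boundary_pts, axis=0)          # boundary segments
    lengths = np.linalg.norm(seg, axis=1)
    # outward unit normals (rotate each tangent by -90 degrees)
    n_hat = np.stack([seg[:, 1], -seg[:, 0]], axis=1) / lengths[:, None]
    mids = 0.5 * (boundary_pts[:-1] + boundary_pts[1:])
    Fn2 = np.array([np.dot(F_of_x(x), n) ** 2 for x, n in zip(mids, n_hat)])
    S = lengths.sum()
    avg = np.sum(Fn2 * lengths) / S              # (1/S) * int dS (F.n)^2
    g1 = 2.0 * v_E * avg / (m * lam * omega**2)
    g2 = 4.0 * v_E**3 * avg / ((d + 1) * lam * omega**2)
    return g1, g2
\end{verbatim}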
Focusing specifically on energy absorption, we obtain, using the relation $d \langle \mathcal{E} \rangle / d t = \langle g_1 (\mathcal{E},\omega) \rangle$, \begin{equation} \frac{d\langle{\mathcal E}\rangle}{dt} = \frac{2 \bar{v}(t)}{m\lambda \omega^2} \frac{1}{S} \int dS \, \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 \qquad ( U = 0) \end{equation} where $\bar{v}(t)\equiv \int dE\,\eta(E,t)\, v_E$ is the mean particle speed at time $t$. Thus the average rate of energy absorption is proportional to the mean particle speed and inversely proportional to the square of the driving frequency, with a constant of proportionality determined by the particle mass, the shape and dimensionality of the billiard, and the driving field $\mathbf{F}(\mathbf{x})$. For a three-dimensional billiard, this result becomes \begin{equation} \frac{d\langle{\mathcal E}\rangle}{dt} = \frac{\bar{v}(t)}{2 m \omega^2 V} \,\int dS \, \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 . \qquad ( U = 0, d=3) \end{equation} This expression resembles the {\it wall formula}, a semiclassical estimate of dissipation in low-energy nuclear processes, which gives a dissipation rate proportional to mean particle speed, with a constant of proportionality that includes a surface integral over the boundary of the nucleus; see Eq. (1.2) of Ref.~\cite{blockietal1978}. This resemblance is not surprising, since in both cases the system's energy evolves via an accumulation of small changes, sometimes positive, sometimes negative, occurring at collisions between the particle and the billiard boundary. In fact, the wall formula can be derived within an energy diffusion approach analogous to the one developed above~\cite{jarzynski1993}.

\section{Numerical results} \label{numerical}

We now present numerical simulation results that corroborate our calculations. We consider the special case of a particle in a two-dimensional ``clover'' billiard (see Figure \ref{fig:clover}), subject only to a time-periodic, spatially uniform force. Since a \textit{free} particle in the clover billiard is known \cite{jarzynski1993} to exhibit chaotic and ergodic motion, this system satisfies all the assumptions of our paper, as long as the drive is sufficiently rapid. Specifically, in the equations of motion \eqref{newton}, we set $U(\mathbf{x}) = 0$, and take $\mathbf{F} (\mathbf{x}) = \mathbf{F} $ to be independent of position. This special case is particularly amenable to simulation, since the motion of the particle between collisions may be computed exactly. Moreover, as described in Appendix B, the Fokker-Planck equation admits an explicit analytical solution for this choice of $U(\mathbf{x})$ and $\mathbf{F}(\mathbf{x})$. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{cloverfinal.png} \caption{Diagram of the clover billiard, constructed from six mutually tangent circles. The billiard boundary is given by the solid line. The inner circles have radius $R_1 = 1$, and the outer circles have radius $R_2=2.$} \label{fig:clover} \end{figure} For this system, we calculate the evolution of the energy distribution $\eta (E,t)$ in two ways: By directly evolving an ensemble of particle trajectories according to \eqref{newton} and \eqref{reflection}, and by solving the Fokker-Planck equation \eqref{fp}. If the energy diffusion description is accurate, then the energy distributions obtained in both cases will coincide. We present the results of these computations here, and leave the details of our calculations to Appendix B.
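As an aside, the exact between-collision motion referred to above follows from integrating \eqref{newton} twice with $U = 0$ and constant $\mathbf{F}$. A Python sketch of this free flight (our own illustration; collision detection against the clover boundary is omitted):

\begin{verbatim}
import numpy as np

def free_flight(x0, v0, t0, t, m, F_vec, omega):
    """Exact solution of m dv/dt = F cos(wt), with U = 0 and no collision
    between times t0 and t.  One integration gives
        v(t) = v0 + (F/(m w)) [sin(wt) - sin(w t0)],
    and a second integration gives x(t) below.
    """
    A = F_vec / (m * omega)
    v = v0 + A * (np.sin(omega * t) - np.sin(omega * t0))
    x = (x0 + (v0 - A * np.sin(omega * t0)) * (t - t0)
            + (A / omega) * (np.cos(omega * t0) - np.cos(omega * t)))
    return x, v
\end{verbatim}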
To test our model, we evolve an ensemble of driven particles with mass $m=1$ in the clover billiard, with $R_1 = 1$ and $R_2 = 2$ (see Figure \ref{fig:clover}). The mean free path for particles in this billiard is $\lambda \approx 2.610$, as shown in Appendix B. The particles are initialized at $t=0$ with speed $v_0=1$, in a microcanonical ensemble at energy $E_0 = m v_0^2 /2 = 1/2$. We set $\mathbf{F} = F(\hat{\mathbf{x}} + \hat{\mathbf{y}})/\sqrt{2}$, where $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$ are the unit vectors for the coordinate system in Figure \ref{fig:clover}, and choose $F = |\mathbf{F}| = 10$. We run simulations for a range of driving frequencies $\omega$, with a focus on the high-frequency driving regime. First, we verify the validity of the Fokker-Planck equation. For various values of $\omega$, we evolve an ensemble of $N=10^5$ driven particles, and then compare the energy distribution of this ensemble with the energy distribution obtained by solving the Fokker-Planck equation. The plots in Figures \ref{fig:histw40pi} and \ref{fig:histw320pi} illustrate this comparison at times $t=10$, $100$, and $1000$, for driving frequencies $\omega = 40 \pi$ and $\omega = 320 \pi$ (note that the conditions \eqref{condition1resub} and \eqref{condition2resub} are satisfied for these parameter choices). We find close agreement between the true energy distribution (represented by the histograms) and the Fokker-Planck energy distribution (the solid lines). \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{hist_w_40pi.png} \caption{Evolution of an ensemble starting with energy $E_0 =1/2$, with $F=10$ and $\omega = 40 \pi$. The three snapshots are captured at $t=10$, $t=100$, and $t=1000$. The histograms are populated from the numerical simulations, and the solid lines are the solution to the Fokker-Planck equation.} \label{fig:histw40pi} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{hist_w_320pi.png} \caption{ Same as Fig.~\ref{fig:histw40pi}, but with $\omega=320 \pi$, and with a different scaling of the axes.} \label{fig:histw320pi} \end{figure*} Second, we look specifically at the ensemble mean $\langle \Delta \mathcal{E}\rangle$ and variance $\mathrm{Var} (\Delta \mathcal{E})$ of the energy change $\Delta \mathcal{E}$. If a microcanonical ensemble of initial conditions at energy $E_0$ is evolved for a short time $\Delta t$, then the Fokker-Planck equation predicts that $\langle \Delta \mathcal{E}\rangle \approx g_1 (E_0, \omega) \Delta t$ and $\mathrm{Var} (\Delta \mathcal{E}) \approx g_2 (E_0, \omega) \Delta t$ \cite{gardiner1985}. Here, $\Delta t$ must be longer than the correlation timescale associated with the particle's undriven motion, but short enough that the relative change in the energy of any particle in the ensemble is still very small. To test this theoretical result, we evolve an ensemble of $N=10^6$ driven particles for a time $\Delta t = 20$, and then compute the resulting values of $\langle \Delta \mathcal{E} \rangle$ and $\mathrm{Var} (\Delta \mathcal{E})$. We repeat this for a range of driving frequencies from $\omega = 10 \pi$ to $\omega =2560 \pi$, and then plot $\langle \Delta \mathcal{E} \rangle$ and $\mathrm{Var} (\Delta \mathcal{E})$ versus $\omega$ in Figure \ref{fig:deltaEVarEvsw}. 
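Schematically, this second test reduces to the short loop below. It is a sketch only, not the production code behind Figure \ref{fig:deltaEVarEvsw}: it assumes the \texttt{sample\_microcanonical} sketch given earlier and a hypothetical trajectory propagator \texttt{evolve} implementing \eqref{newton} and \eqref{reflection}.

\begin{verbatim}
import numpy as np

def drift_diffusion_estimate(omega, evolve, inside, bbox,
                             E0=0.5, m=1.0, dt_total=20.0, n=10**6):
    """Estimate <Delta E> and Var(Delta E) for comparison against
    g1(E0, omega)*dt_total and g2(E0, omega)*dt_total.

    evolve(x0, v0, t_final, omega) -> (x, v) is an assumed propagator;
    sample_microcanonical is the sketch given earlier.
    """
    ensemble = sample_microcanonical(E0, m, inside, bbox, n)
    dE = np.empty(n)
    for i, (x0, v0) in enumerate(ensemble):
        x, v = evolve(x0, v0, dt_total, omega)
        dE[i] = 0.5 * m * np.dot(v, v) - E0
    return dE.mean(), dE.var()
\end{verbatim}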
For sufficiently large $\omega$, the true values of $\langle \Delta \mathcal{E} \rangle$ and $\mathrm{Var} (\Delta \mathcal{E})$ are in good agreement with the theoretical predictions $\langle \Delta \mathcal{E}\rangle \approx g_1 (E_0, \omega) \Delta t$ and $\mathrm{Var} (\Delta \mathcal{E}) \approx g_2 (E_0, \omega) \Delta t$, where $g_1$ and $g_2$ are given by the formulas \eqref{g1noU} and \eqref{g2noU}. Note that for large $\omega$, the error bars in Figure \ref{fig:deltaEVarEvsw} associated with $\langle \Delta \mathcal{E} \rangle$ become very large. This is because the fluctuations in $\Delta \mathcal{E}$ about its average are on the order of $\sqrt{\mathrm{Var} (\Delta \mathcal{E})} = O(\omega^{-1})$, while $\langle \Delta \mathcal{E} \rangle = O(\omega^{-2})$ itself is much smaller. \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{deltaE_and_varE_vs_w.png} \caption{$\langle \Delta \mathcal{E} \rangle $ and $\mathrm{Var}(\Delta \mathcal{E})$ versus $\omega$, for an initial ensemble with energy $E_0 = 1/2$, with $F = 10$ and $\Delta t = 20$. The points denote results of the numerical simulations, and the solid line corresponds to the theoretical predictions given by \eqref{g1noU} and \eqref{g2noU}.} \label{fig:deltaEVarEvsw} \end{figure*}

We note that the value of $F = 10$ corresponds to a ``strong'' driving force, in the following sense. Suppose that we set $\omega = 0$, so that the driving force is time-\textit{independent}, and then estimate the change in a particle's energy as it moves across the billiard. In the $\omega = 0$ case, the particle simply experiences free-fall within the billiard, with a uniform gravitational field pointing in the direction of $\mathbf{F} = F(\hat{\mathbf{x}} + \hat{\mathbf{y}})/\sqrt{2}$. If we initialize the particle on one side of the billiard and let it ``fall'' to the other side, then the (kinetic) energy gained by the particle during its descent will be given by $\Delta E = F \Delta x$, where $\Delta x$ is the distance that the particle moves in the direction of $\mathbf{F}$. $\Delta x$ will be on the order of the mean free path $\lambda \approx 2.610$, and so we find $\Delta E \sim 26$. This energy change is more than an order of magnitude larger than the particle's initial energy $E_0 = 1/2$. Clearly, when $\omega = 0$ (or generally, if $\omega$ is small), the driving force has a very large effect on the particle trajectories, and therefore we should not expect an energy diffusion description to apply. For $F=10$, we should only expect energy diffusion for sufficiently large values of $\omega$. Testing our model with this value of $F$ thus ensures that energy diffusion is really a consequence of rapid driving, and not simply the result of a weak driving force.

\section{Discussion} \label{discussion}

We have fully characterized the diffusive evolution of energy in chaotic, ergodic billiard systems subject to a rapid periodic driving force. We obtained the associated energy drift and diffusion rates up to second order in the small parameter $\omega^{-1}$, and corroborated our theoretical predictions with numerical simulations of a driven particle in a clover-shaped billiard. We now conclude with a discussion of connections between this paper and other work, and of possible future directions for research.

First, as described in Section \ref{diffusion}, our model is a detailed case study of \textit{Floquet prethermalization}, and its ultimate breakdown due to energy absorption.
Floquet prethermalization, a phenomenon which has been documented in a range of classical and quantum systems, occurs when a periodically driven system relaxes to a long-lived thermal state with respect to an effective Hamiltonian \cite{elseetal2017,abaninetal2017,herrmannetal2017,mori2018,morietal2018,mallayyaetal2019,howelletal2019,machadoetal2019,rajaketal2019,machadoetal2020,rubio-abadal2020,pengetal2021,hodsonjarzynski2021}. As described in Section \ref{diffusion}, the evolution of the driven billiard particle proceeds in three stages: Prethermalization at the initial energy $E_0$, then slow energy absorption and diffusion, and finally the potential breakdown of the rapid driving assumption and the possibility of rapid, unbounded energy absorption. However, one characteristic sets the rapidly driven billiard apart from many other Floquet prethermal systems: The $O(\omega^{-2})$ scaling of the energy absorption rate $g_1$. In contrast to this behavior, for a variety of systems subject to rapid periodic driving, previous studies have revealed that prethermal energy absorption rates are \textit{exponentially} small in the driving frequency $\omega$ \cite{abaninetal2015,morietal2016,kuwaharaetal2016,elseetal2017,abaninetal2017,mori2018,howelletal2019,machadoetal2019,tranetal2019,rubio-abadal2020,pengetal2021,hodsonjarzynski2021}.

We can understand this discrepancy by reviewing the general account of energy diffusion for Hamiltonian systems under rapid periodic driving, established in \cite{hodsonjarzynski2021}. In that work, the energy drift and diffusion rates are related to the Fourier transform of an autocorrelation function for the undriven system, evaluated at the drive frequency $\omega$. In \textit{smooth} Hamiltonian systems, this Fourier transform decays faster than any power of $\omega^{-1}$ for large $\omega$ \cite{bracewell1978}, consistent with an energy absorption rate that is exponentially small in $\omega$. However, for a billiard, the discontinuous nature of the collisions with the wall produces a cusp in the relevant autocorrelation function at $t=0$, causing the associated Fourier transform to decay like $\omega^{-2}$. (For instance, the cusped function $e^{-|t|/\tau_C}$ has Fourier transform $2 \tau_C / (1 + \omega^2 \tau_C^2) = O(\omega^{-2})$, whereas a smooth Gaussian correlation function has a Gaussian transform that decays faster than any power of $\omega^{-1}$.) We therefore expect the drift and diffusion coefficients for a billiard to scale like $\omega^{-2}$ for large $\omega$, as verified by the formulas \eqref{g1} and \eqref{g2}.

Our results are also situated in an extensive literature on forced billiards, which have been proposed as models for phenomena ranging from electrical conduction \cite{chernovetal1993,chernovetal2013}, to relativistic charged particle dynamics \cite{veltricarbone2004}, to nuclear dissipation \cite{blockietal1978}. Billiard systems may either be driven via an external force applied between collisions, as in the present paper, or via time-dependence of the billiard walls. In the latter scenario, the billiard boundary is deformed and shifted as a function of time according to a pre-specified schedule, and changes in the particle's energy are induced by collisions with the moving wall. For a variety of models, it has been demonstrated that such systems are susceptible to \textit{Fermi acceleration}: The particle exhibits a statistical bias towards energy-\textit{increasing} collisions, leading to a systematic growth of the (average) energy \cite{fermi1949,ulam1961,barnettetal2001,karlisetal2006,gelfreichturaev2008,karlisetal2012,batistic2014}.
In particular, diffusive energy spreading via this mechanism has been observed for certain models \cite{jarzynski1993,dettmannleonel2013,demersjarzynski2015}. There is a natural correspondence between billiard systems with time-dependent boundaries and our present model. In Section \ref{diffusion}, we noted that over a single period, the driving force perturbs the velocity of a billiard particle by an amount $\approx \mathbf{F}(\mathbf{x}) \sin (\omega t)/(m \omega)$, and its position by $\approx -\mathbf{F}(\mathbf{x}) \left[\cos (\omega t) - 1 \right]/(m \omega^2)$. Evidently, the particle's motion over a period is well-approximated as small, sinusoidal oscillations about a corresponding undriven trajectory of the particle. Therefore, we can imagine moving to an oscillating reference frame, wherein the particle exhibits approximately undriven motion, and the \textit{walls} perform small, rapid oscillations. Accordingly, we hypothesize that for any rapidly driven billiard satisfying the assumptions of this paper, there is a particular billiard with oscillating boundaries which exhibits energy diffusion with the same drift and diffusion coefficients. For the special case of a standard billiard, $U(\mathbf{x})=0$, we can confirm this correspondence by comparing our results to those of \cite{demersjarzynski2015}, where energy diffusion is established for billiards in the ``quivering limit,'' wherein the walls of a billiard undergo small, rapid periodic oscillations. Under this framework, it is straightforward to verify that if each point on the boundary of a quivering chaotic billiard oscillates about its mean position $\mathbf{x}$ with time-dependence $\mathbf{x} + \mathbf{F}(\mathbf{x}) \cos (\omega t)/(m \omega^2)$, then the associated drift and diffusion coefficients are exactly those predicted in our model for a standard billiard subject to the force $\mathbf{F}(\mathbf{x}) \cos (\omega t)$. It would be interesting to see whether a similar correspondence is valid in the general case, for $U(\mathbf{x}) \neq 0$.

The results in this paper are also relevant to many-particle systems. To see this, consider a gas of $N$ particles of mass $m$ in a $d$-dimensional billiard cavity, with positions $\mathbf{x}_1, \ldots, \mathbf{x}_N$ and velocities $\mathbf{v}_1, \ldots, \mathbf{v}_N$. Suppose that these particles interact via some potential $U_{\mathrm{int}} (\{\mathbf{x}_i\})$, and are subject to a driving force $\mathbf{F}_{\mathrm{int}} (\{\mathbf{x}_i\}) \cos (\omega t)$. If we collect the particle positions and velocities into two $(d \times N)$-dimensional vectors $\mathbf{X} \equiv (\mathbf{x}_1, \ldots, \mathbf{x}_N)$ and $\mathbf{V} \equiv (\mathbf{v}_1, \ldots, \mathbf{v}_N)$, we find that these vectors evolve according to a $(d \times N)$-dimensional version of Newton's law \eqref{newton}. Moreover, when a particle collides with the wall, $\mathbf{V}$ is updated according to a reflection law analogous to \eqref{reflection}, which only reverses the components of $\mathbf{V}$ associated with the colliding particle. Therefore, we see that a many-particle billiard in $d$-dimensional space is mathematically equivalent to a single-particle billiard in $(d \times N)$-dimensional space, where the boundary of the $(d \times N)$-dimensional billiard is given by all points in $\mathbf{X}$-space which correspond to having one or more particles on the $d$-dimensional billiard boundary.
This equivalence broadly implies that our results can be extended to many-particle interacting billiards, although the detailed consequences of this equivalence remain to be explored. Finally, much work has been devoted to understanding energy absorption in periodically driven quantum systems \cite{dalessiorigol2014,lazaridesetal2014,abaninetal2015,ponteetal2015a,rehnetal2016,morietal2016,kuwaharaetal2016,abaninetal2017,notarnicolaetal2018,machadoetal2019,tranetal2019}. It is worth asking how energy diffusion in the classical billiards studied in the present paper might provide insight into the energy dynamics of analogous quantum systems. In accordance with the correspondence principle, we might anticipate that in the semiclassical limit (Planck's constant $h \to 0$), the energy of a rapidly periodically driven, quantized chaotic billiard should evolve diffusively. Indeed, this quantum-classical correspondence may be established \cite{cohen2000,elyutin2006,hodsonjarzynski2021} if one assumes Fermi's golden rule, and invokes semiclassical estimates \cite{feingoldperes1986,wilkinson1987} for the matrix elements of classically chaotic systems. However, it is unclear how to definitively demonstrate such a correspondence starting from unitary quantum dynamics, although much research has been devoted to this problem, particularly with the aid of random matrix theory models \cite{wilkinson1988,wilkinsonaustin1995,cohen2000,cohenkottos2000,elyutin2006}. It may also be fruitful to analyze the quantum analogue of our chaotic billiard with the help of the Floquet-Magnus expansion, which allows the evolution of a system under rapid periodic driving to be expressed perturbatively in powers of $\omega^{-1}$ \cite{bukovetal2015,morietal2018}. It would be interesting to see whether there is a correspondence between our analysis and a Floquet-Magnus approach. \section*{Acknowledgements} We acknowledge financial support from the DARPA DRINQS program (D18AC00033), and we thank an anonymous referee for their stimulating and helpful comments. \renewcommand{\theequation}{A\arabic{equation}} \setcounter{equation}{0} \setcounter{subsection}{0} \section*{Appendix A} \label{appendixA} Here, we calculate the variance \eqref{varE3} from the expression \eqref{varE0}. We begin by computing the second term in \eqref{varE0}, corresponding to the square of $\langle \Delta \mathcal{ E} \rangle$. Defining the shorthand $a_k \equiv \left( \mathbf{F}_k \cdot \hat{\mathbf{n}}_k \right) \left( \mathbf{v}_k \cdot \hat{\mathbf{n}}_k \right)$, we have: \begin{equation} \label{deltaEav} \langle \Delta \mathcal{E} \rangle = 2 \omega^{-1} \Biggl< \sum_{k=1}^N a_k \sin (\omega t_k) \Biggr> + O(\omega^{-2}). \end{equation} The $k^{\mathrm{th}}$ term in this sum depends on $a_k$ and $t_k$, which are ultimately determined by the initial conditions $(\mathbf{x}_0, \mathbf{v}_0)$ for each particle in the ensemble. Since $(\mathbf{x}_0, \mathbf{v}_0)$ is randomly sampled according to the microcanonical distribution \eqref{micro}, $a_k$, $t_k$, and $N$ are all random variables. Therefore, we may express the average of each term as an average with respect to $P_k(a_k,t_k,N)$, the joint probability distribution for $a_k$, $t_k$, and $N$. 
By the rules of conditional probability, $P_k(a_k,t_k,N)$ may be decomposed as \begin{equation} \label{bayes} P_k(a_k,t_k,N ) = P_k (a_k, N) P_k (t_k | a_k, N), \end{equation} \noindent where $P_k (a_k, N )$ is the joint probability distribution for $a_k$ and $N$, and $P_k (t_k | a_k, N)$ is the probability distribution for $t_k$, conditioned on particular values of $a_k$ and $N$. For the $k^{\mathrm{th}}$ term in \eqref{deltaEav}, we then compute the average by summing over all possible values of $N$, and integrating over all values of $a_k$ and $t_k$: \begin{widetext} \begin{equation} \label{deltaEav2} \langle \Delta \mathcal{E} \rangle = 2 \omega^{-1} \sum_{N=0}^{\infty} \sum_{k=1}^N \int d a_k \, P_k (a_k,N ) \, a_k \int d t_k \, P_k (t_k | a_k, N) \sin (\omega t_k) + O(\omega^{-2}). \end{equation} \end{widetext} Recall that the quantities being averaged over are associated with trajectories in an ensemble of \textit{driven} particles. However, for large values of $\omega$, each driven trajectory is only weakly perturbed from its undriven counterpart: The trajectory evolved from the same initial condition $(\mathbf{x}_0, \mathbf{v}_0)$, but with $\mathbf{F} (\mathbf{x}) = \mathbf{0}$. So, to leading order in $\omega^{-1}$, we may replace $P_k (a_k,N)$ with $P_k^0 (a_k,N)$, the joint distribution for $a_k$ and $N$ in the absence of driving. Similarly, we replace $P_k (t_k | a_k, N)$ with $P_k^0 (t_k | a_k, N)$, the conditional distribution for $t_k$ in the absence of driving. Importantly, these new undriven distributions are entirely independent of $\omega$, since they are completely determined by the dynamics of the undriven trajectories. Assuming that these undriven distributions differ from their driven counterparts by an amount of order $O(\omega^{-1})$, we obtain: \begin{widetext} \begin{equation} \label{deltaEav3} \langle \Delta \mathcal{E} \rangle = 2 \omega^{-1} \sum_{N=0}^{\infty} \sum_{k=1}^N \int d a_k \, P_k^0 (a_k,N) \, a_k \int d t_k \, P_k^0 (t_k | a_k, N) \sin (\omega t_k) + O(\omega^{-2}). \end{equation} \end{widetext} Finally, consider the inner integral over $t_k$. The integrand is the product of $P_k^0 (t_k | a_k, N)$, which is independent of $\omega$, and $\sin (\omega t_k)$, an oscillatory function with zero time average. It is straightforward to show that integrals of this form approach zero like $\omega^{-1}$ or faster for large $\omega$. Therefore, this integral is of order $O(\omega^{-1})$ for each $k$, and we are left with \begin{equation} \label{deltaEav4} \langle \Delta \mathcal{E} \rangle = O(\omega^{-2}). \end{equation} \noindent This implies that $\mathrm{Var} (\mathcal{E}) = \langle( \Delta \mathcal{E})^2\rangle + O(\omega^{-4})$. We may now express \eqref{varE0} as: \begin{equation} \label{varE} \mathrm{Var} (\mathcal{E}) = 4 \omega^{-2} \Biggl< \sum_{k=1}^N \sum_{l=1}^N a_k a_l \sin (\omega t_k) \sin (\omega t_l) \Biggr> + O(\omega^{-3}). \end{equation} We evaluate this average similarly to how we computed $\langle \Delta \mathcal{E} \rangle$. The logic is the same: To leading order, the average may be calculated with respect to the ensemble of \textit{undriven} trajectories. Then, for $l \neq k$, the integrals over $t_k$ and $t_l$ in the average are of order $O(\omega^{-1})$, because of the oscillating factor $\sin (\omega t_k) \sin(\omega t_l)$ in the integrand. The contribution of the $l \neq k$ terms to $\mathrm{Var} (\mathcal{E})$ is therefore of order $O(\omega^{-3})$, because of the factor $\omega^{-2}$ outside the sum. 
For the $l = k$ terms, we note that $\sin (\omega t_k) \sin (\omega t_l) = \sin^2 (\omega t_k) = \frac{1}{2} - \frac{1}{2} \cos (2 \omega t_k)$, the sum of a constant term and an oscillatory term. The contributions to $\mathrm{Var} (\mathcal{E})$ corresponding to the oscillatory term $- \frac{1}{2} \cos (2 \omega t_k)$ are also of order $O(\omega^{-3})$. Thus, the only remaining contribution to $\mathrm{Var} (\mathcal{E})$ is given by \begin{equation} \label{varE2A} \mathrm{Var} (\mathcal{E}) = 2 \omega^{-2} \Biggl< \sum_{k=1}^N a_k^2 \Biggr>_0 + O(\omega^{-3}), \end{equation} \noindent where we have added the subscript $0$ to emphasize that the average is over the ensemble of undriven particles. Recalling that $a_k = \left( \mathbf{F}_k \cdot \hat{\mathbf{n}}_k \right) \left( \mathbf{v}_k \cdot \hat{\mathbf{n}}_k \right)$, we see that this is \eqref{varE2} in the main text. We briefly pause to interpret this result. In evaluating $\langle( \Delta \mathcal{E})^2\rangle$ and $\langle\Delta \mathcal{E}\rangle$, we have seen that for large $\omega$, the oscillatory factors $\sin (\omega t_k)$ average to zero. These factors become effectively uncorrelated with one another, and with the quantities $a_k$. Intuitively, this lack of correlation arises because otherwise similar trajectories in the ensemble may have totally different values of $\sin (\omega t_k)$: Two nearby trajectories with even a small difference between their associated collision times $t_k$ will have a huge $O(\omega)$ difference in the corresponding values of $\omega t_k$. As a result, the phases $\omega t_k \, \mathrm{mod} \, 2 \pi$ effectively become independent random variables, uniformly distributed on $[0,2 \pi )$. To reiterate, the average \eqref{varE2A} is taken over a microcanonical ensemble of initial conditions with energy $E_0$, evolved for a time $\Delta t$ according to the \textit{undriven} equations of motion. The sum $\sum_{k=1}^N a_k^2$ is over all collisions which occur from $t=0$ to $t=\Delta t$. Our strategy will be to decompose this sum into many small contributions, evaluate the average of each contribution, and then add up all these results. Specifically, let us divide up the boundary of the billiard into infinitesimal patches, indexed by a variable $l$: Each patch is centered on a point $\mathbf{x}^{(l)}$ on the boundary, and has a $(d-1)$-dimensional hyper-area $dS $. Moreover, we partition velocity space into infinitesimal hypercubes of hyper-volume $d^d \mathbf{v}$ labeled by $m$, each centered on a velocity point $\mathbf{v}^{(m)}$. Finally, we divide up the time interval from $t=0$ to $t=\Delta t$ into infinitesimal segments of duration $d t$, beginning at successive times $t^{(n)} = n \,dt = 0,\, dt,\, 2 \,dt \,... $. Let us now sum $a_k^2$, \textit{only} counting collisions associated with a particular choice of the indices $l$, $m$, and $n$: Those collisions which occurred on the patch containing $\mathbf{x}^{(l)}$, with incoming velocity in the velocity cell corresponding to $\mathbf{v}^{(m)}$, between the times $t^{(n)}$ and $t^{(n)}+dt$. If we denote an average over such a restricted sum with $\langle ... \rangle_{0,l,m,n}$, then $\mathrm{Var} (\mathcal{E})$ is just a sum over such averages: \begin{equation} \label{X2} \mathrm{Var} (\mathcal{E}) = 2 \omega^{-2} \sum_{l,m,n} \Biggl< \sum_{k=1}^N a_k^2 \Biggr>_{0,l,m,n} + O(\omega^{-3}). \end{equation} For a given choice of $l$, $m$, and $n$, what is this average? 
Well, for all collisions associated with a particular $l$ and $m$, we have that $a_k^2 \approx \left[ \mathbf{F} (\mathbf{x}^{(l)}) \cdot \hat{\mathbf{n}} (\mathbf{x}^{(l)})\right]^2 \left[ \mathbf{v}^{(m)} \cdot \hat{\mathbf{n}} (\mathbf{x}^{(l)})\right]^2 \equiv \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 \left( \mathbf{v} \cdot \hat{\mathbf{n}} \right)^2$, so this factor can be brought outside the average. Then, we are simply averaging over the number of collisions corresponding to $l$, $m$, and $n$. This is only nonzero for a small fraction of the ensemble with associated phase space volume $\mathbf{v}^{(m)} \cdot \hat{\mathbf{n}} (\mathbf{x}^{(l)}) \, dt \, dS \, d^d \mathbf{v} \equiv \mathbf{v} \cdot \hat{\mathbf{n}} \, dt \, dS \, d^d \mathbf{v}$ (see Figure \ref{fig:parallelogram}); the corresponding average is therefore $\rho_{E_0} (\mathbf{x}^{(l)},\mathbf{v}^{(m)}) \equiv \rho_{E_0}$ times this volume element. Thus, we can convert the sum over $l$, $m$, and $n$ into an integral over $\mathbf{x}$, $\mathbf{v}$, and $t$, obtaining: \begin{widetext} \begin{equation} \label{X3} \mathrm{Var} (\mathcal{E}) = 2 \omega^{-2} \Delta t \int dS \int_{\mathbf{v} \cdot \hat{\mathbf{n}} > 0 } d^d \mathbf{v} \, \rho_{E_0} \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 \left( \mathbf{v} \cdot \hat{\mathbf{n}} \right)^3 + O(\omega^{-3}). \end{equation} \end{widetext} \noindent Note the restriction to $\mathbf{v} \cdot \hat{\mathbf{n}} (\mathbf{x})> 0$, since a collision can only occur if the incoming velocity $\mathbf{v}$ is directed towards the wall. We can interpret the quantity $\rho_{E_0} (\mathbf{x},\mathbf{v}) \, \mathbf{v} \cdot \hat{\mathbf{n}} (\mathbf{x}) $ as the differential average collision rate in the microcanonical ensemble, for collisions at the point $\mathbf{x}$ on the boundary with incoming velocity $\mathbf{v}$. $\mathrm{Var} (\mathcal{E})$ is then obtained by integrating $\left[ \mathbf{F} (\mathbf{x}) \cdot \hat{\mathbf{n}} (\mathbf{x}) \right]^2 \left[ \mathbf{v} \cdot \hat{\mathbf{n}} (\mathbf{x}) \right]^2$ over all possible collisions, weighted by the rate at which each type of collision occurs. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{collisionrate.png} \caption{Diagram of collisions associated with a given choice of $l$, $m$, and $n$, for the case of $d=2$ dimensions. The curved line represents the billiard boundary, and $\hat{\mathbf{n}} (\mathbf{x}^{(l)}) \equiv \hat{\mathbf{n}}$ is the outward-facing normal near such collisions. Any particle in the shaded parallelogram with velocity $\mathbf{v}^{(m)} \equiv \mathbf{v}$ will collide with the boundary sometime during the infinitesimal time interval from $t^{(n)}$ to $t^{(n)}+dt$. The area of this parallelogram is $\mathbf{v}\cdot \hat{\mathbf{n}} \, dt \, dS$, and so collisions associated with $l$, $m$, and $n$ correspond to a phase space volume of $\mathbf{v} \cdot \hat{\mathbf{n}} \, dt \, dS \, d^d \mathbf{v}$. Analogous arguments apply to higher-dimensional billiards.} \label{fig:parallelogram} \end{figure} With the definition of $\rho_{E_0} (\mathbf{x},\mathbf{v})$ (see \eqref{micro}), we may perform the integral over $\mathbf{v}$ using $d$-dimensional spherical coordinates. The result is \begin{equation} \label{X4} \mathrm{Var} (\mathcal{E}) = \frac{4 B_{d-1} \omega^{-2} \Delta t}{(d+1) m \Sigma (E_0)} \, \int dS \, v_{E_0}^{d+1} \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 + O(\omega^{-3}).
\end{equation} \noindent Here, $B_{d-1}$ is the hyper-volume of the unit ball in $(d-1)$-dimensional space, and $v_{E_0} (\mathbf{x}) \equiv v_{E_0}$ is defined as in \eqref{defvE}. We can rewrite this expression in terms of $\gamma_{E_0} (\mathbf{x})$, the differential average collision rate for collisions at the location $\mathbf{x}$. $\gamma_{E_0} (\mathbf{x})$ is obtained by integrating $\rho_{E_0} (\mathbf{x},\mathbf{v}) \, \mathbf{v} \cdot \hat{\mathbf{n}} (\mathbf{x}) $ over all $\mathbf{v}$ such that $\mathbf{v} \cdot \hat{\mathbf{n}} (\mathbf{x}) > 0$. This is another spherical integral; the result is given by \eqref{gamma}. Comparing \eqref{gamma} and \eqref{X4}, we obtain \eqref{varE3}. \renewcommand{\theequation}{B\arabic{equation}} \setcounter{equation}{0} \setcounter{subsection}{0} \section*{Appendix B} \label{appendixB} Here, we describe the details of the numerical simulations presented in Section \ref{numerical}. For a particle in the clover billiard subject to a force $\mathbf{F} \cos (\omega t)$, we discuss how to evolve the particle according to the equations of motion, and how to solve the corresponding Fokker-Planck equation. First, let us describe the evolution of the trajectory ensemble. We consider an ensemble of particles with initial energy $E_0$ at $t=0$, with a microcanonical distribution of initial conditions. For a standard billiard ($U(\mathbf{x}) = 0$), the microcanonical distribution \eqref{micro} corresponds to sampling the initial positions $\mathbf{x}_0$ from a uniform distribution over the billiard's area, and the initial velocities $\mathbf{v}_0$ from an isotropic distribution with fixed speed $v_0 \equiv \sqrt{2 E_0 /m}$. We generate $N \gg 1$ independent samples in this way, and then evolve each sample in time by alternately integrating the equations of motion \eqref{newton}, and updating the velocity according to the reflection law \eqref{reflection} whenever the particle collides with the wall. In between the $k^{\mathrm{th}}$ and $(k+1)^{\mathrm{th}}$ collisions, we may integrate \eqref{newton} explicitly to obtain $\mathbf{x}_t$ and $\mathbf{v}_t$. Using the same notation as in Section \ref{g1g2}, we find: \begin{align} \begin{split} \label{inbetweenx} \mathbf{x}_t &= \mathbf{x}_k + \left[ \mathbf{v}_k^+ - \frac{\mathbf{F}}{m \omega} \sin (\omega t_k) \right] (t-t_k) \\ &\quad - \frac{\mathbf{F}}{m \omega^2} \Big[ \cos (\omega t) - \cos (\omega t_k) \Big], \end{split} \end{align} \begin{equation} \label{inbetweenv} \mathbf{v}_t = \mathbf{v}_k^+ + \frac{\mathbf{F}}{m \omega} \Big[ \sin (\omega t) - \sin (\omega t_k) \Big]. \end{equation} \noindent We see that the particle rapidly oscillates within a small envelope about a straight-line average trajectory. Given the above expressions, finding the next position and velocity at the $(k+1)^{\mathrm{th}}$ collision is simply a matter of solving numerically for where and when this trajectory next intersects with the billiard wall. In this way, we determine the trajectory of each particle in the ensemble between $t=0$ and some $t=\Delta t$. Then, for any time $t \in [0,\Delta t]$, we compute the energy $\mathcal{E} = \frac{1}{2} m |\mathbf{v}_t|^2$ of each particle, and collect all of these energy values into a histogram. This histogram gives an excellent approximation of the energy distribution $\eta (E,t)$; it only deviates from $\eta (E,t)$ due to the finite number of samples and the small machine error accrued when computing each trajectory. 
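The collision-to-collision update just described is straightforward to implement. The sketch below (our own; all parameter values are illustrative, and a circular wall of radius $R$ stands in for the clover boundary purely for brevity) advances a particle using the explicit solutions \eqref{inbetweenx} and \eqref{inbetweenv}; locating the next collision reduces to a one-dimensional root-finding problem, solved here by coarse marching followed by bisection:
\begin{verbatim}
import numpy as np

m, omega, R = 1.0, 200.0, 1.0
F = np.array([1.0, 1.0]) / np.sqrt(2.0)   # constant force direction

def position(t, tk, xk, vk):
    """x_t between collisions, Eq. (inbetweenx); vk is v_k^+."""
    drift = vk - (F / (m * omega)) * np.sin(omega * tk)
    wiggle = -(F / (m * omega**2)) * (np.cos(omega * t) - np.cos(omega * tk))
    return xk + drift * (t - tk) + wiggle

def velocity(t, tk, vk):
    """v_t between collisions, Eq. (inbetweenv)."""
    return vk + (F / (m * omega)) * (np.sin(omega * t) - np.sin(omega * tk))

def next_collision(tk, xk, vk, dt=1e-3, tol=1e-12):
    """March until the trajectory leaves the disk, then bisect."""
    a = tk
    while np.linalg.norm(position(a + dt, tk, xk, vk)) < R:
        a += dt
    b = a + dt
    while b - a > tol:                    # bisection on |x_t| - R = 0
        mid = 0.5 * (a + b)
        if np.linalg.norm(position(mid, tk, xk, vk)) < R:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

# one collision-to-collision step from (tk, xk, v_k^+)
tk, xk, vk = 0.0, np.array([0.0, 0.0]), np.array([1.0, 0.3])
t1 = next_collision(tk, xk, vk)
x1 = position(t1, tk, xk, vk)
v_in = velocity(t1, tk, vk)
n = x1 / np.linalg.norm(x1)               # outward normal of the circle
v1 = v_in - 2.0 * np.dot(v_in, n) * n     # specular reflection
print(t1, x1, v1)
\end{verbatim}
For the clover boundary, the only change is the signed distance function used in the collision search (and the corresponding boundary normal).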
To compare these results with the energy diffusion description, we then solve the Fokker-Planck equation \eqref{fp}. For a standard billiard, the Fokker-Planck equation admits an analytical solution which has been studied previously. To show this, we note that by \eqref{g1noU} and \eqref{g2noU}, we have $g_1 = C E^{1/2}$ and $g_2 = 4 C E^{3/2}/(d+1)$, where $C$ is a constant independent of energy. If we substitute these expressions into \eqref{fp}, and define the rescaled time variable $s \equiv C t$, then after some manipulations we obtain: \begin{equation} \label{fpnoU} \frac{\partial \eta}{\partial s} = \frac{2}{d+1} \frac{\partial}{\partial E} \left[ E^{(1+d)/2} \frac{\partial}{\partial E} \left( E^{(2-d)/2} \eta \right) \right] . \end{equation} \noindent This equation is identical to Equation (60) in \cite{demersjarzynski2015}. This reference also provides the solution to this equation for the initial condition $\eta (E,0) = \delta (E-E_0)$, which we reproduce here: \begin{widetext} \begin{equation} \label{fpsoln} \eta (E,t) =\eta (E,s/C) = \frac{d+1}{s E_0^{1/2}} \left( \frac{E}{E_0}\right)^{(d-3)/4} I_{d-1} \left[ \frac{4(d+1)}{s} E_0^{1/4} E^{1/4} \right] \exp \left[ - \frac{2 (d+1)}{s} \left( E_0^{1/2} + E^{1/2} \right) \right]. \end{equation} \end{widetext} \noindent Here, $I_{d-1} (x)$ is the modified Bessel function of the first kind, of order $d-1$. It only remains to compute the constant $C$ for the special case of the clover billiard: \begin{equation} \label{C} C = \left( \frac{2}{m} \right)^{3/2} \frac{\omega^{-2}}{\lambda} \frac{1}{S} \int dS \, \left( \mathbf{F} \cdot \hat{\mathbf{n}} \right)^2 . \end{equation} In $d=2$ dimensions, $S$ is the perimeter of the billiard, and the integral over $dS$ is a line integral over the billiard boundary. For a constant $\mathbf{F} (\mathbf{x}) = \mathbf{F}$, upon performing the appropriate line integrals we find that $S^{-1} \int dS \, \left[ \mathbf{F} (\mathbf{x}) \cdot \hat{\mathbf{n}} (\mathbf{x})\right]^2 = F^2 /2$, where $F \equiv |\mathbf{F}|$. Then, we can use the relation $\lambda \equiv d \dfrac{B_d}{B_{d-1}} \dfrac{V}{S}$ with $d=2$ to obtain $\lambda = \pi V /S$. In two dimensions, $V$ is the area of the billiard. $V$ and $S$ are geometric quantities which may be computed in terms of the radii $R_1$ and $R_2$. For the specific case of $R_1 = 1$ and $R_2 = 2$, we find that $\lambda \approx 2.610$. Upon combining these results, and setting $m=1$, we obtain: \begin{equation} \label{C2} C \approx 0.5419 \, \omega^{-2} F^2 . \end{equation} \noindent With this result, we may now determine the distribution $\eta(E,t)$ at any time $t$, given the parameter choices $m=1$, $R_1 = 1$, $R_2 = 2$, and $\mathbf{F} = F(\hat{\mathbf{x}} + \hat{\mathbf{y}})/\sqrt{2}$. We simply select values for $F$ and $\omega$, and then substitute the resulting value of $C$ into \eqref{fpsoln} (recalling that $s= C t$, and that $d=2$).
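For completeness, \eqref{fpsoln} with the constant \eqref{C2} can be evaluated directly; the sketch below (our own, with illustrative values of $F$, $\omega$, $E_0$, and $t$) does so for $d = 2$, using the exponentially scaled Bessel function to avoid overflow, and prints the normalization of the resulting distribution as a sanity check:
\begin{verbatim}
import numpy as np
from scipy.special import ive   # ive(nu, x) = iv(nu, x) * exp(-x) for x > 0

d, m = 2, 1.0
F, omega, E0 = 1.0, 500.0, 1.0
C = 0.5419 * omega**-2 * F**2   # Eq. (C2)

def eta(E, t):
    """Energy distribution of Eq. (fpsoln), initial condition delta(E - E0)."""
    s = C * t                                         # rescaled time s = C t
    x = 4.0 * (d + 1) / s * (E0 * E)**0.25            # Bessel argument
    a = 2.0 * (d + 1) / s * (np.sqrt(E0) + np.sqrt(E))
    # I_{d-1}(x) exp(-a) = ive(d-1, x) exp(x - a), and x - a <= 0 always,
    # so this evaluation cannot overflow even for small s.
    prefac = (d + 1) / (s * np.sqrt(E0)) * (E / E0)**((d - 3) / 4.0)
    return prefac * ive(d - 1, x) * np.exp(x - a)

E = np.linspace(1e-4, 4.0, 4001)
dE = E[1] - E[0]
for t in [1e4, 5e4, 2e5]:
    rho = eta(E, t)
    norm = (rho.sum() - 0.5 * rho[0] - 0.5 * rho[-1]) * dE
    print(f"t = {t:.0e}: normalization = {norm:.4f}")   # should be close to 1
\end{verbatim}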
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Background} The reader is assumed to have some background in Finite Model Theory. If not, the book by Ebbinghaus and Flum \cite{EF} serves as a very good introduction. (See Section \ref{sec:notation} for notation and definitions.) In a recent paper, one of the authors \cite{cjtcs08} proved the following: \begin{thom} Let $\mathbf{A}$ be a structure (instance) defined over a signature $\sigma$. The value of an optimal solution to an instance $\mathbf{A}$ of a maximization problem $Q$ can be represented by \begin{equation} opt_{\mathrm{Q}} (\mathbf{A}) = \max_\mathbf{S} \vert \{\mathbf{w}: (\mathbf{A}, \mathbf{S}) \models \forall \mathbf{x} ~ \eta(\mathbf{w}, \mathbf{x}, \mathbf{S}) \} \vert \label{maxDef} \end{equation} if $Q \in \mathbf{P_{opt}^{pb}}$, where $\mathbf{x}$, $\mathbf{A}$, $\mathbf{S}$ and $\eta$ are defined in Table \ref{tab:defines}. The Horn condition in the formula $\eta$ applies only to the second order predicates in \bf{S}, not to first order predicates. \label{thom:hornMax} \end{thom} The converse of Theorem \ref{thom:hornMax} can be stated as follows: \begin{prop} \it{If the optimal solution value to a maximization problem $Q$ can be represented as in (\ref{maxDef}), then $Q$ belongs to the class $\mathbf{P_{opt}^{pb}}$}. \label{converseProp} \end{prop} The problem referred to, in Proposition \ref{converseProp}, can be cast as an optimization problem as follows: \begin{prob} \bf{Syntactic optimization}. \newline {\em Given}. (i) A structure $\mathbf{A}$, \newline (ii) a sequence of second order variables $\mathbf{S} = \{S_1, S_2, \cdots, S_p\}$ where each $S_i$ is of arity $r_i$ ($1 \le i \le p$). That is, each $S_i$ is of the form $S_i (z_1, z_2, \cdots, z_{r_i})$, where each $z_j$ can take any value in the domain of \bf{A}; and \newline (iii) a quantifier-free conjunction of Horn clauses $\eta(\mathbf{w}, \mathbf{x}, \mathbf{S})$. (As in Theorem \ref{thom:hornMax}, the Horn condition in the formula $\eta$ applies only to the second order predicates in \bf{S}.) {\em To Do}. For $1 \le i \le p$, assign truth values to each $S_i$, such that the number of tuples $\mathbf{w}$ that satisfy $\forall \mathbf{x} ~ \eta(\mathbf{w}, \mathbf{x}, \mathbf{S})$ is maximized. The goal is to achieve the maximum value for $opt_{\mathrm{Q}} (\mathbf{A})$ as given in (\ref{maxDef}). \label{converseDef} \end{prob} Due to difficulties in computing the optimal solution value for a general maximization problem in $\mathbf{P_{opt}^{pb}}$, Bueno and Manyem \cite{FI_2008} made the following conjecture: \begin{conj} The optimal value for an instance \bf{A} of an optimization problem, as measured in (\ref{maxDef}), cannot be computed in polynomial time by a deterministic Turing machine using syntactic techniques. We need optimization algorithms that exploit the particular problem structure. \label{syntacticImpossible} \end{conj} Gate and Stewart \cite{gateStewart} made an attempt to prove Conjecture \ref{syntacticImpossible}. The decision version of MaxHorn2Sat (see Definition \ref{maxhorn2sat}) is known to be NP-complete \cite{jaumard87}. Gate and Stewart were able to show a polynomial time reduction from the decision version of MaxHorn2Sat to the decision version of Problem \ref{converseDef}, thus proving that Problem \ref{converseDef} is NP-hard.
In other words, the authors in \cite{gateStewart} essentially showed that just because the optimal solution value to an optimization problem can be expressed in the form in (\ref{maxDef}) does not mean that the problem is polynomially solvable; it may also be NP-hard. Here we prove a stronger negative result. Notice that the first order part in (\ref{maxDef}) is in $\Pi_1$ Horn form (universal Horn). One would expect that if we simplify the expression from $\Pi_1$ Horn to $\Pi_0$ Horn (that is, a quantifier-free Horn formula), we should be able to guarantee polynomial time solvability. Unfortunately this is not the case. We will show below that even a quantifier-free Horn expression is unable to guarantee polynomial time solvability. We show this by exhibiting such an expression for an NP-hard problem, MaxHorn2Sat, defined in Problem \ref{maxhorn2sat} below. \section{Notation and Definitions}\label{sec:notation} \begin{table}[h] \begin{tabular}{|l|p{330pt}|} \hline $\bf{A} $ & a structure defined over a signature $\sigma$; \bf{A} captures an instance of an optimization problem. \\ \hline $\eta $ & a quantifier-free first order (FO) formula, which is a conjunction of \it{Horn} clauses. (Recall that a Horn clause contains at most one positive literal.) \\ \hline $\mathbf{x} $ & an $m-$tuple of FO variables. \\ \hline $\mathbf{S} $ & a sequence of predicate symbols or second order (SO) variables;\\ & \bf{S} captures a solution to the optimization problem. \\ \hline $\mathbf{P_{opt}}$ ($\mathbf{NP_{opt}}$) & $\mathbf{P}$-optimization ($\mathbf{NP}$-optimization) problems. See Definition \ref{pOptProblem} (\ref{npOptProblem}). \\ \hline & \\ [-10pt] $\mathbf{P_{opt}^{pb}}$ ($\mathbf{NP_{opt}^{pb}}$) & Polynomially bound $\mathbf{P}$-optimization ($\mathbf{NP}$-optimization) problems. See Definition \ref{polyMax}. \\ \hline PNF & Prenex Normal Form. \\ \hline \end{tabular} \caption{Notation} \label{tab:defines} \end{table} \begin{defn} A \bf{P-optimization} problem $Q$ is a set $Q = \{I_\mathrm{Q}, F_\mathrm{Q}, f_\mathrm{Q}, opt_\mathrm{Q}\}$, where \begin{enumerate} \item[(i)] $I_\mathrm{Q}$ is a set of instances to $\mathrm{Q}$, \item [(ii)] $F_\mathrm{Q} (I)$ is the set of feasible solutions to instance $I$, \item[(iii)] $f_\mathrm{Q} (I, S)$ is the {\em objective function} value to a solution $S \in F_\mathrm{Q}(I)$ of an instance $I \in I_\mathrm{Q}$. ~It is a function $f:\bigcup_{I \in I_\mathrm{Q}} [\{I\} \times F_\mathrm{Q}(I)] \rightarrow R^+$ (non-negative reals)\footnote{Of course, when it comes to computer representation, rational numbers will be used.}, computable in time polynomial in the size of the universe of $I$\footnote{Strictly speaking, we should use $|I|$ here, where $|I|$ is the length of the representation of $I$. ~However, $|I|$ is polynomial in the size $A$ of the universe of \bf{A}, hence the two measures are interchangeable.}, \item[(iv)] For an instance $I \in I_\mathrm{Q}$, $opt_\mathrm{Q} (I)$ is either the minimum or maximum possible value that can be obtained for the objective function, taken over all feasible solutions in $F_\mathrm{Q}(I)$.
$\displaystyle opt_\mathrm{Q} (I) = \max_{S \in F_\mathrm{Q}(I)} f_\mathrm{Q} (I, S)$ (for P-maximization problems), $\displaystyle opt_\mathrm{Q} (I) = \min_{S \in F_\mathrm{Q}(I)} f_\mathrm{Q} (I, S)$ (for P-minimization problems), \item[(v)] The following decision problem is in the class $\mathbf{P}$: \it{Given an instance $I$ and a non-negative constant $k$, is there a feasible solution $S \in F_\mathrm{Q} (I)$, such that $f_\mathrm{Q} (I, S) \ge k$ (for a P-maximization problem), or $f_\mathrm{Q} (I, S) \le k$ (in the case of a P-minimization problem)?} And finally, \item[(vi)] An optimal solution $S_{\mathrm{opt}} (I)$ for a given instance $I$ can be computed in time polynomial in $|I|$, where $\displaystyle opt_\mathrm{Q} (I) = f_\mathrm{Q} (I, S_{\mathrm{opt}} (I))$. \end{enumerate} The set of all such $\mathbf{P}$-optimization problems is the $\mathbf{P_{opt}}$ class. \label{pOptProblem} \end{defn} A similar definition, for {NP-optimization} problems, appeared in Panconesi and Ranjan (1993) \cite{pancoRanjan}: \begin{defn} An \bf{NP-optimization} problem is defined as follows. Points \it{(i)-(iv)} in Definition \ref{pOptProblem} above apply to NP-optimization problems, whereas \it{(vi)} does not. Point \it{(v)} is modified as follows: (v) The following decision problem is in $\mathbf{NP}$: \it{Given an instance $I$ and a non-negative constant $k$, is there a feasible solution $S \in F_\mathrm{Q} (I)$, such that $f_\mathrm{Q} (I, S) \ge k$ (for an NP-maximization problem), or $f_\mathrm{Q} (I, S) \le k$ (in the case of an NP-minimization problem)?} The set of all such $\mathbf{NP}$-optimization problems is the $\mathbf{NP_{opt}}$ class, and $\mathbf{P_{opt}} \subseteq \mathbf{NP_{opt}}$. \label{npOptProblem} \end{defn} \begin{defn} An optimization problem $Q$ is said to be \bf{polynomially bound} if the value of an optimal solution to every instance $I$ of $Q$ is bounded by a polynomial in the size of $I$. ~In other words, for every problem $Q$, there exists a polynomial $p_\mathrm{Q}$, such that \begin{equation} opt_{Q} (I) \leq p_\mathrm{Q} (\vert I \vert), \label{polyBoundDef} \end{equation} for {every} instance $I$ of $Q$. ~$\mathbf{P_{opt}^{pb}}$ ($\mathbf{NP_{opt}^{pb}}$) is the set of polynomially-bound $\mathbf{P}$-optimization ($\mathbf{NP}$-optimization) problems. Naturally, $\mathbf{P_{opt}^{pb}} \subseteq \mathbf{P_{opt}}$ and $\mathbf{NP_{opt}^{pb}} \subseteq \mathbf{NP_{opt}}$. \label{polyMax} \end{defn} Informally, an instance of MaxHorn2Sat consists of a formula in conjunctive normal form (CNF), where each clause is Horn, and each clause contains at most two literals. (Such a formula is also known as a quadratic Horn formula.) The problem is to maximize the number of satisfied clauses. \begin{defn} An \bf{existential second-order (ESO) Horn} expression is of the form $\exists \mathbf{S} \psi$, where $\psi$ is a first order formula, and $\mathbf{S} = (S_1, ~ \cdots ~ S_p)$ is a sequence of predicate symbols not in the vocabulary of $\psi$. The formula $\psi$ can be written in $\Pi_1$ form as \begin{equation} \psi = \forall x_1 \forall x_2 \cdots \forall x_k \eta = \forall \mathbf{x} ~ \eta, \label{hornDef} \end{equation} where $\eta$ is a conjunction of Horn clauses ($\eta$ is, of course, quantifier-free), and $x_i$ $(1 \leq i \leq k)$ are first order variables. Each clause in $\eta$ contains at most one positive occurrence of any of the second order predicates $S_i$ ($1 \leq i \leq p$).
A general \bf{ESO} formula is the same as an ESO Horn expression, except that $\eta$ can now be any quantifier-free first order formula. \label{ESOhorn} \end{defn} \begin{prob} \bf{MaxHorn2Sat}. \newline {\em Given}. A set of clauses $c_i$, $1 \leq i \leq n$. Each clause $c_i$ is one of the following: (i) a Boolean variable $x_j$, (ii) its negation, $\neg x_j$, (iii) $x_j \vee \neg x_k$, or (iv) $\neg x_j \vee \neg x_k$. \newline {\em To Do}. Assign truth values to the $x_i$'s such that the number of satisfied clauses is maximized. \label{maxhorn2sat} \end{prob} \begin{defn} $\mathbf{\Pi_n}$ \bf{and} $\mathbf{\Sigma_n}$ \bf{formulae}. These formulae have $n$ quantifier blocks at the beginning, followed by a quantifier-free formula. Each block contains only one type of quantifier (either existential or universal). Here is the difference between $\Pi_n$ and $\Sigma_n$: A $\Pi_n$ ($\Sigma_n$) formula begins with a universal (existential) block. \bf{MAX} $\mathbf{\Pi_0}$ is the class of maximization problems whose optimal solution value to an instance \bf{A} of a Problem $Q$ can be represented as \begin{equation} opt_{\mathrm{Q}} (\mathbf{A}) = \max_\mathbf{S} \vert \{\mathbf{w}: (\mathbf{A}, \mathbf{S}) \models ~ \eta(\mathbf{w}, \mathbf{S}) \} \vert \label{maxPi_0_Def} \end{equation} where $\eta$ is a quantifier-free formula. \end{defn} \section{A Syntactic Expression for MaxHorn2Sat} We need instances at two different levels. For example, suppose we are given a MaxHorn2Sat instance (formula) such as \bf{M} $\equiv (z_1 \vee \neg z_2) \wedge (z_3) \wedge (\neg z_3 \vee \neg z_1)$. The variables in this instance are $Z = \{z_1, z_2, z_3\}$, and a structure \bf{B} maps $Z$ to its universe $V$ = \{TRUE, FALSE\}. However, to represent the MaxHorn2Sat instance \bf{M} as in (\ref{maxDef}) or (\ref{maxDefPoptPB}), the variables used will be $X = \{x, y\}$, and the universe of the structure \bf{A} would be $Z$. ~Diagrammatically, \begin{equation} X = \{x, y\} ~\longrightarrow~ Z = \{z_1, z_2, z_3\} ~\longrightarrow~ V = \{TRUE, FALSE\}. \end{equation} \bf{A} maps (instantiates) $X$ to $Z$, and \bf{B} maps (instantiates) $Z$ to $V$. The second order variables $\mathbf{S}$ (to be used with \bf{A}) consists of a single unary predicate $S$, that is $\mathbf{S} = \{S\}$ where $S$ is of arity one. $S$ can be considered as a guess of the map \bf{B}. ~In the above example, for a certain MaxHorn2Sat clause, if $\mathbf{A}(x) = z_1$, $\mathbf{A}(y) = z_3$, $S(x) = FALSE$ and $S(y) = TRUE$, then $S$ would have guessed that $\mathbf{B}(z_1) = FALSE$ and $\mathbf{B}(z_3) = TRUE$. If variables $x$ and $y$ appear in a 2-literal MaxHorn2Sat clause, then the clause can assume one of the following forms (and represented in the signature of \bf{A} by the corresponding first order predicate on the right): \begin{center} \begin{tabular}{|l|l|} \hline $\neg x \vee \neg y$ & $BothNeg (x,y)$ \\ \hline $\neg x \vee y$ & $FirstNegSecondPos(x,y)$, or simply $FNSP(x,y)$ \\ \hline $x \vee \neg y$ & $FirstPosSecondNeg(x,y)$, or simply $FPSN(x,y)$ \\ \hline \end{tabular} \end{center} If a clause contains only one literal, insert a second literal and set it to FALSE. \subsection{Counting satisfying clauses} Our approach is similar to that of Kolaitis-Thakur 1994 \cite{KT94}, where they provide an expression for the optimal value for Max3Sat (optimization version). We only count satisfying MaxHorn2Sat clauses for the objective function. 
That is, we count the number of tuples $(x, y)$ that satisfy $\phi$, where \begin{equation} \phi = \bigvee_{i=1}^4 \phi_i = \phi_1 \vee \phi_2 \vee \phi_3 \vee \phi_4. \label{thePhees} \end{equation} (The $\phi_i$'s are described below.) \subsection{Two-literal MaxHorn2Sat clauses} Two-literal MaxHorn2Sat clauses can be satisfied in one of the following ways: $\displaystyle \phi_1 = FPSN(x,y) \wedge [S(x) \vee \neg S(y)]$. $\displaystyle \phi_2 = FNSP(x,y) \wedge [\neg S(x) \vee S(y)]$. $\displaystyle \phi_3 = BothNeg (x,y) \wedge [\neg S(x) \vee \neg S(y)]$. \subsection{One-literal MaxHorn2Sat clauses} As mentioned earlier, convert one-literal clauses to two-literal clauses. (We do this, so that we can simply count the number of $(x,y)$ tuples that satisfy $\phi$.) If the literal is positive, then create a special predicate called $BothPos(x, y)$, as if the clause is $x \vee y$; but of course, we set $y$ to FALSE, so $y$ has no effect. So we use $\displaystyle \phi_4 = BothPos (x, y) \wedge S(x)$. If the literal is negative, then just use $\phi_2$ above; no need for another $\phi_i$. Since we set the second literal to FALSE, $S(y)$ will never hold. (For $\phi_2$ to be true, $S(x)$ must be false.) \subsection{The complete DNF formula} For convenience of writing, let us substitute $A = FPSN(x, y)$, $B = FNSP(x, y)$, $C = BothNeg (x,y)$, $D = BothPos (x,y)$, $P = S(x)$, $Q = S(y)$. Then we can rewrite $\phi_i$ ($1 \le i \le 4$) as \begin{equation} \begin{array}{ll} \phi_1 = (A \wedge P) \vee (A \wedge \neg Q), & \phi_2 = (B \wedge \neg P) \vee (B \wedge Q), \\ [1mm] \phi_3 = (C \wedge \neg P) \vee (C \wedge \neg Q), & \phi_4 = (D \wedge P). \end{array} \label{phiDefine0} \end{equation} From (\ref{thePhees}), since one of the $\phi_i$'s should be satisfied for a MaxHorn2Sat clause to be counted towards the objective function, \begin{equation} \begin{array}{ll} \phi & \equiv \phi_1 \vee \phi_2 \vee \phi_3 \vee \phi_4 \equiv \\ [1mm] & (A \wedge P) \vee (A \wedge \neg Q) \vee (B \wedge \neg P) \vee (B \wedge Q) \vee (C \wedge \neg P) \vee (C \wedge \neg Q) \vee (D \wedge P). \end{array} \label{phiDefine} \end{equation} Write $\phi = k_1 \vee k_2 \vee \cdots \vee k_7$, corresponding to each of the 7 ``conjunct" clauses above in $\phi$. That is, $k_1 = A \wedge P$, $k_2 = A \wedge \neg Q$, $\cdots$, and $k_7 = D \wedge P$. ($A \wedge P$), $\cdots$, ($D \wedge P$) can be called ``conjunct" clauses. \subsection{Converting DNF to CNF} Now $\phi$ is in DNF, so we should convert it to CNF. ~Call the CNF form as $\psi$ (or $\psi (x, y, S)$, to be more accurate). There are 7 clauses in $\phi$ with 2 literals each, so $\psi$ will have $2^7$ = 128 clauses, with 7 literals each --- one literal from each of the 7 clauses in $\phi$. (See footnote\footnote{128 may be ``large", but still a finite number.}) From (\ref{phiDefine}), we can write $\psi$ in lexicographic order as \begin{equation} \psi = (A \vee A \vee B \vee B \vee C \vee C \vee D) \wedge \cdots \wedge (P \vee \neg Q \vee \neg P \vee Q \vee \neg P \vee \neg Q \vee P). \label{psiDefine} \end{equation} We should ensure that each of the 128 disjunct clauses in $\psi$ is Horn, which is what we do next. \begin{lemma} Each of the 128 clauses in $\psi$ is Horn. \label{lem:128clauses} \end{lemma} \begin{proof} Note that the literals $A$, $B$, $C$ and $D$ are first order (part of the input), hence these do not affect the Horn condition; only the $P$'s and $Q$'s and their negations do. 
If there is a 7-literal clause in $\psi$ containing literals $P$ and $\neg P$, it can be set to TRUE. ~Similarly for a clause containing $Q$ and $\neg Q$. Thus, we run into trouble only if we have a clause $\psi_i$ in $\psi$, that (i) contains literals $P$ and $Q$, and (ii) contains neither $\neg P$ nor $\neg Q$. ~However, can such a clause evaluate to TRUE, and hence be ``ignored''? Will such a clause obey the Horn condition? The answer to both questions turns out to be yes. There are only three ways in which we can come across a ``$P \vee Q$" within a 7-literal clause of $\psi$: \begin{itemize} \item Pick $P$ from $k_1$, $Q$ from $k_4$, and one of \{$A$, $B$, $C$, $D$\} from the other clauses, to obtain $\psi_1 = P \vee A \vee B \vee Q \vee C \vee C \vee D$. This clause of $\psi$ contains $A$, $B$, $C$ and $D$ --- the four types of clauses mentioned in (\ref{phiDefine0}), and one of them must occur; they are mutually disjoint and collectively exhaustive. So $A \vee B \vee C \vee D$ is (always) valid. So $\psi_1$ can be set to TRUE. \item Pick $P$ from $k_7$, $Q$ from $k_4$, and one of \{$A$, $B$, $C$, $D$\} from the other clauses, to obtain $\psi_2 = A \vee A \vee B \vee Q \vee C \vee C \vee P$. This clause only contains $A$, $B$ and $C$, but not $D$. ~However, we know that $A \vee B \vee C \vee D$ is valid. If $A$, $B$ and $C$ are false, then $D$ will be true (the $BothPos$ predicate) --- this means that every clause in $\phi$ is false except $k_7$, which implies that $P$ is true. Hence $A \vee B \vee C \vee P$ is valid, which means $\psi_2$ can be set to TRUE. \item Pick $P$ from $k_1$ and $k_7$, $Q$ from $k_4$, and one of \{$A$, $B$, $C$, $D$\} from the other clauses, to obtain $\psi_3 = P \vee A \vee B \vee Q \vee C \vee C \vee P$. ~ Apply the same argument as for $\psi_2$. This sets $\psi_3$ to TRUE. \end{itemize} Hence each of the 128 clauses in $\psi$ is Horn. \end{proof} We need a few more first order predicates to represent whether a certain $(x, y)$ combination actually occurs in the MaxHorn2Sat formula, and in which of the four $\phi_i$ (1 $\le i \le$ 4) varieties it occurs. For each of the four $\phi_i$ types, introduce a predicate $L_i (x, y)$. For instance, if $L_2 (z_2, z_5)$ is true, it means that $(\neg z_2 \vee z_5)$ is a clause in the given MaxHorn2Sat instance. Similarly, the truth of $L_3 (z_2, z_5)$ implies the existence of the clause $(\neg z_2 \vee \neg z_5)$. Thus we need to append $[L_1(x,y) \vee L_2(x,y) \vee L_3(x,y) \vee L_4(x,y)]$ as the 129th clause. However, these four are first order predicates, hence their truth values can be evaluated and substituted. This does not affect the Horn condition. From all the arguments above including Lemma \ref{lem:128clauses}, we conclude as follows: \begin{thom} Let the structure $\mathbf{A}$ represent an instance of MaxHorn2Sat defined in Problem \ref{maxhorn2sat}. Then the value of an optimal solution to $\mathbf{A}$ can be represented by \begin{equation} \mathrm{opt} (\mathbf{A}) = \max_{S} \vert \{(x,y): (\mathbf{A}, S) \models ~ \psi(x, y, S) \} \vert, \label{maxDefPoptPB} \end{equation} where $\psi(x, y, S)$ is defined in (\ref{psiDefine}). \label{thom:maxhorn2sat} \end{thom} Note that $\psi$ above is quantifier free ($\Pi_0$ or $\Sigma_0$ form). This leads to the following. \bf{Corollary to Theorem \ref{thom:maxhorn2sat} and Discussion}: Since it is known that MaxHorn2Sat is NP-hard, even a $\Pi_0$ Horn expression does not guarantee polynomial time solvability for maximization problems.
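The case analysis in the proof of Lemma \ref{lem:128clauses} is small enough to verify mechanically. The following script (our own) distributes $\phi$ into its $2^7 = 128$ CNF clauses and confirms that every clause either contains a complementary pair (and is a tautology), is Horn in the second order literals, or is one of the three exceptional clauses $\psi_1$, $\psi_2$, $\psi_3$ shown to be valid in the proof:
\begin{verbatim}
from itertools import product
from collections import Counter

# Literals: A, B, C, D are first order (exempt from the Horn condition);
# P = S(x) and Q = S(y) are second order, with "~" denoting negation.
# The seven conjuncts k_1, ..., k_7 of the DNF phi:
conjuncts = [("A", "P"), ("A", "~Q"), ("B", "~P"), ("B", "Q"),
             ("C", "~P"), ("C", "~Q"), ("D", "P")]

tally = Counter()
exceptional = []
for picks in product(*conjuncts):       # one literal from each k_i
    pos = {l for l in picks if l in ("P", "Q")}
    neg = {l[1:] for l in picks if l in ("~P", "~Q")}
    if pos & neg:                       # contains P, ~P or Q, ~Q
        tally["tautology"] += 1
    elif len(pos) <= 1:                 # at most one positive SO literal
        tally["Horn"] += 1
    else:                               # must be psi_1, psi_2 or psi_3
        tally["valid by Lemma"] += 1
        exceptional.append(picks)

print(dict(tally))   # 128 clauses in total, exactly 3 exceptional ones
for clause in exceptional:
    print(clause)
\end{verbatim}
Running the script reports exactly three non-Horn, non-tautological clauses, matching $\psi_1$, $\psi_2$, and $\psi_3$ above.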
In \cite{cjtcs08}, it was shown that the MaxFlow$_{PB}$ problem (the MaxFlow problem with unit weight edges) cannot be represented in Horn $\Pi_0$ or Horn $\Sigma_1$ first order form; it needs a Horn $\Pi_1$ sentence. The optimal solution to this problem can be obtained in polynomial time using MaxFlow algorithms. Hence it is a surprise that while a polynomially solvable problem, MaxFlow$_{PB}$, has a Horn $\Pi_1$ lower bound, an NP-hard problem, MaxHorn2Sat, can be expressed by a quantifier-free Horn sentence. A similar anomaly was observed by Panconesi and Ranjan (1993) \cite{pancoRanjan}: While the class MAX $\Pi_0$ (defined below) can express NP-hard problems such as Max3Sat, it is unable to express polynomially solvable problems such as Maximum Matching! This suggests that \begin{conj} Quantifier alternation does not provide a precise characterization of computation time. A hierarchy in quantifier alternation does not translate to one in computation time. We need to look at other characteristics of logical formulae such as the number of variables, or a combination of these. \end{conj} \section{Optimality Conditions: Duality to the Rescue}\label{sec:rescue} \bf{Recognizing (Verifying) Optimality}. In general, the question, ~\it{Given a solution \bf{T} to an instance \bf{A} of an optimization problem Q, is it an optimal solution?}~ is as hard to answer as determining an optimal solution, necessitating a $\Sigma_2$ second order sentence as in (\ref{lazyOpt}) below. However, under certain conditions, such as when the duality gap is zero, optimal solutions can be recognized more efficiently, and can be expressed in existential second order (ESO, or second order $\Sigma_1$) logic. The \it{duality gap} is the difference between the optimal solution values for the primal and dual problems. For problems such as LP and MaxFlow-MinCut, the duality gap has been shown to be zero; that is, they possess the \it{strong duality} property. However, for other problems such as Integer Programming, there is no known dual problem that guarantees strong duality; hence expressions such as (\ref{LPcharac}) and (\ref{MFMCcharac}) cannot be derived, at least until a dual that guarantees strong duality is discovered. The above question can also be phrased as a classical decision problem (for maximization): \it{Given a solution \bf{T} for instance \bf{A} with solution value f(\bf{T}), is there another solution \bf{S} such that f(\bf{S}) $>$ f(\bf{T})?} An optimal solution \bf{T} to an instance \bf{A} of an optimization problem $Q$ can easily be represented as the best among all feasible solutions \bf{S}: \begin{equation} \exists \mathbf{T} \forall \mathbf{S} ~ \phi \wedge [f(\mathbf{A}, \mathbf{T}) \ge f(\mathbf{A}, \mathbf{S})], \label{lazyOpt} \end{equation} where $\phi$ represents satisfaction of the constraints to \bf{A}, and $f$ is the objective function referred to in Definitions \ref{pOptProblem} and \ref{npOptProblem}. The formula $\phi$ captures the constraints, such as $\mathbf{g(x)} = \mathbf{b}$ and $\mathbf{h(x)} \le \mathbf{c}$ in (\ref{problemP1}) below. [Note that the above formula represents an optimal solution to a maximization problem; we can write a similar formula for minimization; simply change the last condition to $f(\mathbf{A}, \mathbf{T}) \le f(\mathbf{A}, \mathbf{S})$.]
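To see concretely why a direct check of (\ref{lazyOpt}) is expensive, consider the brute-force sketch below (a toy 0/1 instance of our own devising, not from the literature): certifying that a candidate \bf{T} is optimal via the $\exists \mathbf{T} \forall \mathbf{S}$ formula requires comparing \bf{T} against every feasible \bf{S}, an exponential effort that the optimality conditions discussed below are designed to avoid:
\begin{verbatim}
from itertools import product

# Toy knapsack-style instance: maximize value subject to a weight budget.
weights, values, budget = [3, 4, 2, 5], [4, 5, 3, 8], 8

def feasible(S):      # the constraint formula phi in (lazyOpt)
    return sum(w for w, s in zip(weights, S) if s) <= budget

def f(S):             # the objective function f(A, S)
    return sum(v for v, s in zip(values, S) if s)

def is_optimal(T):
    """Forall S: feasible(S) -> f(T) >= f(S), checked exhaustively."""
    return feasible(T) and all(
        f(T) >= f(S)
        for S in product((0, 1), repeat=len(weights)) if feasible(S))

T = (1, 0, 0, 1)      # weight 8 <= 8, value 12
print(is_optimal(T))  # True, after 2^4 comparisons
\end{verbatim}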
Recall that a maximization problem $P_1$ in the $\mathbf{R}^n$ Euclidean space can be represented as follows \cite{bazaraaEtAl}: \begin{equation} \begin{array}{lrl} & \mbox{Maximize} & f_1(\mathbf{x}):\mathbf{R}^n \rightarrow \mathbf{R}, \\ [1mm] (P_1) & \mbox{subject to} & \mathbf{g(x)} = \mathbf{b}, ~~\mathbf{h(x)} \le \mathbf{c}, \\ [1mm] & \mbox{where} & \mathbf{x} \in \mathbf{R}^n, \mathbf{b} \in \mathbf{R}^{m_1} \mbox{ and } \mathbf{c} \in \mathbf{R}^{m_2}. \end{array} \label{problemP1} \end{equation} For several optimization problems, an optimal solution can be recognized when a feasible solution obeys certain \it{optimality} conditions. In such cases, it is unnecessary to represent an optimal solution \bf{T} as in (\ref{lazyOpt}). The Duality concept in optimization can play an important role here. Let $\mathbf{u} \in \mathbf{R}^{m_1}$ and $\mathbf{v} \in \mathbf{R}^{m_2}$ be two vectors of variables with $\mathbf{v} \ge \mathbf{0}$. Given a \it{primal} problem $P_1$ as in (\ref{problemP1}), its Lagrangian dual problem $P_2$ can be represented as (see \cite{bazaraaEtAl}): \begin{equation} \begin{array}{lrl} & \mbox{Minimize} & \theta(\mathbf{u}, \mathbf{v}) \\ [1mm] (P_2) & \mbox{subject to} & \mathbf{v} \ge \mathbf{0}, \\ [1mm] & \mbox{where} & \theta(\mathbf{u}, \mathbf{v}) = \inf_{\mathbf{x} \in \mathbf{R}^n} ~ \{f(\mathbf{x}) + \sum_{i = 1}^{m_1} u_i g_i (\mathbf{x}) + \sum_{j = 1}^{m_2} v_j h_j (\mathbf{x}) \}. \end{array} \label{problemP2} \end{equation} Furthermore, $\displaystyle g_i \mathbf{(x)} = b_i$ [$\displaystyle h_j \mathbf{(x)} \le c_j$] is the $i^{th}$ equality [$j^{th}$ inequality] constraint. We have demonstrated $\Sigma_1$ second order expressibility using Lagrangian duality in this paper. However, other types of duality may be used, such as Fenchel duality, geometric duality, or canonical duality, as long as they provide a zero duality gap, and optimality conditions that can be verified efficiently (say, in polynomial time). \subsection{Computational Models}\label{sec:compModels} Turing machine (TM) based computational models for solving an optimization problem $Q$ come in two flavours: \bf{Model 1}. The input consists of a problem instance such as in (\ref{problemP1}). If the instance has a feasible solution, the output is a string representing an optimal solution; otherwise, the TM crashes. Corresponding to classes P and NP in the world of decision problems, the classes here are \bf{FP} and \bf{FNP} respectively. \bf{Model 2}. In addition to a problem instance such as in (\ref{problemP1}), the input consists of a parameter $K$, which is a bound on the optimal solution value. The TM is a ``decision" machine, that is, one whose output is simply a yes or a no; call this machine $M_1$. ~The method to solve $Q$ by a Turing machine, say $M_2$, is then to do a binary search on solution values, calling $M_1$ a logarithmic ($\log V$) number of times, where $V$ is the optimal solution value. Thus we make a \it{weakly polynomial}\footnote{For a graph problem, an algorithm is strongly polynomial if the running time is a polynomial in the number of vertices and/or edges; it becomes weakly polynomial if the running time is a polynomial in the logarithm of edge weights. In Linear Programming, this translates to the number of variables/constraints versus the data in the coefficient matrix \bf{A} and the right side vector \bf{b}.} number of calls to $M_1$.
Each call to $M_1$ involves answering a question such as: ``Is there a feasible solution \bf{S} satisfying the constraints, such that the objective function value $f(\mathbf{A}, \mathbf{S})$ is greater than or equal to $K$?", for a maximization problem. If the problem answered by $M_1$ is in the NP class, then the complexity of solving $Q$ is $\displaystyle P^{NP}$, since $M_2$ makes a polynomial number of calls to the oracle $M_1$ (and $M_1$ solves a problem in NP). Similarly, if the problem answered by $M_1$ is in the class P, then the complexity of solving $Q$ is $\displaystyle P^{P}$, which is simply P (although strictly speaking, this is weakly polynomial due to the $\log V$ number of calls). The method used in Model 2, binary search, has been recognised and adopted for solving optimization problems since the discovery of the class NP. ~It involves making a polynomial number of calls to a ``decision TM" (a TM that solves decision problems). However, we show in this section that for pairs of problems with a duality gap of zero, a single call to a decision TM is sufficient. If the machine answers yes, then the primal and the dual problems have optimal solutions; otherwise, neither problem has an optimal solution (one of the problems will be infeasible and the other one will have an unbounded optimal solution). This is demonstrated by second order $\Sigma_1$ sentences such as (\ref{LPcharac}) and (\ref{MFMCcharac}), which implies, as per Fagin's result below, that such a machine produces a yes/no answer in NP time. \begin{thom} \cite{fagin} A decision problem can be logically expressed in ESO if and only if it is in NP. \label{thom:fagin} \end{thom} The following theorem is the deterministic counterpart of Fagin's result. It characterizes {P} as the class of decision problems definable by ESO universal Horn formulae. \begin{thom} (Gr\"{a}del \cite{gradel91}) For any ESO Horn expression as defined in Definition \ref{ESOhorn}, the corresponding decision problem is a member of {P}. The converse is also true --- if a problem $\mathcal{P}$ is a member of {P}, then it can be expressed in ESO Horn form --- but only if a successor relation is allowed to be included in the vocabulary of the first-order formula $\psi$. \label{thom:decision} \end{thom} The polynomial time computability in the first part of Theorem \ref{thom:decision} is due to the fact that the first order part of formulae representing decision problems can be reduced to propositional Horn formulae, which can be solved in time linear in the number of predicates (the unknown second order predicates). \subsection{Linear Programming}\label{sec:LP} For example, in the case of Linear Programming (LP), using Lagrangian duality, the primal and dual problems $P_3$ and $P_4$ respectively, can be stated as follows: \begin{equation} \begin{array}{crlccrl} (P_3) & \mbox{Maximize} & f_1(\mathbf{x}) = \mathbf{c}^T \mathbf{x}, & & (P_4) & \mbox{Minimize} & f_2(\mathbf{y}) = \mathbf{b}^T \mathbf{y}, \\ [1mm] & \mbox{subject to} & \mathbf{Ax} \le \mathbf{b}, ~ \mathbf{x} \ge \mathbf{0}, & & & \mbox{subject to} & \mathbf{A}^T \mathbf{y} \ge \mathbf{c}, ~ \mathbf{y} \ge \mathbf{0}, \\ [1mm] & \mbox{where} & \mathbf{x}, \mathbf{c} \in \mathbf{R}^n, & & & \mbox{and} & \mathbf{y}, \mathbf{b} \in \mathbf{R}^{m}, \end{array} \label{LPdual} \end{equation} after the usual process \cite{hadley} of converting unrestricted variables (if any) to non-negative variables, and equality constraints (if any) to inequality constraints, in the primal problem.
Here, $y_i$ ($x_j$) is the $i^{th}$ dual ($j^{th}$ primal) variable corresponding to the $i^{th}$ primal ($j^{th}$ dual) constraint. When the primal and dual problems have feasible solutions, then they both have optimal solutions $\displaystyle \mathbf{x^*} = (x_1^*, x_2^*, \cdots, x_n^*)$ and $\displaystyle \mathbf{y^*} = (y_1^*, y_2^*, \cdots, y_m^*)$ such that the two objective functions are equal: $\displaystyle \mathbf{c}^T \mathbf{x^*} = \mathbf{b}^T \mathbf{y^*}$. (Almost every book on LP should explain this result. See for example, \cite{hadley}.) For LP's, the \it{complementary slackness} conditions below are known to be necessary and sufficient conditions for the existence of an optimal primal solution \bf{S} and an optimal dual solution \bf{T}: \begin{eqnarray} y_i^{\ast} (b_i - A_i \mathbf{x^{\ast}}) = 0, & y_i^{\ast} \ge 0, & b_i - A_i \mathbf{x^{\ast}} \ge 0, ~~ i \in \{1, 2, \cdots, m\} \label{compSlack1} \\ [1mm] x_j^{\ast} (c_j - A_j^T \mathbf{y^{\ast}}) = 0, & x_j^{\ast} \ge 0, & c_j - A_j^T \mathbf{y^{\ast}} \ge 0, ~~ j \in \{1, 2, \cdots, n\} \label{compSlack2} \end{eqnarray} where $A_i$ is the $i^{th}$ row of \bf{A}, $A_j^T$ is the $j^{th}$ column of \bf{A}, $(b_i - A_i \mathbf{x^{\ast}}) = 0$ is derived from the $i^{th}$ primal constraint, and $(c_j - A_j^T \mathbf{y^{\ast}}) = 0$ is derived from the $j^{th}$ dual constraint. Thus the existence of \bf{S} and \bf{T} can be expressed as \begin{equation} \exists \mathbf{S} \exists \mathbf{T} ~ [\forall i ~ \psi_1(i)] \wedge [\forall j ~ \psi_2(j)] \wedge \phi_p (\mathbf{S}) \wedge \phi_d (\mathbf{T}), \label{LPcharac} \end{equation} where $\psi_1(i)$ [$\psi_2(j)$] logically captures the $i^{th}$ [$j^{th}$] constraint in (\ref{compSlack1}) [(\ref{compSlack2})] respectively. Also, $\phi_p$ and $\phi_d$ model the primal and dual constraints in (\ref{LPdual}) respectively. We are not concerned about the first order part of the above expression, $\displaystyle [\forall i ~ \psi_1(i)] \wedge [\forall j ~ \psi_2(j)] \wedge \phi_p \wedge \phi_d$. What is of interest to us is that the existence of optimal solutions for the primal and dual problems can be expressed in ESO, existential second order logic; a $\Sigma_2$ second order sentence as in (\ref{lazyOpt}) is unnecessary. Applying Theorem \ref{thom:fagin}, it follows that recognition of an optimal solution, for certain problems such as LP, is in the computational class NP. (One could argue that the existence of a feasible solution\footnote{A word of caution --- \bf{Feasible solutions}, a difference in terminology: Fagin and Gr\"{a}del \cite{gradel91} have syntactically characterized \it{feasible} solutions for classes NP and P respectively. However the ``feasibility" captured by an ESO expression, as described by Fagin and Gr\"{a}del, also includes an upper (lower) bound on the objective function of a minimization (maximization) problem, such as $f_1(\mathbf{x}) \ge K$ where $K$ is a constant --- not just satisfaction of the constraints such as \bf{Ax} $\le$ \bf{b}, \bf{x} $\ge$ \bf{0} in (\ref{LPdual}). In this paper, we differ from this view; when we talk about feasibility, we only refer to satisfaction of constraints such as \bf{Ax} $\le$ \bf{b}, \bf{x} $\ge$ \bf{0}.} for an optimization problem, satisfying constraints such as $\mathbf{Ax} \le \mathbf{b}, ~ \mathbf{x} \ge \mathbf{0}$, implies the existence of an optimal solution.) \subsection{Polynomially Solvable Problems}\label{sec:polytime} But what if the primal and dual problems are polynomially solvable? 
Can this be reflected in expressions such as (\ref{LPcharac})? The answer turns out to be yes. Recall from Theorem \ref{thom:decision} that to express polynomial solvability, the first order part of (\ref{LPcharac}) needs to be a universal Horn formula. From our assumption about the polynomial solvability of the primal and the dual problems and from Theorem \ref{thom:decision}, it follows that $\phi_p$ and $\phi_d$ can be expressed as universal Horn formulae. As for the complementary slackness conditions (\ref{compSlack1}) and (\ref{compSlack2}), we only need to express \newline $\displaystyle y_i^{\ast} (b_i - A_i \mathbf{x^{\ast}}) = 0$ and $\displaystyle x_j^{\ast} (c_j - A_j^T \mathbf{y^{\ast}}) = 0$, since the other conditions have been expressed in $\phi_p$ and $\phi_d$. $\displaystyle y_i^{\ast} (b_i - A_i \mathbf{x^{\ast}}) = 0$ can be expressed as $\displaystyle \psi_1(i) \equiv Y(i) \vee B\_A(i, X)$, where $Y(i)$ is a predicate which is true iff $\displaystyle y_i^{\ast} = 0$, and $\displaystyle B\_A(i, X)$ is a predicate which is true iff $\displaystyle b_i - A_i \mathbf{x^{\ast}} = 0$. The formula $\psi_1(i)$ is not Horn. However, since $\displaystyle y_i^{\ast} = 0$ and $\displaystyle b_i - A_i \mathbf{x^{\ast}} = 0$ do not occur anywhere else in (\ref{LPcharac}), we can negate the predicates and modify $\displaystyle \psi_1(i)$. As in Theorem \ref{thom:hornMax}, the Horn condition in the formula $\eta$ applies only to the second order predicates in \bf{S} and \bf{T}. ~In this case, it applies to predicates that involve unknowns such as $x_j$ and $y_i$. Let $\displaystyle YnotEq0(i)$ be true iff $\displaystyle y_i^{\ast} \not= 0$, and $\displaystyle B\_AnotEq0(i, X)$ be true iff $\displaystyle b_i - A_i \mathbf{x^{\ast}} \not= 0$. Using these, one can rewrite $\psi_1(i)$ as \begin{equation} \psi_1(i) \equiv \neg YnotEq0(i) \vee \neg B\_AnotEq0(i, X), \end{equation} which is a Horn formula. Similarly, the formula $\psi_2(j)$ in (\ref{LPcharac}) can be expressed in Horn form: \begin{equation} \psi_2(j) \equiv \neg XnotEq0(j) \vee \neg C\_AnotEq0(j, Y). \end{equation} Now that we know that all four subformulae in the first order part of (\ref{LPcharac}) can be expressed in universal Horn form, we can conclude that the formula in (\ref{LPcharac}) fully obeys the conditions of Theorem \ref{thom:decision}; that is, ESO logic with the first order part being a universal Horn formula (that is, the quantifier-free part is a conjunction of Horn clauses). Hence we can state that \begin{thom} For a pair of primal and dual Linear Programming problems as in (\ref{LPdual}), and hence obeying strong duality, the existence of optimal solutions for the primal and the dual can be expressed in ESO logic with the first order part being a universal Horn formula, and the optimal solutions can be computed in polynomial time (a) syntactically, and (b) by a single call to a decision Turing machine (which returns yes/no answers). \end{thom} But does strong duality imply polynomial time solvability? This is the subject of another manuscript \cite{manyemSheu09}. \subsection{MaxFlow MinCut}\label{sec:maxFlowMinCut} (The first order part of the characterization in this subsection is not yet universal Horn; we expect that it can be rewritten in universal Horn form.) The MaxFlow-MinCut Theorem is another example where Lagrangian duality plays an important role in characterizing optimal solutions. The MaxFlow and MinCut problems are dual to each other. At optimality, the values of the two optimal solutions coincide.
An optimal solution to MaxFlow can be syntactically recognized by an ``optimality condition", rather than a comparison of the objective function value with those of all other feasible solutions. \begin{defn} The \bf{MaxFlow} problem: \newline {\em Given}. We are given a network $G = (V, E)$ with 2 special vertices $s, t \in V$, $E$ is a set of directed edges, and each edge $(i,j) \in E$ has a capacity $C_{ij} > 0$. \newline {\em To Do}. Determine the maximum amount of flow that can be sent from $s$ to $t$ such that in each edge $(i,j) \in E$, the flow $f(i,j)$ is at most its capacity $C_{ij}$. That is, $0 \le f(i,j) \le C_{ij}, ~ \forall (i,j) \in E$. \label{maxFlowDefn} \end{defn} An \bf{s-t cut} is a non-empty subset $U$ of $V$ such that $s \in U$ and $t \in \bar{U}$, where $\bar{U} = V - U$. ~[If $U$ is used as a second order predicate, then $U(i)$ is true for all vertices $i \in U$; it follows that $U(s)$ is true and $U(t)$ is false.] ~The capacity of the cut, written as $C(U)$, is the sum of the capacities of all edges $(i,j)$ such that $i \in U$ and $j \in \bar{U}$: \[ C(U) = \sum_{(i,j) \in E, ~ i \in U, ~ j \in \bar{U}} C_{ij}. \] \begin{defn} The \bf{MinCut} problem: \newline {\em Given}. Same as the MaxFlow problem. \newline {\em To Do}. Of all the $s$-$t$ cuts in $G$, find a least cut; that is, a cut with the least capacity. \label{minCutDefn} \end{defn} The \it{optimality condition} for the MaxFlow problem is that there exists a least $s$-$t$ cut, $U$, such that \begin{itemize} \item (forward direction) For every edge $(i,j)$ in the edge set $E$ such that $i \in U$ and $j \in \bar{U}$, the flow in $(i,j)$, $f(i,j)$, is equal to its capacity $C_{ij}$; \item (backward direction) For every edge $(i,j) \in E$ such that $i \in \bar{U}$ and $j \in U$, $f(i,j) = 0$; and \item The maximum flow, that is, the optimal solution value for the MaxFlow problem, is equal to $C(U)$, the capacity of the cut $U$. \end{itemize} This condition can be syntactically characterized as \begin{equation} \begin{array}{ll} \exists {U} \exists {F} ~ \forall i \forall j ~ U(s) \wedge \neg U(t) \\ [1mm] \wedge ~ [E(i,j) \wedge U(i) \wedge \neg U(j) \longrightarrow F(i,j,C_{ij})] \\ [1mm] \wedge ~ [E(i,j) \wedge \neg U(i) \wedge U(j) \longrightarrow F(i,j,0)] ~ \wedge ~ \psi, ~ \mbox{where} \label{MFMCcharac} \end{array} \end{equation} $U$ and $F$ are second order predicates; \newline $E(i,j)$ is a first order relation which is true whenever $(i,j)$ is an edge in the input graph; \newline $U(i)$ is true when $i \in$ vertex set $U$; \newline $F(i,j,v)$ is true when the flow in the edge $(i,j)$ equals $v$; and \newline $\psi$ models the flow conservation constraint at all nodes. Once more, with the help of previously proven optimality conditions, we have been able to characterize the primal optimal solution $F$ and the dual optimal solution $U$, in existential second order logic (ESO). Similarly in Convex Programming, the Karush-Kuhn-Tucker conditions provide sufficient conditions for the optimality of a feasible solution. \section{Zero Duality Gap} \bf{Note}. Here, we only \it{express the existence} of optimal solutions. We \it{do not} compute optimal solutions (we do not provide a method to compute them). Gr\"{a}del \cite{gradel91} provided an expression for the existence of a feasible solution for polynomially solvable problems. What we provide here is an improvement on his result, for problems that obey strong duality.
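The certificate point of view also makes verification concrete: given a candidate flow and a candidate cut, checking the optimality condition of Section \ref{sec:maxFlowMinCut} takes only a handful of comparisons. The sketch below (a toy instance of our own) accepts a flow--cut pair exactly when the forward edges across the cut are saturated, the backward edges carry zero flow, flow is conserved, and the flow value equals the cut capacity:
\begin{verbatim}
# Toy network: edges with capacities, a candidate flow, a candidate cut.
cap  = {("s","a"): 3, ("s","b"): 2, ("a","t"): 2, ("a","b"): 2, ("b","t"): 3}
flow = {("s","a"): 3, ("s","b"): 2, ("a","t"): 2, ("a","b"): 1, ("b","t"): 3}
U = {"s"}                            # candidate least cut: s in U, t not in U

nodes = {v for e in cap for v in e}
def net(v):                          # net outflow at node v
    return (sum(f for (i, j), f in flow.items() if i == v)
            - sum(f for (i, j), f in flow.items() if j == v))

feasible = (all(0 <= flow[e] <= cap[e] for e in cap)
            and all(net(v) == 0 for v in nodes - {"s", "t"}))
forward_saturated = all(flow[(i, j)] == cap[(i, j)]
                        for (i, j) in cap if i in U and j not in U)
backward_empty = all(flow[(i, j)] == 0
                     for (i, j) in cap if i not in U and j in U)
cut_capacity = sum(cap[(i, j)] for (i, j) in cap if i in U and j not in U)

# All four conditions hold iff the flow and the cut are both optimal.
print(feasible, forward_saturated, backward_empty, net("s") == cut_capacity)
\end{verbatim}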
We observe that expressions such as those in (\ref{LPcharac}) and (\ref{MFMCcharac}) are possible only if there is no \it{duality gap}, that is, when the duality gap is zero. The primal optimality condition implies dual feasibility and vice versa. It remains open whether the formulae in (\ref{LPcharac}) and (\ref{MFMCcharac}) are Horn; if they are, the linear-time solvability of propositional Horn formulae may yield algorithmic consequences. To our knowledge, all known problem-pairs with a zero duality gap (also known as \it{strong duality}) are polynomially solvable. The decision versions of all such optimization problems can be shown to be in the complexity class NP $\cap$ CoNP \cite{manyemSheu09}. Problems in NP $\cap$ CoNP can be expressed in both ESO and USO (universal second order logic), since USO precisely characterizes problems in CoNP. Does strong duality imply polynomial time solvability? This is the subject of another manuscript \cite{manyemSheu09}. How all this relates to the polynomial hierarchy, to saddle points, and to (linear) complementarity problems remains to be explored. \section{Acknowledgements} We thank James Gate and Iain Stewart at the University of Durham (UK) for motivating us towards this line of research. A part of this work was carried out while one of the authors was visiting the National Cheng Kung University (NCKU) in Taiwan on a visiting fellowship. Support from NCKU is gratefully acknowledged.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{s:intro} Predicting the magnitudes of extreme events in deterministic dynamical systems is a fundamental problem with a wide range of applications. Examples of practical relevance include estimating the amplitudes of rogue waves in fluid or optical systems~\cite{Onorato2013}, the fastest possible mixing by incompressible fluid flows~\cite{Foures2014,Marcotte2018}, and the largest load on a structure due to dynamical forcing. In addition, extreme events relating to finite-time singularity formation are central to mathematical questions about the well-posedness and regularity of partial differential equations (PDEs). One such question is the Millennium Prize Problem concerning regularity of the three-dimensional Navier--Stokes equations~\cite{Carlson2006}, for which finite bounds on various quantities that grow transiently would imply the global existence of smooth solutions~\cite{Foias1989, Doering1995, Escauriaza2003, Doering2009}. This work studies extreme events in dynamical systems governed by ordinary differential equations (ODEs) or PDEs. Specifically, given a scalar quantity of interest $\Phi$, we seek to bound its largest possible value along trajectories that evolve forward in time from a prescribed set of initial conditions. This maximum, denoted by $\maxphi$ and defined precisely in the next section, may be considered over all forward times or up to a finite time. Our definition of extreme events as maxima applies equally well to minima since a minimum of $\Phi$ is a maximum of $-\Phi$. Bounding $\maxphi$ from above and from below are fundamentally different tasks. A lower bound is implied by any value of $\Phi$ on any relevant trajectory, whereas upper bounds are statements about whole classes of trajectories and require a different approach. Analytical bounds of both types appear in the literature for many systems with complicated nonlinear dynamics, but often they are far from sharp. More precise lower bounds on $\maxphi$ have sometimes been obtained using numerical integration, for instance to study extreme transient growth, optimal mixing, and transition to turbulence in fluid mechanics~\cite{Ayala2011, Ayala2014, Farazmand2017,Foures2014, Marcotte2018,Kerswell2014}. In such computations, adjoint optimization~\cite{Gunzburger2003} is used to search for an initial condition that locally maximizes $\Phi$ at a fixed terminal time, and a second level of optimization can vary the terminal time. Since both optimizations are non-convex, they give a local maximum of $\Phi$ but do not give a way to know whether it coincides with the global maximum $\maxphi$ or is strictly smaller. Thus, adjoint optimization cannot give upper bounds on $\maxphi$, even when made rigorous by interval arithmetic. {To find such an upper bound using numerical integration, one could use verified computations to find an outer approximation to the reachable set of trajectories starting from a bounded set~\cite{Cyranka2017}, and then bound $\maxphi$ from above by the global maximum of $\Phi$ on this approximating set. However, the latter is hard to compute if either $\Phi$ or the set on which it must be maximized are not convex.} The present study describes a general framework for bounding $\maxphi$ from above that does not rely on numerical integration. This framework can be implemented analytically, computationally, or both, depending on what is tractable for the equations being studied. 
It falls within a broad family of methods, dating back to Lyapunov's work on nonlinear stability~\cite{Lyapunov1892}, whereby properties of dynamical systems are inferred by constructing \emph{auxiliary functions}, which depend on the system's state and possibly on time, and which satisfy suitable inequalities. Lyapunov functions \cite{Lyapunov1892,Datko1970}, which often are used to verify nonlinear stability, are one type of auxiliary function. Other types can be used to approximate basins of attraction~\cite{Tan2006, Korda2013, Henrion2014, Valmorbida2017} and reachable sets~\cite{Magron2019,Jones2019}, estimate the effects of disturbances~\cite{Willems1972, Dashkovskiy2013, Ahmadi2016}, guarantee the avoidance of certain sets~\cite{Prajna2007, Ahmadi2017}, design nonlinear optimal controls~\cite{Lasserre2008, Henrion2008, Majumdar2014, Korda2016, Zhao2017, Korda2018}, bound infinite-time averages or stationary stochastic expectations~\cite{Chernyshenko2014, Fantuzzi2016siads, Kuntz2016, Goluskin2016lorenz, Tobasco2018, Korda2018a, Goluskin2019}, and bound extreme values over global attractors~\cite{Goluskin2018}. Some of these works refer to auxiliary functions as Lyapunov, Lyapunov-like, storage, or barrier functions, or as subsolutions to the Hamilton--Jacobi equation. Others do not use auxiliary functions explicitly but characterize nonlinear dynamics using invariant or occupation measures; the two approaches are related by Lagrangian duality and are equivalent in many cases. Furthermore, many proofs about differential equations that rely on monotone quantities can be viewed as special cases of various auxiliary function methods. For instance, as we explain in \cref{ex:fractional-burgers}, the bounds on transient growth in fluid systems proved in~\cite{Ayala2011,Ayala2014} fit within the general framework described here. Similarly, the ``background method'' introduced in~\cite{Doering1992} to bound infinite-time averages in fluid dynamics is equivalent to using quadratic auxiliary functions in a different framework~\cite{Chernyshenko2017, Goluskin2019}. In this paper, we describe how to use auxiliary functions to bound extreme values among nonlinear ODE or PDE trajectories starting from a specified set of initial conditions. Precisely, any differentiable auxiliary function satisfying two inequalities given in \cref{s:bounds-with-afs} provides an \textit{a priori} upper bound on $\maxphi$, without any trajectories being known. In the field of PDE analysis, these inequality conditions have been used implicitly to bound extreme events (e.g.,~\cite{Ayala2011,Ayala2014}), but the unifying framework we describe often has gone unrecognized. In the field of control theory, generalizations of our framework appear as convex relaxations of deterministic optimal control problems (e.g.,~\cite{Vinter1978,Vinter1978a,Lewis1980,Vinter1993}) and of stochastic optimal stopping problems~\cite{Cho2002}. In these works, constraints on auxiliary functions are deduced using convex duality after replacing the maximization of $\Phi$ over trajectories with a convex maximization over occupation measures. Here we derive the same constraints using elementary calculus, and we illustrate their application using numerous ODE examples and one PDE example. Unlike the maximization over trajectories that defines $\maxphi$, seeking the smallest upper bound among all admissible auxiliary functions defines a convex minimization problem.
In general these two optimization problems are weakly dual: the minimum is an upper bound on the maximum but may not be equal to it. In some cases they are strongly dual, meaning that the maximum over trajectories coincides with the minimum over auxiliary functions, and these functions act as Lagrange multipliers that enforce the dynamics when maximizing $\Phi$ over trajectories. In such cases there exist auxiliary functions giving arbitrarily sharp upper bounds on $\maxphi$. Strong duality holds for a large class of sufficiently regular ODEs where the maximum of $\Phi$ is taken over a finite time horizon. This strong duality has been proved for a more general class of optimal control problems using measure theory and convex duality~\cite{Lewis1980}, and \cref{s:direct-proof-strong-duality} gives a simpler proof for our present context that shows existence of near-optimal auxiliary functions using a mollification argument similar to~\cite{Hernandez1996}. In many practical applications, constructing auxiliary functions that yield explicit upper bounds on $\maxphi$ is difficult regardless of whether strong duality holds. We illustrate various constructions here but do not have an approach that works universally. However, in the important case of dynamical systems governed by polynomial ODEs, polynomial auxiliary functions can be constructed using computational methods for polynomial optimization. With an infinite time horizon, this approach is applicable if the only invariant trajectories are algebraic sets, which is always true of steady states and is occasionally true of periodic orbits. With a finite time horizon, there is no such restriction. Polynomial ODEs are computationally tractable because the inequality constraints on auxiliary functions amount to nonnegativity conditions on certain polynomials. Polynomial nonnegativity is NP-hard to decide~\cite{Murty1987} but can be replaced by the stronger constraint that the polynomial is representable as a sum of squares (SOS). Optimization problems subject to SOS constraints can be reformulated as semidefinite programs (SDPs)~\cite{Nesterov2000, Lasserre2001, Parrilo2003} and solved using algorithms with polynomial-time complexity~\cite{Vandenberghe1996}. Thus, one can minimize upper bounds on $\maxphi$ for polynomial ODEs by numerically solving SOS optimization problems. Moreover, we prove that bounds computed with SOS methods become sharp as the degree of the polynomial auxiliary function is raised, provided that the time horizon is finite, certain compactness properties hold, and the minimization over general auxiliary functions is strongly dual to the maximization of $\Phi$ over trajectories. We illustrate the computation of very sharp bounds using SOS methods for several ODE examples, including a 16-dimensional system. In addition to methods for bounding $\maxphi$ above, we describe a way to locate trajectories on which the observable $\Phi$ attains its maximum value of $\maxphi$. Specifically, auxiliary functions that prove sharp or nearly sharp upper bounds on $\maxphi$ can be used to define regions in state space where each such trajectory must lie prior to its extreme event. We illustrate this using an ODE for which nearly optimal polynomial auxiliary functions can be computed by SOS methods. The rest of this paper is organized as follows. \Cref{s:bounds-with-afs} explains how auxiliary functions can be used to bound the magnitudes of extreme events in nonlinear dynamical systems.
We construct bounds in several ODE examples and one PDE example; some but not all of these bounds are sharp. \Cref{s:optimal-trajectories} explains how auxiliary functions can be used to locate trajectories leading to extreme events. \Cref{s:sos-optimization} describes how polynomial optimization can be used to construct auxiliary functions computationally for polynomial ODEs. Bounds computed in this way for various ODE examples appear in that section and others. \Cref{s:extensions} extends the framework to give bounds on extreme values at particular times or integrated over time, rather than maximized over time, giving a more direct derivation of bounding conditions that have appeared in~\cite{Vinter1978,Vinter1978a,Lewis1980,Vinter1993}. Conclusions and open questions are offered in \cref{s:conclusion}. Appendices contain details of calculations and an alternative proof of the strong duality result that follows from~\cite{Lewis1980}. \section{Bounds using auxiliary functions} \label{s:bounds-with-afs} Consider a dynamical system on a Banach space $\X$ that is governed by the differential equation \begin{equation} \label{e:system} \dot{x} = F(t,x), \quad x(t_0) = x_0. \end{equation} Here, $F:\R \times \X \to \X$ is continuous and possibly nonlinear, the initial time $t_0$ and initial condition $x_0$ are given, and $\dot x$ denotes $\partial_t x$. When $\X = \R^n$, \cref{e:system} defines an $n$-dimensional system of ODEs. When $\X$ is a function space and $F$ a differential operator, \cref{e:system} defines a parabolic PDE, which may be considered in either strong or weak form~\cite{Temam1997, Robinson2001}. The trajectory of~\cref{e:system} that passes through the point $y \in \X$ at time $s$ is denoted by $x(t \given s,y)$. We assume that, for every choice of $(s,y) \in \R \times \X$, this trajectory exists uniquely on an open time interval, which can depend on both $s$ and $y$ and might be unbounded. Suppose that $\Phi : \R \times \X \to \R$ is a continuous function that describes a quantity of interest for system~\cref{e:system}. Let $\Phi^*$ denote the largest value attained by $\Phi[t,x(t\given t_0, x_0)]$ among all trajectories that start from a prescribed set $X_0 \subset \X$ and evolve forward over a closed time interval $\mathcal T$ that is either finite, $\mathcal T=[t_0,T]$, or infinite, $\mathcal T=[t_0,\infty)$: \begin{equation} \label{e:maxphi} \maxphi := \sup_{\substack{x_0 \in X_0\\[2pt]t\in\mathcal T}} \Phi\!\left[ t, x(t \given t_0, x_0) \right]. \end{equation} We write $\maxphi_T$ and $\maxphi_\infty$ instead of $\maxphi$ when necessary to distinguish between finite and infinite time horizons. Our objective is to bound $\Phi^*$ from above without knowing trajectories of~\cref{e:system}. Let $\Omega \subset \T \times \X$ be a region of spacetime in which the graphs $(t,x(t\given t_0,x_0))$ of all trajectories starting from $X_0$ remain up to the time horizon of interest. In applications one may be able to identify a set $\Omega$ that is strictly smaller than $\T \times \X$; otherwise it suffices to choose $\Omega=\T \times \X$. The maximum~\cref{e:maxphi} that we aim to bound depends only on trajectories within~$\Omega$. To derive upper bounds on $\maxphi$ we employ auxiliary functions $V:\Omega\to\R$.
In most cases we require $V$ to be differentiable along trajectories of~\cref{e:system}, so that its Lie derivative \begin{equation} \mathcal{L} V(s,y) := \lim_{\varepsilon \to 0} \frac{V \left[s+\varepsilon, x(s+\varepsilon \given s,y)\right] - V(s,y)}{\varepsilon} \end{equation} is well defined. By design the function $\mathcal LV:\Omega\to\R$ coincides with the rate of change of $V$ along trajectories, meaning $\ddt V(t,x(t))=\mathcal LV(t,x(t))$ if $x(t)$ solves~\cref{e:system} and all derivatives exist. Crucially, an expression for $\mathcal LV$ can be derived without knowing the trajectories. In practice one differentiates $V[t,x(t\given s,y)]$ with respect to $t$ and uses the differential equation~\cref{e:system}. For example, when $\X = \R^n$ and~\cref{e:system} is a system of ODEs, the chain rule gives \begin{equation} \label{e:LV-odes} \mathcal{L} V(t,x) = \partial_t V(t,x) + F(t,x)\cdot \nabla_x V(t,x). \end{equation} \Cref{ss:framework} presents inequality constraints on $V$ and $\mathcal L V$ that imply upper bounds on $\Phi^*$, as well as a convex framework for optimizing these bounds. Both can be obtained as particular cases of a general relaxation framework for optimal control problems~\cite{Vinter1978,Vinter1978a,Lewis1980}, but we give an elementary derivation. \Cref{ss:global-local} compares bounds obtained when $\Omega=\T \times \X$, meaning that the constraints on $V$ are imposed globally in spacetime, to bounds obtained when a strictly smaller $\Omega$ containing all relevant trajectories can be found. Finally, \cref{ss:sharpness} discusses conditions under which arbitrarily sharp upper bounds on $\Phi^*$ can be proved. \subsection{Bounding framework} \label{ss:framework} Assume that for each initial condition $x_0 \in X_0$ a trajectory $x(t\given t_0, x_0)$ exists on some open time interval where it is unique and absolutely continuous. This does not preclude trajectories that are unbounded in infinite or finite time. To bound $\maxphi$ we define a class $\V(\Omega)$ of admissible auxiliary functions as the subset of all differentiable functions, $C^1(\Omega)$, that do not increase along trajectories and bound $\Phi$ from above pointwise. Precisely, $V \in \V(\Omega)$ if and only if \begin{subequations} \label{e:V-conditions} \begin{align} \label{e:cond1} \mathcal{L}V(t,x) &\leq 0 \quad\forall (t,x) \in \Omega,\\ \label{e:cond2} \Phi(t,x) - V(t, x) &\leq 0 \quad\forall (t,x) \in \Omega. \end{align} \end{subequations} The system dynamics enter only in the derivation of $\mathcal{L}V$; conditions~(\ref{e:V-conditions}a,b) are imposed pointwise in the spacetime domain $\Omega$ and can be verified without knowing any trajectories. If $\Omega = \T\times \X$ we call $V$ a \emph{global} auxiliary function; otherwise it is \emph{local} on a smaller chosen $\Omega$. We claim that \begin{equation} \label{e:weak-duality} \maxphi \leq \adjustlimits \inf_{V \in \V(\Omega)}\sup_{x_0 \in X_0} V(t_0,x_0), \end{equation} with the convention that the righthand side is $+\infty$ if $\V(\Omega)$ is empty. To see that \cref{e:weak-duality} holds when $\V$ is not empty, consider fixed $V\in\V(\Omega)$ and $x_0\in X_0$.
For any $t\geq t_0$ up to which the trajectory $x(t\given t_0,x_0)$ exists and is absolutely continuous, the fundamental theorem of calculus can be combined with~(\ref{e:V-conditions}a,b) to find \begin{align} \label{e:inequality-sequence} \Phi[t, x(t \given t_0, x_0)] &\leq V[t, x(t \given t_0, x_0)]\\\nonumber &= V(t_0,x_0) + \int_{t_0}^t \mathcal{L}V[\xi ,x(\xi \given t_0, x_0)] \dxi\\\nonumber &\leq V(t_0,x_0). \end{align} Thus, the existence of any $V \in \V(\Omega)$ implies that $\Phi[t, x(t \given t_0, x_0)]$ is bounded uniformly on $\T$ for each $x_0$. Conversely, if $\Phi$ blows up before the chosen time horizon for any $x_0\in X_0$, then no auxiliary functions exist. Maximizing both sides of \cref{e:inequality-sequence} over $t\in\T$ and $x_0\in X_0$ gives \begin{equation} \label{e:weak-duality-incomplete} \maxphi \leq \sup_{x_0 \in X_0} V(t_0,x_0), \end{equation} and then minimizing over $\V(\Omega)$ gives \cref{e:weak-duality} as claimed. The minimization problem on the righthand side of~\cref{e:weak-duality} is convex and gives a bound on the (generally non-convex) maximization problem defining $\maxphi$ in~\cref{e:maxphi}. Despite convexity of the minimization, it usually is difficult to construct an optimal or near-optimal auxiliary function, even with computer assistance. Nevertheless, any auxiliary function satisfying~(\ref{e:V-conditions}a,b) gives a rigorous upper bound on $\maxphi$ according to~\cref{e:weak-duality-incomplete}. This framework therefore can be useful for analysis, and sometimes for computation, even when the dynamics are very complicated. Analytically, one often can find a suboptimal auxiliary function that yields fairly good bounds. Computationally, for certain systems including polynomial ODEs, one can optimize $V$ over a finite-dimensional subset of $\V(\Omega)$ to obtain bounds that are very good and sometimes perfect. However, the inequality in~\cref{e:weak-duality} is strict in general, meaning that there are cases where the optimal bounds provable using conditions~(\ref{e:V-conditions}a,b) are not sharp. Local auxiliary functions can sometimes produce sharp bounds when global ones fail, although this depends on the spacetime set $\Omega$ inside which the graphs of trajectories are known to remain. This is illustrated by examples in \cref{ss:global-local}, while \cref{ss:sharpness} discusses sufficient conditions for bounds from auxiliary functions to be arbitrarily sharp. First, however, we present two examples where global auxiliary functions work well. \Cref{ex:nonautonomous-example-sos} concerns a simple ODE where the optimal upper bound~\cref{e:weak-duality} produced by global $V$ appears to be sharp. We demonstrate this by constructing $V$ increasingly close to optimal, obtaining bounds that are extremely close to $\maxphi$. These $V$ are constructed computationally using polynomial optimization methods, the explanation of which is postponed until \cref{s:sos-optimization}. \Cref{ex:fractional-burgers} proves bounds for the Burgers equation with ordinary and fractional diffusion. We analytically construct $V$ giving bounds that are finite, but unlikely to be sharp. The bounds for fractional diffusion are novel, while those for ordinary diffusion show that the proof of the same result in~\cite{Ayala2011} can be seen as an instance of the auxiliary function framework.
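Both admissibility conditions can be checked symbolically when $F$, $\Phi$, and $V$ are polynomial. As a concrete illustration (our own, assuming the Python library \texttt{sympy} is available), the following sketch verifies conditions~(\ref{e:V-conditions}a,b) for the quadratic auxiliary function used in \cref{ex:nonautonomous-example-sos} below, computing $\mathcal LV$ from the chain-rule expression~\cref{e:LV-odes} rather than from any trajectory.
\begin{verbatim}
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2', real=True)
F = sp.Matrix([x2*t - sp.Rational(1, 10)*x1 - x1*x2,
               -x1*t - x2 + x1**2])            # polynomial righthand side
V = sp.Rational(1, 2)*(1 + x1**2 + x2**2)      # candidate auxiliary function
Phi = x1                                       # quantity of interest

# LV = dV/dt + F . grad_x V, obtained without knowing any trajectory
LV = sp.diff(V, t) + F.dot(sp.Matrix([sp.diff(V, x1), sp.diff(V, x2)]))
print(sp.expand(LV))                           # -> -x1**2/10 - x2**2
print(sp.expand(V - Phi - ((x1 - 1)**2 + x2**2)/2))   # -> 0
\end{verbatim}
The two printed identities show that $\mathcal LV=-x_1^2/10-x_2^2\le0$ and $V-\Phi=\tfrac12[(x_1-1)^2+x_2^2]\ge0$ everywhere, so this $V$ is admissible on all of $[0,\infty)\times\R^2$.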
\begin{example} \label{ex:nonautonomous-example-sos} \belowpdfbookmark{Example~\ref{ex:nonautonomous-example-sos}}{bookmark:nonautomonous-sos} Consider the nonautonomous ODE system \begin{equation} \label{e:nonautomonous-system-example} \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2 t -0.1 x_1 -x_1 x_2\\ -x_1 t -x_2 +x_1^2 \end{bmatrix}. \end{equation} % All trajectories eventually approach the origin, but various quantities can grow transiently. For example, consider the maximum of $\Phi = x_1$ over an infinite time horizon. Let the initial time be $t_0=0$ and the set of initial conditions $X_0$ contain only the point $x_0=(0,1)$. Then, $\maxphi_\infty$ is the largest value of $x_1$ along the trajectory with $x(0)=(0,1)$, and it is easy to find by numerical integration. Doing so gives $\maxphi\approx 0.30056373$, and this value can be used to judge the sharpness of upper bounds on $\maxphi_\infty$ that we produce using global auxiliary functions. The quadratic polynomial % \begin{equation} \label{e:nonautomonous-system-example-V} V(t,x) = \tfrac12 \left( 1 + x_1^2 + x_2^2\right) \end{equation} % is an admissible global auxiliary function, meaning that it satisfies the inequalities~(\ref{e:V-conditions}a,b) on $\Omega=[0,\infty)\times\R^2$. For this $V$ and the chosen $X_0$ and $t_0$, the bound \cref{e:weak-duality-incomplete} yields % \begin{equation} \maxphi_\infty \leq V(0,x_0) = 1. \end{equation} % This is the best bound that can be proved using global quadratic $V$, as shown in \cref{app:nonautomonous-system-example-optimality}, but optimizing polynomial $V$ of higher degree produces better results. For instance, the best global quartic $V$ that can be constructed using polynomial optimization is \begin{multline} V(t,x)= 0.2353 +0.7731\,x_1^2 +0.1666\,x_1 x_2 +0.4589\,x_2^2 +0.5416\,x_1^3 +0.05008\,t x_1^2\\ +0.1616\,t x_1 x_2 +0.2505\,t x_2^2 -0.1058\,x_1^2 x_2 +0.1730\,x_1 x_2^2 -0.5766\,x_2^3\\ +0.2962\,x_1^4 +0.1888\,t^2 x_1^2 +0.1888\,t^2 x_2^2 +0.5923\,x_1^2 x_2^2 +0.2962\,x_2^4, \end{multline} where numerical coefficients have been rounded. The bound on $\maxphi_\infty$ that follows from the above $V$ is reported in \cref{table:results-nonautonomous-example}, along with bounds that follow from computationally optimized $V$ of polynomial degrees 6, 8, and 10 (omitted for brevity). The bounds improve as the degree of $V$ is raised, and the optimal degree-8 bound is sharp up to nine significant figures. The numerical approach used for such computations is described in \cref{s:sos-optimization}. \begin{table}[t] \caption{Upper bounds on $\maxphi_\infty$ for \cref{ex:nonautonomous-example-sos}, computed using polynomial optimization with $V$ of various polynomial degrees. For the single initial condition $x_0=(0,1)$, numerical integration gives $\maxphi\approx0.30056373$ for all time horizons larger than $T=1.6635$, which agrees with the degree-8 bound to the tabulated precision. 
For the set $X_0$ of initial conditions on the shifted unit circle with center $(-\tfrac34,0)$, nonlinear optimization of the initial angular coordinate yields $\maxphi_\infty\approx0.49313719$, which agrees with the degree-10 bound to the tabulated precision.} \label{table:results-nonautonomous-example} \centering \small \begin{tabular}{cc ccc} \toprule && \multicolumn{3}{c}{Upper bounds} \\[2pt] \cline{3-5}\\[-8pt] $\deg(V)$ && $X_0=\{(0,1)\}$ && $X_0$ circle \\[2pt] \hline 2 && 1\phantom{.00000000} && 1.75\phantom{000000} \\ 4 && 0.41381042 && 0.80537235 \\ 6 && 0.30056854 && 0.49808038 \\ 8 && 0.30056373 && 0.49313760 \\ 10 && '' && 0.49313719 \\ \bottomrule \end{tabular} \end{table} \begin{figure} \centering \includegraphics[scale=1]{./nonautonomous_phase_portrait} \begin{tikzpicture}[overlay] \node at (-12.4,2.65) {\footnotesize(a)}; \node at (-7.4,2.65) {\footnotesize(b)}; \end{tikzpicture} \vspace{-2ex} \caption{(a) Sample trajectories starting from the circle with center $(-\tfrac34,0)$ and unit radius ({\color{matlabgray}\dashedrule}). The initial conditions are marked with a circle, while the color scale reflects the maximum value of $\Phi$ along each trajectory. (b) Numerical approximation to the maximum of $\Phi$ along single trajectories with initial condition on the shifted unit circle $(\cos\theta-\tfrac34,\sin\theta)$ as a function of the angular coordinate~$\theta$.} \label{f:nonautonomous-figure} \end{figure} Unlike searching among particular trajectories, bounding $\maxphi$ from above is not more difficult when the set $X_0$ of initial conditions is larger than a single point. For example, consider initial conditions on the shifted unit circle centered at $(-\tfrac34,0)$, % \begin{equation} X_0 = \left\{(x_1,x_2):\,\left(x_1+\tfrac34 \right)^2+x_2^2 = 1\right\} = \Big\{\left(\cos \theta - \tfrac34,\sin\theta \right):\;\theta \in [0,2\pi)\Big\}. \end{equation} % Sample trajectories and the variation of $\max_{t\ge0}\Phi$ with the angular position $\theta$ in $X_0$ are shown in \cref{f:nonautonomous-figure}. Finding the trajectory that attains $\maxphi$ requires numerical integration, combined with nonlinear optimization over initial conditions in $X_0$. Starting MATLAB's optimizer \texttt{fmincon} from initial guesses with angular coordinate $\theta=\tfrac{3\pi}{4}$ and $\theta=\tfrac{\pi}{10}$ yields locally optimal initial conditions of $\theta\approx1.125\pi$ and $\theta=2\pi$, which lead to $\Phi$ values of 0.49313719 and 0.25, respectively. \Cref{f:nonautonomous-figure}(b) confirms that the former initial condition is globally optimal, meaning $\maxphi\approx0.49313719$. On the other hand, polynomial auxiliary functions can be optimized by the methods of \cref{s:sos-optimization} using exactly the same algorithms as when $X_0$ contains a single point. For initial conditions on the shifted unit circle $X_0$, \cref{table:results-nonautonomous-example} lists upper bounds on $\maxphi$ implied by numerically optimized polynomial $V$ of degrees up to 10. We omit the computed $V$ for brevity. The optimal degree-10 $V$ gives a bound that is sharp to eight significant figures. 
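The reference values quoted in this example are straightforward to reproduce by direct numerical integration. The following sketch (our own, assuming NumPy and SciPy; the truncated horizon $T=30$ and the grid sizes are ad hoc choices, sufficient here because trajectories decay to the origin) integrates~\cref{e:nonautomonous-system-example} and maximizes $\Phi=x_1$ over a fine time grid.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    x1, x2 = x
    return [x2*t - 0.1*x1 - x1*x2, -x1*t - x2 + x1**2]

def max_x1(x0, T=30.0):
    # dense output lets us evaluate Phi = x1 on a fine time grid
    sol = solve_ivp(f, (0.0, T), x0, dense_output=True,
                    rtol=1e-10, atol=1e-12)
    return sol.sol(np.linspace(0.0, T, 20001))[0].max()

print(max_x1([0.0, 1.0]))                 # approx 0.30056373

# coarse sweep over the shifted unit circle of initial conditions
thetas = np.linspace(0.0, 2*np.pi, 361)
vals = [max_x1([np.cos(th) - 0.75, np.sin(th)]) for th in thetas]
best = int(np.argmax(vals))
print(vals[best], thetas[best]/np.pi)     # approx 0.4931 near theta = 1.125*pi
\end{verbatim}
A coarse sweep of this kind only brackets the optimum; the value $\maxphi\approx0.49313719$ quoted above requires refining $\theta$ with a local optimizer, as done in the text with \texttt{fmincon}.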
\markendexample\end{example} \begin{example} \label{ex:fractional-burgers} \belowpdfbookmark{Example~\ref{ex:fractional-burgers}}{bookmark:fractional-burgers} To illustrate the analytical use of global auxiliary functions for PDEs, we consider mean-zero period-1 solutions $u(t,x)$ of the Burgers equation with fractional diffusion, % \begin{equation} \label{e:fractional-burgers} \begin{gathered} \dot u = - u u_x - (-\Delta)^\alpha u, \\ u(0,x) = u_0(x),\quad u(t,x+1) = u(t,x),\quad \int_0^1 u(t,x) \dx = 0. \end{gathered} \end{equation} % Following standard PDE notation, in this example the state variable in $\X$ is denoted by $u(t,\cdot)$, whereas $x\in[0,1]$ is the spatial variable. Discussion of this equation and a definition of the fractional Laplacian $(-\Delta)^\alpha$ can be found in~\cite{Yun2018}. Ordinary diffusion is recovered when $\alpha=1$. For each $\alpha\in(\tfrac12,1]$, solutions exist and remain bounded when the Banach space $\X$ in which solutions evolve is the Sobolev space $H^s$ with $s>\tfrac32-2\alpha$~\cite{Kiselev2008}. Let us consider a quantity that is called fractional enstrophy in~\cite{Yun2018}, % \begin{equation} \Phi(u) := \frac12 \int_0^1 \left[(-\Delta)^{\frac{\alpha}{2}} u\right]^2 \dx. \end{equation} % We aim to bound $\maxphi_\infty$ among trajectories whose initial conditions $u_0$ have a specified value $\Phi_0$ of fractional enstrophy, so the set of initial conditions is \begin{equation} X_0=\left\{ u\in\X :\,\Phi(u)=\Phi_0 \right\}. \end{equation} Here we prove $\Phi_0$-dependent upper bounds on $\maxphi_\infty$ for $\alpha\in(\tfrac34,1]$. Such bounds have been reported for ordinary diffusion ($\alpha=1$) \cite{Ayala2011} but not for $\alpha<1$. We employ global auxiliary functions of the form % \begin{equation} V(u) = \left[ \Phi(u)^\beta + C \|u\|_2^2 \right]^{1/\beta}, \label{e:burgers-V-ansatz} \end{equation} % where $\|u\|_2^2 = \int_0^1 u^2 \dx$ and the constants $\beta,C>0$ are to be chosen. This ansatz is guided by the realization that the analysis of the $\alpha=1$ case~\cite{Ayala2011} is equivalent to {the} auxiliary function framework with $\beta=1/3$ in~\cref{e:burgers-V-ansatz}. To be an admissible auxiliary function, $V$ must satisfy~(\ref{e:V-conditions}a,b). The inequality $V(u) \geq \Phi(u)$ holds for every positive $C$, while the inequality $\mathcal{L}V(u)\le 0$ constrains $\beta$ and $C$. To derive an expression for $\mathcal{L}V(u)$ we first note that differentiating along trajectories of \cref{e:fractional-burgers} and integrating by parts gives % \begin{subequations} \label{e:fractional-burgers-KE} \begin{gather} \ddt \|u(t,\cdot)\|_2^2 = - 4 \Phi[u(t,\cdot)],\\[1ex] \ddt \Phi[u(t,\cdot)] =R[u(t,\cdot)] := - \int_0^1 [(-\Delta)^\alpha u]^2 \dx - \int_0^1 u u_x (-\Delta)^\alpha u \dx. \end{gather} \end{subequations} % Differentiating $V[u(t,\cdot)]$ in time thus gives % \begin{equation} \label{e:fractional-burgers-LV} \mathcal{L}V(u) = \frac{1}{\beta}\left[ \Phi(u)^\beta + C \|u\|_2^2 \right]^{ \frac{1}{\beta} -1 } \left[ \beta \Phi(u)^{\beta-1} R(u) - 4 C \Phi(u) \right]. \end{equation} % The sign of $\mathcal L V$ is that of the expression in the rightmost brackets, so an estimate for $R(u)$ is needed. Theorem 2.2 in~\cite{Yun2018} provides $R(u) \leq \sigma_\alpha \Phi(u)^{\gamma_\alpha}$, with $\gamma_\alpha=\tfrac{8\alpha-3}{6\alpha-3}$ and explicit prefactors $\sigma_\alpha$ that blow up as $\alpha\to\tfrac34^+$. 
By fixing $\beta=2-\gamma_\alpha$ and $C = (2-\gamma_\alpha) \sigma_\alpha/4$, we guarantee that \cref{e:fractional-burgers-LV} is nonpositive. Thus, $V$ is a global auxiliary function yielding the bound % \begin{equation} \maxphi_\infty \leq \sup_{u_0 \in X_0} \left[ \Phi_0^{2-\gamma_\alpha} + \frac{(2-\gamma_\alpha) \sigma_\alpha}{4} \, \|u_0\|_2^2 \right]^{ \frac{1}{2-\gamma_\alpha} } \end{equation} % according to~\cref{e:weak-duality-incomplete}. Finally, the righthand maximization over $u_0$ can be carried out analytically by calculus of variations to bound $\maxphi_\infty$ in terms of only the initial fractional enstrophy $\Phi_0$, % \begin{equation} \label{e:fractional-burgers-bound} \maxphi_\infty \leq \left[ \Phi_0^{2-\gamma_\alpha} + \frac{(2-\gamma_\alpha) \sigma_\alpha}{2 (2\pi)^{2\alpha}} \, \Phi_0 \right]^{ \frac{1}{2-\gamma_\alpha} }. \end{equation} % The bound~\cref{e:fractional-burgers-bound} is finite for every $\alpha \in (\frac34,1]$. The coefficient on $\Phi_0$ is bounded uniformly for $\alpha$ in this range, but the exponent $\tfrac{1}{2-\gamma_\alpha}$ blows up as $\alpha\to\tfrac34^+$. When $\alpha=1$ we can replace $\sigma_\alpha$ with a smaller prefactor from~\cite{Lu2008} to find \begin{equation} \label{e:ordinary-burgers-bound} \maxphi_\infty \leq \left( \Phi_0^{1/3} + 2^{-10/3}\pi^{-8/3} \, \Phi_0 \right)^3. \end{equation} The above estimate is identical to the result of~\cite{Ayala2011},\footnote{Expression (5) in~\cite{Ayala2011} is claimed to hold with $\mathcal E$ being identical to our $\Phi(u)$, but in fact it holds with $\mathcal E=2\Phi(u)$ because their derivation uses estimate (3.7) from~\cite{Lu2008}. With this correction, and with $L=1$ and $\nu=1$, the expression in~\cite{Ayala2011} agrees with our bound~\cref{e:ordinary-burgers-bound}.} and their argument is equivalent to ours in that it implicitly relies on our $V$ being nonincreasing along trajectories. Similarly, in~\cite{Ayala2014} the same authors bound a quantity called palinstrophy in the two-dimensional Navier--Stokes equations, and that proof can be seen as using (in their notation) the global auxiliary function $V(u) = \left[ \mathcal P(u)^{1/2} + (4\pi\nu^2)^{-2}\mathcal K(u)^{1/2} \mathcal E(u) \right]^2$. The bound~\cref{e:fractional-burgers-bound} is unlikely to be sharp. For $\alpha=1$ its righthand side grows like $\Phi_0^3$ when $\Phi_0\gg1$, whereas numerical and asymptotic evidence suggests that $\maxphi_\infty = \mathcal{O}\big(\Phi_0^{3/2}\big)$~\cite{Ayala2011,Pelinovsky2012}. It is an open question whether going beyond the $V$ ansatz \cref{e:burgers-V-ansatz} can produce sharper analytical bounds, and whether the optimal bound \cref{e:weak-duality} that can be proved using global auxiliary functions would be sharp in this case. \markendexample\end{example} \subsection{Global versus local auxiliary functions} \label{ss:global-local} In various cases, such as \cref{ex:nonautonomous-example-sos} above, global auxiliary functions can produce arbitrarily sharp upper bounds on $\maxphi$. Other times they cannot. In \cref{ex:ex-1d-unbounded-trajectories} below, global auxiliary functions give bounds that are finite but not sharp. In \cref{ex:ex-infeasible-bounded-phi}, no global auxiliary functions exist. Sharp bounds can be recovered in both examples by using local auxiliary functions, meaning that we enforce constraints~(\ref{e:V-conditions}a,b) only on a subset $\Omega \subsetneq \T\times \X$ of spacetime that contains all trajectories of interest.
There are various ways to determine that trajectories starting from the initial set $X_0$ remain in a spacetime set $\Omega$ during the time interval $\T$. One option is to choose a function $\Psi(t,x)$ and use global auxiliary functions to show that $\Psi^*\le B$ for initial conditions in $X_0$. This implies that trajectories starting from $X_0$ remain in the set \begin{equation} \Omega := \lbrace (t,x)\in\T\times\X:\,\Psi(t,x)\leq B \rbrace. \end{equation} Any $\Psi$ that can be bounded using global auxiliary functions can be used, including $\Psi=\Phi$, and $\Omega$ can be refined by considering more than one $\Psi$. Another way to show that trajectories never exit a prescribed set $\Omega$ is to construct a barrier function that is nonpositive on $\{t_0\}\times X_0$, positive outside $\Omega$, and whose zero level set cannot be crossed by trajectories. Barrier functions can be constructed analytically in some cases, and computationally for ODEs with polynomial righthand sides; see~\cite{Prajna2007,Ahmadi2017} and references therein. Finally, in the polynomial ODE case the computational methods of \cite{Henrion2014} can produce a spacetime set $\Omega=\T \times X$, where $X \subsetneq \X$ is an outer approximation for the evolution of the initial set $X_0$ over the time interval $\T$. The next two examples demonstrate the differences between global and local auxiliary functions for a simple ODE where a suitable choice of $\Omega$ is apparent. \begin{example} \label{ex:ex-1d-unbounded-trajectories} \belowpdfbookmark{Example~\ref{ex:ex-1d-unbounded-trajectories}}{bookmark:1d-unbounded-trajectories} Consider the autonomous one-dimensional ODE \begin{equation} \label{e:xdot=x2} \dot{x} = x^2, \qquad x(0)=x_0. \end{equation} % Trajectories $x(t) = x_0/(1-x_0 t)$ with nonzero initial conditions grow monotonically. If $x_0<0$, then $x(t)\to0$ as $t\to\infty$; if $x_0>0$, then $x(t)$ blows up at the critical time $t=1/x_0$. Suppose the set of initial conditions $X_0$ includes only a single point $x_0$, the time interval is $\T=[0,\infty)$, and the quantity to be bounded is % \begin{equation} \label{e:phi-ode-blowup} \Phi(x) = \frac{4x}{1+4x^2}. \end{equation} % Since $|\Phi(x)|\le1$ uniformly, $\maxphi_\infty$ is finite for each $x_0$ despite the blowup of trajectories starting from positive initial conditions. Explicit solutions give % \begin{equation} \label{e:maxphi-example-unbounded-trajectories} \maxphi_\infty = \begin{cases} 0, & \phantom{0< }\,\,x_0 \leq 0,\\%[0.5ex] 1, & 0 < x_0 \leq\tfrac12,\\%[1.5ex] \displaystyle\frac{4x_0}{1+4 x_0^2}, &\phantom{0<}\,\,x_0>\tfrac12. \end{cases} \end{equation} Here $X_0$ contains only one initial condition, so the optimal bound~\cref{e:weak-duality} simplifies to % \begin{equation} \label{e:weak-duality-blowup-example} \maxphi_\infty \leq \inf_{V \in \V(\Omega)} V(0,x_0). \end{equation} % The constant function $V\equiv1$ belongs to $\V$ for each $x_0$ and implies the trivial bound $\maxphi_\infty \leq 1$, which is sharp for $x_0\in(0,1/2]$. For all other $x_0\neq0$ there exist different $V$ providing sharp bounds on $\maxphi_\infty$, regardless of whether the domain $\Omega$ of auxiliary functions is global or local. This is shown in \cref{app:sharp-bounds-ex-x2}. At the semistable point $x_0=0$, however, sharp bounds are possible only with local auxiliary functions on certain $\Omega$. In the $x_0=0$ case, the resulting trajectory is simply $x(t)\equiv0$. 
Thus it suffices to enforce the auxiliary function constraints (\ref{e:V-conditions}a,b) locally on $\Omega=[0,\infty) \times \{0\}$. On this $\Omega$, the constant function $V\equiv0$ is a local auxiliary function giving the sharp bound $\maxphi\le0$. In fact, the same is true with $\Omega = [0,\infty) \times X$ for any $X$ with $0\in X \subseteq (-\infty,0]$. On the other hand, if the chosen set $X$ contains any open neighborhood of 0, then sharp bounds are not possible. This is true in particular for global auxiliary functions, which must satisfy constraints (\ref{e:V-conditions}a,b) on $\Omega=[0,\infty)\times\R$. The righthand minimum in~\cref{e:weak-duality-blowup-example} over global auxiliary functions is attained by the constant function $V=1$. No better bound is possible with global $V$ because they must satisfy $V(0,0) \geq 1$. To prove this, recall that every $V(t,x)$ is continuous by definition. Thus for any $\delta > 0$ there exists $y>0$ such that $V(0,0) \geq V(0,y) - \delta$. The trajectory of~\cref{e:xdot=x2} with initial condition $x(0)=y$ blows up in finite time and must therefore pass through $x=\frac12$ at some time $t^*$. Condition~\cref{e:cond2} requires that $V(t^*,\frac12) \geq \Phi(\frac12) = 1$, while~\cref{e:cond1} implies that $V$ decays along trajectories, so % \begin{equation} V(0,0) \geq V(0,y) - \delta \geq V(t^*,\tfrac12) - \delta \geq 1-\delta \end{equation} % for every $\delta>0$. Thus $V(0,0) \geq 1$, so when $x_0=0$ the righthand minimum over global $V$ in~\cref{e:weak-duality-blowup-example} is indeed attained by $V\equiv1$. Local auxiliary functions can prove better bounds, but a similar argument shows that the sharp bound $\maxphi\le0$ for $X_0=\{0\}$ is possible only if $0\in X \subseteq (-\infty,0]$. That is, the upper limit of $X$ must coincide with the boundary of the basin of attraction of the semistable point at 0. In more complicated systems it may not be possible to locate $X$ so precisely. In such cases, if global auxiliary functions do not give sharp bounds, local ones might not either, at least for spacetime sets $\Omega$ that one can identify in practice. \markendexample\end{example} \begin{example} \label{ex:ex-infeasible-bounded-phi} \belowpdfbookmark{Example~\ref{ex:ex-infeasible-bounded-phi}}{bookmark:infeasible-bounded-phi} In some cases, global auxiliary functions can fail to exist even if $\maxphi$ is finite. Again consider the ODE \cref{e:xdot=x2} from \cref{ex:ex-1d-unbounded-trajectories} with $\T=[0,\infty)$ and a single initial condition $X_0=\{x_0\}$, but now consider the quantity % \begin{equation} \label{e:phi-no-global-V} \Phi(t,x) = x^2 {\rm e}^x. \end{equation} % Recalling that $x(t)$ approaches zero if $x_0\le0$ and blows up otherwise, we find % \begin{equation} \label{e:maxphi-example-x^2-exponential} \maxphi_\infty = \begin{cases} 4 \, {\rm e}^{-2}, &\phantom{-2< }\,\,x_0\leq -2,\\ x_0^2 \, {\rm e}^{x_0}, &-2<x_0\leq 0, \\ \infty, & \phantom{-2<}\,\, x_0 > 0. \end{cases} \end{equation} % For auxiliary functions satisfying~(\ref{e:V-conditions}a,b) globally on $\Omega=[0,\infty)\times\R$, $\V(\Omega)$ must be empty when $x_0>0$ since $\maxphi_\infty=\infty$. However, $\V(\Omega)$ is empty also when $x_0\le0$, despite $\maxphi_\infty$ being finite. This is because any global $V$ satisfying~(\ref{e:V-conditions}a,b) must be nonincreasing for trajectories starting at all $y\in\R$, not only for initial conditions in the set of interest $X_0$. 
In particular, % \begin{equation} \label{e:empty-V-contradiction} V(0,y) \geq V\!\left[ t, x(t;0,y) \right] \geq \Phi\!\left[ t, x(t;0,y) \right] = x(t;0,y)^2 \,{\rm e}^{x(t;0,y)} \end{equation} % for all $y\in\R$ and all $t\ge0$, where the second inequality follows from~\cref{e:cond2}. No $V$ that is continuous on $[0,\infty)\times \R$ can satisfy~\cref{e:empty-V-contradiction} because, for each $y>0$, the rightmost expression becomes infinite as $t$ approaches the blowup time $1/y$. Thus, $\V(\Omega)$ is empty. Sharp bounds on finite $\maxphi$ become possible with local rather than global auxiliary functions, much as in \cref{ex:ex-1d-unbounded-trajectories}. Since $\maxphi$ is finite only when $X_0\subseteq(-\infty,0]$, and trajectories starting from any such $X_0$ stay within $X=(-\infty,0]$, conditions (\ref{e:V-conditions}a,b) can be enforced locally on $\Omega =[0,\infty) \times X$. As in \cref{ex:ex-1d-unbounded-trajectories}, it is crucial that $X$ contains no points outside the basin of the semistable equilibrium at the origin. A local $V$ giving sharp bounds is % \begin{equation} V(t,x) = \begin{cases} 4 \, {\rm e}^{-2}, &x\leq -2,\\ x^2 \, {\rm e}^{x}, &x>-2. \end{cases} \end{equation} % At each $x_0\le 0$ this $V$ is equal to the value~\cref{e:maxphi-example-x^2-exponential} of $\maxphi_\infty$ for the single trajectory starting at $x_0$. Thus, this $V$ gives a sharp bound on $\maxphi_\infty$ for every possible initial set $X_0\subseteq(-\infty,0]$. \markendexample\end{example} \subsection{Sharpness of optimal bounds} \label{ss:sharpness} The best bounds on $\maxphi$ provable using auxiliary functions are often but not always sharp. \Cref{ex:ex-1d-unbounded-trajectories,ex:ex-infeasible-bounded-phi} above show that the upper bound~\cref{e:weak-duality} can be strict, at least for infinite time horizons and global auxiliary functions. For finite time horizons and local auxiliary functions, on the other hand, arguments in~\cite{Lewis1980} prove that \cref{e:weak-duality} is an equality provided trajectories remain in a compact set over the finite time interval of interest. \Cref{ss:finite-time-horizon} states this result and gives an explicit counterexample for infinite time horizons. \Cref{ss:discontinuous-afs} explains why sharp bounds are always possible if one allows $V$ to be discontinuous, a fact which is useful for theory but not for explicitly bounding quantities in particular systems. \subsubsection{Sharp bounds for ODEs with finite time horizon} \label{ss:finite-time-horizon} Local auxiliary functions can produce arbitrarily sharp bounds on $\maxphi_T$ with finite time horizon $T$ for well-posed ODEs, provided the initial set $X_0$ is compact and trajectories that start from it remain inside a compact set $X$ up to time $T$. Precisely, Theorem 2.1 and equation (5.3) in~\cite{Lewis1980} imply the following result. \begin{theorem}[\cite{Lewis1980}] \label{th:strong-duality} Let $\dot{x} = F(t,x)$ be an ODE with $F$ locally Lipschitz in both arguments. Given $\Phi:\R \times \R^n \to \R$ continuous, an initial time $t_0$, a finite time interval $\T = [t_0,T]$, and a compact set of initial conditions $X_0$, define $\maxphi_T$ as in~\cref{e:maxphi}.
Assume that: % \begin{enumerate}[({A}.1)] \item All trajectories starting from $X_0$ at time $t_0$ remain in a compact set $X$ for $t \in \T$; \item There exist a time $t_1 > T$ and a bounded open neighborhood $Y$ of $X$ such that, for all initial points $(s,y) \in [t_0,t_1] \times Y$, a unique trajectory $x(t \given s,y)$ exists for all $t \in [s,t_1]$. \end{enumerate} Then, letting $\V(\Omega)$ denote the set of differentiable auxiliary functions that satisfy~(\ref{e:V-conditions}a,b) on the compact set $\Omega := \T \times X$, % \begin{equation} \maxphi_T = \adjustlimits \inf_{V \in \V(\Omega)}\sup_{x_0 \in X_0} V(t_0,x_0). \label{e:strong-duality} \end{equation} \end{theorem} {In \cref{s:direct-proof-strong-duality} we give an alternative proof of this theorem that uses mollification to construct near-optimal $V$. This construction does not yield explicit bounds on $\maxphi_T$ for particular ODEs because it invokes trajectories, which generally are not known.} Both the original proof in~\cite{Lewis1980} and our proof rely on assumptions (A.1) and (A.2) to ensure that trajectories starting in a neighborhood of $X$ remain bounded past the time horizon $T$ and are regular in the sense that the map $(s,y) \mapsto x(t \given s,y)$ is locally Lipschitz on $[t_0,t_1] \times Y$. Regularity over a spacetime set slightly larger than $\Omega$ is used to construct smooth uniform approximations to certain functions on $\Omega$ via mollification. However, the assumptions are not necessary for the equality~\cref{e:strong-duality} to hold. For instance, the example in \cref{app:sharp-bounds-ex-x2} violates assumption (A.1) when $x_0 > 0 $ and $T=1/x_0$, yet the $V$ in~\cref{e:V-example-blowup-solutions} implies sharp bounds on $\maxphi_T$. It is an open challenge to weaken the assumptions of \cref{th:strong-duality}. {With infinite time horizons, for instance, auxiliary functions give sharp bounds in some examples but not others. Sharp bounds for an infinite time horizon are illustrated in \cref{app:sharp-bounds-ex-x2}. In the next example, on the other hand, there exists a set $X$ such that infinite-time analogues of assumptions (A.1) and (A.2) hold, yet differentiable local auxiliary functions cannot give sharp bounds on~$\maxphi_\infty$.} \begin{example} \label{ex:strong-duality-failure} \belowpdfbookmark{Example~\ref{ex:strong-duality-failure}}{bookmark:strong-duality-failure} Consider the one-dimensional ODE % \begin{equation} \label{e:xdot=x2-x3} \dot{x}=x^2-x^3, \end{equation} % which has two equilibria: the semistable point $x_s = 0$ and the attractor $x_a = 1$. Although no explicit analytical solution is available, trajectories exist for all times. As $t\to\infty$, they approach $x_s$ if $x_0\le0$ and approach $x_a$ if $x_0>0$. We let % \begin{equation} \Phi(x)=4x(1-x) \end{equation} % and seek upper bounds on $\maxphi_\infty$ for initial conditions in the set $X_0=[-1,0]$. All trajectories starting in $X_0$ approach $x_s$ from below, so % \begin{equation} \maxphi_\infty = \sup_{\substack{x_0 \in X_0\\[2pt]t\in[t_0,\infty)}}\Phi[x(t;x_0)] = 0. \end{equation} % Trajectories with initial conditions in $X_0=[-1,0]$ remain there, so the smallest $X$ we could choose is $X=X_0$. With this choice, $V\equiv 0$ gives a sharp upper bound. However, suppose we choose $X = [-1,1]$, which is the smallest connected set that is globally attracting and contains $X_0$. 
For this $X$, assumptions analogous to (A.1) and (A.2) in \cref{th:strong-duality} hold on the infinite time interval $[0,\infty)$, yet any upper bound on $\maxphi_\infty=0$ provable with differentiable local $V$ cannot be smaller than 1. Indeed, any such $V$ must be continuous at $(t,x)=(0,0)$, and arguing as in \cref{ex:ex-1d-unbounded-trajectories} shows that $V(0,0) \geq 1$, so any $V$ subject to~(\ref{e:V-conditions}a,b) satisfies % \begin{equation} \max_{x \in [-1,0]} V(0,x) \geq 1. \end{equation} Thus, with $X=[-1,1]$, any bound implied by~\cref{e:weak-duality} is no smaller than 1, as claimed above. \markendexample\end{example} The inability of differentiable auxiliary functions to produce sharp bounds in \cref{ex:ex-1d-unbounded-trajectories,ex:strong-duality-failure} is due to the map $x_0 \mapsto x(t \given 0,x_0)$ from initial conditions to trajectories not being locally Lipschitz near the semistable point $x_s=0$. Because the time horizon is infinite, a fixed distance from $x_s$ is eventually reached by trajectories starting arbitrarily close to $x_s$. This does not happen when the time horizon is finite. We cannot say whether the strong duality result of \cref{th:strong-duality} applies with an infinite time horizon when the map $x_0 \mapsto x(t \given 0,x_0)$ is Lipschitz; both the original proof in~\cite{Lewis1980} and our alternative in \cref{s:direct-proof-strong-duality} rely on the time interval $\T$ being compact. \subsubsection{Nondifferentiable auxiliary functions} \label{ss:discontinuous-afs} One way to guarantee that optimization over $V$ gives sharp bounds on $\maxphi$, regardless of whether the time horizon is finite or infinite, is to weaken the local sufficient condition~(\ref{e:V-conditions}a,b) by removing the requirement that $V$ is differentiable. Since the Lie derivative $\mathcal L V$ may not be defined in this case, condition~\cref{e:cond1} must be replaced with the direct constraint that $V$ does not increase along trajectories, \begin{equation} \label{e:cond1-discontinuous} V[s+\tau,x(s+\tau \given s, y)] \leq V(s,y) \quad \forall \tau \geq 0 \text{ and } (s,y) \in \Omega. \end{equation} Slight modification of the argument leading to~\cref{e:weak-duality-incomplete} then proves \begin{equation} \label{e:weak-duality-discontinuous} \maxphi_\infty \leq \min_{\subalign{V:\,&\cref{e:cond2},\\&\cref{e:cond1-discontinuous}}} \, \sup_{x_0 \in X_0} V(t_0,x_0). \end{equation} Condition~\cref{e:cond1-discontinuous} cannot be checked when trajectories are not known exactly.\footnote{For systems with discrete-time dynamics, on the other hand, discontinuous $V$ may be practically useful. This work focuses on continuous-time dynamics, but the convex bounding framework of \cref{ss:framework} readily extends to maps $x_{n+1} = F(n,x_{n})$ when the continuous-time decay condition~\cref{e:cond1} is replaced by the discrete version of~\cref{e:cond1-discontinuous}, namely that $V[n+1,F(n,x_{n})] \leq V(n,x_n)$ for all $n \in \mathbb{N}$ and $x_n \in \X$. This can be checked directly without knowing trajectories. In addition, the computational methods described in \cref{s:sos-optimization} can be applied with minor modifications to finite-dimensional polynomial maps.} Differentiability of $V$ therefore is crucial to find explicit bounds for particular systems because the Lie derivative $\mathcal L V$ gives a way to check that $V$ is nonincreasing without knowing trajectories.
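For the discrete-time dynamics mentioned in the footnote, the decay condition can indeed be checked directly. A minimal sketch (our own toy example, assuming the Python library \texttt{sympy}) for the map $x_{n+1}=x_n/2$ with $V(x)=\Phi(x)=x^2$, so that condition~\cref{e:cond2} holds with equality:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', real=True)
F = x/2                               # toy polynomial map x_{n+1} = x_n/2
V = x**2                              # candidate V; here Phi = V

decay = sp.expand(V.subs(x, F) - V)   # V(F(x)) - V(x)
print(decay)                          # -> -3*x**2/4, nonpositive for all x
\end{verbatim}
Since $V[F(x)]-V(x)=-\tfrac34x^2\le0$ everywhere, $V$ is nonincreasing along all orbits and $\sup_n\Phi(x_n)\le V(x_0)=x_0^2$ for every initial condition. In continuous time no such direct check is available, which is precisely why the differentiable framework above works with $\mathcal LV$ instead.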
For theoretical purposes, on the other hand, nondifferentiable $V$ are useful because \begin{equation} \label{e:value-function} V^*(s,y) := \sup_{t \geq s} \Phi[t, x(t \given s, y)] \end{equation} is optimal and attains equality in~\cref{e:weak-duality-discontinuous}, meaning \begin{equation} \label{e:strong-duality-discontinuous} \maxphi_\infty = \min_{\subalign{V:\,&\cref{e:cond2},\\&\cref{e:cond1-discontinuous}}} \, \sup_{x_0 \in X_0} V(t_0,x_0) = \sup_{x_0\in X_0}V^*(t_0,x_0). \end{equation} This $V^*$ is discontinuous in general because of the maximization over time. It follows directly from the definition of $\maxphi_\infty$ that $V^*$ satisfies~\cref{e:cond2} globally and gives a sharp bound when substituted into~\cref{e:strong-duality-discontinuous}. To see that~\cref{e:cond1-discontinuous} holds, observe that the trajectory starting from $y$ at time $s$ is the same as that starting from $x(s+\tau\given s,y)$ at time $s+\tau$. Then, since $\tau \geq 0$, \begin{align} V^*[s+\tau,x(s+\tau \given s, y)] &= \sup_{t \geq s+\tau} \Phi\{t, x[t \given s+\tau, x(s+\tau \given s, y) ]\} \\ &= \sup_{t \geq s+\tau} \Phi[t, x(t \given s, y)] \notag \\ &\leq \sup_{t \geq s} \Phi[t, x(t \given s, y)] \notag \\ &= V^*(s,y). \notag \end{align} \Cref{ex:dicontinuous-af} below gives $V^*$ in a case where trajectories are known. \begin{example} \label{ex:dicontinuous-af} \belowpdfbookmark{Example~\ref{ex:dicontinuous-af}}{bookmark:dicontinuous-af} Recall \cref{ex:ex-1d-unbounded-trajectories}, which shows that differentiable global auxiliary functions cannot give sharp bounds for the ODE~\cref{e:xdot=x2} with $\Phi$ as in~\cref{e:phi-ode-blowup} and the single initial condition $X_0=\{0\}$. For the auxiliary function % \begin{equation} V(t,x) = \begin{cases} 0, & \phantom{0< }\,\,x \leq 0,\\%[0.5ex] 1, & 0 < x \leq\tfrac12,\\%[.5ex] \displaystyle\frac{4x}{1+4 x^2}, &\phantom{0<}\,\,x>\tfrac12, \end{cases} \end{equation} which is discontinuous at $x=0$, explicit ODE solutions confirm that $V$ satisfies the nonincreasing condition~\cref{e:cond1-discontinuous}. This $V$ implies sharp bounds on $\maxphi_\infty$ for all sets $X_0$ of initial conditions, and in fact it is exactly the optimal $V^*$ defined by~\cref{e:value-function}. \markendexample\end{example} When trajectories are not known explicitly, the $V^*$ defined by~\cref{e:value-function} cannot be used to find explicit bounds, but it can still be useful. For instance, in \cref{s:direct-proof-strong-duality} we prove \cref{th:strong-duality} by showing that $V^*$ can be approximated with differentiable $V$. Moreover, $V^*$ has arisen in various contexts. One field in which $V^*$ arises is optimal control theory. Using ideas from dynamic programming for optimal stopping problems (see, e.g., section III.4.2 in~\cite{Bardi1997}) one can show that if $V^*$ is bounded and uniformly continuous on $\Omega$, then it is exactly the so-called value function for problem~\cref{e:maxphi} and is the unique viscosity solution to its corresponding Hamilton--Jacobi--Bellman complementarity system. This system consists of the auxiliary function constraints~{(\ref{e:V-conditions}a,b)} and the condition \begin{equation} \mathcal{L}V(t,x)[\Phi(t,x) - V(t,x)] = 0 \quad\forall (t,x) \in \Omega. \label{e:hjb-complementarity} \end{equation} The auxiliary function framework studied in this work therefore can be seen as a relaxation of the Hamilton--Jacobi--Bellman system that results from dropping~\cref{e:hjb-complementarity}. 
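Because trajectories of~\cref{e:xdot=x2} are known in closed form, the optimal $V^*$ of \cref{ex:dicontinuous-af} can also be checked by brute force. The sketch below (our own, assuming NumPy; the horizon truncation and grid resolution are arbitrary) approximates $V^*(0,x_0)=\sup_{t\ge0}\Phi[x(t\given0,x_0)]$ directly from the explicit solution $x(t)=x_0/(1-x_0t)$.
\begin{verbatim}
import numpy as np

def phi(x):
    return 4*x / (1 + 4*x**2)

def V_star(x0, t_max=1.0e6, n=200001):
    # for x0 > 0 the solution blows up at t = 1/x0, so stop just before
    t_end = min(t_max, 0.999/x0) if x0 > 0 else t_max
    ts = np.linspace(0.0, t_end, n)
    return phi(x0 / (1 - x0*ts)).max()

for x0 in [-1.0, 0.0, 0.3, 2.0]:
    print(x0, V_star(x0))
# -> approx 0, 0, 1, and 8/17, matching the three branches of V* above
\end{verbatim}
The printed values agree with the piecewise formula for $V^*$ in \cref{ex:dicontinuous-af}: zero for $x_0\le0$, one for $0<x_0\le\tfrac12$, and $4x_0/(1+4x_0^2)$ for $x_0>\tfrac12$.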
A second connection between $V^*$ and existing literature occurs in the particular case of linear dynamics on a Hilbert space, as explained in the following example. \begin{example} Let $X$ be a Hilbert space with inner product $\langle\cdot,\cdot \rangle$. Consider the autonomous \emph{linear} dynamical system $\dot{x} = A x$ with initial condition $x(0)=x_0$, where $A$ is a closed and densely defined linear operator, not necessarily bounded, that generates a strongly continuous semigroup $\{S_t\}_{t \geq 0}$. Trajectories satisfy $x(t) = S_t\,x_0$, so $S_t$ is the flow map. Suppose $S_t$ is compact for each $t>0$. In various linear systems of this type, one is interested in the maximum possible amplification of the norm $\|x\| = \sqrt{\langle x,x\rangle}$, which in {the} present framework means that $\Phi(x)=\|x\|$ with the initial set $X_0=\{x_0\in X:\,\|x_0\|=1\}$. In fluid mechanics, for instance, such problems have been studied to understand linear mechanisms by which perturbations are amplified (see, e.g.,~\cite{Trefethen1993}). With the above choices,~\cref{e:value-function} and~\cref{e:strong-duality-discontinuous} reduce to the well-known result % \begin{equation} \Phi^*_\infty = \adjustlimits \sup_{\|x_0\|=1} \sup_{t\geq 0} \, \Phi(S_t\,x_0) = \adjustlimits \sup_{t\geq 0} \sup_{\|x_0\|=1}\, \sqrt{\langle S_t\,x_0, S_t\,x_0 \rangle} = \sup_{t \geq 0} \, \sigma_{\rm max}(S_t), \end{equation} % where $\sigma_{\rm max}(S_t)$ denotes the maximum singular value of $S_t$. We stress, however, that {the} general bounding framework {of \cref{ss:framework}} does not require an explicit flow map and applies also to nonlinear systems. \markendexample\end{example} \section{Optimal trajectories} \label{s:optimal-trajectories} So far we have presented a framework for bounding the magnitudes of extreme events without finding the extremal trajectories themselves. The latter is much harder in general, partly due to the non-convexity of searching over initial conditions. However, auxiliary functions producing bounds on $\maxphi$ do give some information about optimal trajectories. Specifically, sublevel sets of any auxiliary function define regions of state space in which optimal and near-optimal trajectories must spend a certain fraction of time prior to the extreme event. A similar connection has been found between trajectories that maximize infinite-time averages and auxiliary functions that give bounds on these averages~\cite{Tobasco2018,Korda2018a}. The following discussion applies to both global and local auxiliary functions with either finite or infinite time horizons. The simpler case of exactly optimal auxiliary functions is addressed in \cref{s:optimal-V}, followed by the general case in \cref{s:suboptimal-V}. \subsection{Optimal auxiliary functions} \label{s:optimal-V} Suppose for now that the optimal bound~\cref{e:weak-duality-incomplete} is sharp and is attained by some $V^*$, in which case \begin{equation} \label{e:optimal-af-definition} \sup_{x_0 \in X_0} V^*(t_0,x_0) = \maxphi. \end{equation} Let $x_0^*\in X_0$ be an initial condition leading to an optimal trajectory, which attains the maximum value $\maxphi$ at some time $t^*$. 
To determine the value of $V^*$ on an optimal trajectory, note that the same reasoning leading to~\cref{e:weak-duality-incomplete} yields \begin{align} \label{e:Vopt-inequalities} \maxphi &= \Phi[t^*,x(t^* \given t_0,x_0^*)] \\ &\leq V^*(t_0,x_0^*) + \int_{t_0}^{t^*} \mathcal{L}V^*[\xi, x(\xi \given t_0,x_0^*)] \dxi \notag \\ &\leq \sup_{x_0 \in X_0} V^*(t_0,x_0) + \int_{t_0}^{t^*} \mathcal{L}V^*[\xi, x(\xi \given t_0,x_0^*)] \dxi \notag \\ &= \maxphi + \int_{t_0}^{t^*} \mathcal{L}V^*[\xi, x(\xi \given t_0,x_0^*)] \dxi \notag \\ &\leq \maxphi. \notag \end{align} The above inequalities must be equalities and $\mathcal{L}V^* \leq 0$, so $\mathcal{L}V^*\equiv0$ and $V^*\equiv\maxphi$ along an optimal trajectory up to time $t^*$. These constant values of $\mathcal{L}V^*$ and $V^*$ can be used to define sets in which optimal trajectories must lie: \begin{align} \label{e:R0} \mathcal{R}_0 &:= \left\{(t,x)\in\Omega :\, \mathcal{L}V^*(t,x) = 0 \right\}, \\ \label{e:S0} \mathcal{S}_0 &:= \left\{(t,x)\in\Omega:\, V^*(t,x) = \sup_{x_0 \in X_0} V^*(t_0,x_0) \right\}, \end{align} where we have used \cref{e:optimal-af-definition} in defining $\mathcal{S}_0$. The intersection $\mathcal{S}_0 \cap \mathcal{R}_0$ contains the graph of each optimal trajectory until the last time that trajectory attains the maximum value $\maxphi$. In general, $\mathcal{S}_0 \cap \mathcal{R}_0$ may also contain points not on any optimal trajectory. \subsection{General auxiliary functions} \label{s:suboptimal-V} Consider an auxiliary function $V$ and an initial condition $x_0$ that are a near-optimal pair, meaning that an upper bound on $\maxphi$ implied by $V$ and a lower bound implied by the trajectory starting from $x_0$ differ by no more than~$\delta$. That is, calling the upper bound $\lambda$, \begin{equation} \label{e:delta-suboptimal-V} \lambda-\delta\le\sup_{t\in\mathcal{T}}\Phi[t,x(t;t_0,x_0)] \le \maxphi \le \sup_{x_0 \in X_0} V(t_0,x_0) \leq \lambda. \end{equation} The upper bound $\lambda$ might be larger than $\sup_{x \in X_0} V(t_0,x)$ if the latter cannot be computed exactly, and the lower bound $\lambda-\delta$ might be smaller than $\sup_{t\in\mathcal{T}}\Phi[t,x(t;t_0,x_0)]$ if the trajectory starting from $x_0$ is only partly known. Let $t^*$ denote the latest time during the interval $\mathcal{T}$ when $\Phi$ along the trajectory starting at $x_0$ attains or exceeds the value $\lambda-\delta$. The constraints~(\ref{e:V-conditions}a,b) require $V$ to decay along trajectories and bound $\Phi$ pointwise, so \begin{equation} \lambda-\delta \leq V[t^*,x(t^* \given t_0,x_0)] \leq V[t,x(t\given t_0,x_0)] \leq V(t_0,x_0) \leq \sup_{x \in X_0} V(t_0,x) \leq \lambda \end{equation} for all $t\in[t_0,t^*]$. The above inequalities imply that the trajectory starting at $x_0$ satisfies \begin{equation} 0 \leq \lambda - V[t, x(t\given t_0,x_0)] \leq \delta \end{equation} up to time $t^*$, so its graph must be contained in the set \begin{equation} \label{e:Sdelta} \mathcal{S}_{\delta} := \left\{ (t,x) \in\Omega :\, 0 \leq \lambda - V(t, x) \leq \delta \right\}, \end{equation} which extends to suboptimal $V$ the definition \cref{e:S0} of $\mathcal{S}_0$ for optimal $V^*$. The definition \cref{e:R0} of $\mathcal{R}_0$ also can be extended to suboptimal $V$, but the resulting sets are guaranteed to contain optimal and near-optimal trajectories only for a certain amount of time.
When $V$ satisfies~\cref{e:delta-suboptimal-V}, an argument similar to~\cref{e:Vopt-inequalities} shows that \begin{equation} \maxphi \leq \maxphi + \delta + \int_{t_0}^{t^*} \mathcal{L}V[\xi, x(\xi \given t_0,x_0)] \dxi, \end{equation} and therefore \begin{equation} -\int_{t_0}^{t^*} \mathcal{L}V[\xi, x(\xi \given t_0,x_0)] \dxi \leq \delta. \end{equation} Since $\mathcal{L}V\le0$, the above condition can be combined with Chebyshev's inequality (cf.\ \S VI.10 in~\cite{Knapp2005basic}) to estimate, for any $\varepsilon>0$, the total time during $[t_0,t^*]$ when $\mathcal{L}V<-\varepsilon$. Letting $\Theta_\varepsilon$ denote this total time and letting $\mathbbm{1}_A$ denote the indicator function of a set $A$, we find \begin{equation} \Theta_{\varepsilon} :=\int_{t_0}^{t^*} \mathbbm{1}_{ \{\xi:\,\mathcal{L}V[\xi, x(\xi \given t_0,x_0)] < -\varepsilon \} } \dxi \leq -\frac{1}{\varepsilon} \int_{t_0}^{t^*} \mathcal{L}V[\xi, x(\xi \given t_0,x_0)] \dxi \leq \frac{\delta}{\varepsilon}. \end{equation} In other words, a trajectory on which $\Phi\ge \lambda-\delta$ at some time $t^*$ cannot leave the set \begin{equation} \label{e:Repsilon} \mathcal{R}_\varepsilon := \left\{ (t,x)\in\Omega :\, -\varepsilon \leq \mathcal{L}V(t,x) \leq 0 \right\} \end{equation} for longer than $\delta/\varepsilon$ time units during the interval $[t_0,t^*]$. This statement is most useful when the upper bound $\maxphi\le\lambda$ implied by $V$ is close to sharp, so there exist trajectories where $\Phi$ attains values $\lambda-\delta$ with small $\delta$. Then one may take $\varepsilon$ small enough for $\mathcal{R}_\varepsilon$ to exclude much of state space, while also having it be meaningful that near-optimal trajectories cannot leave $\mathcal{R}_\varepsilon$ for longer than $\delta/\varepsilon$. The computational construction of $\mathcal S_{\delta}$ and $\mathcal{R}_\varepsilon$ for a polynomial ODE is illustrated by \cref{ex:sos-2d-example} in the next section. \section{Computing bounds for ODEs using SOS optimization} \label{s:sos-optimization} The optimization of auxiliary functions and their corresponding bounds is prohibitively difficult in many cases, even by numerical methods. However, computations often are tractable when the system~\cref{e:system} is an ODE with polynomial righthand side $F:\R \times \R^n \to \R^n$, the observable $\Phi$ is polynomial, and the set of initial conditions $X_0$ is a basic semialgebraic set: \begin{equation} \label{e:X0-semialg} X_0 := \{ x \in \R^n :\, f_1(x)\geq 0,\,\ldots, f_p(x) \geq 0, \,g_1(x)= 0,\,\ldots, g_q(x)=0 \} \end{equation} for given polynomials $f_1,\,\ldots,\,f_p$ and $g_1,\ldots,\,g_q$. The set $\Omega \subset \R \times \R^n$ in which the graphs of trajectories remain over the time interval $\mathcal{T}$ is assumed to be basic semialgebraic as well: \begin{equation} \label{e:omega-semialg} \Omega := \{ (t,x) \in \R \times \R^n :\,h_1(t,x)\geq 0,\,\ldots, h_r(t,x) \geq 0, \,\ell_1(t,x)= 0,\,\ldots, \ell_s(t,x)=0 \} \end{equation} for given polynomials $h_1,\,\ldots,\,h_r$ and $\ell_1,\ldots,\,\ell_s$. To construct global auxiliary functions with state space $\R^n$, the set $\Omega$ can be specified by a single inequality: $h_1(t,x):=t-t_0\ge0$ or $h_1(t,x):=(t-t_0)(T-t)\ge0$ for infinite or finite time horizons, respectively. To construct local auxiliary functions, more inequalities or equalities must be added to define a smaller $\Omega$.
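As one concrete instance of this encoding (anticipating \cref{ex:sos-2d-example} below), a circle of initial conditions together with a global finite-horizon $\Omega$ corresponds to taking $p=0$, $q=1$, $r=1$, $s=0$ with \begin{equation} g_1(x) = x_1^2 + x_2^2 - \tfrac{1}{4}, \qquad h_1(t,x) = (t-t_0)(T-t), \end{equation} so that $X_0=\{x\in\R^2:\,\|x\|_2^2=0.25\}$ and $\Omega=[t_0,T]\times\R^2$.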
For any integer $d$, let $\R_{d}[t,x]$ and $\R_{d}[x]$ denote the vector spaces of real polynomials of degree $d$ or smaller in the variables $(t,x)$ and $x$, respectively. Restricting the optimization over differentiable auxiliary functions in~\cref{e:weak-duality} to polynomials in $\R_{d}[t,x]$ gives \begin{equation} \label{e:weak-duality-polynomial} \maxphi \leq \inf_{\substack{V \in \R_{d}[t,x]\\\text{s.t. (\ref{e:V-conditions}a,b)}} } \sup_{x_0 \in X_0} V(t_0, x_0). \end{equation} Recalling that the supremum over $X_0$ is the smallest upper bound $\lambda$ on that set, and substituting expression~\cref{e:LV-odes} for $\mathcal{L}V$ in the ODE case into~\cref{e:cond1}, we can express the righthand side of~\cref{e:weak-duality-polynomial} as a constrained minimization over $V$ and $\lambda$: \begin{align} \label{e:sos-opt-partial} \maxphi \leq \inf_{\substack{V \in \R_{d}[t,x]\\\lambda \in \R}} \, \{ \lambda :\; -\partial_t V(t,x) - F(t,x) \cdot \nabla_x V(t,x) &\geq 0 \text{ on } \Omega, \\[-1.35\fsize] V(t,x) - \Phi(t,x) &\geq 0 \text{ on } \Omega, \notag \\ \lambda-V(t_0,x) &\geq 0 \text{ on } X_0 \}.\notag \end{align} Under the assumptions outlined above, the three constraints on $V$ and $\lambda$ are polynomial inequalities on basic semialgebraic sets. Checking such constraints is NP-hard in general~\cite{Murty1987}, so a common strategy is to replace them with stronger but more tractable constraints. Here we require that the polynomials in~\cref{e:sos-opt-partial} admit weighted sum-of-squares (WSOS) decompositions, which can be searched for computationally by solving~SDPs. These WSOS constraints imply that the inequalities in~\cref{e:sos-opt-partial} hold on $\Omega$ or $X_0$ but not necessarily outside these sets. To define the relevant WSOS decompositions, let $\Sigma_{\mu}[t,x]$ and $\Sigma_{\mu}[x]$ be the cones of SOS polynomials of degrees up to $\mu$ in the variables $(t,x)$ and $x$, respectively. That is, a polynomial $\sigma\in\R_\mu[x]$ belongs to $\Sigma_\mu[x]$ if and only if there exists a finite family of polynomials $q_1,\,\ldots,\,q_k \in \R_{\lfloor \mu/2\rfloor}[x]$ such that $\sigma = \sum_{i=1}^k q_i^2$. For each integer $\mu$ that is no smaller than the highest polynomial degree appearing in the definition \cref{e:X0-semialg} of $X_0$, the set of degree-$\mu$ WSOS polynomials associated with $X_0$ is \begin{align} \Lambda_\mu := \Big\{ \sigma_0 + \sum_{i=1}^p f_i \sigma_i + \sum_{i=1}^q g_i \rho_i :\; \sigma_0 &\in \Sigma_\mu[x], \\[-1.5ex] \sigma_i &\in \Sigma_{\mu-\deg(f_i)}[x], \;i=1,\,\ldots,\,p\notag \\%[-1.5ex] \rho_i &\in \R_{\mu-\deg(g_i)}[x], \;i=1,\,\ldots,\,q \,\Big\}. \notag \end{align} In words, WSOS polynomials associated with $X_0$ can be written as a weighted sum of polynomials, where the weights are $\{1,f_1,\ldots,f_p,g_1,\ldots,g_q\}$ and the polynomials weighted by $\{1,f_1,\ldots,f_p\}$ are SOS. Every SOS polynomial is globally nonnegative, and it is WSOS with respect to any $X_0$ since all terms in the WSOS decomposition aside from $\sigma_0$ can be zero. On the other hand, WSOS polynomials need not be SOS.
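To make the SOS membership test concrete, the following minimal sketch (our illustration, not part of the computations reported in this paper) searches for a positive semidefinite Gram matrix certifying that a fixed univariate quartic is SOS; it assumes the Python packages \texttt{numpy} and \texttt{cvxpy}, with the SCS solver, are available. \begin{verbatim}
import numpy as np
import cvxpy as cp

# Check whether p(x) = x^4 + 2x^3 + 3x^2 + 2x + 1 is SOS by searching
# for a PSD Gram matrix Q with p(x) = z^T Q z, where z = [1, x, x^2].
coeffs = {0: 1.0, 1: 2.0, 2: 3.0, 3: 2.0, 4: 1.0}
Q = cp.Variable((3, 3), symmetric=True)
cons = [Q >> 0]
for k, c in coeffs.items():
    # coefficient of x^k in z^T Q z is the sum of Q[i, j] over i + j = k
    cons.append(sum(Q[i, j] for i in range(3)
                    for j in range(3) if i + j == k) == c)
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
# Feasibility means p is SOS; recover the q_i from a factorization Q = L^T L.
L = np.linalg.cholesky(Q.value + 1e-8 * np.eye(3)).T
print(L)  # each row holds the coefficients of one q_i in the basis z
\end{verbatim} Since $p(x)=(x^2+x+1)^2$ here, the SDP is feasible and each row of $L$ gives the coefficients of one $q_i$ in the basis $z=(1,x,x^2)$.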
Analogously to $\Lambda_\mu$, the set of degree-$\mu$ WSOS polynomials associated with $\Omega$ is \begin{align} \Gamma_\mu := \Big\{ \sigma_0 + \sum_{i=1}^r h_i \sigma_i + \sum_{i=1}^s \ell_i \rho_i :\; \sigma_0 &\in \Sigma_\mu[t,x], \\[-1.5ex] \sigma_i &\in \Sigma_{\mu-\deg(h_i)}[t,x], \;i=1,\,\ldots,\,r\notag \\%[-1ex] \rho_i &\in \R_{\mu-\deg(\ell_i)}[t,x], \;i=1,\,\ldots,\,s \,\Big\}.\notag \end{align} If a polynomial belongs to $\Gamma_\mu$ or $\Lambda_\mu$, then it is nonnegative on $\Omega$ or $X_0$, respectively. (The converse is false beyond a few special cases~\cite{Hilbert1888}.) We can strengthen the inequality constraints on $V$ in~\cref{e:sos-opt-partial} by requiring WSOS representations instead of nonnegativity. This gives \begin{align} \label{e:sos-opt} \maxphi \leq \lambda^*_d &:= \inf_{\substack{V \in \R_{d}[t,x]\\\lambda \in \R}} \, \{ \lambda :\; -\partial_t V - F \cdot \nabla_{x}V \in \Gamma_{d-1+\deg(F)}, \\[-1.35\fsize] &\hspace{131pt}V - \Phi \in \Gamma_{d}, \notag \\ &\hspace{108pt} \lambda-V(t_0,\cdot) \in \Lambda_d \}. \notag \end{align} For each integer $d$, the righthand side is a finite-dimensional optimization problem with WSOS constraints that are linear in the decision variables---the scalar $\lambda$ and the coefficients of the polynomial $V$. It is well known that such problems can be reformulated as SDPs (e.g., Section 2.4 in~\cite{Lasserre2015}). Such SDPs can be solved numerically in polynomial time, barring problems with numerical conditioning. Open-source software is available to assist both with the reformulation of WSOS optimizations as SDPs and with the solution of the latter.\footnote{Most modeling toolboxes for polynomial optimization, including the ones used in this work, do not natively support WSOS constraints. However, these can be implemented using standard SOS constraints. For instance, the WSOS constraint $P \in \Gamma_\mu$ can be implemented as the SOS constraint $P - \sum_{i=1}^r h_i \sigma_i - \sum_{i=1}^s \ell_i \rho_i \in \Sigma_\mu[t,x]$, along with the SOS constraints $\sigma_i \in \Sigma_{\mu - \deg(h_i)}[t,x]$ for $i=1,\ldots,r$. This formulation, known as the generalized S-procedure~\cite{Tan2006,Fantuzzi2016siads}, introduces more decision variables than the direct WSOS approach of~\cite[Section 2.4]{Lasserre2015}. The additional variables may lead to larger computations, but they can improve numerical conditioning by giving more freedom for the rescaling that is done within SDP solvers.} The SOS computations in \cref{ex:nonautonomous-example-sos,ex:sos-2d-example,ss:vdp}, and in \cref{app:iterative-procedure}, were set up in MATLAB using {YALMIP}~\cite{Lofberg2004,Lofberg2009} or a customized version of {SPOT\sc{less}}.\footnote{\href{https://github.com/aeroimperial-optimization/aeroimperial-spotless}{https://github.com/aeroimperial-optimization/aeroimperial-spotless}} The resulting SDPs were solved with the interior-point solver {MOSEK}\ v.8~\cite{mosek} except in \cref{ss:vdp}, where the SDP was solved in multiple precision arithmetic with {SDPA-GMP}\ v.7.1.3~\cite{sdpagmp}. The bounds $\lambda^*_d$ found by solving~\cref{e:sos-opt} numerically form a nonincreasing sequence as the degree $d$ of $V$ is raised. These bounds appear to become sharp in various cases, including \cref{ex:nonautonomous-example-sos} above and \cref{ex:sos-2d-example} below. We cannot say whether such convergence occurs in all cases, even when auxiliary functions arbitrarily close to optimality are known to exist.
This is due to our restriction to polynomial $V$ and use of WSOS constraints, which are sufficient but not necessary for nonnegativity. However, if the sets $X_0$ and $\Omega$ are both compact and there exists a differentiable $V$ attaining equality in~\cref{e:weak-duality}, then the following theorem guarantees that bounds from SOS computations become sharp as the polynomial degree is raised. The proof is a standard argument in SOS optimization and relies on a result known as Putinar's Positivstellensatz~\cite[Lemma 4.1]{Putinar1993}, which guarantees the existence of WSOS representations for strictly positive polynomials; details can be found in Section 2.4 of~\cite{Lasserre2015}. \begin{theorem} \label{th:sos-convergence} Let $\Omega$ and $X_0$ be compact semialgebraic sets. Assume the definitions of $\Omega$ and $X_0$ include inequalities $C_1-t^2 - \|x\|_2^2\ge0$ and $C_2-\|x\|_2^2\ge0$ for some $C_1$ and $C_2$, respectively, which can always be made true by adding inequalities that do not change the specified sets. Let $\lambda_d^*$ be the bound from the optimization~\cref{e:sos-opt}. If differentiable auxiliary functions give arbitrarily sharp bounds~\cref{e:strong-duality} on $\maxphi_T$, then $\lambda_d^* \to \maxphi_T$ as $d \to \infty$. \end{theorem} \begin{proof} Assume that the semialgebraic definitions of $\Omega$ and $X_0$ include inequalities of the form $C_1-t^2 - \|x\|_2^2\ge0$ and $C_2- \|x\|_2^2\ge0$, respectively. If not, these inequalities can be added with $C_1$ and $C_2$ large enough to not change which points lie in $\Omega$ and $X_0$ since both sets are compact. Then, $C_1-t^2 - \|x\|_2^2 \in \Gamma_\mu$ and $C_2-\|x\|_2^2 \in \Lambda_\mu$ for all integers $\mu$.\footnote{\Cref{th:sos-convergence} holds also when the semialgebraic definitions of $\Omega$ and $X_0$ satisfy Assumption 2.14 in~\cite[Section 2.4]{Lasserre2015}, which is a slightly weaker but more technical condition implying the inclusions $C_1-t^2 - \|x\|_2^2 \in \Gamma_\mu$ and $C_2-\|x\|_2^2 \in \Lambda_\mu$ for all sufficiently large integers $\mu$.} To prove that $\lambda_d^* \to \maxphi_T$ as $d \to \infty$, we establish the equivalent claim that, for each $\varepsilon>0$, there exists an integer $d$ such that $\lambda_d^* \leq \maxphi_T + \varepsilon$. Choose $\gamma>0$ such that % \begin{equation} \label{e:sos-duality-gamma} \gamma < \frac{2T \varepsilon}{5T-t_0}. \end{equation} % By assumption there exists an auxiliary function $W\in C^1(\Omega)$, not generally a polynomial, such that % \begin{equation} \label{e:suboptimal-condition-sf-th2} W(t_0,x_0) \leq \maxphi_T + \gamma \quad\text{on } X_0. \end{equation} % Since $\Omega$ is compact, polynomials are dense in $C^1(\Omega)$ (cf.\ Theorem 1.1.2 in~\cite{Llavona1986}). That is, for each $\delta>0$ there exists a polynomial $P$ such that $\|W-P\|_{C^1(\Omega)} \leq \delta$, where $\|\cdot\|_{C^k(\Omega)}$ denotes the usual norm on $C^k(\Omega)$---the sum of the $L^\infty$ norms of all derivatives up to order $k$. Fix such a $P$ with % \begin{equation} \label{e:sos-duality-delta} \delta < \frac{\gamma}{\max\left\{ 2,2T,2T\Vert F_1\Vert_{C^0(\Omega)},\ldots,2T\Vert F_n\Vert_{C^0(\Omega)} \right\}}. \end{equation} % By definition $\Omega$ contains the initial set $\{t_0\}\times X_0$, so $\abs{W(t_0,\cdot)-P(t_0,\cdot)} < \delta$ uniformly on $X_0$. We define the polynomial auxiliary function % \begin{equation} V(t,x) = P(t,x) + \gamma\left( 1- \frac{t}{2T}\right). 
\end{equation} % With $\delta$ as in~\cref{e:sos-duality-delta}, $\gamma$ as in~\cref{e:sos-duality-gamma}, and $W$ satisfying~\cref{e:suboptimal-condition-sf-th2}, elementary estimates show that % \begin{subequations} \label{e:strict-inequalities-V} \begin{align} -\partial_t V - F \cdot \nabla_{x}V &> 0 \quad\text{on } \Omega,\\ V - \Phi &> 0 \quad\text{on } \Omega,\\ \maxphi_T + \varepsilon -V(t_0,\cdot) &> 0 \quad\text{on } X_0. \label{e:strict-inequalities-V-c} \end{align} \end{subequations} The inequalities~(\ref{e:strict-inequalities-V}a--c) are strict. Since $C_1-t^2 - \|x\|_2^2 \in \Gamma_\mu$ and $C_2-\|x\|_2^2 \in \Lambda_\mu$ for all integers $\mu$ by assumption, a straightforward corollary of Putinar's Positivstellensatz~\cite[Lemma 4.1]{Putinar1993} guarantees that inequalities~(\ref{e:strict-inequalities-V}a--c) can be proved with WSOS certificates. Precisely, there exists an integer $\mu'$ such that the polynomials in~(\ref{e:strict-inequalities-V}a,b) belong to $\Gamma_{\mu'}$, and the polynomial in~\cref{e:strict-inequalities-V-c} belongs to $\Lambda_{\mu'}$. We now set $d = \max\{\deg(V),\mu'\}$ and observe that $V$ is feasible for the righthand problem in~\cref{e:sos-opt} with $\lambda=\maxphi_T+\varepsilon$ because $\Gamma_{\mu'} \subseteq \Gamma_d$, $\Lambda_{\mu'} \subseteq \Lambda_d$, and $V \in \R_d[t,x]$. This proves the claim that $\lambda_d^* \leq \maxphi_T+\varepsilon$. \end{proof} The computational cost of solving WSOS optimization problems grows quickly as $d$ is raised. For instance, suppose the polynomials $f_1,\,\ldots,\,f_p$ and $h_1,\,\ldots,\,h_r$ all have the same degree $\omega$, and let $d_F:=d-1+\deg(F)$. Then, the time for standard primal-dual interior-point methods scales as $\mathcal{O}( L_1^{6.5} + (p+r)^{1.5} L_2^{6.5})$, where $L_1 = \binom{n+\lfloor d_F/2 \rfloor}{n}$ and $L_2 = \binom{n+\lfloor (d-\omega)/2 \rfloor}{n}$; see~\cite{Papp2019} and references therein for further details. \Cref{app:iterative-procedure} describes a way to improve bounds iteratively without raising $d$, but the improvement is small in the example tested. Poor computational scaling with increasing $d$ can be partly mitigated if symmetries of optimal $V$ can be anticipated and enforced in advance, leading to smaller SDPs. When the differential equations, the observable $\Phi$, and the sets $\Omega$ and $X_0$ all are invariant under a symmetry transformation, then the optimal bound is unchanged if the symmetry is imposed also on $V$ and the weights $\sigma_i$ and $\rho_i$. The next proposition formalizes these observations; its proof is a straightforward adaptation of a similar result in Appendix A of~\cite{Goluskin2019}, so we do not report it. \begin{proposition} \label{th:symmetry-reduction} Let $A \in \R^{n \times n}$ be an invertible matrix such that $A^k$ is the identity for some integer $k$. Assume that $F(t,A x) = A F(t,x)$, $\Phi$ is $A$-invariant in the sense that $\Phi(t,A x)= \Phi(t,x)$, and all polynomials defining $\Omega$ and $X_0$ are $A$-invariant also. If $V \in \V(\Omega)$ gives a bound $\maxphi \leq \lambda$, then there exists $\widehat{V}\in \V(\Omega)$ that is $A$-invariant and proves the same bound. Moreover, if the pair $(V,\lambda)$ satisfies the WSOS constraints in~\cref{e:sos-opt}, then so does the pair $(\widehat{V},\lambda)$ and there exist WSOS decompositions with $A$-invariant weights $\sigma_i$, $\rho_i$. \end{proposition} (In the notation of the proposition, a natural choice is the group average $\widehat{V}(t,x) := \frac{1}{k}\sum_{j=0}^{k-1} V(t,A^j x)$, which inherits the constraints~(\ref{e:V-conditions}a,b) from $V$ by linearity, equivariance of $F$, and invariance of $\Phi$, $\Omega$, and $X_0$, and which proves the same bound.) We conclude this section with three computational examples.
The first two demonstrate that SOS optimization can give extremely good bounds on both $\maxphi_T$ and $\maxphi_\infty$ in practice, even when the assumptions of \cref{th:strong-duality,th:sos-convergence} do not hold. The first example also illustrates the approximation of optimal trajectories described in \cref{s:optimal-trajectories}. The third example, on the other hand, reveals a potential pitfall of SOS optimization applied to bounding $\maxphi_\infty$ for systems with periodic orbits: infeasible problems may appear to be solved successfully due to unavoidably finite tolerances in SDP solvers. \begin{example} \label{ex:sos-2d-example} \belowpdfbookmark{Example~\ref{ex:sos-2d-example}}{bookmark:sos-2d-example} Consider the nonlinear autonomous ODE system % \begin{equation} \label{e:ex-nonnormal-2d} \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0.2 x_1 + x_2 - x_2(x_1^2 + x_2^2)\\ -0.4 x_2 + x_1(x_1^2 + x_2^2) \end{bmatrix}, \end{equation} % which is symmetric under $x \mapsto -x$. As shown in \cref{f:sos-2d-example-phase-portrait}(a), the system has a saddle point at the origin and a symmetry-related pair of attracting equilibria. Let $X_0 = \{x: \|x\|_2^2=0.25\}$. Aside from two points on the stable manifold of the origin, all points in $X_0$ produce trajectories that eventually spiral outwards towards the attractors, as shown in \cref{f:sos-2d-example-phase-portrait}(b). \begin{figure} \centering \includegraphics[scale=1]{./phase_portrait_unstable_R5}\\[-1ex] \begin{tikzpicture}[overlay] \node[fill=white] at (-2.6,-0.025) {\footnotesize(a)}; \node[fill=white] at (2.65,-0.025) {\footnotesize(b)}; \end{tikzpicture} \caption{(a) Phase portrait of the ODE~\cref{e:ex-nonnormal-2d} showing the attracting equilibria (\mytriangle{black}), the saddle (\mycross{black}), and the saddle's unstable ({\color{matlabred}\solidrule}) and stable ({\color{matlabred}\dottedrule}) manifolds. (b) Sample trajectories starting from the circle $\|x\|_2^2=0.25$. Small circles mark the initial conditions. Colors indicate the maximum value of $\Phi=\|x\|_2^2$ along each trajectory.} \label{f:sos-2d-example-phase-portrait} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=1]{./bound_finite_T_test} \caption{ (a) Upper bounds on $\maxphi_T$ in \cref{ex:sos-2d-example} for various time horizons $T$, computed using auxiliary functions $V(t,x)$ with polynomial degrees 4~(\mysquare{matlabblue}), 6~(\mytriangle{matlabred}), and 8~(\mycross{matlabgreen}). Lower bounds on $\maxphi_T$ found by maximizing $\Phi[x(T\given 0, x_0)]$ over $x_0$ using adjoint optimization are also plotted~(\solidrule). (b) Detailed view of part of panel (a).} \label{f:sos-2d-example-finite-T-bounds} \end{figure} Using SOS optimization, we have computed upper bounds on the value of $\Phi(x)=\|x\|_2^2$ among all trajectories starting from $X_0$, for both finite and infinite time horizons. For simplicity we considered only global auxiliary functions, meaning we used $\Omega = [0,T] \times \R^2$ and $\Omega = [0,\infty) \times \R^2$ to solve~\cref{e:sos-opt} in the finite- and infinite-time cases, respectively. Since both choices of $\Omega$ and the set of initial conditions $X_0 = \{x: \|x\|_2^2=0.25\}$ share the same symmetry as~\cref{e:ex-nonnormal-2d}, we applied \cref{th:symmetry-reduction} to reduce the cost of solving~\cref{e:sos-opt}. Our implementation used {YALMIP}\ to reformulate~\cref{e:sos-opt} into an SDP, which was solved with {MOSEK}. 
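The lower bounds quoted in what follows come from adjoint optimization; as a cruder alternative, they can also be estimated by direct sampling. The following minimal sketch (our illustration, assuming the Python packages \texttt{numpy} and \texttt{scipy}) integrates~\cref{e:ex-nonnormal-2d} from a grid of initial conditions on $X_0$ and records the largest value of $\Phi$ observed. \begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # righthand side of the ODE system (e:ex-nonnormal-2d)
    r2 = x[0]**2 + x[1]**2
    return [0.2 * x[0] + x[1] - x[1] * r2, -0.4 * x[1] + x[0] * r2]

best = 0.0
for theta in np.linspace(0.0, np.pi, 100):  # half circle: x -> -x symmetry
    x0 = 0.5 * np.array([np.cos(theta), np.sin(theta)])  # X0 = {|x|^2 = 0.25}
    sol = solve_ivp(rhs, [0.0, 10.0], x0, max_step=0.01)
    best = max(best, float(np.max(np.sum(sol.y**2, axis=0))))
print(best)  # lower bound on max Phi over the sampled trajectories
\end{verbatim} Refining the grid of angles and the integration tolerances pushes this lower bound toward the value $\approx 1.9032$ reported below, though without the certainty that adjoint optimization has of having located a local maximum.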
\Cref{f:sos-2d-example-finite-T-bounds} shows upper bounds on $\maxphi_T$ that we computed for a range of time horizons $T$ by solving~\cref{e:sos-opt} with time-dependent polynomial $V$ of degrees $d=4$, 6, and 8. Also plotted in the figure are lower bounds on $\maxphi_T$, found by searching among initial conditions using adjoint optimization. The close agreement with our upper bounds shows that the degree-8 bounds are very close to sharp, and that adjoint optimization likely has found the globally optimal initial conditions. We find that $\maxphi_T = \maxphi_\infty \approx1.90318$ for all $T\ge3.2604$, indicating that $\Phi$ attains its maximum over all time at $t\approx 3.2604$. \begin{table}[t] \caption{Upper bounds on $\maxphi_T$ and $\maxphi_\infty$ for \cref{ex:sos-2d-example}, computed by solving~\cref{e:sos-opt}. The bounds for $\maxphi_T$ and $\maxphi_\infty$ were computed using time-dependent and time-independent $V$, respectively. Lower bounds are implied by the maximum of $\Phi$ on particular trajectories, whose initial conditions were found by adjoint optimization.} \label{t:sos-2d-example-bounds} \centering \small \begin{tabular}{c c c c c} \toprule & $\deg(V)$ & $T=2$ & $T=3$ & $T=\infty$ \\[2pt] \hline Upper bounds&4 & 1.948016 & 2.062952 & 2.194343 \\ &6 & 1.584910 & 1.918262 & 1.942396 \\ &8 & 1.584055 & 1.901411 & 1.931330 \\ &10 & " & 1.901409 & 1.916228 \\ &12 & " & " & 1.903525 \\ &14 & " & " & 1.903448 \\ &16 & " & " & 1.903185 \\ &18 & " & " & 1.903181 \\ \hline Lower bounds && 1.584055 & 1.901409 & 1.903178 \\ \bottomrule \end{tabular} \end{table} \Cref{t:sos-2d-example-bounds} reports upper bounds on $\maxphi_T$ computed with time-dependent $V$ up to degree 18 for $T=2$ and $T=3$, as well as upper bounds on $\maxphi_\infty$. The infinite-time implementation was restricted to time-independent polynomial $V(x)$ because polynomial dependence on $t$ gave no improvement in preliminary computations. This restriction lowers the computational cost because the first two WSOS constraints in~\cref{e:sos-opt} are independent of time and reduce to standard SOS constraints on $\mathbb{R}^2$. The resulting bounds are excellent for each $T$ reported in \cref{t:sos-2d-example-bounds}. As the degree of $V$ is raised, the upper bounds on $\maxphi$ apparently converge to the lower bounds produced by adjoint optimization. Note that this convergence is not guaranteed by \cref{th:strong-duality,th:sos-convergence} because the domain $\Omega$ is not compact. Finally, we illustrate how auxiliary functions can be used to localize optimal trajectories using the methods described in \cref{s:optimal-trajectories}. For a near-optimal $V$ we take the time-independent degree-$14$ auxiliary function that gives the upper bound $\lambda=1.903448$ reported in \cref{t:sos-2d-example-bounds}. Any trajectory that attains or exceeds a value $\lambda-\delta$ at some time $t^*$ must spend the interval $[t_0,t^*]$ inside the set $\mathcal{S}_\delta$ defined by \cref{e:Sdelta}. In the present example, the lower bound $1.903178\le\maxphi$ guarantees the existence of such trajectories for all $\delta\ge0.00027$. In general a good lower bound on $\maxphi$ may be lacking, in which case the sets $\mathcal{S}_\delta$ tell us where near-optimal trajectories must lie \emph{if} they exist. With this general situation in mind, \cref{f:sos-2d-example-bounding-sets}(a,b) show $\mathcal{S}_\delta$ for $\delta=0.01$ and $0.002$, along with the exactly optimal trajectories.
The $\mathcal{S}_\delta$ sets localize the optimal trajectories increasingly well as $\delta$ is lowered, although they contain other parts of state space also. \Cref{f:sos-2d-example-bounding-sets}(c) shows the sets $\mathcal{R}_\varepsilon$, defined by \cref{e:Repsilon}, for $\varepsilon=0.008$ and 0.004. Each trajectory coming within $\delta=0.002$ of the upper bound, for example, cannot leave these $\mathcal{R}_\varepsilon$ for longer than $\delta/\varepsilon=0.25$ and $0.5$ time units, respectively, prior to any time at which $\Phi\ge\lambda-\delta$. The same is true of the intersections of these sets with $\mathcal{S}_\delta$, which are shown in \cref{f:sos-2d-example-bounding-sets}(d). \begin{figure} \centering \vspace*{4ex} \includegraphics[scale=1]{./approximating_sets_unstable}\\ \begin{tikzpicture}[overlay] \node[fill=white] at (-4.75,-0.0) {\footnotesize(a)}; \node[fill=white] at (-1.35,-0.0) {\footnotesize(b)}; \node[fill=white] at (2.05,-0.0) {\footnotesize(c)}; \node[fill=white] at (5.4,-0.0) {\footnotesize(d)}; \draw[fill=matlabblue,draw=matlabblue] (-5.3,5.1) rectangle (-5,5.25); \draw[fill=matlaborange,draw=matlaborange] (-5.3,5.45) rectangle (-5,5.6); \node[anchor=west] at (-5,5.175) {\footnotesize$\mathcal{S}_{0.002}$}; \node[anchor=west] at (-5,5.525) {\footnotesize$\mathcal{S}_{0.01}$}; \draw[fill=matlabblue,draw=matlabblue] (-1.9,5.1) rectangle (-1.6,5.25); \draw[fill=matlaborange,draw=matlaborange] (-1.9,5.45) rectangle (-1.6,5.6); \node[anchor=west] at (-1.6,5.175) {\footnotesize$\mathcal{S}_{0.002}$}; \node[anchor=west] at (-1.6,5.525) {\footnotesize$\mathcal{S}_{0.01}$}; \draw[fill=matlabsafegreen,draw=matlabsafegreen] (1.4,5.1) rectangle (1.7,5.25); \draw[fill=matlabsafered,draw=matlabsafered] (1.4,5.45) rectangle (1.7,5.6); \node[anchor=west] at (1.7,5.175) {\footnotesize$\mathcal{R}_{0.004}$}; \node[anchor=west] at (1.7,5.525) {\footnotesize$\mathcal{R}_{0.008}$}; \draw[fill=matlabsafegreen,draw=matlabsafegreen] (4.2,5.1) rectangle (4.5,5.25); \draw[fill=matlabsafered,draw=matlabsafered] (4.2,5.45) rectangle (4.5,5.6); \node[anchor=west] at (4.5,5.175) {\footnotesize$\mathcal{S}_{0.002} \cap \mathcal{R}_{0.004}$}; \node[anchor=west] at (4.5,5.525) {\footnotesize$\mathcal{S}_{0.002} \cap \mathcal{R}_{0.008}$}; \end{tikzpicture} \\ \caption{Sets approximating the trajectories that attain $\maxphi_\infty$ for \cref{ex:sos-2d-example}: (a)~$\mathcal{S}_{0.01}$ and $\mathcal{S}_{0.002}$. (b)~Detail view of part of panel (a). (c)~$\mathcal{R}_{0.008}$ and $\mathcal{R}_{0.004}$. (d)~$\mathcal{S}_{0.002}\cap \mathcal{R}_{0.008}$ and $\mathcal{S}_{0.002} \cap \mathcal{R}_{0.004}$. All sets were computed using the same degree-$14$ polynomial $V(x)$ that yields the nearly sharp bounds in \cref{t:sos-2d-example-bounds}. Also plotted are the attracting equilibria (\mytriangle{black}), the set of initial conditions $X_0$ ({\color{black}\dashedrule}), the optimal initial conditions (\mycirc{black}), and the optimal trajectories before (\solidrule) and after (\dottedrule) the point at which $\maxphi_\infty$ is attained.} \label{f:sos-2d-example-bounding-sets} \end{figure} \markendexample\end{example} \begin{example} \label{ex:burgers-sos-example} \belowpdfbookmark{Example~\ref{ex:burgers-sos-example}}{bookmark:burgers-sos-example} Here we consider a $16$-dimensional ODE model obtained by projecting the Burgers equation~\cref{e:fractional-burgers} with ordinary diffusion ($\alpha=1$) onto modes $u_n(x) = \sqrt{2}\sin(2 n \pi x)$, $n=1,\,\ldots,\,16$.
In other words, we substitute the expansion $u(x,t) = \sum_{m=1}^{16} a_m(t) u_m(x)$ into~\cref{e:fractional-burgers} with $\alpha=1$ and integrate the result against each $u_n(x)$ to derive $16$ nonlinear coupled ODEs for the amplitudes $a_1(t),\,\ldots,\,a_{16}(t)$. This gives \begin{equation} \label{e:burgers-truncated-ode} \dot{a}_n = -\left(2 \pi n\right)^2 a_n + \sqrt{2} \pi n \left[ \sum_{m=1}^{16-n} a_m a_{m+n} - \frac12 \sum_{m=1}^{n-1}a_m a_{n-m} \right], \qquad n=1,\,\ldots,\,16. \end{equation} Let $a=(a_1,\,\ldots,\,a_{16})$ denote the state vector. Similarly to what is done for the PDE in \cref{ex:fractional-burgers}, we bound the projected enstrophy $\Phi(a) := 2 \pi^2 \sum_{n=1}^{16} n^2 a_n^2$ along trajectories with initial conditions in the set $X_0 = \{a \in \mathbb{R}^{16}\,:\, \Phi(a)=\Phi_0\}$, and we consider various values $\Phi_0$ of the initial enstrophy. We construct time-independent degree-$d$ polynomial $V$ of the form % \begin{equation} \label{e:burgers-ode-V} V(a) = c \|a\|_2^d + P_{d-1}(a), \end{equation} % where $d$ is even, $c$ is a tunable constant, and $P_{d-1}(a)$ is a tunable polynomial of degree $d-1$. Since the nonlinear terms in~\cref{e:burgers-truncated-ode} conserve $\|a\|_2^2$ and therefore contribute nothing to $\mathcal{L}$ applied to the leading $\|a\|_2^d$ term, $\mathcal{L}V$ has the same even leading degree as $V$, which is necessary for {(\ref{e:V-conditions}a,b)} to hold over the global spacetime set $\Omega = [0,\infty)\times \R^{16}$. We also construct local $V$ of the form~\cref{e:burgers-ode-V} by imposing {(\ref{e:V-conditions}a,b)} only on the smaller spacetime set $\Omega = [0,\infty) \times X$ with % \begin{equation} X := \left\{ a \in \R^{16}: \|a\|_2^2 \leq \frac{\Phi_0}{2\pi^2} \right\}. \end{equation} % All trajectories starting from $X_0$ remain in $X$ because~\cref{e:burgers-truncated-ode} implies $\ddt \|a\|_2^2 = -4 \Phi(a) \leq 0$, so $\|a\|_2^2$ is bounded by its initial value, and $\|a\|_2^2 \leq \frac{1}{2\pi^2} \Phi(a)$ pointwise. \begin{figure}[t] \centering \includegraphics[scale=0.9]{./truncated-burgers} \caption{Bounds on $\maxphi_\infty$ for~\cref{e:burgers-truncated-ode} computed with both global and local polynomial auxiliary functions $V$ of the form~\cref{e:burgers-ode-V} for $d=4$ (\mysquare{matlabpurple}~global, \mytriangle{matlabgreen}~local) and $d=6$ (\mycirc{matlabblue}~global, \mycross{matlabred}~local). Also plotted are lower bounds on $\maxphi_\infty$ obtained with adjoint optimization (\solidrule). All results are normalized by $\Phi_0^{3/2}$, the expected scaling at large~$\Phi_0$~\cite{Ayala2011}.} \label{f:burgers-bounds} \end{figure} \Cref{f:burgers-bounds} shows upper bounds on $\maxphi_\infty$ computed for $\Phi_0$ values spanning four orders of magnitude using both global and local $V$ of degrees 4 and 6. Also shown are lower bounds obtained using adjoint optimization. (Note that the 16-mode truncation~\cref{e:burgers-truncated-ode} accurately resolves the Burgers equation only in cases with $\Phi_0\lesssim2\cdot10^5$.) We used {SPOT\sc{less}}\ and {MOSEK}\ to solve~\cref{e:sos-opt} and applied \cref{th:symmetry-reduction} to exploit symmetry under the transformation $a_n \mapsto (-1)^{n} a_n$. At each $\Phi_0$ value, constructing quartic $V$ required approximately 60 seconds on 4 cores with 16GB of memory. Local quartic $V$ produce better bounds than global ones, with the former coming within 1\% of the lower bounds from adjoint optimization for $\Phi_0 \lesssim 8000$.
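For readers wishing to experiment with this system, a minimal sketch of the truncated dynamics~\cref{e:burgers-truncated-ode} and the observable $\Phi$ follows (our illustration, assuming the Python package \texttt{numpy}); lower bounds on $\maxphi_\infty$ can be estimated by integrating from points of $X_0$, although the adjoint optimization used for the reported results is not reproduced here. \begin{verbatim}
import numpy as np

N = 16  # number of retained Fourier sine modes

def rhs(t, a):
    # righthand side of (e:burgers-truncated-ode) for amplitudes a_1..a_N
    adot = np.empty(N)
    for n in range(1, N + 1):
        quad = sum(a[m - 1] * a[m + n - 1] for m in range(1, N - n + 1)) \
             - 0.5 * sum(a[m - 1] * a[n - m - 1] for m in range(1, n))
        adot[n - 1] = -(2 * np.pi * n)**2 * a[n - 1] \
                      + np.sqrt(2) * np.pi * n * quad
    return adot

def enstrophy(a):
    # Phi(a) = 2 pi^2 sum_n n^2 a_n^2
    n = np.arange(1, N + 1)
    return 2 * np.pi**2 * np.sum(n**2 * a**2)
\end{verbatim} A useful sanity check is the identity $\frac{d}{dt}\|a\|_2^2 = -4\Phi(a)$ noted above, which any numerical solution should satisfy up to integration error.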
The results improve significantly with sextic $V$: for all tested $\Phi_0$, the upper bounds produced by global and local sextic $V$ are within 9\% and 5\% of the adjoint optimization results, respectively. Constructing sextic $V$ at a single $\Phi_0$ value required 16 hours on a 12-core workstation with 48GB of memory, which is significantly more expensive than adjoint optimization. However, we stress that auxiliary functions yield \textit{upper} bounds on $\maxphi_\infty$, while adjoint optimization gives only \textit{lower} bounds on $\maxphi_\infty$, so the two approaches give different and complementary results. \markendexample\end{example} It is evident that SOS optimization can produce excellent bounds on extreme events given enough computational resources, but care must be taken to assess whether numerical results can be trusted. As observed already in the context of SOS optimization~\cite{Waki2012}, numerical SDP solvers can return solutions that appear to be correct but are provably not so. The next example shows that this issue can arise when bounding $\maxphi_\infty$ in systems with periodic orbits. \begin{example} \label{ss:vdp} \belowpdfbookmark{Example~\ref{ss:vdp}}{bookmark:vdp} Consider a scaled version of the van der Pol oscillator~\cite{VanderPol1926}, % \begin{equation} \label{e:vdp} \begin{bmatrix} \dot{x}_1\\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2\\ (1-9x_1^2)x_2-x_1 \end{bmatrix}, \end{equation} % which has a limit cycle attracting all trajectories except the unstable equilibrium at the origin (see \cref{f:vdp}). Let $\Phi = \|x\|_2^2$ be the observable of interest. We seek bounds on $\maxphi_\infty$ along trajectories starting from the circle $\|x\|_2^2 = 0.04$. All such trajectories approach the limit cycle from the inside, so $\maxphi_\infty$ coincides with the pointwise maximum of $\Phi$ on the limit cycle. Maximizing $\Phi$ numerically along the limit cycle yields $\maxphi_\infty \approx 0.889856$. \begin{figure} \centering \includegraphics[scale=1]{./vdp} \caption{Limit cycle ({\color{matlabred}\solidrule}) for the scaled van der Pol oscillator~\cref{e:vdp}. Also plotted are trajectories (\solidrule) with initial conditions (\mycirc{black}) on the circle $\|x\|_2^2=0.04$ ({\color{matlabgray}\dashedrule}).} \label{f:vdp} \end{figure} \begin{table} \caption{Parameters for {SDPA-GMP}\ used in \cref{ss:vdp} to produce an invalid degree-22 auxiliary function for the scaled van der Pol oscillator. A description of each parameter can be found in~\cite{sdpagmp}.} \label{t:sdpa-gmp-parameters} \centering \small \begin{tabular}{rl c rl c rl c rl} \toprule epsilonStar & $10^{-25}$ && betaStar & 0.1 && lowerBound & -$10^{25}$ && maxIteration & 200\\ epsilonDash & $10^{-25}$ && betaBar & 0.3 && upperBound & \phantom{-}$10^{25}$ && precision & 200\\ lambdaStar & $10^4$ && gammaStar & 0.7 && omegaStar & \phantom{-}2\\ \bottomrule \end{tabular} \end{table} We implemented~\cref{e:sos-opt} with {YALMIP}\ using a time-independent polynomial auxiliary function $V(x)$ of degree 22. To confirm that difficulties were not easily avoided by increasing precision, we solved the resulting SDP in multiple precision arithmetic using the solver {SDPA-GMP}\ v.7.1.3. The solver parameters we used are listed in \cref{t:sdpa-gmp-parameters} in order to ensure that our results are reproducible; see~\cite{sdpagmp} for the meaning of each parameter. The solver terminated successfully after 95 iterations, reporting no error and returning the upper bound $\maxphi_\infty \leq 0.956911$.
Although this bound is true, it reflects an invalid SOS solution because no time-independent polynomial $V$ of any degree can satisfy~\cref{e:cond1}. To see this, suppose that~\cref{e:cond1} holds, so $V$ cannot increase along trajectories of~\cref{e:vdp}. In particular, if $x(t)$ lies on the limit cycle and $\tau$ is the period, then for all $\alpha \in (0,1)$, % \begin{equation} V[x(t)] \geq V[x(t+\alpha \tau)] \geq V[x(t+\tau)] = V[x(t)]. \end{equation} % Thus, time-independent $V$ giving finite bounds on $\maxphi_\infty$ must be constant on the limit cycle. This is impossible if $V$ is polynomial because the limit cycle is not an algebraic curve~\cite{Odani1995}. There are two possible reasons why the SDP solver does not detect that the problem is infeasible despite the use of multiple precision. The first is that inevitable roundoff errors mean that our bound does not apply to~\cref{e:vdp}, but to a slightly perturbed system whose limit cycle \textit{is} an algebraic curve. The second possibility, which seems more likely, is that although no time-independent polynomial $V$ is feasible, there exists a feasible nonpolynomial $V$ that can be approximated accurately near the limit cycle by a degree-22 polynomial. In particular, the approximation error is smaller than the termination tolerances used by the solver, which therefore returns a solution that is not feasible but very nearly so. This interpretation is supported by the fact that {SDPA-GMP}\ issues a warning of infeasibility when its tolerances are tightened by lowering the values of parameters epsilonDash and epsilonStar to $10^{-30}$. \markendexample\end{example} \section{Extensions} \label{s:extensions} The framework for bounding extreme events presented in \cref{s:bounds-with-afs} can be extended in several ways. {Here we briefly summarize two extensions. Both are covered by the measure-theoretic approach of~\cite{Vinter1978,Vinter1978a,Lewis1980,Vinter1993}, but we give a more direct derivation.} The first extension applies when upper bounds are sought on the maximum of $\Phi$ at a fixed finite time $T$, rather than its maximum over the time interval $[0,T]$. Such bounds can be proved by relaxing inequality~\cref{e:cond2} to require that $V$ bounds $\Phi$ only at time $T$. A second extension lets extreme events be defined using integrals over trajectories in addition to instantaneous values. Precisely, suppose the quantity we want to bound from above is \begin{equation} \label{e:integral-cost} \sup_{\substack{x_0 \in X_0\\t \in \T}} \left\{\Phi[t,x(t;t_0,x_0)] + \int_{t_0}^t \Psi[\xi,x(\xi;t_0,x_0)] \dxi \right\} \end{equation} with chosen $\Phi$ and $\Psi$. One way to proceed is to augment the original dynamical system~\cref{e:system} with the scalar ODE $\dot{z} = \Psi(t,x)$, $z(t_0)=0$. Bounding~\cref{e:integral-cost} along trajectories of the original system is equivalent to bounding the maximum of $\Phi(t,x)+z$ pointwise in time along trajectories of the augmented system, and this can be done with the methods described in the previous sections. Another way to bound~\cref{e:integral-cost}, without introducing an extra ODE, is to replace condition~\cref{e:cond1} with \begin{equation} \label{e:condition-integral} \mathcal{L}V(t,x) + \Psi(t,x) \leq 0 \quad\forall (t,x) \in \Omega. 
\end{equation} A minor modification of the argument leading to~\cref{e:weak-duality} proves that \begin{equation} \label{e:bounds-integral} \sup_{\substack{x_0 \in X_0\\t \in \T}} \left\{\Phi[t,x(t;t_0,x_0)] + \int_{t_0}^t \Psi[\xi,x(\xi;t_0,x_0)] \dxi \right\} \leq \inf_{\subalign{V:\,&\cref{e:cond2}\\&\cref{e:condition-integral}}} \sup_{x_0 \in X_0} \, V(t_0,x_0). \end{equation} As in \cref{e:weak-duality}, the righthand minimization is a convex problem and can be tackled computationally using SOS optimization for polynomial ODEs when $\Phi$ and $\Psi$ are polynomial. Analogues of~\cref{th:strong-duality,th:sos-convergence} for~\cref{e:bounds-integral} hold if $\Psi$ is continuous. \section{Conclusions} \label{s:conclusion} We have {discussed} a convex framework for constructing \textit{a priori} bounds on extreme events in nonlinear dynamical systems governed by ODEs or PDEs. Precisely, we have {described} how to bound from above the maximum value $\maxphi$ of an observable $\Phi(t,x)$ over a given finite or infinite time interval, among all trajectories that start from a given initial set. This approach, {which is a particular case of general relaxation frameworks for optimal control and optimal stopping problems~\cite{Lewis1980,Cho2002}}, relies on the construction of auxiliary functions $V(t,x)$ that decay along trajectories and bound $\Phi$ pointwise from above. These constraints amount to the pointwise inequalities (\ref{e:V-conditions}a,b) in time and state space, which can be either imposed globally or imposed locally on any spacetime set that contains all trajectories of interest. Suitable global or local $V$ can be constructed without knowing any system trajectories, so $\maxphi$ can be bounded above even when trajectories are very complicated. We have given a range of ODE examples in which analytical or computational constructions give very good and sometimes sharp bounds. As a PDE example, we have proved analytical upper bounds on a quantity called fractional enstrophy for solutions to the one-dimensional Burgers equation with fractional diffusion. The convex minimization of upper bounds on $\maxphi$ over global or local auxiliary functions is dual to the non-convex maximization of $\Phi$ along trajectories. In the case of ODEs and local auxiliary functions, \cref{th:strong-duality}, {which is a corollary of Theorem 2.1 and equation (5.3) in~\cite{Lewis1980}, guarantees that this duality is strong when the time interval is finite and the ODE satisfies certain continuity and compactness assumptions.} This means that the infimum over bounds is equal to the maximum over trajectories, so there exist $V$ proving arbitrarily sharp bounds on $\maxphi$. Further, strong duality holds in several of our ODE examples to which the assumptions of \cref{th:strong-duality} do not apply, including formulations with global $V$ or infinite time horizons. {However, neither the proofs in~\cite{Lewis1980} nor our alternative proof in \cref{s:direct-proof-strong-duality} can be easily extended to these cases because they rely on compactness, and} we have given counterexamples to strong duality with infinite time horizon even when trajectories remain in a compact set. Better characterizing the dynamical systems for which strong duality holds remains an open challenge. Regardless of whether duality is weak or strong for a given dynamical system, constructing auxiliary functions that yield good bounds often demands ingenuity.
Fortunately, as described in \cref{s:sos-optimization}, computational methods of sum-of-squares (SOS) optimization can be applied in the case of polynomial ODEs with polynomial $\Phi$. Moreover, \cref{th:sos-convergence} guarantees that if strong duality and mild compactness assumptions hold, then bounds computed by solving the SOS optimization problem~\cref{e:sos-opt} become sharp as the polynomial degree of the auxiliary function $V$ is raised. In practice, computational cost can become prohibitive as either the dimension of the ODE system or the polynomial degree of $V$ increases, at least with the standard approach to SOS optimization wherein generic semidefinite programs are solved by second-order symmetric interior-point algorithms. For instance, given a 10-dimensional ODE system with no symmetries to exploit, the degree of $V$ is currently limited to about 12 on a large-memory computer. Larger problems may be tackled using specialized nonsymmetric interior-point~\cite{Papp2019} or first-order algorithms~\cite{Zheng2017csl, Zheng2018}. One also could replace the weighted SOS constraints in~\cref{e:sos-opt} with stronger constraints that may give more conservative bounds at less computational expense~\cite{AAAhmadi2015, AAAhmadi2019}. In the case of PDEs, the bounding framework of \cref{s:bounds-with-afs} can produce valuable bounds, as in \cref{ex:fractional-burgers}, but theoretical results and computational tools are {lacking. \Cref{th:strong-duality}, which guarantees arbitrarily sharp bounds for many ODEs, does not apply to PDEs, nor can we directly apply the computational methods of \cref{s:sos-optimization} that work well for polynomial ODEs.} On the theoretical side, guarantees that feasible auxiliary functions exist for PDEs would be of great interest, not least because bounds on certain extreme events can preclude loss of regularity. {Statements formally dual to results in~\cite{Cho2002} for optimal stopping problems would imply that near-optimal auxiliary functions exist for autonomous PDEs, at least when extreme events occur at finite time, but such statements have not yet been proved.} On the computational side, constructions of optimal $V$ for PDEs would be very valuable, both to guide rigorous analysis and to improve on conservative bounds proved by hand. Methods of SOS optimization can be applied to PDEs in two ways. The first is to approximate the PDE as an ODE system and bound the error this incurs, obtaining an ``uncertain'' ODE system to which standard SOS techniques can be applied~\cite{Goulart2012, Chernyshenko2014, Huang2015, Goluskin2019}. The second approach is to work directly with the PDE using either the integral inequality methods of~\cite{Valmorbida2015tac, Valmorbida2015intsostools, Valmorbida2015cdc} or the moment relaxation techniques of~\cite{Korda2018,Marx2018}. These strategies have been used to study PDE stability, time averages, and optimal control, but they are in relatively early development. They have not yet been applied to extreme events as studied here, although the method in~\cite{Korda2018} applies to extreme behavior at a fixed time and could be extended to time intervals. It remains to be seen whether any of these strategies can numerically optimize auxiliary functions for PDEs of interest at reasonable computational cost, but recent advances in optimization-based formulations and corresponding numerical algorithms give us hope that this will be possible in the near future. 
\currentpdfbookmark{Acknowledgements}{bookmark:acknowledgements} \section*{Acknowledgments} We are indebted to Andrew Wynn, Sergei Chernyshenko, Ian Tobasco, and Charles Doering, who offered many insightful comments on this work. We also thank the anonymous referees for comments that considerably improved the original version of this work.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Recently a very interesting relation between the Nekrasov partition function of $\CN=2$ conformally invariant $SU(2)$ gauge theory and the conformal block of the Liouville field theory was proposed \cite{AGT}. It seems that this is the first example of a precise mathematical relationship between quantum field theories defined in different space-time dimensions. There have been various attempts at checking this AGT relation at lower instanton numbers by direct evaluation of Liouville correlation functions \cite{MMS1, MMS2, IO, MMMmatrix, MS}. There have also been attempts at proving the relation by comparing the recursion relation satisfied by the descendants of the conformal blocks and Nekrasov's partition function \cite{Poghossian, HJS0, FL, HJS}. On the other hand, a Penner-type matrix model has been proposed to interpolate between the Liouville theory and gauge theory \cite{DV} and provide an explanation for the AGT relation. In a previous communication \cite{EM} we have studied this matrix model and also proposed models for asymptotically free theories obtained by decoupling some of the massive flavors. We have shown that the spectral curves of these matrix models reproduce those based on the M-theory construction and their free energies satisfy the scaling identities known in the $SU(2)$ Seiberg-Witten theory. (See also \cite{IMO, SWyllard} for the $A_{r}$ quiver matrix model.) In this paper we would like to evaluate the free energies of these matrix models in the large $N$ limit explicitly and show that they in fact exactly reproduce the amplitudes of $SU(2)$ Seiberg-Witten theory. In section 2 we first describe the general properties of matrix models. In section 3 we compute the free energies: we integrate the Seiberg-Witten differential of the matrix model and evaluate the filling fraction in terms of the parameters of the spectral curve. We then invert this relation and derive the free energy. We present the computation for $SU(2)$ gauge theory with two, three and four flavors and show that they all reproduce the amplitudes of Seiberg-Witten theory. In section 4 we discuss decoupling limits of some quiver gauge theories. Section 5 is devoted to conclusions and discussion. Note: in our convention the free energy of the matrix model $F_m$ is off by a factor of 4 from that of gauge theory. Thus we will check the agreement $4F_{m}=F_{gauge}$ throughout this paper. \section{$SU(2)$ gauge theories and matrix models} \label{sec:matrix} It has been proposed that the Nekrasov partition function for $\CN=2$, $SU(2)$ gauge theory with four flavors (summarized in appendix \ref{sec:Nekrasov}) coincides with the four-point conformal block of Liouville theory \cite{AGT}: \bea Z_{{\rm inst}}^{SU(2)} = Z_{{\rm CFT}} \equiv \left< V_{\tilde{m}_\infty} (\infty) V_{\tilde{m}_1}(1) V_{\tilde{m}_2}(q) V_{\tilde{m}_0}(0) \right>. \eea Here $V_{\tilde{m}}$ is the vertex operator, $Q = b + 1/b$, and the central charge of the Liouville theory is $c= 1 + 6Q^2$.
In order to relate the Liouville theory to the matrix model, we consider the Dotsenko-Fateev integral representation of the four-point conformal block in terms of the free field $\phi(z)$ \cite{DF}: \bea Z_{{\rm DF}} = \left< \left( \int d \lambda_I :e^{b \phi(\lambda_I)} : \right)^N V_{\tilde{m}_\infty} (\infty) V_{\tilde{m}_1}(1) V_{\tilde{m}_2}(q) V_{\tilde{m}_0}(0) \right>, \eea where the vertex operator $V_{\tilde{m}_i}(z_i)$ is given by $:e^{\tilde{m}_i \phi(z_i)}:$ and we have introduced the $N$-fold integration of screening operators. The OPE of the scalar field is given by $\phi(z) \phi(\omega) \sim -2 \log (z - \omega)$. The momentum conservation condition relates the external momenta and the number of integrals as $\sum_{i=0}^2 \tilde{m}_i + \tilde{m}_\infty + b N = Q$. We redefine the momenta as $\tilde{m}_i = \frac{i m_i}{2 g_s}$ for $i = 0, \infty$ and $\tilde{m}_i = \frac{i m_i}{2 g_s} + \frac{Q}{2}$ for $i = 1, 2$ \cite{AGT}. Then the above condition becomes \bea \sum_{i=0}^2 i m_i + i m_\infty + 2 b g_s N = 0. \eea As pointed out in \cite{MMM, KMMMP} and recently in \cite{DV} in the context of the AGT relation, the Dotsenko-Fateev representation may be identified as the $\beta$-deformation of a one-matrix integral \bea Z_{{\rm DF}} = q^{\frac{m_0 m_2}{2g_s^2}} (1-q)^{\frac{m_1 m_2}{2g_s^2}} \left( \prod_{I=1}^N \int d \lambda_I \right) \prod_{I < J} (\lambda_I - \lambda_J)^{-2b^2} e^{\frac{-ib}{g_s} \sum_I W(\lambda_I)}. \label{matrix} \eea In the case of $b=i$, the integrations over $\{\lambda_I,I=1,\cdots,N\}$ become an integral over a hermitian matrix $M$ with eigenvalues $\{\lambda_I\}$ and the action \bea W(M) = \sum_{i=0}^2 m_i \log (M - z_i), \hskip1cm z_0=0,\hskip3mm z_1=1,\hskip3mm z_2=q. \label{action4} \eea We identify the parameters $m_i$ with the mass parameters of the corresponding gauge theory. The identification of the parameter $b$ with Nekrasov's deformation parameters is given by \bea \epsilon_1 = - i b g_s, ~~~ \epsilon_2 = - \frac{i g_s}{b}. \eea In this paper, we focus on the $b=i$ case, i.e.\ the self-dual background $\epsilon_1 = - \epsilon_2 = g_s$. The momentum conservation condition then reduces to \bea \sum_{i=0}^2 m_i + m_\infty + 2 g_s N = 0. \label{massrelation4} \eea This matrix model is expected to reproduce the results of $SU(2)$ gauge theory with $N_f=4$. More precisely, as we will see below, the matrix integral together with the overall factor $(1-q)^{\frac{m_1 m_2}{2g_s^2}}$ in (\ref{matrix}) corresponds to the $SU(2)$ gauge theory. Note that the factor $(1-q)^{\frac{m_1 m_2}{2g_s^2}}$ is the inverse of the $U(1)$ factor discussed in \cite{AGT}. (See appendix \ref{sec:Nekrasov}.) Another point is that the Coulomb modulus $a$ of the gauge theory is identified as the filling fraction $g_s N_i$, where $N_i$ is the number of screening operators inserted into the same contour in the Dotsenko-Fateev representation. For the four-point conformal block we introduce $N_1$ and $N_2$. The overall condition (\ref{massrelation4}) reduces these two degrees of freedom to one, which corresponds to the Coulomb modulus of the $SU(2)$ theory. The parameters $m_i$ are the masses associated with the $SU(2)^4 (\subset SO(8))$ flavor symmetry. These are related to the masses of the hypermultiplets as \bea m_1 &=& \frac{1}{2} (\mu_1 + \mu_2), ~~~ m_2 = \frac{1}{2} (\mu_3 + \mu_4), \nonumber \\ m_\infty &=& \frac{1}{2} (\mu_1 - \mu_2), ~~~ m_0 = \frac{1}{2} (\mu_3 - \mu_4).
\label{massSU(2)} \eea The matrix models associated with gauge theories with $N_f=2, 3$ are obtained by taking the decoupling limit of heavy flavors \cite{EM}. By taking the limit $\mu_4 \rightarrow \infty$ while keeping $\mu_4 q = \Lambda_3$ fixed, the matrix model action becomes \bea W(z) = \mu_3 \log z + m_1 \log (z - 1) - \frac{\Lambda_{3}}{2 z} \label{action3} \eea with the following condition: \bea m_1 + m_\infty + \mu_3 + 2 g_s N = 0. \label{massrelation3} \eea The prefactor in front of the matrix integral (\ref{matrix}) reduces to $e^{- \frac{m_1 \Lambda_3}{4g_s^2} }$ in this limit. This is identified with the (inverse of the) $U(1)$ factor of the $N_f=3$ theory (see appendix \ref{subsec:Nf=2inst}). In order to obtain the $N_f=2$ matrix model, we further take the limit $\mu_2 \rightarrow \infty$ while keeping $\mu_2 \Lambda_3 = \Lambda_2^2$ fixed. The dynamical scale of this gauge theory is given by $\Lambda_2$. After rescaling $z \rightarrow \frac{\Lambda_3}{\Lambda_2} z$, the action (\ref{action3}) becomes \bea W(z) = \mu_3 \log z - \frac{\Lambda_{2}}{2} \left( z + \frac{1}{z} \right). \label{action2} \eea The mass relation reduces in this case to $ \mu_1 + \mu_3 + 2 g_s N = 0$. The prefactor becomes simply $e^{- \frac{\Lambda_2^2}{8g_s^2}}$. \section{Planar free energy and prepotential} In this section, we will evaluate the planar free energy of the matrix models introduced above. In \cite{EM}, we have shown that the free energy of these models satisfies the identities known in Seiberg-Witten theory \cite{Matone, STY, EY}. Here, we will evaluate the free energies explicitly and compare them with the instanton expansions of the prepotentials at lower orders. The computation is a bit simpler than in the Seiberg-Witten theory where both the $A$ and $B$ cycle integrals have to be computed \cite{KLT, IY}. Here we only have to compute the $A$ integral. We first consider the matrix model for $SU(2)$ gauge theory with $N_f=2$ in the next subsection. Then, we will analyze the cases of the $N_f = 3$ and $4$ theories in turn. \subsection{$SU(2)$ gauge theory with $N_f = 2$} \label{subsec:Nf=2} The matrix model action corresponding to the $SU(2)$ gauge theory with $N_f = 2$ is given by (\ref{action2}). For simplicity, we will omit the subscript $2$ of the dynamical scale $\Lambda_2$ below. There are two saddle points determined by the classical equation of motion: \bea W'(z) = \frac{\mu_3}{z} - \frac{\Lambda}{2} \left( 1 - \frac{1}{z^2} \right) = 0. \eea These lead to the two-cut spectral curve. The planar loop equation reads as usual \bea R(z) = - \frac{1}{2} \left( W'(z) - \sqrt{W'(z)^2 + f(z)} \right), \label{resolvent} \eea where the resolvent is defined by \begin{equation} R(z) = \langle \sum_I \frac{g_s}{z - \lambda_I} \rangle. \end{equation} The function $f$ is given by \begin{equation} f(z) = 4g_s \langle \sum_I \frac{W'(z) - W'(\lambda_I)}{z - \lambda_I} \rangle = \frac{c_1}{z} + \frac{c_2}{z^2}. \end{equation} The coefficients $c_1$ and $c_2$ are defined as \bea c_1 = - 4 g_s \left< \sum_I \left( \frac{\mu_3}{\lambda_I} + \frac{\Lambda}{2 \lambda_I^2} \right) \right> = - 2 g_s N \Lambda, ~~~ c_2 = - 2 g_s \left< \sum_I \frac{\Lambda}{\lambda_I} \right>. \eea In the formula for $c_1$ we have used the equations of motion $\langle \sum_I W'(\lambda_I) \rangle = 0$.
Then, the spectral curve $x^2 = (2 \langle R(z) \rangle + W'(z))^2 = W'(z)^2 + f(z)$ is given by \bea x^2 = \frac{\Lambda^2}{4 z^4} + \frac{\mu_3 \Lambda}{z^3} + \frac{1}{z^2} \left( \mu_3^2 + c_2 - \frac{\Lambda^2}{2} \right) + \frac{\mu_1 \Lambda}{z} + \frac{\Lambda^2}{4}. \label{spectralNf2} \eea This is similar to the curve obtained in \cite{GMN}. The differential one-form is identified with $\lambda_{m} = x dz$, which has double poles at $z=0$ and $\infty$ with residues $\mu_3$ and $\mu_1$. Note that the parameter $c_2$ corresponds to the variable $u$ in Seiberg-Witten theory. We evaluate the filling fraction as \bea g_s N_1 = \frac{1}{4 \pi i} \oint_{C_1} \lambda_{m}(c_2), \label{fillingfractionNf2} \eea where $C_1$ is a cycle around one of the cuts in the curve. This integral is identified with the Coulomb modulus $a$ in the gauge theory and we invert the above relation to solve for the unknown parameter $c_2$. Let us compute the free energy of our model defined by \bea e^{F_m/g_s^2} = \left( \prod_{I=1}^N \int d \lambda_I \right) \prod_{I < J} (\lambda_I - \lambda_J)^{2} e^{\frac{1}{g_s} \sum_I W(\lambda_I)}. \eea The starting point is the formula for the $\Lambda$ derivative: \bea \frac{\partial F_m}{\partial \Lambda} = - \frac{g_s}{2} \left< \sum_{I} \left( \frac{1}{\lambda_I} + \lambda_I \right) \right> = \frac{c_2}{4 \Lambda} - \frac{g_s}{2} \langle \sum_I \lambda_I \rangle. \eea The expectation value $\langle \sum_I \lambda_I \rangle = \langle \tr M \rangle$ in the second term can be determined by studying the large $z$ behavior of the resolvent: $R(z) = - \frac{1}{2} (W'(z) - x) \approx \frac{g_s N}{z} + \frac{g_s \langle \tr M \rangle}{z^2} + \ldots$, which gives \bea g_s \langle \tr M \rangle = - \frac{1}{2 \Lambda} (c_2 - \mu_1^2 + \mu_3^2). \eea Therefore, we obtain \bea \Lambda \frac{\partial F_m}{\partial \Lambda} = \frac{1}{4} (2 c_2 - \mu_1^2 + \mu_3^2). \label{freeNf2} \eea Our remaining task is to determine $c_2$ in terms of $g_s N_1$ by using (\ref{fillingfractionNf2}), and this leads to the explicit form of the free energy. To derive $c_2$, let us consider the derivative of (\ref{fillingfractionNf2}) with respect to $c_2$: \bea 4 \pi i \frac{\partial (g_s N_1)}{\partial c_2} = \oint_{C_1} \frac{1}{\Lambda} \frac{dz}{\sqrt{P_4(z)}}, \label{derivativefillingNf2} \eea where $P_4$ is the polynomial of degree 4: \bea P_4(z) = z^4 + \frac{4\mu_1}{\Lambda} z^3 + \frac{4}{\Lambda^2} ( \mu_3^2 + c_2 - \frac{\Lambda^2}{2}) z^2 + \frac{4\mu_3}{\Lambda} z + 1. \eea It is easy to transform this polynomial so that (\ref{derivativefillingNf2}) becomes the standard elliptic integral of the first kind. In the following, we set $A = \mu_3^2 + c_2 - \frac{\Lambda^2}{2}$ and express the result in terms of $A$. For simplicity, let us consider the equal mass case $\mu_1 = \mu_3 = m$ in what follows. In this case, by the transformation $z = \frac{t-1}{t+1}$ and rescaling of $t$, the integrand of the right hand side of (\ref{derivativefillingNf2}) can be brought to the standard form \bea \frac{\sqrt{2}}{\sqrt{S_+ (\Lambda^2 + 4m \Lambda + 2A)}} \frac{dt}{\sqrt{(1 - t^2)(1 - k^2 t^2)}}, \eea where $k^2 = S_-/S_+$ and \bea S_\pm = \frac{1}{\Lambda^2 + 4m \Lambda + 2A} \left( - 3 \Lambda^2 + 2A \pm \Lambda \sqrt{8 \Lambda^2 - 16A + 16m^2} \right).
\eea Then, we can identify the integral (\ref{derivativefillingNf2}) in terms of the hypergeometric function: \bea 4 \pi i \frac{\partial (g_s N_1)}{\partial A} &=& \frac{2 \sqrt{2}}{\sqrt{S_+ (\Lambda^2 + 4m \Lambda + 2A)}} \int_{1/k}^1 \frac{dt}{\sqrt{(1 - t^2)(1 - k^2 t^2)}} \nonumber \\ &=& \frac{\sqrt{2} \pi i}{\sqrt{S_+ (\Lambda^2 + 4m \Lambda + 2A)}} F(\frac{1}{2}, \frac{1}{2}, 1; 1 - k^2), \eea where we have used $\int_{1/k}^1 \frac{dt}{\sqrt{(1 - t^2)(1 - k^2 t^2)}}= iK'(k) = iK (k')$ with $k'^2 = 1 - k^2$. We express the right hand side as a small $\Lambda$ expansion, which corresponds to the instanton expansion in gauge theory. (Note that $k^2 = 1 + \CO(\Lambda)$.) After integrating over $A$, we obtain \bea 2 g_s N_1 &=& \sqrt{A} \Bigg( 1 - \frac{m^2}{4A^2} \Lambda^2 - \frac{A^2 - 6 A m^2 + 15 m^4}{64 A^4} \Lambda^4 - \frac{5 m^2 (3 A^2 - 14 A m^2 + 21 m^4)}{256 A^6} \Lambda^6 \nonumber \\ & & - \frac{15(A^4 - 28 A^3 m^2 + 294 A^2 m^4 - 924 A m^6 + 1001 m^8)}{16384 A^8} \Lambda^8 + \CO(\Lambda^{10}) \Bigg). \eea Then, we invert this equation and solve for $A$: \bea A &=& a^2 + \frac{m^2}{2a^2} \Lambda^2 + \frac{a^4 - 6 m^2 a^2 + 5 m^4}{32 a^6} \Lambda^4 + \frac{m^2 (5 a^4 - 14 m^2 a^2 + 9m^4)}{64 a^{10}} \Lambda^6 \nonumber \\ & & + \frac{5 a^8 - 252 m^2 a^6 + 1638m^4 a^4 - 2860 m^6 a^2 + 1469 m^8}{8192 a^{14}} \Lambda^8 + \CO(\Lambda^{10}), \eea where we have introduced $a = 2g_s N_1$. Finally, we substitute this into (\ref{freeNf2}) and integrate over $\Lambda$ to obtain \bea 4F_m &=& 2 \left( a^2 - m^2 \right) \log \Lambda + \frac{a^2 + m^2}{2 a^2} \Lambda^2 + \frac{a^4 - 6 m^2 a^2 + 5 m^4 }{64 a^6} \Lambda^4 + \frac{m^2 (5 a^4 - 14 m^2 a^2 + 9 m^4)}{192 a^{10}} \Lambda^6 \nonumber \\ & & ~~~~ + \frac{5 a^8 - 252m^2 a^6 + 1638m^4 a^4 - 2860 m^6 a^2 + 1469 m^8}{32768 a^{14}} \Lambda^8 + \CO(\Lambda^{10}). \eea This agrees with the $U(2)$ gauge theory prepotential with $\vec{a} = (a, -a)$ obtained from the Nekrasov partition function (\ref{prepotentialNf=2}) or from Seiberg-Witten theory \cite{IY}. (The first term is the one-loop part and the others are the instanton part.) Together with the prefactor $e^{- \frac{\Lambda_2^2}{8g_s^2}}$, we see that the full free energy is the same as that of the $SU(2)$ gauge theory. \subsection{$SU(2)$ gauge theory with $N_f = 3$} \label{subsec:Nf=3} Next, let us consider the matrix model corresponding to the gauge theory with $N_f = 3$. The matrix model action is given by (\ref{action3}). We will omit the subscript $3$ of the dynamical scale $\Lambda_3$ from now on. As in the previous subsection, there are two saddle points determined by the classical equation of motion. In the planar limit, the loop equation leads to the spectral curve $x(z)^2 = W'(z)^2 + f(z)$, where $f(z)$ is written as \bea f(z) = \frac{c_1}{z} + \frac{c_2}{z - 1} + \frac{c_3}{z^2}, \eea with coefficients \bea c_1 = - 4 g_s \left< \sum_I \left( \frac{\mu_3}{\lambda_I} + \frac{\Lambda}{2 \lambda_I^2} \right) \right>, ~~ c_2 = - 4 g_s \left< \sum_I \frac{m_1}{\lambda_I - 1} \right>, ~~ c_3 = - 2 g_s \left< \sum_I \frac{\Lambda}{\lambda_I} \right>. \label{f123} \eea We can easily see that $c_1 + c_2 = 0$ due to the equations of motion. The one form defined by $\lambda_m \equiv x(z) dz$ has a double pole at $z = 0$ and simple poles at $z = 1$ and $z = \infty$ with residues $\mu_3$, $m_1$ and $m_\infty$, respectively. The residue at $z = \infty$ gives a further constraint on the $c_i$: \bea c_2 + c_3 = m_\infty^2 - (\mu_3 + m_1)^2.
\eea This condition, together with the relation $c_1 + c_2 = 0$, leaves only one of the parameters independent. Let us choose $c_3$ as the independent one. It is then related to the filling fraction by the integral \bea 4 \pi i g_s N_1 = \oint_{C_1} \lambda_{m}(c_3). \label{fillingfractionNf3} \eea For completeness, let us write down here the explicit form of the curve $x^2 = \frac{P_4(z)}{4z^4(z-1)^2}$ with \bea P_4(z) &=& 4 m_\infty^2 z^4 + 4 ((\mu_3 + m_1)\Lambda + m_1^2 - \mu_3^2 - m_\infty^2 - c_3)z^3 \nonumber \\ & & + (\Lambda^2 - 8 \Lambda \mu_3 + 4 \mu_3^2 - 4 \Lambda m_1 + 4 c_3) z^2 - 2 \Lambda (\Lambda - 2 \mu_3)z + \Lambda^2. \eea It is convenient to introduce the notation $B$ as \bea B= c_3 - \mu_3 \Lambda + \mu_3^2. \eea The polynomial is then rewritten as \bea P_4(z) &=& 4 m_\infty^2 z^4 + 4 (\Lambda m_1 + m_1^2 - m_\infty^2 - B)z^3 + (\Lambda^2 - 4 \Lambda (\mu_3 + m_1) + 4B) z^2 \nonumber \\ & & - 2 \Lambda (\Lambda - 2 \mu_3)z + \Lambda^2. \eea Let us consider the free energy of this matrix model. From the definition, its derivative with respect to $\Lambda$ is written as \bea \frac{\partial F_m}{\partial \Lambda} = - \frac{g_s}{2} \left< \sum_I \frac{1}{\lambda_I} \right> = \frac{c_3}{4 \Lambda} = \frac{1}{4 \Lambda} (B + \mu_3 \Lambda - \mu_3^2). \label{freeenergyNf3} \eea In order to determine $B$, we take a derivative of (\ref{fillingfractionNf3}) with respect to $B$: \bea 4 \pi i \frac{\partial (g_s N_1)}{\partial B} = - \oint_{C_1} \frac{dz}{\sqrt{P_4(z)}}. \label{derivativeNf3} \eea For simplicity, we consider the case where $\mu_3 = m$ and $m_1 = m_\infty = 0$ in what follows. In this case, $P_4$ becomes a polynomial of degree 3: \bea P_3(z) = (z-1) (- 4 Bz^2 + (\Lambda^2 - 4 \Lambda m)z - \Lambda^2). \eea After a change of variable (first shifting $z \rightarrow z - p$ and then rescaling as $z = Q t$), we obtain \bea P_3(z) \rightarrow - 4 B Q^2 (1 + p) \times t (1 - t) (1 - k^2 t), \eea where \bea k^2 = \frac{Q}{1 + p}, ~~ p = \frac{1}{2} \left( - \frac{\Lambda}{4B} (\Lambda - 4m) + Q \right), ~~ Q = \frac{\Lambda}{4B} \sqrt{(\Lambda - 4m)^2 - 16 B}. \eea As a result, (\ref{derivativeNf3}) becomes \bea 4 \pi i \frac{\partial (g_s N_1)}{\partial B} &=& - \frac{1}{\sqrt{-B(1 + p)}} \int_0^1 \frac{dt}{\sqrt{t (1 - t) (1 - k^2 t)}} \nonumber \\ &=& - \frac{\pi}{\sqrt{-B(1 + p)}} F(\frac{1}{2}, \frac{1}{2}, 1; k^2). \eea By expanding the hypergeometric function and then integrating over $B$, we obtain \bea 2 g_s N_1 &=& \sqrt{B} \Bigg( 1 + \frac{m \Lambda}{4 B} - \frac{1}{64B^2} (B + 3m^2) \Lambda^2 + \frac{m}{256B^3} (5m^2 + B) \Lambda^3 \nonumber \\ & & - \frac{1}{16384 B^4}(3B^2 + 30m^2 B + 175 m^4) \Lambda^4 - \frac{m}{65536B^5}(9B^2 + 70m^2 B + 441m^4) \Lambda^5 \nonumber \\ & & - \frac{1}{1048576 B^6} (5 B^3 + 105m^2 B^2 + 735 m^4B + 4851 m^6) \Lambda^6 + \CO(\Lambda^7) \Bigg). \eea We invert this equation for $B$, \bea B &=& a^2 - \frac{m \Lambda}{2} + \frac{m^2 + a^2}{32a^2} \Lambda^2 + \frac{a^4 - 6a^2 m^2 + 5 m^4}{8192a^6} \Lambda^4 + \frac{m}{16384a^8} (9a^4 + 70m^2a^2 +441m^4) \Lambda^5 \nonumber \\ & & + \frac{m^2}{262144 a^{10}} (185a^4 + 1946m^2 a^2 + 15885 m^4) \Lambda^6 + \CO(\Lambda^7), \eea where we have defined $a = 2 g_s N_1$.
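Before substituting this inversion into the free energy, it can be checked order by order. The following sympy sketch (ours, for illustration only) substitutes the series for $B$ back into the expansion of $2 g_s N_1$; it should return $a$ up to $\CO(\Lambda^7)$ if the quoted coefficients are mutually consistent:
\begin{verbatim}
import sympy as sp

a, m, L, B = sp.symbols('a m Lambda B', positive=True)

# 2 g_s N_1 as a function of B (series quoted above)
n1 = sp.sqrt(B)*(1 + m*L/(4*B) - (B + 3*m**2)*L**2/(64*B**2)
     + m*(5*m**2 + B)*L**3/(256*B**3)
     - (3*B**2 + 30*m**2*B + 175*m**4)*L**4/(16384*B**4)
     - m*(9*B**2 + 70*m**2*B + 441*m**4)*L**5/(65536*B**5)
     - (5*B**3 + 105*m**2*B**2 + 735*m**4*B + 4851*m**6)*L**6/(1048576*B**6))

# the claimed inverse series B(a, Lambda)
Bser = (a**2 - m*L/2 + (m**2 + a**2)*L**2/(32*a**2)
        + (a**4 - 6*a**2*m**2 + 5*m**4)*L**4/(8192*a**6)
        + m*(9*a**4 + 70*m**2*a**2 + 441*m**4)*L**5/(16384*a**8)
        + m**2*(185*a**4 + 1946*m**2*a**2 + 15885*m**4)*L**6/(262144*a**10))

# consistency check: should print 0 (a nonzero residue at some order
# would signal a typo in one of the quoted coefficients)
diff = sp.series(n1.subs(B, Bser) - a, L, 0, 7).removeO()
print(sp.simplify(diff))
\end{verbatim}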
Finally, by substituting this into (\ref{freeenergyNf3}), we obtain \bea 4F_m &=& (a^2 - m^2) \log \Lambda + \frac{m \Lambda}{2} + \frac{m^2 + a^2}{64 a^2} \Lambda^2 + \frac{a^4 - 6 m^2 a^2 +5 m^4}{2^{15} a^6} \Lambda^4 \nonumber \\ & & + \frac{m}{2^{14} \times 5} \frac{9a^4 + 70 m^2 a^2 +441 m^4}{a^8} \Lambda^5 + \frac{m^2}{2^{19} \times 3} \frac{185a^4 + 1946 m^2 a^2 + 15885 m^4}{a^{10}} \Lambda^6 + \CO(\Lambda^7). \nonumber \\ & & ~~~~~~~~ \eea The term with $\log \Lambda$ is the one-loop contribution. The remaining terms agree precisely with the prepotential obtained from the Nekrasov partition function (\ref{prepotentialNf=3}). \subsection{$SU(2)$ gauge theory with $N_f=4$} \label{subsec:Nf4} We now consider the matrix model with the original action (\ref{action4}). The planar loop equation $ R(z)=-{1\over 2}\left(W'(z)-\sqrt{W'(z)^2+f(z)}\right)$ involves a function $f(z)$ which now takes the form $f(z) = \sum_{i=0}^2 \frac{c_i}{z-q_i}$, with $(q_0, q_1, q_2) = (0, 1, q)$. The parameters $\{c_i\}$ are given by \begin{equation} c_0=-4g_sm_0\langle \,\sum_I {1\over \lambda_I}\,\rangle,\hskip2mm c_1=-4g_sm_1\langle \,\sum_I {1\over \lambda_I-1}\,\rangle,\hskip2mm c_2=-4g_sm_2\langle \,\sum_I {1\over \lambda_I-q}\,\rangle. \end{equation} By studying the behavior of the loop equation at large $z$, we find that the parameters obey \bea \sum_{i=0}^2 c_i = 0, ~~ c_1 + qc_2 = m_\infty^2 - (\sum_{i=0}^2 m_i)^2. \label{c} \eea By eliminating $c_1$ and $c_2$, the spectral curve becomes \bea x^2 = \frac{P_4(z)}{z^2(z-1)^2(z-q)^2}, \eea where $P_4$ is the following polynomial of degree $4$ \bea P_4(z) &=& m_\infty^2 z^4 + \Big( -(1+q)(m_\infty^2+ m_0^2) + (1-q) (m_1^2 - m_2^2) - 2 m_0 (q m_1 + m_2) + q c_0 \Big) z^3 \nonumber \\ & & + \Big( q m_\infty^2 + (1 + 3q + q^2)m_0^2 + (1-q)(m_2^2-qm_1^2) + 2(1+q)m_0 (qm_1 + m_2) \nonumber \\ & & - (1+q)q c_0 \Big) z^2 + \Big( -2q(1+q)m_0^2 - 2q^2m_0m_1 - 2qm_0m_2 + q^2 c_0 \Big)z + q^2 m_0^2. \eea The meromorphic one form $xdz$ has simple poles at $z = 0, 1, q$ and $z=\infty$ with residues $m_0,m_1,m_2$ and $m_\infty$, respectively. Again, we consider the derivative of the free energy: \bea \frac{\partial F_m}{\partial q} = g_s m_2 \left< \tr \frac{1}{q - M} \right> = m_2 R(z) |_{z=q}. \eea This can be easily computed by expanding the resolvent at $z = q$: $ R(z) = \frac{c_2}{4m_2} + \CO(z-q)$. Then, we obtain a simple expression for the free energy \bea \frac{\partial F_m}{\partial q} = \frac{c_2}{4} = \frac{1}{4(1-q)} \left( (\sum_{i=0}^2 m_i)^2 - m_\infty^2 - c_0 \right). \label{freeenergyderivative4} \eea In the last equality we used the relation (\ref{c}). In what follows, we consider the simple case where all the hypermultiplet masses are equal to $m$, i.e. $m_0 = m_\infty = 0$ and $m_1 = m_2 = m$. In this case, the polynomial reduces to degree $3$: $P_3(z) = C z (z - z_+)(z - z_-)$, where we have introduced $C \equiv c_0 q$ and \bea z_\pm = \frac{1}{2} \left( 1+q - (1-q)^2 \frac{m^2}{C} \pm (1-q) \sqrt{1 - 2(1+q) \frac{m^2}{C} + (1-q)^2 \frac{m^4}{C^2} } \right). \eea By taking the $C$ derivative of $xdz$, the holomorphic one form becomes \bea \frac{\partial}{\partial C} xdz = \frac{1}{2 \sqrt{C z_+}} \frac{dz}{\sqrt{z (1-z)(1 - k^2 z)}}, ~~~ k^2 = \frac{z_-^2}{q}. \eea The remaining calculation is similar to that of the previous subsections. That is, we first evaluate the period integral of the above one form.
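For the reader's convenience, we recall the standard reduction that is used here (this intermediate step is implicit in the text, and parallels the $N_f=3$ computation): the substitution $t = \sin^2\theta$ brings the period integral to a complete elliptic integral of the first kind, \bea \int_0^1 \frac{dt}{\sqrt{t(1-t)(1 - k^2 t)}} = 2 K(k) = \pi\, F\left(\frac{1}{2}, \frac{1}{2}, 1; k^2\right), \eea whose expansion in $k^2$ generates the coefficient functions $h_i(q)$ introduced below.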
Then by expanding in $\frac{m^2}{C}$ and integrating over $C$, we obtain \bea 2 i g_s N_1 = \sqrt{C} \left( h_0(q) - h_1(q) \frac{m^2}{C} - \frac{h_2(q)}{3} \frac{m^4}{C^2} - \frac{h_3(q)}{5} \frac{m^6}{C^3} + \CO(\frac{m^8}{C^4}) \right), \eea where the $h_i(q)$ are the expansion coefficients of the period integral in $\frac{m^2}{C}$ and depend only on $q$. $h_0(q)$ corresponds to the theory with massless flavors: \bea h_0(q) = 1 + \frac{1}{4} q + \frac{9}{64} q^2 + \frac{25}{256} q^3 + \frac{1225}{16384}q^4 + \CO(q^5). \label{B_0} \eea The lower order expansions of $h_1, h_2$ and $h_3$ are given by \bea h_1(q) &=& \frac{1}{2} + \frac{1}{8} q + \frac{1}{128} q^2 + \frac{1}{512} q^3 + \frac{25}{32768} q^4 + \CO(q^5), \nonumber \\ h_2(q) &=& \frac{3}{8} + \frac{27}{32} q + \frac{27}{512} q^2 + \frac{3}{2048} q^3 + \frac{27}{131072} q^4 + \CO(q^5), \nonumber \\ h_3(q) &=& \frac{5}{16} + \frac{125}{64} q + \frac{1125}{1024} q^2 + \frac{125}{4096} q^3 + \frac{125}{262144} q^4 + \CO(q^5). \eea Solving for $C$, we obtain \bea C &=& a^2 \Bigg( \frac{1}{h_0(q)^2} + \frac{2h_1(q)}{h_0(q)} \frac{m^2}{a^2} + \frac{2 h_0(q) h_2(q) - 3 h_1(q)^2}{3} \frac{m^4}{a^4} \nonumber \\ & & ~~~~~ + \frac{10 h_0(q) h_1(q)^3 - 10 h_0(q)^2 h_1(q) h_2(q) + 2 h_0(q)^3 h_3(q)}{5} \frac{m^6}{a^6} + \ldots \Bigg), \eea where $a = 2 i g_s N_1$. By substituting the above expression into (\ref{freeenergyderivative4}) and integrating over $q$, we finally obtain the $N_f=4$ free energy \bea 4F_m &=& (a^2 - m^2) \log q + \frac{a^4+6 a^2 m^2 + m^4}{2 a^2} q + \frac{13 a^8 + 100 m^2 a^6 + 22 m^4 a^4 - 12 m^6 a^2 + 5 m^8}{64a^6}q^2 \nonumber \\ & & + \frac{23 a^{12} + 204 m^2 a^{10} + 51 m^4 a^8 - 48 m^6 a^6 + 45 m^8 a^4 - 28 m^{10} a^2 + 9 m^{12}}{192a^{10}} q^3 + \CO(q^4). \eea This perfectly agrees with the instanton partition function (\ref{prepotentialNf=4}). Finally, we make a brief comment on the massless theory. In this case, the expression for $C$ simplifies and becomes $C=a^2/h_0^2(q)$, where $h_0(q)$ is given in (\ref{B_0}). Thus, it is easy to derive \bea 4 F_m = a^2 \left( \log q - \log 16 + \frac{1}{2} q + \frac{13}{64} q^2 + \frac{23}{192} q^3 + \frac{2701}{32768} q^4 + \frac{5057}{81920} q^5 + \CO(q^6) \right), \eea where we have added the one-loop contribution $- a^2 \log 16$. Note that this can be written as $4 F_m = a^2 \log q_{{\rm IR}}$, where $q$ and $q_{{\rm IR}}=e^{2\pi i\tau_{{\rm IR}}}$ are related by \bea q = \frac{\vartheta_2(\tau_{{\rm IR}})^4}{\vartheta_3(\tau_{{\rm IR}})^4} = 16 q_{{\rm IR}} - 128 q_{{\rm IR}}^2 + 704 q_{{\rm IR}}^3 - 3072 q_{{\rm IR}}^4 +11488 q_{{\rm IR}}^5 + \ldots, \eea as already discussed in \cite{DKM, GKMW, AGT, MMMcoupling, Poghossian, EM}. Thus the theory appears classical in terms of the IR coupling constant $\tau_{{\rm IR}}$. \section{Matrix model and quiver gauge theories} \label{sec:quiver} In this section, we study matrix models which describe $\CN=2$ $SU(2)$ quiver gauge theories. First of all, we consider a matrix model describing the $SU(2)$ linear quiver gauge theory where each gauge group has a vanishing beta function \cite{Gaiotto}. Then, by taking its decoupling limit, we propose models for asymptotically free gauge theories in subsection \ref{subsec:asymptotically}. According to the AGT conjecture, the $SU(2)^{n-3}$ linear quiver gauge theory is related to the $n$-point conformal block of the Liouville theory, which is represented by the trivalent graph \cite{BPZ} as in Fig.~\ref{fig:quivern-2}.
As seen in section \ref{sec:matrix}, the Dotsenko-Fateev representation of the conformal block suggests a matrix model with the following action \cite{DV}: \bea W(M) = \sum_{i = 0}^{n-2} m_i \log (M - t_i), \eea where $t_0 = 0$ and $t_1 = 1$. The other parameters $t_i = \prod_{k=1}^{i-1} q_k$ ($i = 2, \ldots, n-2$) describe the complex structure of the $n$-punctured sphere. Note that we also have the prefactor as in (\ref{matrix}). \begin{figure}[t] \begin{center} \includegraphics[scale=0.7]{quiver_n-2.eps} \caption{{\small The trivalent graph representing the $n$-point conformal block, corresponding to the $SU(2)^{n-3}$ linear quiver gauge theory.}} \label{fig:quivern-2} \end{center} \end{figure} From the gauge theory perspective, the parameters $q_k$ are related to the gauge coupling constants $q_k = e^{2 \pi i \tau_k}$ of the gauge group $SU(2)^{n-3}$. For $n=4$, this reduces to the matrix model which we studied in subsection \ref{subsec:Nf4}. The parameters $m_0$ and $m_{n-2}$ are related to the mass parameters of the two hypermultiplets in the fundamental representation of the $SU(2)$ at one end of the quiver: $m_{n-2}=\frac{1}{2}(\mu_3 + \mu_4)$ and $m_0 = \frac{1}{2} (\mu_3 - \mu_4)$. Also, the masses of the bifundamental hypermultiplets are identified with $m_i$ ($i=2, \ldots, n-3$). Finally, the masses of the two fundamental hypermultiplets at the other end of the quiver are related to $m_1$ and $m_\infty$ as $m_1=\frac{1}{2}(\mu_1 + \mu_2)$ and $m_\infty=\frac{1}{2}(\mu_1 - \mu_2)$. The mass parameter $m_\infty$ is defined through the following condition: \bea \sum_{i=0}^{n-2} m_i + m_\infty + 2 g_s N = 0. \label{massrelationquiver} \eea The critical points are determined by the equation of motion \bea \sum_{i = 0}^{n-2} \frac{m_i}{\lambda_I - t_i} + 2 g_s \sum_{J (\neq I)} \frac{1}{\lambda_I - \lambda_J} = 0. \label{eom} \eea If we ignore the second term, we obtain $n-2$ critical points $e_p$ ($p = 1, 2, \ldots, {n-2}$). Let $N_p$ ($p = 1, 2, \ldots, {n-2}$) be the number of matrix eigenvalues sitting at the critical point $e_p$. We take the large $N$ limit with the mass parameters $\{m_i\}$ and the filling fractions $\{\nu_p \equiv g_s N_p\}$ kept fixed. Since this is a one-matrix model, the loop equation is still the same as in the previous cases (\ref{resolvent}), with \bea f(z) \equiv 4 g_s \left< \tr \frac{W'(z) - W'(M)}{z - M} \right> = \sum_{i = 0}^{n-2} \frac{c_i}{z - t_i} \equiv \frac{Z(z)}{\prod_{i=0}^{n-2} (z - t_i)}. \label{fi} \eea We note that the polynomial $Z(z)$ is of degree $n-3$, since the leading term vanishes due to the equations of motion. Finally, we define the meromorphic one form $\lambda = x(z) dz$ as \bea x(z)^2 \equiv \left( 2 \langle R(z) \rangle + W'(z) \right)^2 = W'(z)^2 + f(z). \label{spectral} \eea \subsection{Matrix model for asymptotically free quiver gauge theory} \label{subsec:asymptotically} The matrix model corresponding to an asymptotically free quiver gauge theory can be obtained by taking a decoupling limit as in section \ref{sec:matrix}. The only limits which do not spoil the condition (\ref{massrelationquiver}) are those where $\mu_2 (=m_1 - m_\infty)$ or $\mu_4 (=m_{n-2} - m_0)$ is taken to infinity. For the sake of illustration, let us consider the $n=5$ case with the action \bea W(z) = \sum_{i = 0}^{3} m_i \log (z - t_i), \eea where $t_2 = q_1$ and $t_3 = q_1 q_2$. This corresponds to the $SU(2)_1 \times SU(2)_2$ quiver gauge theory whose gauge coupling constants are $q_1$ and $q_2$. We first take a limit $\mu_4 \rightarrow \infty$ with $\mu_4 q_2 = \tilde{\Lambda}$ fixed.
In this limit, we obtain \bea W(z) \rightarrow \mu_3 \log z + \sum_{i=1,2} m_i \log (z - t_i) - \frac{q_1 \tilde{\Lambda}}{2z}. \label{asymptoticallyquiver1} \eea It is natural to anticipate that this matrix model corresponds to the quiver theory with one fundamental hypermultiplet coupled to the second gauge group $SU(2)_2$ and two fundamental multiplets coupled to the first gauge group. The relation of the mass parameters (\ref{massrelationquiver}) becomes $\mu_3 + \sum_{i=1,2} m_i + m_\infty + 2 g_s N =0$ in this limit. By further taking the limit $\mu_2 \rightarrow \infty$ with $\mu_2 q_1 = \Lambda$ fixed, we obtain from (\ref{asymptoticallyquiver1}) \bea W(z) \rightarrow \mu_3 \log z + m_2 \log (z - 1) - \frac{\Lambda z}{2} - \frac{\tilde{\Lambda}}{2z}, \eea where we have also rescaled $z \rightarrow q_1 z$. The relation of the mass parameters (\ref{massrelationquiver}) becomes \bea \mu_3 + m_2 + \mu_1 + 2 g_s N = 0. \eea This matrix model is expected to describe the $SU(2)_1 \times SU(2)_2$ quiver gauge theory with each gauge factor coupled to one hypermultiplet. Both of the gauge factors have nonvanishing beta functions and the theory is asymptotically free. It is possible to generalize this construction to the case with $n>5$. A decoupling limit of a hypermultiplet at the last end of the quiver is $\mu_4 \rightarrow \infty$ with $\mu_4 q_{n-3} = \tilde{\Lambda}$ fixed. Likewise, a decoupling limit of a hypermultiplet at the first end of the quiver is $\mu_2 \rightarrow \infty$ with $\mu_2 q_{1} = \Lambda$ fixed. By taking these limits, we finally obtain \bea W(z) = \mu_3 \log z + m_2 \log (z - 1) + \sum_{i=3}^{n-3} m_i \log \left(z - \prod_{k=2}^{i-1} q_k \right) - \frac{\Lambda z}{2} - \frac{\tilde{\Lambda}}{2z} \left( \prod_{k=2}^{n-4}q_k \right), \eea with the following relation for the mass parameters: \bea \mu_3 + \sum_{i=2}^{n-3} m_i + \mu_1 + 2 g_s N = 0. \eea \section{Conclusion and discussion} In this paper we have studied the matrix model proposed to explain the AGT relation and to interpolate between the Liouville theory and $\CN=2$ $SU(2)$ gauge theories. We have explicitly evaluated the free energy of the matrix models describing $SU(2)$ gauge theory with $N_f=2,3,4$ flavors and have shown that they in fact reproduce the amplitudes of Seiberg-Witten theory. Our analysis is limited to the large $N$ limit, and it is very important to see if our results can be generalized to reproduce the full Nekrasov partition functions. There are already interesting works in this direction \cite{FHT, KPW}, and we hope that we can report further results in future publications. \section*{Acknowledgements} K.M. would like to thank K.~Hosomichi, H.~Itoyama and F.~Yagi for discussions and comments. He also would like to thank the Ecole Normale Superieure, SISSA and the University of Amsterdam for warm hospitality during part of this project. The research of T.E. is supported in part by the project 19GS0219 of the Japan Ministry of Education, Culture, Sports, Science and Technology. The research of K.M. is supported in part by JSPS Bilateral Joint Projects (JSPS-RFBR collaboration).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let $(M^m,g_M)$ be a smooth closed Riemannian manifold (smooth compact manifold without boundary). Let $[g_M]$ denote the conformal class of the metric $g_M$. The Yamabe constant of the conformal class $(M^m,[g_M])$ is defined as the infimum of the normalized total scalar curvature restricted to $[g_M]$, \begin{equation} \label{Yamabe} Y(M,[g_M])= \inf_{h \in [g_M]} \frac{\int s_{h} dV_{h}}{Vol(M,h)^{\frac{m-2}{m}}}, \end{equation} \noindent where $s_{h}$ and $dV_{h}$ are the scalar curvature and the volume element of $h$. By writing $h=f^{\frac{4}{m-2}} g_M$ ($f$ positive and $C^{\infty}$), we can rewrite (\ref{Yamabe}) in terms of functions of the Sobolev space $L_1^2(M)$, \[Y(M,[g_M])=\inf_{f \in L_1^2(M), f\neq0} Q_{g_M}(f)\] \begin{equation} \label{Yamabe2} := \inf_{f \in L_1^2(M), f\neq0 } \frac{a \int_M |\nabla f|^2 dV_{g_M}+ \int_M s_{g_M} f^2 dV_{g_M}}{\left(\int_M f^p dV_{g_M}\right)^{\frac{2}{p}}}, \end{equation} \noindent where $a=\frac{4(m-1)}{(m-2)}$, $p=\frac{2m}{m-2}$. $Q_{g_M}(f)$ is called the Yamabe quotient. It is a fundamental result, proven in many steps by H. Yamabe \cite{Yam}, N. Trudinger \cite{Tru}, T. Aubin \cite{Aubin} and R. Schoen \cite{Schoen}, that, for closed manifolds, in each conformal class the infimum is achieved. The metric in each conformal class that achieves the infimum in (\ref{Yamabe}) is called a Yamabe metric and has constant scalar curvature. Meanwhile, the function that achieves the infimum in (\ref{Yamabe2}) is called a Yamabe minimizer. For any conformal class, the Yamabe constant is bounded from above, $Y(M^m,[g_M])\leq Y(S^m,[g_0])=m(m-1)Vol(S^m,g_0)^{\frac{2}{m}}$, where $g_0$ is the round metric on $S^m$ with constant sectional curvature 1. The Yamabe invariant $Y(M)$ of $M$ is defined as the supremum of the Yamabe constants over all the conformal classes (cf. \cite{Koba}, \cite{Schoen2}). Hence, it is an easy consequence that $Y(M) \leq Y(S^m)=Y(S^m,[g_0])$. In the following, we will denote $Y(S^m)=Y(S^m,[g_0])$ by $Y_m$. When the Yamabe constant is non-positive, there is only one metric with constant scalar curvature in the conformal class. On the other hand, when the Yamabe constant is positive, there may be several metrics of constant scalar curvature. Examples include $(S^k\times M^m, g_0+g_M)$, where $M^m$ is a Riemannian manifold of constant scalar curvature $s_{g_M}$: it has been shown in \cite{Pet2} that in this case, the number of unit volume non-isometric metrics of constant scalar curvature in the conformal class of $[g_0+g_M]$ grows at least linearly with $\sqrt{s_{g_M}}$. Possibly the simplest example of several metrics of constant scalar curvature is the one exhibited in \cite{Aku}: if $(M_1^m,g_1)$ and $(M_2^n,g_2)$ are Riemannian manifolds with constant scalar curvature, and $s_{g_1}>0$, then $\delta^n g_1+\delta^{-m} g_2$ has volume one and, for $\delta$ small enough, constant scalar curvature greater than $Y_{m+n}$. Through the study of these cases, Akutagawa, Florit and Petean found that if $(M_1^m,g_1)$ is a closed manifold ($m\geq2$) of positive scalar curvature and $(M_2^n,g_2)$ any closed manifold, then \begin{equation} \label{akutagawa} \lim_{r\rightarrow \infty} Y(M\times N,[g_1+r g_2])=Y(M\times \mathbf{R}^n,[g_1+g_E]) \end{equation} \noindent where $g_E$ is the Euclidean metric on $\mathbf{R}^n$ (Theorem 1.1 in \cite{Aku}).
This makes the Yamabe constant $Y(M\times \mathbf{R}^n,[g_1+g_E])$ highly relevant in the study of the Yamabe constant of product manifolds since, for instance, it follows from (\ref{akutagawa}) that the Yamabe invariant of $M\times N$ is bounded from below, \[Y(M\times \mathbf{R}^n,[g_1+g_E])\leq Y(M\times N).\] As another example, the Yamabe constant $Y(S^m\times \mathbf{R}^n,[g_0+g_E])$ is involved in a surgery formula for the Yamabe invariant of a compact manifold, as recent results of B. Ammann, M. Dahl and E. Humbert \cite{Amm} have shown. Also, it was through the case where $n=1$ that J. Petean found a lower bound for the Yamabe invariant of $M_1^m\times S^1$ (when $M_1^m$ is an Einstein manifold), among other interesting results involving $Y(M\times \mathbf{R}^n,[g_M+g_E])$, \cite{Pet1}. In this article we study the Yamabe constant $Y(M^m\times \mathbf{R}^n,[g_M+g_E])$, where $M^m$ ($m\geq2$), as in (\ref{akutagawa}), is a closed manifold with positive scalar curvature. The Yamabe problem for non-compact manifolds has not been solved completely yet. Various counter-examples, and conditions for the existence and nonexistence of a metric of constant scalar curvature in the conformal class of a given metric, have been published for non-compact manifolds (cf. \cite{Zhang2}). Results include, e.g., those of K. Akutagawa and B. Botvinnik in \cite{Bot}, where they study complete manifolds with cylindrical ends and solve affirmatively the Yamabe problem on cylindrical manifolds. Results also include some cases of noncompact complete manifolds of positive scalar curvature. We cite here the work of S. Kim in \cite{Kim}, where he introduces the notation \[\mathbf{Q(M)}:= \inf_{u\in C_0^{\infty}(M)} \frac{\int_M |\nabla u|^2 dV_{g_M}+ 1/a \int_M s_{g_M} u^2 dV_{g_M}}{\left(\int_M u^p dV_{g_M}\right)^{\frac{2}{p}}},\] \noindent and \[\mathbf{\bar Q(M)}:=\inf_{u\in C_0^{\infty}(M\setminus B_r)} \frac{\int_M |\nabla u|^2 dV_{g_M}+ 1/a \int_M s_{g_M} u^2 dV_{g_M}}{\left(\int_M u^p dV_{g_M}\right)^{\frac{2}{p}}}\] \noindent (where $r$ is the distance from $x$ to a fixed point $x_0\in M$, and $B_r$ is the ball of radius $r$ centered at $x_0$), and then proves the existence of a metric of constant scalar curvature in the conformal class of $(M,g_M)$ whenever $Q(M)<\bar{Q}(M)$. In our case, given some of the particularities of $(M^m\times \mathbf{R}^n,g_M+g_E)$, we use a more direct approach to prove existence of a Yamabe minimizer. We first show that the Steiner symmetrization of a function ``improves'' the Yamabe quotient, thus making the Steiner symmetrized functions the best candidates for the Yamabe minimizer. Then, along with this result, we use the fact that $Y(M^m\times \mathbf{R}^n,[g_M+g_E])<Y_{m+n}$ (a known result of Akutagawa, Florit and Petean (\cite{Aku})) to prove that the Yamabe minimizer exists and is positive and $C^{\infty}$. The fact that Steiner symmetrizations ``improve'' the Yamabe quotient is a consequence of the following. \begin{thm} \label{8.2} Let $(N,g)=(M^m\times \mathbf{R}^n,g_M+g_E)$, and $u \in L_{1+}^{s}(N)$, $(1<s<\infty)$. Let $u^*$ be the Steiner symmetrization of $u$ with respect to $M$. Then $u^* \in L_{1+}^{s}(N)$, and \begin{equation} \label{polya} ||\nabla u^*||_s\leq ||\nabla u||_s. \end{equation} \end{thm} Indeed, using inequality (\ref{polya}) from the preceding theorem, and the fact that the norm is preserved under Steiner symmetrizations ($||u^*||_s =||u||_s$, for any $s$), the next corollary follows.
\begin{cor} \label{radial} Consider $(N,g)=(M^m\times \mathbf{R}^n,g_M+g_E)$, and the Yamabe quotient for $2\leq s\leq p$: \[Q_s(u)=\frac{a\int_N |\nabla u|^2 dV_g+\int_N s_g u^2 dV_g}{(\int_N u^s dV_g)^{\frac{2}{s}}}.\] \noindent Then $Q_s(u^*) \leq Q_s(u)$. \end{cor} The main result of this paper, the existence of the Yamabe minimizer of $(N,g)=(M^m\times \mathbf{R}^n,g_M+g_E)$, is stated in the next theorem. \begin{thm} \label{principal} Let $(N,g)=(M^m\times \mathbf{R}^n,g_M+g_E)$, with $m\geq2$ and $s_{g_M}>0$. The Yamabe minimizer of $(N,g)$ exists, and is positive and $C^{\infty}$. \end{thm} The result we give is sharp: for manifolds of the type $(M^m\times \mathbf{R}^n,g_M+g_E)$ where $M^m$ has non-positive scalar curvature or where $m<2$, examples in which no Yamabe minimizer is achieved are known to exist. An example of the former is $N^9=S^1\times S^1\times S^1\times S^1\times S^1\times S^1\times \mathbf{R}^3$ (with the metric being the product of the usual metrics on $S^1$ and $\mathbf{R}^3$), while an example of the latter is $N^4=S^1 \times \mathbf{R}^3$ (with the metric being the product of the usual metrics), as is shown by Zhang in \cite{Zhang2} and \cite{Zhang}. This paper is organized as follows. In section 2 we give the precise definition of the Steiner symmetrization of a function $u$, $u^*$, with respect to $M$; we also give the definition of polarizations and we introduce other preliminaries. We also give a proof of Theorem \ref{8.2}; many of the proofs and lemmas we give there are due to Brock and Solynin \cite{Brock} and to Jean Van Schaftingen \cite{Van}, with some minor modifications. Finally, in section 3, we give the proof of Theorem \ref{principal}. In this last section we follow the ideas of the classical proof of the Yamabe problem for compact manifolds (cf. \cite{Lee}), and we take into account the non-compactness of the situation through the techniques of the Concentration Compactness Principle of Lions \cite{Lions1}, \cite{Lions2}. \noindent \textbf{Acknowledgment.} The author would like to thank his supervisor J. Petean for many useful observations and valuable conversations on the subject. \section{Proof of Theorem 1} In this section we state some preliminary definitions and results we will need for the proof of Theorem \ref{8.2}. We begin by stating the definitions in $(N,g)=\left(M^m\times\mathbf{R}^n,g_M+g_E\right)$ of a Steiner symmetrization with respect to $M$, and of a polarization by a polarizer in $\mathbf{R}^n$. We then prove some properties of polarizations, such as the fact that polarizations preserve the $L^s$ norms of a function and of its gradient, for any $s\geq 1$, \begin{equation} \label{nablilla} ||\nabla u^H||_s=||\nabla u||_s, \end{equation} \noindent (lemma \ref{gradlemma}). At the end of this section we give a proof of Theorem \ref{8.2}: we approximate the Steiner symmetrization by a carefully chosen sequence of polarizations, and then verify that, in the limit of the sequence, (\ref{nablilla}) yields the inequality (\ref{polya}) between the gradients of $u^*$ and of $u$. These results are a more or less direct adaptation to our case of the work of Brock and Solynin \cite{Brock} and of Jean Van Schaftingen \cite{Van}. \subsection{Steiner symmetrizations} Consider $(N,g)=\left(M^m\times\mathbf{R}^n,g_M+g_E\right)$, where $(M^m,g_M)$ is a closed Riemannian manifold and $g_E$ the Euclidean metric.
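Informally, the Steiner symmetrization defined below acts fiber by fiber: over each point of $M$ it replaces the slice of $u$ on $\mathbf{R}^n$ by the symmetric decreasing rearrangement with the same distribution. The following toy discretization (our own illustration with hypothetical sample data, for $n=1$; it is not used in any proof) makes the fiberwise operation concrete:
\begin{verbatim}
import numpy as np

def steiner_slice(values):
    # Symmetric decreasing rearrangement of one sampled fiber u(x0, .)
    # on a uniform grid of R (toy case n = 1): same multiset of values,
    # reordered so that they peak at the center and decrease outwards.
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    out = np.empty_like(v)
    c = (len(v) - 1) // 2
    out[c] = v[0]
    for i in range(1, len(v)):
        off = (i + 1) // 2
        out[c + off if i % 2 else c - off] = v[i]
    return out

u = np.array([0.1, 0.7, 0.3, 0.9, 0.2])
print(steiner_slice(u))                    # [0.1 0.3 0.9 0.7 0.2]
# equimeasurability: every s-norm of the slice is preserved exactly
print(np.isclose((u**2).sum(), (steiner_slice(u)**2).sum()))   # True
\end{verbatim}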
Through the course of this article we will refer to Steiner symmetrizations in $(N,g)$ with respect to $M$, simply as Steiner symmetrizations. We first define Steiner symmetrizations for sets. Let $U$ be a measurable set in $(N,g)$; we define its Steiner symmetrization $U^*$ as follows. For each $x_0 \in M$, if \[Vol(U \cap (\{x_0\}\times \mathbf{R}^n),g_E) >0,\] then \begin{equation} \label{choose} \left(U^*\cap(\{x_0\}\times \mathbf{R}^n)\right)=\bigg\{ \begin{array}{l} \{x_0\}\times B_{\rho}(0),\ \ \mathit{if} \ \ U \ \ \mathit{ is\ \ open},\\ \{x_0\}\times \bar{B}_{\rho}(0), \ \ \mathit{if} \ \ U \ \ \mathit{ is\ \ compact} \end{array} \end{equation} \noindent where $B_{\rho}(0)$ is an open ball in $\mathbf{R}^n$, of radius $\rho>0$, centered at the origin, and $\rho$ is such that \[Vol(U \cap (\{x_0\}\times \mathbf{R}^n),g_E)= Vol(B_{\rho}(0),g_E).\] \noindent In particular, $\rho$ depends on $x_0$. On the other hand, if $U$ is measurable but neither open nor compact, then the sets $U^*\cap(\{x_0\}\times \mathbf{R}^n)$ are defined in an almost everywhere sense by either one of (\ref{choose}). Finally, if $Vol(U \cap (\{x_0\}\times \mathbf{R}^n),g_E) = 0$, then $U^*\cap(\{x_0\}\times \mathbf{R}^n)$ is either empty or the point $(x_0,0)$, according to whether $U\cap(\{x_0\}\times \mathbf{R}^n)$ is empty or not. It is not hard to see that for any sets $A$, $B$ $\subset N$, \begin{equation} \label{nonexp} A\subset B \Rightarrow A^* \subset B^* \end{equation} \noindent and that for measurable subsets $A\subset B \subset N$, \begin{equation} \label{nonexp2} Vol_g(B^*\backslash A^*)\leq Vol_g (B\backslash A). \end{equation} We now define Steiner symmetrizations for functions. Consider the measurable functions $u:N \rightarrow \mathbf{R}$ for which \[Vol(\{x \in N| u(x)>c\})< +\infty,\] \noindent $\forall c> \inf u$ (in the following, we will denote $\{x\in N|u(x)>c\}$ by $\{u>c\}$). We will denote this class of functions by $Sym$. We note that $L^s(N)$, $L_{1}^s(N)$ and $C_{0}(N)$ are subspaces of $Sym$. The Steiner symmetrization of a measurable function $u:N \rightarrow \mathbf{R}^+$ in $Sym$ is defined as follows. Let $y \in N $; then \[u^*(y)= \sup \{c\in \mathbf{R} | y \in \{u>c\}^*\}.\] \noindent It follows that for any $c\in \mathbf{R}$, \begin{equation} \label{nonexp1} \{u>c\}^*=\{u^*>c\}. \end{equation} One important property of Steiner symmetrizations is that they are non-expansive. \begin{lemma} \label{expa} Given $1\leq s< \infty$, we have \begin{equation} \label{exa} ||u^*-v^*||_s\leq ||u-v||_s \end{equation} \end{lemma} \begin{proof} Recall that \[\int_N |u-v|^s dV_g = \int\!\!\!\int_{\{ \sigma \leq \tau\}} \Big[ Vol\left(\{v>\tau\}\setminus\{u>\sigma\}\right) +Vol\left(\{u>\tau\}\setminus\{v>\sigma\}\right)\Big] \, s(s-1)|\sigma-\tau|^{s-2}\, d\sigma\, d\tau. \] \noindent The result of the lemma then follows from equations (\ref{nonexp2}) and (\ref{nonexp1}). \end{proof} \subsection{Polarizations} Let $\Sigma$ be some $(n-1)$ dimensional affine hyperplane in $\mathbf{R}^n$. Consider $M^m\times \Sigma$ and assume that $H$ is one of the open half-spaces into which $N=M^m \times \mathbf{R}^n$ is subdivided by $M^m\times \Sigma$. We will call $H$ a polarizer, and denote its complement in $N$ by $H^c$. Let $\bar x$ denote the reflection of $x$ across $M^m\times \Sigma$.
That is, for $x=(a,b) \in M^m \times \mathbf{R}^n$, with $a\in M^m$ and $b\in \mathbf{R}^n$, \[\bar x =(a, b^{\Sigma}),\] where $b^{\Sigma}$ denotes the reflection of $b\in \mathbf{R}^n$ through the hyperplane $\Sigma \subset\mathbf{R}^n$ which defines $H$. If $u$ is measurable, we define its polarization with respect to a polarizer $H$, $u^H$, by \begin{equation} u^H (x)=\bigg\{ \begin{array} {l} \max\{u(x), u(\bar{x})\} \ \ \mathit{if} \ \ x \in H,\\ \min \{u(x), u(\bar{x})\} \ \ \mathit{if} \ \ x \in H^c. \end{array} \end{equation} One useful property of polarizations is that the $s$-norms of the gradient of a function $u \in L_1^s(M\times \mathbf{R}^n)$ do not change under polarization, as is shown in the next lemma. \begin{lemma} \label{gradlemma} Let $u \in L_{1+}^s(N)$, $(1\leq s \leq \infty)$, and let $H$ be some polarizer. Then $u^H \in L_1^s(N)$, and $|\nabla u|$ and $|\nabla u^H|$ are rearrangements of each other. In particular, we have \begin{equation} \label{grads} ||\nabla u^H||_s=||\nabla u||_s \end{equation} \end {lemma} \begin{proof} For the sake of simplicity, we first define the reflection of $u$, and the reflection of the polarization of $u$ by $H$. That is, let \begin{equation} \begin{array}{l} v(x):=u(\bar x), \ \ \mathit{for} \ \ x \in N, \\ w(x):=u^H(\bar x), \ \ \mathit{for} \ \ x \in H. \end{array} \end{equation} Next, we note that \[u^H(x)=\max\{u(x),v(x)\}= v(x)+(u(x)-v(x))_+,\] \noindent and that \[w(x)=\min\{u(x),v(x)\}= u(x)-(u(x)-v(x))_+,\] \noindent for all $x\in H$. Hence, we conclude that $u^H$, $w\in L_1^s(N)$, and that \[ \nabla u^H (x)= \bigg\{ \begin{array}{c c} \nabla u(x) \ \ \mathit{a.e. \ \ on} \ \ \{x \in N: u(x)>v(x)\}\cap H,\\ \nabla v(x) \ \ \mathit{a.e. \ \ on} \ \ \{x \in N: u(x)\leq v(x)\}\cap H, \end{array} \] \[ \nabla w(x) = \bigg\{ \begin{array}{c c} \nabla v(x) \ \ \mathit{a.e. \ \ on} \ \ \{x \in N: u(x)>v(x)\}\cap H,\\ \nabla u(x) \ \ \mathit{a.e. \ \ on} \ \ \{x \in N: u(x)\leq v(x)\}\cap H. \end{array} \] Now, to prove the assertions of the lemma, we define the following regions on $N$, \begin{equation} \label{regions} \begin{array}{c} R_1 = \{x \in N: u(x)>v(x)\}\cap H, \\ R_2 = \{x \in N: u(x)\leq v(x)\}\cap H, \\ R_3 = \{x \in N: u(x)>v(x)\}\cap H^c, \\ R_4 = \{x \in N: u(x)\leq v(x)\}\cap H^c, \\ \end{array} \end{equation} \noindent and we observe that $u^H=u$ in $R_1$ and $R_4$. Thus, we have \[\int_{R_1\cup R_4}{|\nabla u^H|^s}dV_g=\int_{R_1\cup R_4}{|\nabla u|^s}dV_g.\] We also note that $u^H=v$ in $R_2$ and $R_3$, i.e., $\int_{R_3}{|\nabla u^H|^s}dV_g=\int_{R_2}{|\nabla u|^s}dV_g$ and $\int_{R_2}{|\nabla u^H|^s}dV_g=\int_{R_3}{|\nabla u|^s}dV_g$. And so, the assertion follows: \[\int_{N}{|\nabla u^H|^s}dV_g=\int_{R_1\cup R_4}{|\nabla u^H|^s}dV_g+\int_{R_2}{|\nabla u^H|^s}dV_g+\int_{R_3}{|\nabla u^H|^s}dV_g\] \[=\int_{R_1\cup R_4}{|\nabla u|^s}dV_g+\int_{R_3}{|\nabla u|^s}dV_g+\int_{R_2}{|\nabla u|^s}dV_g=\int_{N}{|\nabla u|^s}dV_g.\] \end{proof} \begin{rem} \label{eich} By following the scheme of the proof of \textit{Lemma \ref{gradlemma}}, we may also note that $||u||_s=||u^H||_s$, for any $1\leq s \leq \infty$. \end{rem} \begin{rem} \label{expan} Polarizations are non-expansive (for $u$,$v$ $\in L^s(N)$, $1 \leq s \leq \infty$, $||u^H-v^H||_s\leq ||u-v||_s$). \end{rem} \subsection{Approximation of Steiner symmetrizations by Polarizations} We will now show that any Steiner symmetrization $u^*$ of a function $u$ can be approximated by a sequence of polarizations of $u$, $\{u^{H_i}\}$; a toy one-dimensional illustration of a single polarization is sketched below.
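The sketch (ours, with a hypothetical grid and random data; not used in the proofs) implements a polarization for $n=1$ and checks numerically the two remarks above: a polarization is an exact rearrangement of the values, hence preserves every $s$-norm, and it is non-expansive.
\begin{verbatim}
import numpy as np

def polarize(u, a):
    # Toy polarization of a function sampled on the symmetric grid
    # x = -K, ..., K (n = 1). The polarizer H = (a, infinity) contains
    # the origin when a < 0; the reflection is x -> 2a - x. For each
    # grid pair (x, 2a - x) the larger value is placed in H.
    K = (len(u) - 1) // 2
    uH = np.array(u, dtype=float)
    for xi in range(a + 1, K + 1):            # grid points strictly in H
        i, j = xi + K, (2*a - xi) + K         # point and its reflection
        if 0 <= j < len(u):
            uH[i], uH[j] = max(u[i], u[j]), min(u[i], u[j])
    return uH

rng = np.random.default_rng(1)
u, v = rng.random(21), rng.random(21)
uH, vH = polarize(u, -3), polarize(v, -3)
print(np.allclose(np.sort(uH), np.sort(u)))                  # rearrangement
print(np.abs(uH - vH).sum() <= np.abs(u - v).sum() + 1e-12)  # non-expansive
\end{verbatim}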
To do so, we will first show that sequences of iterated polarizations $\{u^{H_i}\}$ are sequentially compact. Then, we will construct a sequence of polarizations, and establish some conditions for the convergence of the sequence to the Steiner symmetrization of the function. We begin this section by joining together the concepts of Steiner symmetrizations we defined earlier with the concepts of polarizations, to define a special set of halfspaces in $N=M\times \mathbf{R}^n$. Let $\Sigma$ now be a halfspace of $\mathbf{R}^n$. We will denote by $\mathbf{H}$ the set of all halfspaces $H$ of $N$ of the form $M \times \Sigma$, and by $\mathbf{H_0}$ the set of all halfspaces $H \in \mathbf{H}$ such that $M \times \{0\} \subset H$. \begin{rem} \label{6.7} It follows from the definition of a polarization, from the definition of $\mathbf{H}_0$, and from the symmetry of the Steiner symmetrization, that $(u^*)^H =u^*$, for any polarizer $H\in \mathbf{H}_0$. \end{rem} Another fact that makes $\mathbf{H_0}$ a special set of halfspaces is that there is always some polarizer $H \in \mathbf{H_0}$ such that $u^H$ is strictly closer to $u^*$ than $u$ is. \begin{lemma} \label{closer} Let $u \in C_{0+}(N)$. If $u \neq u^*$, then there is some polarizer $H \in \mathbf{H}_0$ such that \[||u^H-u^*||_s<||u-u^*||_s,\] \noindent for each $1\leq s \leq \infty$. \end{lemma} \begin{proof} Since $u\neq u^*$, there is some $c>0$ such that $\{x \in N: u(x) >c\}\Delta\{x \in N:u^*(x)>c\}\neq \phi$. So, we choose some $y \in \{x \in N:u^*(x)>c\} \backslash\{x \in N:u(x)>c\}$. There is a polarizer $H \in \mathbf{H}_0$, with $y\in H$, such that $\bar{y} \in \{x \in N:u(x)>c\} \backslash \{x \in N:u^*(x)>c\}$. We now choose a sufficiently small neighborhood $W_0\subset H$ of $y$, so that $\bar{W}_0 \subset \{x \in N: u(x)>c\} \backslash \{x\in N : u^*(x)>c\}$. We then have, for $x \in W_0$, \[u^H(x)=u(\bar x)>c\geq u^*(\bar x)\] and \[u^*(x)>c \geq u(x)=u^H(\bar x),\] and so, for $s\geq1$, \begin{equation} \label{ineq} |u(x)-u^*(x)|^s+|u(\bar x)-u^*(\bar x)|^s>|u^H(x)-u^*(x)|^s+|u^H(\bar x)-u^*(\bar x)|^s. \end{equation} If $x \in H\setminus W_0$, the corresponding inequality is non-strict. The integral inequality is obtained by integration of (\ref{ineq}) over $W_0$ and of the non-strict inequality over $H\backslash W_0$. \end{proof} We now prove that for a sequence of polarizations $u_m=u^{H_1H_2...H_m}$, it suffices that the polarizers satisfy $\{H_i\}_{i\leq m} \subset \mathbf{H}_0$ for the existence of a function $f$ such that a subsequence of $\{u_m\}$ converges to $f$. \begin{lemma} \label{sequentially} Let $u \in C_{0+}(N)$. Let $\{u_m\}$ be a sequence of polarizations of $u$, with its respective sequence of polarizers $\{H_m \} \subset \mathbf{H}_0$, ($u_m=u^{H_1 ...H_m}$). Then there is a function $f \in C_{0+}(N)$, and an increasing subsequence $\{u_{m_k}\}$ of $\{u_{m}\}$, such that, for each $s$, $1\leq s \leq\infty$, we have \[\lim_{k \rightarrow\infty}||f-u_{m_k}||_s=0.\] \end{lemma} \begin{proof} This lemma follows from an application of the theorem of \textit{Arzela-Ascoli} (cf. \cite{Peter}). That is, to conclude that the sequence $\{u_m\}$ is compact, we need to prove that $\{u_m\}$ is equibounded, equicontinuous and that the supports are uniformly bounded. \begin{enumerate} \item Since $||u||_s=||u^H||_s$ for any polarizer $H\in \mathbf{H}_0$ (remark \ref{eich}), it follows that $||u||_s=||u_m||_s$ for $m=1,2,...$. In particular, taking $s=\infty$, the functions $u_m$ are equibounded for all $m$.
\item Let \[w_u(\delta)=\sup\{u(x)-u(y)|d(x,y)\leq\delta\},\] \noindent be the modulus of continuity of a function $u$. Let $H\in \mathbf{H}_0$ be any polarizer. We proceed to analyze the different cases. Let $\delta>0$, and consider any ball $B_{\delta}(p)$ in the domain of $u$, such that $w_u(\delta)= \sup_{B_{\delta}(p)}\{u(x)-u(y)\}$. If either $B_{\delta}(p) \subset H$ or $B_{\delta}(p) \subset H^c$, we then have $\sup\{u^H(x)-u^H(y)|d(x,y)\leq\delta\}=\sup\{u(x)-u(y)|d(x,y)\leq\delta\}$. If, on the other hand, $B_{\delta}(p)\cap H \neq \phi $ and $B_{\delta}(p)\cap H^c \neq \phi $, then we consider that \[\sup_{ B_{\delta}(p)} \{u^H(x)-u^H(y)\} = \sup_{ B_{\delta}(p) \cap (R_1\cup R_4)} \{u(x)-u(y)\} \leq \sup_{ B_{\delta}(p)} \{u(x)-u(y)\},\] since $(B_{\delta}(p)\cap(R_1\cup R_4))\subset B_{\delta}(p)$. And so, we have that $w_{u^H}(\delta) \leq w_{u}(\delta)$. This yields, by induction, $w_{u_m} \leq w_u$. Finally, since $u \in C_0(N)$, $u$ is uniformly continuous, and then the sequence $\{u_m\}$ is equicontinuous. \item The fact that the supports are equibounded follows from the monotonicity of polarizations: since $u\in C_0(N)$, there are some $R>0$ and some $p \in N$ such that $\mathit{Supp}\ \ u \subseteq B_R(p)$, and \[\mathit{Supp} \ \ u\subseteq B_R(p) \Rightarrow \mathit{Supp} \ \ u^H\subseteq B_R(p)^H = B_R(p).\] And then, by induction, $\mathit{Supp} \ \ u_m \subseteq B_R(p)$. \end{enumerate} We conclude by the \textit{Arzela-Ascoli} theorem that there are some $f\in C_{0+}(N)$ and some subsequence $\{u_{m_k}\}$ of $\{u_m\}$ such that $u_{m_k} \rightarrow f$. \end{proof} We now construct a sequence of polarizations of $u \in C_{0+}(N)$ that will converge to $u^*$. We proceed inductively. As expected, we start with $u_0=u$. Then, to choose $H_{m+1} \in \mathbf{H_0}$, so that $u_{m+1}=u_m^{H_{m+1}}$, we look at \[\alpha_m = \sup_{H \in \mathbf{H_0}} \{||u_m-u^*||_1 - ||u_m^H -u^*||_1\}.\] \noindent By lemma \ref{closer}, we know that $\alpha_m$ is always strictly positive. Now, for some fixed $\kappa$ ($0<\kappa<1$), taking $\epsilon < \alpha_m (1-\kappa)$ we note that we can always choose $H_{m+1}\in \mathbf{H_0}$ so that \[0<\alpha_m < ||u_m-u^*||_1 - ||u_m^{H_{m+1}} -u^*||_1 + \epsilon<||u_m-u^*||_1 - ||u_m^{H_{m+1}} -u^*||_1 +\alpha_m (1-\kappa).\] \noindent Then, it follows that \begin{equation} \label{kappa} \kappa \sup_{H \in \mathbf{H_0}} \{||u_m-u^*||_1 - ||u_m^H -u^*||_1\} < ||u_m-u^*||_1 - ||u_m^{H_{m+1}} -u^*||_1. \end{equation} Next, we prove that the sequence of polarizations we have just constructed converges to $u^*$. \begin{lemma} \label{kappalemma} Let $u\in C_{0+}(N)$. Let $\{u_m\}$ be a sequence of iterated polarizations of $u$, with corresponding halfspaces $\{H_m\} \subset \mathbf{H_0}$ ($u_m=u^{H_1H_2...H_m}$), and suppose that the $H_m$'s are chosen so that equation (\ref{kappa}) is satisfied. Then $u_m \rightarrow u^*$ in any $s$-norm ($1 \leq s \leq \infty$). \end{lemma} \begin{proof} It follows by lemma \ref{sequentially} that there are some $f \in C_0(N)$ and some subsequence $\{u_{m_k}\}$ of $\{u_{m}\}$ such that $\{u_{m_k}\}$ converges to $f$ in any $L^s$ norm. Now, since each $u_{m_k}$ is a rearrangement of $u$, we have $u_{m_k}^*=u^*$, so that \[||u^*-f^*||_1= \lim_{k \rightarrow \infty} ||u_{m_k}^*-f^*||_1, \] and since the Steiner symmetrization is a non-expansive rearrangement, we have \[||u_{m_k}^*-f^*||_1 \leq ||u_{m_k}-f||_1.
\] \noindent It follows that \[||u^*-f^*||_1= \lim_{k \rightarrow \infty} ||u_{m_k}^*-f^*||_1 \leq \lim_{k \rightarrow \infty} ||u_{m_k}-f||_1 =0, \] \noindent that is, $f^*=u^*$. Now, polarizations are also non-expansive; then, since $m_{k+1}\geq m_k+1$, we have that \[||u_{m_{k+1}}-u^*||_1 \leq ||u_{m_{k}+1}-u^*||_1,\] \noindent on the other hand, by equation (\ref{kappa}), for any polarizer $H \in \mathbf{H_0}$ we have \[||u_{m_{k}+1}-u^*||_1 \leq ||u_{m_{k}}-u^*||_1 + \kappa(||u_{m_{k}}^H-u^*||_1-||u_{m_{k}}-u^*||_1)\] \[=(1-\kappa )||u_{m_{k}}-u^*||_1 + \kappa ||u_{m_{k}}^H-u^*||_1 \leq ||u_{m_{k}}-u^*||_1,\] \noindent since $\kappa \left(||u_{m_{k}}^H-u^*||_1 -||u_{m_{k}}-u^*||_1\right) \leq 0$. Hence, letting $k \rightarrow \infty$, we get \[||f-u^*||_1 \leq (1- \kappa) ||f-u^*||_1 + \kappa ||f^H -u^*||_1 \leq ||f-u^*||_1,\] \noindent that is \begin{equation} \label{A} ||f-u^*||_1=||f^H-u^*||_1. \end{equation} Now, since $f^*=u^*$, then $||f-f^*||_1=||f^H-f^*||_1$. So, we cannot have $f \neq u^*=f^*$, because then we would have $||f-f^*||_1>||f^{H_a}-f^*||_1$ for some $H_a$, by lemma \ref{closer}, which would contradict equation (\ref{A}). Then, we can only have that $\{u_{m_k}\}$ converges to $u^*$ in any $L^s$ norm. Finally, again by the non-expansiveness of polarizations, we note that, for any $s$, \[\lim_{m \rightarrow \infty} ||u_{m}-u^*||_s \leq \lim_{k \rightarrow \infty} ||u_{m_k}-u^*||_s = 0,\] \noindent as desired. \end{proof} Finally, because $C_{0+}(N)$ is dense in $L^s_+(N)$ ($1\leq s\leq \infty$), we show that the same results of lemma \ref{kappalemma} hold for functions in $L^s_+(N)$. \begin{lemma} \label{convergence} Let $u\in L^s(N)$ ($1\leq s <\infty$). For any Steiner symmetrization, there is a sequence of polarizers $\{H_m\} \subset \mathbf{H}_0$ such that the sequence $\{u_m\}=\{u^{H_1\ldots H_m}\}$ converges to $u^*$ in $L^s(N)$. \end{lemma} \begin{proof} First, we recall that there is a countable subset $V \subset C_0(N)$ that is dense in $L^s(N)$. Next, we choose a sequence $\{H_m\}$ for which (\ref{kappa}) holds for all $f \in V$. Then, we take any $f \in V$ sufficiently close to $u$, $||u-f||_s<\epsilon/3$. By the triangle inequality we have \[||u_m-u^*||_s\leq ||u_m-f_m||_s+||f_m-f^*||_s +||f^*-u^*||_s. \] \noindent It remains to show that the right hand side is bounded by $\epsilon$. First, by non-expansiveness of the polarization, we have that $||u_m - f_m||_s \leq ||u-f||_s$. Second, by non-expansiveness of the Steiner symmetrization we have $||f^*-u^*||_s\leq ||f-u||_s$. Then, since $f\in V\subset C_{0+}(N)$, choosing $m$ sufficiently large, we have $||f_m-f^*||_s<\epsilon/3$, and then \[||u_m-u^*||_s\leq ||u_m-f_m||_s+||f_m-f^*||_s +||f^*-u^*||_s< \epsilon,\] \noindent as desired. \end{proof} \subsection{Proof of Theorem \ref{8.2}} We are now in position to prove Theorem \ref{8.2} and conclude that $||\nabla u^*||_s \leq ||\nabla u||_s$. \begin{proof} (of Theorem \ref{8.2}) Let $u \in {L_1^s}_+(N)$, $1<s<\infty$, and consider the sequence $\{u_m\}$ of polarizations of $u$ given by lemma \ref{convergence}. Then \[\lim_{m \rightarrow \infty} ||u_{m}-u^*||_s=0.\] Also, $||\nabla u_m||_s=||\nabla u||_s$, by lemma \ref{gradlemma}, so that $\{u_m\}$ is bounded in $L_1^s(N)$. Then there exists some function $f \in L_1^s(N)$, and a subsequence $\{u_{m_k}\}$ of $\{u_m\}$, such that $f$ is the weak limit of $u_{m_k}$ in ${L_1^s}_+(N)$.
That is, for any compactly supported smooth function $\varphi \in C_0^{\infty}(N)$, \[\lim_{k\rightarrow \infty}\int_N \varphi \, \nabla u_{m_k}\, dV_g = \int_N \varphi\, \nabla f\, dV_g,\] \noindent and \[\lim_{k\rightarrow \infty}\int_N \varphi \, \nabla u_{m_k} \, dV_g= -\lim_{k\rightarrow \infty}\int_N \nabla \varphi\; u_{m_k}\, dV_g =-\int_N \nabla \varphi \; u^* \, dV_g.\] \noindent Of course, this means that $f=u^*$. Finally, we recall that for $1<s<\infty$ the $s$-norm is weakly lower semicontinuous; that is, since $\nabla u_{m_k} \rightharpoonup \nabla u^*$ weakly in $L^s(N)$, then \[||\nabla u^*||_s \leq \liminf_{k\rightarrow \infty}||\nabla u_{m_k}||_s, \] \noindent hence \[||\nabla u^*||_s \leq ||\nabla u||_s, \] \noindent since $||\nabla u_m||_s=||\nabla u||_s$ for any $m$ (lemma \ref{gradlemma}). \end{proof} \section{Proof of Theorem \ref{principal}} Let $(N,g)= (M^m\times \mathbf{R}^n,g_M+g_E)$, where $M^m$ is a closed manifold ($m\geq2$) with positive scalar curvature, and $g_E$ is the Euclidean metric. In this section we will prove the existence of a Yamabe minimizer for $(N,g)$. The basic scheme of the proof we give is the following. We first note that the subcritical Yamabe equation for $(N,g)$, \begin{equation} \label{subway} a\Delta u+ S_g u=\lambda_s u^{s-1}, \end{equation} \noindent where $S_g$ is the scalar curvature of $(N,g)$ and $a=\frac{4(n+m-1)}{n+m-2}$, can be solved for $s<p=\frac{2(n+m)}{n+m-2}$ by a positive $C^{\infty}$ function $u_s$. We achieve this by making use of the techniques of the Yamabe problem in the compact case (cf. \cite{Lee}), and those of the Concentration Compactness Principle of Lions, (\cite{Lions1}, \cite{Lions2}). We then find a uniform bound in $L^r(N,g)$ (for some $r>p$) for the family of solution functions $\{u_s\}$, for $s$ sufficiently close to $p$. Then, using standard regularity theory and the Sobolev Embedding Theorem, we note that the $\{u_s\}$ are $C^{2,\alpha}$ bounded in every compact subset $K_R=M\times B_R$ of $(N,g)$, and thus that $u_s \rightarrow u$ uniformly on every compact subset $K_R$ of $N$, by the Arzela-Ascoli Theorem. As a final step, we use again the techniques of the Concentration Compactness Principle to prove that $u_s \rightarrow u$ uniformly on all of $N$, where $u$ is a positive and $C^{\infty}$ function that solves the Yamabe equation. \subsection{The subcritical problem for $(N,g)$} In this section we will prove that the equation \begin{equation} \label{two} a\Delta u+ S_g u=\lambda_s u^{s-1}, \end{equation} \noindent has a positive smooth solution, $u_s$, for $s<p$ and $s$ sufficiently close to $p$. Let \begin{equation} \label{a} Q_s(\varphi)= \frac{\int_N{(a|\nabla \varphi|^2+S_g \varphi^2)dV_g}}{(\int_N{\varphi^s dV_g})^{2/s}}, \end{equation} \noindent and \begin{equation} \label{b} \lambda_s= \inf \{Q_s(\varphi) | \varphi \in C_0^{\infty}(N,g)\}. \end{equation} Now, fix $s<p$, and choose a minimizing sequence $\{u_i\}$ of functions in $C_0^{\infty}(N)$, such that $Q_s(u_{i}) \rightarrow \lambda_s$, and such that $||u_i||_s=1$, $\forall i$. We remark that, by Corollary \ref{radial}, we can choose a minimizing sequence such that $u_i=u_i^*$. Next, we note that \begin{equation} \label{firstbound} ||u_i||_{1,2}^2 \leq C_1, \end{equation} \noindent where $C_1$ is some constant, independent of $i$ and $s$. To prove (\ref{firstbound}), we start with the following. \begin{lemma} \label{4.3} Consider the set $\{\lambda_s\}$, $2\leq s\leq p$, with $\lambda_s$ as defined by equation (\ref{b}).
Then, $\lambda_s$ is upper semi-continuous at $p$, as a function of $s$ (for any $\epsilon>0$, there is some $\delta$ such that $\lambda_s\leq \lambda_p+\epsilon$, for all $s \in (p-\delta,p)$). \end{lemma} \begin{proof} Let $\varphi\in L_1^2(N)$. Given $s',s \leq p$, since \[Q_s(\varphi) = \frac{a\int_N|\nabla \varphi|^2 dV_g+ \int_N S_g \varphi^2 dV_g}{||\varphi||_s^2},\] \noindent then \begin{equation} \label{equal} Q_s(\varphi)=Q_{s'}(\varphi)\frac{||\varphi||_{s'}^2}{||\varphi||_s^2}. \end{equation} \noindent Now, since $\lambda_p$ is an infimum, given $\epsilon>0$ we may choose $\varphi_0$ such that \begin{equation} \label{moto} \lambda_p+\epsilon>Q_p(\varphi_0). \end{equation} \noindent On the other hand, by continuity of the norm, we have, for some $\delta>0$, \[1-\epsilon\leq \frac{||\varphi_0||_p^2}{||\varphi_0||_s^2}\leq 1+\epsilon,\] \noindent for all $s \in (p-\delta,p+\delta)$. Hence, \[\frac{||\varphi_0||_p^2}{||\varphi_0||_s^2} Q_p(\varphi_0) \leq Q_p(\varphi_0)(1+\epsilon),\] \noindent for all $s \in (p-\delta,p)$. Then, taking into account equation (\ref{equal}), we have \[Q_s(\varphi_0)\leq Q_p(\varphi_0)(1+\epsilon),\] \noindent and then, by (\ref{moto}) \[Q_s(\varphi_0)\leq Q_p(\varphi_0)(1+\epsilon)\leq (\lambda_p+\epsilon)(1+\epsilon). \] \noindent Finally, since $\lambda_s\leq Q_s(\varphi_0)$, we have \[\lambda_s < \lambda_p + C \epsilon+\epsilon^2,\] \noindent for all $s \in (p-\delta,p)$, with $C=\lambda_p+1$. \end{proof} \begin {rem} \label{AFP} It is a recent result of Akutagawa, Florit and Petean (Theorem 1.3 in \cite{Aku}) that $\lambda_p=Y(M\times \mathbf{R}^n,[g_M+g_E])<Y(S^{n+m},[g_0])=Y_{m+n}$, when $M$ is closed, of positive scalar curvature and $m\geq2$. Since the inequality is strict, we may choose $\epsilon>0$ small enough so that $\lambda_p+\epsilon<c<Y_{n+m}$, for some $c \in \mathbf{R}$. It then follows from Lemma \ref{4.3} that, for some $\epsilon>0$ small enough, there is some $\delta$, such that \[\lambda_s\leq \lambda_p+\epsilon<Y_{n+m},\] \noindent for every $s \in (p-\delta,p)$. That is \begin{equation} \label{asterisco} \frac{\lambda_s}{Y_{n+m}}<1, \end{equation} \noindent for $s$ close enough to $p$. \end{rem} We now go back to prove (\ref{firstbound}). Since, for $i$ large enough, $a\int_N|\nabla u_i|^2 dV_g+\int_N S_g u_i^2 dV_g = Q_s(u_i)\leq \lambda_s+1$ and $S_g>0$, we note that \[ ||u_i||_{1,2}^2= \int_N{|\nabla u_i|^2}+ \int_N{ u_i^2} \leq \frac{\lambda_s+1}{a}+ \frac{\lambda_s+1}{\min_M \{S_g\}}\] \[\leq \frac{Y_{n+m}+1}{a}+ \frac{Y_{n+m}+1}{\min_M \{S_g\}}= C_1, \] \noindent for $s$ close enough to $p$, by (\ref{asterisco}). That is, $\{u_i \}$ is $L_1^2$ bounded independently of $i$ and of $s$. It then follows from the Rellich-Kondrakov theorem (cf. \cite{Lee}) that for every compact $K\subset N$, there is some subsequence $\{u_{i_k}\} \subset \{u_i\}$ that converges weakly in $L_1^2(K)$ and strongly in $L^s(K)$ to a function that we will denote by $u_s|_K$. Consider now the compact subsets $K_R=M\times B_R \subset N$, and note that since $K_R\subset K_{R'}$, for $R<R'$, we have uniqueness of limits on each compact (because the convergence on $L^s(K_R)$ is strong for each $R$). Also, note that $N=\bigcup_{i=1}^{\infty} K_i$. Then, we have our limit function $u_s$, as a well-defined function on all of $N$, by taking $u_s=\lim_{R\rightarrow \infty} u_s|_{K_R}$.
Furthermore, on each $K_R$, by the weak convergence on $L_1^2(K_R)$, we have \[{||\nabla u_s|_{K_R} ||_2^2}\leq \lim_{k\rightarrow \infty}\int_{K_R} \left\langle \nabla u_s|_{K_R},\nabla u_{i_k}\right\rangle dV_g, \] \noindent and this implies that \[{||\nabla u_s|_{K_R} ||_2^2}\leq \limsup_{k\rightarrow \infty} {||(\nabla u_{i_k})|_{K_R} ||_2^2}.\] On the other hand, by the strong convergence of $u_{i_k}$ to $u_{s}|_{K_R}$ in $L^s(K_R)$, and by H\"{o}lder's inequality, we have \[\int_{K_R} u_s|_{K_R}^2 dV_g= \lim_{k\rightarrow \infty}\int_{K_R} u_{i_k}^2 dV_g,\] \noindent and so, it follows that \begin{equation} \label{Qs} \int_{K_R}(a|\nabla u_{s}|_{K_R}|^2+S_g u_{s}|_{K_R}^2)dV_g\leq \limsup_{k\rightarrow \infty} \int_{K_R}(a|\nabla u_{i_k}|^2+S_g u_{i_k}^2)dV_g. \end{equation} Hence, to prove that $u_s$ in fact minimizes $Q_s$ on $N$, it remains to show that $||u_s||_s=1$. For this purpose, we introduce in the following lemmas the techniques of the Concentration Compactness Principle, due to Lions \cite{Lions1}, \cite{Lions2}. \begin{lemma} \label{concentration} Consider a sequence $\{\rho_k\}$ of $C^{\infty}$, non-negative functions, such that $\rho_k=\rho_k^*$, and \[\int_N{\rho_k dV_g}=1.\] \noindent Then, there exists a subsequence $\{\rho_{k_j}\} \subset \{\rho_{k}\}$, and some $\alpha$ ($0 \leq \alpha\leq 1$), such that the following is satisfied: for all $\epsilon>0$, there exists some $R_{\epsilon}$ ($0<R_{\epsilon}<\infty$), and some $j_0>1$ such that \[ \int_{M\times B_{R_{\epsilon}}} \rho_{k_j}dV_g \geq \alpha -\epsilon, \] \noindent $\forall j>j_0$. Furthermore, for each $R>0$, given $\epsilon>0$, there is some $j_1>1$ such that \[\int_{M\times B_R} \rho_{k_j}dV_g \leq \alpha+\epsilon, \] \noindent $\forall j>j_1$. \end{lemma} \proof First note that since $\rho_k=\rho_k^*$ for each $k\geq1$, then, for each $R$ we have \[\sup_{y\in \mathbf{R}^n} \int_{M\times \{y+ B_R\}} \rho_k dV_g = \int_{M\times B_R} \rho_k dV_g,\] \noindent where $B_R$ is the ball of radius $R$ centered at $0$, and $y+B_R$ the ball of radius $R$ centered at $y$. Now, consider the functions \[Q_k(t)= \int_{M\times B_t} \rho_k dV_g. \] \noindent It follows that for each $k$, $0\leq Q_k(t)\leq 1$. Thus, the functions $Q_k(t)$ are non-negative and uniformly bounded in $\mathbf{R}^+$. Furthermore, since the $\rho_k$ are non-negative, the functions $Q_k(t)$ are non-decreasing as functions of $t$. \noindent It follows then, from Helly's selection theorem, that there is a subsequence $\{Q_{k_j}\} \subset \{Q_{k}\}$, and a non-negative function $Q(t)$, such that \[\lim_{j\rightarrow \infty}Q_{k_j}(t) = Q(t),\] \noindent for each $t\geq 0$. Now, let $\lim_{t \rightarrow \infty} Q(t)=\alpha$. We note that $0\leq \alpha \leq 1$. Also, since $Q(t)$ is non-decreasing, and $\lim_{t\rightarrow\infty} Q(t)= \alpha$, then, given $\epsilon>0$, we may choose some $t_{\epsilon}$ such that $Q(t_{\epsilon})>\alpha-\epsilon$. Of course this implies that \begin{equation} \label{diko} \int_{M\times B_{t_{\epsilon}}}\rho_{k_j} dV_g \geq \alpha-\epsilon, \end{equation} \noindent for all $j>j_0$, for $j_0$ large enough. Moreover, since $Q(t)$ is non-decreasing, for all $t>0$ we have $Q(t)\leq \alpha$. This implies that \begin{equation} \label{diko2} \int_{M\times B_t}\rho_{k_j} dV_g \leq \alpha+\epsilon, \end{equation} \noindent for all $j>j_1$, for $j_1$ large enough. \endproof We now show that given $\beta \in (2,p)$ the $u_k^{\beta}$ ``concentrate'' in a compact set.
We now show that, given $\beta \in (2,p)$, the functions $u_k^{\beta}$ ``concentrate'' in a compact set. \begin{lemma} \label{novnod} Consider a sequence $\{u_{k}^{b_k}\}$ of $C^{\infty}$, non-negative functions ($b_k>2$, $\forall k$), such that $u_k=u_k^*$, and \[\int_N{u_{k}^{b_k} dV_g}=1,\] for each $k$. Assume also that the sequence $\{u_{k}\}$ is bounded in $L_1^2(N)$. Then, there exists a subsequence $\{u_{k_j}\} \subset \{u_{k}\}$, such that for each $\beta$ ($\beta \in (2,p)$), we have that given $\epsilon>0$, there exists some $R_{\epsilon}$ ($0<R_{\epsilon}<\infty$), such that \[\int_{N\setminus(M\times B_{R_{\epsilon}})}{u_{k_j}^{\beta}}\, dV_g\leq \epsilon, \] $\forall j>j_0$, for some $j_0>1$. \end{lemma} \begin{proof} Take $\rho_k=u_k^{b_k}$. Then, by Lemma \ref{concentration}, we have a subsequence $\{u_{k_j}^{b_{k_j}}\}$ of $\{u_k^{b_k}\}$, and an $\alpha$, $0\leq\alpha\leq 1$, such that, for every $\epsilon>0$, there is some $R_{\epsilon}$, such that \begin{equation} \label{ok} \alpha-\epsilon/2 < \int_{M\times B_{R_{\epsilon}}}u_{j}^{b_{j}} dV_g < \alpha+\epsilon/2, \end{equation} for all $j>j_0$, for some $j_0$ (for simplicity, we will denote $u_{k_j}^{b_{k_j}}$ by $u_{j}^{b_{j}}$). Also, since for every $R>0$ we have $\int_{M\times B_R} \rho_{k_j}\, dV_g \leq \alpha+\epsilon/2$ (for $j>j_1$, $j_1$ large enough), it follows from (\ref{ok}) that for every compact $K\subset N \setminus (M\times B_{R_{\epsilon}})$, we have \[\int_{K}u_j^{b_j} dV_g <\epsilon,\] for all $j>j_1$. Now, we choose $R_0>0$ such that $\mathrm{Vol}(M\times B_{R_{0}})\leq 1$. Then, by H\"{o}lder's inequality, for any $y$ with $|y|\geq R_{\epsilon}+R_0$ (so that $M\times \{y+B_{R_0}\}$ does not meet $M\times B_{R_{\epsilon}}$), \[ \int_{M\times \{y + B_{R_0}\}} u_j^2 dV_g \leq \left(\int_{M\times \{y + B_{R_0}\}} u_j^{b_j} dV_g\right)^{2/b_j} < \epsilon^{2/b_j}\leq \epsilon^{2/p},\] where in the last step we assume $\epsilon\leq 1$ and recall that in all our applications $2<b_j\leq p$. Since $\epsilon^{2/p}\rightarrow 0$ as $\epsilon\rightarrow 0$, we may relabel $\epsilon^{2/p}$ as $\epsilon$; letting $R_1=R_{\epsilon}+2R_0$, we then have \begin{equation} \label{Lio} \sup_{y\in B_{R_1}^c} \int_{M\times \{y + B_{R_0}\}} u_j^2 dV_g <\epsilon. \end{equation} Of course, we can make $\epsilon \rightarrow 0$ by making $R_{\epsilon}$ (and thus $R_1$) go to infinity. We next divide the proof into cases. \textbf{Case 1}. The sequence $\{u_{k}\}$ is bounded in $L^{\infty}(N)$. Let $||u_k||_{\infty} <A_{\infty}$. Also, since $\{u_k\}$ is bounded in $L_1^2(N)$, let $A_{1,2}$ ($1<A_{1,2}<\infty$) be such that $||u_k||_{1,2}<A_{1,2}$. Then we have, for all $\beta_0>2$ and any $y\in B_{R_1}^c$, \begin{equation} \label{astro} \int_{M\times \{y + B_{R_0}\}} u_j^{\beta_0} dV_g \leq A_{\infty}^{\beta_0-2} \int_{M\times \{y + B_{R_0}\}} u_j^2 dV_g <A_{\infty}^{\beta_0-2} \epsilon, \end{equation} by (\ref{Lio}). Now, take $\bar\beta$ such that $\bar\beta>2$; of course, $2<2(\bar\beta-1)<\infty$. Then, by H\"{o}lder's inequality, for any given $y\in B_{R_1}^c$, \begin{equation} \label{Lio3} \int_{M\times \{y + B_{R_0}\}} u_j^{\bar\beta-1}|\nabla u_j| dV_g \leq \left(\int_{M\times \{y + B_{R_0}\}} u_j^{2(\bar\beta-1)}dV_g\right)^{\frac{1}{2}} \left(\int_{M\times \{y + B_{R_0}\}} |\nabla u_j|^2 dV_g \right)^{\frac{1}{2}} \leq (A_{\infty}^{(\bar\beta-2)}\epsilon^{1/2})(A_{1,2}), \end{equation} where the last inequality follows from (\ref{astro}) (applied with $\beta_0=2(\bar\beta-1)$) and the fact that $\int_{N} |\nabla u_j|^2 dV_g$ is uniformly bounded by $A_{1,2}$.
Then, by the Sobolev embedding applied to $u_j^{\bar\beta}$, for any $\gamma \in (1,\frac{m+n}{m+n-1})$ there is a constant $c_0$, independent of $y$, such that \[ \left( \int_{M\times \{y + B_{R_0}\}}{u_j^{\bar\beta \gamma}dV_g}\right)^{1/\gamma} \leq c_0 \int_{M\times \{y + B_{R_0}\}} \left({u_j^{\bar\beta }+|\nabla (u_j^{\bar\beta })|}\right) dV_g \leq c_0 \int_{M\times \{y + B_{R_0}\}} \left({u_j^{\bar\beta }+\bar\beta u_j^{\bar\beta -1} |\nabla u_j|}\right) dV_g,\] for any $y\in B_{R_1}^c$. That is, \[\int_{M\times \{y + B_{R_0}\}}{u_j^{\bar\beta \gamma}dV_g} \leq C_0 \left(\int_{M\times \{y + B_{R_0}\}} \left({u_j^{\bar\beta }+\bar\beta u_j^{\bar\beta -1} |\nabla u_j|}\right) dV_g\right)^{\gamma}\] \[ \leq C_0\left(A_{\infty}^{\bar\beta-2}\epsilon+\bar\beta A_{1,2}A_{\infty}^{(\bar\beta-2)}\epsilon^{1/2}\right)^{\gamma-1} \left(\int_{M\times \{y + B_{R_0}\}} \left({u_j^{\bar\beta }+\bar\beta u_j^{\bar\beta -1} |\nabla u_j|}\right) dV_g\right)\] \[\leq C_1 \epsilon^{(\gamma-1)/2} \left(\int_{M\times \{y + B_{R_0}\}} \left({u_j^{\bar\beta }+\bar\beta u_j^{\bar\beta -1} |\nabla u_j|}\right) dV_g\right),\] with $C_1=C_0 A_{\infty}^{(\bar{\beta}-2)(\gamma-1)}(1+\bar{\beta}A_{1,2})^{\gamma-1}$ and $C_0=c_0^{\gamma}$ (we assume $\epsilon\leq 1$, so that $\epsilon\leq\epsilon^{1/2}$). We then cover $\mathbf{R}^n\setminus B_{R_1}$ with balls of radius $R_0$ in such a way that no point $y \in \mathbf{R}^n\setminus B_{R_1}$ is covered by more than $m_0$ balls ($m_0$ a fixed integer depending only on $n$). It follows that \[\int_{N\setminus (M\times B_{R_1})}{u_j^{\bar\beta \gamma}dV_g} \leq m_0 C_1 \epsilon^{(\gamma-1)/2} \int_{N \setminus (M\times B_{R_1})} \left({u_j^{\bar\beta }+\bar\beta u_j^{\bar\beta -1} |\nabla u_j|}\right) dV_g \leq m_0 C_1 \epsilon^{(\gamma-1)/2} \left(A_{\infty}^{\bar\beta-2}A_{1,2}+\bar\beta A_{\infty}^{\bar\beta-2} A_{1,2}\right),\] that is, \begin{equation} \label{gamma} \int_{N\setminus (M\times B_{R_1})}{u_j^{\bar\beta \gamma}dV_g} \leq C_2\, \epsilon^{(\gamma-1)/2}, \end{equation} with $C_2=m_0 C_1 A_{\infty}^{\bar\beta-2}A_{1,2}(1+\bar\beta)$. Finally, by noting that $C_2$ does not depend on $j$ or on $\epsilon$, we can make $\epsilon\rightarrow 0$ by making $R_{\epsilon}$ (and thus $R_1$) go to infinity. That is, given $\beta \in (2,p)$ (choosing $\bar\beta>2$ and $\gamma$ so that $\bar\beta\gamma=\beta$), for every $\delta>0$ we may find $R_{\delta}$ such that \[ \int_{N\setminus (M\times B_{R_{\delta}})}{u_j^{\beta }dV_g} < \delta, \] for all $j$ large enough. This finishes the proof of Case 1. We now remove the assumption that $u_j$ is bounded in $L^{\infty}(N)$. \textbf{Case 2}. The sequence $\{u_j\}$ is not bounded in $L^{\infty}(N)$. We note that for any $A>1$, the function $v_j=\min\{u_j, A\}$ is bounded in $L^{\infty}(N)$ and still satisfies the conditions needed for the previous argument, so that for any $\beta_1$ ($2<\beta_1<p$), given $\epsilon>0$, we have, by equation (\ref{gamma}), some $R_{1}>0$ such that \begin{equation} \label{bone} \int_{N\setminus (M\times B_{R_1})} v_j^{\beta_1}\,dV_g < C_3 A^2 \epsilon, \end{equation} where $C_3$ is a constant that does not depend on $A$. We also have \begin{equation} \label{btwo} \int_{N\setminus (M\times B_{R_1})} u_j^{\beta_1} dV_g \leq \int_{N\setminus (M\times B_{R_1})} v_j^{\beta_1} dV_g+ \int_{N\setminus (M\times B_{R_1})} {(u_j|_{\{u_j>A\}})^{\beta_1} dV_g}. \end{equation} We next choose $\beta_2 \in(2,p)$ such that $\beta_2 >\beta_1$.
Then, since \[ {A^{\beta_2-\beta_1}} \int_{N\setminus (M\times B_{R_1})} (u_j|_{\{u_j>A\}})^{\beta_1} dV_g \leq \int_{N\setminus (M\times B_{R_1})} (u_j|_{\{u_j>A\}})^{\beta_2} dV_g,\] it follows that \begin{equation} \label{bthree} \int_{N\setminus (M\times B_{R_1})} (u_j|_{\{u_j>A\}})^{\beta_1} dV_g \leq \frac{K}{A^{\beta_2-\beta_1}}, \end{equation} since $\int_N u_j^{\beta_2} dV_g<K$ for some $K>0$ independent of $j$ (because $2<\beta_2<p$ and $\{u_j\}$ is bounded in $L_1^2(N)$, hence, by the Sobolev embedding and interpolation, in $L^{\beta_2}(N)$). Hence, from (\ref{bone}), (\ref{btwo}) and (\ref{bthree}), we have \begin{equation} \label{bfour} \int_{N\setminus (M\times B_{R_1})} u_j^{\beta_1}\,dV_g<C_3 A^2 \epsilon + \frac{K}{A^{\beta_2-\beta_1}}. \end{equation} Then, given $\delta>0$, we may first choose $A$ such that $\frac{K}{A^{\beta_2-\beta_1}}<\frac{\delta}{2}$, and then choose $\epsilon>0$ such that $C_3 A^2 \epsilon<\frac{\delta}{2}$. Of course, for this $\epsilon$ there is some $R_1$ such that $\int_{N\setminus (M\times B_{R_1})} v_j^{\beta_1}\,dV_g < C_3 A^2 \epsilon$, and then by equation (\ref{bfour}) we have \begin{equation} \label{bfive} \int_{N\setminus (M\times B_{R_1})} u_j^{\beta_1}\,dV_g<\delta. \end{equation} The conclusion of the lemma follows. \end{proof} We now go back to prove that $||u_s||_s=1$. Taking $b_k=s$, we note that the minimizing sequence $\{u_{k}\}$ satisfies the hypotheses of lemma \ref{novnod}, since in its construction we assumed that the minimizing sequence was symmetrized ($u_{k}=u_{k}^*$) and that $||u_{k}||_{s}=1$ for all $k$. On the other hand, equation (\ref{firstbound}) showed that $\{u_{k}\}$ is uniformly bounded in $L_1^2(N)$. Then, taking $\beta=s<p$ in lemma \ref{novnod}, we have that for every $\delta>0$ there is some $R_1$ such that \[ \int_{N\setminus (M\times B_{R_1})}{u_j^{s}dV_g} < \delta, \] for all $j$ large enough. Of course, this implies that $\alpha=1$; combined with the strong convergence $u_{i_k}\rightarrow u_s$ in $L^s(M\times B_R)$ for every $R$, this gives $||u_s||_s=1$. Then $u_s$ is a weak solution to equation (\ref{two}). It follows from a result of N. Trudinger (Theorem 3 in \cite{Tru}) that $u_s$ is smooth, since it is a weak solution of (\ref{subway}), and from the maximum principle (cf. \cite{Lee}) that $u_s$ is positive, since $S_g$ is positive. We summarize what we have just proved in the following lemma. \begin{lemma} \label{subcritical} For $s>2$ and close enough to $p$ (close enough so that equation (\ref{asterisco}) is satisfied), equation (\ref{two}) has a solution $u_s$, such that $Q_s(u_s)=\lambda_s$ and $||u_s||_s=1$. \end{lemma} \subsection{The limit as $s\rightarrow p$} We now investigate the limit of the functions $u_s$ as $s\rightarrow p$. We will show that the functions $u_s$ converge to a function $u$, which in turn will be the Yamabe minimizer for $(N,g)$. We will also show that $u$ is positive and $C^{\infty}$. By lemma \ref{subcritical}, we have a family $\{u_s\}$ of functions that solve equation (\ref{two}) and such that $||u_s||_s=1$. Next, we will prove that this family is uniformly $C^{2,\alpha}$ bounded on each compact set $M\times B_R\subset N$. We will achieve this by first finding a uniform bound for $||u_s||_r$ (for some $r>p$) and then, using standard elliptic regularity theory and the Sobolev Embedding Theorem, we will obtain our $C^{2,\alpha}$ bound. We follow the techniques of Lee and Parker \cite{Lee}. We begin by proving that the functions $u_s$ are uniformly bounded in $L^r(N)$, for some $r>p$, as $s\rightarrow p$.
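Before stating the result, we record the elementary identities that drive the choice of exponents in the Moser-type argument below: setting $w=u_s^{1+\delta}$ for $\delta>0$,
\[
\|w\|_2^2=\|u_s\|_{2(1+\delta)}^{2(1+\delta)}, \qquad \|w\|_p=\|u_s\|_{p(1+\delta)}^{1+\delta},
\]
so an estimate of the form $\|w\|_p^2\leq C\,\|w\|_2^2$, with $C$ independent of $s$, converts a uniform $L^{2(1+\delta)}$ bound on $u_s$ into a uniform $L^r$ bound with $r=p(1+\delta)>p$.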
\begin{prop} \label{4.4} Given the family of functions $\{u_s\}$ of lemma \ref{subcritical}, there are some constants $s_0<p$, $r>p$, and $C>0$, such that \[||u_s||_r \leq C,\] for all $s>s_0.$ \end{prop} \begin{proof} Consider the Yamabe subcritical equation (\ref{two}). Let $\delta>0$ and multiply (\ref{two}) by $u_s^{1+2\delta}$. Then, integrating over $N$, we have \begin{equation} \label{a1} a\int_{N} u_s^{1+2\delta} \Delta u_s dV_g +\int_{N} S_g u_s^{2+2\delta} dV_g = \int_{N} \lambda_s u_s^{s+2\delta} dV_g. \end{equation} Next, set $w=u_s^{1+\delta}$, so that $dw=(1+\delta)u_s^\delta\, du_s$ and $|dw|^2=(1+\delta)^2 u_s^{2\delta}|du_s|^2$. Using the ``integration by parts'' formula \[ \int_{N} \left\langle \nabla \varphi, \nabla \psi \right\rangle dV_g= \int_{N} \varphi \Delta \psi dV_g\] (cf. \cite{Lee}, page 42), we have \[\int_N u_s^{1+2\delta}\Delta u_s\, dV_g = \frac{1+2\delta}{(1+\delta)^2}\int_N |dw|^2\, dV_g,\] so that (\ref{a1}) becomes \[\frac{1+2\delta}{(1+\delta)^2}\, a \int_N |dw|^2 dV_g = \lambda_s \int_N u_s^{s-2}w^2 dV_g - \int_N S_g w^2dV_g.\] Since $S_g>0$, this gives \begin{equation} \label{b1} \int_{N}|dw|^2 dV_g \leq \frac{(1+\delta)^2}{1+2\delta} \frac{\lambda_s}{a} \int_N u_s^{s-2}w^2 dV_g. \end{equation} Now, since $(N,g)$ is a complete manifold with bounded sectional curvature and strictly positive injectivity radius, the Sobolev Embedding Theorem holds (cf. \cite{Au1}); that is, for any $\epsilon>0$, there is some $C_{\epsilon}$ such that \[||w||_p^2 \leq (1+\epsilon) \frac{a}{Y_{m+n}} \int_N |dw|^2 dV_g+ C_{\epsilon} \int_N w^2dV_g.\] Hence, by equation (\ref{b1}), \[||w||_p^2 \leq (1+\epsilon) \frac{(1+\delta)^2}{1+2\delta} \frac{\lambda_s}{Y_{m+n}} \int_N u_s^{s-2}w^2 dV_g + C'_{\epsilon} \int_N w^2dV_g,\] and so, by H\"{o}lder's inequality, \begin{equation} \label{jackson} ||w||_p^2 \leq (1+\epsilon) \frac{(1+\delta)^2}{1+2\delta} \frac{\lambda_s}{Y_{m+n}} ||u_s||_{(s-2)(m+n)/2}^{s-2}||w||_p^2 + C'_{\epsilon} ||w||_2^2. \end{equation} Now we recall that, by remark \ref{AFP}, there is some $\delta_1>0$ such that \[\frac{\lambda_s}{Y_{m+n}}<1,\] for all $s$ with $p-\delta_1 \leq s \leq p$. On the other hand, we note that if $p-\eta \leq s\leq p$ for some $\eta>0$, then \[0\leq s- \left((s-2)\frac{n+m}{2}\right) \leq \eta \left(\frac{n+m}{2}\right).\] Meanwhile, by continuity of the norm, given $\epsilon>0$ there is some $\delta_{\epsilon}>0$ such that \[ ||u_s||_{s'}\leq ||u_s||_s + \epsilon,\] if $|s-s'|\leq\delta_{\epsilon}$. Then, taking $\eta=\delta_2=\delta_{\epsilon}(\frac{2}{n+m})$, we have that for $s \in (p-\delta_2,p)$, \begin{equation} \label{s-2} ||u_s||_{(s-2)(n+m)/2}\leq ||u_s||_s + \epsilon=1+\epsilon, \end{equation} since $0\leq s-\left((s-2)(n+m)/2\right) \leq \delta_{\epsilon}$. Thus, in (\ref{jackson}), we can choose $\delta$ and $\epsilon$ small enough so that the coefficient of the first term is less than $1$ and hence can be absorbed by the left-hand side. We note that we need $s$ to be close enough to $p$ so that both (\ref{s-2}) and (\ref{asterisco}) are satisfied. We then have, from (\ref{jackson}), \begin{equation} \label{dobleu} ||w||_p^2\leq C||w||_2^2. \end{equation} Hence, to finish the proof, we only need to show that \[||w||_2^2=||u_s||_{2(1+\delta)}^{2(1+\delta)}\] is bounded independently of $s$. We proceed as follows. First, we divide the support of $u_s$ into $\Omega_s=u_s^{-1}((1,\infty))$ and $\Omega_s^c$.
Then we note that, since $||u_s||_s=1$, $\mathrm{Vol}(\Omega_s)\leq1$ independently of $s$, and hence, by H\"{o}lder's inequality (taking $\delta$ small enough that $2(1+\delta)\leq s$), \begin{equation} \label{amayuscula} \int_{\Omega_s}u_s^{2(1+\delta)}\,dV_g\leq \left(\int_{\Omega_s}u_s^{s}\,dV_g\right)^{2(1+\delta)/s} \mathrm{Vol}(\Omega_s)^{1-2(1+\delta)/s}\leq||u_s||_s^{2(1+\delta)}=1. \end{equation} Meanwhile, outside $\Omega_s$, since $u_s\leq1$, we have \[u_s^{2(1+\delta)}\leq u_s^2,\] and then \begin{equation} \label{bmayuscula} \int_{\Omega_s^c}u_s^{2(1+\delta)}\,dV_g\leq\int_{N}u_s^2\,dV_g<C_1, \end{equation} where $C_1$ is independent of $s$, by (\ref{firstbound}). It follows from (\ref{amayuscula}) and (\ref{bmayuscula}) that $||w||_2^2=||u_s||_{2(1+\delta)}^{2(1+\delta)}$ is uniformly bounded, and then, from (\ref{dobleu}), \[||w||_p=||u_s||_{p(1+\delta)}^{1+\delta}\] is bounded independently of $s$; that is, the proposition holds with $r=p(1+\delta)$. \end{proof} It follows from this $L^r$ bound that we may find a $C^{2,\alpha}$ bound for the family $\{u_s\}$ on each compact subset of $N$. \begin{lemma} \label{alphabound} For the family of solutions $\{u_s\}$ of proposition \ref{4.4}, which is uniformly bounded in $L^r(N)$, there is a uniform $C^{2,\alpha}$ bound on each compact $M\times B_R\subset N$. \end{lemma} \begin{proof} Consider any compact subset $M\times B_R\subset N$, and take $R_0, R_1, R_2$ ($R<R_0<R_1<R_2$) large enough. Of course, for any $r>0$, $Y(M\times B_r,g_M+g_E)\leq Y(M\times \mathbf{R}^n,g_M+g_E)<Y_{m+n}$. Now, since $u_s\in L^r(N)$ (proposition \ref{4.4}), by (\ref{subway}) we have \[|\Delta u_s|=\frac{1}{a}\left|\lambda_s u_s^{s-1}- S_g u_s\right| \in L^q(M\times B_{R_2}),\] with $q=\frac{r}{s-1}$. Then, by standard elliptic regularity theory (for example, Gilbarg and Trudinger \cite{Gil}), we have $u_s \in L_2^q(M\times B_{R_1})$. And then, from the Sobolev Embedding Theorem, $u_s \in L^{r'}(M\times B_{R_1})$, with $r'=\frac{(n+m)r}{(n+m)(s-1)-2r}$. Of course, $r'>r$, since $r>p=\frac{(n+m)(p-2)}{2}>\frac{(n+m)(s-2)}{2}$. By iterating this procedure we get $u_s\in L_2^q$ for all $q>1$. Then, again by the Sobolev Embedding Theorem, we have $u_s \in C^{\alpha}(M\times B_R)$ for some $\alpha>0$. Thus, using standard elliptic regularity theory one more time, we conclude that $u_s\in C^{2,\alpha}(M\times B_R)$, with all the bounds involved depending only on the uniform $L^r$ bound. This gives a uniform $C^{2,\alpha}$ bound on each compact subset $M\times B_R\subset N$. \end{proof} It now follows from the Arzela-Ascoli theorem that we can find a subsequence $\{u_{s_k}\}\subset\{u_s\}$ which converges on each compact subset of $(N,g)$; from this we construct a limit function $u$ defined on all of $N$. Then, using lemma \ref{novnod}, we will prove that $\lim_{k\rightarrow \infty} ||u_{s_k}||_p=1$. The limit function $u$ will then be a solution to the Yamabe equation, thus completing the proof of Theorem \ref{principal}. \begin{lemma} \label{finaldestination} Let $\{u_s\}$ be the family of functions given by lemma \ref{subcritical}. Then, as $s\rightarrow p$, there is a subsequence $\{u_{s_k}\}\subset\{u_s\}$ which converges to a positive, $C^{\infty}$ solution of \[a\Delta u+S_g u = \lambda u^{p-1},\] with \[||u||_p=1\] and \[Q_p(u)=Y(N,[g])=\lambda.\] \end{lemma} \begin{proof} By lemma \ref{alphabound}, the family $\{u_s\}$ is uniformly $C^{2,\alpha}$ bounded on each compact $M\times B_R \subset N$. Then, by the Arzela-Ascoli theorem (cf.
in \cite{Peter}), this implies that for each compact $K_R=M\times B_R \subset N$ there is a subsequence $\{u_{s_k}\} \subset \{u_s\}$ which converges in the $C^2(K_R)$ norm to a function in $C^2(K_R)$ that we will denote by $u|_{K_R}$. Then, since $K_R\subset K_{R'}$ for $R<R'$, we have uniqueness of limits on each compact set (because of the $C^2(K_R)$ convergence for each $R$). Also, since $N=\bigcup_{i=1}^{\infty} K_i$, we obtain, after passing to a diagonal subsequence, a limit function $u$ well defined on all of $N$, by taking $u=\lim_{R\rightarrow \infty} u|_{K_R}$. We now prove that $\lim_{k\rightarrow \infty} ||u_{s_k}||_{s_k}=||u||_p=1$. We use lemma \ref{novnod}, taking $b_k=s_k$. First, we note that the hypotheses are satisfied by $\{u_{s_k}\}$. We already know that $u_{s_k}=u_{s_k}^*$ and that $||u_{s_k}||_{s_k}=1$ for each $k$. On the other hand, equation (\ref{firstbound}) shows that the $u_{s_k}$ are uniformly bounded in $L_1^2(N)$. To prove that the $u_{s_k}$ are uniformly bounded in $L^{\infty}(N)$, consider the compact set $K_1=M\times \bar{B}_1$. We recall that $u_{s_k}\rightarrow u|_{K_1}$ on $K_1$ in the $C^2$ norm. Hence, for all $k>k_1$, with $k_1$ large enough, \[\sup_{K_1} u_{s_k}\leq (\sup_{K_1} u|_{K_1} )+1.\] Since $u_{s_k}=u_{s_k}^*$ for all $k$, we also know that \[\sup_N u_{s_k}\leq \sup_{K_1} u_{s_k}.\] Hence $\sup_N u_{s_k}\leq (\sup_{K_1} u|_{K_1})+1$, and the $u_{s_k}$ are uniformly bounded in $L^{\infty}(N)$ for all $k>k_1$. Now, let $\beta \in (2,p)$ and let $\epsilon>0$. Then, by lemma \ref{novnod}, there is some $R_{\epsilon}>0$ and some $k_2>1$ such that \begin{equation} \label{pre} \int_{N \setminus (M\times B_{R_{\epsilon}})} u_{s_k}^{\beta} dV_g <\epsilon \end{equation} for all $k>k_2$. On the other hand, since the $u_{s_k}$ are bounded uniformly in $L^{\infty}(N)$, say $u_{s_k}\leq A_{\infty}$ (for all $k>k_3$, for some $k_3>1$), we have \begin{equation} \label{okmaguey} \int_{N\setminus (M\times B_{R_{\epsilon}})} u_{s_k}^{s_k} dV_g\leq A_{\infty}^{s_k-\beta}\int_{N\setminus (M\times B_{R_{\epsilon}})} u_{s_k}^{\beta} dV_g \leq C_A \int_{N\setminus (M\times B_{R_{\epsilon}})} u_{s_k}^{\beta} dV_g \leq C_A \epsilon, \end{equation} where $C_A= \max \{1,A_{\infty}\}^{p-\beta}$ (and we have chosen $k$ large enough that $0<s_k-\beta\leq p-\beta$). The last inequality of (\ref{okmaguey}) is an application of (\ref{pre}). It follows from (\ref{okmaguey}) that no mass is lost at infinity, that is, $\alpha = 1$, and hence, using the $C^2$ convergence on compact sets, \[\lim_{k\rightarrow \infty} ||u_{s_k}||_{s_k} = ||u||_p= 1.\] This implies that the subsequence $\{u_{s_k}\}$ converges, in the $C^2$ norm on compact sets, to a solution $u \in C^2(N)$ of \[a\Delta u+S_g u = \lambda u^{p-1},\] with \[Q_p(u)=\lambda,\] where $\lambda=\lim_{k\rightarrow \infty} \lambda_{s_k}$. The following continuity lemma implies that $\lambda=\lambda_p=Y(N,[g])$. \begin{lemma} \label{lambdas} Consider the set $\{\lambda_s\}$ as defined by equation (\ref{b}); then $\lambda_s\rightarrow \lambda_p$ as $s\rightarrow p$.
\end{lemma} \begin{proof} Since $\lim_{k\rightarrow \infty} ||u_k||_p=1$ (writing $u_k$ for $u_{s_k}$), and recalling that $||u_k||_{s_k}=1$ and $Q_{s_k}(u_k)=\lambda_{s_k}$, we have, by (\ref{equal}), \[Q_p(u_k)= \frac{\lambda_{s_k}}{||u_k||_p^2}.\] Then, for $s_k$ close enough to $p$ (so that $||u_k||_p^2\geq (1+\epsilon)^{-1}$), \[\lambda_p\leq Q_p(u_k)=\frac{\lambda_{s_k}}{||u_k||_p^2}\leq \lambda_{s_k}(1+\epsilon)\leq \lambda_{s_k}+\epsilon Y_{n+m}, \] since $\lambda_{s_k}< Y_{n+m}$ for all $s_k\leq p$, by (\ref{asterisco}). We conclude, using lemma \ref{4.3}, that $\lambda_s\rightarrow \lambda_p$ as $s\rightarrow p$. \end{proof} Finally, the regularity of $u$ follows from a result of N. Trudinger (Theorem 3 in \cite{Tru}), since $u$ is an $L_1^2(N,g)$ solution of the Yamabe equation. On the other hand, since $S_g>0$ and $u$ is smooth, it follows from the maximum principle (cf. \cite{Lee}) that $u$ is positive. \end{proof} Of course, from lemma \ref{finaldestination}, Theorem \ref{principal} follows.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} \nomenclature[\a]{$\a$}{Radius of particle \verb+\a+} \nomenclature[\cpstar]{$\cpstar$}{Scaled polymer concentration \verb+\cpstar+} \nomenclature[B]{$B$}{Ratio of buoyant stress to compressive yield stress of gel \verb+B+} \nomenclature[\D]{$\D$}{Range of attractive potential \verb+\D+} \nomenclature[\delta rho]{$\Delta \rho$}{Density difference \verb+\Delta \rho+} \nomenclature[\etaL]{$\etaL$}{Low-shear limiting viscosity \verb+\etaL+} \nomenclature[g]{$g$}{Acceleration due to gravity \verb+g+} \nomenclature[h]{$h$}{Height of gel \verb+h+} \nomenclature[\hcrit]{$\hcrit$}{Critical height of gel \verb+\hcrit+} \nomenclature[\h0]{$\h0$}{Initial height of gel \verb+\h0+} \nomenclature[\k0]{$\k0$}{Initial permeability of gel \verb+\k0+} \nomenclature[\lcrit]{$\lcrit$}{Stress transmission length \verb+\lcrit+} \nomenclature[\lg]{$\lg$}{Mean width of strands \verb+\lg+} \nomenclature[M_{\mys{w}}]{$M_{\mys{w}}$}{Polymer molecular weight \verb+M_{\mys{w}}+} \nomenclature[\NA]{$\NA$}{Avogadro's constant \verb+\NA+} \nomenclature[\phicolloid]{$\phicolloid$}{Colloid volume fraction \verb+\phicolloid+} \nomenclature[\phic]{$\phic$}{Initial colloid volume fraction \verb+\phic+} \nomenclature[\rg]{$\rg$}{Polymer radius of gyration \verb+\rg+} \nomenclature[\sigcrit]{$\sigcrit$}{Critical stress for solid-liquid transition \verb+\sigcrit+} \nomenclature[\sigg]{$\sigg$}{Buoyant stress on gel \verb+\sigg+} \nomenclature[\sigyield]{$\sigyield$}{Compressive yield stress \verb+\sigyield+} \nomenclature[\sigma]{$\sigma$}{Stress variable \verb+\sigma+} \nomenclature[\taulife]{$\taulife$}{Lifetime of percolating network of gel \verb+\taulife+} \nomenclature[\tauesc]{$\tauesc$}{Kramers escape time \verb+\tauesc+} \nomenclature[\taubreak]{$\taubreak$}{Stress-dependent time for solid-liquid transition \verb+\taubreak+} \nomenclature[\taumean]{$\taumean$}{Mean delay time} \nomenclature[\tau0]{$\tau0$}{Characteristic diffusion time} \nomenclature[t]{$t$}{General time variable \verb+t+} \nomenclature[\tw]{$\tw$}{Waiting time \verb+\tw+} \nomenclature[\Uc]{$\Uc$}{Interparticle attractive energy at contact \verb+\Uc+} \nomenclature[\v0]{$\v0$}{Settling velocity of particle at infinite dilution \verb+\v0+} \nomenclature[\vs]{$\vs$}{Slow settling rate \verb+\vs+} \nomenclature[\vf]{$\vf$}{Fast settling rate \verb+\vf+} \nomenclature[\xi]{$\xi$}{Hydrodynamic resistance due to flow of solvent in gel \verb+\xi+} Colloidal gels are components of everyday products such as foodstuffs, fabric conditioners, cosmetics, shampoos, and even toothpaste, yet despite their practical importance they present many challenges to our understanding of disordered materials. A gel is a solid containing a network of particles which is formed when a colloidal dispersion with attractive interactions $\Uc$ is quenched deep into a two-phase region of phase space \cite{7444}. Driven far out of equilibrium, the kinetics of phase separation are dramatically slowed down or, in the limit of strong short-range attractions ($\D / \a \ll 0.1$, with $\D$ the range of the attractive interactions and $\a$ the particle radius), totally arrested. Partial phase separation generates a disordered network, whose initial structure is controlled by the strength of interaction $\Uc$, the range $\D / \a$ of the potential, as well as the volume fraction $\phicolloid$ of colloids.
The long-time structural integrity of this network is, in the majority of cases, constrained by the gravitational stress exerted by its own weight. Given sufficient time, a gel settles under gravity if it is not density-matched. We distinguish two limiting cases. In strong gels, where the attractive interactions are large in magnitude ($\Uc \gtrsim 20 \kBT$) and narrow in range ($\D / \a \ll 0.1$), compaction occurs smoothly at a rate which decreases progressively with time. The time dependence of the height of the gel in this case is well described by a poroelastic settling model \cite{5922,3708,11697}. By contrast, in weak gels, where attractions are comparable to $\kBT$ and wide in range ($\D / \a > 0.1$) so that thermal fluctuations are significant, an anomalous behaviour called {\it delayed collapse} is observed. A weak gel, instead of instantaneously settling, hesitates for a well-defined delay period $\taud$ without any sign of macroscopic settling, before suddenly undergoing a rapid and catastrophic collapse. Delayed collapse has been observed in a wide variety of systems \cite{5744,5926,Allain-1364,5382,2280,2308,2905,5390,5389,5237,5241,10832,12095}, so the response appears to be a universal feature of weak gels, yet to date no theoretical framework has emerged to account for this process. The existence of a measurable delay in the collapse of a weak colloidal gel looks, at first sight, rather surprising. In a crystalline solid, yield usually happens spontaneously at a well-defined stress and there is no latency before the material flows. Similarly, if a constant gravitational stress is applied to a free-flowing colloidal suspension, sedimentation occurs essentially instantaneously. The anomalous response evident in a weak gel has been interpreted as the signature of a non-equi\-librium solid-to-fluid transition, triggered by erosion of the gel network by internal flows \cite{5744,Allain-1364,2530,2308,5390,9091}, progressive fracture by an applied gravitational stress \cite{2280}, or as a consequence of thermal restructuring of the network \cite{5382,5237,12095}. The origins of delayed collapse are both scientifically fascinating and technologically relevant, because colloidal gels are often used to stabilize complex product formulations against macroscopic phase separation. The network of particles supports the gravitational stress exerted by the formulation and suppresses unwanted sedimentation of the product. This trick, widely used by formulators, works only for $\tw < \taud$ (where $\tw$ is the age of the gel), as the gel instability at $\taud$ eventually restores equilibrium and phase separation starts again. In such cases, $\taud$ fixes the ultimate physical shelf-life of the product. A demand for robust long-life formulations has heightened the need for a better microscopic understanding of the process of delayed collapse, so that gel instability may be predicted and controlled. Much of the work to date on gel collapse has focused on macroscopic features, typically by measuring the time evolution of the height $h(\tw)$ of a gel, rather than on the internal structure and dynamics of a gel. However, in the last few years new techniques such as confocal scanning microscopy \cite{5241,10832,12095} and photon correlation imaging \cite{11697} have revealed that colloidal gels have a complex hierarchical structure, with different structural features at different length scales.
So, while at the individual-particle level a gel consists of densely aggregated colloids, the aggregates are organized on the microscale into relatively thick strands of particles, which at the mesoscale are assembled into a percolating network able to transmit a stress. In a weak gel, the stress-bearing network has a number of distinctive characteristics. First, it is mechanically heterogeneous, with a complex structure consisting of weakly connected soft regions of low particle density coexisting with stiff dense strands of spheres. The strands of the gel may be several radii thick, depending on the strength and range of the attractive interactions. Second, the network is dynamic and restructures with time, as inter-particle bonds break and reform via thermal fluctuations. A microscopic understanding of just how such a spatially and temporally disordered network evolves in time and how it transmits stress is still elusive, yet just such an insight is essential for the prediction and control of delayed collapse. The goal of this paper is to summarize the key features of delayed collapse in weak depletion gels, to speculate on their origins, and to identify the key issues that still remain to be resolved. We briefly review the features of delayed collapse seen in the gravitational settling of weak gels. With a few recent exceptions, the majority of the previous work reported to date has focused on macroscopic features such as the time evolution of the height of a gel. We discuss the microscopic changes in the gel as it restructures by the breaking and reforming of inter-particle bonds. Using this experimental insight, we propose a new model for delayed collapse which emphasises the connectivity of the stress-transmitting network. Finally, we explore the role of an external applied force on the delay time of a gel and interpret the results in light of the new model. \section{Delayed collapse} \label{sec:collapse} \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{fig1.pdf} \caption{(color online). Delayed collapse in a colloidal gel. (a) Time evolution of the total height $h(\tw)$ of a depletion gel ($\Delta / \a = 0.62$, $\a = 316$ nm) with $\phicolloid = 0.21$ and different polymer concentrations $\cpstar$. (b) Probability distribution $N(\taud)$ for the delay time of a gel with $\Delta / \a = 0.45$, $\a = 272$ nm at $\phicolloid = 0.20$, $c_{\mys{p}} / c_{\mys{crit}} = 4$. The solid line shows a fit to a Gaussian. (c) Dependence of the mean delay time $\taumean$ on the polymer concentration for a gel with $\Delta / \a = 0.62$, $\a =316$ nm, $\phicolloid = 0.21$. The solid line depicts the estimated single bond lifetime $\tauesc$. } \label{fig:mockup} \end{figure} In this section, we recall the principal \textit{macroscopic} features of delayed collapse in weak gels. We focus on the time-dependent evolution of the total height $h(\tw)$ of a gel in a gravitational field, as a function of its age $\tw$. Typically a suspension is randomized by shaking or mixing at time $\tw = 0$ and then left undisturbed during the sedimentation process. The height $h(\tw)$ usually displays three regimes \cite{12095}: an initial lag period of duration $\taud$ where the height falls slowly but continuously with age, a non-linear regime of rapid collapse where the interface velocity $\mathrm{d}h / \mathrm{d} \tw$ speeds up with $\tw$, and finally a region of compaction where the height relaxes asymptotically towards an equilibrium value.
The three-stage nature of delayed collapse is exemplified by the data reproduced in Fig.~\ref{fig:mockup}(a) on the settling of depletion gels \cite{12095} with $\phicolloid = 0.21$ for polymer concentrations $\cpstar \in [2.4,4.8]$. We use two sizes of low-polydispersity surfactant-stabilized poly(dimethyl siloxane) (PDMS) emulsion droplets suspended in a refractive-index and near-density matched mixture of 1,2-ethane diol and water. The suspension of large droplets had a hydrodynamic radius of $\a =316 \pm 11$ nm and a polydispersity of 0.17, while the small droplets had a radius of $\a =272 \pm 10$ nm and a polydispersity of 0.18. A depletion attraction was induced by the addition of either the anionic polyelectrolyte xanthan ($M_{\mys{w}} = 4.66 \times 10^{6}$ g mol$^{-1}$, radius of gyration $\rg = 194$ nm) or neutral hydroxyethylcellulose ($M_{\mys{w}} = 1.3 \times 10^{6}$ g mol$^{-1}$, radius of gyration $\rg = 126$ nm), depending on the range of attractions required. The majority of experiments were conducted using a combination of large emulsion droplets and xanthan to give a colloid-polymer mixture with an attractive range of $\rg / \a = 0.62 \pm 0.04$ \cite{11551}. A small number of results were obtained using a colloid-polymer mixture with an attractive range of $\rg / \a = 0.45$ \cite{Zhang2013}, obtained by combining small emulsion droplets with hydroxyethylcellulose. The magnitude of the attractions between droplets was adjusted by varying the concentration $\cpstar$ of added polymer, which is expressed in units of the overlap concentration $\cp^{*}$. The relative buoyancy of the emulsion droplets is $\Delta \rho = \rho_{\mys{c}} - \rho_{\mys{s}} = -130$ kg m$^{-3}$, with $\rho_{\mys{c}}$ and $\rho_{\mys{s}}$ the densities of the droplet and solvent mixture respectively. This relatively small density difference explains the absence, evident in Fig.~\ref{fig:mockup}(a), of any discernible settling during the lag phase. This is in contrast to many of the earlier studies, where the initial stages of gel settling were often characterized by a relatively broad change in the height around $\tw = \taud$, as the gel was already settling slowly before the regime of rapid collapse started. This makes it more difficult to identify precisely the point where rapid collapse starts. In the PDMS system it is straightforward to determine the characteristic delay time $\taud$, which is taken as the time at which the height $h(\tw)$ of the gel first begins to noticeably drop from its initial value $\h0$. In this paper we concentrate on the physical and chemical factors that determine the duration $\taud$ of this initial lag period. Experiments reveal \cite{11551,12095} that the characteristic delay time $\taud$ has a number of distinctive characteristics: first, measurements of $\taud$ from samples of identical materials display a significant statistical variance \cite{2308}; and second, the average delay time $\taumean$ is very sensitive to the magnitude of the attractive interactions; {\it i.e.}, doubling the polymer concentration in a depletion system may increase the delay time by 1--2 orders of magnitude \cite{5926,2905,5237,9091,10832,12095}. Figures~\ref{fig:mockup}(b) and (c) confirm the generality of these conclusions in the PDMS system.
To establish the extent of statistical variations in the delay time, we performed 116 repeat measurements of the delay time on a depletion gel \cite{Zhang2013} with $\Delta / \a = 0.45$ ($\a = 272$ nm) in a temperature-controlled environment at $T = 24.7 \pm 0.1 \ensuremath{^{\circ}}\mathrm{C}$. Each of the runs was made on a freshly-prepared gel sample under identical conditions to ensure that individual runs were uncorrelated. The resulting values of $\taud$ were used to construct the probability distribution $N(\taud)$ of delay times shown in Fig.~\ref{fig:mockup}(b). An appreciable variation in the measured delay times is seen, with a scatter of about 14\%. The distribution $N(\taud)$ is symmetric and well fitted by a Gaussian distribution (solid line in Fig.~\ref{fig:mockup}(b)), which suggests that delayed collapse is a consequence of a large number of independent uncorrelated stochastic events. A clue to the nature of the stochastic events responsible for delayed collapse is revealed by the strong correlation evident in Fig.~\ref{fig:mockup}(c) between the mean delay time $\taumean$ and the lifetime $\tauesc$ of an individual colloid-colloid bond. To estimate the bond lifetime we assume that the rupture of individual bonds occurs primarily as a consequence of thermal fluctuations and that the gravitational stress (which, as noted above, is small in the PDMS system) does not significantly accelerate the rate of bond rupture. When no force is applied across the bond, the lifetime $\tauesc$ may then be expressed in terms of the average time it takes a Brownian particle to diffuse out of the attractive potential (the Kramers escape time). In the limit of $\Uc \gg \kBT$, the lifetime is given by the Arrhenius expression, $\tauesc = \tauzero \exp (\Uc / \kBT)$, where $\tauzero$ is a characteristic time which depends on the colloid diffusivity and the range and depth of the interaction potential. Estimating $\tauzero$ from the measured low-shear rheology, and the width $\Delta$ and depth $\Uc$ of the depletion potential from accurate generalised free-volume theories \cite{12095}, yields the lifetimes plotted in Fig.~\ref{fig:mockup}(c). As may be seen, the ratio of the two characteristic times is very nearly constant, for a wide range of polymer concentrations, with the delay time approximately 240 times the bond lifetime. The strong correlation evident in Fig.~\ref{fig:mockup}(c) highlights the pivotal role of spontaneous thermal fluctuations. The observation that delayed collapse occurs on timescales two orders of magnitude larger than the bond rupture time indicates that a microscopic particle-level model is inadequate to account for collapse. The large difference between $\taumean$ and $\tauesc$ highlights the crucial role played by the hierarchical structure of a gel. The characteristic ratio $\taumean / \tauesc$ is also likely to depend on the initial volume fraction $\phi$ of the gel, but this dependence has, to date, been little studied experimentally.
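To make these magnitudes concrete, the short Python sketch below evaluates the Arrhenius estimate $\tauesc = \tauzero \exp (\Uc / \kBT)$ together with the empirical scaling $\taumean \approx 240\, \tauesc$ quoted above; the values of $\tauzero$ and of the well depths used are illustrative placeholders rather than the fitted parameters of the PDMS system.

\begin{verbatim}
import numpy as np

# Arrhenius estimate of the single-bond (Kramers) lifetime,
# tau_esc = tau_0 * exp(Uc / kBT), and the empirical scaling
# tau_delay ~ 240 * tau_esc quoted in the text. tau_0 and the
# well depths below are illustrative values, not fitted ones.
tau_0 = 1.0e-2        # characteristic diffusion time (s), assumed
ratio = 240.0         # empirical tau_delay / tau_esc from Fig. 1(c)

for Uc in (8.0, 10.0, 12.0):              # well depth in units of kBT
    tau_esc = tau_0 * np.exp(Uc)          # Kramers escape time (s)
    tau_delay = ratio * tau_esc           # predicted mean delay time (s)
    print(f"Uc = {Uc:4.1f} kBT: tau_esc = {tau_esc:8.1f} s, "
          f"tau_delay = {tau_delay:9.1f} s")
\end{verbatim}

The exponential dependence on $\Uc$ makes clear why a modest change in polymer concentration, and hence in the depth of the depletion potential, can shift the delay time by one to two orders of magnitude.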
The delay time, in addition to its sensitivity to the interaction potential and the particle volume fraction $\phi$, has also been reported to depend on physical factors such as the height, width, and the cross-sectional shape of the sample cell \cite{5744,2308}. Starrs \emph{et al.}, for instance, observed that weak depletion gels constructed from poly(methyl methacrylate) spheres with initial heights $\h0$ above 7 mm showed a height-independent delay time \cite{2308}. However, in one sample with $\h0 = 5$ mm there was a marked increase in the delay time of about 70\%. Evans and Starrs \cite{2530} interpreted these observations as evidence for the existence in a gel of a new macroscopic length scale $\lcrit$, which they termed a `stress transmission length'. They argued that samples whose height $\h0$ was greater than $\lcrit$ could not transmit the gravitational stress to the base of the sample, so that no properties of the gel could be a function of $\h0$. Aside from this study, the effect of height on gel collapse has not been systematically studied, although Kim \emph{et al.} \cite{5243} and others have noted a qualitative change in settling, with short samples generally showing a steady or `creeping' sedimentation while delayed collapse seemed only to be shown by tall samples. Interpretation of these observations is complicated by the relatively small range of initial heights used, but they suggest that collapse phenomena might show a rich dependence on the initial height. To clarify the effect of height on collapse, we used two complementary measurement techniques to probe the delay process in the PDMS system over a wide range of initial heights. For relatively tall samples ($\h0 \gtrsim 5$ mm) a CCD camera was used to measure accurately the time dependence of the height, $h(\tw)$, of the gel with a spatial resolution of about 0.5 mm. In short samples the collapse kinetics was followed by confocal scanning laser microscopy. The gels were repeatedly imaged by scanning a large number ($\sim 100$) of slices perpendicular to the gravitational field at different time intervals (from minutes to days). The fluorescence intensity in each slice was integrated to give a height profile. Both the confocal and CCD experiments were performed in cylindrical glass cells with a constant width of 17 mm to eliminate the effects of width and cell geometry. The delay time, $\taud$, for gels with different polymer concentrations $\cpstar$ is shown in Fig.~\ref{fig:height-delay} as a function of the initial height, $\h0$. Fig.~\ref{fig:height-delay} indicates convincingly, at least for this system, that the time lag before the initiation of the rapid collapse process is independent of the initial height of the sample. These observations confirm the conclusions of the more limited set of height data ($\h0 > 20$ mm) reported in \cite{12095}. The independence of $\taud$ from $\h0$ is particularly striking because the initial height of the gel was varied over almost two orders of magnitude, from 0.78 mm to 63 mm. This behaviour is in contrast with the more limited data reported by Starrs \textit{et al.} \cite{2308}, who observed a height-dependent delay. More experiments will clearly be needed to fully elucidate the general picture, but the data in Fig.~\ref{fig:height-delay} suggest that a dependence on height, at least, is probably not a universal feature of delayed collapse. \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\textwidth]{fig2.pdf} \caption{(color online). Dependence of the delay time on the initial height of a gel.
The data are for a depletion gel with $\Delta /\a = 0.62$, $\a =316$ nm, $\phicolloid = 0.21$ and four different polymer concentrations.} \label{fig:height-delay} \end{center} \end{figure} \section{Microscopic picture of gel restructuring} \label{sec:microstructure} During the delay period, although there is no loss of overall mechanical integrity, the gel continuously restructures as thermal fluctuations favour the breaking of existing particle bonds and the formation of new ones. To understand the consequences of thermal rearrangements for the hierarchical structure of a gel, we have used fluorescence confocal microscopy on a labelled PDMS gel to directly follow the time evolution of the network in real space \cite{12095}. We characterize how the microstructure changes by determining two characteristic length scales, the correlation length $\R$ and the mean chord length $\lg$, which are illustrated in Fig.~\ref{fig:gel}. A series of two-dimensional confocal images was recorded, at a fixed height within the gel, as a function of the sample age $\tw$. To characterise the large-scale fluctuations in the particle density seen in the confocal images, the structure factor $S(q)$ was calculated from a 2D Fourier transform, and the characteristic wavenumber of the intensity peak was evaluated as the first moment $q = \int q\, S(q)\, \mathrm{d}q / \int S(q)\, \mathrm{d}q$ (a minimal numerical sketch of this procedure is given below). The existence of a peak in the scattering intensity $S(q)$ at small scattering vectors is a ubiquitous feature of colloidal aggregation \cite{Cipelletti-1517}. The peak signifies the presence of a correlation length $\R \sim \pi / q$. In the dilute particle limit $\phicolloid \rightarrow 0$, this characteristic length scale arises from the presence of close-packed fractal clusters of size $\R$, while at higher concentrations it may be more accurately interpreted as a measure of the heterogeneity or mesh size of the gel. To quantify changes in the \textit{local} microstructure, the mean chord (or intercept) length $\lg$ was measured. $\lg$ is the average extent of a randomly orientated line segment which lies entirely within the particle-rich portion of the gel. Since the average is dominated by orientations perpendicular to the strands of particles, $\lg$ is a convenient experimental measure of the strand diameter. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.6\textwidth]{fig3.pdf} \caption{(color online). Schematic of a gel illustrating the two length scales, $\lg$ and $\R$, which characterize the disordered structure. The mean chord length $\lg$ is a measure of the average diameter of the particle strands while the correlation length $\R$ is a typical size of the pores within the framework of the gel.}% \label{fig:gel} \end{center} \end{figure}
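The Python fragment below is a minimal sketch of the $S(q)$ analysis described above (the function name \verb+characteristic_wavenumber+ and the synthetic input are ours); it omits the windowing, detrending and noise corrections that would be applied to real confocal data.

\begin{verbatim}
import numpy as np

def characteristic_wavenumber(image, pixel_size):
    """Radially averaged structure factor S(q) of a square 2D slice
    and the first-moment wavenumber q* = int q S(q) dq / int S(q) dq."""
    field = image - image.mean()                     # density fluctuations
    sq2d = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    n = image.shape[0]
    qax = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))
    qx, qy = np.meshgrid(qax, qax)
    qr = np.hypot(qx, qy).ravel()
    nbins = n // 2
    edges = np.linspace(0.0, qr.max(), nbins + 1)
    keep = qr > 0                                    # drop the q = 0 pixel
    idx = np.clip(np.digitize(qr[keep], edges) - 1, 0, nbins - 1)
    sums = np.bincount(idx, weights=sq2d.ravel()[keep], minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    sq = sums / np.maximum(counts, 1)                # radial average S(q)
    q = 0.5 * (edges[1:] + edges[:-1])               # bin centres
    qstar = (q * sq).sum() / sq.sum()                # first moment of S(q)
    return q, sq, qstar

# Example on synthetic noise; for real data pass a confocal slice and
# the pixel size in microns. The text estimates xi ~ pi / q*.
img = np.random.default_rng(0).random((256, 256))
q, sq, qstar = characteristic_wavenumber(img, pixel_size=0.2)
print("correlation length estimate xi ~", np.pi / qstar, "microns")
\end{verbatim}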
The interparticle processes which occur as a gel ages are illustrated by the confocal measurements summarized in Figures~\ref{fgr:ageing}(a) and (b). The PDMS gel has a hierarchical structure with two clearly separable length scales. The confocal images reveal that at the microscale the attractive particles aggregate into relatively thick gel strands which, for the PDMS gel, are between 3 and 8 diameters in width. The widths of the strands are observed to depend on the age of the gel and the strength of the attractive potential. At the next level of organization, the strands form on the mesoscale a percolating network with a correlation length $\R$ which ranges between 50 and 100 particle radii in extent for the PDMS system. Both the correlation length $\R$ and the strand thickness $\lg$ grow continuously with the age $\tw$ of the gel and show no sign of the dynamical arrest transition expected for a strong gel. The rate of coarsening of the gel network is observed to depend sensitively on the strength of the attractive interactions $\Uc$. The more attractive the interparticle potential at contact, the slower is the observed rate of coarsening \cite{11551}. The picture which emerges, of a network which grows progressively coarser and thicker with time, implies that the shear elasticity $G'$ should also increase with $\tw$. This conjecture is confirmed by the rheological measurements shown in Figure~\ref{fig:rheology}, where $G'$ increases with $\tw$ prior to collapse. This however leads to an apparent contradiction: why, if the gel network is actually getting stiffer with time, does it ever collapse under a gravitational stress? One possibility is that it is not the average microstructure which is important but the connectivity of the gel, and in particular how the ability of the network to transmit stress evolves with time. In this scenario, collapse occurs as ageing reduces the connectivity and increases the fragility of the network. In a network with few connections, a bond-breaking event results in large-scale cooperative displacements, far away from the event that caused it. The response is highly non-local. A relatively small number of independent bond-breaking events, distributed randomly throughout such a fragile network, would then trigger a catastrophic macroscopic failure of the gel. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.7\textwidth]{fig4.pdf} \caption{(color online). Elastic $G'$ and loss $G''$ moduli of a weak gel ($\Delta /\a = 0.62$, $\a = 316$ nm, $\phicolloid = 0.21$, $\cpstar = 4.0$) as a function of time elapsed since preparation $\tw$. The mechanical properties were measured by applying an oscillatory stress of 0.0025 Pa at a frequency of 0.5 Hz. }% \label{fig:rheology} \end{center} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{fig5.pdf} \caption{(color online). Restructuring of a thermal gel ($\Delta /\a = 0.62$, $\a = 316$ nm, $\phicolloid = 0.21$). Time evolution of (a) the correlation length $\R$, and (b) the mean strand thickness $\lg$, for the indicated polymer concentrations.} \label{fgr:ageing} \end{figure} Support for the importance of connectivity comes from confocal microscope studies \cite{12095}. A relatively large random region within a PDMS gel was imaged hourly, for a total of 32 hours. Comparing consecutive images, the numbers of discrete strand association and dissociation events occurring per hour were identified. The corresponding rates of strand association $K_{\mys{A}}$ and strand dissociation $K_{\mys{D}}$ are plotted in Figure~\ref{fig:aging-events} as a function of the age $\tw$ of the gel. The data reveal a number of distinctive features. First, while the dissociation rate seems to be essentially unaffected by the age of the gel, the rate of association drops rapidly with increasing $\tw$. The two time dependences reflect the radically different physical origins of the processes involved. Dissociation is an activated process which requires particles to escape from a deep potential well and so is not expected to depend on the age of the sample.
The decline in the rate of association with $\tw$, by contrast, reflects the progressive slowing down of the microscopic dynamics frequently seen in soft glassy systems as they evolve towards a more homogeneous state \cite{4213}. The second distinctive feature of Figure~\ref{fig:aging-events} is the dominance at early times of association over dissociation events. The data reveal that dissociation is a relatively rare event and that a strand, once broken, will almost immediately be reformed by an association event, so that the connectivity of the network changes only very slowly with $\tw$. The network is essentially self-healing and reforms rapidly after any deformation. At longer times, the character of the network changes as the rates of association and dissociation become comparable. At this point the connectivity of the network drops rapidly with increasing age. The degradation of the network continues to a point at which a small number of uncorrelated dissociation events results in a rapid macroscopic failure of the gel. In support of these ideas, we find that the time at which $K_{\mys{A}} \approx K_{\mys{D}}$ correlates well with the macroscopic delay time $\taud$. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.65\textwidth]{fig6.pdf} \caption{(color online). The number of association $K_{\mys{A}}$ and dissociation $K_{\mys{D}}$ events per hour as a function of the age $\tw$ of a restructuring gel ($\Delta /\a = 0.62$, $\a = 316$ nm, $\phicolloid = 0.21$, $\cpstar = 2.4$). The lines are guides to the eye. Delayed collapse occurs at $\taud \sim 120 \times 10^{3}$ s.}% \label{fig:aging-events} \end{center} \end{figure} \section{Microscopic model of collapse} \label{sec:model} We now propose a microscopic model which connects the kinetics of bond-breaking to gel collapse, and use it to predict the effect of a small applied force on the time $\taud$ for gravitational failure. Our approach is based on the recent work of Lindstr\"{o}m \emph{et al.} \cite{lindstrom_structures_2012}, but modified to analyse a weak, thermally fluctuating gel. We begin by proposing that the delay time $\taud$ is determined by the timescale at which the rate of strand association becomes comparable to the rate of strand dissociation. On this timescale the subsequent loss of connectivity of the network and the eventual failure of the gel are rapid by comparison, and so can safely be neglected. Motivated by the experimental data presented above, we assume that the rate $K_{\mys{D}}$ at which a gel strand breaks is time-independent, while $K_{\mys{A}}$, the rate at which the strands of the network associate and re-form, depends on the age of the gel. While the quench history of the gel and its chemistry will, in general, have a significant impact on the time dependence of $K_{\mys{A}}(\tw)$, in the vicinity of the collapse point the data in Fig.~\ref{fig:aging-events} suggest that $K_{\mys{A}}(\tw)$ may be approximated by a simple linear function of gradient $\mathrm{d} K_{\mys{A}}(\tw) / \mathrm{d}\tw = -\alpha$, where $\alpha$ is a positive constant. When a small perturbative force $f$ (which is defined here to be compressive if $f > 0$) is applied across a weak attractive particle-particle bond, the activation energy for bond dissociation is increased by an amount $f \Delta$, where $\Delta$ is the width of the attractive potential \cite{2161}, as illustrated in Fig.~\ref{fig:potential}(a).
The corresponding bond dissociation rate changes to \begin{equation} k = \frac{1}{\tauesc} \exp \left ( -\frac{f} {f_{\mathrm{th}}} \right ) \end{equation} where $\tauesc$ is the bond lifetime when no force is present and $f_{\mathrm{th}}= \kBT/\Delta$ is the characteristic force for thermal bond rupture. The effect of a weak point force on the gel is more complex, however, because the failure of the gel is dictated by the rate of strand dissociation $K_\mys{D}$ which, because a strand is several particles thick, could be significantly slower than the dissociation rate of an individual particle bond. Lindstr\"{o}m \emph{et al.} \cite{lindstrom_structures_2012} have derived expressions for the cooperative dissociation rate of a strand of $n$ particles in different dynamical regimes, when either association or dissociation events dominate. The experimental data in Fig.~\ref{fig:aging-events} suggest that, in our system, the number of association events is roughly comparable to the number of dissociation events, with the ratio $K_\mys{A} / K_\mys{D}$ becoming no larger than about two even for $\tw \rightarrow 0$. In this case, the analysis \cite{lindstrom_structures_2012} of Lindstr\"{o}m \emph{et al.} suggests that the dissociation rate of a strand with many bonds in its cross-section should be almost the same as the dissociation rate of the individual bonds that make up the strand, so that \begin{equation}\label{eq:strand} K_\mys{D}(f) = K_\mys{D}(0) \cdotp \exp \left ( - \frac{f} {f_{\mathrm{th}}} \right ) \end{equation} where $K_\mys{D}(0)$ is the dissociation rate of the strand in the absence of any applied force. Within this model, applying a counteracting force to the gel will reduce the rate at which the strands of the network break up, postponing the approach to the instability condition $K_\mys{A} \approx K_\mys{D}$, and consequently increasing the delay time $\taud$. If a force $f$ is applied across each particle bond, collapse occurs after an interval $\taud$, which is the solution of the implicit equation $K_{\mys{A}}(\tw = \taud) = K_{\mys{D}}(f)$. To solve this equation, we recognize that Fig.~\ref{fig:aging-events} suggests that the dissociation rate is time-independent while the association rate varies approximately linearly with the elapsed time, \begin{equation} K_\mys{A}(\tw) = K_\mys{A}^{0} - \alpha \tw. \end{equation} Here $K_\mys{A}^{0}$ is the initial ($\tw \rightarrow 0$) rate of association. An expression for the dependence of the delay time $\taud(f)$ on the applied force $f$, \begin{equation}\label{eqn:model} \frac{\taud(f)}{ \taud(0)} = 1 + \frac{K_\mys{D}(0)}{K_\mys{A}^{0} - K_\mys{D}(0)} \left [ 1 - \exp \left ( - \frac{f} {f_{\mathrm{th}}} \right ) \right ], \end{equation} follows directly from the collapse condition, $K_{\mys{A}}(\taud) = K_{\mys{D}}(f)$, and Eq.~\ref{eq:strand}. Here $\taud(0) = (K_\mys{A}^{0}-K_\mys{D}(0))/\alpha$ is the spontaneous delay time in the absence of an external force. Eq.~\ref{eqn:model} predicts that the delay time $\taud$ remains close to $\taud(0)$ for weak forces ($f \ll f_{\mathrm{th}}$). Increasing $f$ results in an increase in $\taud$, which becomes particularly marked when the applied force $f$ approaches the characteristic force $f_{\mathrm{th}}$ of thermal bond rupture. For still larger forces, the delay time saturates at a plateau of $K_\mys{A}^{0}/\alpha$, the largest delay time permitted by our model. In this regime, where $f \gg f_{\mathrm{th}}$, the lifetime of the gel is extended so much that the rate of strand association approaches zero at $\tw = \taud$, so that any subsequent dissociation event causes an immediate failure.
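Since the collapse condition is an explicit equation, it is straightforward to evaluate numerically. The Python sketch below solves $K_{\mys{A}}(\taud) = K_{\mys{D}}(f)$ for the rate laws above; the rate parameters are illustrative placeholders (chosen so that $K_\mys{D}(0)/(K_\mys{A}^{0} - K_\mys{D}(0)) = 1$, of the order suggested by Fig.~\ref{fig:aging-events}), not fitted values.

\begin{verbatim}
import numpy as np

# Collapse criterion K_A(t_w) = K_D(f), with K_A(t_w) = K_A0 - alpha*t_w
# and K_D(f) = K_D0 * exp(-f / f_th). All parameter values below are
# illustrative placeholders, not fitted quantities.
K_A0 = 8.0     # initial strand association rate (events per hour)
K_D0 = 4.0     # strand dissociation rate, taken age-independent
alpha = 0.05   # rate of decline of K_A (events per hour^2)
f_th = 0.02    # thermal rupture force kBT / Delta (pN)

def delay_time(f):
    """Delay time tau_d(f) solving K_A(tau_d) = K_D(f), in hours."""
    return (K_A0 - K_D0 * np.exp(-f / f_th)) / alpha

tau0 = delay_time(0.0)                      # spontaneous delay time
for f in (0.0, 0.5 * f_th, f_th, 5 * f_th, 50 * f_th):
    print(f"f = {f:5.2f} pN: tau_d/tau_d(0) = {delay_time(f)/tau0:5.3f}")
# tau_d saturates at K_A0/alpha (here 160 h) as f -> infinity, i.e. at
# (1 + delta)*tau_d(0) with delta = K_D0/(K_A0 - K_D0) = 1.
\end{verbatim}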
\section{Effect of an external force} \label{sec:stress} To experimentally check these predictions, we added a small number of monodisperse super-paramagnetic beads, composed of a cross-linked polystyrene with embedded maghemite nanoparticles ($\gamma$-Fe$_{2}$O$_{3}$) (Invitrogen, Dynabeads M-270, radius 1.4 $\mu$m), to a weak depletion gel ($\Delta / \a = 0.62 \pm 0.04$, $\a = 316\pm11$ nm, $\phicolloid = 0.21$, $\cpstar = 2.4$). Microscopy confirmed that the super-paramagnetic beads were uniformly dispersed throughout the network of the gel. The number density of magnetic beads was estimated as $4.8 \times 10^{14}$ m$^{-3}$, significantly less than the number density $1.5 \times 10^{18}$ m$^{-3}$ of droplets in the gelling system, so the super-paramagnetic beads constitute a trace component. The gel was mounted in a magnetic field gradient $\partial B/\partial z$ generated by a pair of strong permanent NdFeB magnets mounted in a purpose-built aluminium housing \cite{Lin2012}. The strength of the field gradient, and consequently the force applied to each super-paramagnetic bead, was adjusted by controlling the vertical spacing between the gel and the magnet assembly. The mean spacing $l$ between the magnetic beads in the network was estimated as 17 $\mu$m, which is larger than the correlation length $\R$ of the gel, so the forces applied to the gel network by the dilute dispersion of super-paramagnetic particles are expected to be independent of each other and uncorrelated. The force applied to each magnetic tracer particle was calculated from measurements of the in-situ magnetic field $B$, using a Hall-probe magnetometer, and literature estimates \cite{xu_simultaneous_2012} of the intrinsic magnetization properties of the magnetic beads. Figure~\ref{fig:potential}(b) shows measurements of the delay time of a gel as a function of the force $F$ applied by randomly dispersed super-paramagnetic beads, fixed within the gel network. The delay time increases with $F$ because the external magnetic field $B$ was orientated so that the force generated on the gel by the magnetic particles opposes the buoyancy forces on the gel. The dependence of $\taud$ on $F$ apparent in Figure~\ref{fig:potential}(b) is in qualitative agreement with the predictions of the microscopic model outlined above, with a significant increase in the delay time being observed for applied forces of $F \approx 1$ pN. At $F \gtrsim 5$ pN, however, the magnetic particles were observed to be stripped from the gel, as the local yield stress was exceeded and the gel broke. As a result, the high-field plateau predicted for the delay time $\taud$ as a function of $F$ (see Section~\ref{sec:model}) could not be detected. To check quantitatively the microscopic model of Section~\ref{sec:model}, we first note that the force $f$ between particles will be proportional to the applied magnetic force $F$. The constant of proportionality will, however, depend on the characteristic way in which force propagates through the heterogeneous particle network of a gel. By analogy with granular materials \cite{majmudar2005contact}, we expect the transmission of force to be highly non-linear.
Forces will propagate predominantly along chains of neighbouring particles while large areas of the surrounding gel remain nearly force-free. If we assume that the contact forces produced by a point force $F$ are concentrated along a linear chain of $N_{\mys{c}}$ particles then, in the absence of end effects, the average force on each pair of particles should be $F/N_{\mys{c}}$. In this case, Eq.~\ref{eqn:model} may be re-expressed as \begin{equation}\label{eqn:model-fit} \frac{\taud(F)}{ \taud(0)} = 1 + \delta \left [ 1 - \exp \left ( - \frac{F} {F_{0}} \right ) \right ] \end{equation} where $F_{0} = N_{\mys{c}} f_{\mathrm{th}}$ and $\delta = K_\mys{D}(0)/(K_\mys{A}^{0} - K_\mys{D}(0))$. From the data presented in Fig.~\ref{fig:aging-events} we estimate $\delta \approx 1$ for the PDMS gel. A non-linear least-squares fit of Eq.~\ref{eqn:model-fit}, with $\delta = 1$, to the experimental data (shown as the solid line in Figure~\ref{fig:potential}(b)) yields a good representation of the measured force dependence. The best-fit parameters were determined as $\taud(0) = (2.90 \pm 0.05) \times 10^{4}$ s and $F_{0} = 1.4 \pm 0.2$ pN. To interpret these values, we note that for $\Delta = 194$ nm the characteristic thermal rupture force is $f_{\mathrm{th}} \approx 0.02$ pN, so the number of bonds in the equivalent linear force chain is $N_{\mys{c}} = F_{0}/f_{\mathrm{th}} \approx 70 \pm 10$. $N_{\mys{c}}$ should be comparable to the number of particle bonds between magnetic beads, which is $l /(2\a) \approx 30$ in the current experiments. The two estimates agree to within a factor of about two, which is reassuring. Overall, the force measurements provide reasonably strong support for the microscopic model of gel collapse outlined above. While more work clearly needs to be done, these initial observations strengthen our argument that a catastrophic loss of connectivity as a consequence of thermally-driven dissociation events is ultimately responsible for delayed collapse. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.9\textwidth]{fig7.pdf} \caption{(color online). (a) Shift in the activation barrier for bond dissociation when a force $f$ is applied. $\Delta$ is the range of the attractive interparticle potential. (b) Dependence of the delay time $\taud$ on the force $F$ generated by super-paramagnetic particles embedded within the network of a gel ($\Delta /\a = 0.62$, $\a =316$ nm, $\phicolloid = 0.21$, $\cpstar = 2.4$). The force $F$ counteracts the buoyancy force acting on the gel so the delay time $\taud$ of the gel is increased. The solid line is the dependence predicted by the microscopic model of collapse (Eq.~\ref{eqn:model-fit}), calculated for $\taud(0) = 2.9 \times 10^{4}$ s, $F_{0} = 1.4$ pN, and $\delta = 1$. The inset shows the same data plotted on a linear scale. }% \label{fig:potential} \end{center} \end{figure} \section{Conclusions} \label{sec:conclusions} It is useful to briefly summarize which features of the collapse instability in depletion gels are well understood and which remain to be clarified. Delayed collapse, whereby a gel which initially sediments slowly switches, after a certain time interval (the \textit{delay} time), to a regime of rapid collapse, is well established in systems with long-range attractive potentials. In this paper we have focused predominantly on such long-range systems.
The distinguishing feature of these gels is that, when quenched into a two-phase region, the particle network coarsens continuously with time, as a result of thermal breaking and reforming of interparticle bonds. In contrast, the sedimentation kinetics of depletion gels with short-range attractive potentials are more complex, as a result of the interplay between phase separation and dynamical arrest \cite{7444}, and are not as well understood. Finally, we note that the initial particle density is likely to have a significant effect on the time evolution of a gel, but the effect of $\phicolloid$ has to date been little studied. The delay time $\taud$, which can vary from minutes to months, has been reported to depend on the depth and range of the attractive potential, the volume fraction $\phicolloid$, the particle radius $\a$, the density mismatch \cite{5926,5389}, the presence of a weak shear stress \cite{5382}, and even the width and height of the sample cell \cite{5744,2308}. The link between these effects and the microstructure of a gel is in many cases currently missing. However the pivotal role of the thermal lifetime of a single particle-particle bond, the Kramers' escape time $\tauesc$, is generally recognized \cite{5237,9091}. Experiments confirm, for instance, that the time required for gel collapse scales approximately linearly with the escape time, over some two orders of magnitude of variation in $\taud$. The large value of the ratio $\taud / \tauesc$, which for the data presented here is of order $10^{2}$, highlights the significant role the hierarchical structure of a gel plays in the connection between a local bond dissociation event and the eventual overall failure of the network. Time-resolved confocal imaging experiments indicate that, although the gel is continuously coarsening, the most important feature of the thermal restructuring is probably a decrease in the connectivity of the network with time. Analysis of the association and dissociation dynamics of strands within the gel suggests that collapse is triggered when the rate of strand association, which is a decreasing function of time, decreases to such an extent that it becomes comparable to the fixed rate of strand dissociation. At this point, the ability of the gel network to support a gravitational stress decreases rapidly and the gel soon afterwards fails macroscopically. This microscopic picture of collapse is supported by preliminary measurements of the effect of an applied point force on the dynamics of collapse. More work on the subtle link between delayed collapse, microscopic dissociation dynamics, and applied force is however essential to fully understand the anomalous mechanical response of weak gels. Particularly puzzling is the role of gravitational stress. The absence of any variation of the delay time with the height of a gel seems, at first sight, to be at variance with the results of the magnetic experiments, where $\taud$ increases with applied force. Work is under way to address this intriguing question. \section{Acknowledgments} We gratefully acknowledge support from Bayer CropScience and the UK Engineering and Physical Sciences Research Council. In addition, we thank an anonymous referee for a number of incisive comments which considerably improved an earlier draft.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Abstract} We present first results of a new instrument, the Generalized Differential Image Motion Monitor (GDIMM), aimed at monitoring parameters of the optical turbulence (seeing, isoplanatic angle, coherence time and outer scale). GDIMM is based on a small telescope equipped with a 3-hole mask at its entrance pupil. The seeing is measured by the classical DIMM technique using two sub-pupils of the mask (6~cm in diameter, separated by a distance of 20~cm), and the isoplanatic angle is estimated from the scintillation through the third sub-pupil (10~cm in diameter, with a central obstruction of 4~cm). The coherence time is deduced from the temporal structure function of the angle-of-arrival (AA) fluctuations, thanks to the high sampling rate of the camera. The difference of the motion variances from sub-apertures of different diameters makes it possible to estimate the outer scale. GDIMM is a compact and portable instrument, and can be remotely controlled by an operator. We show in this paper the first results of test campaigns carried out in 2013 and 2014 at Nice Observatory and the Plateau de Calern (France). Comparisons with simultaneous data obtained with the Generalized Seeing Monitor (GSM) are also presented. \section{INTRODUCTION} \label{par:intro} For several years, our group has been developing original techniques and instrumentation for measuring the optical turbulence of the atmosphere. Several prototypes were developed in the past, such as the Generalized Seeing Monitor (GSM)\cite{Ziad00}, which has become a reference for monitoring the coherence parameters of the wavefront at ground level, i.e. the seeing $\epsilon$, isoplanatic angle $\theta_0$, coherence time $\tau_0$ and outer scale ${\cal L}_0$. Over the last 15 years GSM has been used at a large number of astronomical observatories and for prospecting potential new sites (see [\cite{Ziad00}] and references therein). Since the early 2000s, our group has also been engaged in the qualification of the Dome C site in Antarctica. Specific prototypes of a DIMM, an isoplanometer (to monitor $\theta_0$) and a GSM were developed for these campaigns and produced a large number of measurements at night as well as during the day\cite{Aristidi05a, Aristidi09, Ziad08}. Taking advantage of the experience gained in developing these Antarctic instruments, we now propose the Generalized Differential Image Motion Monitor, a compact instrument intended to replace the aging GSM. GDIMM is very similar to a DIMM, with 3 sub-apertures instead of 2. The seeing is obtained by the DIMM method, using two sub-pupils of the same diameter. The third aperture has a diameter of 10~cm, allowing us to estimate the isoplanatic angle via the scintillation. The difference of the variances of the angles of arrival through pupils of different diameters makes it possible to derive the outer scale. The coherence time can be estimated from the temporal structure function of the angles of arrival, thanks to the high sampling rate of the camera (up to 90~frames per second at full resolution). GDIMM has the advantage of being compact and easy to install, and is controlled by dedicated software. Efforts are in progress to make it fully automatic. This paper is organized as follows. In section~\ref{par:theory} we briefly review the theory of the integrated turbulence parameters and the method used to estimate them. Section~\ref{par:instru} presents the instrumental setup.
Section~\ref{par:process} describes the observations, the various calibration procedures, and the data processing. The results of the observations and a comparison with other instruments are presented in Sect.~\ref{par:results}. Conclusions and perspectives in Sect.~\ref{par:concl} end the paper. \section{THEORY} \label{par:theory} \subsection{Seeing} \label{par:seeing} The seeing is the angular FWHM of long-exposure images observed through the atmospheric turbulence. It is one of the most important parameters describing the optical turbulence, since it is related to the resolution of the images. Seeing monitors are operated at major observatories such as ESO Paranal, and continuously produce data used to optimize observations. The DIMM \cite{Sarazinroddier90, Verninmunoz95, Tokovinin02} is a seeing monitor which is very popular because of its simplicity. It is based on a small telescope with an entrance pupil made of 2 small sub-apertures, observing a bright single star with a short exposure time (typically a few milliseconds). A tilt is given to the light propagating through one of the two apertures to produce twin images which move according to the turbulence. Angular distances $\Delta x$ and $\Delta y$ between the centroids of the two star images are computed in both directions ($x$ is parallel to the baseline of the sub-apertures). Longitudinal and transverse variances ($\sigma_l^2$ and $\sigma_t^2$ respectively) of $\Delta x$ and $\Delta y$ are estimated on a sequence of $N$ instantaneous snapshots ($N$ being typically several hundred). The seeing $\epsilon$ (in radians) and the Fried parameter\cite{Fried66} $r_0$ (in m) are computed using the following formulae\cite{Tokovinin02}: \begin{equation} \epsilon_{l|t}=0.98\, (\cos z)^{-0.6}\frac{\lambda}{r_{0,l|t}}=0.98 \, (\cos z)^{-0.6}\:\left(\frac{D}{\lambda}\right)^{0.2}\:\left(\frac{\sigma_{l|t}^2}{K_{l|t}}\right)^{0.6} \label{eq:seeing} \end{equation} with \begin{eqnarray} K_l &=& 0.364\, (1-0.532 b^{-1/3}-0.024 b^{-7/3})\nonumber \\ \ \\ K_t &=& 0.364\, (1-0.798 b^{-1/3}+0.018 b^{-7/3})\nonumber \end{eqnarray} where $B$ is the distance between the sub-apertures, $D$ their diameter, $b=B/D$, $z$ the zenithal distance and $\lambda$ the wavelength, traditionally set to 500~nm as a standard. Two estimates of the seeing are obtained for a given sequence; they are expected to be almost identical (isotropy hypothesis) and are averaged. \subsection{Isoplanatic angle} \label{par:isop} The isoplanatic angle $\theta_0$ can be estimated from the scintillation of a single star observed through a pupil of diameter 10~cm with a central obstruction of~4~cm. The principle of the calculation is based on the similarity of the theoretical expressions of $\theta_0$ and the scintillation index $s$\cite{Looshogge79, Ziad00}. $\theta_0$ is obtained in arcsec for a wavelength $\lambda=500$~nm by the following formula \begin{equation} \theta_0^{-5/3}=A \, (\cos z)^{-8/3}\, s \label{eq:isop} \end{equation} where $A=14.87$ is computed numerically from eqs. 19 and 21 of Ziad et al., 2000\cite{Ziad00}, and $z$ is the zenithal distance of the star. \subsection{Outer scale} The von K\'arm\'an outer scale ${\cal L}_0$ is related to the fluctuations of the angle of arrival of the light at a given position of the wavefront.
The variance of the angular position of a star observed through a small aperture of diameter $D$ is given, in square radians, by the following equation\cite{Ziad94} \begin{equation} \sigma_D^2=0.17\, \lambda^2 \, r_0^{-5/3} \, (D^{-1/3}-1.525 {\cal L}_0^{-1/3}) \label{eq:r0abs} \end{equation} For isotropic turbulence, the variances in the $x$ and $y$ directions are identical. The pupil of GDIMM has 3 apertures, two of diameter $D_1=6$~cm and one of diameter $D_3=10$~cm. The following ratio \begin{equation} R=\frac{\sigma_{D_1}^2}{\sigma_{D_1}^2-\sigma_{D_3}^2}\; = \; \frac{D_1^{-1/3}-1.525 {\cal L}_0^{-1/3}}{D_1^{-1/3}-D_3^{-1/3}} \label{eq:l0} \end{equation} makes it possible to estimate the outer scale ${\cal L}_0$. \subsection{Coherence time} The coherence time $\tau_0$ relevant for adaptive optics and interferometry, as defined by Roddier\cite{Roddier81}, is \begin{equation} \tau_0=0.31\, \frac{r_0}{\bar{v}} \end{equation} where $\bar{v}$, the effective wind speed, is a weighted average of the wind speed over the whole atmosphere\cite{Roddier81}. Ziad et al.\cite{Ziad12} have recently shown that it is possible to derive the effective wind speed from the temporal structure functions $D_{x|y}(\tau)$ of the angles of arrival, defined as $$ D_{x|y}(\tau)=\langle \left(x|y(t)- x|y(t+\tau)\right)^2\rangle $$ where $x$ (resp. $y$) stands for the angle of arrival (AA) in the $x$ (resp. $y$) direction, parallel to the declination (resp. right ascension) axis. The brackets $\langle \rangle$ denote a temporal average. The AA is computed as the angular photocenter of the images produced by each sub-aperture. This function is zero for $\tau=0$ and saturates to a value $D_{\mbox{\scriptsize sat}}$ for $\tau\longrightarrow\infty$. We define its characteristic time $\tau_{a,x|y}$ as the value of $\tau$ for which \begin{equation} D_{x|y}(\tau_{a,x|y})=\frac{D_{\mbox{\scriptsize sat}}}{e} \end{equation} $\tau_{a,x|y}$ is the AA coherence time. The effective wind speed as well as its direction $\gamma$ are derived from $\tau_{a,x}$ and $\tau_{a,y}$ using eqs.~10 and 11 of [\cite{Ziad12}], taking $k'=e$. \section{INSTRUMENTATION} \label{par:instru} \subsection{Telescope} The telescope is a Schmidt-Cassegrain Celestron 11 (diameter 280~mm) equipped with an entrance mask with 3 sub-pupils, as shown in Fig.~\ref{fig:pupil}. It is an extension of the classical 2-aperture DIMM mask, with a supplementary sub-pupil used for estimating the isoplanatic angle. Two sub-pupils are circular with a diameter of 6~cm; they are both equipped with a glass prism with a deviation angle of $\simeq 30$ arcsec. The prisms are oriented to give opposite tilts to the incident light. The mask is oriented so that the aperture separation is parallel to the declination axis. The third sub-aperture is also circular, with a diameter of 10~cm and a central obstruction of 4~cm, to estimate the isoplanatic angle as described in Sect.~\ref{par:isop}. This aperture is left open and the corresponding image forms on the optical axis. The telescope is placed on an Astro-Physics 900 equatorial mount controlled remotely by a computer via an RS-232 link. The tripod currently supporting the mount will be replaced in the future by a massive concrete pillar, allowing the pupil of GDIMM to be at a height of 5~m above the ground. \begin{figure} \begin{center} \includegraphics[width=10cm]{pupil-eps-converted-to.pdf} \end{center} \caption{Pupil mask of GDIMM. Top: photo taken in May 2014. Bottom: sectional view.
Apertures 1 and 2 are circular with a diameter of 6~cm. They are both equipped with a glass prism to deviate the light away from the optical axis. Aperture 3 has a diameter of 10~cm and a central obstruction of 4~cm; it is used for the isoplanatic angle and outer scale estimations. } \label{fig:pupil} \end{figure} \subsection{Cameras} The main camera is a Prosilica EC650, with a CCD chip of 659$\times$493 pixels and a dynamic range of 12~bits. The spectral sensitivity spans the whole visible range with a peak at a wavelength of 500~nm (the quantum efficiency at 500~nm is 50\%). The pixel size is $7.4 \mu$m$\times 7.4 \mu$m. The exposure time is adjustable by software from 10~$\mu$s to 10~s, and the maximum frame rate is 90~fps at full resolution (windowing and binning options are available to increase the frame rate if necessary). A Barlow lens increases the focal length to 7.8~meters to allow a slight oversampling of the Airy discs (5~pixels at a wavelength of 500~nm in the Airy disc of the 10~cm diameter sub-pupil). The camera is connected to the computer via a Firewire interface cable. A second camera (USB webcam Logitech 9000) is placed at the focus of a finder with a wide field of 4 degrees. This camera is sensitive enough to detect bright stars. \begin{figure} \begin{center} \includegraphics[width=10cm]{snapshot-eps-converted-to.pdf} \end{center} \caption{GDIMM snapshot (zoom on the central part of the field). Top: grayscale plot showing the three images produced by the three sub-pupils. The central image corresponds to the 10~cm diameter aperture and actually displays speckles, as the Fried parameter is lower than 10~cm. Bottom: horizontal cut of the image.} \label{fig:snapshot} \end{figure} \subsection{Software} The acquisition software, written in C++/Qt, benefits from years of development of automated instruments for the Antarctic\cite{Aristidi05a, Daban10}. It can point the mount to the desired star, and automatically detects and centers the target on the finder and on the science camera. Observations can be made manually or by acquisition sequences. Seeing and isoplanatic angle are computed in real time (in the future the coherence time and outer scale will also be given in real time). Various tests are performed to stop the observations when the star is lost (clouds) or if its zenithal distance becomes too large. Data are written in text-based CSV files as described in the next section. It is also possible to record the images in FITS cubes. The software has a ``simulation mode'': it can read a FITS cube from a previous run, and use it as input for a virtual observation. This is particularly useful for development, and for reprocessing the data after software improvements. In the future we plan to install GDIMM inside a dome at the Plateau de Calern and to control the aperture of the dome remotely via the acquisition software. \section{OBSERVATIONS AND DATA PROCESSING} \label{par:process} Observations with GDIMM are composed of continuous sequences of one minute, each sequence giving one set of turbulence parameters: seeing $\epsilon$ (two values are calculated, longitudinal $\epsilon_l$ and transverse $\epsilon_t$), isoplanatic angle $\theta_0$, outer scale ${\cal L}_0$ and coherence time $\tau_0$. Currently the acquisition software computes $\theta_0$ and the two values of $\epsilon$ in real time. Estimation of the outer scale is made afterwards, since the algorithm is still in development, but we plan to include it in the software once it has stabilized.
Coherence time calculation is not yet implemented. A typical observing sequence is the following: \begin{itemize} \item Every minute a cube of $N=1000$ continuous snapshots with a short exposure time $t$ (3~ms gives a sufficient signal-to-noise ratio for the brightest stars) is recorded into the computer memory (this takes about 20 seconds). \item Optionally, a second cube of $N=1000$ images is recorded with a doubled exposure time $2t$ to compensate for the finite exposure time. This option is denoted ``2T mode''. \item For each image of the sequence we compute the sky background on the upper edge of the field of view (about 1/10 of the field is considered), then subtract it from the images and apply a threshold (5 times the standard deviation of the sky background). \item For each frame we detect the three spots corresponding to the 3 sub-apertures, and compute their photocenters (in the $x$ and $y$ directions) and their total flux. \item A data file containing the UT acquisition time of each frame (with millisecond precision), the spot photocenters and the fluxes is generated for each cube. \item Differential variances are computed from the photocenters of the two sub-images produced by the 6~cm diameter apertures, for the whole cube. In 2T mode, a compensation for the exposure time is applied by the following formula\cite{Aristidi05a, Tokovinin02} \begin{equation} \sigma^2_{l|t}(0)=\sigma^2_{l|t}(t)^{1.75}\, \sigma^2_{l|t}(2t)^{-0.75} \end{equation} where $\sigma^2_{l|t}(t)$ (resp. $\sigma_{l|t}^2(2t)$) is the longitudinal$|$transverse differential variance computed for the exposure time $t$ (resp. $2t$) and $\sigma^2_{l|t}(0)$ the longitudinal$|$transverse differential variance for zero exposure time. The coefficients 1.75 and 0.75 are taken from the paper of Tokovinin (2002)\cite{Tokovinin02}. Further comparison with the GSM is foreseen next summer to check these coefficients and adjust them if necessary to better fit our GDIMM data. \item The seeing (transverse and longitudinal) is calculated from the variances using eq.~\ref{eq:seeing}. Scale calibration is made on bright double stars of known separation. Albireo (optical pair with a separation of 35$''$) and Mizar (very long-period binary with a separation of 14.5$''$) are good targets. \item The scintillation index $s=\frac{\sigma_I^2}{\langle I\rangle^2}$ of the total flux $I$ of the central spot is computed for the $N=1000$ images of the cube. If the 2T mode is selected, the following formula is applied to compensate for the exposure time\cite{Ziad00}: \begin{equation} s(0)=2 s(t)\, -\, s(2t) \end{equation} \item The isoplanatic angle is calculated from eq.~\ref{eq:isop}. \item Variances, scintillation indices, seeing values and isoplanatic angles are summarized in a second data file, which contains, at the end of the observing run, all the integrated turbulence parameters for the night. \end{itemize} \section{RESULTS} \label{par:results} First test observations with GDIMM were carried out in June 2013 at the Plateau de Calern (southern France) on the bright stars Vega and Arcturus. The GSM instrument was operated simultaneously for cross-calibration. Results from this observing run are presented in this paper. More recently, several test observations were performed on bright stars at the top of Mont Gros (Nice, France). We present here data obtained on the nights of March 6th, March 20th and May 14th, 2014 on the star Arcturus.
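To make the per-cube reduction described above concrete, the following Python sketch (our own illustration, not the actual acquisition code; it assumes pre-computed photocenter and flux arrays and a single exposure time) implements the seeing and isoplanatic angle estimates of eqs.~\ref{eq:seeing} and \ref{eq:isop}:
\begin{verbatim}
import numpy as np

D, B, lam = 0.06, 0.20, 500e-9  # sub-aperture diameter, baseline, wavelength [m]
b = B / D
K_l = 0.364 * (1 - 0.532 * b**(-1/3) - 0.024 * b**(-7/3))
K_t = 0.364 * (1 - 0.798 * b**(-1/3) + 0.018 * b**(-7/3))

def seeing_arcsec(delta, K, z):
    """One-axis DIMM seeing (eq. 1). delta: differential photocenter
    positions over the cube [rad]; K: K_l or K_t; z: zenithal distance [rad]."""
    eps = 0.98 * np.cos(z)**-0.6 * (D / lam)**0.2 * (np.var(delta) / K)**0.6
    return np.degrees(eps) * 3600.0          # radians -> arcsec

def isoplanatic_angle_arcsec(flux, z):
    """Isoplanatic angle (eq. 2) from the scintillation of the 10 cm spot."""
    s = np.var(flux) / np.mean(flux)**2      # scintillation index
    return (14.87 * np.cos(z)**(-8/3) * s)**(-3/5)

# For one cube, with dx along the baseline (longitudinal) and dy transverse:
# eps = (seeing_arcsec(dx, K_l, z) + seeing_arcsec(dy, K_t, z)) / 2
\end{verbatim}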
\subsection{Seeing and isoplanatic angle} Figure~\ref{fig:seeing_isop_calern}a presents time series of the seeing (average of transverse and longitudinal) obtained on June 18th 2013 at the Plateau de Calern. At that time the Barlow lens was not present and the compensation for exposure time was not implemented. Seeing values are computed for an exposure time of 4~ms, and may be underestimated. Hence a small negative bias is observed between GSM and GDIMM seeings, but the shapes of the two curves are similar. The pixel size had not been calibrated with a double star for these data and we used the telescope focal length to convert pixels into arcsec (this can explain part of the bias). The bias increases at the end of the sequence, probably due to a drop of the coherence time (weather conditions were degrading). \begin{figure} \begin{center} \includegraphics[width=7cm]{seeing_vs_time-eps-converted-to.pdf} \ \ \includegraphics[width=7cm]{isop_vs_time-eps-converted-to.pdf} \end{center} \caption{Time series of (a) the seeing and (b) the isoplanatic angle obtained on the night of June 18, 2013 at the Plateau de Calern. Solid line: GDIMM data. Dashed line: GSM data (compensated for exposure time, whereas GDIMM values are for an exposure time of 4~ms).} \label{fig:seeing_isop_calern} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7cm]{seeing_lt-eps-converted-to.pdf} \end{center} \caption{Transverse versus longitudinal seeing obtained with GDIMM on the nights of June 18, 2013 at the Plateau de Calern and May 14, 2014 at Mont Gros. The dashed black line has slope 1. The correlation coefficient of the combined data set is 0.88.} \label{fig:seeing_lt} \end{figure} As mentioned in section~\ref{par:seeing}, the DIMM method is valid for isotropic turbulence, i.e. identical values of the transverse and longitudinal seeings $\epsilon_{t|l}$. Figure~\ref{fig:seeing_lt} shows a plot of $\epsilon_l$ versus $\epsilon_t$ for the two data sets. The cloud of points is spread along the line $\epsilon_t=\epsilon_l$ with small dispersion (a well-known effect of the wind speed and direction and of the finite exposure time, studied by Martin\cite{Martin87}). The correlation coefficient of 0.88 confirms the isotropy hypothesis. The isoplanatic angle time series for the run at the Plateau de Calern (June 18th 2013), obtained together with GSM, is shown in fig.~\ref{fig:seeing_isop_calern}b. Very good agreement is found between the two data sets, despite the absence of compensation for the finite exposure time. \subsection{Outer scale} The outer scale is derived from the ratio $R$ defined in Eq.~\ref{eq:l0}. It requires the estimation of the ``absolute'' variances $\sigma_{D_1}^2$ and $\sigma_{D_3}^2$ of the angular position of the sub-images produced by the pupils of diameter $D_1=6$~cm and $D_3=10$~cm. Typical values are $R\simeq 5$ for ${\cal L}_0 = 20$~m. Note that $R$ is independent of the seeing and of the scale calibration. The drawback of this method is that $\sigma_{D_1}^2$ may contain some bias caused by telescope vibrations. Careful attention must be paid during the processing in order to detect and eliminate contaminated data. We propose the following algorithm: \begin{itemize} \item From a data cube of $N$ frames we calculate 2 values of $\sigma_{D_3}^2$ from the photocenter of the instantaneous sub-images corresponding to the 10~cm diameter pupil, one in the $x$ direction and one in the $y$ direction.
\item Similarly, we compute 4 values of $\sigma_{D_1}^2$, since we have two sub-pupils of diameter $D_1$. \item From $\sigma_{D_1}^2$ it is possible to estimate a value $r_{0 \mbox{\scriptsize abs}}$ of the Fried parameter using eq.~\ref{eq:r0abs} and neglecting the term in ${\cal L}_0$. Four values of $r_{0 \mbox{\scriptsize abs}}$ are computed for each cube and compared to the real value of $r_0$ calculated in the standard way. We reject data for which $r_0-r_{0 \mbox{\scriptsize abs}}>\delta$, where $\delta$ is a threshold (we arbitrarily took $\delta=4$~cm, but this is a point yet to be investigated). Fig.~\ref{fig:r0abs_r0diff}a displays a plot of the 4 values of $r_{0 \mbox{\scriptsize abs}}$ versus $r_0$ before applying the rejection, for data obtained at Mont Gros on March 6 and March 20, 2014. The high correlation coefficients (between 77\% and 90\%) indicate that the major part of the absolute variances was due to the turbulence for these data. \item We finally compute 4 values of the ratio $R$ and derive 4 values of the outer scale for each cube. Outliers are rejected, and the median of the remaining values is taken. \end{itemize} This algorithm was applied to data taken at Mont Gros on the bright star Arcturus on March 6 and March 20, 2014. A total of 27 data cubes of $N=1000$ images with an exposure time of 5~ms was obtained during these two runs. After filtering (threshold and bad points), we kept 20 values of the outer scale ${\cal L}_0$. The time series is presented in Fig.~\ref{fig:r0abs_r0diff}b. We found a mean outer scale ${\cal L}_0\simeq 13$~m for the whole data set, with a good stability in time. These values are not compensated for the exposure time; they are indeed to be taken as a test of the method. \begin{figure} \begin{center} \includegraphics[height=6cm]{r0abs_r0diff-eps-converted-to.pdf} \includegraphics[height=6cm]{l0ts-eps-converted-to.pdf} \end{center} \caption{(a): Fried parameter $r_{0 \mbox{\scriptsize abs}}$ computed from the absolute variances $\sigma_{D_1}^2$, versus the Fried parameter computed by the classical method using the differential variances. The 4 series of data correspond to the sub-pupils 1 and 2 in the $x$ and $y$ directions (namely $x_1$, $x_2$, $y_1$, $y_2$). (b): Time series of the outer scale ${\cal L}_0$ (uncorrected for the exposure time) for data taken on March 6 and March 20, 2014 at the Mont Gros site.} \label{fig:r0abs_r0diff} \end{figure} \subsection{Coherence time} \label{par:tau0} The algorithm for the estimation of the coherence time is in development. It requires the computation of the temporal structure functions $D_{x|y}(\tau)$ of the AA in both directions. Each sub-pupil gives an estimate of $D_{x}(\tau)$ and $D_{y}(\tau)$, so that we have 3 values of the AA coherence time $\tau_{a,x|y}$ in each direction. We show in fig.~\ref{fig:temporal_st} an example for a data cube obtained on May 14, 2014 at 20:01 UTC at the Mont Gros site. Structure functions for the $x$ (declination) direction saturate as expected when $\tau > 1$~s. In the $y$ (right ascension) direction the function $D_{y}(\tau)$ almost saturates, but there is a remaining trend, probably due to telescope vibrations (pursuit of the star in the RA direction). In the $x$ direction we can estimate the AA coherence time to be $\tau_{a,x} \simeq 22$~ms. This value is not corrected for the exposure time (5~ms).
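A minimal sketch of this structure-function estimate (our own illustration, assuming an evenly sampled photocenter time series) is:
\begin{verbatim}
import numpy as np

def aa_coherence_time(x, dt):
    """AA coherence time from the temporal structure function D(tau).

    x  : photocenter positions for one axis and one sub-pupil
    dt : time lag between successive frames [s]
    Returns tau_a, the lag at which D(tau) first reaches D_sat / e.
    """
    lags = np.arange(1, len(x) // 2)
    D = np.array([np.mean((x[k:] - x[:-k])**2) for k in lags])
    D_sat = np.mean(D[-len(D) // 4:])      # saturation level from the tail
    k_a = np.argmax(D >= D_sat / np.e)     # first lag above D_sat / e
    return lags[k_a] * dt
\end{verbatim}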
The value $\tau_{a,x} \simeq 22$~ms corresponds to an effective wind speed $\bar v \in [1.3, 2.3]$~m/s depending on the wind direction (we took ${\cal L}_0=20$~m, but $\bar v$ is not very sensitive to ${\cal L}_0$ and we found nearly the same result for ${\cal L}_0=10$~m). The Fried parameter for this data cube, calculated by the DIMM method, is $r_0=5.4$~cm. This gives a coherence time $\tau_0\in [7, 12]$~ms, which is of the order of magnitude of typical coherence times in the visible. Currently a strong limitation on the precision of the method is the temporal sampling of the camera: it is 22~ms for these data, as we read the entire CCD for each acquisition. In the future we plan to use windowing options to reduce the time lag between successive snapshots. \begin{figure} \begin{center} \includegraphics[width=17cm]{temporal_st-eps-converted-to.pdf} \end{center} \caption{Temporal structure functions $D_{x|y}(\tau)$ of the angles of arrival for the data sequence recorded on May 14, 2014 at 20:09 UTC. $D_{x|y}$ is normalized to its saturation value. Left: curves for the AA in the $y$ direction (right ascension). Solid (1) and dashed (2) lines correspond to the sub-pupils of diameter 6~cm, the dotted line (3) corresponds to the 10~cm diameter aperture. Right: same curves for the $x$ direction (declination).} \label{fig:temporal_st} \end{figure} \section{CONCLUSIONS AND PERSPECTIVES} \label{par:concl} We have presented the GDIMM, a new turbulence monitor aimed at measuring the 4 integrated parameters of the optical turbulence ($\epsilon$, $\theta_0$, $\tau_0$ and ${\cal L}_0$). The instrument is compact and is intended to be a successor to the GSM. Seeing and isoplanatic angle monitoring are now completely functional. For the outer scale the algorithm is still in development. The first results are encouraging and give confidence that reliable monitoring can be established in the near future. For the coherence time we have just begun to transpose the technique based on the structure functions presented by Ziad et al.\cite{Ziad12}. Section~\ref{par:tau0} shows that the method is applicable to our data, though precise estimation of $\tau_0$ will require an increase in the frame rate of the acquisitions. In the coming months we plan to perform long-term observation campaigns at the Plateau de Calern, together with GSM, to finalize the GDIMM algorithms and test them in the widest possible range of situations. A GDIMM instrument is foreseen to be permanently installed on top of a 4~m high tower near the 1.5~m telescope MeO (Plateau de Calern)\cite{Samain08}. It will provide turbulence monitoring simultaneously with scientific MeO observations. \section*{ACKNOWLEDGMENTS} We wish to thank Flavien Blary, Victorien Merl\'e and Paul Verdier, who participated in the observations. Thanks also to Alex Robini for his valuable contribution to the instrument.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Data Generation} \label{app:data-generation} \begin{algorithm}[tb]\captionsetup{labelfont={sc,bf}} \caption{\textsc{GenerateIHDP}: Semi-synthetic Data Generation Algorithm for Fair Causal Inference} \label{alg:ihdp-main} \begin{algorithmic} \STATE {\bfseries Step 1:} Remove all children from the dataset with non-white mothers who received the original treatment (as in \citet{hill2011bayesian}). \STATE {\bfseries Step 2:} Optional: Remove extra features from $X$. \STATE {\bfseries Step 3:} Normalize data (for each feature of $X$, subtract the mean and divide by the standard deviation). \STATE {\bfseries Step 4:} Remove some features $Z$ from the data to act as unobserved confounders. \STATE {\bfseries Step 5:} Remove some feature $A$ from the data to act as the sensitive attribute. \STATE {\bfseries Step 6:} Sample factual and counterfactual outcomes $\{y_{T=t,A=a} \ \forall t, a\} = \textsc{GenerateOutcomes(X, Z)}$. \STATE {\bfseries Step 7:} Sample factual and counterfactual treatments $\{t_{A=a} \ \forall a\} = \textsc{GenerateTreatments(Z)}$. \STATE {\bfseries Return} $Z, A, X, \{y_{T=t,A=a} \ \forall t, a\}, \{t_{A=a} \ \forall a\}$ \end{algorithmic} \end{algorithm} \begin{algorithm}[tb]\captionsetup{labelfont={sc,bf}} \caption{\textsc{GenerateOutcomes}: Generate outcomes for each value of the treatment and sensitive attribute (style of \citep{hill2011bayesian}, Resp. B)} \label{alg:ihdp-response} \begin{algorithmic} \STATE {\bfseries Input:} Features $X$, unobserved confounders $Z$ \STATE Let $[X, Z]$ denote the horizontal concatenation of $X$ and $Z$, and let the offset matrix $W$ be the shape of $[X, Z]$ with 0.5 in every position. \STATE Sample $\beta \sim P_{\beta}$, choose $\omega, \beta_A \in \mathds{R}$. \STATE Sample $y_{T=0,A=0} \sim \mathcal{N}(\exp(([X, Z] + W)\beta^T), 1)$ \STATE Sample $y_{T=1,A=0} \sim \mathcal{N}([X, Z]\beta^T - \omega, 1)$ \STATE Sample $y_{T=0,A=1} \sim \mathcal{N}(\exp(([X, Z] + W)\beta^T) + \beta_A, 1)$ \STATE Sample $y_{T=1,A=1} \sim \mathcal{N}([X, Z]\beta^T - \omega + \beta_A, 1)$ \STATE \textbf{Return} \{$y_{T=0,A=0}, y_{T=1,A=0}, y_{T=0,A=1}, y_{T=1,A=1}$\} \end{algorithmic} \end{algorithm} \begin{algorithm}[tb]\captionsetup{labelfont={sc,bf}} \caption{\textsc{GenerateTreatments}: Generate treatments for each value of the sensitive attribute} \label{alg:ihdp-treatment} \begin{algorithmic} \STATE {\bfseries Input:} Unobserved confounders $Z$ \STATE Choose $\alpha_0, \alpha_1 \in [0, 1], \zeta \in \mathds{R}$. \STATE Let $p_{A=0} = Clip(\alpha_0 + \zeta Z, 0, 1), p_{A=1} = Clip(\alpha_1 + \zeta Z, 0, 1)$ \STATE Sample $t_{A=0} \sim Bern(p_{A=0})$. \STATE Sample $t_{A=1} \sim Bern(p_{A=1})$. \STATE \textbf{Return} $\{t_{A=0}, t_{A=1}\}$ \end{algorithmic} \end{algorithm} We detail our dataset generation process in Algorithm \ref{alg:ihdp-main}. We denote the outcome $Y$ under interventions $do(T=t), do(A=a)$ as $y_{T=t,A=a}$. The subroutines in Algorithms \ref{alg:ihdp-response} and \ref{alg:ihdp-treatment} generate all factual and counterfactual outcomes and treatments for each example, one for each possible setting of $A$ and/or $T$. In Algorithm \ref{alg:ihdp-main}, several constants are left unspecified.
We use the following values for these constants: \begin{itemize} \item $\beta \sim P_{\beta}$: \begin{itemize} \item for continuous variables, \\ \ $\beta_i \sim Cat([0,.1,.2,.3,.4],[.5, .125, .125, .125, .125])$ \item for binary variables, $\beta_i \sim Cat([0,.1,.2,.3,.4], [.6, .1, .1, .1, .1])$ \item for $Z$, $\beta_i \sim Cat([.4,.6], [.5,.5])$ \end{itemize} where $Cat(x, p)$ selects values from $x$ according to the array of probabilities $p$. \item $\beta_A = 6$ \item $\omega = -11$ \item $\alpha_0, \alpha_1, \zeta = 0.7, 0.4, 0.1$ \end{itemize} We also use the function $Clip$, which is defined as: \begin{equation} Clip(x, m, M) = \min(\max(x, m), M); x, m, M \in \mathds{R} \end{equation} \section{Identifiability of Causal Effects} \label{app:proofs} Here we show that if we can successfully recover the joint distribution $P(Z, A, X, T, Y)$, we can recover all three treatment effects we are interested in: \begin{enumerate} \item The effect of $T$ on $Y$ ($T \rightarrow Y$): $\mathds{E}(Y | do(T = 1), X, A) - \mathds{E}(Y | do(T = 0), X, A)$ \item The effect of $A$ on $T$ ($A \rightarrow T$): $\mathds{E}(T | do(A = 1), X) - \mathds{E}(T | do(A = 0), X)$ \item The effect of $A$ on $Y$ ($A \rightarrow Y$): $\mathds{E}(Y | do(A = 1), X) - \mathds{E}(Y | do(A = 0), X)$ \end{enumerate} Our proof will closely follow \citet{louizos2017causal}. For each effect, it will suffice to show that we can recover the first term on the right-hand side of each expression. (The argument for the second term is the same.) We will show only the proof for the effect of $T$ on $Y$ --- the others are very similar. \\ {\bf Theorem.} Given the causal model in Fig. \ref{subfig:cevae-fair}, if we recover the joint distribution $P(Z, A, X, T, Y)$, then we can recover $\mathds{E}(Y | do(T = 1), X, A)$. {\it Proof.} We have that \begin{equation} \begin{aligned} P(Y | do(T = 1), X, A) &= \int_Z P(Y | do(T = 1), X, A, Z) P(Z | do(T = 1), X, A)\\ \end{aligned} \end{equation} By the $do$-calculus, we can reduce further: \begin{equation} \begin{aligned} &= \int_Z P(Y | T = 1, X, A, Z) P(Z | X, A) \end{aligned} \end{equation} If we know the joint distribution $P(Z, A, X, T, Y)$, we can identify the value of each term in this expression; hence we can identify the value of the whole expression. \hfill \qedsymbol\\ \section{Experimental details} \label{app:experiments} We run each model on 500 distinct data seed/model seed pairs, in order to get robust confidence estimates on the error of each model. We parametrize each function in our causal model with a neural network. Our networks between $X$ and $Z$ have a single hidden layer of 20 hidden units. The size of the learned hidden confounder $Z$ was 10 units. Each of our TARNets consists of a network outputting a shared representation, and two networks making predictions from that representation. Each of these networks has 1 hidden layer with 100 hidden units. The size of the shared representation in the TARNets was 20 units. For simplicity, we set $g_X^{\sigma} = 1$ for all experiments (but not $g_Z^{\sigma}$)---this amounts to assuming unit variance for the data $X$, a sensible assumption because they are normalized during pre-processing. We used ELU non-linear activations \cite{clevert2015fast}. We trained our model with ADAM \citep{kingma2014adam} with a learning rate of 0.001, calculating the ELBO on a validation set and stopping training after 10 consecutive epochs without improvement.
We sample 10 times from the posterior $q(Z|\cdot)$ at both training and test time for each input example. At training time we compute the average ELBO across the ten samples, while at test time we use the average prediction. \section{Introduction} \label{sec:intro} In this work, we consider the problem of fair decision-making from biased datasets. Much work has been done recently on the problem of fair classification \citep{zafar2015fairness,hardt2016equality,bechavod2017penalizing,agarwal2018reductions}, yielding an abundant supply of definitions, models, and algorithms for the purposes of learning classifiers whose outputs satisfy distributional constraints. Some of the canonical problems for which these algorithms have been proposed are loan assignment \citep{hardt2016equality}, criminal risk assessment \citep{chouldechova2017fair}, and school admissions \citep{friedler2016possibility}. However, none of these problems is fully specified by the classification paradigm. Rather, they are decision-making problems: each problem requires an action (or ``treatment'') to be taken in the world, which in turn yields an outcome. In other words, the central question is how to intervene in an ongoing and evolving process, rather than predict outcomes alone \citep{barabas2017interventions}. Decision-making, i.e. learning to intervene, requires a fundamentally different approach from learning to classify: historical training data are the product of past interventions and thus provide an incomplete view of all possible outcomes. Only actions which were previously chosen yield observable outcomes in the training data, while the implicit counterfactual outcomes (the outcome that would have occurred had another action been taken) are never observed. The incompleteness of these data can have a great impact on learning and inference \cite{rubin1976inference}. It has been widely argued that biased data yield unfair machine learning systems \citep{kallus2018residual,hashimoto2018fairness,pmlr-v81-ensign18a}. In this work we examine dataset bias through the lens of causal inference. To understand how past decisions may bias a dataset, we first must understand how sensitive attributes may have affected the generative process which created the dataset, including the (historical) decision makers' actions (treatments) and results (outcomes). Causal inference is well suited to this task: since we are interested in decision-making rather than classification, we should be interested in the causal effects of actions rather than correlations. Causal inference has the added benefit of answering counterfactual queries: What would this outcome have been under another treatment? How would the outcome change if the sensitive attribute were changed, all else being equal? These questions are core to the mission of learning fair systems which aim to inform decision-making \citep{kusner2017counterfactual}. While there is much that causal inference can offer to the field of fair machine learning, it also poses several significant challenges. For example, the presence of \emph{hidden confounders}---unobserved factors that affect both the historical choice of treatment and the outcome---often prohibits the exact inference of causal effects. Additionally, understanding effects at the individual level can be especially complex, particularly if the outcome is non-linear in the data and treatments.
These technical difficulties are often amplified by the problem scope of modern machine learning, where large and high-dimensional datasets are commonplace. To address these challenges, we propose a model for fairly estimating individual-level causal effects from biased data, which combines causal modeling \citep{pearl2009causality} with approximate inference in deep latent variable models \citep{kucukelbir2017automatic,louizos2017causal}. Our focus on individual-level causal effects and counterfactuals provides a natural fit for application areas requiring fair policies and treatments for individuals, such as finance, medicine, and law. Specifically, we incorporate the sensitive attribute into our model as a confounding factor, which can possibly influence both the treatment and the outcome. This is a first step towards achieving ``fairness through awareness'' \citep{dwork2012fairness} in the interventional setting. Our model also leverages recent advances in deep latent-variable modeling to model potential hidden confounders as well as complex, non-linear functions between variables, which greatly expands the class of relationships it can represent. Through experimental analysis, we show that our model can outperform non-causal models, as well as causal models which do not consider the sensitive attribute as a confounder. We further explore the performance of this model, showing that fair-aware causal modeling can lead to more accurate, fairer policies in decision-making systems. \section{Background} \label{sec:causal-background} \subsection{Causal Inference}\label{sec:causal-inference} We employ Structural Causal Models (SCMs), which provide a general theory for modeling causal relationships between variables \citep{pearl2009causality}. An SCM is defined by a directed graph, containing vertices and edges, which respectively represent variables in the world and their pairwise causal relationships. There are two types of vertices: exogenous variables $\mathcal{U}$ and endogenous variables $\mathcal{V}$. Exogenous variables are unspecified by the model; we model them as unexplained noise distributions, and they have no parents. Endogenous variables are the objects we wish to understand; they are descendants of the exogenous variables. The value of each endogenous variable is fully determined by its ancestors. Each $V \in \mathcal{V}$ has some function $f_V$ which maps the values of its immediate parents to its own. This function $f_V$ is deterministic; any randomness in an SCM is due to its exogenous variables. In this paper, we are primarily concerned with three endogenous variables: $X$, the observable \emph{features} (or covariates) of some example; $T$, a \textit{treatment} which is applied to an example; and $Y$, the \textit{outcome} of a treatment. Our decision problem is: given an example with particular values for its features, $X=x$, what value should we assign to treatment $T$ in order to produce the best outcome $Y$? This is fundamentally different from a classification problem, since typically we observe the result of only one treatment per example \footnote{ {\color{black} Note that we use the terms \textit{treatment} and \textit{outcome} as general descriptors of a decision made/action taken and its result, respectively. These terms are associated with an alternative theory of causal inference \citep{rubin2005causal} which can also be used to describe the methods we propose, but which we will not discuss in this paper. } } .
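As a toy illustration of this formalism (with invented structural equations, not those of any model in this paper), an SCM over $X$, $T$ and $Y$ can be written directly as code:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Deterministic structural equations f_V; all randomness enters
# through the exogenous noise arguments (hypothetical forms).
f_X = lambda e_x: e_x
f_T = lambda x, e_t: int(e_t < 1.0 / (1.0 + np.exp(-x)))  # X affects T
f_Y = lambda x, t, e_y: x + 2.0 * t + e_y                 # X and T affect Y

def sample(do_T=None):
    x = f_X(rng.normal())
    # do(T=t) is graph surgery: f_T is replaced by the constant t.
    t = f_T(x, rng.uniform()) if do_T is None else do_T
    y = f_Y(x, t, rng.normal(scale=0.1))
    return x, t, y
\end{verbatim}
Here $X$ confounds $T$ and $Y$, so the observational conditional $P(Y \mid T=t)$ differs from the interventional $P(Y \mid do(T=t))$ obtained by calling \texttt{sample(do\_T=t)}.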
To answer this decision problem, we need to understand the value $Y$ will take if we \textit{intervene} on $T$ and set it to value $t$. Our first instinct may be to estimate $P(Y | T = t, X = x)$. However, this is unsatisfactory in general. If we are estimating these probabilities from observational data, then the fact that $x$ received treatment $t$ \textit{in the past} may have some correlation with the historical outcome $Y$. This ``confounding'' effect---the fact that $X$ has an effect on both $T$ and $Y$---is depicted in Figure \ref{subfig:pearl} by the arrows pointing out of $X$ into $T$ and $Y$. For instance, in an observational medical trial, it is possible that young people are more likely to choose a treatment, and also that young people are more likely to recover. A supervised learning model, given this data, may then overestimate the average effectiveness of the treatment on a test population. Broadly, to understand the effect of assigning treatment $t$, supervised learning is not enough; we need to model the functions $\{f_V\}$ of the SCM. Once we have a fully defined SCM, we can use the $do$ operation \cite{pearl2009causality} to simulate the distribution over $Y$ given that we assign some treatment $t$---we denote this as $P(Y | do(T = t), X = x)$. We perform the $do$ operation through graph surgery: we assign the value $t$ to $T$ by removing all arrows going into $T$ from the SCM and setting the corresponding structural equation output to the desired value regardless of its input: $f_T(\cdot) = t$. We then set $X = x$ and continue with inference of $Y$ as we normally would. A common assumption in causal modelling is the ``no hidden confounders'' assumption, which states that there are no unobserved variables affecting both the treatment and outcome. We follow \citet{louizos2017causal}, and use variational inference to model confounders that are not directly observed but can be abstracted from proxies. {\color{black} In Sec. \ref{sec:models} we consider the implications of this approach and discuss alternative assumptions. } \subsection{Approximate Inference} Individual and population-level causal effects can be estimated via the \emph{do} operation when the values of all confounding variables are observed \citep{pearl2009causality}, which motivates the common no-hidden-confounders assumption in causal inference. However, this assumption is rather strong and precludes classical causal inference in many situations relevant to fair machine learning, e.g., where ill-quantified and hard-to-observe factors such as socio-economic status (SES) may significantly confound the observable data. Therefore we follow \citet{louizos2017causal} in modeling unobserved confounders using a high dimensional latent variable $Z$ to be inferred for each observation $(X, T, Y)$. They prove that if the full joint distribution is successfully recovered, individual treatment effects are identifiable, even in the presence of hidden confounders. In other words, causal effects are identifiable insofar as exact inference can be carried out, and the observed covariates are sufficiently informative. Because exact inference of $Z$ is intractable for many interesting models, we approximately infer $Z$ by variational inference, specifying $q(Z|X, T, Y)$ using a parametric family of distributions and learning the parameters that best approximate the true posterior $p(Z|X, T, Y)$ by maximizing the evidence lower bound (ELBO) of the marginal data likelihood \citep{wainwright2008graphical}.
In particular, we amortize inference by training a neural network (whose functional form is specified separately from the causal model) to predict the parameters of $q$ given $(X, T, Y)$ \cite{kingma2013auto}. Amortized inference is much faster, but less exact, than local inference \cite{kim2018semi}; alternate inference strategies could be explored for applications where the importance of accuracy in individual estimation justifies the additional computational cost. \subsection{TARNets} \label{sec:tarnets} TARNets \citep{shalit2016estimating} are a {\color{black}class of neural network} architectures for estimating outcomes of a binary treatment. {\color{black} The network comprises two separate arms---each predicts the outcomes associated with a separate treatment---that share parameters in the lower layers. The entire network is trained end to end using gradient-based optimization, but with } only one arm (the one with the treatment which was actually given) receiving an error signal for any given example. The TARNet prediction of result $R$ from input variables $V$ and potential intervention $I$ is expressed by combining the shared representation function $\Phi$ with the two functions $h_0, h_1$ corresponding to the separate prediction arms. This yields two composed functions, \begin{equation} \begin{aligned} g^{I=0}_R(V, I) &= h_0(\Phi(V))\\ g^{I=1}_R(V, I) &= h_1(\Phi(V)) \end{aligned} \end{equation} with $h_0, h_1, \Phi$ realized as neural networks. \citet{shalit2016estimating} explore a group-wise MMD penalty on the outputs of $\Phi$; we do not use this. \section{Fair Causal Inference} \label{sec:problem-setup} As stated in Sec. \ref{sec:causal-inference}, we are interested in modeling the causal effects of treatments on outcomes. However, when attempting to learn fairly from a biased dataset, this problem takes on an extra dimension. In this context, we become concerned with understanding causal effects in the presence of a \emph{sensitive attribute} (or protected attribute). Examples include race, gender, age, or SES. When learning from historical data, we may believe that one of these attributes affected the observable treatments and outcomes, resulting in a biased dataset. \citet{lum2016predict} give an example in the domain of predictive policing of how a dataset of drug crimes may become biased with respect to race through unfair policing practices. They note that it is impossible to collect a dataset of all drug crimes in some area; rather, these datasets are really tracking drug \emph{arrests}. Due to a higher level of police presence in heavily Black than heavily White communities, recorded drug arrests will by nature over-represent Black communities. Therefore, a predictive policing algorithm which attempts to fit this data will continue the pattern of over-policing Black communities. \citet{lum2016predict} provide experimental validation of this hypothesis through simulation, contrasting the output of a common predictive policing algorithm with independent, demographic-based estimates of drug use by neighborhood. Their work shows that wrongly specifying a learning problem as one of supervised classification can lead to replicating past biases. In order to account for this in the learning process, we should be aware of the biases which shaped the data --- which may include sensitive attributes that historically affected the treatment and/or outcome. Using the above example for concreteness, we specify the variables at play.
The decision-making problem is: should police be sent to neighborhood $X$ at a given time? The variables are: \begin{itemize} \item $A \in \{0, 1\}$: a sensitive attribute. For example, the majority race of a neighborhood. \item $T \in \{0, 1\}$: a treatment. For example, the presence or absence of police in a certain neighborhood on a particular day. \item $Y \in \mathds{R}$: an outcome. For example, the number of arrests recorded in a given neighborhood on a particular day. \item $X \in \mathds{R}^D$: $D$-dimensional observed features. For example, statistics about the neighborhood, which may change day-to-day. \end{itemize} We will represent sensitive attributes and treatments as binary throughout this paper; we recognize this is not always an optimal modeling choice in practice. Note that the choice of treatment will causally alter the outcome---an arrest cannot occur if there are no police in the area. Furthermore, the sensitive attribute can causally affect the outcome as well; research has shown that policing can disparately affect various races, even controlling for police presence \cite{gelman2007analysis} (the treatment in this case). We note that in various domains, there may be more variables of interest than the ones we list here, and more appropriate causal models than those shown in Fig. \ref{fig:cevae}. However, we believe that the setup we describe is widely applicable and contains the minimal set of variables to be useful for fairness-aware causal analysis. We are interested in calculating causal effects between the above variables. In particular, we seek answers to the following three questions: \paragraph{What is the effect of the treatment on the outcome?} This will help us to understand which $T$ is likely to produce a favorable outcome for a given $X$. Let us denote~$y_{T=t}(x, a) = \mathds{E}[y | do(T = t), X = x, A = a]$~as the expected conditional outcome under $T = t$, that is, the ground truth value taken by $Y$ when the treatment $T$ is assigned the value $t$, conditioning on the values $x, a$ for the features and sensitive attribute respectively. Then, we can express the individual effect of $T$ on $Y$ ($IE_{T \rightarrow Y}$) as \begin{equation} IE_{T \rightarrow Y}(x, a) = y_{T=1}(x, a) - y_{T=0}(x, a). \end{equation} \paragraph{What is the effect of the sensitive attribute on the treatment?} This allows us to understand how the treatment assignment was biased in the data. Similarly, we can define $t_{A=a}(x) = \mathds{E}[t | do(A = a), X = x]$, which is the expected conditional treatment in the historical data when the value $a$ is assigned to the sensitive attribute. Then, the individual effect of $A$ on $T$ can be expressed as \begin{equation} IE_{A \rightarrow T}(x) = t_{A=1}(x) - t_{A=0}(x). \end{equation} \paragraph{What is the effect of the sensitive attribute on the outcome?} This allows us to understand what bias is introduced into the historically observed outcome. We can also define $y_{A=a}(x) = \mathds{E}[y | do(A = a), X = x, T=t_{A=a}(x)]$ as the expected conditional outcome under $A = a$; the ground truth value of $Y$ conditioned on the features being $x$ if the sensitive attribute were assigned the value $a$, and the treatment $T$ were assigned the ground truth value $t_{A=a}(x)$. Then, we can express the individual effect of $A$ on $Y$ as \begin{equation} IE_{A \rightarrow Y}(x) = y_{A=1}(x) - y_{A=0}(x).
\end{equation} {\color{black} \subsection{Intervening on Sensitive Attributes} There has been some disagreement around the notion of intervening on an immutable (or effectively immutable) sensitive attribute. \citet{holland1986statistics} argues that there is ``no causation without manipulation'' --- i.e. an attribute can never be a cause; only an experience undergone can be. Briefly stated, he argues that if the factual and counterfactual versions cannot be ``defined in principle, it is impossible to define the causal effect''. In a counterargument, \citet{marini1988causality} claim that a ``synthesis of intrinsic and extrinsic determination [provides] a more adequate picture of causal relations'' --- meaning that both externally imposed experiences (extrinsic) and internally defined attributes (intrinsic) are valid conceptual components of a theory of causation. We agree with this view --- that the notion of a causal effect of an immutable attribute is valid --- and believe that it is particularly useful in a fairness context. Specifically pertaining to race, some argue it is possible to understand the causal effect of an immutable attribute in terms of the effects of more manipulable attributes (proxies). \citet{vanderweele2014causal} argue that, rather than interpreting a causal effect estimate of $A$ as a hypothetical randomized intervention on $A$, one can interpret it as a particular type of intervention on some other set of manipulable variables related to $A$ (under certain graphical and distributional assumptions on those variables). \citet{sen2016race} take a constructivist approach, and consider race to be composed of constituent parts, some of which \textit{can} be theoretically manipulated. They describe several experimental designs which could estimate the effects of immutable attributes. Another issue with intervening on sensitive attributes is that, since many are ``assigned at conception'', all observed covariates $X$ are post-treatment \citep{sen2016race} (as reflected in the design of our SCM in Fig. \ref{subfig:cevae-fair}). In statistical analysis, a frequent approach is to ignore all post-treatment variables to avoid introducing collider biases \citep{gelman2007analysis,king1994designing}. However, in our model, the purpose of the covariates is to deduce the true (unobserved) values of the latent $Z$ for that individual. Therefore, when conditioning on the observed covariates, correlation between $A$ and $Z$ is the objective, rather than an undesired side effect. This is the first step (``Abduction'') of computing counterfactuals (according to \citet{pearl2009causality}); we can think of this as adjusting for bias (of the sensitive attribute) in the $X$-generating process. } \section{Proposed Method} \label{sec:models} In this section we first conceptualize and describe our proposed causal model---depicted in Fig. \ref{fig:proposed}---then discuss the parameterization of the corresponding SCMs and the learning procedure. {\color{black} A common causal modelling approach is to define a new SCM for each problem \citep{pearl2009causality}, taking advantage of domain-specific knowledge for that particular problem. This stands in contrast to a classic machine learning (ML) approach, which aims to process data and draw conclusions as generally as possible, by automatically discovering patterns of correlation in the data.
While the causal modelling approach is capable of detecting effects the ML approach cannot, the ML approach is attractive since it provides modularity, generality and a more automated data processing pipeline. In this work, we aim to interpolate between the two approaches by considering a single, general causal model for observational data. Our model contains what we argue is a minimal set of fairly general causal variables for discovering treatment effects and biases in the data-generation process, allowing us to interface causally with arbitrary data that fits the proposed structure. } Two features of our causal model are noteworthy. First is the explicit consideration of the sensitive attribute---a potential source of dataset bias---as a confounder, which causally affects both the treatment $T$ and the outcome $Y$. This contrasts with approaches from outside the fairness literature (e.g. \citep{louizos2017causal}, Fig. \ref{subfig:cevae}), which in a fairness setting (Fig. \ref{subfig:cevae-sens}) would treat potential sensitive attributes as equivalent to other observed features. Our model accounts for the possibility that a sensitive attribute may have causal influence on the observed features, treatments, and outcomes, and on the historical process which generated them. This makes the sensitive attribute distinct from the other attributes in $X$, which we understand not as confounders but as observed proxies. We can think of this as a causal modeling analogue of ``fairness through awareness''. By actively adjusting for causal confounding effects of sensitive attributes, we can build a model which accounts for the interplay between the treatment and outcome for both values of the sensitive attribute. The other noteworthy aspect of our model is the latent variable $Z$. Together, $Z$ and $A$ make up all the confounding variables. {\color{black} We note two important points about these confounders. Firstly, we clarify that the model class we propose (a latent Gaussian and a deep neural network) is not necessarily the definitive model of the confounders of $T$ and $Y$; however, it is a flexible one, with numerous applications in machine learning \citep{rezende2014stochastic}. Secondly, we note that causal inference and machine learning have different conventions around unobserved (i.e. latent) variables --- in causal inference, these variables are generally considered to be nameable objects in the world (e.g. SES, historical prejudice), whereas in machine learning they represent some unspecified (and perhaps abstract) structure in the data. Our $Z$ follows the machine learning convention. } As in \citet{louizos2017causal}, $Z$ represents all the unobserved confounding variables which affect the outcomes or treatments (other than $A$). The features $X$ can be seen as proxies (noisy observations) for the confounders ($Z, A$). Altogether, the endogenous variables in our model are $X$, $A$, $Z$, $T$, and $Y$. We also have exogenous variables $\epsilon_X, \epsilon_A, \epsilon_Z, \epsilon_T, \epsilon_Y$ (not shown), each the immediate parent of (only) their respective endogenous variable.
The structural equations are: \begin{align}\label{eq:structural-functions} \nonumber &Z = f_Z(\epsilon_Z) &A = f_A(\epsilon_A) \\ \nonumber &X = f_X(Z, A, \epsilon_X) &T = f_T(Z, A, \epsilon_T) \\ &Y = f_Y(Z, A, T, \epsilon_Y) &\medspace \epsilon_V \sim P_V(\epsilon_V) \ \forall V \in \{Z, A, X, T, Y\} \end{align} {\color{black} Since $Z$ does not necessarily refer to tangible objects in the world, it is reasonable that $Z \perp A$ in our model. This does not prevent a characteristic such as SES (which may be correlated with $A$) from being a confounder --- rather, $Z$ could represent the component of SES which is not based on $A$. Since both confounders are inputs to all other variables in the SCM, the model can learn to represent variables which \textit{are} based on $A$ (e.g. SES) as a joint distribution of $Z$ and $A$. } With this SCM in hand, we can estimate various interventional outcomes if we know the values of $f_V \ \forall \ V \in \{Z, A, X, T, Y\}$. For instance, we might estimate: \begin{equation} \label{eq:scm-estimations} \begin{aligned} \mathds{E} \left[ Y | Z=z, A=a, do(T=1) \right] &= \mathds{E}_{\epsilon_Y \sim P_Y(\epsilon_Y)} [f_Y(z, a, 1, \epsilon_Y)]\\ \mathds{E} \left[ Y | Z=z, do(A=1), do(T=1) \right] &= \mathds{E}_{\epsilon_Y \sim P_Y(\epsilon_Y)} [f_Y(z, 1, 1, \epsilon_Y)]\\ \mathds{E} \left[ Y | Z=z, do(A=1) \right] &= \\ \mathds{E}_{\epsilon_Y \sim P_Y(\epsilon_Y)} \mathds{E}_{\epsilon_T \sim P_T(\epsilon_T)} &[f_Y(z, 1, f_T(z, 1, \epsilon_T), \epsilon_Y)] \end{aligned} \end{equation} which are the expected values over outcomes of interventions on $T$, on both $T$ and $A$, and on $A$ alone, respectively. However, the problem with the calculations in Eq. \ref{eq:scm-estimations} is that $Z$ is unobserved, so we cannot simply condition on its value. Rather, we observe some proxies $X$. Since the structural equations go the other direction --- $X$ is a function of $Z$, not the other way around --- inferring $Z$ from a given $X$ is a non-trivial matter. In summary, we need to learn two things: a generative model which can approximate the structural functions $f$, and an inference model which can approximate the distribution of $Z$ given $X$. Following the lead of \citet{louizos2017causal}, we use variational inference parametrized by deep neural networks to learn the parameters of both of these models jointly. In variational inference, we aim to learn an approximate distribution over the joint variables $P(Z, A, X, T, Y)$, by maximizing a variational lower bound on the log-probability of the observed data. As demonstrated in \citet{louizos2017causal}, the causal effects in the model become identifiable if we can learn this joint distribution. We extend their proof in Appendix \ref{app:proofs} to show that identifiability holds when including the sensitive attribute in the model (as in Fig. \ref{subfig:cevae-fair}). {\color{black} We discuss here the identifiability condition from \citet{louizos2017causal}. Given some treatment $T$ and outcome $Y$, the classic ``no hidden confounders'' assumption asserts that the set of observed variables $O$ blocks all backdoor paths from $T$ to $Y$. \citet{louizos2017causal} weaken this: they assume that there is a set of confounding variables $Z = O_Z \cup U_Z$ such that $Z$ blocks all backdoor paths from $T$ to $Y$, where $O_Z$ are observed and $U_Z$ are unobserved. They claim that if we recover the full joint distribution $p(Z, X, T, Y)$, then we can identify the causal effect $T \rightarrow Y$.
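Before describing how these quantities are approximated in practice, the following is a minimal sketch of how the interventional expectations in Eq. \ref{eq:scm-estimations} could be computed by Monte Carlo if the structural functions were known. The callables and the standard normal noise are illustrative stand-ins for the unspecified $f_V$ and $P_V$; this is not our actual estimation procedure, which must first infer $Z$.
\begin{verbatim}
import torch

def expected_Y_do(f_Y, f_T, z, do_A=1, do_T=None, n_samples=1000):
    # Monte Carlo estimate of E[Y | Z=z, do(A=do_A), (do(T=do_T))]:
    # sample the exogenous noise, clamp the intervened variables.
    # f_Y(z, a, t, eps) and f_T(z, a, eps) are the structural functions,
    # passed in as arbitrary callables (e.g. neural networks).
    eps_Y = torch.randn(n_samples)      # placeholder for P_Y(eps_Y)
    if do_T is not None:                # intervene on both A and T
        t = torch.full((n_samples,), float(do_T))
    else:                               # intervene on A only; T follows f_T
        eps_T = torch.randn(n_samples)  # placeholder for P_T(eps_T)
        t = f_T(z, do_A, eps_T)
    return f_Y(z, do_A, t, eps_Y).mean()
\end{verbatim}
The \texttt{do\_T=None} branch corresponds to the third case in Eq. \ref{eq:scm-estimations}, where $T$ is sampled through $f_T$ under the intervention on $A$.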
However, recovering this joint distribution is only possible if we have sufficiently informative proxies $X$. While recovering the full joint distribution does not mean we have to measure every confounder, we do have to at least measure some proxy for each confounder. This is a weaker assumption, but not fully general. There may be confounding factors which cannot be inferred from the proxies $X$ --- in this case, our model will be unable to learn the joint distribution, the causal effect will be unidentifiable, and we are back to square one: our causal estimates may be inaccurate. Determining the exact fairness implications of this remains an open problem --- it would depend on which confounders were missing, and which proxies were already collected. A complicating factor is that testing for unconfoundedness is difficult, and usually requires making further assumptions \citep{tran2016model}. Therefore we might unintentionally make unfair inferences if we are unaware that we cannot infer all confounders. If we think this is the case, one solution is to collect more proxies. This provides an alternative motivation for the idea of increasing fairness by measuring additional variables \citep{chen2018my}. } To learn a generative model of the data which is faithful to the structural model defined in Eq. \ref{eq:structural-functions}, we define distributions $p$ which will approximate various conditional probabilities in our model. We model the joint probability assuming the following factorization: \begin{equation} \label{eq:joint-probability} \begin{aligned} P(Z, A, X, T, Y) = p(Z) p(A) p(X | Z, A) p(T | Z, A) p(Y | Z, A, T) \end{aligned} \end{equation} Each of these $p$ corresponds to an $f$ in Eq. \ref{eq:structural-functions} --- formally, $p(V = v | W = w) = P_{\epsilon_{V}}[f_V(w, \epsilon_V) = v]$ for an endogenous variable $V$ and subset of endogenous variables $W$, where $\{V\}, W \subset \{Z, A, X, T, Y\}$. For simplicity, we choose computationally tractable probability distributions for each conditional probability in Eq. \ref{eq:joint-probability}: \begin{equation} \label{eq:distributions} \begin{aligned} p(Z) &= \prod_{j=1}^{D_Z} \mathcal{N}(Z_{j} | 0, 1)\\ p(A) &= Bern(A|\pi_A)\\ p(X | Z, A) &= \prod_{j=1}^{D_X} \mathcal{N}(X_{j} | \mu_X(Z, A), \sigma_X^2(Z, A)) \\ p(T | Z, A) &= Bern(T | \pi_T(Z,A)) \end{aligned} \end{equation} where $D_Z, D_X$ are the dimensionalities of $Z$ and $X$ respectively, and $\pi_A \in [0, 1]$ is the empirical marginal probability of $A=1$ across the dataset (if this is unknown, we could use a Beta prior over that distribution; in this paper we assume $A$ is observed for every example). For $p(Y | Z, A, T)$, we use either a Bernoulli or a Gaussian distribution, depending on whether $Y$ is binary or continuous: \begin{equation} \label{eq:distributions-Y} \begin{aligned} p_{binary}(Y | Z, A, T) &= Bern(Y | \pi_Y(Z,A,T))\\ p_{cont}(Y | Z, A, T) &= \mathcal{N}(Y | \mu_Y(Z,A,T), \sigma^2_Y(Z,A,T)) \end{aligned} \end{equation} To flexibly model the potentially complex and non-linear relationships in the true generative process, we specify several of the distribution parameters from Eqs. \ref{eq:distributions} and \ref{eq:distributions-Y} as the output of a function $g_V$, which is realized by a neural network (or TARNet \citep{shalit2016estimating}) with parameters $\theta_V$.
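As an illustration of this neural parameterization, here is a minimal sketch in PyTorch. All class and variable names are our own illustrative choices, not the exact implementation used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class TwoHeadedTARNet(nn.Module):
    # Shared representation phi with one prediction head per value of a
    # binary input; only the factual arm contributes to the prediction
    # (and therefore receives gradient) for a given example.
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ELU())
        self.head0 = nn.Linear(d_hidden, d_out)  # arm used when a = 0
        self.head1 = nn.Linear(d_hidden, d_out)  # arm used when a = 1

    def forward(self, v, a):
        h = self.phi(v)
        return (1 - a) * self.head0(h) + a * self.head1(h)

class GenerativeModel(nn.Module):
    # Neural parameterization of p(X | Z, A) and p(T | Z, A).
    def __init__(self, d_z, d_x, d_hidden=64):
        super().__init__()
        d_in = d_z + 1  # input is the concatenation [Z, A]
        self.g_x_mu = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ELU(), nn.Linear(d_hidden, d_x))
        self.g_x_logsig = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ELU(), nn.Linear(d_hidden, d_x))
        self.g_t = TwoHeadedTARNet(d_in, d_hidden, 1)  # logits of pi_T

    def p_x(self, z, a):
        za = torch.cat([z, a], dim=-1)
        # sigma_X^2 = exp(2 g_X^sigma), matching the reparameterization below
        return torch.distributions.Normal(
            self.g_x_mu(za), torch.exp(self.g_x_logsig(za)))

    def p_t(self, z, a):
        za = torch.cat([z, a], dim=-1)
        return torch.distributions.Bernoulli(logits=self.g_t(za, a))
\end{verbatim}
The gating in \texttt{forward} mirrors the TARNet property from Sec. \ref{sec:tarnets}: only the arm matching the factual value of the binary input receives an error signal for a given example.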
We parametrize the model of $X$ with neural networks $g_X^{\mu}, g_X^{\sigma}$: \begin{equation}\label{eq:reparamatrization-mlp} \begin{aligned} \mu_X(Z,A) &= g^{\mu}_X(Z, A) \\ \sigma_X^2(Z,A) &= \exp\left(2g^{\sigma}_X(Z, A)\right) \end{aligned} \end{equation} We use TARNets \cite{shalit2016estimating} (see Sec. \ref{sec:tarnets}) to parameterize the distributions over $T$ and $Y$. In our model, $A$ acts as the ``treatment'' for the TARNet that outputs $T$. Likewise $A$ and $T$ are joint treatments affecting $Y$ --- our $Y$ model can be seen as a \textit{hierarchical TARNet}, with one TARNet for each value of $A$, where each TARNet has an arm for each value of $T$. In all, this yields the following parametrization: \begin{equation}\label{eq:reparamatrization-tarnets} \begin{aligned} \pi_T(Z,A) &= (1 - A) \sigma(g^{A=0}_T(Z, A)) + A \sigma(g^{A=1}_T(Z, A)); \\ \pi_Y(Z,A,T) &= (1 - T) (1 - A) \sigma(g^{T=0,A=0}_Y(Z, A, T))\\ &\quad + T (1 - A)\sigma(g^{T=1,A=0}_Y(Z, A, T)) \\ &\quad + (1 - T) A \sigma(g^{T=0,A=1}_Y(Z, A, T)) \\ &\quad + T A \sigma(g^{T=1,A=1}_Y(Z, A, T)); \end{aligned} \end{equation} and analogously for $\mu_Y(Z,A,T)$ and $\sigma_Y^2(Z,A,T)$ (with appropriate output transformations in place of the sigmoid); here $\sigma$ is the sigmoid function $\sigma(x) = \frac{1}{1 + \exp(-x)}$ and the functions $g^{I}_R(V, I)$ are defined as in Sec. \ref{sec:tarnets}. We further define an inference model $q$, to determine the values of the latent variables $Z$ given observed $X, A$. This takes the form: \begin{equation} \label{eq:inference-model} \begin{aligned} q(Z | X, A) = \mathcal{N}(\mu_Z(X,A), \sigma_Z^2(X,A)) \end{aligned} \end{equation} where the normal distribution is reparametrized analogously to Eq. \ref{eq:reparamatrization-mlp} with networks $g_Z^{\mu}, g_Z^{\sigma}$. Since $A$ is always observed, we do not need to infer it, even though it is a confounder. We note that this is a different inference network from the one in \citet{louizos2017causal} --- we do not use the given treatments and outcomes in the inference model. We found this to be a simpler solution (no auxiliary networks necessary), and did not see a large change in performance. This is similar to the approach taken in \citet{parbhoo2018causal}. To learn the parameters of this model, we maximize the evidence lower bound (ELBO) on the log-probability of the data, which takes the form below; we note that this is also a valid lower bound to optimize for the conditional log-probability of the treatments and outcomes given the data. \begin{equation} \label{eq:elbo} \begin{aligned} \mathcal{L} = \sum_{i=1}^n &\mathds{E}_{q(z_i | x_i, a_i)} [\log{p(x_i | z_i, a_i)} + \log{p(t_i | z_i, a_i)} \\ &+ \log{p(y_i | z_i, a_i, t_i)} + \log{p(z_i)} - \log{q(z_i | x_i, a_i)}] \end{aligned} \end{equation} \section{Related Work} \label{sec:related-work} Our work most closely relates to the Causal Effect Variational Autoencoder \citep{louizos2017causal}. Some follow-up work is done by \citet{parbhoo2018causal}, who suggest a purely discriminative approach using the information bottleneck. Our model differs from theirs in that they do not include a sensitive attribute, and their model does not contain a ``reconstruction fidelity'' term, in our case $\log{p(x_i | z_i, a_i)}$. Previous papers which learn causal effects using deep learning (with all confounders observed) include \citet{shalit2016estimating} and \citet{johansson2016learning}, who propose TARNets as well as some form of balancing penalty. The intersection of fairness and causality has been explored recently.
Counterfactual fairness --- the idea that a fair classifier is one which does not change its prediction under the counterfactual value of $X$ when $A$ is flipped --- is a major theme \citep{kusner2017counterfactual}. Criteria for fairness in treatments are proposed in \citet{nabi2018fair}, and fair interventions are further explored in \citet{kusner2018causal}. \citet{zhang2018fairness} present a decomposition which provides a different way of understanding unfairness in a causal inference model. Other work focuses on the causal relationship between sensitive attributes and proxies in fair classification \citep{kilbertus2017avoiding}. \citet{kallus2018residual} explore the idea of learning from biased data, making the point that a ``fair'' predictor learned on biased data may not be fair under certain forms of distributional shift, though without touching on causal ideas. Some conceptually similar work has looked at the ``selective labels'' problem \citep{lakkaraju2018selective,dearteaga2018learning}, where only a biased selection of the data has labels available. There has also been related work on \textit{feedback loops} in fairness, and the idea that past decisions can affect future ones, in the predictive policing \citep{lum2016predict,pmlr-v81-ensign18a} and recommender systems \citep{hashimoto2018fairness} contexts, for example. \citet{barabas2017interventions} advocate for understanding many problems of fair prediction as ones of intervention instead. Another variational autoencoder-based fairness model is proposed in \citet{louizos2015variational}, but with the goal of fair representation learning, rather than causal modelling. \citet{dwork2012fairness} originated the term ``fairness through awareness'', and argued that the sensitive attribute needed to be given a place of privilege in modelling in order to reduce unfairness of outcomes. \section{Experiments} \label{sec:experiments} In this section we compare various methods for causal effect estimation. The three effects we are interested in are: \begin{itemize} \item $A \rightarrow T$, the causal effect of $A$ on $T$: \begin{equation*} \mathds{E}(T | do(A = 1), X) - \mathds{E}(T | do(A = 0), X) \end{equation*} \item $A \rightarrow Y$, the causal effect of $A$ on $Y$: \begin{equation*} \mathds{E}(Y | do(A = 1), X) - \mathds{E}(Y | do(A = 0), X) \end{equation*} \item $T \rightarrow Y$, the causal effect of $T$ on $Y$: \begin{equation*} \mathds{E}(Y | do(T = 1), X, A) - \mathds{E}(Y | do(T = 0), X, A) \end{equation*} \end{itemize} Note that all three effects are individual-level; that is, they are conditioned on some observed $X$ (and possibly $A$), and then averaged across the dataset. \subsection{Data} We evaluate our model using semi-synthetic data. The evaluation of causal models using non-synthetic data is challenging, since a randomized controlled trial on the intervention variable is required to validate correctness --- this is doubly true in our case, where we are concerned with two different possible interventions.
Additionally, while data from randomized controlled trials of treatment variables exist (albeit uncommonly), conducting a randomized controlled trial for a sensitive attribute is usually impossible. We have adapted the IHDP dataset \citep{multisite1990enhancing,brooks-gunn_liaw_klebanov_1994}---a standard semi-synthetic causal inference benchmark---for use in the setting of causal effect estimation under a sensitive attribute. The IHDP dataset is from a randomized experiment run by the Infant Health and Development Program (in the US), which ``targeted low-birth-weight, premature infants, and provided the treatment group with both intensive high-quality child care and home visits from a trained provider'' \cite{hill2011bayesian}. Pre-treatment variables were collected from both the child (e.g. birth weight, sex) and the mother at the time of birth (e.g. age, marital status), along with behaviors engaged in during the pregnancy (e.g. smoked cigarettes, drank alcohol) and the site of the intervention (where the family resided). We choose our sensitive attribute to be the mother's race, binarized as White and non-White. We follow a method for generating outcomes similar to the Response Surface B proposed in \citet{hill2011bayesian}. However, our setup differs since we are interested in additionally modelling a sensitive attribute and hidden confounders, so three more steps must be taken. First, we need to generate outcomes $Y$ for each example for $T \in \{0, 1\}$ under the counterfactual sensitive attribute $A$. Second, we need to generate a treatment assignment for each example for the counterfactual value of the sensitive attribute. Finally, we need to remove some data from the observable measurements to act as a hidden confounder, as in \citet{louizos2017causal}. We detail our full data generation method in Appendix \ref{app:data-generation}. We denote the outcome $Y$ under interventions $do(T=t), do(A=a)$ as $y_{T=t,A=a}$. The subroutines in Algorithms \ref{alg:ihdp-response} and \ref{alg:ihdp-treatment} generate all factual and counterfactual outcomes and treatments for each example, one for each possible setting of $A$ and/or $T$. Values of the constants that we use for data generation can be found in Appendix \ref{app:data-generation}. We choose our hidden confounding feature $Z$ to be birth weight. In the second (optional) step of data generation, we choose to remove 0, 1, or 2 other features. Especially if we choose features which are highly correlated with the hidden confounder, this has the effect of making the estimation problem more difficult. When removing 0 features, we do nothing. When removing 1 feature, we remove the feature which is most highly correlated with $Z$ (head size). When removing 2 features, we remove the two features most highly correlated with $Z$ (head size \& weeks born preterm). \subsection{Experimental Setup} \label{sec:exp-setup} We run four different models for comparison, including the one we propose. Since we are interested in estimating three different causal effects simultaneously ($A \rightarrow T, A \rightarrow Y, T \rightarrow Y$), we cannot compare against most standard causal inference benchmark models for treatment effect estimation. The models we test are the following: \begin{itemize} \item \textbf{Counterfactual MLP (CFMLP)}: a multilayer perceptron (MLP) which takes the treatment and sensitive attribute as input, concatenated to $X$, and aims to predict the outcome.
Counterfactual outcomes are calculated by simply flipping the relevant attributes and re-inputting the modified vector to the MLP. A similar auxiliary network learns to predict the treatment from a vector of $X$ concatenated to $A$. \item \textbf{Counterfactual Multiple MLP (CF4MLP)}: a set of four MLPs --- one for each combination of $(A, T) \in \{0, 1\}^2$. Examples are fed into the appropriate MLP for the factual outcome, and simply fed into another MLP for the appropriate counterfactual outcome. A similar pair of auxiliary networks predicts the treatment. \item \textbf{Causal Effect Variational Autoencoder with Sensitive Attribute (CVAE-A, Fig. \ref{subfig:cevae-sens})}: a model similar to \citet{louizos2017causal}, but with the simpler inference model we propose. We incorporate a sensitive attribute $A$ by concatenating $X$ to $A$ as input; counterfactuals along $A$ are taken by flipping $A$ and re-inputting the modified vector. Counterfactuals along $T$ are taken as in \citet{louizos2017causal}. \item \textbf{Fair Causal Effect Variational Autoencoder (FCVAE, Fig. \ref{subfig:cevae-fair})}: our proposed fairness-aware causal model, with $A$ concatenated to $Z$ as confounders. We run two versions: one where $A$ is used to help with reconstructing $X$ and inferring $Z$ from $X$ (FCVAE-1), and one where it is not (FCVAE-2). Formally, the inference model and generative model of $X$ in FCVAE-1 are $q(Z | X, A)$ and $p(X | Z, A)$, and in FCVAE-2 are $q(Z | X)$ and $p(X | Z)$ respectively. In both versions, $A$ is a confounder of both the treatment and the outcome. \end{itemize} The CFMLP is purely a classification baseline. It learns a mapping from input to output, estimating the conditional distribution $P(Y | X, A, T)$. The CF4MLP shares this goal, but has a more complex architecture---it learns a disjoint set of parameters for each setting of interventions, allowing it to model completely separate generative processes. However, it is still ultimately concerned with supervised prediction. Furthermore, neither of these models is built to consider the impact of hidden confounders. The CVAE-A is a model for causal inference of outcomes from treatments. Therefore, we should expect it to perform well in estimating $T \rightarrow Y$. It is also created to model these effects under hidden confounders. Therefore, the difference between CVAE-A and the MLPs will tell us the improvement which comes from appropriate causal modelling rather than classification. However, the CVAE-A does not consider the sensitive attribute $A$ as a confounder; rather, it treats it simply as another covariate of $X$. So in comparing the FCVAE to the CVAE-A, we observe the improvement that comes from causally modelling the dataset unfairness stemming from a sensitive attribute. In comparing the FCVAE to the MLPs, we observe the full impact of the FCVAE --- joint causal modelling of treatments, outcomes, sensitive attributes, and hidden confounders. See Appendix \ref{app:experiments} for experimental details. \subsection{Results} \label{sec:results} \subsubsection{Estimating Causal Effects} In this section, we evaluate how well the models from Sec. \ref{sec:exp-setup} can estimate the three causal effects $A \rightarrow T, A \rightarrow Y, T \rightarrow Y$. To avoid confusion with the words \textit{treatment} and \textit{outcome}, in each of these three causal interactions we will refer to the causing variable as the \textit{intervention} variable, and the affected variable as the \textit{result} variable.
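Concretely, each individual-level effect is estimated by clamping the intervention variable to each of its two values and differencing the predicted results. The following is a minimal sketch; the \texttt{model} interface (returning interventional expectations) is hypothetical, standing in for whichever estimator is being evaluated, and is not a specific library API.
\begin{verbatim}
def individual_effects(model, x, a):
    # model.predict_t(x, do_a)     ~ E[T | do(A = do_a), X = x]
    # model.predict_y(x, do_a, do_t) ~ E[Y | do(A), do(T), X = x]
    ie_a_t = model.predict_t(x, do_a=1) - model.predict_t(x, do_a=0)
    ie_t_y = (model.predict_y(x, do_a=a, do_t=1)
              - model.predict_y(x, do_a=a, do_t=0))
    # For A -> Y, T is set to its interventional value under each do(A),
    # mirroring y_{A=a}(x) = E[y | do(A=a), X=x, T=t_{A=a}(x)].
    ie_a_y = (model.predict_y(x, do_a=1, do_t=model.predict_t(x, do_a=1))
              - model.predict_y(x, do_a=0, do_t=model.predict_t(x, do_a=0)))
    return ie_a_t, ie_t_y, ie_a_y
\end{verbatim}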
To evaluate how well our model can estimate causal effects, we use PEHE: Precision in Estimation of Heterogeneous Effects \cite{hill2011bayesian}. This is calculated as $\mathrm{PEHE} = \sqrt{\mathds{E}[((r_1 - r_0) - (\hat{r}_1 - \hat{r}_0))^2]}$, where $r_i$ is the ground-truth value of the result under intervention $i$, and $\hat{r}_i$ is our model's estimate of that quantity. PEHE measures our ability to model both the factual (ground truth) and the counterfactual results. \begin{table}[] \begin{tabular}{llll} \hline Model & A $\rightarrow$ T & T $\rightarrow$ Y & A $\rightarrow$ Y \\ \hline CFMLP & 0.681 $\pm$ 0.00 & 4.51 $\pm$ 0.13 & 3.28 $\pm$ 0.07 \\ CF4MLP & 0.667 $\pm$ 0.00 & 4.58 $\pm$ 0.13 & 3.71 $\pm$ 0.09 \\ CVAE-A & 0.665 $\pm$ 0.00 & \textbf{3.80} $\pm$ 0.10 & 3.04 $\pm$ 0.06 \\ FCVAE-1 & \textbf{0.659} $\pm$ 0.00 & \textbf{3.82} $\pm$ 0.11 & \textbf{2.88} $\pm$ 0.06 \\ FCVAE-2 & \textbf{0.659} $\pm$ 0.00 & \textbf{3.81} $\pm$ 0.11 & \textbf{2.78} $\pm$ 0.06 \\ \hline \end{tabular} \caption{PEHE for each model on IHDP data (no extra features removed). Mean and standard errors shown, as calculated over 500 random seedings. Lower is better.} \label{table:ihdp-pehe-0} \end{table} \begin{table}[] \begin{tabular}{llll} \hline Model & A $\rightarrow$ T & T $\rightarrow$ Y & A $\rightarrow$ Y \\ \hline CFMLP & 0.675 $\pm$ 0.00 & 4.30 $\pm$ 0.11 & 3.42 $\pm$ 0.08 \\ CF4MLP & \textbf{0.661} $\pm$ 0.00 & 4.37 $\pm$ 0.11 & 3.89 $\pm$ 0.07\\ CVAE-A & 0.672 $\pm$ 0.00 & \textbf{4.05} $\pm$ 0.10 & 3.53 $\pm$ 0.07 \\ FCVAE-1 & 0.663 $\pm$ 0.00 & \textbf{4.00} $\pm$ 0.10 & \textbf{3.39} $\pm$ 0.08 \\ FCVAE-2 & 0.663 $\pm$ 0.00 & \textbf{3.99} $\pm$ 0.10 & \textbf{3.25} $\pm$ 0.07 \\ \hline \end{tabular} \caption{PEHE for each model on IHDP data (1 most informative feature removed). Mean and standard errors shown, as calculated over 500 random seedings. Lower is better.} \label{table:ihdp-pehe-1} \end{table} \begin{table}[] \begin{tabular}{llll} \hline Model & A $\rightarrow$ T & T $\rightarrow$ Y & A $\rightarrow$ Y \\ \hline CFMLP & 0.666 $\pm$ 0.00 & 6.03 $\pm$ 0.21 & 4.30 $\pm$ 0.12 \\ CF4MLP & \textbf{0.659} $\pm$ 0.00 & 5.77 $\pm$ 0.18 & 4.59 $\pm$ 0.10 \\ CVAE-A & 0.672 $\pm$ 0.00 & \textbf{5.46} $\pm$ 0.18 & 4.19 $\pm$ 0.10 \\ FCVAE-1 & \textbf{0.659} $\pm$ 0.00 & \textbf{5.40} $\pm$ 0.18 & \textbf{4.07} $\pm$ 0.11 \\ FCVAE-2 & \textbf{0.659} $\pm$ 0.00 & \textbf{5.39} $\pm$ 0.18 & \textbf{3.95} $\pm$ 0.10 \\ \hline \end{tabular} \caption{PEHE for each model on IHDP data (2 most informative features removed). Mean and standard errors shown, as calculated over 500 random seedings. Lower is better.} \label{table:ihdp-pehe-2} \end{table} In Tables \ref{table:ihdp-pehe-0}-\ref{table:ihdp-pehe-2}, we show the PEHE for each of the models described in Sec. \ref{sec:exp-setup}, for each causal effect of interest. Each table shows results for a version of the dataset with 0--2 of the most informative features removed (as measured by correlation with the hidden confounder). Therefore, the easiest problem is with zero features removed, and the hardest is with two. Note that in IHDP, $Y \in \mathds{R}$. Generally, as expected, we observe that the causal models achieve lower PEHE for most estimation problems. Also as expected, we observe that the PEHE for the more complex estimation problems ($A \rightarrow Y, T \rightarrow Y$) increases as the most useful proxies are removed from the data.
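For reference, a minimal sketch of the PEHE computation used throughout these tables (array names are illustrative):
\begin{verbatim}
import numpy as np

def pehe(r0, r1, r0_hat, r1_hat):
    # RMSE between true and estimated individual-level effects:
    # sqrt( E[ ((r1 - r0) - (r1_hat - r0_hat))^2 ] )
    true_effect = np.asarray(r1) - np.asarray(r0)
    est_effect = np.asarray(r1_hat) - np.asarray(r0_hat)
    return float(np.sqrt(np.mean((true_effect - est_effect) ** 2)))
\end{verbatim}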
We suspect there is less variation in the results for $A \rightarrow T$ since it is a simpler problem: there are no extra confounders (other than $Z$) or mediating factors to consider. We find that our model (the FCVAE) compares favorably to the other models in this experiment. We see that in general, the fairness-aware models (FCVAE-1 and FCVAE-2) have lower PEHE than all other models when estimating the causal effects relating to the sensitive attribute ($A \rightarrow Y, A \rightarrow T$). Furthermore, the FCVAE performs similarly to the CVAE-A at $T \rightarrow Y$ estimation, demonstrating a slight improvement (at least in the more difficult cases with 1 or 2 features removed). One interesting note is that FCVAE-1 (where $A$ is used in the reconstruction of $X$ and in the inference of $Z$) and FCVAE-2 seem to perform similarly, with FCVAE-2 being slightly better, if anything. This may seem surprising at first, since one might imagine that using $A$ would allow the model to learn better representations of $X$, particularly for the purpose of doing counterfactual inference across $A$. To explore this further, we examine in Table \ref{table:ihdp-mizx} the latent representations $Z$ learned by each model in terms of their encoder mutual information between $Z$ and $X$, which is calculated as $KL(q(Z | X) || p(Z))$, the KL-divergence from the encoder posterior to the prior. This quantity is roughly the same for both versions of the FCVAE, implying that the inference network $q(Z|\cdot)$ does not leverage the additional information provided by $A$ in its latent code $Z$. This is in fact sensible because the FCVAE has access to $A$ as an observed confounder in modeling the structural equations. We also notice that the CVAE-A contains about one bit of extra information in its latent code, implying some degree of success in capturing relevant information about $A$ in $Z$. But if the CVAE-A models all confounders during inference, why does it underperform relative to the FCVAE when estimating the downstream causal effects, especially $A \rightarrow Y$? We hypothesize that, by making the role of $A$ as a confounder explicit, the FCVAE can learn the interventional distributions with respect to $A$ (e.g., $p(Y|T, do(A=a), Z)$) rather than the conditional distributions of the CVAE-A (e.g., $p(Y|T, Z(A))$); we suspect the gating mechanism of the TARNet implementation of the structural equations is important in this regard. \begin{table}[] \begin{tabular}{ll} \hline Model & $KL \left[ q(z|\cdot) || p(z) \right] $ \\ \hline CVAE-A & 4.28 $\pm$ 0.10 \\ FCVAE-1 & 3.50 $\pm$ 0.12 \\ FCVAE-2 & 3.53 $\pm$ 0.12 \\ \hline \end{tabular} \caption{KL divergence from the encoder posterior $q(z|\cdot)$ to the prior $p(z)$ after training on IHDP; equivalent to encoder mutual information \cite{alemi2018fixing}. CVAE-A and FCVAE-1 use $(X, A)$ as input to the encoder, while FCVAE-2 uses $X$ only. Mean and standard errors shown, as calculated over 500 random seedings.} \label{table:ihdp-mizx} \end{table} \subsubsection{Learning a Treatment Policy} \label{sec:treatment-policy} The next natural question is: how does estimating these causal effects contribute to a fair decision-making policy? We examine two dimensions of this. We define a \textit{policy} $\pi: X, A \rightarrow T$ as a function which maps inputs (features and sensitive attribute) to treatments.
We suppose the goal is to assign treatments $T$ using a policy $\hat T = \pi(X, A)$ that maximizes its expected \textit{value} $V(\pi)$, defined here as the expected outcome $Y$ it achieves over the data, i.e. $V(\pi) = \mathds{E}_{x, a}[Y | do(T=\pi(x, a)), A=a, X=x]$. For example, we could imagine the treatments to be various medications, and the outcome to be some health indicator (e.g. number of months survived post-treatment). We can derive a policy from an outcome prediction model simply by outputting the predicted argmax value over treatments, i.e. $\pi(x, a) = \argmax_{t \in T} \mathds{E}_{\hat{Y}}[\hat{Y} | do(T=t), A=a, X=x]$, where $\hat{Y}$ is the model's prediction of the true outcome $Y$. The optimal policy $\pi^\star(x, a) = \argmax_{t \in T} \mathds{E}_Y[Y | do(T=t), A=a, X=x]$ takes the argmax over ground truth outcomes every time. First, we look at the mean \textit{regret} of the policy $\pi$, which is the difference between its achieved value and the value of the optimal policy: $R(\pi) = V(\pi^\star) - V(\pi)$. {\color{black} We note that in general, a policy's regret is not easy to compute or bound without assumptions on the outcome distribution in the data. } In Table \ref{table:ihdp-regret}, we display the expected regret values for the learned policies. We observe that the fairness-aware model achieves lower regret than the unaware causal model, and much lower regret than the non-causal models, for both the easier and more difficult settings of the IHDP data. \begin{table}[] \begin{tabular}{llll} \hline Model & 0 removed & 1 removed & 2 removed \\ \hline CFMLP & 0.37 $\pm$ 0.02 & 0.42 $\pm$ 0.02 & 0.81 $\pm$ 0.04 \\ CF4MLP & 0.31 $\pm$ 0.02 & 0.43 $\pm$ 0.02 & 0.59 $\pm$ 0.02 \\ CVAE-A & 0.21 $\pm$ 0.01 & 0.38 $\pm$ 0.01 & 0.59 $\pm$ 0.02 \\ FCVAE-1 & \textbf{0.19} $\pm$ 0.01 & \textbf{0.36} $\pm$ 0.01 & \textbf{0.55} $\pm$ 0.02 \\ FCVAE-2 & \textbf{0.19} $\pm$ 0.01 & \textbf{0.35} $\pm$ 0.01 & \textbf{0.55} $\pm$ 0.02\\ \hline \end{tabular} \caption{Regret for each model's policy on IHDP data with 0, 1, or 2 of the most useful covariates removed. Mean and standard errors shown, as calculated over 500 random seedings. Lower regret is better.} \label{table:ihdp-regret} \end{table} Next, we attempt to measure the policy's fairness. Most fairness metrics are designed for evaluating classification, not for intervention. However, \citet{chen2018my} explore an idea which is easily adjusted to the interventional setting: that an algorithm is unfair if it is much less accurate on one subgroup. Here, we adapt this notion to evaluate treatment policy fairness. For any $x$, let us say the policy $\pi$ is \textit{accurate} if it chooses the treatment which in fact yields the best outcome for that individual; i.e. if $\pi(x, a) = \pi^\star(x, a)$. We can define the \textit{accuracy} of the policy as $Acc(\pi) = \mathds{E}_{x, a}[\mathds{1}(\pi(x, a) = \pi^\star(x, a))]$, where $\mathds{1}$ is an indicator function. We can define the subgroup accuracy $Acc_{\alpha}$ as accuracy calculated while conditioning (not intervening) on a particular value $\alpha$ of $A$: $Acc_\alpha(\pi) = \mathds{E}_{x | A =\alpha}[\mathds{1}(\pi(x, \alpha) = \pi^\star(x, \alpha))]$. We condition rather than intervene on $A$ here since we are interested in measuring the impact of the policy on real, existing populations, rather than hypothetical ones. Finally, to evaluate the fairness of the policy, we can look at the \textit{accuracy gap}: $| Acc_1(\pi) - Acc_0(\pi) |$ (a computational sketch of these policy metrics is given below).
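A minimal sketch of how value, regret, and the accuracy gap are computed in our semi-synthetic setting, where ground-truth outcomes under both treatments are available (array names are illustrative):
\begin{verbatim}
import numpy as np

def policy_metrics(y0, y1, y0_hat, y1_hat, a):
    # y0, y1:         ground-truth outcomes under do(T=0) and do(T=1)
    # y0_hat, y1_hat: model-estimated outcomes under the same interventions
    # a:              binary sensitive attribute (all arrays of length n)
    pi = np.argmax(np.stack([y0_hat, y1_hat]), axis=0)  # derived policy
    pi_star = np.argmax(np.stack([y0, y1]), axis=0)     # optimal policy
    idx = np.arange(len(pi))
    y_do = np.stack([y0, y1])
    value = y_do[pi, idx].mean()                # V(pi), ground-truth outcomes
    regret = y_do[pi_star, idx].mean() - value  # R(pi) = V(pi*) - V(pi)
    accurate = (pi == pi_star)
    acc_gap = abs(accurate[a == 1].mean() - accurate[a == 0].mean())
    return value, regret, acc_gap
\end{verbatim}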
If this gap is high, the model is more unfair, since the policy has been more successful at modelling one group than the other, and much more consistently chooses the correct treatment for individuals in that group. In Table \ref{table:ihdp-acc-gap} we display the accuracy gaps for our models and baselines on the IHDP dataset. We observe that the FCVAE achieves a smaller accuracy gap than the models which do not consider the effect of the sensitive attribute. This is an encouraging sign that by understanding the confounding influence of sensitive attributes in biasing historical datasets, we can learn treatment policies which are more accurate for all subgroups of the data. \begin{table}[] \begin{tabular}{llll} \hline Model & 0 removed & 1 removed & 2 removed \\ \hline CFMLP & 0.042 $\pm$ 0.002 & 0.033 $\pm$ 0.002 & 0.062 $\pm$ 0.002 \\ CF4MLP & 0.034 $\pm$ 0.002 & 0.038 $\pm$ 0.002 & 0.054 $\pm$ 0.002 \\ CVAE-A & 0.033 $\pm$ 0.001 & \textbf{0.028} $\pm$ 0.001 & 0.051 $\pm$ 0.002 \\ FCVAE-1 & \textbf{0.031} $\pm$ 0.001 & \textbf{0.028} $\pm$ 0.001 & \textbf{0.046} $\pm$ 0.001 \\ FCVAE-2 & \textbf{0.030} $\pm$ 0.001 & \textbf{0.027} $\pm$ 0.001 & \textbf{0.047} $\pm$ 0.001 \\ \hline \end{tabular} \caption{Accuracy gap for each model's policy on IHDP data with 0, 1, or 2 of the most useful covariates removed. Mean and standard errors shown, as calculated over 500 random seedings. Lower gap is more fair.} \label{table:ihdp-acc-gap} \end{table} \section{Discussion} \label{sec:conclusions} In this paper, we proposed a causally-motivated model for learning from potentially biased data. We emphasize the importance of modeling the potential confounders of historical datasets: we model the sensitive attribute as an observed confounder contributing to dataset bias, and leverage deep latent variable models to approximately infer other hidden confounders. In Sec. \ref{sec:treatment-policy}, we demonstrated how to use our model to learn a simple treatment policy from data which assigns treatments more accurately and fairly than several causal and non-causal baselines. Looking forward, the estimation of sensitive attribute causal effects suggests several compelling new research directions, which we non-exhaustively discuss here: \begin{itemize} \item \textbf{Counterfactual Fairness:} Our model learns outcomes for counterfactual values of both $T$ and $A$. This means we could choose to implement a policy where we assess everyone under the same value $a'$, by assigning treatments to all individuals, no matter their original value $a$ of $A$, based on the inferred outcome distribution $P(Y | do(A=a'), X, T)$. Such a policy respects the definition of \textit{counterfactual fairness} proposed by \citet{kusner2017counterfactual}, which requires invariance to counterfactuals in $A$ at the individual level. \item \textbf{Path-Specific Effects:} Our model allows us to decompose $A \rightarrow Y$ into direct and indirect effects through mediation analysis of $T$ \citep{robins1992identifiability}. By estimating this decomposition, we could learn a policy which respects \textit{path-specific} fairness, as proposed by \citet{nabi2018fair}. \item \textbf{Analyzing Historical Bias:} Estimating causal effects between $A$, $T$, and $Y$ permits the analysis and comparison of bias in historical datasets. For instance, the effect $A \rightarrow T$ is a measure of bias in a historical policy, and the effect $A \rightarrow Y$ is a measure of bias in whatever system historically generated the outcome.
This could serve as the basis of a \textit{bias auditing technique} for data scientists. \item \textbf{Data Augmentation:} The absence of data (especially data missing not-at-random) has strong implications for downstream modeling in both fairness \citep{kallus2018residual} and causal inference \citep{rubin1976inference}. Our model outputs counterfactual outcomes for both $A$ and $T$, which could be used for \textit{fair missing data imputation} \citep{van2018flexible,sterne2009multiple}. This could in turn enable the application of simpler methods like supervised learning to interventional problems. \item \textbf{Fair Policies Under Constraints:} In this paper, we consider an approach to fairness where understanding dataset bias is paramount, rather than the more common constraint-based fairness-accuracy tradeoff \citep{hardt2016equality,menon2017cost}. However, in some domains we may be interested in policies which satisfy a \textit{fairness constraint} (e.g., that the same distribution of treatments is given to each group). Estimating the underlying causal effects would be useful for constrained policy learning. \item \textbf{Incorporating Prior Knowledge:} Graphical models (both probabilistic and SCM) permit the specification of \textit{prior knowledge} when modeling data, and provide a framework for inference that balances these beliefs with evidence from the data. This is a powerful fairness idea---we may believe a priori that a dataset \textit{should} look a certain way if not for some bias. In the context of a fair machine learning pipeline that considers many datasets, this relates to the AutoML task of learning distributions over datasets that share global parameters \cite{edwards2016towards}. \end{itemize} In automated decision making, the focus on intervention over classification \cite{barabas2017interventions} points toward a more equitable deployment of machine learning when only biased data are available, but it also raises significant technical challenges. We believe causal modeling to be an invaluable tool in addressing these challenges, and hope that this paper contributes to the discussion around how best to understand and make predictions from existing datasets without replicating existing biases.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Due to the AdS/CFT correspondence~\cite{Mald,Gubs,Witten1}, over the past years a lot of attention has been focused on black holes in anti-de Sitter (AdS) space. It was convincingly argued by Witten~\cite{Witten2} that the thermodynamics of black holes in AdS spaces (AdS black holes) can be identified with that of a certain dual conformal field theory (CFT) in the high temperature limit. With this correspondence, one can gain some insights into the thermodynamic properties and phase structures of strongly coupled CFTs by studying the thermodynamics of AdS black holes. Nowadays it is well-known that the AdS Schwarzschild black hole is thermodynamically unstable when its horizon radius is small, while it is stable for large radius; there is a phase transition, named the Hawking-Page phase transition~\cite{Hawk}, between the large stable black hole and a thermal AdS space. This phase transition is explained by Witten~\cite{Witten2} as the confinement/deconfinement transition of Yang-Mills theory in the AdS/CFT correspondence. Thus it is of interest to consider rotating/charged generalizations of black holes in AdS spaces. In the AdS/CFT correspondence, rotating black holes in AdS space are dual to certain CFTs in a rotating space~\cite{Haw}, while charged ones are dual to CFTs with a chemical potential~\cite{R-charged}. Indeed, the most general higher dimensional rotating black holes in AdS space have recently been found~\cite{Haw,Gibbons}. On the other hand, it is also of interest to consider corrections to AdS black holes due to higher derivative curvature terms in the low energy effective action of string theories. In the AdS/CFT correspondence, these higher derivative curvature terms correspond to the correction terms of the large $N$ expansion on the CFT side. Among the gravity theories with higher derivative curvature terms, the so-called Gauss-Bonnet gravity has some special features. For example, first, the resulting field equations contain no derivatives of the metric higher than second order, and the theory has been proven to be free of ghosts when expanded about flat space, evading any problems with unitarity; second, the Gauss-Bonnet term appears in the low energy effective action of heterotic string theory; and third, most importantly, in Gauss-Bonnet gravity the analytic expression of the static, spherically symmetric black hole solution can be found~\cite{Deser,Whee,Cai}. Indeed, the Gauss-Bonnet term gives rise to some interesting effects on the thermodynamics of black holes in AdS space~\cite{Myers,Nojiri}. It is of great interest to find rotating black hole solutions in Gauss-Bonnet gravity. However, this is a rather hard task since the equations of motion of the theory are highly nonlinear. In this work, we would like to report on some results for slowly rotating black hole solutions in the Gauss-Bonnet gravity theory, where the rotation parameter is treated as a small quantity. Such so-called slowly rotating black holes have also been investigated in other gravity theories~\cite{Horne}-\cite{Ghosh}. Here we would like to mention that some rotating black brane solutions have been obtained in the Gauss-Bonnet gravity theory in \cite{Dehg}, but those so-called rotating solutions are essentially obtained by a Lorentz boost from the corresponding static ones; they are equivalent to the static ones locally, although not globally. This paper is organized as follows.
In the next section, we first present the slowly rotating Gauss-Bonnet black hole solutions in AdS space. The black hole horizon could be a surface with positive, zero, or negative constant scalar curvature. Some related thermodynamic quantities are calculated there. In Sec.~III, we generalize to the charged case, and obtain the slowly rotating charged Gauss-Bonnet black hole solutions in AdS space. Sec.~IV contains a brief summary. \section{Slowly Rotating Gauss-Bonnet Black Holes in AdS Space} The Einstein-Hilbert action with a Gauss-Bonnet term and a negative cosmological constant, $\Lambda=-(d-1)(d-2)/2l^2$, in $d$ dimensions can be written down as~\cite{Deser,Cai,Wilt} \begin{equation} \label{3eq1} S=\frac{1}{16\pi G}\int d^dx\sqrt{-g}\left(R +\frac{(d-1)(d-2)}{l^2} + \alpha (R_{\mu\nu\gamma\delta}R^{\mu\nu\gamma\delta} -4 R_{\mu\nu} R^{\mu\nu}+R^2)-4\pi G F_{\mu\nu}F^{\mu\nu}\right), \end{equation} where $\alpha$ is the Gauss-Bonnet coefficient with dimension $(length)^2$ and is positive in the heterotic string theory. We therefore restrict ourselves to the case $\alpha \ge 0$. The Gauss-Bonnet term is a topological invariant in four dimensions. Therefore $d\ge 5$ is assumed in this paper. Further, $F_{\mu\nu}$ is the Maxwell field strength. Varying the action yields the equations of the gravitational field \begin{eqnarray} \label{3eq2} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R &= &\frac{(d-1)(d-2)}{2l^2}g_{\mu\nu} + \alpha \left (\frac{1}{2}g_{\mu\nu}(R_{\gamma\delta\lambda\sigma} R^{\gamma\delta\lambda\sigma}-4 R_{\gamma\delta}R^{\gamma\delta} +R^2) \right. \nonumber \\ &&~~~- \left. 2 RR_{\mu\nu}+4 R_{\mu\gamma}R^{\gamma}_{\ \nu} +4 R_{\gamma\delta}R^{\gamma\ \delta}_{\ \mu\ \ \nu} -2R_{\mu\gamma\delta\lambda}R_{\nu}^{\ \gamma\delta\lambda} \right) \nonumber \\ &&\quad +8\pi G \left(F_{\mu\alpha}F_\nu^{~\alpha}-\frac{1}{4}g_{\mu\nu} F_{\alpha\beta}F^{\alpha\beta}\right) . \end{eqnarray} We assume the metric to be of the following form \begin{equation} \label{eq:metric} ds^2 = -f(r)dt^2 +\frac{1}{f(r)}dr^2 +r^2h_{ij}dx^idx^j +2 a r^2 g(r)h(\theta) dt d\phi, \end{equation} where $a$ is a constant. $h_{ij}dx^idx^j$ represents the line element of a ($d-2$)-dimensional hypersurface with constant curvature $(d-2)(d-3)k$ and volume $\Sigma_k$, where $k$ is a constant. Without loss of generality, one may take $k=1$, $0$, or $-1$. When $k=1$, one has $h_{ij}dx^idx^j=d\theta^2+\sin^2\theta d\phi^2 +\cos^2\theta d\Omega_{d-4}^2$ and $h=\sin^2\theta$; when $k=0$, $h_{ij}dx^idx^j=d\theta^2+d\phi^2 + dx_{d-4}^2$ and $h=1$; and when $k=-1$, $h_{ij}dx^idx^j=d\theta^2+\sinh^2\theta d\phi^2 +\cosh^2\theta d\Omega_{d-4}^2$ and $h=\sinh^2\theta$, where $dx^2_{d-4}$ is the line element of a $(d-4)$-dimensional Ricci flat Euclidian surface, while $d\Omega_{d-4}^2$ denotes the line element of a $(d-4)$-dimensional unit sphere. Thus, the horizon of the black hole (\ref{eq:metric}) is a positive, zero, or negative constant scalar curvature surface, as $k=1$, $0$, and $-1$, respectively~\cite{Cai}. In this section, we first consider the case without charge, namely $F_{\mu\nu}=0$.
When $g(r)=0$, the function $f(r)$ describing a static black hole solution was found in \cite{Cai} \begin{equation}\label{eq4} f(r)=k +\frac{r^2}{2\tilde\alpha}\left ( 1 - \sqrt{1-\frac{4\tilde \alpha}{l^2}}\sqrt{1+\frac{ m}{r^{d-1}}} \right), \end{equation} where $\tilde\alpha= (d-3)(d-4)\alpha$ and the integration constant $m$ is related to the gravitational mass $M$ of the solution by \begin{eqnarray} \label{eq:m} M= \frac{(d-2)\Sigma_k (1-4\tilde \alpha/l^2) m}{64 \pi G\tilde \alpha} . \end{eqnarray} In the limit of $\tilde\alpha\rightarrow 0$, we have \begin{eqnarray} \label{f:KerrAdS} f_{\rm AdS}(r)= k+\frac{r^2}{l^2}-\frac{16 \pi G M}{(d-2)\Sigma_k}\frac1{ r^{d-3}} , \end{eqnarray} which gives the AdS-Schwarzschild black hole solution with positive, zero, or negative constant scalar curvature horizon, depending on $k$. On the other hand, in the large $r$ limit, $f(r)$ becomes \begin{equation}\label{eq:f:asym} f(r)=k +\frac{r^2}{l_{\rm eff}^2} - \frac{16\pi G M_{\rm eff}}{(d-2) \Sigma_k }\frac{1}{r^{d-3}} +O(r^{5-2d}), \end{equation} from which we can read off the effective cosmological constant and effective mass \begin{eqnarray} \label{eq:leff} \frac{1}{l_{\rm eff}^2}=\left ( 1 - \sqrt{1-\frac{4\tilde \alpha}{l^2}}\right) \frac{1}{2\tilde\alpha}, \quad \quad M_{\rm eff} = \frac{M}{\sqrt{1-4\tilde\alpha/l^2}} . \end{eqnarray} \begin{figure}[htb] \includegraphics[width=.6\linewidth]{frnew.eps}\\ \caption{The behavior of the function $f(x=r/\sqrt{\tilde\alpha})$ in the case of $k=1$, $d=5$, and $l^2=125\tilde\alpha$. The curves correspond to the mass parameter $m=\frac{7n+n^2}{2}\tilde\alpha^2$ with $n=0,1,\cdots,10$, respectively from top to bottom.}\label{fig:fr} \end{figure} In Fig.~\ref{fig:fr} we plot the function $f(r)$ in the case of $d=5$ and $k=1$. In this case, $f(r)$ is a pure increasing function of $r$ for $r> 0$. It approaches the asymptotic form $k+\frac{r^2}{2\tilde\alpha}(1-\sqrt{1-\frac{4\tilde\alpha}{l^2}})$ for large $r$. The metric has a horizon if $m\geq 4 \tilde \alpha^2$. Since the solution with $M=0$ is the AdS vacuum solution, there is a mass gap from the AdS vacuum to the minimal black hole with mass $m=4\tilde\alpha^2$. When $0<m< 4\tilde\alpha^2$, the solution describes a spacetime with a deficit angle. When $d>5$, the mass gap disappears. When $k=0$ or $k=-1$, the mass gap also disappears. For more details see \cite{Cai}. Now we consider the slowly rotating black hole solution with $g(r)\ne 0$. To linear order in the parameter $a$, the metric function $f(r)$ still keeps the form (\ref{eq4}). On the other hand, the $t\phi$ component of the gravitational field equations leads to an equation for the function $g(r)$ \begin{eqnarray} \label{eq:ddg} \left(\log g'(r)\right)' = - \frac{2 d\, r^{d-1}+m (d+1)}{2 r \left(r^{d-1}+m\right)} . \end{eqnarray} It is interesting to note that this equation is independent of $k$. After explicit integration, we have \begin{eqnarray} \label{eq10} g(r) =c_1+ \frac{2 c_2}{(d-1)m} \sqrt{1+\frac{m}{r^{d-1}}}\,, \end{eqnarray} where $c_1$ and $c_2$ are two integration constants. Now we fix these constants. Expanding (\ref{eq10}) to leading order at large $r$, one has \begin{equation} g(r) =\left(c_1 + \frac{2c_2}{(d-1)m}\right) +\frac{c_2}{(d-1)r^{d-1}} +\cdots . \end{equation} Comparing this with the large $r$ asymptotic behavior of the higher dimensional Kerr-AdS solution given in \cite{Haw}, we find \begin{equation}\label{eq:c12} c_1= \frac{1}{2\tilde \alpha}, \quad c_2= -\frac{(d-1)\sqrt{1-4\tilde \alpha /l^2}}{4\tilde \alpha}m .
\end{equation} Then we obtain the function $g(r)$ \begin{eqnarray} \label{sol:gr:n} g(r)=\frac{1}{2\tilde\alpha}\left ( 1 - \sqrt{1-\frac{4\tilde \alpha}{l^2}}\sqrt{1+\frac{ m}{r^{d-1}}} \right). \end{eqnarray} As a self-consistency check, we see that when $\tilde\alpha \to 0$, the solution (\ref{sol:gr:n}) indeed gives the asymptotic behavior of the slowly rotating Kerr-AdS solution~\cite{Haw}. For the slowly rotating solution, the horizon $r_+$ is still determined by the equation $f(r)=0$, up to linear order in the rotation parameter $a$. The coordinate angular velocity of a locally nonrotating observer, with four-velocity $u^\mu$ such that $u\cdot \xi_{(\phi)}=0$, is \begin{eqnarray} \label{eq:Omega} \Omega=-\frac{g_{t\phi}}{g_{\phi\phi}} = a g(r)= \frac{a}{2\tilde\alpha}\left ( 1 - \sqrt{1-\frac{4\tilde \alpha}{l^2}}\sqrt{1+\frac{ m}{r^{d-1}}} \right). \end{eqnarray} In contrast to the ordinary Kerr black hole in asymptotically flat spacetime, the angular velocity does not vanish at spatial infinity in the present case. Instead, we have the expression \begin{eqnarray} \label{eq:Omega:infty} \Omega_\infty = \frac{a}{2\tilde\alpha}\left(1-\sqrt{1-\frac{4\tilde\alpha}{l^2}} \right)=\frac{a}{l_{\rm eff}^2}. \end{eqnarray} We can see that the angular velocity vanishes at spatial infinity only when $l\rightarrow \infty$. This is a remarkable feature in AdS space. With $\tilde \alpha\rightarrow 0$, we also get the correct Kerr-AdS limit. On the other hand, the angular velocity in~(\ref{eq:Omega}) on the horizon turns out to be \begin{eqnarray} \label{eq:Omega:horizon} \Omega_H= -\frac{k a}{r_+^2}, \end{eqnarray} where we have used the fact that on the horizon $f(r_+)=0$. This velocity $\Omega_H$ can be thought of as the angular velocity of the black hole. One can also define the angular velocity of the black hole with respect to a frame that is static at spatial infinity. We have \begin{eqnarray} \label{eq:omega} \omega_H= \Omega_H-\Omega_\infty=-\left(\frac{a}{l^2_{\rm eff}}+ \frac{a k}{r_+^2}\right). \end{eqnarray} Apparently, this angular velocity will vanish for $k=-1$ even for nonzero $a$ when $r_+^2= 2\tilde \alpha$. However, this case will not happen since, in the case of $k=-1$, the minimal horizon radius of the black hole satisfies $r_{\rm min}^2>2\tilde\alpha$~\cite{Cai}. It is just this angular velocity $\omega_H$ which enters into the first law of rotating black hole thermodynamics in AdS space~\cite{Gibb}. The mass of the black holes can be expressed in terms of the horizon radius $r_+$ \begin{eqnarray} \label{Mass} M= \frac{(d-2)\Sigma_k r_+^{d-3}}{16\pi G} \left(k+ \frac{\tilde \alpha k^2}{r_+^2}+\frac{r_+^2}{l^2}\right), \end{eqnarray} which is the same as in the static case~\cite{Cai}. The angular momentum of the black hole is \begin{eqnarray} J =\frac{2 a M}{d-2} = \frac{a \Sigma_k r_+^{d-3}}{8\pi G} \left(k +\frac{\tilde \alpha k^2}{r_+^2}+\frac{r_+^2}{l^2}\right) . \end{eqnarray} The Hawking temperature of the black holes can be easily obtained by requiring the absence of a conical singularity at the horizon in the Euclidean sector of the black hole solution. It is the same as in the static case \begin{eqnarray} \label{Temperatue} T=\frac{(d-1) r_+^4+(d-3) k l^2 r_+^2 +(d-5) \tilde \alpha k^2 l^2}{ 4\pi l^2 r_+(r_+^2+ 2\tilde \alpha k)}, \end{eqnarray} up to linear order in the rotation parameter $a$, since the parameter $a$ first enters the $g_{tt}$ metric component at second order, namely at order $a^2$.
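For completeness, we note that the horizon value (\ref{eq:Omega:horizon}) follows directly from the horizon condition: setting $f(r_+)=0$ in Eq.~(\ref{eq4}) gives \begin{equation} 1-\sqrt{1-\frac{4\tilde\alpha}{l^2}}\sqrt{1+\frac{m}{r_+^{d-1}}}=-\frac{2\tilde\alpha k}{r_+^2}, \end{equation} so that evaluating Eq.~(\ref{eq:Omega}) at the horizon yields $\Omega_H = a\, g(r_+) = -ka/r_+^2$.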
From the first law of black hole thermodynamics $dM=TdS+\omega_H dJ$, one may easily see that the variation of the entropy is also second order in $a$. Therefore, to linear order, the entropy expression is unchanged from the static one~\cite{Cai} \begin{equation} S= \frac{\Sigma_k r_+^{d-2}}{4G}\left( 1+ \frac{d-2}{d-4}\frac{2\tilde \alpha k}{r_+^2}\right). \end{equation} \section{Slowly Rotating Charged Gauss-Bonnet Black Holes in AdS Space} In this section we consider the charged case. The action, field equations, and metric ansatz are the same as in Eqs.~(\ref{3eq1}), (\ref{3eq2}), and (\ref{eq:metric}), respectively. In the case of a charged static black hole, the spherical symmetry of the metric and flux conservation give us the electric field \begin{eqnarray} \label{eq:Er:Ansatz} -A_t'=F_{tr}=\frac{Q}{4\pi r^{d-2}} . \end{eqnarray} This gives the $A_t$ component of the electro-magnetic potential \begin{eqnarray} \label{A0} A_t = \frac{Q}{4\pi(d-3) r^{d-3}} . \end{eqnarray} When $g(r)=0$, the function $f(r)$ describing a static charged black hole solution in $d$ dimensions is given by~\cite{Wilt,Nojiri} \begin{eqnarray}\label{eq:f:r} f(r) &=& k+\frac{r^2}{2\tilde\alpha}\left( 1 - \sqrt{1-\frac{4\tilde\alpha}{l^2}}\sqrt{1+\frac{m}{r^{d-1}}-\frac{q^2}{ r^{2d-4}}} \right) ,\nonumber \end{eqnarray} where the gravitational mass $M$ and charge $Q$ of the solution are \begin{eqnarray} \label{eq:mq} M= \frac{(d-2)\Sigma_k (1-4\tilde \alpha/l^2)}{64 \pi\tilde \alpha }m, \quad\quad Q^2=\frac{\pi(d-2)(d-3)(1-4\tilde \alpha/l^2)}{2\tilde \alpha G}q^2 . \end{eqnarray} \begin{figure}[htb] \includegraphics[width=.6\linewidth]{frnewQ.eps}\\ \caption{The behavior of the function $f(x=r/\sqrt{\tilde\alpha})$ for different $q$ in the case of $k=1$, $d=5$, $l^2=15\tilde\alpha$, and $m=100\tilde\alpha^2$. The curves correspond to $q^2=5(7n+n^2)\tilde\alpha^3$ with $n=0,1,\cdots ,10$, respectively from bottom to top.}\label{fig:frQ} \end{figure} The general behavior of the function $f(r)$ with respect to the variation of $Q$ is shown in Fig.~\ref{fig:frQ}; the behavior of the solution is analyzed in detail in Ref.~\cite{Nojiri}. Since the black hole rotates along the direction $\phi$, it generates a magnetic field. To take this effect into account we add the vector potential $A_\phi$ \begin{eqnarray} \label{eq:EMvector:Ansatz} A_\phi = -a Q c(r) h(\theta) . \end{eqnarray} Then, the field equations of the electro-magnetic field $\partial_a(\sqrt{-g} F^{ab})=0$, using the metric ansatz~(\ref{eq:metric}), lead to an equation for the function $c(r)$ \begin{eqnarray} \label{eq:EMfield:eom} (r^{d-4} f(r) c'(r))' -2k(d-3)r^{d-6} c(r)= \frac{g'(r)}{4\pi} . \end{eqnarray} Thus, once we obtain the metric function $g(r)$, we can solve the differential equation for the electro-magnetic potential function $c(r)$. In addition, up to linear order in $a$, $A_t$ still keeps the form (\ref{A0}). As in the case without charge, to linear order in $a$ the metric function $f(r)$ gets no correction from the rotation, that is, it is the same as in the static case. On the other hand, the $t\phi$ component of the field equations decouples from $c(r)$ and leads to an equation for the function $g(r)$ \begin{eqnarray} \label{eq:ddgq} \left(\log g'(r)\right)'=-\frac{d}{dr}\left(\log r^d \sqrt{1+\frac{m}{ r^{d-1}}-\frac{q^2}{r^{2d-4}} }\right) .
\end{eqnarray} Integrating this differential equation, we get a formal solution \begin{eqnarray} \label{eq:gr} g(r)&=&c_1-c_2\int\frac{ dr}{r^d \sqrt{1+\frac{m}{ r^{d-1}}-\frac{q^2}{r^{2d-4}} }} \nonumber \\ &=&c_1-\frac{c_2}{m} \int^{\frac{r}{m^{\frac1{d-1}}}} \frac{dx}{x^d\sqrt{1+\frac1{x^{d-1}}- \frac{q^2}{m^{\frac{2d-4}{d-1}}x^{2d-4}}}} . \end{eqnarray} When $q=0$ we can evaluate this integral explicitly, recovering the result~(\ref{eq10}) of the previous section \begin{eqnarray} g_{q=0}(r)=c_1+ \frac{2 c_2}{(d-1)m} \sqrt{1+\frac{m}{r^{d-1}}}\,.\nonumber \end{eqnarray} When $q \ne 0$, we are not able to give an explicit expression for $g(r)$. However, since the charge $q$ does not affect the large-$r$ behavior of the solution, the constants $c_1$ and $c_2$ can be determined and turn out to be the same as in Eq.~(\ref{eq:c12}); therefore the solution $g(r)$ is \begin{eqnarray} \label{gr:gen} g(r) =\frac{1}{2\tilde\alpha}\left(1- \sqrt{1-\frac{4\tilde\alpha}{l^2}}\right) -\frac{(d-1)\sqrt{1-4\tilde \alpha/l^2}}{4\tilde \alpha} \int_{y}^\infty \frac{dx}{x^d\sqrt{1+\frac1{x^{d-1}}- \frac{q^2}{m^{\frac{2d-4}{d-1}}x^{2d-4}}}}, \end{eqnarray} where $y=\frac{r}{m^{1/(d-1)}}$. The asymptotic form of $g(r)$ is given by \begin{eqnarray} \label{gr:asym} g(r) &=& \frac{1}{l_{\rm eff}^2}-\frac{m\sqrt{1-\frac{4\tilde\alpha}{l^2}}}{ 4\tilde \alpha r^{d-1}} \left(1 -\frac{m}{4 r^{d-1}} +\frac{(d-1)q^2}{2(3d-5)r^{2d-4}}+\cdots\right), \nonumber \end{eqnarray} so that $g(r)$ approaches $l_{\rm eff}^{-2}$ at large $r$. We can also obtain the small-$q$ approximation of $g(r)$ \begin{eqnarray} \label{g:small q} g(r) &=& \frac{1}{2\tilde\alpha}\left ( 1 - \sqrt{1-\frac{4\tilde \alpha}{l^2}}\sqrt{1+\frac{ m}{r^{d-1}}} \right) \\ &-&\frac{m q^2\sqrt{1-4\tilde\alpha/l^2}}{4\tilde \alpha \, r^{3d-5} } \frac{-1+\sqrt{1+ m/r^{d-1}}\, {}_2F_1(\frac{7d-11}{2(d-1)}, \frac12, \frac{9d-11}{2(d-1)},-\frac m{r^{d-1}})}{ \sqrt{1+r^{d-1}/m}}+O(q^4).\nonumber \end{eqnarray} On the horizon, up to ${\cal O}(q^2)$, it is \begin{eqnarray} \label{grH} g(r_+)&=& -\frac{k}{r_+^2}-\frac{q^2}{4\tilde\alpha} \frac{\sqrt{1-4\tilde\alpha/l^2}}{r_+^{2d-4} \sqrt{1+ \frac{m}{r_+^{d-1}}}} \nonumber \\ &-&\frac{m q^2\sqrt{1-4\tilde\alpha/l^2}}{4\tilde \alpha \, r_+^{3d-5} } \frac{-1+\sqrt{1+ m/r_+^{d-1}}\, {}_2F_1(\frac{7d-11}{2(d-1)}, \frac12, \frac{9d-11}{2(d-1)},-\frac m{r_+^{d-1}})}{ \sqrt{1+r_+^{d-1}/m}}. \end{eqnarray} Therefore, the horizon angular velocity $\Omega_H=a g(r_+)$ also receives corrections at ${\cal O}(q^2)$. As we see in Fig.~\ref{fig:gbar}, $g(r_+)$ increases as $q$ increases. As a result, the horizon angular velocity also increases with $q$ for fixed $r_+$. \begin{figure}[htb] \includegraphics[width=.6\linewidth]{grnew.eps}\\ \caption{The behavior of $g(r)$ for different $q$ with $d=5$, $l^2=15 \tilde \alpha$, $m=100\tilde\alpha^2$. Here $q^2$ is taken to be $5(7j+j^2)\tilde\alpha^3$ with $j=0,\cdots ,10$, respectively from bottom to top. $g(r)$ approaches the constant $\frac{1}{2\tilde\alpha} \left(1-\sqrt{1-\frac{4\tilde \alpha}{l^2}} \right)=l_{\rm eff}^{-2} $ in the large $r$ limit. }\label{fig:gbar} \end{figure} The function $c(r)$ is obtained by solving the differential equation~(\ref{eq:EMfield:eom}) \begin{eqnarray} \label{diff:c} (r^{d-4} f c')'-2k (d-3)r^{d-6} c= \frac{(d-1)m \sqrt{1-4\tilde\alpha/l^2}}{16\pi\tilde\alpha} \frac{1}{r^d\sqrt{1+\frac{m}{r^{d-1}}- \frac{q^2}{r^{2d-4}}}} . \end{eqnarray} We find that $c(r)$ can be written as \begin{eqnarray} \label{c:delta} c(r)= -\frac{1}{4\pi (d-3) r^{d-3}} + q^2 \epsilon(r) .
\end{eqnarray} Note that here the first term is independent of $k$. The function $\epsilon(r)$ satisfies \begin{eqnarray} \label{eq:epsilon} (r^{d-4} f \epsilon')'-2k(d-3)r^{d-6} \epsilon= \frac{(d-2) \sqrt{1-4\tilde\alpha/l^2}}{4\pi \,r^{2d-3}} \frac{1}{\sqrt{1+\frac{m}{r^{d-1}}- \frac{q^2}{r^{2d-4}}}}. \end{eqnarray} We find that the leading term of $\epsilon$ is of the form $\epsilon(r)\propto r^{-3d+7} $. Thus, in the large $r$ limit, we arrive at $$ c(r)\simeq -\frac{1}{4\pi (d-3) r^{d-3}} +\frac{ \sqrt{1-4\tilde\alpha/l^2}}{8\pi(3d-7)}\frac{q^2l_{\rm eff}^2}{ r^{3d-7}} . $$ As a result, the electro-magnetic fields associated with the solution are \begin{eqnarray} \label{eq:Br} F_{tr} = \frac{Q}{4\pi r^{d-2}},~~ F_{r\phi} = -a Q c'(r)h(\theta),~~ F_{\theta\phi} = -a Q c(r)h'(\theta). \end{eqnarray} The expressions for the mass and the angular momentum of this solution do not change upon the introduction of the charge $Q$, since the charge does not alter the asymptotic behavior of the metric. The magnetic dipole moment of this slowly rotating Gauss-Bonnet black hole is \begin{eqnarray} \label{eq:MDM} \mu= Q a. \end{eqnarray} Therefore, the gyromagnetic ratio is given by \begin{eqnarray} \label{eq:gyro} g= \frac{2\mu M}{Q J} = d-2 , \end{eqnarray} which depends only on the number of spacetime dimensions. The value is the same as in the case without the Gauss-Bonnet term~\cite{Aliev}. In conclusion, the Gauss-Bonnet term does not change the gyromagnetic ratio of the rotating black hole. \section{Summary and Discussion} Starting from the non-rotating charged Gauss-Bonnet black hole solutions in anti-de Sitter spacetime, we have obtained the slowly rotating solution by introducing a small angular momentum and solving the equations of motion to linear order in the angular momentum parameter. If one chooses the metric component $g_{t\phi}$ to be proportional to $r^2 g(r)$, the equation for $g(r)$ simplifies considerably and becomes integrable. The radial electric field is chosen so that the electric flux lines are continuous. The vector potential is chosen to have a non-radial component $A_\phi=-aQ c(r)\sin^2\theta$ to represent the magnetic field due to the rotation of the black hole. Since the off-diagonal component of the stress tensor of the electro-magnetic field is independent of $c(r)$, the equation for $g(r)$ decouples from $c(r)$ and is integrable. As expected, our solution reduces to the slowly rotating Kerr-AdS black hole solution when the Gauss-Bonnet coefficient vanishes, $\tilde \alpha \rightarrow 0$. The expressions for the mass, temperature, and entropy of the black hole solution, in terms of the horizon radius, do not change up to linear order in the angular momentum parameter $a$. The angular momentum is expressed in terms of $a$ and the mass $M$ of the black hole, and the gyromagnetic ratio of the Gauss-Bonnet black hole is obtained. It is shown that the Gauss-Bonnet term does not change the gyromagnetic ratio of rotating black holes. \begin{acknowledgments} We thank K. Maeda and N. Ohta for helpful discussions. This work was supported in part by the Korea Research Foundation Grant funded by the Korea Government (KRF-2005-075-C00009; H.-C.K.). RGC was supported in part by a grant from the Chinese Academy of Sciences, and by the NSF of China under grants No.~10325525, No.~10525060 and No.~90403029. \end{acknowledgments}
\section{Introduction} The ``Andrews' squeezing system'' was first described by Giles in \cite{gi78:CTP} and further studied in \cite{ma81:CTS}. It is a planar multibody system whose topology consists of closed kinematic loops (see Figure \ref{fig:angles}). The Andrews' system was promoted in \cite{sc90:MSH} as a benchmark problem to compare different multibody solvers. Nowadays it is a well-known benchmark problem \cite{ha-wa91:SODE2,testset} for the numerical integration of differential-algebraic equations as well. The equations are of the Lagrangian form (or descriptor form, see also \cite{ar01:RCS}) \begin{equation} \label{eq:1} \begin{cases} f(t,y,y',y'',\lambda) = 0 \\ g(y) = 0 \end{cases} \end{equation} where the function $f$ describes the dynamical equations and $g$ gives the (holonomic) constraints. Here $y\in\mathbb{R}^n$ are the (generalized) position coordinates, $y'$ and $y''$ are the first and second derivatives, respectively, and $\lambda$ is the Lagrange multiplier. It is well known that singularities of any kind hinder solving equations numerically \cite{ro-sc88:DMS,ha-wa91:SODE2,ba-av94:SFA,ei-ha95:RMC}. Intuitively, a singularity is where the (generic) number of degrees of freedom of the system changes. Mathematically these are the points where the rank of the Jacobian of $g$ drops. Hence in this paper we will not consider the actual dynamical equations and analyse only the constraints given by $g$. Most differential equation solvers include the possibility of monitoring singularities, and usually, when the proximity of a singularity is detected, it is best to interrupt the computation. But this kind of monitoring is local only; that is, it does not tell us a priori where the singularities lie, but only alerts us when it is too late to fix things, so to speak. Also, the monitoring is often a non-negligible part of the computational cost. Therefore, it would be highly useful to know {\em a priori} where the singularities are, or to make sure that there are no singularities, or perhaps even to remove them (for the latter approach, see \cite{ar01:RCS}). Locating singularities has been studied also in \cite{mc00:GDL}. If we cannot avoid or remove the singularities, at least knowing where they are encountered is helpful (indeed, necessary) when planning a computation that should proceed without interruptions. One can then tune the chosen integration algorithm so that the disturbing effect of the singularities is diminished, for example by compensating for the singularity of the Kepler problem with a local change of variables within the computation, as in \cite{le-re05:SHD}. Further techniques for compensating singularities in multibody systems are gathered and concisely compared in \cite{ba-av94:SFA} and \cite{ei-ha95:RMC}. The paper is organized as follows: in the next Section we present the situation in detail and formulate the constraint equations in polynomial form. Section \ref{sec:algebra} gathers the necessary algebraic tools. Section \ref{sec:analysis} contains the actual analysis where we show that the mechanism indeed has singularities for certain parameter values. In Section \ref{sec:examples} there are some numerical examples of singular configurations, and in Section \ref{sec:discuss} we summarize and discuss the results, and address possible future work. \section{Andrews' squeezing mechanism} The squeezing mechanism is given by the following equations.
\begin{equation} \label{eq:2} g(y) = \begin{cases} a_1\cos(y_1) - a_2\cos(y_1 + y_2) - a_3\sin(y_3) - b_1 \\ a_1\sin(y_1) - a_2\sin(y_1 + y_2) + a_3\cos(y_3) - b_2 \\ a_1\cos(y_1) - a_2\cos(y_1 + y_2) - a_4\sin(y_4 + y_5) - a_5\cos(y_5) - w_1 \\ a_1\sin(y_1) - a_2\sin(y_1 + y_2) + a_4\cos(y_4 + y_5) - a_5\sin(y_5) - w_2 \\ a_1\cos(y_1) - a_2\cos(y_1 + y_2) - a_6\cos(y_6 + y_7) - a_7\sin(y_7) - w_1 \\ a_1\sin(y_1) - a_2\sin(y_1 + y_2) - a_6\sin(y_6 + y_7) + a_7\cos(y_7) - w_2 \end{cases} \end{equation} Compared to the original articles mentioned above, we have chosen the following notation for the parameters and angles: \begin{align*} & a_1=rr\quad a_2=d \quad a_3=ss\quad a_4=e \quad a_5=zt\quad a_6=zf \quad a_7=u \\ & b_1=xb\quad b_2=yb\quad w_1=xa\quad w_2=ya \\ & y_1 = \beta \quad y_2=\Theta \quad y_3=\gamma \quad y_4=\Phi \quad y_5=\delta \quad y_6=\Omega \quad y_7=\epsilon \end{align*} so the positions in Cartesian coordinates of the fixed nodes $A$ and $B$ are given by $b=(b_1,b_2)$ and $w=(w_1,w_2)$, and the lengths of the rods by $a=(a_1,\dots,a_7)$; see Figures \ref{fig:angles} and \ref{fig:lengths}. Fixing the parameters $a$, $b$, and $w$, we have a map $g\,:\,\mathbb{R}^7\to\mathbb{R}^6$. Hence the set of possible configurations, which is the zeroset $M_g=g^{-1}(0)$, is in general a curve (or possibly empty). Our task is to analyse the singularities of $M_g$, so let us state more precisely what is meant by a singularity. As mentioned before, at a singularity the number of degrees of freedom changes. It is well known \cite{ro-sc88:DMS,ba-av94:SFA,mc00:GDL} that this corresponds to the situation where the rank of the Jacobian drops. \begin{define} Let $f\,:\,\mathbb{R}^n\to\mathbb{R}^k$ be any smooth map where $k<n$ and let $df$ be its Jacobian matrix. Let $M=f^{-1}(0)\subset\mathbb{R}^n$ be the zeroset of $f$. A point $q\in M$ is a \emph{singular point} of $M$, if $df$ does not have maximal rank at $q$. \end{define} What in fact geometrically ``happens'' at a singular point may be quite complicated to determine. Typically the tangent space to $M$ does not change continuously in the neighbourhood of a singular point, or possibly $M$ intersects itself there. However, in all cases numerical problems occur, so it is important to try to find all singular points. Note that the constraint equations \eqref{eq:2} (and hence the elements of the Jacobian matrix) are {\em not} polynomials, yet our algebraic approach works only in a polynomial setting. However, this problem is circumvented by reformulating $g(y)$ as polynomials in the sines and cosines of $y_i$ by using the trigonometric identities \begin{align*} & \cos(x)^2+\sin(x)^2=1\\ & \sin(x\pm y)=\sin(x)\cos(y)\pm\cos(x)\sin(y) \\ & \cos(x\pm y)=\cos(x)\cos(y)\mp\sin(x)\sin(y) \end{align*} Setting $c_i=\cos(y_i)$ and $s_i=\sin(y_i)$ we get the equations \begin{equation} p(c,s)=\begin{cases} a_1 c_1-a_2\big(c_1c_2-s_1s_2\big)-a_3s_3-b_1=0\\ a_1 s_1-a_2\big(s_1c_2+c_1s_2\big)+a_3c_3-b_2=0\\ a_1 c_1-a_2\big(c_1c_2-s_1s_2\big)-a_4\big(s_4c_5+c_4s_5\big) -a_5c_5-w_1=0\\ a_1 s_1-a_2\big(s_1c_2+c_1s_2\big)+a_4\big(c_4c_5-s_4s_5\big) -a_5s_5-w_2=0\\ a_1 c_1-a_2\big(c_1c_2-s_1s_2\big)-a_6\big(c_6c_7-s_6s_7\big) -a_7s_7-w_1=0\\ a_1 s_1-a_2\big(s_1c_2+c_1s_2\big)-a_6\big(s_6c_7+c_6s_7\big) +a_7c_7-w_2=0\\ c_i^2+s_i^2-1=0,\qquad i=1,\dots,7. \end{cases} \label{psysteemi} \end{equation} We have 13 polynomial equations ($p_i=0$), 11 parameters ($a_1,\dots,a_7,\,b_1,b_2,w_1,w_2$) and 14 variables ($c_1,s_1,\dots,c_7,s_7$).
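The correspondence between \eqref{eq:2} and \eqref{psysteemi} is easy to verify mechanically. The following small script is our illustrative sketch of such a check (in Python with the sympy library, rather than in the \textsf{Singular} language used for the actual ideal computations in this paper); it substitutes $c_i=\cos(y_i)$, $s_i=\sin(y_i)$ into $p_1,\dots,p_6$ and confirms that they reduce exactly to the components of $g$:

\begin{verbatim}
# Sketch: verify that p(c,s) with c_i = cos(y_i), s_i = sin(y_i)
# reproduces the trigonometric constraints g(y).
import sympy as sp

y = sp.symbols('y1:8')
a1, a2, a3, a4, a5, a6, a7 = sp.symbols('a1:8')
b1, b2, w1, w2 = sp.symbols('b1 b2 w1 w2')
c = [sp.cos(t) for t in y]   # c_i = cos(y_i)
s = [sp.sin(t) for t in y]   # s_i = sin(y_i)

g = [a1*sp.cos(y[0]) - a2*sp.cos(y[0]+y[1]) - a3*sp.sin(y[2]) - b1,
     a1*sp.sin(y[0]) - a2*sp.sin(y[0]+y[1]) + a3*sp.cos(y[2]) - b2,
     a1*sp.cos(y[0]) - a2*sp.cos(y[0]+y[1]) - a4*sp.sin(y[3]+y[4])
         - a5*sp.cos(y[4]) - w1,
     a1*sp.sin(y[0]) - a2*sp.sin(y[0]+y[1]) + a4*sp.cos(y[3]+y[4])
         - a5*sp.sin(y[4]) - w2,
     a1*sp.cos(y[0]) - a2*sp.cos(y[0]+y[1]) - a6*sp.cos(y[5]+y[6])
         - a7*sp.sin(y[6]) - w1,
     a1*sp.sin(y[0]) - a2*sp.sin(y[0]+y[1]) - a6*sp.sin(y[5]+y[6])
         + a7*sp.cos(y[6]) - w2]

p = [a1*c[0] - a2*(c[0]*c[1] - s[0]*s[1]) - a3*s[2] - b1,
     a1*s[0] - a2*(s[0]*c[1] + c[0]*s[1]) + a3*c[2] - b2,
     a1*c[0] - a2*(c[0]*c[1] - s[0]*s[1]) - a4*(s[3]*c[4] + c[3]*s[4])
         - a5*c[4] - w1,
     a1*s[0] - a2*(s[0]*c[1] + c[0]*s[1]) + a4*(c[3]*c[4] - s[3]*s[4])
         - a5*s[4] - w2,
     a1*c[0] - a2*(c[0]*c[1] - s[0]*s[1]) - a6*(c[5]*c[6] - s[5]*s[6])
         - a7*s[6] - w1,
     a1*s[0] - a2*(s[0]*c[1] + c[0]*s[1]) - a6*(s[5]*c[6] + c[5]*s[6])
         + a7*c[6] - w2]

for gi, pi in zip(g, p):
    assert sp.simplify(sp.expand_trig(gi) - sp.expand(pi)) == 0
print("p_1, ..., p_6 reproduce the components of g")
\end{verbatim}

The identities $p_7=0,\dots,p_{13}=0$ hold automatically under this substitution, since $\cos^2(y_i)+\sin^2(y_i)-1=0$; the interesting content of the reformulation is thus entirely in $p_1,\dots,p_6$.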
Note that each $p_i$ is of degree two in $c_i,s_i$. The equations $p_1=0,\dots,p_6=0$ correspond directly to the 6 original equations $g(y)=0$ with the simple substitutions above (for example $\cos(y_1+y_2)=c_1c_2-s_1s_2$), and the equations $p_7=0,\dots,p_{13}=0$ are the extra identities due to ``forgetting'' the angle variables $y_i$. Note that this reformulation of the constraints as algebraic equations is not just a trick which happens to work in this special case; indeed, most constraints appearing in the simulation of multibody systems are of this type. Now the above equations define $p$ as a map $p\,:\,\mathbb{R}^{14}\to\mathbb{R}^{13}$. Hence we expect that the zeroset $V=p^{-1}(0)\subset\mathbb{R}^{14}$ is a curve (or possibly empty). Singularities are then the points of this curve where the rank of $dp$ is not maximal. To find these points we now need to introduce some tools from commutative algebra. \begin{figure}[htbp] \begin{center} \epsfig{file=angles.eps,height=10cm,width=13cm} \end{center} \caption{The angles $y_i$ of the Andrews' system.} \label{fig:angles} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{file=lengths.eps,height=10cm,width=13cm} \end{center} \caption{The lengths $a_i$ and nodes of the Andrews' system.} \label{fig:lengths} \end{figure} \section{Background} \label{sec:algebra} In this section we briefly present the necessary definitions from commutative algebra and algebraic geometry. More details can be found in \cite{cox-li-os92:IVA}, \cite{singularbook}, \cite{no76:FFR}, and \cite{ei96:ca}. These are roughly in the order of increasing difficulty, \cite{cox-li-os92:IVA} being the most accessible, but unfortunately not containing the necessary material on Fitting ideals. \subsection{Ideals and varieties} Let $\mathbb{K}$ be a field and let $\mathbb{K}[x_1,\,\dots,\,x_n]$ be the ring of polynomials in $x_1,\,\dots,\,x_n$, with coefficients in $\mathbb{K}$. A subset $I \subset \mathbb{K}[x_1,\,\dots,\,x_n]$ is an {\em ideal} if it satisfies \begin{enumerate} \item[(i)] $0 \in I$. \item[(ii)] If $f,g \in I$, then $f+g \in I$. \item[(iii)] If $f \in I$ and $h \in \mathbb{K}[x_1,\,\dots,\,x_n]$, then $hf \in I$. \end{enumerate} Ideals are often given by \emph{generators}. Let $f_1,\dots,f_s \in \mathbb{K}[x_1,\,\dots,\,x_n]$. Then the set \begin{equation*} \langle f_1,\dots,f_s \rangle := \left\{ \sum_{i=1}^s h_i f_i \mid h_1,\dots,h_s \in \mathbb{K}[x_1,\,\dots,\,x_n] \right\} \end{equation*} is an {\em ideal generated by} $f_1,\dots,f_s$. Any set of generators is called a \emph{basis}. Ideals are purely algebraic objects. The geometrical counterpart of an ideal is its locus, or variety. Let $I$ be an ideal in $ \mathbb{K}[x_1,\,\dots,\,x_n]$. Its corresponding {\em variety} is \[ \mathsf{V}_{\mathbb{F}}(I) = \{ (a_1,\dots,a_n)\in\mathbb{F}^n \mid f(a_1,\dots,a_n)=0 \quad \forall f\in I \} \] where $\mathbb{F}$ is some field extension of $\mathbb{K}$. Note that it is often natural to choose $\mathbb{F}$ different from $\mathbb{K}$. If the field is clear from context we will sometimes write simply $\mathsf{V}(I)$. Now different ideals may have the same variety. However, if one is interested mainly in the variety then it is useful to define \[ \sqrt{I}=\big\{f\in\mathbb{K}[x_1,\dots,x_n]\,|\, f^n\in I \textrm{\ \ for\ some\ \ }n\ge 1\big\}. \] If $I$ is an ideal, then $\sqrt{I}$ is the \emph{radical} of $I$; it is the biggest ideal that has the same variety as $I$, and all ideals having the same variety have the same radical.
Also, we always have $I\subset \sqrt{I}$, and if $I=\sqrt{I}$ we say that $I$ is a {\em radical ideal}. Some rudimentary relations between ideals and their varieties are collected in the following lemma. \begin{lem} Let $I$ and $J$ be ideals. Then \begin{enumerate} \item $\mathsf{V}(I \cup J) = \mathsf{V}(I) \cap \mathsf{V}(J)$. \item $\mathsf{V}(I \cap J) = \mathsf{V}(I) \cup \mathsf{V}(J)$. \item $I \subset J$ if and only if $\mathsf{V}(I) \supset \mathsf{V}(J)$. \end{enumerate} \end{lem} Next we have to express the rank condition algebraically. To this end we need \begin{define} If $I= \langle f_1,\dots,f_s \rangle$, its {\em Fitting ideal} $F_I$ is the ideal generated by all maximal minors of the Jacobian matrix of $(f_1,\dots,f_s)$.\footnote{In general one can define Fitting ideals of minors of any given size. However, the above definition is sufficient for the purposes of the present paper.} \end{define} Now $\mathsf{V}(F_I)$ corresponds to the points where the rank is not maximal. However, the points are also required to be on $\mathsf{V}(I)$. Hence we conclude that the set of singular points, $S$, is given by \[ S=\mathsf{V}(I\cup F_I) \] In analysing varieties it is often helpful to decompose them into simpler parts. Similarly one may try to decompose a given ideal into simpler parts. This leads to the following notions. \begin{define} A variety $V$ is {\em irreducible} if $V=V_1 \cup V_2$ implies $V=V_1$ or $V=V_2$.\\ An ideal $I$ is {\em prime} if $f,g\in\mathbb{K}[x_1,\,\dots,\,x_n]$ and $fg\in I$ imply that either $f\in I$ or $g\in I$. \end{define} There is a very close connection between prime ideals and irreducible varieties. The precise nature of this connection depends on the chosen field. However, for our purposes the following is sufficient. \begin{lem} If $I$ is prime, then $\mathsf{V}(I)$ is irreducible.\\ Any radical ideal can be written uniquely as a finite intersection of prime ideals, \begin{equation*} \sqrt{I} = I_1 \cap \cdots \cap I_r, \end{equation*} where $I_i \not\subset I_j$ for $i \neq j$. \end{lem} This is known as the {\em prime decomposition of $\sqrt I$}, and the $I_i$ are called the minimal associated primes of $I$. The above Lemma then immediately gives: \begin{coroll} \[ \mathsf{V}(I) = \mathsf{V}(\sqrt{I}) = \mathsf{V}(I_1) \cup \cdots \cup \mathsf{V}(I_r), \] where all $\mathsf{V}(I_i)$ are irreducible. \end{coroll} Hence our strategy in analysing varieties is to compute the minimal associated primes of the relevant ideal, and then examine each irreducible component separately. \subsection{Gr\"obner bases} An essential point is that all the operations above, especially finding the radical and the prime decomposition, can be carried out {\em algorithmically} using the given generators of $I$. To do this we need to compute special bases of ideals, called Gr\"obner bases. We will only briefly indicate the relevant ideas and refer to \cite{cox-li-os92:IVA} and \cite{singularbook} for more details. First we need to introduce {\em monomial orderings}. All the algorithms handling the ideals are based on some ordering among the terms of the generators of the ideal. Intuitively, an ordering $\succ$ is such that given a set of monomials (e.g. the terms of a given polynomial), $\succ$ puts them in order of importance: given any two monomials $x^{\alpha}:=x_1^{\alpha_1}\dots x_n^{\alpha_n}$ and $x^{\beta}$, where $\alpha\neq\beta$ are different multi-indices, then either $x^{\alpha}\succ x^{\beta}$ or $x^{\beta}\succ x^{\alpha}$.
A common choice is to use the \emph{degree reverse lexicographic} ordering \cite{cox-li-os92:IVA}. In our analysis we shall frequently need {\em product orders}, which are formed as follows: if $\succ_A$ and $\succ_B$ are two orderings, we divide the variables $x_i$ into two subsets, and use $\succ_A$ on the first subset and $\succ_B$ on the second. This is indicated with the following notation: \[ \mathbb{K}[(x_4,x_5,x_7),(x_1,x_2,x_3,x_6)]. \] This is the same set as $\mathbb{K}[x_1,\dots,x_7]$, but now the parentheses indicate that we will use $\succ_A$ among the variables $(x_4,x_5,x_7)$ and $\succ_B$ among the variables $(x_1,x_2,x_3,x_6)$, and moreover all monomials in which variables of the first group appear are always bigger than monomials containing only variables of the second group. We will see later why this is useful. Finally, the aforementioned Gr\"obner basis is a special kind of generating set with respect to some ordering. Given any set of generators and an ordering, the corresponding Gr\"obner basis exists and can be computed. The relevant algorithm is usually called the \emph{Buchberger algorithm}. The drawback of this algorithm is that it has a very high complexity in the worst case, and in practice the complexity depends considerably on the chosen ordering.\footnote{So far, no satisfactory theory of Gr\"obner basis complexity has been developed.} Nevertheless, Gr\"obner bases have proved to be very useful in many different applications. Nowadays there exist many different implementations and improvements of the Buchberger algorithm. We chose to use the well-known program {\sf Singular} \cite{GPS05}, \cite{singularbook} in all the computations in this paper. \section{Analysing singularities} \label{sec:analysis} \subsection{Geometric description of the singularities} Now, getting back to our system \eqref{psysteemi}, we see that we can take the components of $p$ to be elements of $\mathbb{Q}(a,b,w)[c,s]$ where $\mathbb{Q}(a,b,w)$ is the field of rational functions of $a$, $b$, and $w$. Hence we have an ideal $J=\langle p_1,\dots,p_{13}\rangle\subset \mathbb{Q}(a,b,w)[c,s]$ and the corresponding Fitting ideal $F_J$. On the other hand, we may view the ``parameters'' $a$, $b$, and $w$ also as variables, since they appear polynomially in the equations; hence we could also consider $J\subset \mathbb{Q}[a,b,w,c,s]$. Taking this point of view we can give an intuitive description of what kind of situations we can expect. \[ \begin{cases} J\subset \mathbb{Q}[a,b,w,c,s]\\ V_{\mathbb{R}}( J)\subset \mathbb{R}^{25}. \end{cases} \] In this way $V_{\mathbb{R}}( J)$ should be 12-dimensional (recall that $J$ is generated by 13 equations), i.e. a curve depending on 11 parameters. On the other hand, if we fix the parameters $a$, $b$, and $w$ we get a curve in $\mathbb{R}^{14}$ which will be denoted by $V_{a,b,w}$. In the same way we can view $ V_{\mathbb{R}}( J\cup F_J)$ as a variety in $\mathbb{R}^{25}$, and fixing the parameters we get the singular points $V_{a,b,w}^S$. Obviously $V_{a,b,w}^S\subset V_{a,b,w}\subset\mathbb{R}^{14}$. Then what kind of variety should $V_{\mathbb{R}}( J\cup F_J)$ be? Since the Jacobian of $p$ is of size $13\times 14$, \emph{generically} we expect to get 2 independent conditions for the rank to drop. That is, augmenting $J$ with $F_J$ should bring in 2 more equations. Hence we expect that $V_{\mathbb{R}}( J\cup F_J)$ is 10-dimensional; in other words, we expect that if the 11 parameters are chosen independently then $V_{a,b,w}^S$ should be empty.
On the other hand, if a single condition among the parameters is satisfied, then $V_{a,b,w}^S$ should consist of isolated points. Further, if there are 2 conditions among the parameters (i.e. 9 parameters freely chosen), then it would be possible for $V_{a,b,w}^S$ to be one-dimensional. But then our original constraint equations would be redundant, i.e. there would be more than one degree of freedom. Below we will in fact observe that if a certain condition on the parameters is satisfied, $V_{a,b,w}^S$ is indeed a finite set of points. \subsection{Singular variety} To study $V_{\mathbb{R}}( J\cup F_J)$ we could in principle use Gr\"obner basis theory in a straightforward manner. Let $G$ be the Gr\"obner basis of $J\cup F_J$ using the product order $\mathbb{Q} [(c,s),(a,b,w)]$. Let us denote by $g_1,\dots,g_r$ the elements of $G$ which do not depend on $c$ and $s$. \begin{define} Let $S_J=\langle g_1,\dots,g_r\rangle$; then we say that $\mathsf{V}_{\mathbb{R}}(S_J)\subset\mathbb{R}^{11}$ is the singular variety associated to $J$. \end{define} It follows from Gr\"obner basis theory that $V_{a,b,w}$ can have singularities \emph{only if} $(a,b,w)\in \mathsf{V}_{\mathbb{R}}(S_J)$. Hence, theoretically, we could now find the singularities of the Andrews' system in a straightforward manner by calculating the Gr\"obner basis of $J\cup F_{J}$. But this is an enormous task, due to $F_J$ being generated by high-degree polynomials, not to mention the inclusion of the 11 parameters $a,b,w$. We could not obtain the solution in reasonable time using our workstation with 64 GB of memory. Instead, something else needs to be done. Luckily there is another approach: noting that $p_1,p_3,p_5$ have common terms, as well as $p_2,p_4,p_6$, motivates the study of two subsystems: one spanned by $p_5-p_3$ and $p_6-p_4$, the other spanned by $p_5-p_1$ and $p_6-p_2$ (along with the relevant trigonometric identities from $p_7,\dots,p_{13}$). These subsystems are tractable and give useful information about the whole system as well. Proceeding in this way we could at least determine that the singular variety is not empty, and we could compute some subvarieties of it. \subsection{Subsystem 4567} \label{sec:subsystem4567} Intuitively, the nodes and bars 4, 5, 6, 7 form a subsystem, see Figures \ref{fig:angles} and \ref{fig:lengths}. We suspect that when the lengths $a_4,\dots,a_7$ are such that the ``4567'' system is able to become one-dimensional, hence in some sense degenerate, there should be a singularity in the whole system (see also the net example in \cite{ar01:RCS}). We will shortly see that this is indeed the case. Define \begin{align*} q_1 &:= p_5-p_3 = a_4\big(s_4c_5+c_4s_5\big)+a_5c_5-a_6\big(c_6c_7-s_6s_7\big)-a_7s_7 \\ q_2 &:= p_4-p_6 = a_4\big(c_4c_5-s_4s_5\big)-a_5s_5+a_6\big(s_6c_7+c_6s_7\big)-a_7c_7 \\ q_{i} &:= p_{i+7}= c_{i+1}^2+s_{i+1}^2-1, \quad i=3,\dots,6. \end{align*} Note that $q_1,q_2$ contain only the angle variables $c_i$, $s_i$ and the parameters $a_i$ for $i=4,\dots,7$; that is why we do not need the other $p_i$. Let $J_{4567}$ be the ideal generated by $q_1,\dots,q_6$. Hence we have \begin{equation} \label{eq:S1} J_{4567} \subset \mathbb{Q}[(c_4,s_4,c_5,s_5,c_6,s_6,c_7,s_7),(a_4,a_5,a_6,a_7)] \end{equation} where we have indicated the relevant product order.
The Gr\"obner basis $G$ of $J_{4567}\cup F_{J_{4567}}$ with respect to this ordering contains 191 elements (denoted by $g_1,\dots,g_{191}$), out of which 3 are especially enlightening: \begin{align*} g_5 &= c_6a_6a_7, \\ g_{16} &= c_4a_4a_5, \qquad \text{ and} \\ g_1 &= \prod_{i=1}^{8}t_i, \qquad \text{ where} \\ &t_1=a_4-a_5-a_6-a_7\\ &t_2=a_4-a_5+a_6+a_7\\ &t_3=a_4+a_5+a_6+a_7\\ &t_4=a_4+a_5-a_6-a_7\\ &t_5=a_4-a_5+a_6-a_7\\ &t_6=a_4-a_5-a_6+a_7\\ &t_7=a_4+a_5-a_6+a_7\\ &t_8=a_4+a_5+a_6-a_7. \end{align*} Since $g_1$ is the only generator which does not contain any of the variables $c_i$ and $s_i$, we conclude that \begin{thm} \label{thm:t_i} The singular variety of $J_{4567}$ is \[ S_{J_{4567}}=\mathsf{V}(\langle g_1\rangle). \] \end{thm} Note that the factorization of $g_1$ gives us the prime decomposition of $\langle g_1\rangle$, and hence the decomposition of $\mathsf{V}(\langle g_1\rangle)$ into 8 linear irreducible varieties. Our next task is to show that at least some points of the singular variety extend to actual (physically relevant) singularities of the whole system. Recall that each generator $g_i$ corresponds to an equation $g_i=0$. Since $a_i>0$ in physically relevant cases, the generators $g_5$ and $g_{16}$ imply that all the singularities of $J_{4567}$ necessarily have $c_6=c_4=0$ (conditions for the angles $4$ and $6$). In other words, in ideal-theoretic language, we may as well study the ideal \[ T:=\langle J_{4567},\, F_{J_{4567}},\, c_4,\, c_6 \rangle. \] Now the prime decomposition of $\sqrt{T}$ has 16 components: \begin{equation} \sqrt{T} = T_1 \cap \ldots \cap T_{16}. \end{equation} Inspecting the generators of each $T_j$, we notice that every $T_j$ contains some of the $t_i$ or the $a_i$ among its generators. Recall that a generator $a_i$ in an ideal corresponds in the variety to a condition $a_i=0$, which is non-physical. Moreover, $t_3$ is now a non-physical condition contradicting $a_i>0\,\forall i$. Hence we discard (as in \cite{ar01:RCS}) those ideals which have a non-physical generator that would imply $a_i\le 0$ for some $i$, and we are left with 7 ideals, whose generators are: \begin{align*} T_1 &=\langle c_7^2+s_7^2-1,\, t_1,\, s_6+1,\, s_5-c_7,\, c_5+s_7,\, s_4+1,\, c_4,\, c_6 \rangle \\ T_2 &=\langle c_7^2+s_7^2-1,\, t_2,\, s_6+1,\, s_5+c_7,\, c_5-s_7,\, s_4+1,\, c_4,\, c_6 \rangle \\ T_3 &=\langle c_7^2+s_7^2-1,\, t_4,\, s_6+1,\, s_5+c_7,\, c_5-s_7,\, s_4-1,\, c_4,\, c_6 \rangle \\ T_4 &=\langle c_7^2+s_7^2-1,\, t_5,\, s_6-1,\, s_5-c_7,\, c_5+s_7,\, s_4+1,\, c_4,\, c_6 \rangle \\ T_5 &=\langle c_7^2+s_7^2-1,\, t_6,\, s_6-1,\, s_5+c_7,\, c_5-s_7,\, s_4+1,\, c_4,\, c_6 \rangle \\ T_6 &=\langle c_7^2+s_7^2-1,\, t_7,\, s_6-1,\, s_5-c_7,\, c_5+s_7,\, s_4-1,\, c_4,\, c_6 \rangle \\ T_7 &=\langle c_7^2+s_7^2-1,\, t_8,\, s_6-1,\, s_5+c_7,\, c_5-s_7,\, s_4-1,\, c_4,\, c_6 \rangle. \end{align*} In particular, we see that $s_6=\pm 1$, $s_5=\pm c_7$, $c_5=\pm s_7$, and $s_4=\pm 1$. Now we are ready to continue with the original system $J \cup F_J$. \begin{rem}\label{rem:T_i_physical} Mathematically speaking, the analyses of all the cases $T_i$ are completely similar. However, on physical grounds the cases $T_1$, $T_2$, $T_6$ and $T_7$ are not so interesting. Indeed, in these cases the length of one of the rods corresponding to $a_4$, $a_5$, $a_6$ and $a_7$ is equal to the sum of the lengths of the three others. Hence all four rods could be modelled as a single rod, which would make the whole model significantly simpler.
In the remaining cases no such reduction can be done, and we chose to examine the ideal $T_5$ in detail. See also Remark \ref{rem:all_Ti_same_Q}. \end{rem} The case $T_5$ gives us the conditions $s_4=-1$, $s_6=1$, $s_5=-c_7$, $c_5=s_7$, and $a_7=a_5+a_6-a_4$, which we substitute into the original system. Next we will show that the resulting system has real solutions. These will be the required singular points. The above substitutions simplify the generators of $J\cup F_J$ so that we get the following ideal: \begin{equation} \label{kideaali} \begin{aligned} K=&\ \langle K_1\cup K_2\rangle,\\ K_1\quad:&\quad\begin{cases} k_1= a_2(-c_1c_2+s_1s_2)+c_1a_1-s_3a_3-b_1 \\ k_2= a_2(-s_1c_2-c_1s_2)+s_1a_1+c_3a_3-b_2 \\ k_3= c_1^2+s_1^2-1 \\ k_4= c_2^2+s_2^2-1, \end{cases}\\ K_2\quad:&\quad\begin{cases} k_5= s_7(a_4-a_5)+s_3a_3+b_1-w_1 \\ k_6= c_7(a_5-a_4)-c_3a_3+b_2-w_2 \\ k_7= c_3^2+s_3^2-1 \\ k_8= c_7^2+s_7^2-1. \end{cases} \end{aligned} \end{equation} In $K_2$ we have 4 equations for the 4 unknowns $c_3$, $s_3$, $c_7$, and $s_7$; hence it appears reasonable that we can get a finite number of solutions. Then we can substitute the computed values into $K_1$, which then also becomes a system of 4 equations for the 4 unknowns $c_1$, $s_1$, $c_2$, and $s_2$. By the same reasoning we again expect that it is possible to get some solutions for appropriate parameter values. We could solve the variables numerically from these equations (and, indeed, we will, in the numerical examples), but to analyse the situation in more detail we need to study these equations further. Starting with the system $K_2$, we solve the angles 3 and 7 by the following trick. First we inspect the ideal generated by $K_2$ in the ring \[ \mathbb{Q}(b_1,b_2,w_1,w_2,a_3,a_4,a_5)[c_3,s_3,c_7,s_7]. \] Calculating the Gr\"obner basis $\tilde G$ of $\langle K_2\rangle$ with respect to the lexicographic ordering we get 4 generators: \begin{equation} \label{eq:gtilde} \begin{aligned} \tilde g_1&= f_1s_7^2+f_2s_7-f_3f_4 \\ \tilde g_2 &= 2(b_2-w_2)(a_4-a_5)c_7-2(b_1-w_1)(a_4-a_5)s_7+f_5\\ \tilde g_3 &= a_3s_3+(a_4-a_5)s_7+b_1-w_1 \\ \tilde g_4 &= a_3c_3+(a_4-a_5)c_7+w_2-b_2, \end{aligned} \end{equation} where the auxiliary expressions $f_i$ are lengthy combinations of the parameters $a_i,b_i$ (see the appendix).\footnote{The algorithms actually give by default only sums of monomials instead of products like $2(b_2-w_2)(a_4-a_5)$, but we have simplified these by hand. \textsf{Singular} \cite{GPS05} could also be used to factorize into such products automatically, but that would involve some more elaborate programming.} Now $\tilde g_1$ contains only $s_7$ and parameters. Note that $f_1=0$ if and only if $a_4=a_5$. Assuming $a_4\neq a_5$, the equation $\tilde g_1=0$ is a polynomial equation of degree 2 in $s_7$; hence, in order to have real solutions, we need to impose the condition \begin{align} \label{S1_T5_D_condition} f_2^2 +4 f_1 f_3 f_4 \geq 0. \end{align} This condition can easily be checked when the parameters $a,b,w$ have been given numerical values. Once $s_7$ is known, $c_7,s_3,c_3$ can be solved from the linear equations of $\tilde G$, provided $a_4\neq a_5$ and $w_2\neq b_2$. The cases $w_2=b_2$ and/or $a_4=a_5$ can be summarized as follows: \begin{itemize} \item[(i)] If $w_2=b_2$ but $a_4\neq a_5$, we still get equations similar to $\tilde G$, but now $s_3$ has a quadratic equation instead of $s_7$. \item[(ii)] If $a_4=a_5$, the system typically does not have solutions. At least, a further condition among the parameters, namely $|b-w|=a_3$, arises.
We shall not elaborate on this nongeneric behaviour further. In Section \ref{sec:a4_is_a5} we consider an example of this situation. \end{itemize} \begin{rem} In general, when the inequality in \eqref{S1_T5_D_condition} is strict, $s_7$ has 2 possible values. Therefore, the tuples $(s_3,c_3,s_7,c_7)$ have in general 2 possible values, because the other entries of the tuple are determined uniquely by $s_7$. \end{rem} The only thing left to be done, in this $J_{4567}$ subsystem case, is to solve $c_1,s_1,c_2,s_2$. This is done with the ideal $\langle K_1\rangle$ given in \eqref{kideaali}. \begin{rem} \label{rem:all_Ti_same_Q} Had we used any other $T_i$ instead of $T_5$ above, we would have ended up with this same ideal $\langle K_1\rangle$. \end{rem} We calculate the Gr\"obner basis $\hat G$ of $\langle K_1\rangle$, this time in the ring \[ \mathbb{Q}(a_1,a_2,a_3,b_1,b_2,c_3,s_3)[c_1,s_1,c_2,s_2]. \] Note especially that $s_3,c_3$ are here treated as parameters, since they are now known expressions in the parameters $a$, $b$, $w$. We again use the lexicographic ordering and get 4 generators $\hat g_1,\dots,\hat g_4$. Analogously to $s_7$ above, for $s_2$ we now get the second-degree polynomial equation \begin{equation}\label{eq:Q_s2} \hat g_{1}=(-4a_1^2a_2^2)s_2^2-n_1n_2=0 \end{equation} where \begin{align*} n_1&=a_1^2+2a_1a_2+a_2^2 -a_3^2 -2a_3b_1s_3+2a_3b_2c_3-b_1^2-b_2^2\\ n_2&=a_1^2-2a_1a_2+a_2^2 -a_3^2 -2a_3b_1s_3+2a_3b_2c_3-b_1^2-b_2^2 \end{align*} and linear equations for $c_2,s_1,c_1$: \begin{align*} &\hat g_2 = d_1c_2+d_2+d_3 \\ &\hat g_3 = l_1s_1+l_2+l_3 \\ &\hat g_4 = (a_1^2-a_2^2)c_1+ l_4 \end{align*} where the auxiliary expressions $d_i,\,l_i$ are certain known (but lengthy) functions of $a,b$, apart from $l_4$ which depends on $s_1,s_2,c_2$ as well. (See the appendix.) In order to have real solutions for $s_2$, \eqref{eq:Q_s2} implies the condition \begin{align} \label{S1_T5_E_condition} E := n_1 n_2 \leq 0. \end{align} These $\hat g_i$ determine $s_2,c_2,s_1,c_1$ provided that $d_1\neq 0$, $l_1\neq 0$, and $a_1\neq a_2$. To analyse the cases $d_1=0$, $a_1=a_2$, and/or $l_1=0$, it is helpful to define \[ d_0:=a_3^2 +2a_3b_1s_3 -2a_3b_2c_3 +b_1^2+b_2^2. \] It turns out that $l_1=0 \Leftrightarrow d_1=0 \Leftrightarrow d_0=0$. After rearranging the terms (see the appendix) it can be seen that the condition \eqref{S1_T5_E_condition} is equivalent to \[ (a_1-a_2)^2 \le d_0 \le (a_1+a_2)^2. \] Therefore, if $a_1\neq a_2$ then $d_0\neq 0$ and the equations above can be solved. The case $a_1=a_2$, $d_0\neq 0$ does not essentially change the situation: we still have a quadratic equation for $s_2$, and linear ones for the others, with a different coefficient for $c_1$. The remaining case $a_1=a_2$, $d_0=0$ corresponds to the situation where the centre node coincides with the origin. This gives another singularity (the angle $y_1$ remains arbitrary) but is a rather special case and will not be pursued further here. \begin{thm} \label{thm:1} Let us suppose that the parameters $a$, $b$, $w$ satisfy the following conditions: $a_4\neq a_5$ and \begin{align} & n_1 (4a_1a_2-n_1) \geq 0 \tag{\ref{S1_T5_E_condition}} \\ & f_2^2 + 16(a_4-a_5)^2|b-w|^2 f_3f_4\geq 0 \tag{\ref{S1_T5_D_condition}} \end{align} Then $V_{a,b,w}$ contains at least 2 singular points. If the inequalities are strict, we get in general at least 4 singular points. \end{thm} It may appear that we also have at most 4 singular points.
However, it is a priori possible that the other systems $T_i$ yield more singular points with the same parameter values. \begin{proof} The first part of the theorem merely collects what we have shown above, with the simplifications $n_2=n_1-4a_1a_2$ and $f_1=4(a_4-a_5)^2|b-w|^2$. The conditions are due to univariate second-degree polynomial equations, which have real solutions if and only if \eqref{S1_T5_D_condition} and \eqref{S1_T5_E_condition} (for $s_7$ and $s_2$, respectively) are fulfilled. The other variables are determined from linear equations: $s_4,c_4,\dots,s_6,c_6$ from $T_5$; $s_3,c_3,c_7$ from $K_2$; $s_1,c_1,c_2$ from $K_1$. For the number of singular configurations, note that we have second-degree equations for $s_7$, hence at most 2 values for the tuple $(s_3,c_3,s_7,c_7)$, and likewise at most 2 values for $s_2$. So in general, if there are two separate roots both for $s_7$ and for $s_2$, we get four different singularities. \end{proof} Similar results can be presented for any $T_i$ but we will not catalogue them here. \subsection{Subsystem 367} \label{sec:subsystem367} In view of the examples in \cite{ar01:RCS}, it was perhaps intuitively clear that the subsystem $J_{4567}$ produces singularities. It is a bit more surprising that there is another subsystem producing singularities: the one formed by the nodes 3, 6, and 7. Define \begin{align*} h_1 &:= -p_5+p_1 = a_6 \big(c_6 c_7-s_6 s_7 \big)+a_7 s_7-a_3 s_3+w_1-b_1 \\ h_2 &:= -p_6+p_2 = a_6 \big(s_6 c_7+c_6 s_7 \big)-a_7 c_7+a_3 c_3+w_2-b_2 \\ h_3 &:= p_{9} = c_3^2+s_3^2-1 \\ h_4 &:= p_{12} = c_6^2+s_6^2-1 \\ h_5 &:= p_{13} = c_7^2+s_7^2-1. \end{align*} It is important to note that $h_1,h_2$ contain only the angles 3, 6, and 7; therefore only $p_9$, $p_{12}$, $p_{13}$ are relevant here. As parameters we now have not only the lengths $a_3,a_6,a_7$, but also $b_1,\dots,w_2$, i.e. the positions of the fixed nodes $A$ and $B$ in Figure \ref{fig:lengths}. Let $J_{367}$ be the ideal generated by $h_1,\dots,h_5$. We will proceed in a similar way as with the subsystem $J_{4567}$. First we will consider the singularities of the subsystem $J_{367}$ using the following product order: \begin{equation} \label{eq:S2} J_{367}\cup F_{J_{367}} \subset \mathbb{Q}[(c_3,s_3,c_6,s_6,c_7,s_7),(a_3,a_6,a_7,b_1,b_2,w_1,w_2)] \end{equation} The relevant Gr\"obner basis $G$ contains 96 generators of which two are especially interesting: \begin{equation} \begin{aligned} g_{12} &= c_6a_6a_7 \\ g_{1} &= \prod_{i=1}^{4}z_i \qquad\text{ where} \\ & z_1 = (a_3-a_6+a_7)^2-|b-w|^2 \\ & z_2 = (a_3+a_6+a_7)^2-|b-w|^2 \\ & z_3 = (a_3+a_6-a_7)^2-|b-w|^2 \\ & z_4 = (a_3-a_6-a_7)^2-|b-w|^2 . \end{aligned} \label{eq:U_z} \end{equation} The latter gives us the singular variety $S_{J_{367}}$. \begin{thm} \label{thm:z_i} The singular variety of $J_{367}$ is \[ S_{J_{367}}=\mathsf{V}(\langle g_1\rangle). \] \end{thm} \begin{rem} It is worth noting that, contrary to the linear constraints $t_i$ in Theorem \ref{thm:t_i} related to $J_{4567}$, the $z_i$ in Theorem \ref{thm:z_i} give {\em quadratic} constraints $z_i=0$ related to $J_{367}$ and have the interpretation ``$|a_3\pm a_6 \pm a_7| =$ distance between the fixed points A and B''. Furthermore, again the factors $z_i$ give the irreducible decomposition of the singular variety. \end{rem} Since $a_i>0$, we get $c_6=0$ from $g_{12}=0$. This simplifies computations considerably. Let us define \[ U:=\langle J_{367},\, F_{J_{367}},\, c_6 \rangle.
\] The prime decomposition of $\sqrt{U}$ turns out to have 8 components: \begin{equation*} \sqrt{U} = U_1 \cap \dots \cap U_8. \end{equation*} Inspecting the generators of each $U_i$, we notice that the ideals $U_k$, $k=5,\dots,8$, contain generators which imply $a_i=0$ for some $i$. Hence those are discarded as non-physical and we are left with 4 ideals: \begin{align*} U_{1} &= \langle u_1 ,\,u_2,\, c_7^2+s_7^2-1,\, c_6,\, s_6-1,\, s_3+s_7,\, c_3+c_7 \rangle \\ U_{2} &= \langle u_1 ,\, u_2 ,\, c_7^2+s_7^2-1,\, c_6,\, s_6+1,\, s_3+s_7,\, c_3+c_7 \rangle \\ U_{3} &= \langle u_1 ,\,u_2 ,\, c_7^2+s_7^2-1,\, c_6,\, s_6+1,\, s_3-s_7,\, c_3-c_7 \rangle \\ U_{4} &= \langle u_1 ,\, u_2 ,\, c_7^2+s_7^2-1,\, c_6,\, s_6-1,\, s_3-s_7,\, c_3-c_7 \rangle\\[2mm] \textrm{where}&\quad \begin{cases} u_1=-s_6c_7a_6-c_3a_3+c_7a_7+b_2-w_2\\ u_2=s_6s_7a_6+s_3a_3-s_7a_7+b_1-w_1. \end{cases} \end{align*} With these, we continue studying the whole system $J\cup F_J$. Each $U_i$ will lead to a different case with $s_6=\pm 1$, $s_3=\pm s_7$, $c_3=\pm c_7$. Let us consider, for example, the ideal $U_1$.\footnote{As with $J_{4567}$ and $T_5$, the other cases are completely similar and we will comment on them shortly.} This gives \begin{equation} \label{S2_U1} \begin{aligned} s_6 &= 1, \\ c_7 &= \frac{b_2-w_2}{a_6-a_3-a_7}, \\ s_7 &= \frac{b_1-w_1}{a_3-a_6+a_7}, \\ c_3 &= -c_7,\\ s_3 &= -s_7. \end{aligned} \end{equation} We should expect to run into an equation $z_i=0$ for some $i$, where the expressions $z_i$ are given in \eqref{eq:U_z}. Indeed, combined with $c_7^2+s_7^2-1=0$ the equations \eqref{S2_U1} give $z_1=0$. Likewise, $U_i$ implies $z_i=0$ for $i=2,3,4$. \begin{rem}\label{rem:U_i_physical} The condition $z_2=0$ is physically a degenerate case: it means that the system can only barely reach from $A$ to $B$ when the subsystem of the rods $a_3,a_6,a_7$ is fully stretched, i.e. it has no room to move. Therefore also $U_2$ corresponds to a rather trivial case. See also Remark \ref{rem:T_i_physical}. \end{rem} Using $U_1$ we can now eliminate the variables corresponding to the angles 3, 6, and 7. Making the substitutions in $J\cup F_J$ we are left with the following generators. \begin{equation} \label{lideaali} \begin{aligned} L=&\ \langle L_1\cup L_2\rangle,\\ L_1\quad: &\quad \begin{cases} l_1= a_2(-c_1c_2+s_1s_2)+c_1a_1+s_7a_3-b_1\\ l_2= a_2(-s_1c_2-c_1s_2)+s_1a_1-c_7a_3-b_2\\ l_3= c_1^2+s_1^2-1\\ l_4= c_2^2+s_2^2-1, \end{cases} \\ L_2\quad : &\quad \begin{cases} l_5= a_4(s_4c_5+c_4s_5)+c_5a_5+s_7(a_6-a_7)\\ l_6= a_4(c_4c_5-s_4s_5)-s_5a_5+c_7(a_6-a_7)\\ l_7= c_4^2+s_4^2-1\\ l_8= c_5^2+s_5^2-1, \end{cases} \end{aligned} \end{equation} where $s_7$, $c_7$ are no longer variables but known expressions from \eqref{S2_U1}, kept here only for notational clarity. \begin{rem} Before working on $L_1$ and $L_2$ we comment briefly on the other $U_i$ cases. Introduce $L_3$ and $L_4$: \begin{align*} L_3: & \begin{cases} & a_2(-c_1c_2+s_1s_2)+c_1a_1-s_7a_3-b_1=0\\ & a_2(-s_1c_2-c_1s_2)+s_1a_1+c_7a_3-b_2=0\\ & c_1^2+s_1^2-1=0\\ & c_2^2+s_2^2-1=0 \end{cases} \\ L_4: & \begin{cases} & a_4(s_4c_5+c_4s_5)+c_5a_5-s_7(a_6+a_7)=0\\ & a_4(c_4c_5-s_4s_5)-s_5a_5-c_7(a_6+a_7)=0\\ & c_4^2+s_4^2-1=0\\ & c_5^2+s_5^2-1=0. \end{cases} \end{align*} Had we used $U_2$ instead of $U_1$, we would have ended up with the system $L_1,\,L_4$. Likewise, $U_3$ would give the system $L_3,\,L_2$, and $U_4$ would give the system $L_3,\,L_4$. Yet another point of view is that $s_6=\pm 1$ picks between $L_2$ and $L_4$, while $(c_3,s_3)=\pm(c_7,s_7)$ picks between $L_1$ and $L_3$.
More precisely, $s_6=1$ ($s_6=-1$) gives $L_2$ ($L_4$), and $(c_3,s_3)=(-c_7,-s_7)$ gives $L_1$. The choice $(c_3,s_3)=(c_7,s_7)$ would give $L_3$. \end{rem} Continuing with $L_1$ and $L_2$, we notice that $L_2$ contains only the variables $c_5,s_5,c_4,s_4$ (angles 4 and 5); it has 4 equations in 4 variables, hence is expected to have a finite solution set, and will be handled analogously to the ideal $K_2$ in \eqref{kideaali}. Calculating its Gr\"obner basis $G$ in the ring \[ \mathbb{Q}(a_4,a_5,a_6,a_7)[(c_4,c_5,s_5,c_7,s_7),(s_4)] \] we obtain 12 generators, the first one being \[ g_1 = 2a_4a_5s_4 + a_4^2+a_5^2-a_6^2+2a_6a_7-a_7^2. \] Hence $s_4$ can be explicitly solved: \begin{equation} \label{S2_U1_s4} s_4 = \frac{a_4^2+a_5^2-a_6^2+2a_6a_7-a_7^2}{-2a_4a_5}. \end{equation} The other generators are too messy to be of much use. Using the formula $c_4^2=1-s_4^2$ we then get \begin{align}\label{S2_U1_c4} c_4^2 &= -\frac{(a_4+a_5-a_6+a_7)(a_4-a_5+a_6-a_7)(a_4-a_5-a_6+a_7)(a_4+a_5+a_6-a_7)}{4a_4^2a_5^2} \notag \\ &= -\frac{t_7 t_5 t_6 t_8}{4a_4^2a_5^2}. \end{align} The product in the numerator has to be nonpositive in order to have any real solutions: \begin{equation} t_5 t_6 t_7 t_8 \le 0. \end{equation} After solving $s_4,c_4$ we can proceed to solve $s_5$ and $c_{5}$. For this we use the ordering \[ \mathbb{Q}(a_4,a_5,a_6,a_7)[c_5,s_5,c_4,s_4,c_7,s_7] \] and pick the two relevant equations from the corresponding Gr\"obner basis: \begin{align*} (-a_6+a_7)s_5 -a_4c_4s_7 +a_4s_4c_7 +a_5c_7 &=0 \\ (-a_6+a_7)c_5 -a_4c_4c_7 -a_4s_4s_7 -a_5s_7 &=0, \end{align*} which are linear equations for $s_5,c_5$, provided $a_6\neq a_7$. \begin{rem} In the case $a_6=a_7$ the situation is different: $L_2$ then decomposes into 3 prime ideals, of which only one is physically feasible, and it gives a singularity only if $a_4=a_5$. Hence this is a rather special case and will not be considered further here. \end{rem} The subsystem $L_2$ is now fully solved. Moving on to $L_1$, we will see that the analysis is very similar to that of $K_1$ from \eqref{kideaali}. Therefore we will skip some details. After forming the Gr\"obner basis of $L_1$ in the ring \[ \mathbb{Q}(b_1,b_2,a_1,a_2,a_3,c_7,s_7)[c_1,s_1,c_2,s_2] \] with respect to the lexicographic ordering, we get for $s_2$, after simplifications, the relation \begin{align} s_2^2 &= \frac{n_{3}(4a_1a_2-n_3)}{4a_1^2a_2^2},\\ &\qquad\text{ where} \quad n_3 = |b|^2+2a_3(b_2c_7-b_1s_7)-(a_1-a_2)^2+a_3^2 \notag \end{align} Again, for real solutions the numerator has to be nonnegative, \begin{equation} n_{3}(4a_1a_2-n_3)\ge 0 . \end{equation} We can now solve $c_{2}$, $s_{1}$ and $c_1$, provided their coefficients are nonzero, from the linear equations \begin{align*} &2a_1a_2n_4c_2-4a_1^2a_2^2s_2^2+r_1=0, \\ &-2a_1n_4s_1+r_2+r_3=0, \\ &({a_1^2-a_2^2})c_1+r_4=0. \end{align*} where \[ n_4=|b|^2+a_3^2+2a_3(b_2c_7-b_1s_7) \] and the $r_i$ are lengthy, yet polynomial, expressions in the parameters, apart from $r_4$ which depends on $s_1,s_2,c_2$ as well. (See the appendix.) What about the cases $n_4=0$ and/or $a_1=a_2$? It can be shown, as with $d_0$, that the condition $n_{3}(4a_1a_2-n_3)\ge 0$ is equivalent to \[ (a_1-a_2)^2 \le n_4 \le (a_1+a_2)^2. \] Therefore, if $a_1\neq a_2$ then $n_4\neq 0$ and the equations above are sufficient. The case $a_1=a_2$, $n_4\neq 0$ does not essentially change the situation: we still have a quadratic equation for $s_2$, and linear ones for the others, with a different coefficient for $c_1$.
The remaining case $a_1=a_2$, $n_4=0$ is analogous to the case $a_1=a_2$, $d_0=0$ within $J_{4567}$ and likewise will not be pursued further. \begin{thm} \label{thm:2} Let us suppose that the parameters $a$, $b$, $w$ satisfy the following conditions: \begin{align} & a_6\neq a_7 \notag \\ & n_4 \neq 0 \notag \\ & n_{3}(4a_1a_2-n_3)\geq 0 \label{U1_cond_1} \\ & t_7 t_5 t_6 t_8\leq 0 \label{U1_cond_2}. \end{align} Then $V_{a,b,w}$ contains at least 2 singular points. If the inequalities are strict, we get in general at least 4 singular points. \end{thm} Similar results can be presented for any $\mathsf{V}(U_i)$ but we will not catalogue them here. \begin{proof} The last two conditions are due to univariate second-degree polynomial equations, which have real solutions if and only if \eqref{U1_cond_1} (for $s_2$) and \eqref{U1_cond_2} (for $c_4$) are fulfilled. The first two conditions are needed for the other variables to be determined uniquely: $s_3,c_3,s_6,c_6,s_7,c_7$ from $\mathsf{V}(U_1)$, $s_4,s_5,c_5$ from $L_2$, and $s_1,c_1,c_2$ from $L_1$. For the number of singular configurations, note that we have second-degree equations, hence at most 2 values, for $c_4$ and for $s_2$. So in general, if there are two separate roots both for $c_4$ and for $s_2$, we get four different singularities. \end{proof} \subsection{Two special cases with symmetry} Let us look more closely at two special cases: $a_4=a_6,\,a_5=a_7$, and either $a_4=a_5$ or $a_4\neq a_5$. \subsubsection{The case $a_4\neq a_5$} \label{sec:a4_not_a5} Motivated by the original benchmark values \cite{sc90:MSH} we give the following \begin{lem}\label{lem:1} When $a_4=a_6$ and $a_5=a_7$, there is a relation between the angles 4 and 6: either $y_6=-y_4$ or $y_6=y_4+\pi$. Furthermore, if also $a_4\neq a_5$, the angle $y_7$ variables, i.e. $c_7,s_7$, are uniquely determined by $c_4,s_4,c_5,s_5$. \end{lem} \begin{proof} Looking for relations involving only the angles 4 and 6, we substitute $a_4=a_6$ and $a_5=a_7$ into the subsystem $J_{4567}$ and formulate a suitable elimination ideal. In ideal-theoretic language, we define \begin{align*} & r_1:=a_4\big(s_4c_5+c_4s_5\big)+a_5c_5 -a_4\big(c_6c_7-s_6s_7\big)-a_5s_7\\ & r_2:=a_4\big(c_4c_5-s_4s_5\big)-a_5s_5 +a_4\big(s_6c_7+c_6s_7\big)-a_5c_7\\ & r_{i+2}=c_{i+3}^2+s_{i+3}^2-1,\quad i=1,\dots,4, \end{align*} where $r_i=q_i$ with the substitutions $a_4=a_6$ and $a_5=a_7$, and investigate the ideal $I:=\langle r_1,\ldots,r_6\rangle$ in the ring \[ \mathbb{Q}(a_4,a_5,a_6,a_7)[(c_5,s_5,c_7,s_7),(c_4,s_4,c_6,s_6)]. \] Calculating the elimination ideal $I_{4,6}:=I\cap\mathbb{Q}[c_4,s_4,c_6,s_6]$ we get \begin{eqnarray*} I_{4,6}= \langle s_4+s_6,c_6^2+s_6^2-1,c_4^2+s_4^2-1\rangle. \end{eqnarray*} Calculating the prime decomposition of $\sqrt{I_{4,6}}$ we get \begin{equation*} \sqrt{I_{4,6}}=\langle c_6^2+s_6^2-1,c_4-c_6,s_4+s_6\rangle \cap\langle c_6^2+s_6^2-1,c_4+c_6,s_4+s_6\rangle. \end{equation*} Since $I_{4,6}\subset I \subset J \subset J\cup F_J$, we have $$ \mathsf{V}(I_{4,6}) \supset \mathsf{V}(J\cup F_J). $$ From these prime ideals we can see that everywhere in $\mathsf{V}(I_{4,6})$, and therefore in the variety of the singularities of the whole system as well, $s_6=-s_4$ and either $c_6=c_4$ or $c_6=-c_4$. These translate into two possible relations between the angles $y_4$ and $y_6$: \begin{equation} (c_6,s_6)=(c_4,-s_4) \Leftrightarrow y_6=-y_4, \qquad (c_6,s_6)=(-c_4,-s_4) \Leftrightarrow y_6=y_4+\pi. \end{equation} This proves the first claim.
If we take into account either one of the prime ideals of $\sqrt{I_{4,6}}$ in $I$ and calculate the Gr\"obner bases, we get ideals where $c_7$ and $s_7$ depend linearly on $c_4$, $s_4$, $c_5$ and $s_5$ and can be explicitly solved, as we show next in order to prove the latter claim of the lemma. For the case $(s_6,c_6)=(-s_4,-c_4)$ we get \begin{equation} \begin{cases} c_7 &= -s_5\\ s_7 &= c_5 \end{cases} \text{ which imply } y_7=y_5+\frac{\pi}{2}. \end{equation} For the case $(s_6,c_6)=(-s_4,c_4)$ the expressions are, albeit linear, slightly more complicated: \begin{align*} &c_7\big(a_4^2(s_4^2-c_4^2)-a_5(2a_4s_4+a_5)\big)+s_7\big(2a_4(a_5+a_4c_4s_4)\big)-s_5\big((a_4^2+a_5^2)-2a_4a_5s_4\big) =0 \\ -&c_7\big(2a_4^2c_4s_4\big)+s_7\big(a_4^2(c_4^2-s_4^2)+a_5^2\big)+(a_4^2-a_5^2)c_5-2a_4a_5s_5c_4 =0. \end{align*} We prove that these indeed determine $c_7,s_7$: all we need to do is check that the determinant of the coefficient matrix $A$ of the linear equations does not vanish, where \begin{equation*} A:= \begin{pmatrix} a_4^2(s_4^2-c_4^2)-a_5(2a_4s_4+a_5) & 2a_4(a_5+a_4c_4s_4) \\ -2a_4^2c_4s_4 & a_4^2(c_4^2-s_4^2)+a_5^2 \end{pmatrix}. \end{equation*} Now $\det(A)$ simplifies due to $c_4^2+s_4^2=1$, resulting in \begin{align*} \det(A) &=2a_4a_5(a_4+a_5)(a_4-a_5)s_4+(a_4-a_5)(a_4+a_5)(a_4^2+a_5^2) \end{align*} Let us then consider $\det(A)$ as a function of $s_4$. Since $s_4\in[-1,1]$, we have $\det(A):[-1,1]\to\mathbb{R}$. Clearly if $a_4=a_5$ then $\det(A)\equiv 0$, so we need to assume $a_4\neq a_5$. Set $$ h(s_4):=\frac{\det(A)}{(a_4+a_5)(a_4-a_5)}=2a_4a_5s_4+(a_4^2+a_5^2) $$ and inspect when $h=0$. Since $a_4>0$ and $a_5>0$, the linear function $h$ attains its minimum on $[-1,1]$ at $s_4=-1$: $$ h(-1)=a_4^2+a_5^2-2a_4a_5=(a_5-a_4)^2>0. $$ This proves that $h\neq 0$ everywhere; therefore, under the assumption $a_4\neq a_5$, also $\det(A)\neq 0$, as claimed. \end{proof} \subsubsection{The case $a_4=a_5$} \label{sec:a4_is_a5} We study the special case $a_4=a_5=a_6=a_7$, in which the 4567-subsystem is capable of ``buckling'' in more complicated ways, thereby producing further interesting configurations. This then resembles the net example in \cite{ar01:RCS}. Let us see how $J_{4567}$ simplifies with the substitutions $a_4=a_5=a_6=a_7$. Note that the assumptions of Lemma \ref{lem:1} concerning $y_7$ are no longer valid. Let $$ I := J_{4567} \text{ with }a_4=a_5=a_6=a_7 \text{ and }s_6=-s_4 $$ and compute its prime decomposition. This results in \begin{multline}\label{eq:a4_is_a5} \sqrt{I} = I_1 \cap I_2 \cap I_3 \quad \text{ with generators } \\[3mm] I_1 = \begin{cases} s_4^2+c_6^2-1,\\ c_4-c_6,\\ c_7^2+s_7^2-1,\\ s_5+c_7s_4-s_7c_6,\\ c_5-c_7c_6-s_7s_4 \end{cases} \qquad I_2 = \begin{cases} c_6,\\ s_4+1,\\ c_4,\\ c_7^2+s_7^2-1,\\ c_5^2+s_5^2-1 \end{cases} \qquad I_3 = \begin{cases} s_4^2+c_6^2-1,\\ c_4+c_6,\\ c_7^2+s_7^2-1,\\ s_5+c_7,\\ c_5-s_7 \end{cases} \end{multline} Each of these has a geometrical interpretation, see Figure \ref{fig:a4567_equal}. $I_2$ corresponds to $y_4=-\pi/2, y_6=\pi/2$, which means that the nodes $A$ and $P_2$ coincide. This is like the $T_5$ situation. Indeed, the ideal $J\cup F_J\cup I_2$ turns out to be exactly $T_5$ with the extra condition $a_4=a_5$. Although it is not immediately apparent, in that situation there also arises a new condition among the parameters: $a_3=|b-w|$, i.e. ``$a_3$ equals the distance between $A$ and $B$''. Note that here the Fitting ideal $F_{J_{4567}}$ has not been used at all, contrary to the $T_5$ calculations.
$I_3$ corresponds to $y_6=y_4+\pi$ and $y_5=y_7-\pi/2$, so that now nodes $P_3$ and $P_4$ coincide. Then again, $I_1$ corresponds to $y_6=-y_4$ and $y_5=y_6+y_7$, which interestingly is {\em not a singularity} but merely expresses a symmetry in the system due to $a_4=a_5=a_6=a_7$. \begin{figure}[htb] \centering \epsfig{file=erikoistapaus.eps, width=0.9\textwidth, height=0.3\textwidth} \caption{The configurations corresponding to $I_1,\,I_2,\,I_3$ in the case $a_4=a_5=a_6=a_7$.} \label{fig:a4567_equal} \end{figure} \subsection{Other subsystems} Contemplating Figure \ref{fig:lengths}, we see that it would be possible to find other singularities by analysing still other subsystems. For example, the subsystem corresponding to rods 3, 4 and 5 is by symmetry similar to subsystem 367: we simply exchange the roles of the variables and parameters associated with rods 4 and 6, and with rods 5 and 7. Further, we could consider other subsystems formed from different ``paths'' between the nodes $A,B,O$, i.e.\ the subsystems $J_{123},J_{1245},J_{1267}$. Again by symmetry the system $J_{1267}$ is completely similar to $J_{1245}$, but the cases $J_{123}$ and $J_{1245}$ give new singularities. We checked that in these cases the singular variety is not empty, and that at least for some parameter values we get singular points. We did not analyse these cases in detail because the computations are quite similar to those given above for the subsystems $J_{4567}$ and $J_{367}$. Hence we felt that including them would not add significant value, and we left them out to avoid further expanding an already long presentation. \section{Numerical examples} \label{sec:examples} In this section we will calculate numerical examples for both types of singularities. Interestingly, the explicit expressions within $\tilde{G},\,\hat{G}$, as well as in the Gr\"obner bases of $L_1$ and $L_2$, are unstable for numerical computations. It is better to use the original defining equations of $K_1,K_2,L_1,L_2$ in the computations. We shall not explore this stability issue here, as it is not relevant in the present context. We present 4 examples: \begin{enumerate} \item The original benchmark parameter values, see \cite{testset}. We show that the system then has no singularities.\footnote{Thereby validating its benchmark status. That is, the numerical difficulties encountered there are indeed due to the ``numerical stiffness'' of the problem, not to a nearby singularity.} \item We explore how $a_1,a_2$ should be changed in order to have $J_{4567}$ type singularities in the system. Here we have an interpretation for the result: the lengths $a_1,a_2$ must be such that the subsystem 4567 has room for a certain kind of ``buckled'' configuration. \item We explore how $b_1,a_1,a_2$ should be changed in order to have $J_{367}$ type singularities in the system. \item A special case which admits a rational solution, that is, $c_i,s_i\in\mathbb{Q}$ for all $i$. This shows unambiguously that we can find singular points, because in this case there are no numerical errors related to floating-point computations. \end{enumerate} \subsection{Original values} \label{Exam:1} In this example, we will use the original values for the parameters $a_i,b_i$ and show that the system then has no singularities.
The original parameters used in the benchmark tests \cite{sc90:MSH,ha-wa91:SODE2,testset} are \begin{align} \label{orig_a_b} & a_1=0.007\quad a_2=0.028 \quad a_3=0.035\quad a_4=0.020 \quad a_5=0.040\quad a_6=0.020 \quad a_7=0.040 \nonumber \\ & b_1=-0.03635\quad b_2=0.03273\quad w_1=-0.06934\quad w_2=-0.00227. \end{align} Since $a_7=a_5$ and $a_6=a_4$, we have $t_4=t_6=0$ (and $t_1<0$, $t_5<0$), so we could have a $J_{4567}$ singularity: $T_3$ or $T_5$. \begin{rem} Interpretation: both $T_3$ and $T_5$ describe a situation where the 4567 system has 'collapsed' into a 1-dimensional object. The ideal $K_2$ tells us how $a_3$ restricts the possible attitudes of 4567. In $T_5$ the centre node $P_2$ has been pushed in; in $T_3$ it has been pulled out. \end{rem} Let us first look more closely at $T_5$ and check the conditions \eqref{S1_T5_D_condition} and \eqref{S1_T5_E_condition}. The first one is fulfilled. For $E$ we first need to solve $c_3,s_3$ from $\mathsf{V}(K_2)$. Their solutions are \begin{multline} \label{eq:T5c3s3} (c_3,s_3,c_7,s_7)\in \{ (0.4299535996, \,-0.9028509856, \,-0.9975812008, \,0.06951077517),\\ (0.9266735994, \,-0.3758670513, \,-0.1283212011, \, 0.9917326602)\} \end{multline} With these $c_3,s_3$ we can compute $E$. Both sets in \eqref{eq:T5c3s3} give $E=\mathcal{O}(10^{-5})>0$, so the condition \eqref{S1_T5_E_condition} is violated and hence there are no ($J_{4567}$-type) singularities. What about other singularities? This is answered by the following \begin{thm}\label{thm:3} With the original benchmark parameter values \eqref{orig_a_b}, the Andrews' squeezing system has no singularities. \end{thm} \begin{proof} We now have $a_4=a_6$, $a_5=a_7$ and $a_4\ne a_5$. Lemma \ref{lem:1} implies that the variables $c_6$, $s_6$, $c_7$, $s_7$, and hence the angles $y_6$ and $y_7$, can be explicitly solved in terms of $c_4$, $s_4$, $c_5$, and $s_5$. It is then possible to reduce the original system of constraint equations, by forgetting the last two equations from \eqref{eq:2}, and to consider \begin{eqnarray*} \begin{cases} a_1\cos(y_1) - a_2\cos(y_1 + y_2) - a_3\sin(y_3) - b_1 &= 0 \\ a_1\sin(y_1) - a_2\sin(y_1 + y_2) + a_3\cos(y_3) - b_2 &= 0 \\ a_1\cos(y_1) - a_2\cos(y_1 + y_2) - a_4\sin(y_4 + y_5) - a_5\cos(y_5) - w_1 &= 0 \\ a_1\sin(y_1) - a_2\sin(y_1 + y_2) + a_4\cos(y_4 + y_5) - a_5\sin(y_5) - w_2 &= 0. \end{cases} \end{eqnarray*} These are equivalent to \begin{eqnarray*} \begin{cases} a_1\cos(y_1) - a_2\cos(y_1 + y_2) - a_3\sin(y_3) - b_1 &= 0 \\ a_1\sin(y_1) - a_2\sin(y_1 + y_2) + a_3\cos(y_3) - b_2 &= 0 \\ - a_4\sin(y_4 + y_5) - a_5\cos(y_5)+a_3\sin(y_3)+(b_1 - w_1) &= 0 \\ a_4\cos(y_4 + y_5) - a_5\sin(y_5) -a_3\cos(y_3)+(b_2- w_2) &= 0 \end{cases} \end{eqnarray*} These can again be represented as polynomials: \begin{align*} & m_1 := a_1 c_1-a_2\big(c_1c_2-s_1s_2\big)-a_3s_3-b_1=0\\ & m_2 := a_1 s_1-a_2\big(s_1c_2+c_1s_2\big)+a_3c_3-b_2=0\\ & m_3 := a_1 c_1-a_2\big(c_1c_2-s_1s_2\big)-a_4\big(s_4c_5+c_4s_5\big) -a_5c_5-w_1=0\\ & m_4 := a_1 s_1-a_2\big(s_1c_2+c_1s_2\big)+a_4\big(c_4c_5-s_4s_5\big) -a_5s_5-w_2=0\\ & m_{i+4} := c_i^2+s_i^2-1=0,\quad i=1,\dots,5. \end{align*} Substituting the original parameter values \eqref{orig_a_b}, as rational numbers, into the polynomials $m_i$, we form an ideal $I:=\langle m_1,\ldots,m_9\rangle$. Let $K:=I\cup F_I$, where $F_I$ is the Fitting ideal of $I$, and inspect $K$ in the ring \[ \mathbb{Q}[(c_1,s_1,c_2,s_2), (c_3,s_3,c_4,s_4,c_5,s_5)]
\] Now it is possible to compute the Gr\"obner basis $G_K$ for $K$ explicitly (unlike for $J\cup F_J$ in the introduction); the computation results in \begin{eqnarray*} G_{K}=\langle 1 \rangle. \end{eqnarray*} This implies $\mathsf{V}(K)=\emptyset$, proving that with these original parameter values there are no singularities. \end{proof} \subsection{$J_{4567}$ singularity: original values, apart from $a_1,a_2$} \label{Exam:2} Let us see how changing $a_1$ and/or $a_2$ might produce $J_{4567}$ type singularities. Our analysis reveals that by suitable combinations of $a_1$ and $a_2$ we can get between zero and four singularities (of type $J_{4567}$, that is). The number of singularities is determined by $c_3,s_3$, and $E$. Considering $E$ as a function of $a_1,a_2$, we plot the area where $E\le 0$. Recall that $E$ depends on $c_3$ as well, and $c_3$ has two possible values, so we get two functions: $E=E_1(a_1,a_2)$ (resp.\ $E=E_2(a_1,a_2)$) corresponding to the first (resp.\ second) value of $c_3$ from \eqref{eq:T5c3s3}. See Figure \ref{fig:E_1_and_2}, where the regions enclosed by the rectangular curves are those with $E_i<0$. \begin{figure}[htb] \centerline{ \epsfig{file=E_1_ja_2_implicit.eps,height=7cm,width=7cm} \qquad \epsfig{file=T3_E1_ja_2_implicit.eps,height=7cm,width=7cm} } \caption{The rectangular curves are $E_1=0$ (thick line) and $E_2=0$ (thin line); the area inside each $E_i=0$ curve is where $E_i<0$. Left panel: $T_5$ case, right panel: $T_3$ case.} \label{fig:E_1_and_2} \end{figure} \begin{itemize} \item no singularities: $E_1>0,E_2>0$. \item 1 singularity: $E_1=0,E_2=0$, which leads (with $T_5$) to two possible values: \[ (a_1=0.05986, \,a_2=0.01035 ), \quad (a_1=0.01035, \,a_2=0.05986 ) \] \item 2 singularities: one of $E_1,E_2$ is $< 0$, the other one $>0$. \item 3 singularities: one of $E_1,E_2$ is $< 0$, the other one $=0$. \item 4 singularities: $E_1 < 0$, $E_2 < 0$. \end{itemize} For example, let us concentrate on $T_5$ and choose, say, $a_1=0.03,\,a_2=0.055$, whence the system is able to reach four singular configurations (see the left panel of Figure \ref{fig:E_1_and_2}). Now $s_i,c_i$ for $i=1,2,3,7$ are determined by $\mathsf{V}(K)$. The other values, for the angles 4, 5 and 6, are determined by $\mathsf{V}(T_5)$. The results are in Table \ref{tab:J_4567}. The corresponding configurations are visualized in Figure \ref{fig:S1_T5_sing}. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline variable & singularity $1$ & singularity $2$ & singularity $3$& singularity $4$\\ \hline $c_1$ & -0.8322 & -0.4564 & -0.1157 & -0.1038 \\ \hline $s_1$ & -0.5544 & 0.8898 & -0.9933 & 0.9946 \\ \hline $c_2$ & -0.3045 & -0.3045 & 0.4467 & 0.4467 \\ \hline $s_2$ & 0.9525 & -0.9525 & 0.8947 & -0.8947 \\ \hline $c_3$ & 0.4300 & 0.4300 & 0.9267 & 0.9267 \\ \hline $s_3$ & -0.9029 & -0.9029 & -0.3759 & -0.3759 \\ \hline $c_4$ & 0 & 0 & 0 & 0 \\ \hline $s_4$ & -1 & -1 & -1 & -1 \\ \hline $c_5$ & 0.0695 & 0.0695 & 0.9917 & 0.9917 \\ \hline $s_5$ & 0.9976 & 0.9976 & 0.1283 & 0.1283 \\ \hline $c_6$ & 0 & 0 & 0 & 0 \\ \hline $s_6$ & 1 & 1 & 1 & 1 \\ \hline $c_7$ & -0.9976 & -0.9976 & -0.1283 & -0.1283 \\ \hline $s_7$ & 0.0695 & 0.0695 & 0.9917 & 0.9917 \\ \hline \end{tabular} \end{center} \vspace*{5mm} Calculating the corresponding angles we get the following values.
\vspace*{5mm} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Angle & singularity $1$ & singularity $2$ & singularity $3$ & singularity $4$\\\hline $y_1$ & -2.5539 & 2.0448 & -1.6867 & 1.6747 \\ \hline $y_2$ & 1.8802 & -1.8802 & 1.1077 & -1.1077 \\ \hline $y_3$ & -1.1264 & -1.1264 & -0.3853 & -0.3853 \\ \hline $y_4$ & -1.5708 & -1.5708 & -1.5708 & -1.5708 \\ \hline $y_5$ & 1.5012 & 1.5012 & 0.1287 & 0.1287 \\ \hline $y_6$ & 1.5708 & 1.5708 & 1.5708 & 1.5708 \\ \hline $y_7$ & 3.0720 & 3.0720 & 1.6995 & 1.6995 \\ \hline \end{tabular} \end{center} \caption{The singularities of $J_{4567}$ type, original values apart from $a_1,a_2$. The values are presented only with 4 decimals but were computed with 16 decimals.} \label{tab:J_4567} \end{table} \begin{figure}[htb] \centering \begin{tabular}{l|r} \epsfig{file=S1_T5_sing_1.eps, width=0.45\textwidth} & \epsfig{file=S1_T5_sing_2.eps, width=0.45\textwidth} \\ \hline \epsfig{file=S1_T5_sing_3.eps, width=0.45\textwidth} & \epsfig{file=S1_T5_sing_4.eps, width=0.45\textwidth} \\ \hline \end{tabular} \caption{Singular positions (according to $J_{4567}$, $T_5$) when $a_1=0.03,\,a_2=0.055$ and $a_3,\dots,a_7$ have the original values. One can see a physical explanation of the singularity: the centre node $P_2$ is 'pushed in' so that nodes $P_3$ and $P_4$ coincide.} \label{fig:S1_T5_sing} \end{figure} Doing similar tests with $T_3$ instead of $T_5$ yields the $E_i$ areas in the right-hand panel of Figure \ref{fig:E_1_and_2}. Singular configurations implied by $T_3$, with the choices $a_1=0.06,\, a_2=0.06$, which imply 4 singularities, are shown in Figure \ref{fig:S1_T3_sing}. To save space we have not tabulated the actual values of the angles in the $T_3$ case. \begin{figure}[htb] \centering \begin{tabular}{l|r} \epsfig{file=sing_1.eps, width=0.45\textwidth} & \epsfig{file=sing_2.eps, width=0.45\textwidth} \\ \hline \epsfig{file=sing_3.eps, width=0.45\textwidth} & \epsfig{file=sing_4.eps, width=0.45\textwidth} \\ \hline \end{tabular} \caption{Singular positions (according to $J_{4567}$, $T_3$) when $a_1=0.06,\,a_2=0.06$ and $a_3,\dots,a_7$ have the original values. One can see a physical explanation of the singularity: the centre node $P_2$ is now 'pulled out' so that nodes $P_3$ and $P_4$ coincide.} \label{fig:S1_T3_sing} \end{figure} \subsection{$J_{367}$ singularity: original values, apart from $b_1,a_1,a_2$} \label{Exam:3} A necessary condition for a $J_{367}$ type singularity is that at least one of the $z_i$ vanishes \eqref{eq:U_z}. Substituting the original parameter values, we notice that none of these is zero. Let us then investigate how we should change some of the parameters in order to have $J_{367}$ type singularities. Take $b_1$ and $U_1$, say, and choose $b_1:=-0.026913593$ so that $z_1=0$.\footnote{This corresponds to moving $B$ slightly to the left.} We seek to further fulfil the sufficient requirements imposed by $U_1$: \begin{align*} &n_{3}(4a_1a_2-n_3)\geq 0 \tag{\ref{U1_cond_1}} \\ & t_7 t_5 t_6 t_8\leq 0, \tag{\ref{U1_cond_2}} \end{align*} and use $L_1,L_2$ to find the actual singular configurations. With the original parameter values $t_6=0$, so \eqref{U1_cond_2} is fulfilled. Therefore we only need to study \eqref{U1_cond_1}. For that, we proceed analogously to Example \ref{Exam:2} and treat the expression $n_{3}(4a_1a_2-n_3)$ as a function of $a_1,a_2$. For this we first need $c_7,s_7$, which we obtain from \eqref{S2_U1}: \begin{align*} c_7 &= \frac{b_2-w_2}{a_6-a_3-a_7}=-0.6364 \\ s_7 &= \frac{b_1-w_1}{a_3+a_7-a_6}=0.7714.
\end{align*} The region of the $(a_1,a_2)$ plane where $n_{3}(4a_1a_2-n_3)\ge 0$ is shown in Figure \ref{fig:S2_n3n4}. \begin{figure}[htb] \centering \epsfig{file=S2_n3n4_implicit.eps, width=7cm, height=7cm} \caption{$J_{367},U_1$ case: the region inside the annulus is where $n_{3}(4a_1a_2-n_3)\ge 0$.} \label{fig:S2_n3n4} \end{figure} We pick a value inside the ``allowed'' annulus, say $a_1=0.02$ and $a_2=0.055$, in order to get singularities. Then let us find the actual singular configurations: since $t_6=0$, from \eqref{S2_U1_c4} we get $c_4=0$ and from \eqref{S2_U1_s4} $s_4=-1$. The other angles are found as follows: angles 3 and 6 from \eqref{S2_U1}, and the remaining ones, 1, 2 and 5, from $L$. The results are in Table \ref{tab:J_367}. The corresponding singular configurations are drawn in Figure \ref{fig:S2_sing}. Note that there are only two singular configurations, instead of four, since \eqref{S2_U1_c4} has only one (double) root $c_4=0$ instead of two separate roots. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|} \hline variables & singularity $1$ & singularity $2$\\ \hline $c_1$ & -0.3621 & 0.0127 \\ \hline $s_1$ & -0.9322 & 0.9999 \\ \hline $c_2$ & 0.1860 & 0.1860 \\ \hline $s_2$ & 0.9826 & -0.9826 \\ \hline $c_3$ & 0.6364 & 0.6364 \\ \hline $s_3$ & -0.7714 & -0.7714 \\ \hline $c_4$ & 0 & 0 \\ \hline $s_4$ & -1 & -1 \\ \hline $c_5$ & 0.7714 & 0.7714 \\ \hline $s_5$ & 0.6364 & 0.6364 \\ \hline $c_6$ & 0 & 0 \\ \hline $s_6$ & 1 & 1 \\ \hline $c_7$ & -0.6364 & -0.6364 \\ \hline $s_7$ & 0.7714 & 0.7714 \\ \hline \end{tabular} \end{center} Expressed in angles, these are \begin{center} \begin{tabular}{|c|c|c|} \hline Angles & singularity $1$ & singularity $2$ \\\hline $y_1$ & -1.9413 & 1.5581 \\\hline $y_2$ & 1.3837 & -1.3837 \\\hline $y_3$ & -0.8810 & -0.8810 \\\hline $y_4$ & -1.5708 & -1.5708 \\\hline $y_5$ & 0.6898 & 0.6898 \\\hline $y_6$ & 1.5708 & 1.5708 \\\hline $y_7$ & 2.2606 & 2.2606 \\\hline \end{tabular} \end{center} \caption{The singularities of $J_{367}$ type, original values apart from $b_1,a_1,a_2$. The values are presented only with 4 decimals but were computed with 16 decimals.} \label{tab:J_367} \end{table} \begin{figure}[htb] \centering \begin{tabular}{l|r} \epsfig{file=S2_sing1.eps, width=0.45\textwidth} & \epsfig{file=S2_sing2.eps, width=0.45\textwidth} \\ \hline \end{tabular} \caption{Singular positions (according to $J_{367}$, $U_1$) when $b_1=-0.02691,\, a_1=0.02,\, a_2=0.055$ and $a_3,\dots,a_7$ have the original values. The physical interpretation is as in Figure \ref{fig:S1_T5_sing}.} \label{fig:S2_sing} \end{figure} \subsection{A rational case} \label{Exam:4} Finally, let us show a rational-valued singularity, that is, one with $c_i,s_i\in\mathbb{Q}$. Choose \begin{align*} a_4=a_5=a_6=a_7=3/20 &\quad a_1=1/10 \quad a_2=a_3=1/2 \\ b_1=-1/10 \quad b_2=1/5 &\quad w_1=-2/5 \quad w_2=-1/5 \end{align*} and solve $c,s$ from the generators of $I_2 \cup J \cup F_J$ in \eqref{eq:a4_is_a5}. Now $c_5,s_5,c_7,s_7$ are arbitrary (apart from $c_5^2+s_5^2=1$, $c_7^2+s_7^2=1$) and the chosen result is (see also Figure \ref{fig:rat_cos_sin}) \begin{align*} c &= (0, 3/5, 4/5, 0, 3/5, 0, 4/5) \\ s &= (1, -4/5, -3/5, -1, 4/5, 1, 3/5).
\end{align*} \begin{figure}[htb] \centering \epsfig{file=rational_cos_sin.eps, width=0.7\textwidth} \caption{A singular configuration with rational $c_i,s_i,a_i,b_i$.} \label{fig:rat_cos_sin} \end{figure} \section{Conclusion} \label{sec:discuss} We have studied singularities of the multibody system known as ``Andrews' squeezing system'', which is a well-known benchmark problem both for multibody solvers and for differential-algebraic equation solvers. Using our tools we have shown in Theorem \ref{thm:3} that the original benchmark problem is indeed void of singularities, thereby ensuring that whatever numerical problems are met in the benchmark tests, they are due to something other than a nearby singularity of the system. Apparently, this non-singularity of the problem had not previously been rigorously proven in the literature. However, we have shown that with suitably chosen parameters $(a,b,w)$ this system can exhibit singular configurations. In fact, there are families of values $(a,b,w)$ that produce singularities, see Theorems \ref{thm:1} and \ref{thm:2}. We provide examples of singularities, calculated using the original benchmark parameter values apart from $b_1,a_1,a_2$. Considering $a_1,a_2$ as freely chosen parameters, Figures \ref{fig:E_1_and_2} and \ref{fig:S2_n3n4} show the areas of the $(a_1,a_2)$ plane where the system exhibits singularities. For example, choosing the point $(a_1,a_2)$ within the intersection of the three areas in Figures \ref{fig:E_1_and_2} (both panels) and \ref{fig:S2_n3n4} would give a system with 10 singular configurations. A natural question remains: are the singularities presented here the only possible ones? In other words, are there singularities which do not come from the singularities of some subsystem? While Gr\"obner basis techniques {\em in principle} provide a way to answer this question directly, we could not do so in practice due to complexity problems. \subsection{Appendix} \paragraph{The coefficients $f_i$:} The coefficients $f_1,\dots,f_5$ in the context of $T_5$ are \\ \begin{align*} f_1 &=4(a_5-a_4)^2(b_1^2-2b_1w_1+b_2^2-2b_2w_2+w_1^2+w_2^2)\\ & =4(a_5-a_4)^2|b-w|^2, \\ f_2 &= 4(w_1-b_1)(a_4-a_5)(-b_1^2+2b_1w_1-b_2^2+2b_2w_2-w_1^2-w_2^2+a_3^2-a_4^2+2a_4a_5-a_5^2)\\ &=4(w_1-b_1)(a_4-a_5)\big(a_3^2-(a_4-a_5)^2-|b-w|^2\big), \\ f_3 &=b_1^2-2b_1w_1+b_2^2-2b_2w_2+2b_2a_4-2b_2a_5+w_1^2+w_2^2-2w_2a_4+2w_2a_5-a_3^2+a_4^2-2a_4a_5+a_5^2\\ &=|b-w|^2+2(b_2-w_2)(a_4-a_5) -a_3^2+(a_4-a_5)^2, \\ f_4 &=b_1^2-2b_1w_1+b_2^2-2b_2w_2-2b_2a_4+2b_2a_5+w_1^2+w_2^2+2w_2a_4-2w_2a_5-a_3^2+a_4^2-2a_4a_5+a_5^2 \\ &= |b-w|^2-2(b_2-w_2)(a_4-a_5) -a_3^2+(a_4-a_5)^2, \\ f_5 &= a_3^2-a_4^2+2a_4a_5-a_5^2-b_1^2+2b_1w_1-b_2^2+2b_2w_2-w_1^2-w_2^2 \\ &= a_3^2-(a_4-a_5)^2-|b-w|^2. \end{align*} \paragraph{The coefficients $d_i,l_i$:} The coefficients $d_i,l_i$ in the context of $K_2$ are \\ \begin{tabular}[h]{ll} \hline\\ $ d_1 $ &= $ 2a_1a_2(a_3^2 +2a_3b_1s_3 -2a_3b_2c_3 +b_1^2+b_2^2) $ \\[3mm] $ d_2 $ &= $ -4a_1^2a_2^2s_2^2$ \\[3mm] $ d_3 $ &= $ -a_1^4 +2a_1^2a_2^2 +a_1^2a_3^2 +2a_1^2a_3b_1s_3 -2a_1^2a_3b_2c_3 +a_1^2b_1^2 $ \\[3mm] & $ +a_1^2b_2^2 -a_2^4 +a_2^2a_3^2 +2a_2^2a_3b_1s_3 -2a_2^2a_3b_2c_3 +a_2^2b_1^2 +a_2^2b_2^2 $ \\[3mm] \hline\\ $ l_1 $ &= $ -2a_1a_2(a_3^2 +2a_3b_1s_3 -2a_3b_2c_3 +b_1^2+b_2^2) $ \\[3mm] $ l_2 $ &= $ 2a_1a_2(a_3s_3+b_1) $ \\[3mm] $ l_3 $ &= $ -(a_3c_3-b_2)(a_1^2 -a_2^2 +a_3^2 +2a_3b_1s_3 -2a_3b_2c_3 +b_1^2+b_2^2) $ \\[3mm] $ l_4 $ &= $ 2a_1a_2s_1s_2-(a_3s_3+b_1)a_2c_2+(a_3c_3-b_2)a_2s_2-(a_3s_3+b_1)a_1 $.
\\[3mm] \hline\\ \end{tabular} We can also simplify these expressions: \begin{eqnarray*} d_0 &=& a_3^2 +|b|^2+2a_3(b_1s_3-b_2c_3) \\ d_1 &=& 2a_1a_2d_0 \\ d_2 &=& n_1n_2 \\ d_3 &=& (a_1^2+a_2^2)d_0-(a_1^2-a_2^2)^2 \\ n_1 &=& (a_1+a_2)^2 -d_0 \\ n_2 &=& (a_1-a_2)^2 -d_0 = 4a_1a_2-n_1 \\ l_1 &=& -d_1 \\ l_3 &=& -(a_3c_3-b_2)(a_1^2-a_2^2+d_0) \\ l_4 &=& -(a_3s_3+b_1)(a_2c_2+a_1) +a_2s_2 (a_3c_3-b_2+2a_1s_1) \\ \hat g_1 &=& -4a_1^2a_2^2s_2^2+n_1(4a_1a_2-n_1) \\ \hat g_2 &=& 2a_1a_2 d_0 c_2 +(a_1^2+a_2^2)d_0 -(a_1^2-a_2^2)^2 \\ \hat g_3 &=& -2a_1a_2 d_0 s_1 +2a_1a_2 (a_3 s_3+b_1) -(a_3c_3-b_2)(a_1^2-a_2^2+d_0) \\ \hat g_4 &=& (a_1^2-a_2^2)c_1+ l_4 \end{eqnarray*} \paragraph{The coefficients $r_i$:} The coefficients $r_i$ in the context of $L_1$ are \\ \begin{tabular}[h]{ll} \hline\\ $ r_1$ &= $(a_1^2+a_2^2)|b|^2-2b_1a_1^2a_3s_7-2b_1a_2^2a_3s_7+2b_2a_1^2a_3c_7+2b_2a_2^2a_3c_7-(a_1^2-a_2^2)^2 +(a_1^2 +a_2^2)a_3^2 $ \\[3mm] $r_2$ &= $2a_1(b_1a_2-a_2a_3s_7)s_2 $ \\[3mm] $r_3$ &= $b_1^2b_2+b_1^2a_3c_7-2b_1b_2a_3s_7-2b_1a_3^2c_7s_7+b_2^3+3b_2^2a_3c_7+b_2a_1^2-b_2a_2^2+3b_2a_3^2c_7^2 $ \\[3mm] & $+b_2a_3^2s_7^2+a_1^2a_3c_7-a_2^2a_3c_7+a_3^3c_7 $ \\[3mm] $r_4$ &= $(2a_1a_2)s_1s_2+(-b_1a_2+a_2a_3s_7)c_2+(-b_2a_2-a_2a_3c_7)s_2+(-b_1a_1+a_1a_3s_7)$\\[3mm] \hline\\ \end{tabular}
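\paragraph{A numerical cross-check:} As an independent cross-check of the numerical examples of Section \ref{sec:examples}, the following Python/numpy sketch (our own illustration, not part of the benchmark code) evaluates the reduced constraint polynomials $m_1,\dots,m_4$ of Theorem \ref{thm:3} at the first singularity of Table \ref{tab:J_4567} (i.e., with $a_1=0.03$, $a_2=0.055$ and the remaining parameters original) and inspects the singular values of the Jacobian of the reduced system ($m_1,\dots,m_4$ plus the five unit-circle constraints). At an exact singular configuration the smallest singular value would vanish; with the 4-decimal table data it is expected to be merely small.
\begin{verbatim}
import numpy as np

a1, a2, a3, a4, a5 = 0.03, 0.055, 0.035, 0.020, 0.040
b1, b2, w1, w2 = -0.03635, 0.03273, -0.06934, -0.00227

# (c1,...,c5) and (s1,...,s5) of singularity 1 in the J_4567 table:
c = np.array([-0.8322, -0.3045, 0.4300, 0.0, 0.0695])
s = np.array([-0.5544, 0.9525, -0.9029, -1.0, 0.9976])

def F(x):
    c1, s1, c2, s2, c3, s3, c4, s4, c5, s5 = x
    return np.array([
        a1*c1 - a2*(c1*c2 - s1*s2) - a3*s3 - b1,
        a1*s1 - a2*(s1*c2 + c1*s2) + a3*c3 - b2,
        a1*c1 - a2*(c1*c2 - s1*s2) - a4*(s4*c5 + c4*s5) - a5*c5 - w1,
        a1*s1 - a2*(s1*c2 + c1*s2) + a4*(c4*c5 - s4*s5) - a5*s5 - w2,
        c1**2 + s1**2 - 1, c2**2 + s2**2 - 1, c3**2 + s3**2 - 1,
        c4**2 + s4**2 - 1, c5**2 + s5**2 - 1,
    ])

x0 = np.ravel(np.column_stack([c, s]))      # (c1,s1,...,c5,s5)
print('residuals:', F(x0))                  # all O(1e-5) or smaller

# 9 x 10 Jacobian by central finite differences:
h = 1e-7
J = np.array([(F(x0 + h*e) - F(x0 - h*e)) / (2*h)
              for e in np.eye(10)]).T
print('singular values:', np.linalg.svd(J, compute_uv=False))
\end{verbatim}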
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} For a graph $X$, the \emph{spectrum} of $X$ is the spectrum of its adjacency matrix. For a group $G$ and a subset $T\subseteq G$, the \emph{Cayley graph} of $G$ generated by $T$, denoted by $X(G,T)$, is the graph with vertex set $G$ and an edge $(g,tg)$ for each $g\in G$ and $t\in T$. If $T$ is inverse-closed, then $X(G,T)$ is an undirected graph, which we will assume henceforth. In this paper we are interested in the spectrum of the Cayley graph $X(S_n,T_n)$, where $n\geq2$ is an integer, $S_n$ is the symmetric group and $T_n:=\{(1~2),(1~3),\dots,(1~n)\} \subseteq S_n$. Friedman~\cite{Friedman00} proved that if $T_n' \subseteq S_n$ is any set of $n-1$ transpositions and $T_n' \neq T_n$ (up to conjugacy), then the spectrum of $X(S_n,T_n')$ is never integral. Abdollahi and Vatandoost~\cite{AV} conjectured that the spectrum of $X(S_n,T_n)$ is integral, and contains all integers in the range from $-(n-1)$ to $n-1$ (with the sole exception that when $n\leq3$, zero is not an eigenvalue of $X(S_n,T_n)$). The conjecture was verified by computer for $n\leq 6$. In this paper we prove the second part of the conjecture. \begin{theorem} \label{main} Let $n \geq 2$ be an integer and let $T_n=\{(1~2),(1~3),\dots,(1~n)\} \subseteq S_n$. For each integer $1 \leq \ell \leq n-1$, $\pm(n-\ell)$ are eigenvalues of $X(S_n,T_n)$ with multiplicity at least ${n-2 \choose \ell-1}$. If $n\ge4$, then $0$ is an eigenvalue of $X(S_n,T_n)$ with multiplicity at least ${n-1 \choose 2}$. \end{theorem} Note that $\pm(n-1)$ are simple eigenvalues of $X(S_n,T_n)$ since the graph is $(n-1)$-regular, bipartite, and connected. \section{Partial permutation graphs} \label{sec:PartialPermutationGraphs} In this section we introduce a family of graphs, called \emph{partial permutation graphs}. Let $d$ and $n$ be positive integers such that $1 \leq d \leq n$. Let $S_{d,n}$ be the set of all $d$-tuples with entries from the set $[n]=\{1,\dots,n\}$ having no repeated entries, that is, $$ S_{d,n} = \{(a_1,\dots,a_d)\mid a_1,\dots,a_d \in [n]~\textrm{ and~} a_i\neq a_j \textrm{~for~} 1\leq i<j \leq d\}. $$ The \emph{partial permutation graph} $P(d,n)$ is defined as follows. Its vertex set is $S_{d,n}$, and two $d$-tuples are adjacent if and only if they differ in exactly one coordinate. The following lemma, whose proof is straightforward, lists some basic properties of $P(d,n)$. \begin{lemma} \label{basic-prop} Let $d$ and $n$ be positive integers such that $1 \leq d \leq n$. Then $P(d,n)$ satisfies the following properties: \begin{enumerate} \item[\rm(i)] $|V(P(d,n))|=\frac{n!}{(n-d)!}$. \item[\rm(ii)] The graph $P(d,n)$ is $d(n-d)$-regular. \item[\rm(iii)] The cardinality of every maximal clique in $P(d,n)$ is $n-d+1$. \end{enumerate} \end{lemma} Next we show that $X(S_n,T_n)$ is a partial permutation graph. \begin{lemma} \label{bipartite} Let $n\geq2$ be an integer. Then $P(n-1,n)$ is isomorphic to $X(S_n,T_n)$. In particular, $P(n-1,n)$ is bipartite. \end{lemma} \begin{proof} Set $P:=P(n-1,n)$ and $X:=X(S_n,T_n)$. For an $n$-tuple $(c_1,\dots,c_n)$ of pairwise distinct elements of $[n]$, let $\pi_{(c_1,\dots,c_n)} \in S_n$ be the permutation defined by $\pi_{(c_1,\dots,c_n)}(c_j)=j$ ($j=1,\dots,n$). Let $\phi:V(P) \rightarrow V(X)$ be the mapping defined by $\phi((a_1,\dots,a_{n-1}))=\pi_{(a_0,a_1,\dots,a_{n-1})}$, where $\{a_0\} = [n] \setminus \{ a_1,\dots,a_{n-1} \}$. We claim that $\phi$ is a graph isomorphism. Clearly, $\phi$ is bijective.
It remains to show that $p_1p_2 \in E(P)$ if and only if $\phi(p_1)\phi(p_2) \in E(X)$. Suppose $p_1p_2 \in E(P)$. Let $p_1=(a_1,\dots,a_{n-1})$ and $p_2=(b_1,\dots,b_{n-1})$. Let $1 \leq \ell \leq n-1$ be such that $p_1$ and $p_2$ differ in the $\ell$th coordinate (since $p_1p_2 \in E(P)$, $\ell$ exists and is unique). By the definition of $\phi$, we have $\phi(p_1)=\pi_{(b_{\ell},a_1,\dots, a_{n-1})}$ and $\phi(p_2)=\pi_{(a_{\ell},a_1,\dots,a_{\ell-1},b_{\ell},a_{\ell+1},\dots, a_{n-1})}$. Then $\phi(p_1)=\sigma \cdot \pi_{(a_{\ell},a_1,\dots,a_{\ell-1},b_{\ell},a_{\ell+1},\dots, a_{n-1})} = \sigma\cdot\phi(p_2)$, where $\sigma \in S_n$ is the transposition $(1~\ell+1)$, hence $\phi(p_1)\phi(p_2) \in E(X)$. For the converse, observe that if $p_1p_2 \notin E(P)$, then $\phi(p_1)$ and $\phi(p_2)$ differ in at least two coordinates that are different from the first coordinate. Hence, $\phi(p_1)\neq \sigma \cdot \phi(p_2)$ for any transposition $\sigma$ of the form $(1~i)$ ($1 < i \leq n$). \end{proof} For integers $1\leq d<n$, let $I(n,d)$ be the set of all subsets of $[n]$ of cardinality $d+1$. For $I \in I(n,d)$, let $A_I$ be the set of all $d$-tuples (with no repetitions) with entries from the set $I$. Note that $|I(n,d)|={n \choose d+1}$ and $|A_I|= (d+1)!$. Let $\mathcal{A}(n,d):=\{A_I\mid I \in I(n,d)\}$. For $\mathcal{A}\subseteq \mathcal{A}(n,d)$, we say that a $d$-tuple $t$ is \emph{unique} with respect to $\mathcal{A}$ if there exists $A \in \mathcal{A}$ such that $t \in A$ but $t \notin A'$ for every $A' \in \mathcal{A} \setminus \{A\}$. We say that $\mathcal{A}$ is \emph{independent} if every nonempty subset of $\mathcal{A}$ has (at least one) unique $d$-tuple. With this notation we have the following. \begin{lemma} \label{IS} There exists an independent set $\mathcal{A} \subseteq \mathcal{A}(n,d)$ of cardinality $n-1 \choose d$. \end{lemma} \begin{proof} We proceed by induction on $n+d$. If $d=1$, let $\mathcal{A}=\{A_{\{1,2\}},A_{\{1,3\}},\dots, \allowbreak A_{\{1,n\}}\}$. Note that $\{1,i\}\in I(n,1)$ and $A_{\{1,i\}}=\{(1),(i)\}$, for $i=2,\dots,n$. Clearly, $\mathcal{A}$ is an independent set of cardinality $n-1$ and the claim follows. If $d=n-1$, then $|I(n,n-1)|=1$, and the claim follows with $\mathcal{A}=\{A_{\{1,\dots,n\}}\}$. We may assume that $d \geq 2$ and $n\geq d+2$. Let $\mathcal{A}^*(n,d)= \mathcal{A}(n,d) \setminus \mathcal{A}(n-1,d)$. By the induction hypothesis, there exists an independent set $\mathcal{A}_1 \subseteq \mathcal{A}(n-1,d)$ of cardinality $n-2 \choose d$. Next we claim that \begin{itemize} \item [(1)] there exists an independent set $\mathcal{A}_2 \subseteq \mathcal{A}^*(n,d)$ of cardinality $n-2 \choose d-1$, such that for every nonempty subset $\mathcal{A}_2' \subseteq \mathcal{A}_2$ there is a unique $d$-tuple with respect to $\mathcal{A}_2'$ one of whose entries contains the value $n$. \end{itemize} To prove (1), let $\phi: \mathcal{A}^*(n,d) \rightarrow \mathcal{A}(n-1,d-1)$ be the mapping defined by $$\phi(A_I)=A_{I \setminus \{n\}}$$ where $ I \in I(n,d)$ and $n \in I$. Clearly, $\phi$ is bijective. By the induction hypothesis, there is an independent set $\mathcal{A}' \subseteq \mathcal{A}(n-1,d-1)$ of cardinality $n-2 \choose d-1$. It is easily seen that $\mathcal{A}_2:=\phi^{-1}(\mathcal{A'})$ is an independent set in $\mathcal{A}^*(n,d)$ with the desired property. This proves (1). \smallskip Set $\mathcal{A}:=\mathcal{A}_1 \cup \mathcal{A}_2$. By definition, $\mathcal{A}_1$ and $\mathcal{A}_2$ are disjoint, hence $|\mathcal{A}|=|\mathcal{A}_1|+|\mathcal{A}_2|={n-2 \choose d} + {n-2 \choose d-1}= {n-1 \choose d}$.
To conclude the proof it remains to show that $\mathcal{A}=\mathcal{A}_1 \cup \mathcal{A}_2 \subseteq \mathcal{A}(n,d)$ is an independent set. Let $\mathcal{A}' \subseteq \mathcal{A}$ be nonempty. We will show that $\mathcal{A}'$ has a unique $d$-tuple. This is trivially true if $\mathcal{A}' \subseteq \mathcal{A}_1$. Hence, we may assume that $ \mathcal{A}_2 \cap \mathcal{A'} \neq \emptyset$. Let $t$ be the unique $d$-tuple with respect to $\mathcal{A}_2 \cap \mathcal{A'} \subseteq \mathcal{A}_2$ whose existence is guaranteed by (1). Then, one of the entries of $t$ contains the value $n$. The uniqueness of $t$ with respect to $\mathcal{A}'$ follows since $t$ is unique with respect to $\mathcal{A}_2 \cap \mathcal{A'}$ and, for each $A \in \mathcal{A}_1$, no $d$-tuple of $A$ has an entry with value $n$. This concludes the proof. \end{proof} \begin{lemma} \label{function} Let\/ $1\leq d < n $ be integers and let $X=P(d,n)$. Then there exists a family $\mathcal{F}_X$ of functions, each with domain $V(X)$ and range $\{-1,0,1\}$, such that the following holds: \begin{enumerate} \item[\rm(i)] $\{-1,1\} \subseteq Im(\phi)$, for every $\phi \in \mathcal{F}_X$. \item[\rm(ii)] $\sum_{v \in V(K)} \phi(v) =0$, for every $\phi \in \mathcal{F}_X$ and every maximal clique $K$. \item[\rm(iii)] If we view each $\phi$ as a vector in $\mathbb{R}^{V(X)}$, then $\mathcal{F}_X$ contains a linearly independent set of cardinality $n-1 \choose d$. \end{enumerate} \end{lemma} \begin{proof} Consider $A_I \in \mathcal{A}(n,d)$, for some $I \in I(n,d)$. Since each element of $A_I$ is a $d$-tuple, we can view $A_I$ as a subset of $V(X)$. Then, $X[A_I]\subseteq X$ (the subgraph of $X$ induced on the vertex set $A_I$) is isomorphic to $P(d,d+1)$. By Lemma \ref{bipartite}, $X[A_I]$ is bipartite. Let $A_I^1,A_I^2 \subseteq A_I$ ($A_I^1 \cup A_I^2=A_I$) be the corresponding bipartition. Let $\phi_{I}:V(X) \rightarrow \{-1,0,1\}$ be defined by $$ \phi_{I}(v) = \left\{ \begin{array}{rl} -1, & v \in A_I^1,\\[1mm] 1, & v \in A_I^2,\\[1mm] 0, & v\in V(X)\setminus A_I. \end{array} \right. $$ Let $\mathcal{F}_X:=\{\phi_{I}\mid I \in I(n,d)\}$. We will show that $\mathcal{F}_X$ satisfies (i)--(iii). Property (i) is satisfied trivially, since for every $I \in I(n,d)$, $X[A_I]$ is bipartite with at least one edge. For (ii) we argue as follows. We may assume that $n\geq d+2$, for if $n=d+1$ then $X[A_I]$ is isomorphic to $X$ (since then $A_I=V(X)$) and $\phi_{I}$ satisfies (ii) as required. Let $K \subseteq V(X)$ be a maximal clique in $X$. Since $n \geq d+2$, every edge of $X$ is contained in a clique of size at least 3, and so $|K| \geq 3$. Since $X[A_I]$ is bipartite, $|K \cap A_I| \leq 2$. We claim that \begin{enumerate} \item [(1)] $|K\cap A_I| \neq 1$. \end{enumerate} To prove (1), suppose to the contrary that $|K\cap A_I|=1$. Let $v \in K\cap A_I$. Let $a \in I$ be such that $a$ does not appear in the $d$-tuple $v$ ($a$ exists since $|I|=d+1$). Since $K$ is a clique, the $d$-tuples in $K$ agree on exactly $d-1$ coordinates and pairwise differ in exactly one coordinate, say the $j$th coordinate ($1 \leq j \leq d$). Now consider the $d$-tuple $y$ obtained from $v$ by changing its $j$th entry to $a$. Clearly $y$ has no repetitions, $y\neq v$, and since $K$ is maximal, $y \in K$. But by the definition of $A_I$, $y \in A_I$. Hence, $|K\cap A_I| \geq 2$; a contradiction. This proves (1). By (1) and since $|K \cap A_I| \leq 2$, we have either $|K \cap A_I| =0$ or $|K \cap A_I| = 2$. In both cases $\phi_{I}$ satisfies the equation in (ii), thus (ii) holds.
Finally, part (iii) is a consequence of Lemma~\ref{IS}. First, we take a subset $\mathcal{A} \subseteq \mathcal{A}(n,d)$ of cardinality $n-1 \choose d$ which is independent in the sense of Lemma~\ref{IS}. Next, we consider the functions in $\{\phi_{I}~|~A_I \in \mathcal{A} \}$. The existence of unique $d$-tuples shows that for each $\phi_I$, there is a $d$-tuple $a$ such that $\phi_I(a)\ne0$, but $\phi_J(a)=0$ for every $A_J \in {\mathcal A} \setminus \{ A_I \}$. Thus we have a linearly independent set in ${\mathcal F}_X$ of cardinality $ n-1 \choose d$. \end{proof} \section{Proof of the main result} \label{sec:ProofOfMainResult} Let $G$ be a group, $H \leq G$ a subgroup of $G$ and $T \subseteq G$. The \emph{Schreier coset graph} on $G / H$ generated by $T$ is the graph $X=X(G,H,T)$ with $V(X)=G/H=\{gH:~g\in G\}$, the set of left cosets of $H$, and there is an edge $(gH,tgH)$ for each coset $gH$ and each $t \in T$. If $T$ is inverse-closed, then $X$ is an undirected multigraph (possibly with loops). Note that if $1_G$ is the identity element of $G$, then $X(G,\{1_G\},T)=X(G,T)$ is the Cayley graph on $G$ generated by $T$. The following lemma is well-known, see, e.g.~\cite{Friedman00}. \begin{lemma} \label{eigenforeqi} Let $G$ be a group, $T$ an inverse-closed subset of $G$, and $H \leq K \leq G$. If $\lambda \in \mathbb{R}$ is an eigenvalue of $X(G,K,T)$ of multiplicity $p$, then $\lambda$ is an eigenvalue of $X(G,H,T)$ of multiplicity at least $p$. \end{lemma} The following lemma is straightforward. \begin{lemma} \label{tran} For $1\leq k \leq n-1$, let $S_{n-k}'$ be the subgroup of $S_n$ containing all permutations fixing the elements $n-k+1,\dots,n$. For each $k$-tuple $(i_1,\dots,i_k) \in [n]^k$, where $i_1,\dots,i_k$ are pairwise distinct, let $\sigma_{(i_1,\dots,i_k)} \in S_n$ be a permutation so that $\sigma_{(i_1,\dots,i_k)}(n-j+1)=i_j$, for $j=1,\dots,k$. Let $Z:=\{\sigma_{(i_1,\dots,i_k)}: i_1,\dots ,i_k ~\textrm{~are pairwise distinct} \}$. Then, $S_n$ is the disjoint union of all left cosets $zS_{n-k}'$, $z \in Z$. \end{lemma} Let $X[k]$ be the Schreier coset graph $X(S_n,S_{n-k}',T)$, where $1 \leq k \leq n-1$ and $T \subseteq S_n$ is inverse-closed. By Lemma~\ref{tran}, we see that $X[k]$ is isomorphic to the graph whose vertex set $V(X[k])$ is the set of all $k$-tuples (with no repetition) from the set $[n]$, in which a $k$-tuple $(i_1,\dots,i_k)$ is adjacent to $(t(i_1),\dots,t(i_k))$, for each $t\in T$. (This was also observed by Bacher in \cite{Ba}.) Note that elements of $T$ may give rise to multiple edges and loops in $X[k]$. Now suppose $T=\{(1~2),(1~3),\dots,(1~n)\}$. Let $V_1 \subseteq V(X[k])$ be the set of $k$-tuples containing the value $1$. Let $\overline{V_1}:=V(X[k]) \setminus V_1$. The following lemma is easily verified. \begin{lemma} \label{propofeqi} Let $V_1,\overline{V_1}$ be the partition of $X[k]$ defined above. Then the following holds: \begin{itemize} \item[\rm(i)] No two distinct vertices of $\overline{V_1}$ are adjacent in $X[k]$. \item[\rm(ii)] Every $v \in \overline{V_1}$ has $n-k-1$ self-loops and $k$ neighbors in $V_1$. \item[\rm(iii)] Let $u \in V_1$. Then $u$ is adjacent in $X[k]$ to exactly $n-k$ vertices in $\overline{V_1}$, which are obtained from $u$ by replacing the entry 1 with a number different from the $k$ entries of $u$. \end{itemize} \end{lemma} \begin{lemma} \label{gettingeigenvalues} Let $1 \leq k \leq n-2$. Then $n-k-1$ is an eigenvalue of $X[k]$ with multiplicity at least ${n-2 \choose k}$.
\end{lemma} \begin{proof} Let $X[k]=(V[k],E[k])$ and let $V_1$ and $\overline{V_1}$ be as above. We construct the following auxiliary graph $\mathcal{G}$. The vertex set of $\mathcal{G}$ is $\overline{V_1}$, and two vertices $u,v \in \overline{V_1}$ are adjacent if and only if $v\neq u$ and there exists $w \in V_1$ such that $wu,wv \in E[k]$. Clearly, if $vu \in E(\mathcal{G})$ then $v$ and $u$ differ in exactly one coordinate. Hence, $\mathcal{G}$ is isomorphic to $P(k,n-1)$. Note that $n-1$ comes from the fact that the value $1$ does not appear in any of the vertices of $\overline{V_1}$. Observe that $k<n-1$, so $P(k,n-1)$ is defined. Let $\mathcal{F}_{\mathcal{G}}$ be the set of functions $\phi: \overline{V_1} \to \{-1,0,1\}$ satisfying the properties stated in Lemma~\ref{function}, and let $\mathcal{F} \subseteq \mathcal{F}_{\mathcal{G}}$ be a linearly independent set in $\mathcal{F}_{\mathcal{G}}$ of cardinality $n-2 \choose k$, whose existence is given by Lemma~\ref{function}(iii). For each $\phi \in \mathcal{F}$, let $\phi' : V[k] \rightarrow \{0,1,-1\}$ be the extension of $\phi$ to $V[k]$ defined by: $$ \phi'(v) = \left\{ \begin{array}{rl} \phi(v), & v \in \overline{V_1},\\[1mm] 0, & v \in V_1. \end{array} \right. $$ Since $\mathcal{F}$ is linearly independent, so is the set of extensions $\phi'$; hence, to conclude the proof, it suffices to show that for every $\phi \in \mathcal{F}$, $\phi'$ is an eigenvector of $X[k]$ with eigenvalue $n-k-1$. This task is equivalent to verifying that, for every $v\in V[k]$, the following eigenvalue equation holds: \begin{equation} \label{eq:1} \mbox{$(n-k-1)\phi'(v)=\displaystyle\sum_{vu\in E[k]} \phi'(u)$} \end{equation} where the sum is over all edges of $E[k]$ incident with $v$, including possible self-loops. Recall that $V[k]=V_1 \cup \overline{V_1}$. If $v \in \overline{V_1}$, then by Lemma~\ref{propofeqi}~(i) and (ii), the only edges contributing non-zero values to the right hand side of (\ref{eq:1}) are the $n-k-1$ self-loops at $v$. Hence (\ref{eq:1}) holds in this case. If $v \in V_1$, then the left hand side of (\ref{eq:1}) is equal to 0. Now, by the definition of $\phi'$, the only edges contributing non-zero values to the right hand side of (\ref{eq:1}) are the edges $vu$, where $u \in \overline{V_1}$. By Lemma~\ref{propofeqi}~(iii), $v$ has precisely $n-k$ neighbors in $\overline{V_1}$, and they form a clique $K$ in $\mathcal{G}$ of cardinality $n-k$. By Lemma~\ref{basic-prop}, $K$ is a maximal clique in $\mathcal{G}$ (recall that $\mathcal{G}$ is isomorphic to $P(k,n-1)$). By Lemma~\ref{function}~(ii), the sum of the $\phi'$-values of the vertices of every maximal clique is zero, and thus (\ref{eq:1}) holds in this case as well. \end{proof} The proof of the main result follows at once. \begin{proof}[Proof of Theorem~\ref{main} (for non-zero eigenvalues).] Since $G=X(S_n,T_n)$ is $(n-1)$-regular, $n-1$ is an eigenvalue of $G$ of multiplicity 1. Lemma~\ref{gettingeigenvalues} implies that for $1\leq k \leq n-2$, $n-k-1$ is an eigenvalue of $X[k]$ with multiplicity at least ${n-2 \choose k}$, and hence by Lemma~\ref{eigenforeqi} also of $G$. The same conclusion holds for the negative values since $G$ is bipartite, and hence the spectrum is symmetric with respect to 0. \end{proof} \section{Eigenvalue zero} In this section we prove that 0 is an eigenvalue of $X(S_n,T_n)$ of multiplicity at least $\binom{n-1}{2}$. Let $K(2,n)$ be the graph on vertex set $S_{2,n}$ (all ordered pairs of distinct elements from $[n]$).
Two such pairs are adjacent if and only if either they have the same second coordinate, or they are transposes of each other (one is obtained from the other by interchanging the coordinates). \begin{proposition} \label{prop:cover} The Cayley graph $X(S_n,T_n)$ is a cover over $K(2,n)$. \end{proposition} \begin{proof} For $(i,j)\in S_{2,n}$, let $U_{ij} = \{\pi\in S_n\mid \pi(i)=1 \textrm{ and } \pi(j)=n\}$. Consider the mapping $p:S_n\to S_{2,n}$ defined as $p(\pi)=(i,j)$ if $\pi\in U_{ij}$. We claim that $p$ is a covering projection $X=X(S_n,T_n)\to K(2,n)$. Since both graphs are regular of degree $n-1$, it suffices to see that every $\pi\in U_{ij}$ is adjacent in $X$ to a permutation in $U_{lj}$ for each $l\in [n]\setminus\{i,j\}$ and is adjacent to a permutation in $U_{ji}$. This is confirmed below. Clearly, if $t=\pi(l)$, where $l\ne i,j$, then $t\ne 1,n$ and $(1\, t)\pi(l)=1$ and $(1\, t)\pi(j)=n$. Thus $\pi' = (1\, t)\pi \in U_{lj}$. Similarly, $(1\, n)\pi(j)=1$ and $(1\, n)\pi(i)=n$. Thus $\pi' = (1\, n)\pi \in U_{ji}$. This completes the proof since in each case, $\pi'$ is a neighbor of $\pi$ in $X$. \end{proof} The following proposition provides the missing piece for the completion of the proof of Theorem \ref{main}. \begin{proposition} \label{prop:0 eigenvalue} If\/ $n\ge 4$, then zero is an eigenvalue of $K(2,n)$ and hence also of $X(S_n,T_n)$ of multiplicity at least $\binom{n-1}{2}$. \end{proposition} \begin{proof} It is well known that eigenvalues of the base graph are also eigenvalues of the cover. By Proposition \ref{prop:cover}, it suffices to show that 0 is an eigenvalue of $K(2,n)$ with multiplicity at least $\binom{n-1}{2}$. Let $A,B \subset [n]$ be disjoint subsets of $[n]$, where $|A|\ge2$ and $|B|\ge 2$. For $i\in [n]$, let $\alpha_i$ and $\beta_i$ be real numbers so that $\alpha_i\ne0$ for $i\in A$, $\alpha_i=0$ for $i\notin A$, $\beta_i\ne0$ for $i\in B$, and $\beta_i=0$ for $i\notin B$, such that $\sum_{i=1}^n \alpha_i = 0$ and $\sum_{j=1}^n \beta_j = 0$. Finally, for each $(i,j)\in S_{2,n}$, define \begin{equation} x_{ij} = \alpha_i \beta_j + \alpha_j \beta_i. \label{eq:xij} \end{equation} Observe that $x_{ij}=x_{ji}$ and $x_{ii}=0$ for any $i,j\in [n]$. We claim that $x=(x_{ij})$ is an eigenvector for eigenvalue 0 in $K(2,n)$. To see this, we have to show that the sum $s$ of the values on all neighbors of $(i,j)$ in $K(2,n)$ is zero. But this is easy to see: $$ s = x_{ji} + \sum_{l\ne i,j} x_{lj} = \sum_{l=1}^n x_{lj} = \sum_{l=1}^n (\alpha_l \beta_j + \alpha_j \beta_l) = \beta_j\,\sum_{l=1}^n \alpha_l + \alpha_j\,\sum_{l=1}^n \beta_l = 0. $$ The eigenvectors $(x_{ij})$ for eigenvalue 0 as defined above span a subspace of dimension at least $\binom{n-1}{2}$. The proof is by induction on $n$. For $n=4$, consider partitions $A\cup B$ of $\{1,2,3,4\}$, where $A=\{1,4\}$, $A=\{2,4\}$, or $A=\{3,4\}$ (respectively), and $B=[4]\setminus A$. They give three linearly independent vectors. To see this, note that each of the corresponding eigenvectors defined by (\ref{eq:xij}) has a non-zero value where the other two have value zero. For $n\ge 5$, consider $\binom{n-2}{2}$ independent vectors obtained by taking subsets $A,B$ of $[n-1]$. They all have coordinate 0 for every $(i,n)$ ($1\le i<n$). Finally, we can add $n-2$ other eigenvectors that have precisely one non-zero coordinate $(k,n)$ for $k\in\{1,\dots,n-2\}$: for the $k$th one, take $A=\{1,n\}$ and $B=\{k,n-1\}$, except for $k=1$, when we take $A=\{2,n\}$ and $B=\{1,n-1\}$ so that $A$ and $B$ remain disjoint.
Altogether we have $\binom{n-2}{2} + n-2 = \binom{n-1}{2}$ independent eigenvectors. \end{proof} \bibliographystyle{abbrv}
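As a sanity check of Theorem~\ref{main} for small $n$, the statement (and the full integrality conjecture of Abdollahi and Vatandoost, already verified by computer for $n\leq6$ in \cite{AV}) is easy to confirm numerically. The following Python fragment is our own illustration: it builds the adjacency matrix of $X(S_4,T_4)$, using the edge convention $(g,tg)$, and prints the spectrum.
\begin{verbatim}
from itertools import permutations
import numpy as np

n = 4
perms = list(permutations(range(n)))           # S_n, 0-indexed
index = {p: i for i, p in enumerate(perms)}
transpositions = []
for i in range(1, n):                          # (1 2), ..., (1 n)
    t = list(range(n))
    t[0], t[i] = t[i], t[0]
    transpositions.append(tuple(t))

A = np.zeros((len(perms), len(perms)))
for g in perms:
    for t in transpositions:
        tg = tuple(t[g[x]] for x in range(n))  # edge (g, t g)
        A[index[g], index[tg]] = 1

print(np.round(np.sort(np.linalg.eigvalsh(A)), 6))
# For n = 4, the spectrum is integral, contains -3,...,3, and has
# 0 with multiplicity at least binom(3,2) = 3, as Theorem 1 states.
\end{verbatim}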
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The uncertainty principle is a central feature of quantum mechanics, prohibiting certain properties of quantum systems from being simultaneously well-defined. The Heisenberg uncertainty relation$^1$ lower bounds the product of uncertainties, i.e., the spread measured by the standard deviation, of measurement outcomes for two non-commuting observables$^2$. An improved form of the uncertainty relation was proposed by Robertson$^3$ and Schrodinger$^4$, incorporating both commutators and anti-commutators of more general observables. Motivated by various physical considerations, several other versions of the uncertainty principle have since been suggested. Notable among them are reformulations that take into account the inevitable noise and disturbance associated with measurements$^5$. Efforts to eliminate the state-dependence of the lower bound of uncertainty have led to the formulation of various entropic versions of the uncertainty principle$^{6,7,8,9}$. Entropic uncertainty relations have been tightened due to different effects, such as the presence of correlations$^{10,11,12,13,14}$. A fine-grained version of the uncertainty relation arises as a result of distinguishing the uncertainty of obtaining specific combinations of outcomes for different measurements$^{15}$. An optimal lower bound of entropic uncertainty in the presence of any type of correlations may be determined by fine-graining$^{16}$. For a recent review of uncertainty relations, see Ref.~$^{17}$. The subject of quantum information science, which has seen rapid progress in recent years, was originally inspired to a great extent by the pioneering work of Einstein, Podolsky and Rosen (EPR)$^{18}$. The word `entanglement' was first coined by Schrodinger to describe the property of spatially separated but correlated particles whose paradoxical features were highlighted by EPR. The first testable formulation of the EPR paradox was proposed$^{19}$ using the position-momentum uncertainty relation, in terms of an inequality involving products of inferred variances of incompatible observables. This led to the experimental realization$^{20}$ of the EPR paradox for the case of two spatially separated and correlated light modes. A modern formulation of the EPR-Schrodinger concept of quantum steering based on violations of steering inequalities$^{21}$, akin to the Bell-type local-realist inequalities$^{22,23}$, is derived again using uncertainty relations in their entropic version. Entropic steering relations are indispensable for demonstrating steering in certain continuous variable systems where correlations are not manifest up to second order (variances of observables), as shown recently for several non-Gaussian states$^{24}$. Several other important applications of uncertainty relations in the realm of quantum information processing have been uncovered in recent years. The uncertainty principle has been used for discrimination between separable and entangled quantum states in the realm of continuous variable systems$^{25}$. The utility of the Robertson-Schrodinger uncertainty relation$^{3,4}$ has also been exploited in this context$^{26,27}$. Moreover, the Robertson-Schrodinger uncertainty relation$^{3,4}$ has recently been employed in the domain of discrete variables to distinguish between pure and mixed states of single as well as bipartite qubit and qutrit systems$^{28}$.
The fine-grained uncertainty relation can be used to determine the nonlocality of the underlying physical system$^{15}$, as has been demonstrated for the case of bipartite$^{15}$ and tripartite$^{29}$ systems, as well as in the arena of biased nonlocal games$^{30}$. The uncertainty principle plays a crucial role in the domain of quantum cryptography, since the security of quantum key distribution protocols relies basically on quantum uncertainty$^{31}$. Specifically, the amount of key extractable per state has been linked to the lower limit of entropic uncertainty$^{10,32}$. Uncertainty relations in their different versions thus have many important applications in quantum information theory. In the present article, we review some aspects of a few of these applications, limited mainly to the areas in which the present authors have worked. The plan of this article is as follows. In the next Section we discuss the Robertson-Schrodinger uncertainty relation and briefly sketch how it could be used for distinguishing pure states from mixed states of discrete variables. In Section III we focus on the topic of quantum steering, where steering based on the Heisenberg uncertainty relation as well as entropic steering relations is discussed in the context of continuous variables. The connection between uncertainty and the nonlocality of quantum games is presented in Section IV as an application of the fine-grained uncertainty relation. Section V contains a brief review of entropic uncertainty relations in the presence of quantum memory. Certain concluding remarks are made in Section VI. \section{Determining purity of states using the Robertson-Schrodinger uncertainty relation} In experimental protocols for information processing, the interaction with the environment inevitably affects the purity of a quantum system. A relevant issue for an experimenter is to ascertain whether a prepared pure state has remained isolated from environmental interaction. It becomes important to test whether a given quantum state is pure, in order to use it effectively as a resource for quantum information processing. The purity of a given state is also related to the entanglement of a larger multipartite system of which it may be a part$^{33}$. The mixedness of states can be quantified by their linear entropy, which is a nonlinear functional of the quantum state. The linear entropy can be extracted from the given state by tomography, which usually is expensive in terms of the resources and measurements involved. In this section we discuss how the Robertson-Schrodinger (RS) uncertainty relation may be used to determine the mixedness of quantum states of discrete variables. For the case of continuous variable systems there exist certain pure states for which the uncertainty as quantified by the Robertson-Schrodinger uncertainty relation is minimized$^{34}$. The connection of purity with observable quantities of the relevant states has been found$^{35}$. It has been shown recently that the RS uncertainty relation can be used to distinguish between pure and mixed states of finite dimensional systems$^{28}$. The RS uncertainty relation could be used as a witness of mixedness in the following way.
For any pair of observables $A,B$ and for any quantum state represented by the density operator $\rho$, the RS uncertainty relation can be written as$^{3,4}$ \begin{eqnarray} Q(A,B,\rho) \ge 0 \label{gur1} \end{eqnarray} where \begin{eqnarray} Q(A,B,\rho)&=&(\Delta A)^2 (\Delta B)^2- |\frac{\langle[A,B]\rangle}{2}|^2 \nonumber \\ && - |(\frac{\langle\{A,B\}\rangle}{2}-\langle A\rangle \langle B\rangle)|^2 \label{gur2} \end{eqnarray} with $(\Delta A)^2$ and $(\Delta B)^2$ representing the variances of the observables $A$ and $B$, respectively, given by $(\Delta A)^2=(\langle A^2\rangle)-(\langle A\rangle)^2$, $(\Delta B)^2=(\langle B^2\rangle)-(\langle B\rangle)^2$, and the square (curly) brackets representing the standard commutators (anti-commutators) of the corresponding operators. The quantity $Q(A,B,\rho)$ involves measurable quantities, i.e., the expectation values and variances of the relevant observables in the state $\rho$. States of a $d$-level quantum system are in one to one correspondence with Hermitian, positive semi-definite, unit trace operators acting on a $d$-dimensional Hilbert space. The defining properties of these density operators $\rho$ are (i) $\rho^{\dagger}=\rho$, (ii) $\rho\geq 0 $, (iii) $tr[\rho]=1$. Pure states correspond to the further condition $\rho^2=\rho$, which is equivalent to the scalar condition $tr[\rho^2]=1 $. Hence, the complement of the trace condition can be taken as a measure of mixedness, given by the linear entropy defined for a $d$-level system as \begin{eqnarray} S_l(\rho) = (\frac{d}{d-1})(1-tr(\rho^{2})) \label{linentrop} \end{eqnarray} We now describe how the quantity $Q(A,B,\rho)$ can act as an experimentally realizable measure of mixedness of a system$^{28}$. Let us here discuss the case of two-level systems. The density operator for qubit systems can be expressed in terms of the Pauli matrices. The state of a single qubit can be written as $\rho(\vec{n})=\frac{(I+\vec{n}.\vec{\sigma})}{2}, \>\> \vec{n}\in \mathbb{R}^{3}$. Positivity of this Hermitian unit trace matrix demands $ |\vec{n}|^2\leqslant1$. It follows that single qubit states are in one to one correspondence with the points on or inside the closed unit ball centred at the origin of $\mathbb{R}^{3}$. Points on the boundary correspond to pure states. For a pair of suitably chosen spin observables, the RS relation is satisfied as an equality for the extremal states, i.e., the pure states, and as an inequality for non-extremal points, i.e., for the mixed states$^{28}$. The linear entropy of the state $\rho$ can be written as $S_l(\rho)=(1-\vec{n}^{2})$. If we choose spin observables along two different directions, i.e., $A=\hat{r}.\vec{\sigma}$ and $B=\hat{t}.\vec{\sigma}$, then $Q$ becomes \begin{eqnarray} Q(A,B,\rho)=(1-(\hat{r}.\hat{t})^{2})S_l(\rho) \label{qub} \end{eqnarray} It thus follows that for $\hat{r}.\hat{t}=0$, $Q$ coincides with the linear entropy. For orthogonal spin measurements, the uncertainty $Q$ quantified by the RS relation and the linear entropy $S_l$ are thus exactly the same for single-qubit systems. Thus, it turns out that $Q=0$ is both a necessary and sufficient condition for any single qubit system to be pure when the pair of observables are qubit spins along two different directions.
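The coincidence of $Q$ and $S_l$ for orthogonal spin settings is easy to verify numerically. The following Python/numpy sketch is our own illustration of Eqs.~(\ref{gur2}) and (\ref{qub}) (the state and the settings $A=\sigma_z$, $B=\sigma_x$ are chosen arbitrarily): it evaluates $Q$ for a random single-qubit state and compares it with $1-|\vec{n}|^2$.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def Q(A, B, rho):
    ev = lambda M: np.trace(rho @ M).real
    varA = ev(A @ A) - ev(A)**2
    varB = ev(B @ B) - ev(B)**2
    comm = np.trace(rho @ (A @ B - B @ A)) / 2   # purely imaginary
    anti = np.trace(rho @ (A @ B + B @ A)).real / 2 - ev(A)*ev(B)
    return varA*varB - abs(comm)**2 - abs(anti)**2

rng = np.random.default_rng(0)
nvec = rng.normal(size=3)
nvec *= rng.uniform() / np.linalg.norm(nvec)     # random |n| <= 1
rho = (I2 + nvec[0]*sx + nvec[1]*sy + nvec[2]*sz) / 2

print(Q(sz, sx, rho))        # equals ...
print(1 - nvec @ nvec)       # ... the linear entropy 1 - |n|^2
\end{verbatim}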
For two-qubit systems the states considered may be taken to be polarized along a specific known direction, say, the $z$-axis forming the Schmidt decomposition basis. In order to enable $Q(A,B,\rho)$ to be a mixedness measure, $A$ and $B$ are chosen for the two-qubit case to be of the form $A=(\hat{m}.\vec{\sigma}^{1})\otimes(\hat{n}.\vec{\sigma}^{2})$, and $B=(\hat{p}.\vec{\sigma}^{1})\otimes(\hat{q}.\vec{\sigma}^{2})$, respectively, where $\hat{m},\hat{n},\hat{p},\hat{q}$ are unit vectors. For enabling $Q(A,B,\rho)$ to be used for determining the purity of the given two-qubit state, the appropriate choice of the observables $A$ and $B$ is found to be that lying in the two-dimensional $x$-$y$ plane (i.e., $\hat{m},\hat{n},\hat{p},\hat{q}$ are all taken to lie in the $x$-$y$ plane), normal to the $z$-axis pertaining to the relevant Schmidt decomposition basis. Then, $Q(A,B,\rho)=0$ necessarily holds good for pure two-qubit states whose individual spin orientations are all along a given direction (say, the $z$-axis) normal to which lies the plane on which the observables $A$ and $B$ are defined. On the other hand, $Q(A,B,\rho) > 0$ holds good for most settings of $A$ and $B$ for two-qubit isotropic states, for the Werner class of states given by $\rho_{w}=((1-p)/4)I+p\rho_{s}$ ($\rho_s$ is the two-qubit singlet state), as well as for other types of one-parameter two-qubit states which comprise pure states whose individual spin orientations are all along the same given direction normal to the plane on which the observables $A$ and $B$ are defined. The RS uncertainty relation has been shown to determine the purity of qutrit systems as well$^{28}$. Three-level systems are of fundamental relevance in laser physics, and have generated much recent interest from the perspective of information processing$^{36}$. It has been shown using examples of single and bipartite classes of qutrit states that the RS uncertainty relation can be satisfied as an equality for pure states, while it remains an inequality for mixed states, by the choice of suitable observables. An observational scheme which can detect mixedness of qutrit systems unambiguously requires fewer resources compared to tomography, and is implementable through the measurement of Hermitian witness-like operators$^{28}$. It may be relevant to note here, though, that the set of pure states is not convex, and hence such witness-like operators do not arise from any geometrical separability criterion inherent to the theory of entanglement witnesses$^{37}$, which has been applied more recently to the cases of teleportation witnesses$^{38}$, as well as to witnesses of absolutely separable states$^{39}$. The operational determination of purity using the RS relation requires a few additional steps. A scheme for using the uncertainty relation to determine whether a given state is pure or mixed, provided prior knowledge of the basis is available, has been outlined in Ref.$^{28}$. The limitation of instrumental precision could make the observed value of $Q$ for pure states a small number instead of exactly zero. In order to take into account the experimental inaccuracy, a parameter $\varepsilon$ is introduced in the analysis. For a single-qubit system, by choosing the measurement settings for $A$ and $B$ as qubit spins along the $z$ and $x$ directions, respectively, a measured value of the uncertainty $Q\ge\varepsilon$ leads to the conclusion that the given state is mixed.
This prescription for determining mixedness holds for all single-qubit states $\rho(\vec{n})=\frac{(I+\vec{n}.\vec{\sigma})}{2}$, except those lying in the narrow range $1 \ge n \ge \sqrt{1- 2 \varepsilon/3}$, as determined by putting $Q < \varepsilon$. To summarize, the RS uncertainty relation is able to distinguish between pure and mixed states for a broad category of two- and three-level systems. For single party systems, the scheme works for all qubits and for up to three-parameter families of qutrit states$^{40}$. For bipartite systems, the scheme has been shown to work for the mixture of two arbitrary pure states, for the isotropic class, and for the Werner class of states as well. The determination of mixedness using the RS relation may in certain cases require considerably fewer measurements than tomography. In the case of single qutrit states, full tomography involves the estimation of eight parameters, while through the prescription detailed in Ref.$^{28}$ sometimes four measurements may suffice for detecting the purity of a single qutrit state. A maximum of eight measurements suffices to distinguish between pure and mixed states of a single qutrit up to three-parameter families. The difference in the number of required measurements is substantially enhanced for composite states. For two qubits, the RS relation requires up to five measurements, compared to the fifteen required by tomography. For the case of two qutrits, the measurement of at most eight expectation values suffices. \section{Quantum steering} The Einstein-Podolsky-Rosen (EPR) paradox$^{18}$ has not only inspired a huge body of subsequent debate, but has also played a pivotal role in the unfolding of several rich features of quantum mechanics relevant for information processing. Considering a position-momentum correlated state of two particles, and assuming the notions of spatial separability, locality, and reality to hold true at the level of quantum particles, EPR argued that the quantum mechanical description of the state of a particle is not complete. The EPR paradox arises from the correlations between two non-commuting observables of a sub-system with those of the other sub-system, for instance, the correlations between the measurement outcomes of positions and momenta for two separated particles, i.e., $\langle x\, p_y\rangle \neq 0$, with $\langle x\rangle=0=\langle p_y\rangle$ individually. Due to the presence of correlations, the measurement of the position of, say, the first particle leads one to infer the correlated value of the position for the second particle (say, $x_{\inf}$). Now, if the momentum of the second particle is measured, giving the outcome, say, $p$, the value of the product of uncertainties $(\Delta x_{\inf})^2 (\Delta p_{\inf})^2$ may turn out to be smaller than that allowed by the uncertainty principle, {\it viz.} $(\Delta x)^2 (\Delta p )^2 \ge 1/4$ (in units with $\hbar=1$), thus leading to the paradox. Following the work of EPR, Schrodinger$^{41}$ observed that correlations between spatially separated particles entailed the possibility of steering the state on one side merely by the choice of the measurement basis on the other side, without in any way having direct access to the affected particle. The word 'entanglement' was first coined by Schrodinger to describe the property of such spatially separated but correlated particles.
Consider a bipartite entangled state which may be expressed in two different ways, as \begin{eqnarray} \vert\Psi\rangle = \sum_{n=1}^{\infty}c_n\vert\psi_n\rangle\vert u_n\rangle = \sum_{n=1}^{\infty}d_n\vert\phi_n\rangle\vert v_n\rangle \label{ensemb} \end{eqnarray} where $\{\vert u_n\rangle\}$ and $\{\vert v_n\rangle\}$ are two orthonormal bases for one of the parties (say, Alice). If Alice chooses to measure in the $\{\vert u_n\rangle\}$ ($\{\vert v_n\rangle\}$) basis, she projects Bob's system into one of the states $\vert\psi_n\rangle$ ($\vert\phi_n\rangle$). Note that though there is no physical interaction between Alice and Bob, the ensemble of $\vert\psi_n\rangle$s is in general different from the ensemble of $\vert\phi_n\rangle$s. This ability of Alice to affect Bob's state through her choice of the measurement basis was dubbed ``steering'' by Schrodinger$^{41}$. A testable formulation of the EPR paradox was proposed many years later by Reid$^{19}$ for continuous variable systems using the position-momentum uncertainty relation. An inequality involving products of inferred variances of incompatible observables was derived in the context of continuous variables, as follows. Consider the quadrature phase components of two correlated and spatially separated light fields. The quadrature amplitudes associated with the fields $E_{\gamma}=C[\hat{\gamma} e^{-i\omega_{\gamma} t} + \hat{\gamma}^{\dagger} e^{i\omega_{\gamma} t}]$ (where $\hat{\gamma}$, with $\gamma\in\{a,b\}$, are the bosonic operators for the two different modes, $\omega_{\gamma}$ is the frequency, and $C$ is a constant incorporating spatial factors taken to be equal for each mode) are given by \begin{eqnarray} \hat{X}_{\theta}=\frac{\hat{a}e^{- i \theta} + \hat{a}^{\dagger} e^{i \theta}}{\sqrt{2}}, \hspace{0.5cm} \hat{Y}_{\phi}=\frac{\hat{b}e^{- i \phi} + \hat{b}^{\dagger} e^{i \phi}}{\sqrt{2}}, \label{Quard} \end{eqnarray} where, \begin{eqnarray} \hat{a} &=& \frac{\hat{X} + i \hat{P}_x}{\sqrt{2}},\hspace{0.5cm} \hat{a}^\dagger = \frac{\hat{X} -i \hat{P}_x}{\sqrt{2}},\nonumber\\ \hat{b}&=& \frac{\hat{Y}+i \hat{P}_y}{\sqrt{2}}, \hspace{0.5cm} \hat{b}^\dagger = \frac{\hat{Y}- i \hat{P}_y}{\sqrt{2}}, \label{boson_op} \end{eqnarray} and the commutation relations of the bosonic operators are given by $[\hat{a},\hat{a}^{\dagger}]=1=[\hat{b},\hat{b}^{\dagger}]$. The correlations between the quadrature amplitudes $\hat{X}_{\theta}$ and $\hat{Y}_{\phi}$ are defined by the correlation coefficient $C_{\theta,\phi}$ as$^{19,20}$ \begin{eqnarray} C_{\theta,\phi}=\frac{\langle \hat{X}_{\theta} \hat{Y}_{\phi} \rangle}{\sqrt{\langle \hat{X}^2_{\theta} \rangle \langle \hat{Y}^2_{\phi} \rangle}}, \label{Cr_f} \end{eqnarray} where $\langle \hat{X}_{\theta} \rangle=0=\langle \hat{Y}_{\phi} \rangle$. The correlation is perfect for some values of $\theta$ and $\phi$ if $|C_{\theta,\phi}|=1$, and it vanishes for uncorrelated variables. As a consequence of correlations in the measurement outcomes, the quadrature amplitude $\hat{X}_{\theta}$ can be inferred by measuring the corresponding amplitude $\hat{Y}_{\phi}$. In realistic situations the correlations are not perfect because of the interaction with the environment as well as finite detector efficiency.
Hence, the amplitudes $\hat{X}_{\theta 1}$ and $\hat{X}_{\theta 2}$, estimated with the help of $\hat{Y}_{\phi 1}$ and $\hat{Y}_{\phi 2}$, respectively, are subject to inference errors, and are given by$^{19}$ \begin{eqnarray} \hat{X}_{\theta 1}^{e}=g_1 \hat{Y}_{\phi 1}, \hspace{0.5cm} \hat{X}_{\theta 2}^{e}=g_2 \hat{Y}_{\phi 2}, \label{Est} \end{eqnarray} where $g_1$ and $g_2$ are scaling parameters. Now, one may choose $g_1$, $g_2$, $\phi 1$, and $\phi 2$ in such a way that $\hat{X}_{\theta 1}$ and $\hat{X}_{\theta 2}$ are inferred with the highest possible accuracy. The errors given by the deviation of the estimated amplitudes from the true amplitudes $\hat{X}_{\theta 1}$ and $\hat{X}_{\theta 2}$ are captured by $(\hat{X}_{\theta 1}- \hat{X}_{\theta 1}^{e})$ and $(\hat{X}_{\theta 2}- \hat{X}_{\theta 2}^{e})$, respectively. The average errors of the inferences are given by \begin{eqnarray} (\Delta_{\inf} \hat{X}_{\theta 1})^2 &=& \langle (\hat{X}_{\theta 1}- \hat{X}_{\theta 1}^{e})^2\rangle = \langle (\hat{X}_{\theta 1}- g_1 \hat{Y}_{\phi 1})^2\rangle, \nonumber \\ (\Delta_{\inf} \hat{X}_{\theta 2})^2 &=& \langle (\hat{X}_{\theta 2}- \hat{X}_{\theta 2}^{e})^2\rangle = \langle (\hat{X}_{\theta 2}- g_2 \hat{Y}_{\phi 2})^2\rangle. \label{Erro} \end{eqnarray} The values of the scaling parameters $g_1$ and $g_2$ are chosen such that $\frac{\partial (\Delta_{\inf} \hat{X}_{\theta 1})^2}{\partial g_1} =0 = \frac{\partial (\Delta_{\inf} \hat{X}_{\theta 2})^2}{\partial g_2}$, from which it follows that \begin{eqnarray} g_1 = \frac{\langle \hat{X}_{\theta 1} \hat{Y}_{\phi 1} \rangle}{\langle \hat{Y}_{\phi 1}^2 \rangle}, \hspace{0.5cm} g_2 = \frac{\langle \hat{X}_{\theta 2} \hat{Y}_{\phi 2} \rangle}{\langle \hat{Y}_{\phi 2}^2 \rangle}. \label{g's} \end{eqnarray} The values of $\phi 1$ ($\phi 2$) are obtained by maximizing $C_{\theta 1,\phi 1}$ ($C_{\theta 2,\phi 2}$). Now, for conjugate choices of the quadratures (e.g., $\theta_2=\theta_1+\pi/2$), the commutation relations $[\hat{X},\hat{P}_X]=i;~~[\hat{Y},\hat{P}_Y]=i$ require that the product of the variances of the above inferences satisfies $(\Delta_{\inf} \hat{X}_{\theta 1})^2 (\Delta_{\inf} \hat{X}_{\theta 2})^2 \ge 1/4$. Hence, the EPR paradox occurs if the correlations in the field quadratures lead to the condition \begin{eqnarray} EPR \equiv (\Delta_{\inf} \hat{X}_{\theta 1})^2 (\Delta_{\inf} \hat{X}_{\theta 2})^2 < \frac{1}{4}. \label{P_Uncer} \end{eqnarray} Experimental realization of the EPR paradox was first carried out by Ou et al.$^{20}$ using two spatially separated and correlated light modes. Similar demonstrations of the EPR paradox using quadrature amplitudes of other radiation fields were performed later$^{42}$. Subsequent works have shown that the Reid inequality is effective in demonstrating the EPR paradox for systems in which correlations appear at the level of variances, though there exist several pure entangled states which do not display steering through the Reid criterion. Moreover, in systems with correlations manifesting in moments higher than the second, the Reid formulation generally fails to show the occurrence of the EPR paradox, even though Bell nonlocality may be exhibited$^{43,44}$. On the other hand, a modern formulation of quantum steering in terms of an information theoretic task was proposed by Wiseman et al.$^{21,45}$. They considered a bipartite situation in which one of two parties (Alice) prepares a quantum state and sends one of the particles to Bob. The procedure is repeated as many times as required.
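The mechanics of the Reid criterion can be made concrete with a two-mode squeezed vacuum state, for which all the required moments are analytic. The following sketch assumes the standard moments $\langle \hat{X}^2\rangle=\langle \hat{Y}^2\rangle=\cosh(2r)/2$ and $\langle \hat{X}\hat{Y}\rangle=\sinh(2r)/2$ for squeezing parameter $r$ (the momentum quadratures being anticorrelated with the same magnitude), and evaluates the optimally inferred variances of Eqs. (\ref{Erro}) and (\ref{g's}):
\begin{verbatim}
import numpy as np

def reid_product(r):
    """Product of optimally inferred variances, Eq. (P_Uncer), for a
    two-mode squeezed vacuum with squeezing parameter r."""
    vX = np.cosh(2 * r) / 2     # <X^2> on Alice's mode
    vY = np.cosh(2 * r) / 2     # <Y^2> on Bob's mode
    cXY = np.sinh(2 * r) / 2    # <XY>; <P_X P_Y> = -cXY
    g = cXY / vY                # optimal gain, Eq. (g's)
    v_inf = vX - g * cXY        # minimized inference error, Eq. (Erro)
    return v_inf ** 2           # the same error applies to both quadratures

for r in [0.0, 0.5, 1.0]:
    p = reid_product(r)
    print(r, p, "EPR paradox" if p < 0.25 else "no EPR paradox")
\end{verbatim}
For this state the product equals $1/(4\cosh^2 2r)$, so any nonzero squeezing exhibits the paradox according to the condition (\ref{P_Uncer}), while $r=0$ (the vacuum) saturates the bound.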
Bob's particle is assumed to possess a definite state, even if it is unknown to him (a local hidden state). No such assumption is made for Alice, and hence, this formulation of steering is an asymmetric task. Alice and Bob make measurements on their respective particles, and communicate classically. Alice's task is to convince Bob that the state they share is entangled. If the correlations between Bob's measurement results and Alice's declared results can be explained by a local hidden state (LHS) model for Bob, he is not convinced. This is because Alice could have drawn a pure state at random from some ensemble and sent it to Bob, and then chosen her result based on her knowledge of this LHS. Conversely, if the correlations cannot be so explained, then the state must be entangled. Alice will be successful in her task of steering if she can create genuinely different ensembles for Bob by steering Bob's state. Using similar formulations for entanglement as well as Bell nonlocality, a clear distinction between these three types of correlations is possible using joint probability distributions, with entanglement being the weakest, steering the intermediate, and Bell violation the strongest of the three. Bell nonlocal states constitute a strict subset of steerable states which, in turn, are a strict subset of entangled states. For the case of pure entangled states of two qubits the three classes overlap. An experimental demonstration of these differences has been performed for mixed entangled states of two qubits$^{46}$. For the case of continuous variables, Walborn et al.$^{43}$ have proposed another steering condition which is derived using the entropic uncertainty relation (EUR)$^{6}$. The EUR for the position and momentum distributions of a quantum system is given by \begin{eqnarray} h_Q(X)+h_Q(P)\geq \ln \pi e. \label{entropy_uncertainty} \end{eqnarray} Walborn et al.$^{43}$ considered a joint probability distribution of two parties corresponding to a non-steerable state for which there exists a local hidden state (LHS) description, given by \begin{eqnarray} \mathcal{P}(r_A,r_B)=\sum_\lambda \mathcal{P}(\lambda)\mathcal{P}(r_A|\lambda)\mathcal{P}_Q(r_B|\lambda), \label{steer2} \end{eqnarray} where $ r_A $ and $ r_B $ are the outcomes of measurements $ R_A $ and $ R_B $ respectively; $ \lambda $ are hidden variables that specify an ensemble of states; $ \mathcal{P} $ are general probability distributions; and $ \mathcal{P}_Q $ are probability distributions corresponding to the quantum state specified by $ \lambda $. Now, using the rule for conditional probabilities $P(a,b|c) = P(b|c)P(a|b)$, valid when $a$ depends on $c$ only through $b$ (here reflecting that Bob's outcomes are determined by the local hidden state $\lambda$ alone), it follows that the conditional probability $\mathcal{P}(r_B| r_A)$ is given by \begin{eqnarray} \mathcal{P}(r_B|r_A)=\sum_\lambda \mathcal{P}(r_B,\lambda|r_A) \label{steer3} \end{eqnarray} with $P(r_B,\lambda | r_A) = P(\lambda |r_A)P_Q(r_B|\lambda)$. Note that (\ref{steer2}) and (\ref{steer3}) are equivalent formulations of the non-steerability condition.
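For clarity, the step leading from (\ref{steer2}) to (\ref{steer3}) may be spelled out; it is a direct application of Bayes' rule to the LHS decomposition, \begin{eqnarray} \mathcal{P}(r_B|r_A)=\frac{\mathcal{P}(r_A,r_B)}{\mathcal{P}(r_A)}=\sum_\lambda \frac{\mathcal{P}(\lambda)\mathcal{P}(r_A|\lambda)}{\mathcal{P}(r_A)}\,\mathcal{P}_Q(r_B|\lambda) =\sum_\lambda \mathcal{P}(\lambda|r_A)\,\mathcal{P}_Q(r_B|\lambda), \nonumber \end{eqnarray} which is precisely (\ref{steer3}) with $\mathcal{P}(r_B,\lambda|r_A)=\mathcal{P}(\lambda|r_A)\mathcal{P}_Q(r_B|\lambda)$.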
Next, considering the relative entropy (defined for two distributions $p(X)$ and $q(X)$ as $\mathcal{H}(p(X)||q(X))= \sum_xp_x\ln(p_x/q_x)$) between the probability distributions $\mathcal{P}(r_B,\lambda|r_A)$ and $\mathcal{P}(\lambda|r_A)\mathcal{P}(r_B|r_A)$, it follows from the positivity of the relative entropy that \begin{eqnarray} \sum_\lambda \int dr_B \mathcal{P}(r_B,\lambda|r_A) \ln \frac{\mathcal{P}(r_B,\lambda|r_A)}{\mathcal{P}(\lambda|r_A)\mathcal{P}(r_B|r_A)}\geq 0. \end{eqnarray} Using the non-steering condition (\ref{steer3}), the definition of the conditional entropy ($h(X|Y) = -\sum_{x,y} p(x,y)\ln p(x|y)$), and averaging over all measurement outcomes $r_A$, it follows that the conditional entropy $h(R_B|R_A)$ satisfies \begin{eqnarray} h(R_B|R_A) \ge \sum_{\lambda} \mathcal{P}(\lambda) h_Q(R_B|\lambda). \label{cond1} \end{eqnarray} Considering a pair of variables $S_A,S_B$ conjugate to $R_A,R_B$, a similar bound on the conditional entropy may be written as \begin{eqnarray} h(S_B|S_A) \ge \sum_{\lambda} \mathcal{P}(\lambda) h_Q(S_B|\lambda). \label{cond2_Steer} \end{eqnarray} For the LHS model for Bob, note that the entropic uncertainty relation (\ref{entropy_uncertainty}) holds for each state marked by $\lambda$. Averaging over all hidden variables, it follows that \begin{eqnarray} \sum_{\lambda} \mathcal{P}(\lambda)\biggl(h_Q(R_B|\lambda) + h_Q(S_B|\lambda)\biggr) \ge \ln \pi e. \label{cond3} \end{eqnarray} Now, using the bounds (\ref{cond1}) and (\ref{cond2_Steer}) in the relation (\ref{cond3}), one gets the entropic steering inequality \begin{eqnarray} h(R_B|R_A)+h(S_B|S_A)\geq \ln \pi e. \label{entropy_steering} \end{eqnarray} Entropic functions by definition incorporate correlations up to all orders, and the Reid criterion follows as a limiting case of the entropic steering relation$^{43}$. EPR steering for Gaussian as well as non-Gaussian states has been studied in the literature$^{43,24,47}$. Non-Gaussian states may be generated by the process of photon subtraction and addition$^{48}$, and these states generally have a higher degree of entanglement than the Gaussian states. We conclude this section by discussing the example of steering by one such non-Gaussian state, {\it viz.}, the eigenstate of the two-dimensional harmonic oscillator. The energy eigenfunctions of the two-dimensional harmonic oscillator may be expressed in terms of Hermite-Gaussian (HG) functions given by$^{48}$ \begin{eqnarray} u_{nm}(x,y) &&= \sqrt{\frac{2}{\pi}} \left(\frac{1}{2^{n+m} w^2 n!m!}\right)^{1/2} \nonumber\\ && \times H_n \left(\frac{\sqrt{2}x}{w}\right) H_m \left(\frac{\sqrt{2}y}{w}\right) e^{-\frac{(x^2+y^2)}{w^2}}, \nonumber \\ \int |u_{nm}(x,y)|^2 dx dy &&=1 \label{hermite} \end{eqnarray} Entangled states may be constructed from superpositions of HG wave functions as \begin{eqnarray} \Phi_{nm}(\rho,\theta) = \sum_{k=0}^{n+m} u_{n+m-k,k}(x,y)\frac{f_k^{(n,m)}}{k!}\,i^k \nonumber \\ \times \sqrt{\frac{k! (n+m-k)!}{n! m! 2^{n+m}}} \label{legherm} \end{eqnarray} \begin{eqnarray} f_k^{(n,m)} = \frac{d^k}{dt^k} ((1-t)^n(1+t)^m)|_{t=0}, \label{def11} \end{eqnarray} where $\Phi_{nm}(\rho,\theta)$, the Laguerre-Gaussian (LG) functions, are given by$^{48}$ \begin{eqnarray} \Phi_{nm}(\rho,\theta) = e^{i(n-m)\theta}e^{-\rho^2/w^2}(-1)^{\mathrm{min}(n,m)} \left(\frac{\rho \sqrt{2}}{w}\right)^{|n-m|} \label{waveLG} \\ \times \sqrt{\frac{2}{\pi n! m ! w^2}} L^{|n-m|}_{\mathrm{min}(n,m)} \left(\frac{2\rho^2}{w^2}\right) (\mathrm{min}(n,m)) !
\nonumber \end{eqnarray} with $\int |\Phi_{nm}(\rho,\theta)|^2 dx dy =1$, where $w$ is the beam waist, and $L_p^l(x)$ is the generalized Laguerre polynomial. The superposition (\ref{legherm}) is of the form of a Schmidt decomposition, thereby signifying the entanglement of the LG wave functions. In terms of dimensionless quadratures $\{X,~P_X\}$ and $\{Y,~P_Y\}$, given by $x (y) \rightarrow \frac{w}{\sqrt{2}} ~~X (Y)$, and $p_x (p_y) \rightarrow \frac{\sqrt{2} \hbar}{w} ~~P_X (P_Y)$, the canonical commutation relations are $[\hat{X},\hat{P}_X]=i;~~[\hat{Y},\hat{P}_Y]=i$, and the operators $\hat{P}_X$ and $\hat{P}_Y$ are given by $\hat{P}_X = - i \frac{\partial}{\partial X}$ and $\hat{P}_Y = - i \frac{\partial}{\partial Y}$, respectively. The Wigner function corresponding to the LG wave function in terms of the scaled variables is given by$^{24}$ \begin{eqnarray} W_{nm}(X,P_X;Y,P_Y)&=&\frac{(-1)^{n+m}}{\pi^{2}} L_{n}[4(Q_0+Q_2)] \label{WF_LG_n1_n2} \\ && L_{m}[4(Q_0-Q_2)]~\exp(-4Q_0) \nonumber \end{eqnarray} where $Q_0 = \frac{1}{4}\left[ X^2 + Y^2 + P_X^2+P_Y^2\right]$, and $Q_2 = \frac{XP_Y-YP_X}{2}$. It was shown in Ref.$^{24}$ that the Reid criterion is unable to reveal steering for the LG wave function. The entropic steering inequality (\ref{entropy_steering}) may in this case be written in terms of the conjugate pairs of dimensionless quadratures as \begin{eqnarray} h(\mathcal{X}|\mathcal{P_Y})+h(\mathcal{P_X}|\mathcal{Y})\geq \ln \pi e, \label{LG_entropy_steering} \end{eqnarray} where $ X,~Y,~P_X $ and $ P_Y $ are the outcomes of measurements $ \mathcal{X},~\mathcal{Y},~\mathcal{P_X} $ and $ \mathcal{P_Y} $ respectively. For $ n=0 $ and $ m=0 $, the LG wave function factorizes into a product state with the corresponding Wigner function given by \begin{eqnarray} W_{00}(X,P_X;Y,P_Y)=\frac{e^{-X^2-Y^2-P_X^2-P_Y^2}}{\pi^2}. \end{eqnarray} In this case the relevant entropies turn out to be $h(\mathcal{X},\mathcal{P_Y})=h(\mathcal{P_X},\mathcal{Y})=\ln \pi e$ and $h(\mathcal{Y})=h(\mathcal{P_Y})=\frac{1}{2}\ln \pi e$, and hence, the entropic steering inequality becomes saturated$^{24}$, i.e., \begin{eqnarray} h(\mathcal{X}|\mathcal{P_Y})+h(\mathcal{P_X}|\mathcal{Y}) = \ln \pi e. \end{eqnarray} For $ n=1 $ and $ m=0 $, the Wigner function has the form \begin{eqnarray} W_{10}(X,P_X;Y,P_Y) &=& e^{-X^2-Y^2-P_X^2-P_Y^2} \\ && \times \frac{(P_X - Y)^2 +(P_Y+X)^2 -1}{\pi^2} \nonumber \end{eqnarray} and the relevant entropies are given by $h(\mathcal{X},\mathcal{P_Y})=h(\mathcal{P_X},\mathcal{Y}) \approx 2.41509$, and $h(\mathcal{Y})=h(\mathcal{P_Y}) \approx 1.38774$. Hence, the entropic steering relation in this case becomes \begin{eqnarray} h(\mathcal{X}|\mathcal{P_Y})+h(\mathcal{P_X}|\mathcal{Y}) \approx 2.05471 < \ln \pi e. \end{eqnarray} Steering is thus demonstrated here. The violation of the inequality becomes stronger for higher values of the angular momentum index $n$, as shown in Ref.$^{24}$. It may be noted that the Laguerre-Gaussian functions are physically realizable field configurations$^{49}$ with interesting topological$^{50}$ and coherence$^{51}$ properties, and are considered to be potentially useful for several information processing applications$^{52}$.
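The quoted numbers for the $n=1$, $m=0$ mode can be checked numerically. The sketch below uses the joint distribution $\mathcal{P}(X,P_Y)=(X+P_Y)^2 e^{-X^2-P_Y^2}/\pi$, obtained by Gaussian integration of $W_{10}$ over $Y$ and $P_X$ (a short moment computation, stated here without derivation); by the symmetry of $W_{10}$, $\mathcal{P}(P_X,Y)$ has the same form, so the two conditional entropies are equal.
\begin{verbatim}
import numpy as np

# Joint distribution P(X, P_Y) for the n=1, m=0 Laguerre-Gaussian mode
g = np.linspace(-8.0, 8.0, 1601)
d = g[1] - g[0]
X, PY = np.meshgrid(g, g, indexing="ij")
P = (X + PY) ** 2 * np.exp(-X ** 2 - PY ** 2) / np.pi

def entropy(p, vol):
    """Differential entropy -sum p ln p * vol, with 0 ln 0 = 0."""
    m = p > 1e-300
    return -np.sum(p[m] * np.log(p[m])) * vol

h_joint = entropy(P, d * d)      # h(X, P_Y): expect ~ 2.41509
p_PY = P.sum(axis=0) * d         # marginal distribution of P_Y
h_marg = entropy(p_PY, d)        # h(P_Y): expect ~ 1.38774

lhs = 2 * (h_joint - h_marg)     # h(X|P_Y) + h(P_X|Y), by symmetry
print(h_joint, h_marg)
print(lhs, "<", np.log(np.pi * np.e))   # ~2.0547 < ~2.1447: steering
\end{verbatim}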
Steering has been demonstrated using the entropic steering relation for other classes of non-Gaussian states, such as photon-subtracted squeezed vacuum states and N$00$N states, in Ref.$^{24}$, where it has been proposed that entanglement in some such states may be easier to detect through steering than through the manifestation of Bell violation. Note that further generalizations of entropic steering inequalities to the case of symmetric steering$^{53}$, loss-tolerant steering$^{54}$, as well as to the case of steering with quantum memories$^{55}$ have also been proposed recently. \section{Fine-graining and its connection with nonlocality} Uncertainty relations impose restrictions on the knowledge of the properties of a system described by its state. The Heisenberg uncertainty relation prohibits the certain prediction of the measurement outcomes of two non-commuting observables. For example, when the spin orientation of a qubit along the $z$-axis is predicted with certainty, the knowledge of the spin orientation of that qubit along the $x$-axis is completely uncertain, as the probabilities of getting spin up and down are equal. With the motivation of distinguishing the uncertainty inherent in obtaining any combination of outcomes for different measurements, Oppenheim and Wehner$^{15}$ proposed a fine-grained form of the uncertainty relation. Such fine-graining is aimed at capturing the plurality of simultaneous possible outcomes of a set of measurements. Considering bipartite systems, they formulated a fine-grained uncertainty relation for a special class of nonlocal retrieval games for which there exists only one winning answer for one of the two parties. The upper bound of the uncertainty relation, which is also the maximum winning probability of the retrieval game, was shown to specify the degree of nonlocality of the underlying physical theory. In particular, such an upper bound is applicable to discriminate between the degrees of nonlocality pertaining to classical theory, quantum theory, and no-signalling theory with maximum nonlocality for bipartite systems$^{15}$. Similar formulations of fine-graining in the context of nonlocal games have later been used to distinguish the nonlocality of tripartite systems$^{29}$, as well as in the context of biased bipartite and tripartite games$^{30}$. The fine-grained uncertainty relation (or rather, a set of relations) as proposed by Oppenheim and Wehner$^{15}$ is given by \begin{eqnarray} P(\sigma ,\textbf{x}):= \displaystyle\sum_{t=1}^n p(t) p(x^{(t)}|t)_{\sigma} \leq \zeta_{\textbf{x}}(\mathcal{T},\mathcal{D}) \label{FUR1} \end{eqnarray} where $P(\sigma ,\textbf{x})$ is the probability of possible outcomes written as a string $\textbf{x}=\{x^{(1)}, ..., x^{(n)}\}$ corresponding to a set of measurements $\{t\}$ $(\in \mathcal{T})$ chosen with probabilities $\{p(t)\}$ ($\in \mathcal{D}$, the probability distribution of choosing measurements), $p(x^{(t)}|t)_{\sigma}$ is the probability of obtaining outcome $x^{(t)}$ by performing measurement labeled `t' on the state of a general physical system $\sigma$, $n (=|\mathcal{T}|)$ is the total number of different measurement settings, and $\zeta_{\textbf{x}}(\mathcal{T},\mathcal{D})$ is given by \begin{eqnarray} \zeta_{\textbf{x}}(\mathcal{T},\mathcal{D})= \max_{\sigma} \displaystyle\sum_{t=1}^n p(t) p(x^{(t)}|t)_{\sigma} \label{maxfur} \end{eqnarray} where the maximum is taken over all possible states allowed on a particular system.
Uncertainty in the measurement outcomes corresponds to $\zeta_{\textbf{x}}(\mathcal{T},\mathcal{D})<1$. The value of $\zeta_{\textbf{x}}(\mathcal{T},\mathcal{D})$ is bounded by the particular physical theory. The no-signaling theory with maximum nonlocality gives the upper bound $\zeta_{\textbf{x}}(\mathcal{T},\mathcal{D})=1$. For the case of a single qubit in quantum theory, the form of the fine-grained uncertainty relation is given by \begin{eqnarray} P(\mathcal{T},\sigma_A)=\displaystyle\sum_{t=1}^n p(t) p(a=x^{(t)}|t)_{\sigma_A}\leq \zeta_{\textbf{x}}(\mathcal{T},\mathcal{D}) \end{eqnarray} where $p(a=x^{(t)}|t)_{\sigma_A}=Tr[A_t^a.\sigma_A]$ with $A_t^a$ being the measurement operator corresponding to measurement setting `t' giving outcome `a', and $\zeta_{\textbf{x}}(\mathcal{T},\mathcal{D})=\max_{\sigma_A} P(\mathcal{T},\sigma_A)$. Here the maximum is taken over all possible single-qubit states. For spin measurements along the $z$-axis and the $x$-axis chosen with equal probability ($p(t)=1/2$), the value $\zeta_{\textbf{x}}(\mathcal{T},\mathcal{D})=\frac{1}{2} + \frac{1}{2 \sqrt{2}}$ is attained on the eigenstates of $(\sigma_x+\sigma_z)/\sqrt{2}$ and $(\sigma_x-\sigma_z)/\sqrt{2}$. The connection between fine-graining and nonlocality was observed by Oppenheim and Wehner$^{15}$ for the case of bipartite systems. They provided specific examples of nonlocal retrieval games (for which there exists only one winning answer for one of the two parties) for the purpose of discriminating different types of theories by the upper bound of $\zeta$ (the degree of nonlocality). According to these games, Alice and Bob receive questions `s' and `t' respectively, with some probability distribution $p(s,t)$ (for simplicity, $p(s,t)=p(s) p(t)$); their answers `a' and `b' are winning answers determined by a set of rules, i.e., for every setting `s' and the corresponding outcome `a' of Alice, there is a string $\textbf{x}_{s,a}=(x_{s,a}^{(1)}, ..., x_{s,a}^{(n)})$ of length $n=|\mathcal{T}|$ that determines the correct answer $b=x_{s,a}^{t}$ for the question `t' for Bob. In the particular game considered, Alice and Bob share a state $\rho_{AB}$ which is emitted and distributed by a source. Alice and Bob are sufficiently separated spatially that no signal can travel between them during the experiment. Alice performs either of her measurements $A_{0}$ and $A_{1}$ and Bob, either of $B_{0}$ and $B_{1}$, at a time. These measurements, having the outcomes $+1$ and $-1$, can be chosen by Alice and Bob independently of each other. The CHSH inequality$^{23}$ \begin{equation} \frac{1}{4} [E(A_{0}B_{0})+ E(A_{0}B_{1})+E(A_{1}B_{0})-E(A_{1}B_{1})]\leq \frac{1}{2} \end{equation} holds for any local hidden variable model and can be violated when measurements are made on quantum particles prepared in entangled states. Here $E(A_{i}B_{j})$ are the averages of the product of measurement outcomes of Alice and Bob with $i,j=0,1$. In the context of the above game, Alice and Bob receive respective binary questions $s,t \in \{0,1\}$ (i.e., representing two different measurement settings on each side), and they win the game if their respective binary outcomes $a,b\in \{0,1\}$ satisfy the condition $a\oplus b=s.t$. At the start of the game, Alice and Bob discuss their strategy (i.e., the choice of the shared bipartite state and of the measurements). They are not allowed to communicate with each other once the game has started.
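The single-qubit bound quoted above is easy to reproduce. For the outcome string with both outcomes `up', $P=\frac{1}{2}+\frac{n_z+n_x}{4}$ for a state with Bloch vector $\vec{n}$, which is maximized by $\vec{n}=(\hat{x}+\hat{z})/\sqrt{2}$, i.e., the eigenstate of $(\sigma_x+\sigma_z)/\sqrt{2}$. A minimal numerical check:
\begin{verbatim}
import numpy as np

# P = (1/2)[p(up|sigma_z) + p(up|sigma_x)] = 1/2 + (n_z + n_x)/4
# for rho = (I + n.sigma)/2 and equiprobable settings p(t) = 1/2.
theta = np.linspace(0.0, np.pi, 2001)
phi = np.linspace(0.0, 2.0 * np.pi, 2001)
T, F = np.meshgrid(theta, phi, indexing="ij")
nx = np.sin(T) * np.cos(F)
nz = np.cos(T)
P = 0.5 + (nz + nx) / 4.0

print(P.max(), 0.5 + 1.0 / (2.0 * np.sqrt(2.0)))   # both ~ 0.853553
\end{verbatim}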
The probability of winning the game for a physical theory described by a bipartite state ($\sigma_{AB}$) is given by \begin{eqnarray} P^{game}(\mathcal{S},\mathcal{T},\sigma_{AB})=\displaystyle\sum_{s,t} p(s,t) \displaystyle\sum_a p(a,b=x_{s,a}^t|s,t)_{\sigma_{AB}} \label{FUR2} \end{eqnarray} where the form of $p(a,b=x_{s,a}^t|s,t)_{\sigma_{AB}}$ in terms of the measurements on the bipartite state $\sigma_{AB}$ is given by \begin{eqnarray} p(a,b=x_{s,a}^t|s,t)_{\sigma_{AB}}= \displaystyle\sum_b V(a,b|s,t) \langle (A_s^a\otimes B_t^b)\rangle_{\sigma_{AB}} \label{prob2} \end{eqnarray} where $A_s^a$ ($=\frac{\mathcal{I}+(-1)^a A_s}{2}$) is a measurement of the observable $A_s$ corresponding to setting `s' giving outcome `a' at Alice's side; $B_t^b$ ($=\frac{\mathcal{I}+(-1)^b B_t}{2}$) is a measurement of the observable $B_t$ corresponding to setting `t' giving outcome `b' at Bob's side, and $V(a,b|s,t)$ is the winning condition given by \begin{eqnarray} V(a,b|s,t)&=& 1 \text{\phantom{xxxxx} iff $a\oplus b = s.t$} \nonumber \\ &=& 0 \text{\phantom{xxxxx} otherwise} \label{cond2} \end{eqnarray} Using Eqs. (\ref{FUR2}), (\ref{prob2}), (\ref{cond2}) and taking $p(s,t)=p(s)p(t)=1/4$, the expression of $P^{game}(\mathcal{S},\mathcal{T},\sigma_{AB})$ for the bipartite state $\sigma_{AB}$ is obtained to be \begin{eqnarray} P^{game}(\mathcal{S},\mathcal{T},\sigma_{AB})=\frac{1}{2}(1+\frac{\langle\mathcal{B}_{CHSH}\rangle_{\sigma_{AB}}}{4}) \end{eqnarray} where \begin{eqnarray} \mathcal{B}_{CHSH}=A_0\otimes B_0+A_0\otimes B_1+A_1\otimes B_0-A_1\otimes B_1 \end{eqnarray} corresponds to the Bell-CHSH operator$^{22,23}$. To characterize the allowed distributions under a theory, we need to know the maximum winning probability, maximized over all possible strategies for Alice and Bob. The maximum winning probability is given by \begin{eqnarray} P^{game}_{\max} = \max_{\mathcal{S},\mathcal{T},\sigma_{AB}} P^{game}(\mathcal{S},\mathcal{T},\sigma_{AB}) \label{maxgame} \end{eqnarray} The value of $P^{game}_{\max}(\mathcal{S},\mathcal{T},\sigma_{AB})$ allowed by classical physics is $\frac{3}{4}$ (as classically, the Bell-CHSH inequality is bounded by $2$), by quantum mechanics is $(\frac{1}{2}+\frac{1}{2 \sqrt{2}})$ (due to the maximum violation of the Bell inequality, $\langle \mathcal{B}_{CHSH} \rangle =2\sqrt{2}$), and by no-signaling theories with maximum Bell violation ($\langle \mathcal{B}_{CHSH}\rangle=4$, which occurs for the PR-box$^{56}$) is $1$. The connection of Eq.(\ref{cond2}) with the no-signalling constraint for the general case of a bipartite system was elaborated by Barrett et al.$^{57}$. The above description refers to the scenario where the two parties have no bias towards choosing a particular measurement. Nonlocality in the context of biased games has been discussed in Ref.$^{30}$ using the fine-grained uncertainty relation. In the biased game considered$^{58}$, Alice intends to choose $A_{0}$ with probability $p$ ($0\leqslant p\leqslant 1$) and $A_{1}$ with probability $(1-p)$. Bob intends to choose $B_{0}$ and $B_{1}$ with probabilities $q$ ($0\leqslant q\leqslant 1$) and $(1-q)$, respectively. The measurements and their outcomes are coded into binary variables pertaining to an input-output process. Alice and Bob have binary input variables $s$ and $t$, respectively, and output variables $a$ and $b$, respectively. Input $s$ takes the values $0$ and $1$ when Alice measures $A_{0}$ and $A_{1}$, respectively.
Output $a$ takes the values $0$ and $1$ when Alice gets the measurement outcomes $+1$ and $-1$, respectively. The identifications are similar for Bob's variables $t$ and $b$. Now, the rule of the game is that Alice and Bob win (as a team) if their inputs and outputs satisfy \begin{equation} a\oplus b = s.t \end{equation} where $\oplus$ denotes addition modulo $2$. Input questions $s$ and $t$ have the probability distribution $p(s,t)$ (for simplicity, $p(s,t)= p(s)p(t)$ where $p(s=0)=p$, $p(s=1)= (1-p)$, $p(t=0)= q$ and $p(t=1)= (1-q)$). The fine-grained uncertainty relation is now invoked. The expression of $P^{game}$ is given by \begin{equation} P^{game}(\mathcal{S},\mathcal{T},\rho_{AB})= \frac{1}{2}[1+ \langle CHSH(p,q)\rangle_{\rho_{AB}}] \end{equation} with $CHSH(p,q)= [pq A_{0}\otimes B_{0}+ p(1-q)A_{0}\otimes B_{1}+ (1-p)q A_{1}\otimes B_{0}-(1-p)(1-q)A_{1}\otimes B_{1}]$ being the form of the CHSH-function after introducing bias. The maximum probability $P^{game}$ of winning the biased game was obtained$^{30}$ by maximizing the function $\langle CHSH(p,q)\rangle$ for different theories. Such maximization was earlier performed in the literature for the unbiased scenario$^{59}$ and subsequently for the biased case as well$^{58}$, in the latter case by considering two halves of the ranges of the parameters $p$ and $q$. First, for the case of $p,q\geq 1/2$, the classical maximum is obtained using an extremal strategy where the values of all the observables are $+1$, giving the maximum value of the biased CHSH-function to be $1-2(1-p)(1-q)$. With this classical maximum, the winning probability is given by$^{30}$ \begin{equation} P^{game}(\mathcal{S},\mathcal{T},\rho_{AB})|^{classical}_{maximum}= 1-(1-p)(1-q) \end{equation} This reduces to the value $\frac{3}{4}$ for the unbiased case when $p=q=\frac{1}{2}$. For the quantum strategy, the parameter space is divided in two regions of [$p,q$], with the first region corresponding to $1\geq p\geq (2q)^{-1}\geq \frac{1}{2}$. Here $\langle CHSH(p,q)\rangle \leq 1-2(1-p)(1-q)$ leads to \begin{equation} P^{game}(\mathcal{S},\mathcal{T},\rho_{AB})|^{region}_1= 1-(1-p)(1-q)~~ \end{equation} showing that the upper bound is the same as that achieved by classical theory. Thus, quantum correlations (entanglement) offer no advantage over classical correlations in performing the specified task in this region. However, in the other region $1\geq (2q)^{-1}> p\geq \frac{1}{2}$, one gets the value $\langle CHSH(p,q)\rangle \leq \sqrt{2}\sqrt{q^{2}+(1-q)^{2}}\sqrt{p^{2}+(1-p)^{2}}$, which is greater than the classical bound. So, the biasing parameters in this region enable discrimination between classical and quantum correlations. The upper bound of the fine-grained uncertainty relation is in this case given by \begin{eqnarray} P^{game}(\mathcal{S},\mathcal{T},\rho_{AB})&|&^{quantum}_{maximum}\nonumber\\ =\frac{1}{2}[1+\sqrt{2}\sqrt{q^{2}+(1-q)^{2}}&&\sqrt{p^{2}+(1-p)^{2}}] \end{eqnarray} The extent of nonlocality that can be captured by the fine-grained uncertainty relation is thus regulated by the bias parameters. The fine-grained uncertainty relation has been applied to study the nonlocality of tripartite systems as well$^{29}$. In this case a nonlocal retrieval game similar to the CHSH-game for bipartite systems is considered, as follows.
Three parties, Alice, Bob and Charlie, receive respective binary questions `s', `t', and `u' $\in\{0,1\}$ (corresponding to their two different measurement settings at each side), and they win the game if their respective binary outcomes `a', `b', and `c' $\in\{0,1\}$ satisfy certain rules. Three kinds of no-signaling boxes, known as full-correlation boxes, have been considered, for which all one-party and two-party correlations in the system vanish$^{60}$. The game is won if the answers satisfy the set of rules, either \begin{eqnarray} a\oplus b \oplus c=s.t \oplus t.u \oplus u.s \label{box1} \end{eqnarray} or \begin{eqnarray} a\oplus b \oplus c=s.t \oplus s.u \label{box2} \end{eqnarray} or else \begin{eqnarray} a\oplus b \oplus c=s.t.u \label{box3} \end{eqnarray} All the above boxes violate the Mermin inequality$^{61}$, whereas the Svetlichny inequality$^{62}$ is violated only by the box given by Eq. (\ref{box1}) (known as the Svetlichny box). The winning probability of the game under a physical theory described by a shared tripartite state $\sigma_{ABC}$ (among Alice, Bob and Charlie) is given by \begin{eqnarray} && P^{game}(\mathcal{S},\mathcal{T},\mathcal{U},\sigma_{ABC})\nonumber \\ &&=\displaystyle\sum_{s,t,u} p(s,t,u) \displaystyle\sum_{a,b} p(a,b,c=x_{s,t,a,b}^{(u)}|s,t,u)_{\sigma_{ABC}} \label{FUR3} \end{eqnarray} where $p(s,t,u)$ is the probability of choosing the measurement settings `s' by Alice, `t' by Bob and `u' by Charlie, and $p(a,b,c|s,t,u)_{\sigma_{ABC}}$ is the joint probability of getting outcomes `a', `b' and `c' for the corresponding settings `s', `t' and `u', given by \begin{eqnarray} && p(a,b,c=x_{s,t,a,b}^{(u)}|s,t,u)_{\sigma_{ABC}} \nonumber \\ && =\displaystyle\sum_{c} V(a,b,c|s,t,u) \langle A_s^a\otimes B_t^b\otimes C_u^c\rangle_{\sigma_{ABC}} \label{prob3} \end{eqnarray} where $A_s^a$, $B_t^b$ and $C_u^c$ are the measurements corresponding to setting `s' and outcome `a' at Alice's side, setting `t' and outcome `b' at Bob's side, and setting `u' and outcome `c' at Charlie's side, respectively; and $V(a,b,c|s,t,u)$ (the winning condition) is nonzero ($=1$) only when the outcomes of Alice, Bob and Charlie are correlated by one of Eqs. (\ref{box1}), (\ref{box2}) or (\ref{box3}), and is zero otherwise. The maximum winning probability over all possible strategies (i.e., the choice of the shared tripartite state and the measurement settings of the three parties) for any theory is given by \begin{eqnarray} P^{game}_{\max} = \max_{\mathcal{S},\mathcal{T},\mathcal{U},\sigma_{ABC}} P^{game}(\mathcal{S},\mathcal{T},\mathcal{U},\sigma_{ABC}) \end{eqnarray} which characterizes the allowed probability distributions under that theory. The cases corresponding to classical, quantum and no-signalling theories with super-quantum correlations for the above different full-correlation boxes (rules of the nonlocal game) have been studied in Ref.$^{29}$. For the case of the winning condition given by Eq.
(\ref{box1}), the expression of $P^{game}(\mathcal{S},\mathcal{T},\mathcal{U},\sigma_{ABC})$ for the shared tripartite state $\sigma_{ABC}$ is given by \begin{eqnarray} P^{game}(\mathcal{S},\mathcal{T},\mathcal{U},\sigma_{ABC})=\frac{1}{2} [1+\frac{\langle \textbf{S}_1\rangle_{\sigma_{ABC}}}{8}] \end{eqnarray} where \begin{eqnarray} \textbf{S}_1=&&A_0\otimes B_0\otimes C_0+A_0\otimes B_0\otimes C_1+A_0\otimes B_1\otimes C_0 \nonumber \\ && +A_1\otimes B_0\otimes C_0-A_0\otimes B_1\otimes C_1-A_1\otimes B_0\otimes C_1\nonumber \\ &&-A_1\otimes B_1\otimes C_0-A_1\otimes B_1\otimes C_1 \end{eqnarray} The value of $P^{game}_{\max}$ allowed in classical physics is $3/4$, which follows from the Svetlichny inequality$^{62}$ \begin{eqnarray} \langle \textbf{S}_1\rangle_{\sigma_{ABC}} \leq 4 \end{eqnarray} For the case of quantum physics, the maximum violation of the Svetlichny inequality is $4\sqrt{2}$, which occurs for the GHZ-state$^{63}$. The value of $P^{game}_{\max}$ allowed in quantum physics is $(\frac{1}{2}+\frac{1}{2 \sqrt{2}})$. For the case of the no-signalling theory, the algebraic maximum of the Svetlichny inequality is $8$, and the value of $P^{game}_{\max}$ in this case is $1$, corresponding to a correlation with maximum nonlocality. It was found in Ref.$^{29}$ that neither of the other two full-correlation Mermin boxes (\ref{box2}) and (\ref{box3}) is able to distinguish classical theory from quantum theory in terms of their degree of nonlocality. The fine-grained uncertainty relation determines the degree of nonlocality as manifested by the Svetlichny inequality for tripartite systems corresponding to the winning condition given by (\ref{box1}), in the same way as it determines the nonlocality of bipartite systems manifested by the Bell-CHSH inequality. One is able to differentiate the various classes of theories (i.e., classical physics, quantum physics and no-signaling theories with maximum nonlocality or superquantum correlations) by the value of $P^{game}_{\max}$ for tripartite systems. A biased tripartite system has also been explored$^{30}$. However, it was observed using a bipartition model$^{64}$ that there is a zone specified by the biasing parameters where even the Svetlichny inequality cannot discriminate between the various physical theories on the basis of their degree of nonlocality. \section{Quantum memory} In quantum information theory, an uncertainty relation in terms of entropy is regarded as more useful than one in terms of standard deviations: the uncertainty relating to the outcomes of observables is reformulated in terms of the Shannon entropy instead of the standard deviation. Entropic uncertainty relations for two observables in the context of discrete variables were introduced by Deutsch$^{7}$. An improved version was conjectured by Kraus$^{8}$, given by \begin{eqnarray} \mathcal{H}(R)+\mathcal{H}(S) \geq \log_2 \frac{1}{c} \label{EUR1} \end{eqnarray} and later proved by Maassen and Uffink$^{9}$. Here $\mathcal{H}(i)$ denotes the Shannon entropy of the probability distribution of the measurement outcomes of observable $i$ ($i\in\{R,S\}$) and $\frac{1}{c}$ quantifies the complementarity of the observables. For non-degenerate observables, $c= \max_{i,j}c_{i,j} = \max_{i,j} |\langle a_i|b_j\rangle|^2$, where $|a_i\rangle$ and $|b_j\rangle$ are the eigenvectors of $R$ and $S$, respectively.
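As a quick illustration, for a qubit with $R=\sigma_z$ and $S=\sigma_x$ one has $|\langle a_i|b_j\rangle|^2=1/2$ for all $i,j$, so the bound is $\log_2(1/c)=1$ bit. The sketch below evaluates the left-hand side of (\ref{EUR1}) for a $\sigma_z$ eigenstate, which saturates the bound, and for an eigenstate of $(\sigma_x+\sigma_z)/\sqrt{2}$:
\begin{verbatim}
import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def hR_plus_hS(psi):
    """H(R) + H(S) for R = sigma_z, S = sigma_x on a pure qubit state."""
    psi = psi / np.linalg.norm(psi)
    p_z = np.abs(psi) ** 2                    # sigma_z outcome probabilities
    plus = np.array([1, 1]) / np.sqrt(2)      # sigma_x eigenvectors
    minus = np.array([1, -1]) / np.sqrt(2)
    p_x = [abs(np.vdot(plus, psi)) ** 2, abs(np.vdot(minus, psi)) ** 2]
    return shannon(p_z) + shannon(p_x)

print(hR_plus_hS(np.array([1.0, 0.0])))                           # = 1
print(hR_plus_hS(np.array([np.cos(np.pi/8), np.sin(np.pi/8)])))   # ~ 1.20
\end{verbatim}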
Using entanglement between the state of the observed system and another quantum system (the memory), Berta et al.$^{10}$ have shown that the lower bound of entropic uncertainty (given by Eq.(\ref{EUR1})) can be improved in the presence of quantum correlations. The entropic uncertainty relation in the presence of quantum memory is given by$^{10}$ \begin{eqnarray} \mathcal{S}(R|B)+\mathcal{S}(S|B) \geq \log_2 \frac{1}{c} + \mathcal{S}(A|B) \label{EUR-QM} \end{eqnarray} where $\mathcal{S}(R|B)$ ($\mathcal{S}(S|B)$) is the conditional von Neumann entropy of the state given by $\sum_{j} (|\psi_j\rangle\langle\psi_j|\otimes I)\rho_{AB}(|\psi_j\rangle\langle\psi_j|\otimes I)$, with $|\psi_j\rangle$ being the eigenstates of the observable $R$ ($S$), and $\mathcal{S}(R|B)$ ($\mathcal{S}(S|B)$) quantifies the uncertainty corresponding to the measurement of $R$ ($S$) on the system ``A" given the information stored in the system ``B" (i.e., the quantum memory). $\mathcal{S}(A|B)$ quantifies the amount of entanglement between the quantum system possessed by Alice and the quantum memory possessed by Bob. For example, the sum of uncertainties of two measurement outcomes ($\mathcal{H}(R)+\mathcal{H}(S)$) for measurement of two observables $(R,S)$ on the quantum system (``A", possessed by Alice) can be reduced to $0$ (i.e., there is no uncertainty) if that system is maximally entangled with another system, called the quantum memory (``B", possessed by Bob). Here, Bob is able to reduce his uncertainty about Alice's measurement outcome with the help of communication from Alice regarding her choice of measurement, but not its outcome. Recently, Coles and Piani$^{14}$ have made the lower bound of entropic uncertainty in the presence of quantum memory tighter. Their modified form of the entropic uncertainty relation is given by \begin{eqnarray} \mathcal{S}(R_A|B)+\mathcal{S}(S_A|B) \geq c^{\prime}(\rho_{A}) + \mathcal{S}(A|B) \label{EUR-QM_CP} \end{eqnarray} where $c^{\prime}(\rho_{A}) = \max\{c^{\prime}(\rho_A,R_A,S_A), c^{\prime}(\rho_A,S_A,R_A) \}$. $c^{\prime}(\rho_A,R_A,S_A)$ and $c^{\prime}(\rho_A,S_A,R_A)$ are defined by \begin{eqnarray} c^{\prime}(\rho_A,R_A,S_A) = \displaystyle\sum_i p^r_i \log_2 \frac{1}{\max_j c_{ij}} \nonumber \\ c^{\prime}(\rho_A,S_A,R_A) = \displaystyle\sum_j p^s_j \log_2 \frac{1}{\max_i c_{ij}}, \label{Complemetary_RS} \end{eqnarray} where $p^r_i = \langle r_i |\rho_{A}|r_i \rangle$ with $\sum_i p^r_i = 1$ and $p^s_j =\langle s_j |\rho_{A}|s_j \rangle $ with $\sum_j p^s_j=1$, $|r_i\rangle$ and $|s_j\rangle$ being the eigenvectors of $R_A$ and $S_A$, respectively. Here, the uncertainty for the measurement of the observable $R_A$ ($S_A$) on Alice's system by accessing the information stored in the quantum memory with Bob is measured by $\mathcal{S}(R_A|B)$ ($\mathcal{S}(S_A|B)$), which is the conditional von Neumann entropy of the state given by \begin{eqnarray} \rho_{R_A(S_A)B}&=&\sum_{j} (|\psi_j\rangle_{R_A(S_A)}\langle\psi_j|\otimes I)\rho_{AB}(|\psi_j\rangle_{R_A(S_A)}\langle\psi_j|\otimes I)\nonumber \\ &=&\sum_j p_j^{R_A(S_A)} \Pi_j^{R_A(S_A)}\otimes \rho_{B|j}^{R_A(S_A)}, \label{QState} \end{eqnarray} where $\Pi_j^{R_A(S_A)}$'s are the orthogonal projectors on the eigenstates $|\psi_j\rangle_{R_A(S_A)}$ of the observable $R_A (S_A)$, $p_j^{R_A(S_A)}=Tr[(|\psi_j\rangle_{R_A(S_A)}\langle\psi_j|\otimes I)\rho_{AB}(|\psi_j\rangle_{R_A(S_A)}\langle\psi_j|\otimes I)]$, $\rho_{B|j}^{R_A(S_A)}=Tr_A[(|\psi_j\rangle_{R_A(S_A)}\langle\psi_j|\otimes I)\rho_{AB}(|\psi_j\rangle_{R_A(S_A)}\langle\psi_j|\otimes I)]/p_j^{R_A(S_A)}$ and $\rho_{AB}$ is the state of the joint system `$A$' and `$B$'.
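Both sides of (\ref{EUR-QM}) can be evaluated explicitly for a maximally entangled two-qubit state with $R=\sigma_z$ and $S=\sigma_x$ (so that $\log_2(1/c)=1$): the left-hand side vanishes while $\mathcal{S}(A|B)=-1$, and the bound is saturated. A self-contained numerical check:
\begin{verbatim}
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

def reduced_B(rho):
    """Trace out the first qubit of a two-qubit density matrix."""
    return np.einsum("ijik->jk", rho.reshape(2, 2, 2, 2))

def cond_entropy(rho):
    """S(A|B) = S(AB) - S(B)."""
    return vn_entropy(rho) - vn_entropy(reduced_B(rho))

def measure_A(rho, basis):
    """sum_j (|psi_j><psi_j| (x) I) rho (|psi_j><psi_j| (x) I)."""
    out = np.zeros_like(rho)
    for v in basis:
        P = np.kron(np.outer(v, v.conj()), np.eye(2))
        out = out + P @ rho @ P
    return out

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt2
rho = np.outer(phi, phi.conj())

z = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

lhs = cond_entropy(measure_A(rho, z)) + cond_entropy(measure_A(rho, x))
rhs = 1 + cond_entropy(rho)   # log2(1/c) = 1 for sigma_z and sigma_x
print(lhs, rhs)               # 0.0 0.0: the bound is saturated
\end{verbatim}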
In another work, Pati et al.$^{11}$ have extended the concept of memory to include more general quantum correlations beyond entanglement. This leads to the improvement of the lower bound given by \begin{eqnarray} \mathcal{S}(R_A|B)+\mathcal{S}(S_A|B) \geq && c^{\prime}(\rho_{A}) + \mathcal{S}(A|B) \label{EUR-QM_P} \\ && + \max\{0,D_A(\rho_{AB})-C_A^M(\rho_{AB})\}, \nonumber \end{eqnarray} where the quantum discord $D_A(\rho_{AB})$ is given by$^{65}$ \begin{eqnarray} D_A(\rho_{AB}) = \mathcal{I}(\rho_{AB})- C_A^M(\rho_{AB}), \label{QDis} \end{eqnarray} with $\mathcal{I}(\rho_{AB})$ ($=\mathcal{S}(\rho_A)+\mathcal{S}(\rho_B)-\mathcal{S}(\rho_{AB})$) being the mutual information of the state $\rho_{AB}$, which contains the total correlation present in the state $\rho_{AB}$ shared between the system $A$ and the system $B$, and the classical information $C_A^M(\rho_{AB})$ for the shared state $\rho_{AB}$ (when Alice measures on her system) is given by \begin{eqnarray} C_A^M(\rho_{AB}) = \max_{\Pi^{R_A}}[\mathcal{S}(\rho_B) - \displaystyle\sum_{j=0}^1 p_j^{R_A} \mathcal{S}(\rho_{B|j}^{R_A}) ] \label{Cinf_M} \end{eqnarray} Experiments have demonstrated the effectiveness of reducing quantum uncertainty using quantum memory, for the case of pure$^{66}$ as well as mixed$^{67}$ entangled states. For the purpose of experimental verification of the inequality (\ref{EUR-QM}), the entropic uncertainty is recast in the form of the sum of the Shannon entropies $\mathcal{H}(p^R_d) + \mathcal{H}(p^S_d)$ when Alice and Bob measure the same observables $R$ ($S$) on their respective systems and get different outcomes whose probabilities are denoted by $p^R_d$ ($p^S_d$), with $\mathcal{H}(p^{R(S)}_d) = -p^{R(S)}_d\log_2 p^{R(S)}_d -(1-p^{R(S)}_d)\log_2(1-p^{R(S)}_d)$. Making use of Fano's inequality$^{68}$, it follows that $\mathcal{H}(p^R_d) + \mathcal{H}(p^S_d) \ge \mathcal{S}(R|B)+\mathcal{S}(S|B)$, which using Eq.(\ref{EUR-QM}) gives$^{66}$ \begin{eqnarray} \mathcal{H}(p^R_d) + \mathcal{H}(p^S_d) \ge \log_2 \frac{1}{c} + \mathcal{S}(A|B) \label{EUR-QM2} \end{eqnarray} The right hand side of the inequality (\ref{EUR-QM2}) can be determined from the knowledge of the state and the measurement settings. The entropic uncertainty relation has been used for verifying the security of key distribution protocols$^{69}$. Devetak and Winter$^{70}$ showed that the amount of key $K$ that Alice and Bob are able to extract per state should always exceed the quantity $\mathcal{S}(R|E) - \mathcal{S}(R|B)$, where the quantum state $\rho_{ABE}$ is shared between Alice, Bob and the eavesdropper Eve ($E$). Extending this idea by incorporating the effect of shared quantum correlations between Alice and Bob, Berta et al.$^{10}$ reformulated their relation (\ref{EUR-QM}) in the form $\mathcal{S}(R|E) + \mathcal{S}(S|B) \ge \log_2 \frac{1}{c}$, enabling them to derive a new lower bound on the key extraction rate, given by $K \ge \log_2 \frac{1}{c} - \mathcal{S}(R|B)- \mathcal{S}(S|B)$. It has been recently realized that a further improvement in the lower bound of entropic uncertainty is possible using fine-graining. A new form of the uncertainty relation in the presence of quantum memory was derived$^{16}$, in which the lower bound of entropic uncertainty corresponding to the measurement of two observables is determined by fine-graining of the possible measurement outcomes.
The fine-grained uncertainty relation$^{15}$, as discussed in the previous section, is here considered in the context of a quantum game played by Alice and Bob, who share a two-qubit state $\rho_{AB}$ prepared by Alice. Bob's qubit, which he receives from Alice, represents the quantum memory. Bob's uncertainty about the outcome of Alice's measurement of one of two incompatible observables (say, $R$ and $S$) is reduced with the help of fine-graining, when Alice helps Bob by communicating her measurement choice of a suitable spin observable on her system. In this game Alice and Bob are driven by the requirement of minimizing the value of the quantity $\mathcal{H}(p^R_d) + \mathcal{H}(p^S_d)$, which forms the left hand side of the entropic uncertainty relation (\ref{EUR-QM2}). The minimization is over all incompatible measurement settings such that $R \neq S$, i.e., \begin{eqnarray} \mathcal{H}(p^R_d) + \mathcal{H}(p^S_d) \ge \min_{R, S\neq R}[\mathcal{H}(p^R_d) + \mathcal{H}(p^S_d)] \label{step1} \end{eqnarray} To find the minimum value, the choice of the variable $R$ was fixed without loss of generality to be $\sigma_z$ (spin measurement along the $z$-direction), and the minimization was then performed over the other variable $S$. The uncertainty defined by the entropy $\mathcal{H}(p^S_d)$ is minimum when the certainty of the required outcome is maximum, corresponding to an infimum value for the probability $p^S_d$. In order to obtain the infimum value of $p^S_d$, the fine-grained uncertainty relation was used in a form relevant to the game considered, where the infimum value of the winning probability (corresponding to minimum uncertainty) is given by \begin{eqnarray} p_{\inf}^{S}= \inf_{S(\ne \sigma_z)}\displaystyle\sum_{a,b} V(a,b) Tr[(A_S^a\otimes B_S^b).\rho_{AB}], \label{inf-S} \end{eqnarray} with the winning condition $V(a,b)$ given by \begin{eqnarray} V(a,b) &=& 1 \textit{~~~~~ iff $a\oplus b=1$} \nonumber \\ &=& 0 \textit{~~~~~ otherwise}. \end{eqnarray} Here $A_S^a$ is a projector for the observable $S$ with outcome `$a$', given by $S^{\alpha}=\frac{I+(-1)^{\alpha} \vec{n}_{S}.\vec{\sigma}}{2}$ (and similarly for $B_S^b$), where $\vec{n}_{S}\equiv \{\sin(\theta_{S}) \cos(\phi_{S}), \sin(\theta_{S}) \sin(\phi_{S}), \cos(\theta_{S}) \}$; $\vec{\sigma}\equiv \{\sigma_x,\sigma_y,\sigma_z\}$ are the Pauli matrices; and $\alpha$ takes the value either $0$ (for the spin up projector) or $1$ (for the spin down projector). The above winning condition proposed in Ref.$^{16}$ is different from the winning conditions used in Refs.$^{15,29,30}$ for the purpose of capturing the nonlocality of quantum systems. Here the fine-grained uncertainty relation is adapted so as to make it directly applicable to the experimental situation of quantum memory$^{66,67}$. The form of the entropic uncertainty relation obtained by fine-graining is given by$^{16}$ \begin{eqnarray} \mathcal{H}(p^R_d)+\mathcal{H}(p^S_d) \geq \mathcal{H}(p^{\sigma_z}_d) + \mathcal{H}(p^S_{inf}) \label{FURQM1} \end{eqnarray} The value of $p^S_{inf}$ has been calculated for various quantum states such as the Werner state, the Bell-diagonal state and a state with maximally mixed marginals$^{16}$. The above uncertainty relation (\ref{FURQM1}) is able to account for the experimental results obtained for the case of maximally entangled states$^{66}$ and mixed Bell-diagonal states$^{67}$.
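For instance, for the Werner state $\rho_W=p\,|\psi^-\rangle\langle\psi^-|+\frac{1-p}{4}I$, the probability of anticorrelated outcomes ($a\oplus b=1$) when both parties measure spin along the same direction $\vec{n}_S$ is direction-independent and equals $(1+p)/2$, so that $p^S_{\inf}=(1+p)/2$. The following sketch verifies this direction independence numerically:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(n, alpha):
    """Spin projector (I + (-1)^alpha n.sigma)/2 along direction n."""
    return 0.5 * (np.eye(2) + (-1)**alpha * (n[0]*sx + n[1]*sy + n[2]*sz))

psi_m = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # singlet
def werner(p):
    return p * np.outer(psi_m, psi_m.conj()) + (1 - p) * np.eye(4) / 4

def p_anticorr(rho, n):
    """P(a xor b = 1) with both sides measuring spin along n."""
    return sum(np.real(np.trace(np.kron(proj(n, a), proj(n, 1 - a)) @ rho))
               for a in (0, 1))

p = 0.6
rho = werner(p)
vals = [p_anticorr(rho, (np.sin(t)*np.cos(f), np.sin(t)*np.sin(f), np.cos(t)))
        for t in np.linspace(0.05, np.pi - 0.05, 30)
        for f in np.linspace(0.0, 2.0 * np.pi, 30)]
print(min(vals), max(vals), (1 + p) / 2)   # all ~ 0.8
\end{verbatim}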
Moreover, the limit set by (\ref{FURQM1}) prohibits the attainment of the lower bound of entropic uncertainty$^{10}$ as defined by the right hand side of equation (\ref{EUR-QM}) for the class of two-qubit states with maximally mixed marginals. The uncertainty relation (\ref{FURQM1}) is independent of the choice of measurement settings, as it optimizes the reduction of uncertainty quantified by the conditional Shannon entropy over all possible observables. Given a bipartite state possessing quantum correlations, inequality (\ref{FURQM1}) provides the fundamental limit to which uncertainty in the measurement outcomes of any two incompatible variables can be reduced. Since the uncertainty principle in its entropic form could be used for verifying the security of key distribution protocols, there exist ramifications of Eq.(\ref{FURQM1}) on the key extraction rate in quantum key generation. It is possible to obtain a tighter lower bound on the key rate$^{16}$, given by $K \ge \log_2 \frac{1}{c} - \mathcal{H}(p^{\sigma_z}_d) - \mathcal{H}(p^S_{inf})$, when the two parties involved in the protocol retain data whenever they make the same choice of measurement on their respective sides. The relation (\ref{FURQM1}) is the optimized lower bound of entropic uncertainty, which represents the ultimate limit to which the uncertainty of outcomes of two non-commuting observables can be reduced by performing any set of measurements in the presence of quantum memory. \section{Conclusions} In this article we have discussed various applications of different versions of uncertainty relations. Much of the review presented here deals with various formulations of entropic uncertainty relations$^{6,8,9,10,11,14,16}$ in different situations for the case of both discrete and continuous variables. However, we have also briefly discussed the Heisenberg uncertainty relation$^{1}$ and its Robertson-Schrodinger variant$^{3,4}$ in the context of two specific applications, namely, the demonstration of EPR-steering$^{18,19}$, and the determination of the purity of states$^{28}$, respectively. We conclude with a section-wise summary of the main results discussed in this article, and a few possible future directions of study. We have discussed in Section II how the Robertson-Schrodinger uncertainty relation may be connected to the property of purity and mixedness of single and bipartite qubit systems$^{28}$. The uncertainty corresponding to the measurement of suitable observables vanishes for pure states, and is positive definite for mixed states. Using this feature, a scheme was proposed to distinguish pure and mixed states belonging to the classes of all single-qutrit states up to three parameters, as well as several classes of two-qutrit states, when prior knowledge of the basis is available$^{28}$. A possible implementation of the proposed witnesses for detecting mixedness here could be through techniques involving the measurement of two-photon polarization-entangled modes for qutrits$^{71}$. Since the class of all pure states is not convex, the witnesses proposed for detecting mixedness do not arise from the separability criterion that holds for the widely studied entanglement witnesses$^{37}$, as well as the recently proposed teleportation witnesses$^{38}$, and witnesses for absolutely separable states$^{39}$. However, a similar prescription of distinguishing categories of quantum states based on the measurement of expectation values of Hermitian operators is followed.
In Section III a discussion of EPR steering$^{18,19,41}$ is presented in the context of continuous variable entangled states. Though steerable states form a strict subset of entangled states$^{21,45}$, several entangled pure states fail to reveal steering through the Reid criterion$^{19}$ for wide ranges of parameters. Using the entropic uncertainty relation for continuous variables$^{6}$, an entropic steering condition can be derived$^{43}$. Examples of various non-Gaussian states for which entropic steering can be demonstrated, such as the two-dimensional harmonic oscillator states, the photon-subtracted squeezed vacuum state, and the N00N state, have been studied$^{24}$. Steering with such states may be demonstrated by computing the relevant conditional entropies using the Wigner function, whose non-Gaussian nature plays an important role. These examples reiterate the fact that though Bell violation guarantees steerability, the two types of quantum correlations are distinct from each other. Moreover, the presence of quantum correlations in certain classes of states may be more easily detected through the violation of the entropic steering inequality compared to the violation of the Bell inequality$^{24}$. This could be useful for detecting and manipulating correlations in non-Gaussian states for practical purposes in information processing and quantum metrology. The relation between uncertainty and nonlocality is discussed in Section IV. The connection between the degree of nonlocality of the underlying physical theory and the fine-grained uncertainty relation has been proposed$^{15}$, as expressed in terms of the maximum winning probability of certain nonlocal games. A generalization of this connection to the case of tripartite systems has been formulated$^{29}$. The fine-grained uncertainty relation determines the degree of nonlocality as manifested by the Svetlichny inequality$^{62}$ for tripartite systems in the same way as it determines the nonlocality of bipartite systems manifested by the Bell-CHSH inequality$^{22,23}$. With the help of the fine-grained uncertainty relation, one is able to differentiate the various classes of theories (i.e., classical physics, quantum physics and no-signaling theories with maximum nonlocality or superquantum correlations) by the value of the maximum winning probability of the relevant retrieval game. The fine-grained uncertainty relation$^{15}$ has been further employed$^{30}$ to distinguish between classical, quantum and super-quantum correlations based on their strength of nonlocality, in the context of biased games$^{58}$ involving two or three parties. Discrimination among the underlying theories with different degrees of nonlocality is in this case possible for a specific range of the biasing parameters, where quantum correlations offer the advantage over classical correlations of winning the particular nonlocal game. Analytical generalizations to multiparty nonlocal games may further be feasible using such an approach$^{30}$. Section V deals with the issue of entropic uncertainty relations for discrete variables in the presence of quantum memory$^{10}$. The optimized lower bound of entropic uncertainty in the presence of quantum memory has been derived$^{16}$ with the help of the fine-grained uncertainty principle$^{15}$. Since entropy (or uncertainty) is directly related to probability, the analysis of fine-graining involves the minimization (or maximization) of probability in order to minimize uncertainty.
In measurements and communication involving two parties, the entropic uncertainty cannot fall below the bound derived using fine-graining, as is illustrated with several examples of pure and mixed states of discrete variables$^{16}$. After fine-graining, the entropic uncertainty relation furnishes a fundamental limitation on the precision of outcomes for the measurement of two incompatible observables in the presence of quantum memory. Implications for the key rate in secure key generation are also discussed. Further work along this direction may be able to shed light on the information theoretic resources entailed in the process of fine-graining. \section*{Acknowledgments} ASM acknowledges support from the project SR/S2/LOP-08/2013 of DST, India. TP acknowledges financial support from ANR retour des post-doctorants NLQCC (ANR-12-PDOC-0022-01).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Recasting the ATLAS search for $\ensuremath{t\bar{t}\hsm}$ in multileptons} \label{sec:recast} \subsection{Overview} The production of a Higgs boson in association with two top quarks is expected in the SM and has therefore been searched for by ATLAS and CMS~\cite{ATLAS-CONF-2016-080,CMS-PAS-HIG-16-038,ATLAS-CONF-2016-058,CMS-PAS-HIG-16-022}. In our study, we focus on the ATLAS $13$~TeV search in multilepton final states \cite{ATLAS-CONF-2016-058}, since it is based on a traditional cut-and-count analysis that is possible to recast. This search targets events where the Higgs decays to either $WW^*$, $ZZ^*$, or $\tau^+\tau^-$. All these channels can lead to signatures with up to four leptons, for which the backgrounds are extremely low at the LHC. The potentially interesting events are grouped into four different signal regions: two with a pair of same-sign light leptons and either zero or one hadronic tau ($2\ell 0\tau$ and $2\ell 1\tau$), one with three light leptons ($3\ell$) and one with four light leptons ($4\ell$). The $13.2~\text{fb}^{-1}$ of data have shown an excess in both the $2\ell 1\tau$ and the $2\ell 0\tau$ regions, particularly for events with only one $b$-tagged jet. The peculiar structure of this excess cannot be explained solely by an enhanced $\ensuremath{t\bar{t}\hsm}$ cross section and could be a sign of new physics. Naively, the $(bW)(bW^*\tau\overline{\tau})$ final state associated with our Type I 2HDMs is similar to the $t\overline{t}(\ensuremath{{H_{_\text{SM}}}}\rightarrow \tau\overline{\tau})$ signal that is looked for in this search. The kinematics of the final states is, however, significantly different. Notably, only one $W$ boson arises from the direct decay of a top quark. The other one is produced through the decay of the charged Higgs to $\ensuremath{{H^0}}/\ensuremath{{A^0}}$ and is therefore much softer, or even off-shell. Consequently, when the mass splitting between $H^+$ and $\ensuremath{{H^0}}/\ensuremath{{A^0}}$ is small, events with four leptons will have lower efficiencies for the trigger and preselection cuts. Similarly, the efficiency of our signal in the $3\ell$ region should be lower than that of $t\overline{t}\ensuremath{{H_{_\text{SM}}}}$. The observed rates for the $2\ell 0\tau$ and $2\ell 1\tau$ regions should however remain significant. Broadly speaking, this structure is similar to that of the excess observed by ATLAS. \subsection{The recasting procedure} In the ATLAS $\ensuremath{t\bar{t}\hsm}$ search, events are selected and classified into four exclusive signal regions associated with various cuts on the multiplicity and momenta of leptons, hadronic taus, light jets and $b$-jets. In addition to these cuts, vetoes against $m_{\ell\ell}\simeq m_{Z}$ as well as against low-mass leptonic Drell-Yan pairs are imposed in events with same-flavor lepton pairs. Depending on the signal region, the light-flavor leptons can also be required to satisfy specific isolation criteria. Although most of the selection cuts are straightforward, the tagging of hadronic taus as well as the isolation requirements on the light leptons require a more careful treatment, which we detail in what follows.
\begin{table}[t] \begin{tabular}{|c|| c|c|c|c|} \hline & $~2\ell 0\tau~$ & $~2\ell 1 \tau~$ & $~~3\ell~~$ & $~~4\ell~~$ \\ \botrule $N_{\text{PGS}}$ & 1.2 & 1.4 & 1.7 & 0.20 \\ \hline $N_{\text{Delphes}}$ & 1.0 & 1.1 & 2.1 & 0.55 \\ \hline $N_\text{ATLAS}$ & 1.7 & 0.73 & 1.2 & 0.11\\ \hline \end{tabular} \caption{\label{tab:validation1} Expected signal yield from $t\bar t (\ensuremath{{H_{_\text{SM}}}}\rightarrow \tau\tau)$ in the signal regions of \cite{ATLAS-CONF-2016-058}, obtained using \texttt{Delphes} and \texttt{PGS} for simulation of the detector response, as well as the expected yield provided by ATLAS.} \end{table} In the $2\ell 0\tau$, $2\ell 1\tau$ and $3\ell$ regions, the light leptons have to pass loose and tight isolation cuts. Since these cuts are associated with efficiencies of $99$\% and $96$\%, respectively~\cite{Aad:2016jkr}, we do not implement them in our analysis. In the $4\ell$ region, the electrons and muons are subjected to so-called ``gradient'' isolation cuts, with a $p_T$-dependent efficiency. These cuts can have a significant impact at low $p_T$ and we therefore take them into account when using \texttt{Delphes}~\cite{deFavereau:2013fsa} for detector simulation. To estimate the errors associated with our modeling of the detector response, we use two different detector simulators: \texttt{Delphes 3} and \texttt{PGS~4}~\cite{deFavereau:2013fsa,Conway:2008pgs}. In \texttt{Delphes}, we set the $b$- and $\tau$-tagging efficiencies/mistag rates to the values used by ATLAS in~\cite{ATLAS-CONF-2016-058} and initially loosen the electron and muon isolation criteria. The rest of the parameters are set to the default values given in the ATLAS $13$~TeV card from \texttt{Delphes}. In order to take into account the possible loss of low-$p_T$ electrons or muons in the $4\ell$ signal region, we select the light leptons in this region with efficiencies corresponding to those of the gradient isolation cuts. When generating events with \texttt{PGS}, we use the default lepton tagging algorithm and do not apply any additional isolation cuts. For hadronic taus, however, since the efficiency of the \texttt{PGS} tagging algorithm is much lower than that used by ATLAS, we modify the \texttt{PGS} code to identify all jets within $\Delta R = 0.2$ of a parton-level tau as hadronic taus. We apply a flat tau-tagging efficiency {\it ex post facto}, given by the branching-ratio-weighted average of the one-prong and three-prong efficiencies given in \cite{ATLAS-CONF-2016-058}. Likewise, we modify the \texttt{PGS} $b$-tagging algorithm to better represent the working point used in \cite{ATLAS-CONF-2016-058}. All the other cuts besides the preselection cuts are implemented without any change. In order to validate our analysis, we generate $\ensuremath{t\bar{t}\hsm}$ events using \texttt{MadGraph5}~\cite{Alwall:2011uj}, and study each of the Higgs decay modes ($\tau^+\tau^-$, $WW^*$ and $ZZ^*$) independently. We match these events up to one additional jet and shower them with \texttt{Pythia6}~\cite{Sjostrand:2006za} using MLM matching~\cite{Hoche:2006ph} with the $k_T$ shower scheme~\cite{Alwall:2014hca}. As mentioned above, we use both \texttt{PGS 4}~\cite{Conway:2008pgs} and \texttt{Delphes 3}~\cite{deFavereau:2013fsa} to simulate the detector response. The expected event yields obtained by our MC study for each of the three Higgs decay channels are given in Tables~\ref{tab:validation1}, \ref{tab:validation2} and \ref{tab:validation3}.
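To make the hadronic-tau treatment above concrete, the following Python sketch illustrates the logic of our modification of the \texttt{PGS} code: jets within $\Delta R=0.2$ of a parton-level tau are labeled as hadronic taus and retained with a flat, branching-ratio-weighted efficiency. The numerical efficiencies and prong fractions below are placeholders only (the values we actually use are taken from \cite{ATLAS-CONF-2016-058}), and the event containers are simplified stand-ins for the \texttt{PGS} record.
\begin{verbatim}
import math
import random

# Hypothetical placeholder values: the actual one- and three-prong
# tau-tagging efficiencies are taken from ATLAS-CONF-2016-058.
EFF_1P, EFF_3P = 0.55, 0.40
BR_1P, BR_3P = 0.50, 0.15   # approximate hadronic-tau prong fractions

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation, with delta-phi wrapped into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def tag_hadronic_taus(jets, parton_taus):
    """Label jets within dR = 0.2 of a parton-level tau as hadronic taus,
    keeping each with a flat branching-ratio-weighted efficiency."""
    eff = (BR_1P * EFF_1P + BR_3P * EFF_3P) / (BR_1P + BR_3P)
    tagged = []
    for jet in jets:
        matched = any(delta_r(jet["eta"], jet["phi"],
                              t["eta"], t["phi"]) < 0.2
                      for t in parton_taus)
        if matched and random.random() < eff:
            tagged.append(jet)
    return tagged
\end{verbatim}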
We observe that \texttt{Delphes} and \texttt{PGS} exhibit complementary performances. \texttt{Delphes} gives a better modeling of the ATLAS efficiencies in the $2\ell 1\tau$ region, whereas \texttt{PGS} shows better agreement in the $2\ell 0\tau$, $3\ell$ and $4\ell$ regions. Overall, \texttt{PGS} provides a better modeling of the SM $\ensuremath{t\bar{t}\hsm}$ efficiencies, and therefore we use \texttt{PGS} for recasting the results in \cite{ATLAS-CONF-2016-058} in terms of rare top decays to $b\,H^{+}$. In our analysis, we use the recasting procedure as is. That is, we do not apply any {\it ad hoc} correction factors to the efficiencies, since we could not determine exactly the origin of our small discrepancies. The overall uncertainties in our detector simulation translate into a factor of $\sim 2$ uncertainty in our results for the signal yields. \begin{table}[t] \begin{tabular}{|c||c|c|c|c|} \hline & $~2\ell 0\tau~$ & $~2\ell 1 \tau~$ & $~~3\ell~~$ & $~~4\ell~~$ \\ \botrule $N_{\text{PGS}}$ & 5.8 & 1.1 & 4.8 & 0.42 \\ \hline $N_{\text{Delphes}}$ & 6.7 & 0.80 & 8.4 & 1.3 \\ \hline $N_\text{ATLAS}$ & 7.5 & 0.66 &4.6 & 0.42\\ \hline \end{tabular} \caption{\label{tab:validation2} Expected signal yield from $t\bar t (\ensuremath{{H_{_\text{SM}}}}\rightarrow WW^*)$ in the signal regions of \cite{ATLAS-CONF-2016-058}, obtained using \texttt{Delphes} and \texttt{PGS} for simulation of the detector response, as well as the expected yield provided by ATLAS.} \end{table} \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline & $~2\ell 0\tau~$ & $~2\ell 1 \tau~$ & $~~3\ell~~$ & $~~4\ell~~$ \\ \botrule $N_{\text{PGS}}$ & 0.25 & 0.06 & 0.17 & 0.021 \\ \hline $N_{\text{Delphes}}$ & 0.37 & 0.069 & 0.59 & 0.16 \\ \hline $N_\text{ATLAS}$ & 0.29 & 0.03 & 0.25 & 0.05\\ \hline \end{tabular} \caption{\label{tab:validation3} Expected signal yield from $t\bar t (\ensuremath{{H_{_\text{SM}}}}\rightarrow ZZ^*)$ in the signal regions of \cite{ATLAS-CONF-2016-058}, obtained using \texttt{Delphes} and \texttt{PGS} for simulation of the detector response, as well as the expected yield provided by ATLAS.} \end{table} \begin{figure}[t] \includegraphics[width=0.3\linewidth]{plots/Hpprod.pdf} \caption{\label{fig:offshell} Feynman diagram for non-resonant production of $H^+\bar{t}b$.} \end{figure} \subsection{Signal generation} A charged Higgs can be produced along with a top quark and a bottom quark through two distinct processes. First, as highlighted throughout this paper, this final state can arise from on-shell top pair-production, with one of the tops decaying to $H^+ b$. As the charged Higgs gets heavier than about $155$~GeV, however, the branching ratio $\text{Br}(t\rightarrow b\,H^+)$ gets suppressed, and off-shell production of the charged Higgs must be included as it gives an important contribution to the signal rate. The dominant non-resonant $H^\pm$ production process, $pp\rightarrow \bar t\,b\,H^+$, is shown in Fig.~\ref{fig:offshell}. We include this process in our study for all charged Higgs masses above $150$~GeV, and use the NLO cross sections provided in \cite{Degrande:2016hyf}. We use \texttt{MadGraph}~\cite{Alwall:2011uj} to generate $t\bar{t}+j$ events, and to decay both tops to $(\bar b W^-)(b\,H^+)$ (and the corresponding charge conjugate process). We match these events up to one additional jet and shower them using \texttt{Pythia}~\cite{Sjostrand:2006za} with MLM matching~\cite{Hoche:2006ph} and the $k_T$ shower scheme~\cite{Alwall:2014hca}. 
We use \texttt{Pythia} to further decay the $W$'s, $H^\pm$'s and $A^0$'s. For the detector simulation, we use \texttt{PGS}~\cite{Conway:2008pgs} with the same settings as described above. The non-resonant process $tbH^\pm\rightarrow(\bar b W^-)bH^\pm$ and its charge conjugate are generated with \texttt{MadGraph}. The subsequent steps are the same as for the first process. \section{Recast of the ATLAS 13 TeV Search in Multileptons} \label{sec:discussion} At the time of writing, \cite{ATLAS-CONF-2016-058} was the most recent ATLAS search for $\ensuremath{t\bar{t}\hsm}$ production in the multilepton channel, corresponding to $13.2~\text{fb}^{-1}$ of 13 TeV data. It targeted leptonic decays of the SM Higgs in $WW^*$, $\tau^+\tau^-$, and $ZZ^*$, by looking into four exclusive signal regions, namely, same-sign dileptons and one hadronically decaying $\tau$ ($2\ell 1\tau_\text{had}$), same-sign dileptons vetoing hadronically decaying $\tau$'s ($2\ell 0\tau_\text{had}$), 3 leptons ($3\ell$), and 4 leptons ($4\ell$). Rare top decays in Fig.~\ref{fig:topology}, with $A^0\rightarrow \tau^+\tau^-$, can contaminate all of these signal regions. We performed a Monte Carlo (MC) study to obtain a quantitative estimate of this contamination and the sensitivity of \cite{ATLAS-CONF-2016-058} to the Type I 2HDM spectra of (\ref{massHierarchy}). Overall, we were able to validate our MC simulation of \cite{ATLAS-CONF-2016-058} by reproducing the $\ensuremath{t\bar{t}\hsm}$ efficiencies provided by ATLAS to within a factor of 2 (12 independent efficiencies in total). This translates into a factor of $\sim2$ uncertainty in our results for the signal strength, and a $\sim50\%$ uncertainty in our results for $\ensuremath{\rm tan \beta}$ (since our signal scales as $\ensuremath{\rm tan \beta}^{-2}$). For a detailed description of our MC study, as well as our statistical treatment of fits and exclusions, we refer the reader to Appxs.~\ref{sec:recast}, \ref{appx:stats}. Throughout our study, we include the contribution of the SM $\ensuremath{t\bar{t}\hsm}$ process, assuming $\mu_{t\bar t H}=1$. \begin{figure}[t] \centering \begin{subfigure} \centering \includegraphics[width=0.495\textwidth]{plots/hpluslimits} \end{subfigure}% \begin{subfigure} \centering \includegraphics[width=0.49\textwidth]{plots/tanb} \end{subfigure} \caption{Recast of the ATLAS $\ensuremath{t\bar{t}\hsm}$ search in multileptons. (Left) Lower bounds on $\ensuremath{\rm tan \beta}$ inferred from limits on $t \rightarrow b\,\ensuremath{{H^+}}\rightarrow b\,W^{+*} (A^0\rightarrow \tau^+\tau^-)$. (Right) Values of $\ensuremath{\rm tan \beta}$ that best fit the data.} \label{fig:tanbLimFit} \end{figure} We first obtain upper bounds on the contamination from $t\rightarrow b \ensuremath{{H^+}}$, such that the expected numbers of events in the signal regions of \cite{ATLAS-CONF-2016-058} remain compatible with observations. We interpret these constraints as a lower bound on $\ensuremath{\rm tan \beta}$ in the $m_\ensuremath{{H^+}} ,\,m_\ensuremath{{A^0}}$ mass plane, shown in Fig.~\ref{fig:tanbLimFit}(a). Our signal yield is suppressed in the compressed regions $m_{H^\pm} - m_\ensuremath{{A^0}}\lsim30$~GeV (where $\text{Br}(H^\pm\rightarrow W^{\pm(*)}A^0)$ is small), and $m_t-m_{H^\pm}\lsim20$~GeV (where $\text{Br}(t\rightarrow b\,H^{+})$ is small).
In these regions of suppressed signal yield, the inferred lower bounds on $\ensuremath{\rm tan \beta}$ are innocuous. These limits would be stronger but for the presence of excesses in the data. Consequently, it behooves us to see whether we can understand these excesses as arising from rare top decays. In Fig.~\ref{fig:tanbLimFit}(b) we show the best fit for $\ensuremath{\rm tan \beta}$ at each point in the parameter space of $m_{H^+}$ and $m_{A^0}$. Notably, the efficiencies for the process $t \rightarrow b\,\ensuremath{{H^+}}\rightarrow b W^+ (A^0\rightarrow \tau^+\tau^-)$ (before folding in branching ratios) do not vary dramatically across the $m_\ensuremath{{H^+}} ,\,m_\ensuremath{{A^0}}$ mass plane. \begin{figure}[t] \includegraphics[width=0.6\textwidth]{plots/favoredRegion} \caption{Preferred regions from the fit to the data in \cite{ATLAS-CONF-2016-058}, at 68\% C.L. (green region), and 90\% C.L. (yellow region). Also shown: regions in tension with indirect flavor observables, and in tension with CMS limits on $\text{Br}(t \rightarrow b\, (\ensuremath{{H^+}}\rightarrow\tau^+\nu))$. These tensions only apply if the best $\ensuremath{\rm tan \beta}$ fit is assumed for each mass point.} \label{fig:parameterspace} \end{figure} In Fig.~\ref{fig:parameterspace}, we show the preferred regions of parameter space, defined from the goodness of fit to the data in \cite{ATLAS-CONF-2016-058}. Here, the values of $\ensuremath{\rm tan \beta}$ are profiled at each mass point to yield the best fit (see Fig.~\ref{fig:tanbLimFit}(b)). Under this assumption, some regions of parameter space are excluded by other measurements. In particular, the compressed region $m_{H^\pm} - m_\ensuremath{{A^0}}\lsim40$~GeV is in tension with the CMS bounds on $\text{Br}(t \rightarrow b\, (\ensuremath{{H^+}}\rightarrow\tau^+\nu))$. Likewise, the compressed region $m_t-m_{H^\pm}\lsim10$~GeV is in tension with $b$-flavor observables discussed in Sec.~\ref{sec:constraintsIndirect}. \begin{figure}[t] \includegraphics[width=0.8\textwidth]{plots/signalStrengths} \caption{Signal strength measured by ATLAS in each of the four signal regions, as well as the predictions from our two benchmark points, B1 and B2. The predictions from the best $\ensuremath{\rm tan \beta}$ fits of B1 and B2 are shown as red and blue dots, respectively. Moreover, $\ensuremath{\rm tan \beta}$ is varied to encompass the $1\sigma$ range favored by data in the $2\ell 1\tau_\text{had}$ channel. Only values of $\ensuremath{\rm tan \beta}$ consistent with flavor constraints are used, leading to a shorter allowed range for B2. Note that the signal strengths displayed here are subject to up to a factor of 2 uncertainty, stemming from our MC estimation of signal efficiencies.} \label{fig:mu} \end{figure} Finally, we select two benchmark points to illustrate the pattern of contamination across the four signal regions of \cite{ATLAS-CONF-2016-058}. The first benchmark point, B1, is at low mass, $m_{H^\pm}=130$~GeV, $m_\ensuremath{{A^0}}=40$~GeV. For this benchmark, $H^\pm$ can decay to an on-shell $W^\pm$, yielding a harder charged lepton. Direct production of $H^+H^-$ and $H^\pm A^0$ has a cross section of several hundred femtobarns at 13 TeV, and in principle dedicated searches in final states with 1 or 2 leptons plus 3 or more $b$-jets could be sensitive to this point. The second benchmark, B2, is at high mass, and close to the excluded compressed regions, $m_{H^\pm}=160$~GeV, $m_\ensuremath{{A^0}}=95$~GeV.
Because of the small mass splitting between $t$ and $H^+$, the $b$-jet from $t\rightarrow b\,H^+$ is relatively soft and has a lower probability of passing the $b$-tagging requirements. Overall, B2 predicts that roughly 72\% of its signal in the $2\ell 1\tau_\text{had}$ region has only one $b$-tagged jet. Indeed, the data in the $2\ell 1\tau_\text{had}$ channel seem to indicate that the excess appears exclusively in the 1 $b$-tag category. Since the current statistics are very limited, this might be due to Poisson fluctuations, and only more data will be able to settle this matter. A final comment on B2 regards our choice of $m_{A^0}$ for this benchmark. This was motivated by an excess at $m_{H^0}\approx (90-100)$~GeV in the combination of LEP searches for the SM Higgs \cite{Barate:2003sz}, which could be interpreted in our scenario as a signal of $H^0$, if this were the lightest neutral scalar with $\zeta_H^2\sim\mathcal{O}(10^{-1})$ (see Eqs.~(\ref{eq:scalarcouplings1}), (\ref{eq:scalarcouplings2})). In Fig.~\ref{fig:mu}, we show the predicted signal strength from benchmarks B1 and B2, measured in units of the SM $\ensuremath{t\bar{t}\hsm}$ signal strength, for each of the four signal regions of \cite{ATLAS-CONF-2016-058}. Both the combined best fit and a selected range of $\ensuremath{\rm tan \beta}$ are shown. Specifically, this range is chosen so it encompasses the $1\sigma$ range favored by data in the $2\ell 1\tau_\text{had}$ channel. We point out that for B2, it is not possible to reach the $1\sigma$ upper range of $2\ell 1\tau_\text{had}$ without violating flavor observables. For this reason, we cut off the range of B2 at this exclusion boundary. We can see that the typical excess pattern seen by \cite{ATLAS-CONF-2016-058} is well explained by the hypothesis of contamination from rare top decays. At this point, however, the uncertainties are still large enough that even the no-signal hypothesis is marginally consistent with observations. Upcoming analyses with more data will either tighten the exclusions of our model, or, optimistically, corroborate the deviations from the SM expectation. \section{Signals in \ensuremath{t\bar{t}\hsm} ~Searches} \label{sec:colliders} \begin{figure}[t] \includegraphics[width=0.7\textwidth]{plots/topology} \caption{Signal from rare top decay to $b\,H^\pm$ in a light Type I 2HDM, whose final states overlap with those of SM \ensuremath{t\bar{t}\hsm}.} \label{fig:topology} \end{figure} Although the parameter space of Type I 2HDMs with mass hierarchy: \begin{eqnarray}\label{massHierarchy} M_{\phi^0=A^0\,\text{or}\,H^0}~<~~ M_{H^0_{_{\text{SM}}}}\;,\;M_{H^\pm}~~<~M_{t} \end{eqnarray} is still experimentally viable, its phenomenology remains relatively unexplored in comparison to that of heavier 2HDM spectra. A striking signature of models with (\ref{massHierarchy}) is rare top decays that can contaminate $\ensuremath{t\bar{t}\hsm}$ searches, particularly those targeting leptonic or $b\bar b$ decays of the SM Higgs, as illustrated in Fig.~\ref{fig:topology}. Given the very large top pair cross section, and the fact that $\text{Br}(t\rightarrow b H^+)$ can be as high as a few percent, this contamination can lead to observable excesses in $\ensuremath{t\bar{t}\hsm}$ measurements relative to the SM expectation. The excess pattern, however, would appear inconsistent across different channels if interpreted as an enhanced $\ensuremath{t\bar{t}\hsm}$ signal strength.
Generically, no excess should appear in $\gamma\gamma$ channels, since searches typically require $m_{\gamma\gamma}\approx 125$~GeV (within resolution) to specifically target the SM Higgs. On the other hand, one would expect excesses in channels targeting $b\bar b$ and $\tau^{+} \tau^{-}$ final states, albeit with different strengths. Many of these analyses use multivariate discriminants (MVAs), such as boosted decision trees, neural networks, etc., which may be tuned to the specific final-state kinematics of $\ensuremath{t\bar{t}\hsm}$. In those cases, the contamination from rare top decays may be partially filtered out, depending on how well the MVAs can discriminate between the two processes. Normally, details of MVA-based studies are not public, and the extraction of limits on contaminant signals is infeasible. In fact, SM Higgs searches as early as those at the Tevatron could have been contaminated by rare top decays. The CDF collaboration has searched for $t\,\bar t\,(h^0\rightarrow b\,\bar b) $ over the mass range $m_{h^0}=(100-150)$~GeV~\cite{Collaboration:2012bk, Aaltonen:2013ipa}, and observed an $\mathcal{O}(2\sigma)$ excess above expectations at $m_{h^0}\sim(100-105)$~GeV. The reported best fit for the signal strength was $\mu_{t\bar t h^0}=7.40^{+4.65}_{-3.80}$\, for $m_{h^0}=100{~\rm GeV}$, and $\mu_{t\bar t h^0}=8.56^{+4.82}_{-4.10}$\, for $m_{h^0}=105{~\rm GeV}$. This would correspond to a rate of roughly $(65\pm 40)$~fb, or, in terms of the inclusive top pair cross section, $(0.003 - 0.015)\times\sigma_{t\,\bar t}$. This analysis was MVA-based, and its mass resolution was limited due to the presence of four $b$-quarks in the signal final state, leading to a combinatoric ambiguity in identifying the $b$-jets originating from $h^0$ decays, and therefore to a broadening of the expected $m_{b\bar b}$ peak. All these factors preclude us from inferring any concrete implications regarding a potential contamination from BSM processes, but the results are nonetheless intriguing, and, if corroborated with further deviations at the LHC, could warrant a re-analysis of the Tevatron data. At the LHC, existing $\ensuremath{t\bar{t}\hsm}$ results from ATLAS and CMS do seem to suggest a pattern of excesses inconsistent with the hypothesis of enhanced $\ensuremath{t\bar{t}\hsm}$ production, although with small statistical significance. At 8 TeV, the uncertainties in $\ensuremath{t\bar{t}\hsm}$ measurements are too large to offer any indication of an excess or lack thereof \footnote{One notable exception is the CMS measurement in the same-sign dilepton channel \cite{Khachatryan:2014qaa}, which gives the following best fit for the signal strength: $\mu_{ttH}=5.3^{+2.1}_{-1.8}$\,.}, with a combined best fit of $\mu_{ttH}=2.3^{+0.7}_{-0.6}$ (ATLAS + CMS, all channels) \cite{Khachatryan:2016vau}. Existing 13 TeV data is inconclusive as well, showing no excess in CMS $b\bar b$ (MVA) \cite{CMS-PAS-HIG-16-038} and ATLAS $\gamma\gamma$ \cite{ATLAS:2016nke}, and $\mathcal{O}(1\sigma)$ excesses in CMS multileptons (MVA) \cite{CMS-PAS-HIG-16-022, CMS:2017vru}, CMS $\gamma\gamma$ \cite{CMS:2016ixj}, ATLAS $b\bar b$ (MVA) \cite{ATLAS-CONF-2016-080}, and ATLAS multileptons \cite{ATLAS-CONF-2016-058}.
Of all 13 TeV searches to date, only the latter, ATLAS multileptons, employs a traditional cut-and-count procedure\footnote{The $4\ell$ category in the CMS multilepton analysis \cite{CMS:2017vru} employs a cut-and-count procedure as well, but the final measurement has uncertainties substantially larger than those in ATLAS and is therefore less sensitive.}, and therefore is amenable to recasting in terms of our charged Higgs signal. We shall do so in the following section. We end by commenting on our choice of spectrum when reinterpreting the ATLAS multilepton results. As previously discussed, if (\ref{massHierarchy}) is realized, the charged Higgs will dominantly decay to $W^{(*)}\phi^0$, where $\phi^0$ is the {\it lightest} neutral scalar. Since LHC measurements of the 125~GeV Higgs push this model into the alignment limit, $(\sin\delta)^2\lsim 0.1$, the signals in Fig.~\ref{fig:topology} will be essentially independent of whether $\phi^0=A^0$ or $H^0$, since \begin{equation} \frac{\text{Br}(H^\pm\rightarrow W^{\pm(*)}H^0)}{\text{Br}(H^\pm\rightarrow W^{\pm(*)}A^0)}\,\Bigg|_{m_{A^0}=m_{H^0}}=~~~1-(\sin\delta)^2. \end{equation} Moreover, if lighter than $\sim 110$ GeV, $A^0$ and $H^0$ will have the same leading branching ratios, namely, $\text{Br}(\phi^0\rightarrow b\overline b)\approx 0.8$ and $\text{Br}(\phi^0\rightarrow \tau^+\tau^-)\approx 0.08$. For practical purposes, therefore, we will choose $A^0$ as the lightest neutral scalar and decouple $H^0$ (i.e., set $m_{H^0}>m_{H^\pm}$) for the remainder of the paper. None of the results that follow change in any significant way if the roles of $A^0$ and $H^0$ are interchanged. The only loss of generality that comes with this assumption is the possibility of two independent decay modes. However, \cite{ATLAS-CONF-2016-058} does not rely on a mass peak, the efficiencies do not vary dramatically over the range of $m_{A^0,\,H^0}$ considered, and, in the presence of a mass hierarchy between $A^0$ and $H^0$, the lighter mode dominates the bosonic decays of $H^\pm$. Consequently, even this complication should not impact our results significantly. \section{Discussion} \label{sec:conclusion} While the LHC has so far found no compelling evidence for new physics, tremendous possibilities still exist for discovery of new particles, including light electroweakly charged ones. In Type I two Higgs doublet models, a charged Higgs lighter than the top quark can naturally be produced at significant rates from rare top decays. Remarkably, if there are additional light neutral scalars in the spectrum, the final states of such signals overlap substantially with those of SM $\ensuremath{t\bar{t}\hsm}$ processes. As a consequence, it is natural to consider signals from light extended Higgs sectors as a contaminant to existing SM searches. Interestingly, many -- but not all -- of the existing \ensuremath{t\bar{t}\hsm}\ searches show excesses, both at the Tevatron as well as the 8 and 13 TeV runs of the LHC. It is challenging to simultaneously reconcile these excesses with each other -- significant excesses in leptonic channels, and an inconclusive pattern in $\gamma \gamma$ and $b\bar b$ channels, for instance. If these excesses persist, they could potentially be explained by contamination from a new charged Higgs signal in top decays.
On general grounds, one would expect that the more tuned a given analysis is to the specific final-state kinematics of the $\ensuremath{t\bar{t}\hsm}$ process, the less sensitive it should be to BSM top decays. That is the case for LHC searches for $t\bar t(\ensuremath{{H_{_\text{SM}}}}\rightarrow \gamma \gamma)$, as these focus on a narrow $m_{\gamma \gamma}$ window around 125 GeV; or for (post-Higgs-discovery) MVA analyses in general. On the other hand, more inclusive analyses should be more prone to contamination from signals of extended Higgs sectors. Examples of more inclusive analyses are the Tevatron's CDF search for $t\bar t (h^0\rightarrow b\bar b)$ \cite{Collaboration:2012bk, Aaltonen:2013ipa}, which considers a broader $m_{b\bar b}$ window of $100 - 150$~GeV; and the 13 TeV ATLAS search for leptonic $\ensuremath{t\bar{t}\hsm}$ \cite{ATLAS-CONF-2016-058}, which employs a more conventional cut-and-count strategy, and might have non-negligible acceptance for BSM signals in multileptons plus two or more $b$-jets. We have recast the 13 TeV ATLAS search for $\ensuremath{t\bar{t}\hsm}$ in multilepton final states \cite{ATLAS-CONF-2016-058}, and found that its signal regions can be naturally contaminated by a light Type I 2HDM spectrum at low $\ensuremath{\rm tan \beta}$. Our recast of the results of \cite{ATLAS-CONF-2016-058} provides new limits on these models. Furthermore, these models can also explain the excesses observed in the data without conflicting with null results from other measurements. In principle, this signal could also show up in other searches, such as those targeting $\ensuremath{{H_{_\text{SM}}}}\rightarrow b\bar b$, depending on the details of the MVA used, and the masses of $\ensuremath{{A^0}}/H^0$. Indeed, considering the high branching ratio of $\ensuremath{{A^0}}/H^0 \rightarrow b \bar b$, final states with many $b$-jets may provide the strongest tests going forward. Should the excesses persist, it is clear that light Type I 2HDM spectra provide an attractive potential explanation. Broadening search windows, especially for $\ensuremath{{A^0}}/H^0 \rightarrow b \bar b$ at lower $m_{b \bar b}$, could further constrain this scenario, or, possibly, provide the first evidence at the LHC of physics beyond the Standard Model. \section{Constraints on a Light Higgs Sector} \label{sec:constraints} While new scalars with sizeable couplings to SM fermions or gauge bosons are subject to constraints from LEP, Tevatron, and LHC data, additional Higgs bosons with suppressed Yukawa couplings remain elusive in existing searches. In this section, we summarize the constraints on the charged and neutral Higgs bosons of Type I 2HDMs, with a focus on the light mass region. \subsection{Indirect Bounds} \label{sec:constraintsIndirect} As previously mentioned, indirect bounds on light Type I 2HDMs are mild. Besides $b\rightarrow s\gamma$ already discussed in Sec.~\ref{sec:intro}, other competing constraints from flavor observables are $B_s\rightarrow\mu^+\mu^-$ and $\Delta M_{B_{s,d}}$, which are only marginally stronger, requiring that $\ensuremath{\rm tan \beta}\gsim 1.8-2.2$ in the mass range $m_{H^\pm}<m_t$ \cite{Enomoto:2015wbn}. Another source of indirect constraints comes from contributions to the electroweak oblique parameters, particularly $T$, induced by the mass splittings between $H^\pm$, $A^0$ and $H^0$. For the light spectra considered here, however, we have checked that $\Delta T$ constraints are easily evaded.
\subsection{Collider bounds} LEP placed a robust lower bound of $m_{H^\pm}\gsim 78.6$~GeV by searching for the decay modes $H^+ \rightarrow c \bar{s},\, \tau^{+} \nu$, assuming the absence of any non-fermionic decays \cite{Searches:2001ac}. The DELPHI and OPAL collaborations also considered the bosonic decay $H^{\pm} \rightarrow W^{\pm*} A^{0}$~\cite{Abdallah:2003wd,Abbiendi:2008aa}. In Type I 2HDM scenarios, the $\ensuremath{\rm tan \beta}$-independent limits obtained were $m_{H^\pm}\gsim 77.6$ GeV (DELPHI) and $m_{H^\pm}\gsim 65$ GeV (OPAL), provided that $m_{A^{0}}\gsim12$~GeV. The OPAL collaboration has searched for the associated production $e^+e^-\rightarrow A^0H^0$, with $A^0,\,H^0\rightarrow q\bar q,\,gg$, and $\tau^+\tau^-$ \cite{Abbiendi:2004gn,Abbiendi:2004ww}. While the resulting mass limits vary across the parameter space, they essentially turn off for either $A^0$ or $H^0$ heavier than $\sim 80{~\rm GeV}$. \begin{figure}[t] \includegraphics[width=0.6\textwidth]{plots/taunulimits} \caption{\label{fig:taunulimits} Reinterpretation of the 8 TeV CMS limits on $\text{Br}(t\rightarrow b\,H^+)\times\text{Br}(H^+\rightarrow \tau^+\nu)$ \cite{CMS:2014cdp} as lower bounds on $\ensuremath{\rm tan \beta}$ in a Type I 2HDM, assuming that $m_{H^0}>m_{H^\pm}$.} \end{figure} LEP, Tevatron and LHC searches for the SM Higgs are also potentially sensitive to the neutral scalars, $A^0$ and $H^0$. If $A^0,\,H^0$ are lighter than $\sim 110{~\rm GeV}$, however, these states decay dominantly to $b\bar b$ final states (with $\tau^+\tau^-$ as the subleading mode), and are challenging to probe at the Tevatron and LHC due to the large backgrounds and suppressed cross sections relative to the SM Higgs. While SM Higgs searches at LEP cannot constrain $A^0$ due to the absent $A^0Z^0Z^0$ coupling (see Eqs.~(\ref{eq:scalarcouplings1},\ref{eq:scalarcouplings2})), they are sensitive to $e^+e^-\rightarrow H^0Z^0$ production, and constrain $\zeta_{H}^2\lsim0.01-0.3$ in the range $m_{H^0}\simeq (15-115){~\rm GeV}$ \cite{Barate:2003sz}. Furthermore, LHC measurements of the SM Higgs properties provide non-trivial constraints on the parameters of the scalar potential (\ref{eq:higgspot}). If $\phi^0$ ($=A^0$ or $H^0$) is lighter than $m_{H_\text{SM}}/2$, the decay channel $\ensuremath{{H_{_\text{SM}}}}\rightarrow \phi^0\phi^0$ can easily dominate the SM Higgs width for generic values of the quartic couplings in (\ref{eq:higgspot}). In order to avoid conflict with observations, in the mass range $m_{\phi^0}\lesssim62$~GeV the tri-scalar coupling $\lambda_{\phi\phi\ensuremath{{H_{_\text{SM}}}}}$ must be suppressed, $\lambda_{\phi\phi\ensuremath{{H_{_\text{SM}}}}}\lesssim (2-6)$~GeV. While this condition is not generically satisfied, $\lambda_{\phi\phi\ensuremath{{H_{_\text{SM}}}}}$ is a parameter that can be adjusted independently of the physical masses and the mixing angles $\delta$ and $\beta$. Since the charged Higgs phenomenology we will consider is not directly affected by $\lambda_{\phi\phi\ensuremath{{H_{_\text{SM}}}}}$, we will include the region $m_{\phi^0}\lesssim62$~GeV in our study, with the implicit assumption that $\Gamma(\ensuremath{{H_{_\text{SM}}}}\rightarrow \phi^0\phi^0)$ is not in conflict with observations.
Another parameter that is directly constrained by SM Higgs measurements is $\delta$ -- current data pushes the model towards the alignment limit where $\delta$ is small and the properties of the 125 GeV Higgs are ``SM-like''. For a more thorough discussion, see \cite{Alves:2012ez,Bernon:2015qea,Bernon:2015wef,Haber:2017udj}. Previously mentioned upper bounds on $\text{Br}(t\rightarrow b\,(H^+\rightarrow \tau^+\nu))$ from ATLAS and CMS are sensitive enough to be relevant even if the decay mode $H^+\rightarrow \tau^+\nu$ is subdominant, as is the case for the Type I 2HDMs we are considering. We recast the 8 TeV CMS constraints on the branching ratio $\text{Br}(t\rightarrow b\,H^+)\times\text{Br}(H^+\rightarrow \tau^+\nu)$ \cite{CMS:2014cdp} as lower bounds on the value of \ensuremath{\rm tan \beta}, displayed in Fig.~\ref{fig:taunulimits} as a function of $m_{H^\pm}$ and $m_{A^0}$, assuming for simplicity (and without loss of generality) that $m_{H^0}>m_{H^\pm}$. To summarize, the light mass region of Type I 2HDMs is still experimentally viable in a vast swath of parameter space. In what follows, we investigate how this region can be constrained by existing LHC searches for the Standard Model $t\,\bar t\,H_{_\text{SM}}$ process. \section{\label{sec:intro} Introduction} So far, LHC searches have not provided conclusive signs of new particles, nor significant deviations from Standard Model predictions. Generic limits on new colored particles are particularly severe, with squarks and gluinos from Supersymmetry already tightly constrained if lighter than 1.3~TeV and 1.9~TeV, respectively \cite{ATLAS:2016kts}. Exclusion limits on direct production of new electroweak particles, in contrast, have not been as dramatic. Mass limits on charginos produced via electroweak interactions, for instance, do not extend beyond 400-500 GeV \cite{CMS:2016gvu}. The reason for this is straightforward: at a $p$-$p$ machine such as the LHC, electroweak cross sections are simply much smaller than strong cross sections. Because of that, the best means to search for new electroweak states is often in cascade decays of copiously produced colored particles. This approach, however, relies on the existence of heavier colored particles within the reach of the LHC. As this possibility becomes more and more remote, the only realization of this scenario that can be concretely studied is rare top quark decays, which unfortunately cannot probe new particles heavier than about $170$ GeV. The range below the top quark mass is nonetheless a well-motivated region to search for states beyond the Standard Model (BSM). One possibility of great interest would be rare top decays to charged Higgs bosons, $t \rightarrow b\, \ensuremath{{H^+}}$. With the $t\overline{t}$ cross section at the 13 TeV LHC being over 800 pb, even small branching ratios $\text{Br}(t \rightarrow b\, \ensuremath{{H^+}})\sim\mathcal{O}(10^{-3})$ would yield an $H^\pm$ production rate at the $\mathcal{O}$(pb) level or higher. Since direct electroweak production of these states ranges from $\mathcal{O}(30-200)~{\rm fb}$ at 13 TeV, the rate from top quark decays could easily dominate. This scenario has not gone without scrutiny. Indeed, there is a variety of independent constraints on new charged scalars lighter than the top quark.
In {\it Type II}\; Two Higgs Doublet Models (2HDM) in particular, flavor-changing observables, most notably $b\rightarrow s\gamma$, exclude $m_{H^\pm} \lsim 580 {~\rm GeV}$ \cite{Enomoto:2015wbn,Misiak:2017bgg}, absent cancellations. Flavor bounds are highly model-dependent, however. By contrast, in {\it Type I}\; 2HDMs, constraints from $b\rightarrow s\gamma$ are mild at best, requiring only that $\tan\beta \gsim (1.5-2)$ for $m_\ensuremath{{H^+}}<m_t$. Thus, at least for Type I 2HDM scenarios, there is a compelling case to consider direct searches for new Higgs states at colliders, in particular in rare top decays. A critical point to consider here is that, in a Type I 2HDM, one of the Higgs doublets is fermiophobic. This can drastically alter the phenomenology of the charged Higgs relative to a Type II scenario. In particular, if a lighter neutral scalar $\phi^0 \,(= A^0,\,H^0)$ exists, the charged Higgs will dominantly decay as $\ensuremath{{H^+}} \rightarrow \phi^0\, W^{+(*)}$, and the familiar fermionic decays (e.g., $\ensuremath{{H^+}} \rightarrow \tau^+ \nu,\,c\,\bar{s},\, t^*\,\overline{b}$) will be suppressed. In this region of parameter space, our process of interest will be: \begin{eqnarray} \label{ttsignal} p\,p~\rightarrow~ t\,\bar t~&\rightarrow&~ (bW^+)(\bar{b}\,H^-)\nonumber\\ &\rightarrow&~ (bW^+)(\bar{b}\;W^{-(*)}\phi^0) \end{eqnarray} with $\phi^0$ dominantly decaying as \begin{equation} \label{phidecay} \phi^0\rightarrow b\,\bar{b}\,,\;\tau^+\tau^-, \end{equation} if $m_{\phi^0}\lsim110~\text{GeV}$. The resulting final state, with rates of $\mathcal{O}$(pb) or higher, can lead to a large signal contamination in searches for the SM Higgs boson produced in association with top quark pairs, $t\bar{t}\ensuremath{{H_{_\text{SM}}}}$, whose SM cross section is about $0.5~\text{pb}$. Indeed, existing and future measurements of the $t\bar{t}\ensuremath{{H_{_\text{SM}}}}$ cross section can be used to constrain the Type I 2HDM signatures in (\ref{ttsignal}), (\ref{phidecay}). A more exciting prospect would be to explain recent excesses in existing $t\bar{t}\ensuremath{{H_{_\text{SM}}}}$ searches as due to contamination from rare top decays to charged Higgses. While the significance of current excesses is mild, upcoming results with more data will lead to a clearer picture of the excess pattern, should it persist. The layout of this paper is as follows: in Sec.~\ref{sec:model} we briefly review Type I 2HDMs and describe the charged Higgs phenomenology in the light mass region. Direct and indirect bounds on the relevant region of parameter space are reviewed in Sec.~\ref{sec:constraints}. In Secs.~\ref{sec:colliders}, \ref{sec:discussion}, we discuss how $t\bar{t}\ensuremath{{H_{_\text{SM}}}}$ searches can be used to constrain charged Higgs production, and describe the degree to which the claimed excess in various channels can be explained by this model. Finally, in Sec.~\ref{sec:conclusion} we discuss the implications of these results and comment on future searches that might help better constrain this light region of Type I 2HDMs.
\newpage \section{Model and Signals} \label{sec:model} In a Type I 2HDM, one of the Higgs doublets, $H_2$, is fermiophobic, and all fermion masses stem from Yukawa couplings to $H_1$: \begin{equation} \mathcal{L}_{\text{yukawa}}=H_1\,Q\,Y_u U^c + H_1^\dagger\,Q\,Y_d D^c + H_1^\dagger\,L\,Y_\ell E^c~+~\text{h.\,c.} \end{equation} The scalar potential can be generically parameterized as \cite{Gunion:1989we}: \begin{eqnarray} V_{\text{scalar}}&=&\lambda_1\left(|H_1|^2-v_1^2\right)^2\,+~\lambda_2\left(|H_2|^2-v_2^2\right)^2\nonumber\\ &+&\lambda_3\left( (|H_1|^2-v_1^2) + (|H_2|^2-v_2^2) \right)^2 \label{eq:higgspot}\\ &+&\lambda_4\left( |H_1|^2|H_2|^2 - |H_1^\dagger H_2|^2 \right) \nonumber\\ &+&\lambda_5\left( \text{Re}(H_1^\dagger H_2) - v_1v_2 \right)^2 \,+~ \lambda_6\left(\text{Im}(H_1^\dagger H_2)\right)^2\,,\nonumber \end{eqnarray} where both doublets, $H_1$ and $H_2$, have hypercharge $Y=1/2$, and for simplicity we assume that CP is conserved and all parameters in (\ref{eq:higgspot}) are real. Conventionally, the mass eigenstates of this theory are parameterized by two angles: $\alpha$, the mixing angle between the CP-even neutral states, and $\beta$, defined as $\text{tan}\beta \equiv v_1/v_2$: \begin{eqnarray} \Spvek{H_1^0; H_2^0} ~=~ \Spvek{v_1; v_2}~+~\frac{1}{\sqrt{2}} R_\alpha \Spvek{H^0_\text{light}~ ;H^0_\text{heavy}}~+~\frac{i}{\sqrt{2}}R_{\beta}\Spvek{G^0;A^0}\,, \end{eqnarray} \begin{eqnarray} \Spvek{H_1^\pm; H_2^\pm} ~=~ R_\beta \Spvek{G^\pm;H^\pm}\,, \end{eqnarray} where \begin{equation} R_\alpha = \left(\begin{matrix} ~\cos \alpha & \sin \alpha \\ -\sin \alpha & ~\cos \alpha \end{matrix} \right)~~,~~~~ R_\beta = \left(\begin{matrix} -\sin \beta & ~\cos \beta \\ ~\cos \beta & \sin \beta \end{matrix} \right). \end{equation} Usually, the ``SM-like'' Higgs (corresponding to the state discovered at the LHC with mass $m_h=125$ GeV) is the lighter CP-even neutral scalar, $H^0_\text{light}$. That need not be the case, though, and in principle the SM-like Higgs could be the heavier CP-even scalar, $H^0_\text{heavy}$. Since we are interested in both regimes, we will adopt the following, more generic parameterization \cite{Alves:2012ez} \begin{equation} \Spvek{H_1^0; H_2^0} ~=~ \Spvek{v_1; v_2}~+~\frac{1}{\sqrt{2}} R_{\beta-\delta} \Spvek{H_{_\text{SM}}^0;\ensuremath{{H^0}}~}~+~\frac{i}{\sqrt{2}}R_{\beta}\Spvek{G^0;\ensuremath{{A^0}}}\,, \end{equation} where \begin{equation} R_{\beta-\delta} ~=~ \left(\begin{matrix} \sin (\beta -\delta)& -\cos (\beta - \delta) \\ \cos (\beta -\delta)& ~\sin (\beta -\delta) \end{matrix} \right). \end{equation} Here, $\delta$ is a parameter that describes the deviation from the alignment limit. If the SM-like Higgs corresponds to the lightest CP-even scalar, $\delta$ is defined by $\delta \equiv \beta - \alpha - \pi/2$. Conversely, if the SM-like Higgs corresponds to the heaviest CP-even scalar, then $\delta \equiv \beta - \alpha$. The advantage of this parameterization is that $\delta$ quantifies the deviation from a SM-like Higgs, and there is no discontinuity in our description of fields as the mass hierarchy changes. That is, $H_{_\text{SM}}^0$ is always the SM-like state, and $H^0$ is always the state with suppressed couplings to fermions.
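As a quick numerical sanity check of this parameterization (a minimal sketch, assuming the conventions above and $\tan\beta=v_1/v_2$), one can verify that in the alignment limit $\delta\rightarrow0$ the $H_{_\text{SM}}^0$ direction in $(H_1^0,H_2^0)$ field space coincides with the vev direction, so that the SM-like state carries the full couplings to gauge bosons:
\begin{verbatim}
import numpy as np

def rotation(angle):
    """The R_{beta - delta} matrix defined above."""
    s, c = np.sin(angle), np.cos(angle)
    return np.array([[s, -c],
                     [c,  s]])

beta, delta = np.arctan(3.0), 0.0        # tan(beta) = 3, alignment limit
h_sm_direction = rotation(beta - delta)[:, 0]

# With tan(beta) = v1/v2, the vev points along (sin(beta), cos(beta)).
vev_direction = np.array([np.sin(beta), np.cos(beta)])

# In the alignment limit, H_SM is exactly the state along the vev.
assert np.allclose(h_sm_direction, vev_direction)
\end{verbatim}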
In terms of the mass eigenstates, the Yukawa couplings can be re-written as: \begin{eqnarray} \label{scalarYukawaL} \mathcal{L}_{\text{yukawa}}~=~\sum_{f}~~ &&\xi_{_{H_\text{SM}}} \frac{m_f}{v}\, H^0_{_\text{SM}} ff^c \;+~ \xi_{_H} \frac{m_f}{v}\, H^0 ff^c\quad \nonumber \\ +~ && i\,\xi_{_A}^f\; \frac{m_f}{v}\, A^0 ff^c \;+~ \xi_{_A}^f\; \frac{m_f}{v}\, \sqrt{2}\, U_{ff^{\prime}}\, H^\pm f f^{\prime\,c}\;~+~~\text{h.\,c.}\,,~~ \end{eqnarray} where $v=246$~GeV, $U_{ff^{\prime}}$ is a CKM matrix element if $f,f^{\prime\,c}$ are quarks, and $U_{ff^{\prime}}=1$ if $f,f^{\prime\,c}$ are leptons. Moreover, \begin{eqnarray} \label{scalarYukawaScaling} \xi_{_{H_\text{SM}}}=\cos\delta-\frac{\sin\delta}{\ensuremath{\rm tan \beta}}~,~~~\xi_{_H}=\sin\delta+\frac{\cos\delta}{\ensuremath{\rm tan \beta}}~,~~~\xi_{_A}^u=-\xi_{_A}^{d,e}=\frac{1}{\ensuremath{\rm tan \beta}}. \end{eqnarray} Likewise, the EW-symmetry-breaking couplings of scalars to vector bosons are given by: \begin{eqnarray} \mathcal{L}_{\phi VV}~=~\zeta_{\phi} \,\frac{2\,m_W^2}{v}\,\phi^0\, W^+W^- +~ \zeta_{\phi}\, \frac{m_Z^2}{v}\,\phi^0\, ZZ \;, \label{eq:scalarcouplings1} \end{eqnarray} where $\phi^0$ generically denotes $H^0_{_\text{SM}}$, $H^0$, $A^0$, and, \begin{eqnarray} \zeta_{_{H_\text{SM}}}=\cos\delta~,~~~\zeta_{H}=\sin\delta~,~~~\zeta_{A}=0. \label{eq:scalarcouplings2}\end{eqnarray} \begin{figure}[t] \includegraphics[width=0.55\textwidth]{plots/BRtop.pdf} \caption{\label{fig:topBR} Contours of $\text{Br}(t\rightarrow b\,\ensuremath{{H^+}})$ in a Type I 2HDM, as a function of $m_{H^\pm}$ and $\ensuremath{\rm tan \beta}$.} \end{figure} In Type I 2HDMs, unlike Type II, the couplings of $A^0$ and $H^\pm$ to fermions are suppressed by $\ensuremath{\rm tan \beta}$, as shown in (\ref{scalarYukawaL}) and (\ref{scalarYukawaScaling}), and so are the $H^0$ Yukawa couplings in the alignment limit $\delta\rightarrow 0$. As we are especially interested in $H^\pm$ production from top decays, we show in Fig.~\ref{fig:topBR} the branching ratio $\text{Br}(t\rightarrow b\,\ensuremath{{H^+}})$ as a function of $m_{H^\pm}$ and $\ensuremath{\rm tan \beta}$. Note that even at large $\ensuremath{\rm tan \beta} \sim \mathcal{O}(10)$, branching ratios of $\mathcal{O}(10^{-3})$ are possible, while at low $\ensuremath{\rm tan \beta} \lsim 3$, the top branching ratio to $H^\pm$ can reach the few percent level, $\text{Br}(t\rightarrow b\,\ensuremath{{H^+}})\sim\mathcal{O}(3\%)$ -- roughly the limit where tension might arise with measurements of the top pair cross section $\sigma_{t \bar t}$ \cite{Czakon:2011xx,Peters:2015kka,Andrea:2013qoa,Schilling:2012dx,ATLAS:2012dpa,ATLAS:2012fja,CMS:2012dya,CMS:2014gta,LHCtopWG}. \subsection{Charged Higgs Decays} The decay patterns of the charged Higgs can vary dramatically across the parameter space of 2HDMs, significantly impacting the experimental strategies to search for this state. \begin{figure}[t] \includegraphics[width=0.7\textwidth]{plots/BRHc.pdf} \caption{\label{fig:hplusdecays}Charged Higgs branching ratios in a Type I 2HDM, assuming $m_{A^0}=m_{H^0}=100$ GeV, $\ensuremath{\rm tan \beta}=3$, and $\delta=0$.} \end{figure} Due to the large popularity of Type II 2HDMs, the most thoroughly explored $H^\pm$ decays, both in phenomenological studies as well as in experimental efforts, have been the fermionic modes.
This is due to the $\ensuremath{\rm tan \beta}$ enhancement of the $H^\pm$ couplings to down-type quarks and leptons, causing the $\tau\nu$, $cs$ and $b\,t^*$ modes to dominate the $H^\pm$ branching ratios. ATLAS \cite{Aad:2014kga,Aaboud:2016dig} and CMS \cite{CMS:2014cdp,CMS-PAS-HIG-16-031} have extensively searched for signatures of $t\rightarrow b\,(H^+\rightarrow \tau^+\nu)$, and placed upper bounds on the overall top branching ratio to this final state at the sub-percent level. Analogous searches \cite{Khachatryan:2015uua,CMS-PAS-HIG-16-030} for the $H^+\rightarrow c\bar s,\,c\bar b$ channels have also set percent-level bounds on the corresponding branching ratios. The same constraints are applicable in Type I 2HDMs if the only kinematically open channels for $H^\pm$ decays are light fermions, which is the case if $m_{H^\pm}\lsim m_\ensuremath{{H^0}},m_\ensuremath{{A^0}}$. However, if the spectrum contains a lighter neutral scalar with a large $H_2$ component, such as $A^0$ or $H^0$, the mode $\ensuremath{{H^+}}\rightarrow W^{+(*)}\,A^0/H^0$ would naturally dominate the $H^\pm$ branching ratio, even with the 3-body phase-space suppression of an off-shell $W^{*}$. This is due to the unsuppressed gauge couplings that mediate this decay mode, and the large suppression of the competing fermionic modes, stemming from the smallness of the Yukawa couplings and from the $1/\ensuremath{\rm tan \beta}$ dependence of the $H^\pm$ coupling to fermions. This effect is illustrated in Fig.~\ref{fig:hplusdecays}, where we have set $m_{A^0}=m_{H^0}=100$ GeV, $\ensuremath{\rm tan \beta}=3$, and $\delta=0$. The suppression of fermionic modes is even more pronounced for larger values of $\ensuremath{\rm tan \beta}$, or lower masses of $A^0$ or $H^0$. This qualitatively different phenomenology of charged Higgs decays has been previously noted in phenomenological studies \cite{Djouadi:1995gv,Akeroyd:1998dt,Kling:2015uba,Coleppa:2014cca,Coleppa:2014hxa,Arhrib:2016wpw,Bechtle:2016kui}, but to the best of our knowledge, no dedicated experimental analysis has explicitly searched for these signatures. A critical question is then: could such a particle have contaminated studies of other SM or BSM processes? And if so, what constraints could existing searches place on these particular charged Higgs signals? \section{Signal yields and statistical procedure} \label{appx:stats} \subsection{Signal yields} As discussed in Appx.~\ref{sec:recast}, we use MC to obtain the signal efficiencies in each of the four signal regions defined by ATLAS, for both the on-shell $t\bar t$ process: \begin{equation} pp\rightarrow\bar t\,t \rightarrow \bar t\,b\, (H^+\rightarrow W^{+(*)}\, ( A^0\rightarrow \tau^+\tau^-))\,, \end{equation} and the non-resonant process in Fig.~\ref{fig:offshell} \begin{equation} pp\rightarrow \bar t\,b\, (H^+\rightarrow W^{+(*)}\, ( A^0\rightarrow \tau^+\tau^-))\,. \end{equation} We generically denote the respective efficiencies by $\epsilon_\text{res}$ and $\epsilon_\text{nonres}$.
The signal yield for a specific signal region is then given by: \begin{equation} S= S_\text{res} + S_\text{nonres}\,, \end{equation} where \begin{equation} S_\text{res} = \epsilon_\text{res}\times\left(\sigma_{t\bar t}\times\mathcal{L}\right)\times\left(2\,\text{Br}(t\rightarrow b \, H^+)\times\text{Br}(H^+\rightarrow W^{+(*)}\,A^0)\times\text{Br}(A^0\rightarrow \tau^+\tau^-)\right)\,, \end{equation} and \begin{equation} S_\text{nonres} = \epsilon_\text{nonres}\times\left(\frac{\sigma_{\text{nonres}}}{\ensuremath{\rm tan \beta}^2}\times\mathcal{L}\right)\times\left(\text{Br}(H^+\rightarrow W^{+(*)}\,A^0)\times\text{Br}(A^0\rightarrow \tau^+\tau^-)\right)\,. \end{equation} For the total cross sections, we use $\sigma_{t\bar t}\,\big|_{\text{13 TeV}}=830$~pb, and the following approximate fit for $\sigma_{\text{nonres}}$ extracted from \cite{Degrande:2016hyf}: \begin{equation} \sigma_{\text{nonres}}(m_{H^\pm})\,\Big|_{\text{13 TeV}} = \left(22.53 - 0.106\;\frac{m_{H^\pm}}{\text{GeV}} \right)\;\text{pb} \end{equation} in the range $m_{H^\pm}\sim[150-175]$~GeV. \subsection{Branching ratios} Throughout our study we set $\text{Br}(A^0\rightarrow \tau^+\tau^-)=0.083$. The top branching ratio to $b\,H^+$ is given by \cite{Gunion:1989we}: \begin{equation} \text{Br}(t\rightarrow b \, H^+)\,=\,\frac{ R_{H^+}}{\;1\,+\, R_{H^+}}\,, \end{equation} where \begin{eqnarray} R_{H^+}~&&=~\frac{\Gamma(t\rightarrow b \, H^+)}{\Gamma(t\rightarrow b \, W^+)} \nonumber\\ &&=~ \frac{p_{H^+}}{p_{W^+}}\,\frac{1}{\ensuremath{\rm tan \beta}^2}\,\frac{(m_t^2+m_b^2-m_{H^+}^2)(m_t^2+m_b^2)-4\,m_b^2\,m_t^2}{m_W^2\,(m_t^2+m_b^2-2\,m_W^2)+(m_t^2-m_b^2)^2}\,. \end{eqnarray} Above, $p_{H^+}$ is the momentum of $H^+$ in the top's rest frame, \begin{equation} p_{H^+}\;=~\frac{m_t}{2}\,\sqrt{\left(1-\frac{m_{H^+}^2}{m_t^2}\right)^2 -2\;\frac{m_b^2}{m_t^2}\,\left(1+\frac{m_{H^+}^2}{m_t^2}\right)+\;\frac{m_b^4}{m_t^4}\, }\;, \end{equation} and $p_{W^+}$ is defined in an analogous way. The branching ratios of the charged Higgs are computed from its various widths. For completeness, we list all of them below. The charged Higgs width to a pair of {\it light} on-shell SM fermions is given by: \begin{equation} \Gamma(H^+\rightarrow f \bar f^\prime) ~=~ \frac{G_F}{4\sqrt{2}\pi}\,\frac{m_{H^+}}{\ensuremath{\rm tan \beta}^2}\;N_c\,|U_{f\bar f^\prime}|^2\,\left( m_f^2\,+\,m^2_{\bar f^\prime} \right)\,, \end{equation} where $N_c=3$ and $U_{f\bar f^\prime}$ is an element of the CKM matrix if $f,\bar f^\prime$ are quarks, and $N_c=1$ and $U_{f\bar f^\prime}=1$ otherwise; $m_f$, $m_{\bar f^\prime}$ above are the {\it running masses} at $\mu=m_{H^+}$. Since we are interested in spectra where $m_{H^+}<m_t$, the width $\Gamma(H^+\rightarrow t^*\, \bar b)$ goes via an off-shell top quark, and is given by \cite{Djouadi:1995gv}: \begin{eqnarray} \Gamma(H^+\rightarrow W^+b\,\bar b)~=~&&\frac{3\,G_F^2\,m_t^4}{64\,\pi^3}\,\frac{m_{H^+}}{\ensuremath{\rm tan \beta}^2}\,\bigg(\frac{\kappa_W^2}{\kappa_t^3}\,(4\kappa_W\kappa_t+3\kappa_t-4\kappa_W)\,\text{log}\left(\frac{\kappa_W\,(\kappa_t-1)}{\kappa_t-\kappa_W}\right) \nonumber\\ &&+ ~(3\kappa_t^2-4\kappa_t-3\kappa_W^2+1)\,\text{log}\left(\frac{\kappa_t-1}{\kappa_t-\kappa_W}\right) -\frac{5}{2} \\ && +~\frac{1-\kappa_W}{\kappa_t^2}\,(3\kappa_t^3-\kappa_t\kappa_W-2\kappa_t\kappa_W^2+4\kappa_W^2) + \kappa_W\,(4-3\kappa_W/2 ) \bigg)\, , \nonumber \end{eqnarray} where $\kappa_t=m_t^2/m_{H^+}^2$, and $\kappa_W=m_W^2/m_{H^+}^2$. Above, we neglect the contribution from the bottom yukawa coupling. 
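For illustration, the short Python sketch below evaluates $\text{Br}(t\rightarrow b\,H^+)$ directly from the tree-level expressions above; the mass values are representative inputs chosen for this example only:
\begin{verbatim}
import math

M_T, M_B, M_W = 172.5, 4.18, 80.4   # GeV; representative mass inputs

def momentum(m_x, m_t=M_T, m_b=M_B):
    """Momentum of X in the top rest frame for t -> b X (two-body decay)."""
    r_x, r_b = (m_x / m_t) ** 2, (m_b / m_t) ** 2
    return 0.5 * m_t * math.sqrt((1.0 - r_x) ** 2
                                 - 2.0 * r_b * (1.0 + r_x) + r_b ** 2)

def br_top_to_bHp(m_hp, tan_beta):
    """Br(t -> b H+) in a Type I 2HDM, following R_{H+} defined above."""
    num = (M_T**2 + M_B**2 - m_hp**2) * (M_T**2 + M_B**2) \
          - 4.0 * M_B**2 * M_T**2
    den = M_W**2 * (M_T**2 + M_B**2 - 2.0 * M_W**2) + (M_T**2 - M_B**2)**2
    r = (momentum(m_hp) / momentum(M_W)) / tan_beta**2 * num / den
    return r / (1.0 + r)

# e.g. benchmark B1: m_H+ = 130 GeV at tan(beta) = 3
print(br_top_to_bHp(130.0, 3.0))   # roughly the few-percent level
\end{verbatim}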
The bosonic widths of the charged Higgs, $\Gamma(H^+\rightarrow W^{+(*)}\,\phi^0)$, depend on whether the final-state $W^+$ is on- or off-shell. In the former case, we have: \begin{equation} \Gamma(H^+\rightarrow W^{+}\,\phi^0)~=~(1-\zeta_\phi^2)\,\frac{G_F}{8\sqrt{2}\,\pi}\,\frac{m_W^4}{m_{H^+}}\,\sqrt{\lambda(m_{\phi^0}^2,m_W^2,m_{H^+}^2)}\,\lambda(m_{\phi^0}^2,m_{H^+}^2,m_W^2)\, , \end{equation} where $\zeta_\phi$ for $\phi^0 = \ensuremath{{H_{_\text{SM}}}},\, H^0,\,A^0$ are given in Sec.~\ref{sec:model}, and \begin{equation} \lambda(x,y,z)~=~\left(1-\frac{x}{z}-\frac{y}{z} \right)^2-4\,\frac{x\,y}{z^2}. \end{equation} On the other hand, if the $W^{+}$ is off-shell, we have \cite{Djouadi:1995gv}: \begin{equation} \Gamma(H^+\rightarrow W^{+*}\,\phi^0)~=~(1-\zeta_\phi^2)\;m_{H^+}\,\frac{9\, G_F^2\,m_W^4}{8\,\pi^3}\,G\left(\frac{m_{\phi^0}^2}{m_{H^+}^2},\frac{m_W^2}{m_{H^+}^2}\right)\,, \end{equation} where \begin{eqnarray} G(x,y)~&&=~\frac{1}{8}\,\Bigg( 2\,(-1-x+y)\,\sqrt{\lambda_G(x,y)}\,\bigg(\frac{\pi}{2}+\text{ArcTan}\bigg[\frac{y\,(1+x-y)-\lambda_G(x,y)}{(1-x)\sqrt{\lambda_G(x,y)}} \bigg] \bigg) \nonumber\\ &&+~\big(\lambda_G(x,y)-2x\big)\,\text{log}(x) + \frac{(1-x)}{3\,y}\,\big(5\,y\,(1+x)-4\,y^2+2\,\lambda_G(x,y) \big)\Bigg)\,, \end{eqnarray} and \begin{equation} \lambda_G(x,y)~=\, -1+2\,x+2\,y-(x-y)^2\,. \end{equation} \subsection{Statistical treatment of fits and exclusions} For each signal region in \cite{ATLAS-CONF-2016-058} with $N_\text{sr}$ observed events, $\mu_{B\,\text{sr}} \pm\Delta B_\text{sr}$ expected background events, and $S_\text{sr}$ expected signal events, we define the likelihood function: \begin{equation} \mathcal{L}_{\text{sr}}(S_\text{sr}, B_\text{sr})=P(N_\text{sr}\,|\,S_\text{sr}+B_\text{sr})\times G(B_\text{sr}\,|\,\mu_{B\,\text{sr}}\,,\,\Delta B_\text{sr})\,, \end{equation} where $P$ and $G$ are Poisson and Gaussian distributions, respectively. The likelihood for the combination of all four signal regions is given by the product of the individual likelihoods: \begin{equation} \mathcal{L}(\mu_S, \theta_B)=\prod_{\text{srj}}\mathcal{L}_{\text{srj}}(S_\text{srj},B_\text{srj})\,, \end{equation} where $\theta_B\, =\, (B_\text{sr1},\,B_\text{sr2},\,B_\text{sr3},\,B_\text{sr4})$, and $\mu_S=(m_{H^+}, m_{A^0}, \ensuremath{\rm tan \beta})$ uniquely specifies a point in the model parameter space, and unambiguously determines the signal yield $S_\text{srj}$ in each signal region. Since the correlations between background uncertainties in the four signal regions were not provided in \cite{ATLAS-CONF-2016-058}, we treat these uncertainties as uncorrelated. We note, however, that the post-fit backgrounds in \cite{ATLAS-CONF-2016-058} did not change substantially relative to their pre-fit counterparts. This, combined with the factor of 2 uncertainties in our MC modeling of signal efficiencies, leads us to expect that our results would not change substantially were we to properly include all background correlations in our fits. For obtaining limits, we use a profiled log-likelihood analysis. First, we define: \begin{equation} \lambda(\mu_S)~=~\text{log}\,\mathcal{L}(\mu_S, \hat{\hat{\theta}}_B) - \text{log}\,\mathcal{L}(\hat{\mu}_S, {\hat{\theta}}_B)\,, \end{equation} where the unconditional likelihood estimators $\hat{\mu}_S, \,{\hat{\theta}}_B$ maximize the global $\text{log}\,\mathcal{L}(\mu_S, \theta_B)$, and the conditional estimator $\hat{\hat{\theta}}_B$ maximizes $\text{log}\,\mathcal{L}(\mu_{S}, \theta_B)$ for a given $\mu_{S}$.
Since there are $3$ independent degrees of freedom in ${\mu}_S$, namely, $m_{H^+}$, $m_{A^0}$, and $\ensuremath{\rm tan \beta}$, the $p$-value for a specific model defined by ${\mu}_S$ is determined by: \begin{equation} p~=~1-\text{CDF}(\chi^2_3,\,-2\,\lambda(\mu_S)), \end{equation} where $\text{CDF}(\chi^2_3,\,-2\,\lambda(\mu_S))$ is the cumulative distribution function for a $\chi^2$-distribution with 3 degrees of freedom, evaluated at $-2\,\lambda(\mu_S)$. From the $p$-value we can determine the exclusion confidence level for all points in the studied parameter space. Finally, we note that when finding the goodness of fit of a given mass point $\mu^\prime_S=(m_{H^+}, m_{A^0})$, we profile over the value of $\ensuremath{\rm tan \beta}$ that maximizes the log-likelihood, i.e., we use \begin{equation} \lambda(\mu^\prime_S)~=~\text{log}\,\mathcal{L}(\mu^\prime_S,\, \text{tan}\hat{\hat{\beta}},\, \hat{\hat{\theta}}_B) - \text{log}\,\mathcal{L}(\hat{\mu}^\prime_S,\, \text{tan}{\hat{\beta}},\, {\hat{\theta}}_B)\,. \end{equation} In this case, we define the $p$-value in an analogous way, \begin{equation} p~=~1-\text{CDF}(\chi^2_2,\,-2\,\lambda(\mu^\prime_S)), \end{equation} but instead use a $\chi^2$-distribution with only 2 degrees of freedom.
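For illustration, a minimal Python sketch of this statistical procedure is given below, using \texttt{scipy}. The per-region likelihood follows the Poisson$\times$Gaussian form above, with the background nuisance profiled numerically. As a simplification, the unconditional maximum is approximated by letting each region's signal yield float independently, rather than profiling over $(m_{H^+}, m_{A^0}, \ensuremath{\rm tan \beta})$ as in our actual fits, and the observed counts and backgrounds in the usage example are placeholder inputs, not the ATLAS numbers.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2, norm, poisson

def profiled_loglik(s, n_obs, mu_b, delta_b):
    """log L_sr(S, B) maximized over the background nuisance B,
    for the Poisson x Gaussian likelihood defined above."""
    def nll(b):
        return -(poisson.logpmf(n_obs, max(s + b, 1e-12))
                 + norm.logpdf(b, mu_b, delta_b))
    res = minimize_scalar(nll, method="bounded",
                          bounds=(max(1e-9, mu_b - 5.0 * delta_b),
                                  mu_b + 5.0 * delta_b))
    return -res.fun

def p_value(signals, regions, dof=3):
    """Combined p-value for model yields `signals`; `regions` holds
    (N_obs, mu_B, Delta_B) per signal region. The best-fit reference
    here lets each S float freely (an approximation)."""
    ll_model = sum(profiled_loglik(s, *r) for s, r in zip(signals, regions))
    ll_best = sum(profiled_loglik(max(n - mu_b, 0.0), n, mu_b, db)
                  for n, mu_b, db in regions)
    return 1.0 - chi2.cdf(-2.0 * (ll_model - ll_best), dof)

# Placeholder inputs, for illustration only:
print(p_value([2.0, 1.5, 3.0, 0.4],
              [(9, 5.0, 1.5), (5, 2.0, 0.8),
               (14, 10.0, 2.5), (2, 1.0, 0.5)]))
\end{verbatim}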
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \subsection{Motivation} The last decade has witnessed a surge of interest in wireless local area networks (WLAN), where mobile stations share a common wireless medium through contention-based medium access control (MAC). In WLANs, packet collisions occur when more than one station transmits at the same time, causing a waste of bandwidth. Recent advances in multiuser detection (MUD) techniques \cite{Verdu:98} open up new opportunities for resolving collisions in the physical (PHY) layer. For example, in CDMA \cite{Rapajic:99} or multiple-antenna \cite{Telatar:99} systems, multiple packets can be received simultaneously using MUD techniques without collisions. It is expected that, with improved multipacket reception (MPR) capability from the PHY layer, the MAC layer will behave differently from what is commonly believed. In particular, to fully utilize the MPR capability for capacity enhancement in WLAN, it is essential to understand the fundamental impact of MPR on the MAC-layer design. As such, this paper is an attempt to study the MAC-layer throughput performance and the collision resolution schemes for WLANs with MPR. \subsection{Key Contributions} The key contributions of this paper are as follows: \begin{itemize} \item {To demonstrate MPR as a powerful capacity-enhancement technique at the system level, we analyze the MAC-layer throughput of WLANs with MPR capability under both finite-node and infinite-node assumptions. Our model is sufficiently general to cover both carrier-sensing and non-carrier-sensing networks. We prove that in random-access WLANs, network throughput increases super-linearly with the MPR capability of the channel. That is, throughput divided by \emph{M} increases as \emph{M} increases, where \emph{M} is the number of packets that can be resolved simultaneously. The super-linear throughput scaling implies that the achievable throughput per unit cost increases with the MPR capability of the channel. This provides a strong incentive to deploy MPR in next-generation wireless networks.} \item {We study the effect of MPR on the MAC-layer collision resolution scheme, namely exponential backoff (EB). When packets collide in WLANs, an EB scheme is used to schedule the retransmissions, in which the waiting time of the next retransmission will get multiplicatively longer for each collision incurred. In the commonly adopted binary exponential backoff (BEB) scheme (e.g., used in Ethernet \cite{Ansi}, WiFi \cite{WLAN}, etc.), the multiplicative factor (the backoff factor) is equal to 2. We show in this paper that the widely used BEB does not necessarily yield close-to-optimal network throughput with the improved MPR capability from the PHY layer. As a matter of fact, BEB is far from optimum for both non-carrier-sensing networks and carrier-sensing networks operated in basic access mode. The optimal backoff factor increases with the MPR capability. Meanwhile, BEB is close to optimum for carrier-sensing networks when RTS/CTS access mode is adopted.} \item {Building on the theoretical underpinnings established above, we propose a practical protocol to fully exploit the MPR capability in IEEE 802.11-like WLANs. In contrast to \cite{Zhao:03}-\cite{Zhao:04}, we consider not only the MAC-layer protocol design, but also the PHY-layer signal processing to enable MPR in distributed random-access WLANs.
As a result, the proposed protocol can be implemented in a fully distributed manner with marginal modification of the current IEEE 802.11 MAC.} \end{itemize} \subsection{Related Work on MPR and Collision Resolution Schemes} The first attempt to model a general MPR channel in random-access wireless networks was made by Ghez, Verdu, and Schwartz in \cite{Ghez:88}-\cite{Ghez:89} in 1988 and 1989, respectively, in which the stability properties of conventional slotted ALOHA with MPR were studied under a simple infinite-user and single-buffer assumption. No collision resolution scheme (such as EB) was considered therein. This work was extended to CSMA systems by Chan et al. in \cite{Chan:04} and to finite-user ALOHA systems by Naware et al. in \cite{Naware:05}. It has been shown in \cite{Ghez:88}-\cite{Naware:05} that MPR improves the stable throughput of ALOHA only when the MPR capability is comparable to the number of users in the system. In practical networks where the MPR capability is much smaller than the number of users, the stable throughput of conventional ALOHA is equal to 0, the same as the case without MPR. To date, little work has been done to investigate the throughput-enhancing capability of MPR in practical WLANs with collision resolution schemes. Our paper is an attempt along this direction. Protocols that exploit the MPR capability of networks have been studied by Zhao and Tong in \cite{Zhao:03}-\cite{Zhao:04}. In \cite{Zhao:03}, a multi-queue service room (MQSR) MAC protocol was proposed for networks with heterogeneous users. The drawback of the MQSR protocol is its high computational cost due to updates of the joint distribution of all users' states. To reduce complexity, a suboptimal dynamic queue protocol was proposed in \cite{Zhao:04}. In both protocols, access to the common wireless channel is controlled by a central controller, which grants access to the channel to an appropriate subset of users at the beginning of each slot. In \cite{Chan:05}, Chan et al. proposed to add a MUD layer to facilitate MPR in IEEE 802.11 WLAN. To implement the MUD techniques mentioned as examples in \cite{Chan:05}, the AP is assumed to have perfect knowledge of the number of concurrent transmissions, the identities of the transmitting stations, and the channel coefficients. Such information, while easy to obtain in a network with centralized scheduling (e.g., cellular systems), is unknown to the AP a priori in random-access networks. Moreover, the preambles of concurrent packets overlap, and hence it is difficult for the AP to obtain a good estimate of the channel coefficients with the current protocol. By contrast, our paper provides a solution to this issue by incorporating blind signal processing in the proposed protocol. Exponential backoff (EB) as a collision resolution technique has been extensively studied in different contexts \cite{Goodman:88}-\cite{Kwak:05}. A stability upper bound for BEB was given by Goodman under a finite-node model in \cite{Goodman:88} and recently improved by Al-Ammal in \cite{Ammal:01}. The throughput and delay characteristics of a slightly modified EB scheme were studied in \cite{Jeong:95} in the context of slotted ALOHA. The characteristics of EB in steady state were further investigated in \cite{Kwak:05} in time-slotted wireless networks with equal slot length. All the existing work on EB has assumed that the wireless channel can only accommodate one ongoing transmission at a time. This paper is a first attempt to look at EB for an MPR system.
The remainder of this paper is organized as follows. In Section II, we describe the system model and introduce the background knowledge on MUD and EB. In Section III, we prove that the maximum achievable throughput of an MPR WLAN scales super-linearly with the MPR capability of the channel. In Section IV, the effect of MPR on EB is investigated. We show that the widely used BEB scheme is no longer close-to-optimal in MPR networks. To realize MPR in IEEE 802.11 WLANs, a MAC-PHY protocol is presented in Section V. In Section VI, we discuss some practical issues related to MPR. Finally, Section VII concludes this paper. \section{Preliminary and System Model} \subsection{System Description} We consider a fully connected infrastructure WLAN where \emph{N} infinitely backlogged mobile stations communicate with an access point (AP). We assume that the time axis is divided into slots and packet transmissions start only at the beginning of a slot. In addition, after each transmission, the transmitting stations have a means to discover the result of the transmission, i.e., success or failure. If the transmission fails due to collision, the colliding stations will schedule retransmissions according to a collision resolution scheme (e.g., EB). We assume that the channel has the capability to accommodate up to \emph{M} simultaneous transmissions. In other words, packets can be received correctly whenever the number of simultaneous transmissions is no larger than \emph{M}. When more than \emph{M} stations contend for the channel at the same time, a collision occurs and no packet can be decoded. We refer to \emph{M} as the MPR capability. In our model, the length of a time slot is not necessarily fixed and may vary under different contexts \cite{Bianchi:00}. We refer to this variable-length slot as a backoff slot hereafter. In WLANs, the length of a backoff slot depends on the contention outcome (hereafter referred to as the channel status). Let $T_i$ denote the length of an idle time slot when nobody transmits; $T_c$ denote the length of a collision time slot when more than \emph{M} stations contend for the channel; and $T_s$ denote the length of a time slot due to successful transmission when the number of transmitting stations is anywhere from 1 to \emph{M}. The durations of $T_i$, $T_c$, and $T_s$ depend on the underlying WLAN configuration. For non-carrier-sensing networks such as slotted ALOHA, the stations are not aware of the channel status and the durations of all backoff slots are equal to the transmission time of a packet. That is, \begin{equation}\label{eqn:1} T_{slot}=T_i=T_c=T_s=L/R \end{equation} where \emph{L} is the packet size and \emph{R} is the data transmission rate of a station. On the other hand, for carrier-sensing networks, stations can distinguish between various types of channel status and the durations of different types of slots may not be the same. For example, in IEEE 802.11 DCF basic access mode, \begin{eqnarray}\label{eqn:2} T_i&=&\sigma \nonumber \\ T_s&=&H+L/R+SIFS+\delta+ACK+DIFS+\delta \nonumber\\ T_c&=&H+L/R+DIFS+\delta \end{eqnarray} where $\sigma$ is the time needed for a station to detect the packet transmission from any other station and is typically much smaller than $T_c$ and $T_s$; $H$ is the transmission time of the PHY header and MAC header; ACK is the transmission time of an ACK packet; $\delta$ is the propagation delay; and SIFS and DIFS are the inter-frame space durations \cite{WLAN}.
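As a quick numerical illustration of eqn.~(\ref{eqn:2}), the sketch below evaluates the basic-access slot durations using IEEE 802.11g-style parameters of the kind listed later in Table I. The propagation delay $\delta$ and the relation $DIFS=SIFS+2\sigma$ are assumptions made here for illustration only, not values taken from the paper.
\begin{verbatim}
# Minimal sketch of the basic-access slot durations in eqn. (2).
# DIFS = SIFS + 2*sigma and delta are assumptions for illustration.
L = 8184            # payload, bits
MAC = 272           # MAC header, bits
PHY = 26e-6         # PHY overhead, s
ACK_BITS = 112      # ACK body, bits
R_BASIC = 6e6       # basic rate, bit/s
R_DATA = 54e6       # data rate, bit/s
sigma = 9e-6        # idle slot time, s
SIFS = 10e-6
DIFS = SIFS + 2 * sigma   # assumed 802.11 relation
delta = 1e-6              # assumed propagation delay

H = PHY + MAC / R_DATA          # PHY + MAC header airtime
ACK = PHY + ACK_BITS / R_BASIC  # ACK airtime at the basic rate

T_i = sigma
T_s = H + L / R_DATA + SIFS + delta + ACK + DIFS + delta
T_c = H + L / R_DATA + DIFS + delta
print(T_i, T_s, T_c)
\end{verbatim}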
Similarly, in the IEEE 802.11 DCF request-to-send/clear-to-send (RTS/CTS) access scheme, the slot durations are given by \begin{eqnarray}\label{eqn:3} T_i&=&\sigma \nonumber \\ T_s&=&RTS+SIFS+\delta+CTS+SIFS+\delta \nonumber\\ &&+H+L/R+SIFS+\delta+ACK+DIFS+\delta \nonumber\\ T_c&=&RTS+DIFS+\delta \end{eqnarray} where $RTS$ and $CTS$ denote the transmission times of the RTS and CTS packets, respectively. By allowing the durations of $T_i$, $T_c$, and $T_s$ to vary according to the underlying system, the analysis of this paper applies to a wide spectrum of WLANs, including both non-carrier-sensing and carrier-sensing networks. \subsection{Multiuser Detection} This subsection briefly introduces the PHY-layer MUD techniques used to decode multiple packets at the receiver. Let $x_k(n)$ denote the data symbol transmitted by user \emph{k} in symbol duration \emph{n}. If there are \emph{K} stations transmitting together, then the received signal at a receiver is given by \begin{eqnarray}\label{eqn:4} \mathbf{y}(n)&=&\sum_{k=1}^K\mathbf{h}_k(n)x_k(n)+\mathbf{w}(n)\nonumber\\ &=&\mathbf{H}(n)\mathbf{x}(n)+\mathbf{w}(n) \end{eqnarray} where $\mathbf{w}(n)$ denotes the additive noise, $\mathbf{H}(n)=[\mathbf{h}_1(n), \mathbf{h}_2(n),\cdots,\mathbf{h}_K(n)]$, and $\mathbf{x}(n)=[x_1(n),\cdots,x_K(n)]^T$. In multiple-antenna systems, $\mathbf{h}_k$ is the channel vector, with the $m^{th}$ element being the channel coefficient from user \emph{k} to the $m^{th}$ receive antenna.\footnote{In this paper, we assume that each station only transmits one data stream at a time.} In CDMA systems, the vector $\mathbf{h}_k$ is the product of the spreading sequence of user \emph{k} and the channel coefficient from user \emph{k} to the AP. The receiver attempts to obtain an estimate of the transmitted symbols $\mathbf{x}(n)$ from the received vector $\mathbf{y}(n)$. To this end, various MUD techniques have been proposed in the literature. For example, the zero-forcing (ZF) receiver is one of the most popular linear detectors. It multiplies the received vector by the pseudo-inverse of the matrix $\mathbf{H}(n)$, denoted by $\mathbf{H}^\texttt{+}(n)$, and the decision statistics become \begin{eqnarray}\label{eqn:5} \mathbf{r}^{ZF}(n)&=&\mathbf{H}^\texttt{+}(n)\mathbf{y}(n)\nonumber\\ &=&\mathbf{x}(n)+\mathbf{H}^\texttt{+}(n)\mathbf{w}(n). \end{eqnarray} The minimum-mean-square-error (MMSE) receiver is the optimal linear detector in the sense of maximizing the signal-to-interference-and-noise ratio (SINR). The decision statistics are calculated as \begin{equation}\label{eqn:6} \mathbf{r}^{MMSE}(n)=(\mathbf{H}^H(n)\mathbf{H}(n)+\eta\mathbf{I})^{-1}\mathbf{H}^H(n)\mathbf{y}(n) \end{equation} where $\mathbf{I}$ is the identity matrix, and $\eta$ is the variance of the additive noise. Given the decision statistics, an estimate of $x_k(n)$ can be obtained by feeding the $k^{th}$ element of $\mathbf{r}^{ZF}(n)$ or $\mathbf{r}^{MMSE}(n)$ into a quantizer. Other MUD techniques include maximum-likelihood (ML) detection, parallel interference cancellation (PIC), successive interference cancellation (SIC), etc. Interested readers are referred to \cite{Verdu:98} for more details. \subsection{Exponential Backoff} EB adaptively tunes the transmission probability of a station according to the traffic intensity of the network. It works as follows. A backlogged station sets its backoff timer by randomly choosing an integer within the range $[0,W-1]$, where \emph{W} denotes the size of the contention window.
The backoff timer is decreased by one following each backoff slot. The station transmits a packet in its queue once the backoff timer reaches zero. At the first transmission attempt of a packet, $W=W_0$, referred to as the minimum contention window. Each time the transmission is unsuccessful, \emph{W} is multiplied by a backoff factor \emph{r}. That is, the contention window size is $W_i=r^iW_0$ after \emph{i} successive transmission failures. \section{Super-Linear Throughput Scaling in WLANs with MPR} This section investigates the impact of MPR on the throughput of random-access WLANs. In particular, we prove that the maximum achievable throughput scales super-linearly with the MPR capability $M$. In practical systems, $M$ is directly related to the cost (e.g., bandwidth in CDMA systems or antennas in multi-antenna systems). Super-linear scaling of throughput implies that the achievable throughput \emph{per unit cost} increases with $M$. This provides a strong incentive to consider MPR in next-generation wireless networks. As mentioned earlier, the transmissions of stations are dictated by the underlying EB scheme. To capture the fundamentally achievable throughput of the system, the following analysis assumes that each station transmits with probability $p_t$ in an arbitrary slot, without regard to how $p_t$ is achieved. This assumption will be made rigorous in Section IV, which relates $p_t$ to EB parameters such as $r$ and $W_0$. \subsection{Throughput of WLANs with MPR} Define throughput to be the average number of information bits transmitted successfully per second. Let $S_N(M,p_t)$ denote the throughput of a WLAN with $N$ stations when each station transmits with probability $p_t$ and the MPR capability is $M$. Then, $S_N(M,p_t)$ can be calculated as the ratio of the average number of payload information bits transmitted per backoff slot to the average length of a backoff slot, as follows. \begin{equation}\label{eqn:7} S_N(M,p_t)=\frac{\sum_{k=1}^Mk\Pr\{X=k\}L}{P_{idle}T_i+P_{coll}T_c+P_{succ}T_s} \end{equation} In the above, \emph{X} is a random variable denoting the number of attempts in a slot: \begin{equation}\label{eqn:8} \Pr\{X=k\}=\binom{N}{k}p_t^k(1-p_t)^{N-k}. \end{equation} Let \begin{equation}\label{eqn:9} P_{idle}=(1-p_t)^N \end{equation} be the probability that a backoff slot is idle; \begin{equation}\label{eqn:10} P_{succ}=\sum_{k=1}^M\Pr\{X=k\}=\sum_{k=1}^M\binom{N}{k}p_t^k(1-p_t)^{N-k} \end{equation} be the probability that a backoff slot is busy due to successful packet transmissions; and \begin{equation}\label{eqn:11} P_{coll}=\sum_{k=M+1}^N\Pr\{X=k\}=\sum_{k=M+1}^N\binom{N}{k}p_t^k(1-p_t)^{N-k} \end{equation} be the probability that a backoff slot is busy due to collision of packets. The throughput of non-carrier-sensing networks such as slotted ALOHA can be obtained by substituting (\ref{eqn:1}) into (\ref{eqn:7}), which leads to the following expression: \begin{eqnarray}\label{eqn:12} S_N(M,p_t)&=&\frac{\sum_{k=1}^Mk\Pr\{X=k\}L}{T_{slot}}\nonumber\\ &=&R\sum_{k=1}^Mk\binom{N}{k}p_t^k(1-p_t)^{N-k} \end{eqnarray} Similarly, the throughput of carrier-sensing networks, such as IEEE 802.11 DCF basic-access mode and RTS/CTS access mode, can be obtained by substituting (\ref{eqn:2}) and (\ref{eqn:3}) into (\ref{eqn:7}), respectively. We now derive the asymptotic throughput when the population size \emph{N} approaches infinity.
In this case, we assume that (i) the system has a non-zero asymptotic throughput; and (ii) the number of attempts in a backoff slot is approximated by a Poisson distribution with average attempt rate $\lambda=Np_t$ \cite[pp. 258]{DeGroot}. Both of these assumptions are valid under an appropriate EB scheme, as will be elaborated in Section IV. Let $S_{\infty}(M,\lambda)$ be the asymptotic throughput when the MPR capability is \emph{M} and the average attempt rate is $\lambda$. Then, we derive from (\ref{eqn:7}) that \begin{eqnarray}\label{eqn:13} S_{\infty}(M,\lambda)&=&\lim_{N\rightarrow\infty}S_N\nonumber\\ &=&\frac{L\sum_{k=1}^Mk\Pr\{X=k\}}{P_{idle}T_i+P_{coll}T_c+P_{succ}T_s}\nonumber\\ &=&\frac{L\sum_{k=1}^Mk\frac{\lambda^k}{k!}e^{-\lambda}}{P_{idle}T_i+P_{coll}T_c+P_{succ}T_s}\nonumber\\ &=&\frac{L\lambda\sum_{k=0}^{M-1}\frac{\lambda^k}{k!}e^{-\lambda}}{P_{idle}T_i+P_{coll}T_c+P_{succ}T_s}\nonumber\\ &=&\frac{L\lambda\Pr\{X\leq M-1\}}{P_{idle}T_i+P_{coll}T_c+P_{succ}T_s} \end{eqnarray} where the third equality is due to the Poisson approximation. In particular, when $T_{slot}=T_i=T_c=T_s=L/R$, \begin{eqnarray}\label{eqn:14} S_{\infty}(M,\lambda)&=&R\sum_{k=0}^{M-1}\frac{\lambda^{k+1}}{k!}e^{-\lambda}\nonumber\\ &=& R\lambda\Pr\{X \leq M-1\} \end{eqnarray} \subsection{Super-Linear Throughput Scaling} Having derived the throughput expressions for both the finite-population and infinite-population models, we now address the question: how does the throughput scale as \emph{M} increases? In particular, we are interested in the behavior of the maximum throughput when the channel has an MPR capability of \emph{M}. This directly relates to the channel-access efficiency that is achievable in MPR networks. Given \emph{M}, the maximum throughput can be achieved by optimizing the transmission probability $p_t$ (or equivalently $\lambda$ in the infinite-population model). The optimal transmission probability can in turn be obtained by adjusting the backoff factor \emph{r} in practical WLANs, as will be discussed in Section IV. Let $S_N^*(M)=S_N(M,p_t^*(M))$ and $S_{\infty}^*(M)=S_{\infty}(M,\lambda^*(M))$ denote the maximum achievable throughputs, where $p_t^*(M)$ and $\lambda^*(M)$ denote the optimal $p_t$ and $\lambda$ when the MPR capability is \emph{M}, respectively. In Theorem 1, we prove that the throughput scales super-linearly with \emph{M} in a non-carrier-sensing network with infinite population. In other words, $S_{\infty}^*(M)/M$ is an increasing function of \emph{M}. In Theorem 2, we further prove that $S_{\infty}^*(M)/MR$ approaches 1 as $M\rightarrow\infty$. This implies that the throughput penalty due to distributed random access diminishes when \emph{M} is very large. In Theorem 3 in Appendix I, we prove that the same super-linearity holds for WLANs with finite population. \begin{theorem} \label{theorem:1} \emph{(Super-Linearity)} $S_{\infty}^*(M)/M$ is an increasing function of \emph{M}.
It is obvious that at the optimal $\lambda^*(M)$, \begin{eqnarray}\label{eqn:15} \frac{\partial S_{\infty}(M,\lambda)}{\partial\lambda}\bigg|_{\lambda=\lambda^*(M)}&=&R\sum_{k=0}^{M-1}\frac{(k+1)(\lambda^*(M))^k}{k!}e^{-\lambda^*(M)}\nonumber\\ &&-R\sum_{k=0}^{M-1}\frac{(\lambda^*(M))^{(k+1)}}{k!}e^{-\lambda^*(M)}\nonumber\\ &=&0. \end{eqnarray} Consequently, \begin{equation}\label{eqn:16} \sum_{k=0}^{M-1}\frac{(\lambda^*(M))^k}{k!}e^{-\lambda^*(M)}=\frac{(\lambda^*(M))^M}{(M-1)!}e^{-\lambda^*(M)}, \end{equation} or \begin{equation}\label{eqn:17} \Pr\{X \leq M-1\}\bigg|_{\lambda=\lambda^*(M)}=M\Pr\{X=M\}\bigg|_{\lambda=\lambda^*(M)}. \end{equation} To prove Theorem 1, we show in the following that $S_{\infty}^*(M+1)/(M+1) \geq S_{\infty}^*(M)/M$ for all \emph{M}. \begin{eqnarray}\label{eqn:18} &&S_{\infty}^*(M+1)=S_{\infty}(M+1,\lambda^*(M+1)) \nonumber\\ &\geq& S_{\infty}(M+1,\lambda^*(M))\nonumber\\ &=& R\sum_{k=0}^{M-1}\frac{\lambda^*(M)^{k+1}}{k!}e^{-\lambda^*(M)}\nonumber\\ &&+R\frac{\lambda^*(M)^{M+1}}{M!}e^{-\lambda^*(M)}\nonumber\\ &=&S_{\infty}(M,\lambda^*(M))+R\lambda^*(M)\Pr\{X=M\}\bigg|_{\lambda=\lambda^*(M)}\nonumber\\ &=&\frac{M+1}{M}S_{\infty}^*(M) \end{eqnarray} where the last equality is due to (\ref{eqn:14}) and (\ref{eqn:17}). Therefore, we have \begin{eqnarray} \frac{S_{\infty}^*(M+1)}{M+1} \geq \frac{S_{\infty}^*(M)}{M}\; \forall M. \nonumber \end{eqnarray} \begin{flushright} $\Box$ \end{flushright} \end{theorem} It is obvious that in a WLAN with MPR capability $M$, the maximum possible throughput is $MR$ under perfect scheduling. In practical random-access WLANs, the actual throughput is always smaller than $MR$, due to the throughput penalty resulting from packet collisions and idle slots. For example, the maximum throughput is well known to be $Re^{-1}$ when $M=1$. Theorem 2 proves that the throughput penalty diminishes as $M$ becomes large. That is, the maximum throughput approaches $MR$ even though the channel access is based on random contentions. \begin{theorem} \label{theorem:2} \emph{(Asymptotic channel-access efficiency)} $\lim_{M\rightarrow\infty}S_{\infty}^*(M)\big/MR=1$. Before proving Theorem 2, we present the following two lemmas. \begin{lemma} (a) $\lim_{M\rightarrow\infty}S_\infty(M,\lambda)\big/\lambda R=1$ when the attempt rate is $\lambda=cM$ for a constant $c<1$; (b) $\lim_{M\rightarrow\infty}S_\infty(M,\lambda)\big/\lambda R=0$ when $\lambda=cM$ for a constant $c>1$; (c) $\lim_{M\rightarrow\infty}S_\infty(M,\lambda)\big/\lambda R=0.5$ when $\lambda=M$. \end{lemma} \emph{Proof of Lemma 1(a):} \begin{eqnarray}\label{eqn:19} S_{\infty}(M,\lambda)&=& R\lambda\Pr\{X \leq M-1\}\nonumber\\ &=& R\lambda\big(1-\sum_{k=M}^{\infty}\frac{\lambda^k}{k!}e^{-\lambda}\big)\nonumber\\ &\geq&R\lambda\big(1-z^{-M}\sum_{k=M}^{\infty}\frac{(\lambda z)^k}{k!}e^{-\lambda}\big)\nonumber\\ &\geq&R\lambda\big(1-z^{-M}e^{\lambda(z-1)}\big)\; \forall z>1 \end{eqnarray} Let $f(z)=R\lambda\big(1-z^{-M}e^{\lambda (z-1)}\big)$ be this lower bound of $S_{\infty}(M,\lambda)$. By solving \begin{equation}\label{eqn:20} \frac{\partial f(z)}{\partial z}=R\lambda\big(Mz^{-M-1}e^{\lambda(z-1)}-\lambda z^{-M}e^{\lambda(z-1)}\big)=0 \end{equation} it can be easily found that $z^*=M\big/\lambda$ maximizes $f(z)$, and \begin{equation}\label{eqn:21} \frac{f(z^*)}{\lambda R}=1-\bigg(\frac{\lambda}{M}\bigg)^Me^{M(1-\frac{\lambda}{M})}. \end{equation} Note that $z^*>1$ because $\lambda<M$. Writing $\lambda=cM$ with $c<1$, eqn.
(\ref{eqn:21}) can be written as \begin{equation}\label{eqn:22} \frac{f(z^*)}{\lambda R}=1-\big(ce^{1-c}\big)^M. \end{equation} It is obvious that \begin{equation}\label{eqn:23} ce^{1-c}<1 \;\forall c\neq 1. \end{equation} Therefore, \begin{eqnarray}\label{eqn:24} \lim_{M\rightarrow\infty}\frac{S_{\infty}(M,\lambda)}{\lambda R}&\geq&\lim_{M\rightarrow\infty}\frac{f(z^*)}{\lambda R}\nonumber\\ &=&\lim_{M\rightarrow\infty}\bigg(1-\big(ce^{1-c}\big)^M\bigg)\nonumber\\ &=&1. \end{eqnarray} On the other hand, the first equality of (\ref{eqn:19}) implies \begin{equation}\label{eqn:25} \frac{S_\infty (M,\lambda)}{\lambda R}\leq 1. \end{equation} Combining (\ref{eqn:24}) and (\ref{eqn:25}), we have \begin{equation}\label{eqn:26} \lim_{M\rightarrow\infty}\frac{S_\infty (M,\lambda)}{\lambda R}= 1, \end{equation} and Lemma 1(a) follows. \begin{flushright} $\Box$ \end{flushright} \emph{Proof of Lemma 1(b)}: \begin{eqnarray}\label{eqn:B1} S_\infty(M,\lambda)&=&R\lambda\Pr\{X\leq M-1\}=R\lambda\sum_{k=0}^{M-1}\frac{\lambda^k}{k!}e^{-\lambda}\nonumber\\ &\leq&R\lambda z^{-M}\sum_{k=0}^{M-1}\frac{(\lambda z)^k}{k!}e^{-\lambda}\nonumber\\ &\leq&R\lambda z^{-M}\sum_{k=0}^\infty\frac{(\lambda z)^k}{k!}e^{-\lambda}\nonumber\\ &=&R\lambda z^{-M}e^{\lambda(z-1)}\;\forall z<1. \end{eqnarray} Let $g(z)=R\lambda z^{-M}e^{\lambda(z-1)}$ be this upper bound of $S_\infty(M,\lambda)$. By solving \begin{equation}\label{eqn:B2} \frac{\partial g(z)}{\partial z}=R\lambda\big(-Mz^{-M-1}e^{\lambda(z-1)}+\lambda z^{-M}e^{\lambda(z-1)}\big)=0 \end{equation} it can be easily found that $z^*=M/\lambda$ minimizes $g(z)$, and \begin{equation}\label{eqn:B3} \frac{g(z^*)}{R\lambda}=\bigg(\frac{\lambda}{M}\bigg)^Me^{M(1-\frac{\lambda}{M})}. \end{equation} Note that $z^*<1$ because $\lambda>M$. Writing $\lambda=cM$ with $c>1$, eqn. (\ref{eqn:B3}) can be written as \begin{equation}\label{eqn:B4} \frac{g(z^*)}{R\lambda}=\big(ce^{1-c}\big)^M. \end{equation} Due to eqn. (\ref{eqn:23}), \begin{eqnarray}\label{eqn:B5} \lim_{M\rightarrow\infty}\frac{S_\infty(M,\lambda)}{R\lambda}&\leq& \lim_{M\rightarrow\infty}\frac{g(z^*)}{R\lambda}\nonumber\\ &=&\lim_{M\rightarrow\infty}\big(ce^{1-c}\big)^M=0. \end{eqnarray} On the other hand, it is obvious that \begin{equation}\label{eqn:B6} \frac{S_\infty(M,\lambda)}{R\lambda}\geq0. \end{equation} Combining (\ref{eqn:B5}) and (\ref{eqn:B6}), we have \begin{equation}\label{eqn:B7} \lim_{M\rightarrow\infty}\frac{S_\infty(M,\lambda)}{R\lambda}=0, \end{equation} and Lemma 1(b) follows. \begin{flushright} $\Box$ \end{flushright} \emph{Proof of Lemma 1(c)}: To prove Lemma 1(c), we note that the median of the Poisson distribution is bounded as follows \cite{Choi:94}-\cite{Chen:86}: \begin{equation}\label{eqn:B8} \lambda-\log 2\leq \mathrm{median} \leq \lambda+1/3. \end{equation} When $\lambda=M$ and $M\rightarrow\infty$, the median approaches $M$. According to the first equality of (\ref{eqn:14}), \begin{eqnarray}\label{eqn:B9} \lim_{M\rightarrow\infty}\frac{S_\infty(M,\lambda)}{R\lambda}&=&\lim_{M\rightarrow\infty}\Pr\{X\leq M-1\} \nonumber\\ &\approx &\lim_{M\rightarrow\infty}\Pr\{X\leq M\} =0.5 \end{eqnarray} \begin{flushright} $\Box$ \end{flushright} \begin{lemma} The optimal attempt rate satisfies $\lambda^*(M)<M$ and $\lim_{M\rightarrow\infty}\lambda^*(M)\big/M=1$. \end{lemma} \emph{Proof of Lemma 2:} The mode of the Poisson distribution is equal to $\lfloor\lambda\rfloor$, where $\lfloor\cdot\rfloor$ denotes the largest integer that is smaller than or equal to the argument.
When $\lambda\geq M$, \begin{equation}\label{eqn:27} \Pr\{X=M\}>\Pr\{X=i\}\;\forall 0\leq i\leq M-1, \end{equation} which contradicts eqn. (\ref{eqn:17}). Therefore, the optimal attempt rate satisfies \begin{equation}\label{eqn:28} \lambda^*(M)<M. \end{equation} Combining (\ref{eqn:14}), (\ref{eqn:17}), (\ref{eqn:28}) and Lemma 1, we have \begin{equation}\label{eqn:29} \lim_{M\rightarrow\infty}M\Pr\{X=M\}\big|_{\lambda=\lambda^*(M)}=1. \end{equation} Writing $\lambda^*=cM$ with $c<1$, eqn. (\ref{eqn:29}) can be written as \begin{equation}\label{eqn:30} \lim_{M\rightarrow\infty}\frac{(cM)^M}{(M-1)!}e^{-cM}=1, \end{equation} and \begin{eqnarray}\label{eqn:31} c&=& \lim_{M\rightarrow\infty}\frac{\big((M-1)!\big)^{1/M}}{M}e^c\nonumber \\ &\approx&\lim_{M\rightarrow\infty} \frac{(M!)^{1/M}}{M}e^c \nonumber\\ &=& e^{-(1-c)} \end{eqnarray} where the last equality is due to Stirling's formula \cite{Feller:68}. Solving eqn. (\ref{eqn:31}), we have \begin{equation} \lim_{M\rightarrow\infty}\frac{\lambda^*}{M}=\lim_{M\rightarrow\infty}c=1. \end{equation} \begin{flushright} $\Box$ \end{flushright} \emph{Proof of Theorem 2}: From Lemma 1 and Lemma 2, it is obvious that $\lim_{M\rightarrow\infty}S_{\infty}^*(M)\big/MR=1$. \begin{flushright} $\Box$ \end{flushright} \end{theorem} The above results are illustrated in Fig. \ref{fig:1}, where $S_\infty^*(M)\big/MR$ is plotted as a function of $M$ for non-carrier-sensing slotted ALOHA systems. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{superlinear_ALOHA.eps} \caption{Super-linear scalability of the throughput of non-carrier-sensing slotted ALOHA networks}\label{fig:1} \end{figure} \begin{theorem} \emph{(Super-linearity with finite population)} $S_N^*(M+1)\big/(M+1)\geq S_N^*(M)\big/M$ for all $M<N$. \end{theorem} \emph{Proof of Theorem 3}: See Appendix I. In Theorems 1-3, super-linearity is proved assuming the network is non-carrier-sensing. In Fig. \ref{fig:2} and Fig. \ref{fig:3}, the optimal throughput $S_\infty^*(M)$ and the normalized throughput $S_\infty^*(M)\big/M$ are plotted for carrier-sensing networks, respectively, with the system parameters listed in Table I. The figures show that the system throughput is greatly enhanced by MPR in the PHY layer. Moreover, the super-linear throughput scaling holds for carrier-sensing networks when \emph{M} is relatively large. \begin{table} \caption{System Parameters Used in Carrier-Sensing Networks (Adopted from IEEE 802.11g)} \centering \begin{tabular}{|c|c|} \hline Packet payload & 8184 bits \\ \hline MAC header & 272 bits \\ \hline PHY overhead & 26 $\mu s$ \\ \hline ACK & 112 bits + PHY overhead \\ \hline RTS & 160 bits + PHY overhead \\ \hline CTS & 112 bits + PHY overhead \\ \hline Basic rate & 6 Mbps \\ \hline Data rate & 54 Mbps \\ \hline Slot time $\sigma$ & 9 $\mu s$ \\ \hline SIFS & 10 $\mu s$ \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{opt_thrput.eps} \caption{Optimal throughput of carrier-sensing networks}\label{fig:2} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{superlinear_basic_access_rtscts.eps} \caption{Super-linear scalability of the throughput of carrier-sensing networks}\label{fig:3} \end{figure} \section{Impact of MPR on EB in WLAN MAC} In this section, we study the characteristic behavior of WLAN MAC and EB when the channel has MPR capability.
We first establish the relationship between the transmission probability $p_t$ (or $\lambda$) and the EB parameters, including the backoff factor \emph{r} and the minimum contention window $W_0$. Based on the analysis, we will then study how the optimal backoff strategy changes with the MPR capability $M$. \subsection{Transmission Probability} We use an infinite-state Markov chain, as shown in Fig. \ref{fig:4}, to model the operation of EB with no retry limit. The reason for omitting the retry limit is that it is theoretically more interesting to look at the limiting case when the retry limit is infinitely large. Having said this, we note that the analysis in our paper can be easily extended to the case where there is a retry limit. The state in the Markov chain in Fig. \ref{fig:4} is the backoff stage, which is also equal to the number of retransmissions experienced by the station. As mentioned in Section II, the contention window size is $W_i=r^iW_0$ when a station is in state $i$. In the figure, $p_c$ denotes the conditional collision probability, which is the probability of a collision seen by a packet being transmitted on the channel. Note that $p_c$ depends on the transmission probabilities of the stations other than the transmitting one. In our model, $p_c$ is assumed to be independent of the backoff stage of the transmitting station. In our numerical results, we show that the analytical results obtained under this assumption are very accurate when $N$ is reasonably large. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{markov.eps} \caption{Markov chain model for the backoff stage}\label{fig:4} \end{figure} With EB, the transmission probability $p_t$ is equal to the probability that the backoff timer of a station reaches zero in a slot. Note that the Markov process of MPR networks is similar to the ones in \cite{Bianchi:00}, \cite{Kwak:05}, except that the conditional collision probability $p_c$ is different for $M>1$. Therefore, eqn. (\ref{eqn:32}) can be derived in a similar way as in \cite{Bianchi:00}, \cite{Kwak:05}: \begin{equation}\label{eqn:32} p_t=\frac{2(1-rp_c)}{W_0(1-p_c)+1-rp_c} \end{equation} where $rp_c<1$ is a necessary condition for the steady state to be reachable. The detailed derivation of (\ref{eqn:32}) is omitted due to the page limit. Interested readers are referred to \cite{Bianchi:00}, \cite{Kwak:05}. Likewise, the conditional collision probability $p_c$ is equal to the probability that $M$ or more stations out of the remaining $N-1$ stations contend for the channel. We thus have the following relationship: \begin{equation}\label{eqn:33} p_c=1-\sum_{k=0}^{M-1}\binom{N-1}{k}p_t^k(1-p_t)^{N-k-1}. \end{equation} It can be easily shown that $p_t$ is a decreasing function of $p_c$ for any $r>1$ in (\ref{eqn:32}). Meanwhile, $p_c$ is an increasing function of $p_t$ in (\ref{eqn:33}). Therefore, the curves determined by (\ref{eqn:32}) and (\ref{eqn:33}) have a unique intersection, corresponding to the root of the nonlinear system. By solving the nonlinear system (\ref{eqn:32})-(\ref{eqn:33}) numerically for different $N$, we plot the analytical results of $Np_t$ in Fig. \ref{fig:5}. In the figures, BEB is adopted, that is, $r=2$. The minimum contention window size is $W_0=16$ or 32. To validate the analysis, the simulation results are plotted as markers in the figures. In the simulation, the data are collected by running 5,000,000 rounds after 1,000,000 rounds of warm-up.
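The following sketch illustrates how the fixed point of (\ref{eqn:32})-(\ref{eqn:33}) can be computed in practice, and how $Np_t$ approaches the asymptotic attempt rate $\lambda$ of eqn.~(\ref{eqn:39}) derived below. It is a minimal illustration assuming SciPy's bracketing root finder; it is not the simulation code used for the figures.
\begin{verbatim}
# Minimal sketch: solve the nonlinear system (32)-(33) for p_c and p_t,
# and compare N*p_t with the asymptotic attempt rate lambda of (39).
from math import comb, exp, factorial
from scipy.optimize import brentq

def pt_from_pc(pc, r, W0):
    # eqn (32); requires r*pc < 1
    return 2.0 * (1 - r * pc) / (W0 * (1 - pc) + 1 - r * pc)

def pc_from_pt(pt, N, M):
    # eqn (33)
    return 1.0 - sum(comb(N - 1, k) * pt**k * (1 - pt)**(N - 1 - k)
                     for k in range(M))

def solve(N, M, r=2, W0=16):
    # (32) is decreasing and (33) increasing, so the root is unique;
    # bracket p_c inside (0, 1/r) where the steady state is reachable.
    f = lambda pc: pc - pc_from_pt(pt_from_pc(pc, r, W0), N, M)
    pc = brentq(f, 1e-12, 1.0 / r - 1e-12)
    return pt_from_pc(pc, r, W0), pc

def lam_from_r(M, r=2):
    # eqn (39): sum_{k<M} lambda^k/k! * e^-lambda = 1 - 1/r
    g = lambda lam: sum(lam**k / factorial(k) * exp(-lam)
                        for k in range(M)) - (1 - 1.0 / r)
    return brentq(g, 1e-9, 10.0 * M)

for N in (20, 50, 200):
    pt, pc = solve(N, M=4)
    print(N, N * pt, lam_from_r(4))  # N*p_t approaches lambda as N grows
\end{verbatim}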
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig3.eps} \caption{Plots of $Np_t$ versus $N$ when $r=2$; lines are analytical results calculated from (\ref{eqn:32}) and (\ref{eqn:33}), markers are simulation results.}\label{fig:5} \end{figure} From the figures, we can see that the analytical results match the simulations very well. Moreover, they show that $Np_t$ converges to a constant as $N$ becomes large. This is consistent with a basic assumption of the previous section, where we calculated the asymptotic throughput. The constant to which $Np_t$ converges can be calculated as follows. For large $N$, the number of attempts in a slot can be modeled as a Poisson random variable \cite[pp. 258]{DeGroot}. That is, \begin{equation}\label{eqn:34} \Pr\{X=k\}=\frac{\lambda^k}{k!}e^{-\lambda} \end{equation} where \begin{equation}\label{eqn:35} \lambda=\lim_{N\rightarrow\infty}Np_t. \end{equation} The conditional collision probability in this limiting case is given by \begin{equation}\label{eqn:36} \lim_{N\rightarrow\infty}p_c=\Pr\{X\geq M\}=1-\sum_{k=0}^{M-1}\frac{\lambda^k}{k!}e^{-\lambda}. \end{equation} When the system is in steady state, the total attempt rate $\lambda=\lim_{N\rightarrow\infty}Np_t$ should be finite. Therefore, \begin{equation}\label{eqn:37} \lim_{N\rightarrow\infty}p_t=\lim_{N\rightarrow\infty}\frac{2(1-rp_c)}{W_0(1-p_c)+1-rp_c}=0, \end{equation} which implies \begin{equation}\label{eqn:38} \lim_{N\rightarrow\infty}p_c=\frac{1}{r}. \end{equation} Combining (\ref{eqn:36}) and (\ref{eqn:38}), we get the following equation: \begin{equation}\label{eqn:39} \sum_{k=0}^{M-1}\frac{\lambda^k}{k!}e^{-\lambda}=1-\frac{1}{r}. \end{equation} $\lambda$ can be calculated numerically from (\ref{eqn:39}) given $M$ and $r$. Fig. \ref{fig:5} shows that $Np_t$ calculated from (\ref{eqn:32}) and (\ref{eqn:33}) does converge to $\lambda$ when $N$ is large. Note that the relationship between $p_t$, $\lambda$, and EB established above does not depend on the durations of the underlying backoff slots, and can therefore be applied in both non-carrier-sensing and carrier-sensing networks. Before leaving this subsection, we validate another assumption adopted in Section III, namely that EB guarantees a non-zero throughput when $N$ approaches infinity. To this end, the throughput of slotted ALOHA is plotted as a function of $N$ in Fig. \ref{fig:6} when BEB is adopted. It can be seen that the throughputs with the same $M$ converge to the same constant as $N$ increases, regardless of the minimum contention window $W_0$. A similar phenomenon can also be observed in carrier-sensing networks, as illustrated in Fig. \ref{fig:7}, where the throughput of IEEE 802.11 WLAN with basic access mode is plotted with the detailed system parameters listed in Table I. The asymptotic throughput when $N$ is very large depends only on the MPR capability $M$ and the backoff factor $r$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig4.eps} \caption{Normalized throughput of non-carrier-sensing slotted ALOHA networks when $r=2$}\label{fig:6} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig5.eps} \caption{Throughput of carrier-sensing basic-access networks when $r=2$}\label{fig:7} \end{figure} \subsection {Optimal Backoff Factor} In Section III, we investigated the maximum network throughput, which is achieved by the optimal transmission probability $p_t^*(M)$ or attempt rate $\lambda^*(M)$. The previous subsection shows that the transmission probability is a function of the backoff factor $r$.
Mathematically, the optimal $r$ that maximizes throughput can be obtained by solving the equation $\partial S(M)\big/\partial r=0$. In this subsection, we investigate how the optimal backoff factor $r$ changes with the MPR capability $M$. In Fig. \ref{fig:8} and Fig. \ref{fig:9}, we plot the throughput as a function of $r$ for both non-carrier-sensing networks and carrier-sensing networks in basic-access mode. From the figures, we can see that the optimal $r$ that maximizes throughput increases with $M$ for moderate to large $M$. This observation can be intuitively explained for non-carrier-sensing networks by (\ref{eqn:14}), (\ref{eqn:39}), and Lemma 1 as follows. Eqns. (\ref{eqn:14}) and (\ref{eqn:39}) indicate that \begin{equation}\label{eqn:40} \frac{S_{\infty}(M,\lambda)}{R\lambda}=\Pr\{X\leq M-1\}=1-\frac{1}{r}. \end{equation} As Lemma 1 indicates, when $M$ is large, $S_{\infty}(M,\lambda)\big/R\lambda$ at the optimal operating point increases with $M$ and eventually approaches 1. Consequently, the optimal $r$ increases with $M$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig9_AsymThr_vs_r_NCS.eps} \caption{Throughput versus $r$ for non-carrier-sensing slotted ALOHA networks}\label{fig:8} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig8_AsymThr_vs_r_CS.eps} \caption{Throughput versus $r$ for carrier-sensing networks with basic-access mode}\label{fig:9} \end{figure} As the figures show, the throughput decreases sharply when $r$ moves from the optimal $r^*$ towards 1. On the other hand, the throughput is much less sensitive to $r$ when $r$ is larger than $r^*$. Therefore, in order to avoid dramatic throughput degradation, it is not wise to operate with $r$ in the region between 1 and $r^*$. Note that when $M$ is large, $r^*$ is larger than 2. This implies that the widely used BEB might be far from optimal in MPR WLANs. To further see how well BEB works, we plot the ratio of the throughput obtained by BEB to the maximum achievable throughput in Fig. \ref{fig:10}. The optimal $r$ that achieves the maximum throughput is plotted versus $M$ in Fig. \ref{fig:11}. From the figures, we can see that BEB falls noticeably short of the maximum achievable throughput when $M$ is large in non-carrier-sensing networks and in the IEEE 802.11 basic-access mode. For example, when $M=10$, BEB only achieves about 80 percent of the maximum throughput in non-carrier-sensing networks. In RTS/CTS mode, in contrast, the performance of BEB is close to optimal for a large range of $M$. Therefore, we argue from an engineering point of view that BEB (i.e., $r=2$) is a good choice for the RTS/CTS access scheme, while tuning $r$ to the optimal value is important for non-carrier-sensing and basic-access schemes. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig10_BEBpercent_vs_M_both.eps} \caption{Ratio of BEB throughput to the maximal throughput versus $M$}\label{fig:10} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig11_opt_r_vs_M_both.eps} \caption{Optimal $r$ versus $M$}\label{fig:11} \end{figure} Having demonstrated the significant capacity improvement that MPR brings to WLANs, we are highly motivated to present practical protocols to implement MPR in the widely used IEEE 802.11 WiFi. In particular, we will propose protocols that consist of both MAC-layer mechanisms and PHY-layer signal processing schemes in the next section. \section{MPR Protocol for IEEE 802.11 WLAN} In this section, we present an MPR protocol for IEEE 802.11 WLAN with the RTS/CTS mechanism.
The proposed protocol requires minimal amendment at mobile stations, and hence will be easy to implement in practical systems. Throughout this section, we assume that the MPR capability is brought by the multiple antennas mounted at the access point (AP). This assumption complies with the hardware requirements of the latest MIMO-based WLAN standards. However, the proposed MAC-PHY protocol can be easily extended to CDMA networks, as the received signal structures in multi-antenna and CDMA systems are almost the same (refer to Section II-C). \subsection{MAC Protocol Design} The MAC protocol closely follows the IEEE 802.11 RTS/CTS access mechanism, as illustrated in Fig. \ref{fig:12}. A station with a packet to transmit first sends an RTS frame to the AP. In our MPR MAC model, when multiple stations transmit RTS frames at the same time, the AP can successfully detect all the RTS frames if and only if the number of RTSs is no larger than $M$. When the number of transmitting stations exceeds $M$, collisions occur and the AP cannot decode any of the RTSs. The stations will retransmit their RTS frames after a backoff time period according to the original IEEE 802.11 protocol. When the AP detects the RTSs successfully, it responds, after a SIFS period, with a CTS frame that grants transmission permissions to all the requesting stations. Then the transmitting stations will start transmitting DATA frames after a SIFS, and the AP will acknowledge the reception of the DATA frames with an ACK frame. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig12.eps} \caption{Time line example for the MPR MAC}\label{fig:12} \end{figure} The formats of the RTS and DATA frames are the same as those defined in 802.11, while the CTS and ACK frames have been modified to accommodate multiple transmitting stations for MPR. In particular, there are $M$ receiver address fields in the CTS and ACK frames to identify up to $M$ intended recipients. As described above, our MPR MAC is very similar to the original IEEE 802.11 MAC. In fact, to maintain this similarity in the MAC layer, the challenge is pushed down to the physical layer. For example, in the proposed MPR MAC, the AP is responsible for decoding all the RTSs transmitted simultaneously. However, due to the random-access nature of WLAN, the AP has no a priori knowledge of who the senders are or of the channel state information (CSI) on the corresponding links. This imposes a major challenge on the PHY layer, as the MUD techniques introduced in Section II, such as ZF and MMSE, cannot be directly applied. To tackle these problems, we introduce the physical-layer techniques in the next subsection. \subsection{PHY-layer Signal Processing Mechanism} In this subsection, we propose a PHY mechanism to implement MPR in IEEE 802.11. The basic idea is as follows. RTS packets are typically transmitted at a lower data rate than the data packets in IEEE 802.11. This setting is particularly suitable for blind detection schemes, which can separate the RTS packets without prior knowledge of the senders' identities and CSI \cite{Talwar:96}-\cite{Papadias:97}. Upon successfully decoding the RTS packets, the AP can then identify the senders of the packets. Training sequences, to be transmitted in the preambles of the data packets, are then allocated to these users to facilitate channel estimation during the data transmission phase. Since the multiple stations transmit their data packets at the same time, their training sequences should be mutually orthogonal.
In our system, no more than $M$ simultaneous transmissions are allowed. Therefore, a total of $M$ orthogonal sequences need to be predefined and made known to all stations. The sequence allocation decision is sent to the users via the CTS packet. During the data transmission phase, the CSI is estimated from the orthogonal training sequences that are transmitted in the preambles of the data packets. With the estimated CSI, various MUD techniques can be applied to separate the multiple data packets at the AP. Using coherent detection, data packets can be transmitted at a much higher rate than the RTS packets without involving excessive computational complexity. As MUD techniques have been introduced in Section II, we focus on the blind separation of RTS packets in this subsection. Assume that there are $K$ stations transmitting RTS packets together. Then, the received signal in symbol duration $n$ is given by (\ref{eqn:4}), where the $(m, k)^{th}$ element of $\mathbf{H}$ denotes the channel coefficient from user $k$ to the $m^{th}$ antenna at the AP. Assuming that the channel is constant over an RTS packet, which is composed of $N$ symbol periods, we obtain the following block formulation of the data: \begin{equation}\label{eqn:41} \mathbf{Y}=\mathbf{HX}+\mathbf{W} \end{equation} where $\mathbf{Y}=[\mathbf{y}(1),\mathbf{y}(2),\cdots,\mathbf{y}(N)]$, $\mathbf{X}=[\mathbf{x}(1),\mathbf{x}(2),\cdots,\mathbf{x}(N)]$, and $\mathbf{W}=[\mathbf{w}(1),\mathbf{w}(2),\cdots,\mathbf{w}(N)].$ The problem to be addressed here is the estimation of the number of sources $K$, the channel matrix $\mathbf{H}$, and the symbol matrix $\mathbf{X}$, given the array output $\mathbf{Y}$. \subsubsection{Estimation of the number of sources K} To start, we ignore the white noise for the moment, so that $\mathbf{Y}=\mathbf{HX}$. The rank of $\mathbf{H}$ is (generically) equal to $K$ when $K\leq M$. Likewise, $\mathbf{X}$ has full row rank when $N$ is much larger than $K$. Consequently, we have $\mathrm{rank}(\mathbf{Y})=K$, and $K$ is equal to the number of nonzero singular values of $\mathbf{Y}$. With white noise added to the data, $K$ can be estimated as the number of singular values of $\mathbf{Y}$ that are significantly larger than zero. \subsubsection{Estimation of \textbf{X} and \textbf{H}} In this paper, we adopt the finite-alphabet (FA) based blind detection algorithm to estimate $\mathbf{X}$ and $\mathbf{H}$, assuming $K$ is known. The maximum-likelihood estimator yields the following separable least-squares minimization problem \cite{Talwar:96}: \begin{equation}\label{eqn:42} \min_{\mathbf{H},\mathbf{X}\in\Omega}\|\mathbf{Y}-\mathbf{HX}\|_F^2 \end{equation} where $\Omega$ is the finite alphabet to which the elements of $\mathbf{X}$ belong, and $\|\cdot\|_F$ denotes the Frobenius norm. The minimization of (\ref{eqn:42}) can be carried out in two steps. First, we minimize (\ref{eqn:42}) with respect to $\mathbf{H}$ and obtain \begin{equation}\label{eqn:43} \hat{\mathbf{H}}=\mathbf{YX}^\texttt{+}=\mathbf{YX}^H(\mathbf{XX}^H)^{-1}, \end{equation} where $(\cdot)^\texttt{+}$ is the pseudo-inverse of a matrix. Substituting $\hat{\mathbf{H}}$ back into (\ref{eqn:42}), we obtain a new criterion, which is a function of $\mathbf{X}$ only: \begin{equation}\label{eqn:44} \min_{\mathbf{X}\in\Omega}\|\mathbf{YP}_{\mathbf{X}^H}^\bot\|_F^2, \end{equation} where $\mathbf{P}_{\mathbf{X}^H}^\bot=\mathbf{I}-\mathbf{X}^H(\mathbf{XX}^H)^{-1}\mathbf{X}$, and $\mathbf{I}$ is the identity matrix.
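To make the two-step procedure concrete, the following toy sketch implements (\ref{eqn:42})-(\ref{eqn:44}) by brute force for a BPSK alphabet $\Omega=\{-1,+1\}$. The problem sizes, the noise level, and the threshold used to count significant singular values are arbitrary choices made for illustration only, and the estimate is subject to the usual blind ambiguities (row permutations and per-row signs).
\begin{verbatim}
# Toy brute-force sketch of eqns (42)-(44) for BPSK; sizes and
# thresholds are illustrative choices, not recommended settings.
import numpy as np
from itertools import product

M_ant, K, N = 4, 2, 5                       # antennas, users, symbols
rng = np.random.default_rng(0)
H = rng.standard_normal((M_ant, K)) \
    + 1j * rng.standard_normal((M_ant, K))  # unknown channel matrix
X = rng.choice([-1.0, 1.0], size=(K, N))    # unknown BPSK symbols
Y = H @ X + 0.05 * (rng.standard_normal((M_ant, N))
                    + 1j * rng.standard_normal((M_ant, N)))

# Step 0: estimate K from the significantly nonzero singular values of Y.
s = np.linalg.svd(Y, compute_uv=False)
K_hat = int(np.sum(s > 0.1 * s[0]))

# Steps 1-2: for each candidate X, project Y onto the orthogonal
# complement of the row space of X (eqn (44)) and keep the minimizer.
# The search is exponential in K*N, so toy sizes only.
best_cost, best_X = np.inf, None
for bits in product([-1.0, 1.0], repeat=K_hat * N):
    Xc = np.array(bits).reshape(K_hat, N)
    P = np.eye(N) - Xc.T @ np.linalg.pinv(Xc @ Xc.T) @ Xc
    cost = np.linalg.norm(Y @ P) ** 2
    if cost < best_cost:
        best_cost, best_X = cost, Xc

H_hat = Y @ np.linalg.pinv(best_X)          # eqn (43): H = Y X^+
# best_X recovers X up to row permutation and per-row sign flips.
\end{verbatim}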
As the sketch above illustrates, the global minimum of (\ref{eqn:44}) can be obtained by enumerating all possible choices of $\mathbf{X}$. Reduced-complexity algorithms that solve (\ref{eqn:44}) iteratively, such as ILSP and ILSE, were introduced in \cite{Papadias:97}. Since they are not among the foci of this paper, the details of ILSP and ILSE are not covered here. Interested readers are referred to \cite{Talwar:94} and the references therein. Note that the scheme proposed in this section is only one way of implementing MPR in WLANs. It ensures that the orthogonal training sequences are transmitted in the preambles of data packets. This leads to highly reliable channel estimation, which facilitates the use of MUD techniques. Moreover, the modification to the original protocol is mainly confined to the AP; only minimal amendments are needed at mobile stations. \section{Discussions} \subsection{Random channel error} In our analysis so far, we have assumed that the packet error rate due to random fading effects is negligible when the number of simultaneous transmissions is no larger than \emph{M} and is close to 1 otherwise. This assumption is quite accurate when data packets are well protected by error correction codes (e.g., convolutional codes in the IEEE 802.11 protocol) and linear MUD is deployed at the receiver. The simplification allows us to focus on the effect of MPR on WLAN without the need to consider signal-processing details such as coding and detection schemes. In this section, we relax the assumption and investigate how random channel errors affect our analysis. Fortunately, we can prove that super-linear throughput scaling still holds even when random channel error is taken into account, as detailed in the following. Denote by $P_M^{err}(k)$ the packet error rate due to wireless channel fading when \emph{k} packets are transmitted at the same time in a network with MPR capability $M$. Then, $P_M(k)=1-P_M^{err}(k)$ is the packet success rate, which is the probability that a packet \emph{survives} random channel fading \cite{Proakis}. Typically, $P_M(k)\geq P_M(k')$ for $k\leq k'$ and $P_M(k)\geq P_{M'}(k)$ for $M\geq M'$. Assuming linear detectors, we have $P_M(k)\approx 0$ if $k>M$ and $P_M(M)\approx P_{M'}(M')$ for $M\neq M'$ \cite{Winters:94}. For simplicity, assume $T_{slot}=T_s=T_i=T_c=L/R$. Then, the asymptotic throughput is given by \begin{eqnarray} S_\infty(M,\lambda)&=&R\sum_{k=1}^Mk\frac{\lambda^{k}}{k!}e^{-\lambda}P_M(k)\nonumber\\ &=&R\sum_{k=0}^{M-1}\frac{\lambda^{k+1}}{k!}e^{-\lambda}P_M(k+1) \end{eqnarray} At the optimal $\lambda^*(M)$, $\frac{\partial S_\infty(M,\lambda)}{\partial \lambda}=0$. Consequently, \begin{eqnarray}\label{eqn56} &&\sum_{k=0}^{M-1}\frac{(\lambda^*(M))^k}{k!}e^{-\lambda^*(M)}P_M(k+1)\nonumber\\ &=&\sum_{k=0}^{M-2}\frac{(\lambda^*(M))^{k+1}}{k!}e^{-\lambda^*(M)}(P_M(k+1)-P_M(k+2))\nonumber\\ &+&\frac{(\lambda^*(M))^M}{(M-1)!}e^{-\lambda^*(M)}P_M(M)\nonumber\\ &\leq&\frac{(\lambda^*(M))^M}{(M-1)!}e^{-\lambda^*(M)}P_M(M) \end{eqnarray} where the final inequality uses the fact that, in the regime considered here, $P_M(k)$ varies little for $k\leq M$, so the sum of the difference terms $P_M(k+1)-P_M(k+2)$ is negligible. We are now ready to prove the super-linear throughput scaling $\frac{S_\infty^*(M+1)}{M+1}\geq\frac{S_\infty^*(M)}{M}$ in the following.
\begin{eqnarray} &&S_\infty^*(M+1)\geq S_\infty(M+1,\lambda^*(M))\nonumber\\ &=&R\sum_{k=0}^{M-1}\frac{\lambda^*(M)^{k+1}}{k!}e^{-\lambda^*(M)}P_{M+1}(k+1)\nonumber\\ &&+R\frac{\lambda^*(M)^{M+1}}{M!}e^{-\lambda^*(M)}P_{M+1}(M+1)\nonumber\\ &\geq&S_\infty(M,\lambda^*(M))+R\frac{\lambda^*(M)^{M+1}}{M!}e^{-\lambda^*(M)}P_{M}(M)\nonumber\\ &\geq& \frac{M+1}{M}S_\infty^*(M) \end{eqnarray} where the last inequality is due to (\ref{eqn56}). \subsection{Near-far effect} One implicit assumption in our analysis is that each station transmits at the same data rate $R$. In practice, stations experience different channel attenuation to the AP due to their random locations. If stations transmit at the same power level, then the data rate sustainable on each link would differ. In this case, the airtime occupied by a busy period is dominated by the lowest data rate involved. Hence, the effective throughput enjoyed by high-rate stations would degenerate to the level of the lowest rate. Such a problem, known as ``performance anomaly'', is not unique to MPR. It exists in all multi-rate IEEE 802.11 networks. Fortunately, performance anomaly only causes the data rate $R$ in our throughput expression to degrade to $R_{min}$, where $R_{min}$ is the lowest possible data rate. Therefore, it will not affect the scaling law of throughput in MPR networks. \subsection{Comparison with multiuser SIMO systems} In this paper, we have demonstrated the drastic increase in spectrum efficiency brought by MPR. To implement MPR, modification is needed in both MAC and PHY layers, as discussed in Section V. With the same hardware enhancement (e.g., having $M$ antennas at the AP), an alternative is to let each link transmit at a higher data rate, but keep the single-packet-reception restriction unchanged. This essentially becomes a traditional WLAN with SIMO (single-input-multiple-output) links. The capacity of a SIMO link increases logarithmically with the number of antennas at the receiver \cite{Tse}. That is, \begin{equation} \label{eqn59} R_{SIMO}\approx R_{SISO}+\log(M) \end{equation} where $R_{SISO}$ is the data rate of a SISO (single-input-single-output) link. In contrast, the data rate $R$ of each link in an MPR WLAN is set to $R_{SISO}$, since the antennas are used to separate multiple data streams rather than to increase the rate of a single stream. With (\ref{eqn59}), the throughput of a WLAN with SIMO links is \begin{equation} S_N^{SIMO}=\frac{L\sum_{k=1}^Mk\Pr\{X=k\}}{P_{idle}^{SIMO}T_i^{SIMO}+P_{coll}^{SIMO}T_c^{SIMO}+P_{succ}^{SIMO}T_s^{SIMO}} \end{equation} where the expressions for $P_{idle}^{SIMO}$, $P_{coll}^{SIMO}$, and $P_{succ}^{SIMO}$ are the same as (\ref{eqn:9}), (\ref{eqn:10}), and (\ref{eqn:11}) with $M=1$, respectively. Likewise, $T_i^{SIMO}$, $T_c^{SIMO}$, and $T_s^{SIMO}$ are the same as (\ref{eqn:1}), (\ref{eqn:2}), or (\ref{eqn:3}) except that $R$ is replaced by $R_{SIMO}$. Specifically, the throughput in the ALOHA case becomes \begin{equation} S_N^{SIMO}=(R+\log(M))Np_t(1-p_t)^{N-1}, \end{equation} and the optimal $p_t$ that maximizes the throughput is equal to $1/N$. In particular, the maximum achievable throughput when $N$ is large is \begin{equation} S_\infty^{SIMO*}(M)=(R+\log(M))e^{-1}. \end{equation} It is obvious that the normalized throughput $\frac{S_\infty^{SIMO*}(M)}{M}$ decreases with $M$ in SIMO networks.
This, in contrast to the super-linear throughput scaling in MPR networks, suggests that multiple antennas at the AP should be used to resolve simultaneous transmissions instead of increasing the per-link data rate in random-access WLANs. \section{Conclusion} With the recent advances in PHY-layer MUD techniques, it is no longer a physical constraint for the WLAN channel to accommodate only one packet transmission at a time. To fully utilize the MPR capability of the PHY channel, it is essential to understand the fundamental impact of MPR on the MAC layer. This paper has studied the characteristic behavior of random-access WLANs with MPR. Our analysis provides a theoretical foundation for the performance evaluation of WLANs with MPR, and it is useful for system design in terms of setting the operating parameters of MAC protocols. Our analytical framework is general and applies to various WLANs, including non-carrier-sensing and carrier-sensing networks. In Theorems 1 and 3, we have proved that the throughput increases super-linearly with $M$ for both finite and infinite population cases. This is the case in non-carrier-sensing networks for all $M$, and in carrier-sensing networks for moderate to large $M$. Moreover, Theorem 2 shows that the throughput penalty due to distributed random access diminishes when $M$ approaches infinity. Such scalability provides strong incentives for further investigation of the engineering and implementation details of MPR systems. Based on the analysis, we found that the commonly deployed BEB scheme is far from optimum in most systems except carrier-sensing systems with the RTS/CTS four-way handshake. In particular, the optimum backoff factor $r$ increases with $M$ for large $M$. We further note that the throughput degrades sharply when $r$ is smaller than the optimum value, while it is much less sensitive to $r$ when $r$ exceeds the optimum. Having understood the fundamental behavior of MPR, we propose practical protocols to exploit the advantage of MPR in IEEE 802.11-like WLANs. By incorporating advanced PHY-layer blind detection and MUD techniques, the protocol can implement MPR in a fully distributed manner with marginal modification of the MAC layer. \appendices \section{Super-Linear Throughput Scaling in WLANs with Finite Population} \begin{theorem} (Super-linearity with finite population) $S_N^*(M+1)\big/(M+1) \geq S_N^*(M)\big/M$ for all $M<N$. \emph{Proof:} From (\ref{eqn:12}), we have \begin{eqnarray}\label{eqn:A1} S_N(M,p_t)&=& R\sum_{k=1}^Mk\binom{N}{k}p_t^k(1-p_t)^{N-k} \nonumber\\ &=& R\frac{Np_t}{1-p_t}\sum_{k=0}^{M-1}\binom{N}{k}p_t^k(1-p_t)^{N-k} \nonumber\\ &&-R\frac{p_t}{1-p_t}\sum_{k=0}^{M-1}k\binom{N}{k}p_t^k(1-p_t)^{N-k}\nonumber \\ &=&R\frac{Np_t}{1-p_t}\Pr\{X\leq M-1\}\nonumber\\ &&-\frac{p_t}{1-p_t}S_N(M-1,p_t) \end{eqnarray} and \begin{equation}\label{eqn:A2} S_N(M+1,p_t)=R\frac{Np_t}{1-p_t}\Pr\{X\leq M\}-\frac{p_t}{1-p_t}S_N(M,p_t). \end{equation} Meanwhile, \begin{eqnarray}\label{eqn:A3} S_N(M+1,p_t)&=&R\sum_{k=1}^{M+1}k\binom{N}{k}p_t^k(1-p_t)^{N-k}\nonumber\\ &=&S_N(M,p_t)+R(M+1)\Pr\{X=M+1\}\nonumber\\ && \end{eqnarray} Substituting (\ref{eqn:A3}) into (\ref{eqn:A2}), we get \begin{eqnarray}\label{eqn:A4} S_N(M,p_t)=RNp_t\Pr\{X\leq M\}\nonumber\\ -R(1-p_t)(M+1)\Pr\{X=M+1\}\;\forall M<N, p_t \end{eqnarray} At the optimal $p_t^*(M)$, the derivative satisfies $\partial S_N(M,p_t)/\partial p_t=0$.
Thus, \begin{eqnarray}\label{eqn:A5} \frac{\partial S_N(M,p_t)}{\partial p_t}\bigg|_{p_t=p_t^*(M)}=RN\Pr\{X\leq M\}\big|_{p_t=p_t^*(M)}\nonumber\\ +R(M+1)\big(1-\frac{M+1}{p_t}\big)\Pr\{X=M+1\}\big|_{p_t=p_t^*(M)}=0. \end{eqnarray} Combining (\ref{eqn:A4}) and (\ref{eqn:A5}), \begin{eqnarray}\label{eqn:A6} && S_N(M,p_t^*(M))=p_t^*(M)\frac{\partial S_N(M,p_t)}{\partial p_t}\bigg|_{p_t=p_t^*(M)}\nonumber\\ &&-R(M+1)\big(p_t^*(M)-(M+1)\big)\Pr\{X=M+1\}\big|_{p_t=p_t^*(M)}\nonumber\\ &&-R(M+1)\big(1-p_t^*(M)\big)\Pr\{X=M+1\}\nonumber\\ &=&RM(M+1)\Pr\{X=M+1\}\big|_{p_t=p_t^*(M)} \end{eqnarray} It is obvious that \begin{eqnarray}\label{eqn:A7} &&S_N(M+1, p_t^*(M+1))\geq S_N(M+1,p_t^*(M))\nonumber\\ &&=S_N(M,p_t^*(M))+R(M+1)\Pr\{X=M+1\}\big|_{p_t^*(M)}\nonumber\\ && \end{eqnarray} Substituting (\ref{eqn:A6}) into (\ref{eqn:A7}), we have \begin{eqnarray}\label{eqn:A8} &&S_N(M+1,p_t^*(M+1))\nonumber\\ &\geq& S_N(M,p_t^*(M))+\frac{S_N(M,p_t^*(M))}{M} \nonumber \\ &=&S_N(M,p_t^*(M))\frac{M+1}{M}. \end{eqnarray} Hence, $S_N^*(M+1)\big/(M+1) \geq S_N^*(M)\big/M$ for all $M<N$. \begin{flushright} $\Box$ \end{flushright} \end{theorem}
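As a numerical sanity check of the super-linearity results (a sketch added for illustration; it is not part of the original derivation), one can maximize eqn.~(\ref{eqn:14}) over $\lambda$ for increasing $M$ and verify that $S_\infty^*(M)/MR$ is non-decreasing and tends to 1, as asserted by Theorems 1 and 2.
\begin{verbatim}
# Sanity-check sketch: S*_inf(M)/(M*R) from eqn. (14) should be
# non-decreasing in M (Theorem 1) and approach 1 (Theorem 2); R = 1.
from math import exp, factorial
from scipy.optimize import minimize_scalar

def S_inf(M, lam):
    # eqn (14): R * lambda * Pr{X <= M-1}, with R normalized to 1
    return lam * sum(lam**k / factorial(k) * exp(-lam) for k in range(M))

def S_star(M):
    # Lemma 2: the optimal attempt rate satisfies lambda*(M) < M
    res = minimize_scalar(lambda l: -S_inf(M, l),
                          bounds=(1e-6, M), method='bounded')
    return -res.fun

prev = 0.0
for M in (1, 2, 4, 8, 16, 32):
    val = S_star(M) / M          # e.g., exp(-1) ~ 0.368 when M = 1
    assert val >= prev - 1e-12   # Theorem 1: S*/M is non-decreasing
    prev = val
    print(M, round(val, 4))      # tends to 1 as M grows (Theorem 2)
\end{verbatim}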
{ "redpajama_set_name": "RedPajamaArXiv" }
\chapter*{Abstract} Subshifts are sets of configurations over an infinite grid defined by a set of forbidden patterns. In this thesis, we study two-dimensional subshifts of finite type ($2$D SFTs), where the underlying grid is $\mathbb Z^2$ and the set of forbidden patterns is finite. We are mainly interested in the interplay between the computational power of $2$D SFTs and their geometry, examined through the concept of expansive subdynamics. $2$D SFTs with expansive directions form an interesting and natural class of subshifts that lie between dimensions $1$ and $2$. An SFT that has only one non-expansive direction is called extremely expansive. We prove that in many aspects, extremely expansive $2$D SFTs display the totality of behaviours of general $2$D SFTs. For example, we construct an aperiodic extremely expansive $2$D SFT and we prove that the emptiness problem is undecidable even when restricted to the class of extremely expansive $2$D SFTs. We also prove that every Medvedev class contains an extremely expansive $2$D SFT and we provide a characterization of the sets of directions that can be the set of non-expansive directions of a $2$D SFT. Finally, we prove that for every computable sequence of $2$D SFTs with an expansive direction, there exists a universal object that simulates all of the elements of the sequence. We use the so-called hierarchical, self-simulating or fixed-point method for constructing $2$D SFTs, which has previously been used by G\'{a}cs, Durand, Romashchenko and Shen. \tableofcontents \cleardoublepage \pagenumbering{arabic} \chapter{Historical overview} This thesis is about two-dimensional subshifts of finite type ($2$D SFTs), and more specifically, the behaviour of $2$D SFTs with respect to a dynamical-geometrical notion called expansive subdynamics. The mathematical study of $2$D SFTs began with the paper of Wang \cite{wang}. A \dfn{Wang tile set} consists of a finite number of unit squares with coloured edges, which are called \dfn{tiles}. A valid tiling is a way to fill the entire plane with tiles such that the squares are edge-to-edge and such that the colors of abutting edges are the same. Wang asked the following question about Wang tile sets, which is called the \dfn{tiling problem}: Does there exist an algorithm that takes as input an arbitrary Wang tile set and decides whether it admits a valid tiling? He conjectured that the answer to this question is positive and proved that the problem is closely related to the problem of the existence of an \dfn{aperiodic tile set}, that is, a tile set that admits some valid tiling but no periodic valid tiling. However, Berger \cite{berger} proved that this is not the case. In fact, he proved that the tiling problem is undecidable. In addition, his proof contained an explicit construction of an aperiodic tile set. Later, several authors gave alternative constructions of aperiodic tile sets and proofs of the undecidability of the tiling problem \cite{robinson,jarkkosmall,jarkkoundec}. There is an alternative way of looking at and talking about the same problem. Let $\A$ be a finite set, called the \dfn{alphabet}. A (two-dimensional, or $2$D) \dfn{configuration} is a map $c \colon \mathbb Z^2 \to \A$. The set of all configurations $\A^{\mathbb Z^2}$ is called the \dfn{full shift}. A \dfn{pattern} is a map $p \colon D \to \A$, where $D \subseteq \mathbb Z^2$ is a \emph{finite} set. Let $\mathcal{F}$ be a set of \dfn{forbidden} patterns.
The corresponding ($2$D) \dfn{subshift} $X_{\mathcal{F}}$ is the set of all configurations that avoid the patterns of $\mathcal{F}$: a configuration $c$ belongs to $X_{\mathcal{F}}$ if and only if no pattern of $\mathcal{F}$ occurs in $c$ (at any position). If $X=X_{\mathcal{F}}$ for some \emph{finite} set of forbidden patterns, then it is called a 2D subshift of finite type (SFT). In this thesis, we will only talk about $2$D subshifts and SFTs, so we will usually omit the dimension, except in statements of theorems. It is not difficult to see that the set of valid tilings of a Wang tile set is an SFT. In addition, for every SFT, we can construct a Wang tile set whose set of valid tilings is, in some sense, equivalent to the given SFT. The tiling problem can thus be rephrased as the \dfn{emptiness problem} for SFTs: Given a finite set of forbidden patterns, can we algorithmically decide whether $X_{\mathcal{F}} \neq \emptyset$? The undecidability of the tiling problem is then immediately translated into the undecidability of the emptiness problem for SFTs. Wang tiles and forbidden patterns give a geometrical definition of SFTs, but there also exists an equivalent dynamical definition. First of all, the full shift can be endowed with the product topology of the discrete topology on $\A$. This gives rise to a compact, metrizable topological space. The \dfn{horizontal} and \dfn{vertical shifts}, which consist in moving a configuration one step to the left and up, respectively, are continuous with respect to this topology and obviously commute. This defines a $\mathbb Z^2$-action over the full shift and we can study it using the usual tools of topological dynamics. For example, one can prove that subshifts are exactly the closed, shift-invariant subsets of the full shift, or, equivalently, the subsystems of the full shift. SFTs correspond to the chain-mixing subsystems of the full shift. More importantly, for the purposes of this thesis, we can study $2$D SFTs from the point of view of their \dfn{expansive subdynamics}. This notion was defined by Boyle and Lind \cite{expsubd} as a tool for studying multidimensional dynamical systems by looking at the (lower-dimensional) actions induced by the subgroups of the original action. Intuitively, this is the same as when we look at the lower-dimensional projections of a surface in order to understand some of its properties. The general definition of expansive subdynamics and the main results of \cite{expsubd} fall outside the scope of this thesis. However, for $2$D subshifts there exists an equivalent, natural geometrical definition. Let $X \subseteq \A^{\mathbb Z^2}$ be a subshift, $l\in\Rb\defeq \mathbb R \sqcup \{\infty\}$ a slope and $\lin{l}\subset\mathbb R^{2}$ the corresponding line that passes through the origin. We say that $l$ is an \dfn{expansive direction} of $X$ if there exists a bounded shape $V\subset\mathbb R^2$ such that, for all $x,y \in X$, \[x\restr{(\lin{l}+V) \cap \mathbb Z^2}=y\restr{(\lin{l}+V) \cap \mathbb Z^2}\impl x=y~.\] In other words, there exists a fixed width $b >0$ such that every configuration of $X$ is uniquely defined by its restriction to the strip of slope $l$ and width $b$ that goes through the origin (in fact, by shift invariance, by any such strip). Geometrically, this means that in $X$ the ($2$D) information of the configuration is \xpr{packed} inside the one-dimensional strip of slope $l$.
In some sense, even though $X$ is a two-dimensional object, it is determined by a one-dimensional strip, so that subshifts with directions of expansiveness are somewhere between dimensions $1$ and $2$. A direction that is not expansive is called \dfn{non-expansive}. Let $\NE(X)$ be the set of non-expansive directions of $X$. Boyle and Lind proved that $\NE(X)$ is closed in the one-point compactification topology of $\Rb$ and that $\NE(X) \neq \emptyset$ if and only if $X$ is infinite. Since finite subshifts are rather trivial, the most restricted non-trivial case with respect to non-expansive directions is the case when $X$ has a unique direction of non-expansiveness. We call such a subshift \dfn{extremely expansive}. Extremely expansive SFTs form the main object of interest in this thesis. We prove that in many aspects, extremely expansive SFTs are computationally as powerful as general SFTs. Before stating the results, we find it useful to talk about another class of SFTs with many directions of non-expansiveness, namely those that arise from deterministic tile sets. A tile set is called \dfn{NW-deterministic} (the initials stand for North and West) if every tile is uniquely determined by the colours of its top and left sides \cite{nilpind}. Similarly, we can define SW, SE and NE deterministic tile sets (S and E stand for South and East, respectively). A tile set is called \dfn{4-way deterministic} if it is SW, NW, SE and NE deterministic \cite{karipapasoglou}. One can easily see that for the SFT associated to a 4-way deterministic tile set and for every direction $l$ that is not the vertical or the horizontal one (slopes $0$ and $\infty$, respectively), $l$ is an expansive direction. Guillon, Kari and Zinoviadis recently proved \cite{pierreunpub} that the vertical and the horizontal direction must indeed be non-expansive unless the associated SFT is in some sense trivial, namely vertically or horizontally periodic. We can now start stating the results of the thesis. The first result concerns the existence of an aperiodic extremely expansive SFT. As mentioned earlier, for the unrestricted case, there exist various constructions of aperiodic SFTs. Kari and Papasoglou \cite{karipapasoglou} have constructed an aperiodic 4-way deterministic tile set. According to what was said in the previous paragraph, the SFT associated to this tile set has exactly two non-expansive directions, the vertical and the horizontal one. We prove that \begin{theorem} There exists an aperiodic extremely expansive $2$D SFT. \end{theorem} Of course, our construction does not use a 4-way deterministic tile set. It might seem that this result is strictly better than the one using 4-way deterministic tile sets, since we have one non-expansive direction less. However, there exists a small nuance here: 4-way deterministic tile sets give rise to SFTs with so-called \dfn{bounded radii of expansiveness}, while our construction does not have this property. In addition, in \cite{pierreunpub} it is also proved that every aperiodic SFT with bounded radii of expansiveness must have at least two non-expansive directions. Therefore, the 4-way deterministic construction is also optimal in the class of SFTs with bounded radii of expansiveness, and it might be more precise to say that the two results are incomparable. As mentioned already, the first aperiodic tile set was originally constructed in order to prove that the tiling problem is undecidable.
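To make the edge-matching and determinism conditions concrete, here is a small illustrative Python sketch (ours, with hypothetical names; it is not part of any of the constructions discussed): tiles are represented as (north, east, south, west) colour tuples, the first function checks local validity of a finite block, and the second fills a quadrant from its boundary when the tile set is given as an NW-deterministic lookup table.
\begin{verbatim}
# Tiles are (north, east, south, west) colour tuples.

def is_locally_valid(block):
    """block: 2D list of tiles, row 0 on top.  True iff all abutting
    edges carry the same colour."""
    rows, cols = len(block), len(block[0])
    for i in range(rows):
        for j in range(cols):
            n, e, s, w = block[i][j]
            if j + 1 < cols and e != block[i][j + 1][3]:
                return False   # east edge vs west of right neighbour
            if i + 1 < rows and s != block[i + 1][j][0]:
                return False   # south edge vs north of lower neighbour
    return True

def fill_quadrant(rule, top, left, size):
    """NW-determinism: `rule` maps (north colour, west colour) to the
    unique matching tile, so the top row and left column of colours
    determine the whole size x size square (None if undefined)."""
    grid = [[None] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            north = top[j] if i == 0 else grid[i - 1][j][2]
            west = left[i] if j == 0 else grid[i][j - 1][1]
            tile = rule.get((north, west))
            if tile is None:
                return None
            grid[i][j] = tile
    return grid
\end{verbatim}
By compactness, a tile set admits a valid tiling of the plane if and only if it admits locally valid $k\times k$ blocks for every $k$, so the first function yields a semi-algorithm for the \emph{non-existence} of a valid tiling; the second shows how determinism concentrates the two-dimensional information of a quadrant on its one-dimensional boundary.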
Kari \cite{nilpind} proved that the tiling problem for NW-deterministic tile sets is undecidable. In addition, Lukkarila \cite{lukkarila} used the 4-way deterministic tile set of Kari and Papasoglou in order to prove that the tiling problem is undecidable for 4-way deterministic tile sets as well. As the reader has probably guessed already, we prove that \begin{theorem} The emptiness problem of extremely expansive $2$D SFTs is undecidable. More precisely, the emptiness problem is undecidable for $2$D SFTs such that the vertical direction is the only non-expansive direction. \end{theorem} One should understand the previous statement in the following sense: even if one is given an SFT $X$ (as a finite set of forbidden patterns) and is given the additional information that $X$ is either empty or extremely expansive (and in this case $\NE(X)=\{0\}$), even then it is not possible to decide whether $X=\emptyset$. In other words, it is not possible to algorithmically separate the sets of forbidden patterns that define empty SFTs from those that define extremely expansive non-empty SFTs. The third result can be considered a stronger version of the undecidability of the emptiness problem. We prove that there exist extremely expansive SFTs whose configurations are computationally as complicated as possible. In order to describe this result, we need to introduce some classical notions of computation theory. For the purposes of this introduction, a \dfn{computable function} will mean a function $f \colon \A^{\mathbb N} \to \A^{\mathbb N}$ such that there exists a Turing machine that outputs $f(c)$ when originally its reading tape contains $c$ (\textit{i.e.}, it outputs $f(c)$ with oracle $c$). Using an effective enumeration of $\mathbb Z^2$, it is possible to talk about computable functions with domain or range $\A^{\mathbb Z^2}$, and in general $\A^\mathbb M$, where $\mathbb M$ is any effectively enumerable set. We say that $d \in \A^{\mathbb M}$ is \dfn{reducible} to $c \in \A^{\mathbb M'}$ if there exists a computable function $f$ such that $f(c)=d$. This means that $c$ is computationally at least as complicated as $d$, since it is possible to obtain $d$ using $c$ and a computable function. A subset $Y \subseteq \A^{\mathbb M}$ is called \dfn{Medvedev} reducible to $X \subseteq \A^{\mathbb M'}$ if (roughly speaking) every point of $Y$ is reducible to some point of $X$. Intuitively, we can compute any point of $Y$ with the help of a suitable point of $X$ and a computable function. The relation of Medvedev reducibility is a pre-order on subsets. Two sets are called Medvedev equivalent if they are Medvedev reducible to each other. This is an equivalence relation, whose equivalence classes are called \dfn{Medvedev degrees}. There exists a partial order on the set of Medvedev degrees given by the natural lift of the Medvedev reducibility pre-order. Computable sets form the least element of this order and, in a certain sense, the higher a set is in this hierarchy, the more difficult it is to compute a point of this set relative to the sets that lie lower in the hierarchy. The survey \cite{hinman} contains a thorough study of Medvedev degrees. A set $X \subseteq \A^{\mathbb M}$ is called \dfn{effectively closed} if its complement is semi-decidable. Effectively closed sets form the so-called $\Pi_{0}^{1}$ sets and they play a very important role in computation theory.
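To make this last notion concrete, here is a minimal Python sketch (ours): effective closedness of $X$ means that a computably enumerable set of forbidden prefixes semi-decides the complement, so leaving $X$ is confirmed after finitely many steps, while membership in $X$ never is.
\begin{verbatim}
# Minimal sketch: X is effectively closed when some computably
# enumerable set of forbidden prefixes semi-decides its complement.

def refuted(prefix, forbidden_so_far):
    """prefix: finite initial segment of an infinite binary sequence;
    forbidden_so_far: forbidden prefixes enumerated up to some stage.
    True iff the prefix already witnesses that no extension is in X."""
    return any(prefix[:k] in forbidden_so_far
               for k in range(len(prefix) + 1))

# x lies in X iff refuted(x[:n], F_s) is False for every n and every
# stage s of the enumeration: a universally quantified (Pi^0_1) condition.
\end{verbatim}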
It is easy to see that SFTs are effectively closed, even though there exist many effectively closed sets (and even effectively closed subshifts) that are not SFTs. However, Simpson \cite{simpson} proved that every effective Medvedev degree (\textit{i.e.}, the Medvedev degree of an effectively closed set) contains a 2D SFT. Therefore, in some sense, not only is the emptiness problem undecidable for 2D SFTs, but their points can be as difficult to compute as possible. We improve this result to the extremely expansive case: \begin{theorem} Every effective Medvedev degree contains an extremely expansive 2D SFT. In other words, for every effectively closed set $Z\subseteq \A^{\mathbb M}$, there exists an extremely expansive 2D SFT $Y$ that is Medvedev equivalent to $Z$. \end{theorem} In fact, we prove something stronger, giving a complete characterization of the so-called \dfn{Turing degrees} of $Y$ relative to those of $Z$, but it is not necessary to go into these details here. The next result is of a dynamical flavour and it does not concern extremely expansive SFTs, but sequences of SFTs with a common rational direction of expansiveness. It also uses the notion of simulation, which is of central importance in the proofs of the previous results and, in general, for the whole thesis, even though it was not mentioned until now. We say that subshift $X \subseteq \A^{\mathbb Z^2}$ \dfn{simulates} subshift $Y \subseteq \B^{\mathbb Z^2}$ with parameters $(S,T)$ if there exists a $\B$-colouring of the $S \times T$ blocks of $X$ with the following property: Every configuration of $X$ can be partitioned in a unique way into $S \times T$ rectangles such that when we colour these rectangles with the $\B$-colouring we obtain a configuration of $Y$. Conversely, every configuration of $Y$ can be obtained in this way. This is weaker than the notion of simulation that we actually use, but it follows from it, is enough to state the result, and is much easier to describe. It corresponds to the definitions in \cite{drs}. It was proved in \cite{laffite} that for every computable sequence of SFTs, there exists an SFT that simulates all of them. This is a surprising and remarkably strong result. We prove a version of it in the case where all the SFTs of the sequence have a common, rational expansive direction (which without loss of generality we assume to be the horizontal one): \begin{theorem} Let $X_0,X_1, \ldots$ be a computable sequence of 2D SFTs such that $0 \in \NE(X_i)$, for all $i \in \mathbb N$. Then, there exists a 2D SFT $X$ such that $X$ simulates $X_i$ for all $i \in \mathbb N$ and $0 \in \NE(X)$. \end{theorem} We note that there cannot exist a 2D SFT with an expansive direction that simulates all 2D SFTs with the same expansive direction, because this would imply the decidability of the emptiness problem for extremely expansive SFTs, according to an argument of Hochman \cite{hochmanuniv}. The final result of the thesis answers a natural question which arises immediately after the construction of an extremely expansive SFT. As stated already, the unique non-expansive direction of the SFT that we construct is the vertical one. Which other directions can be the unique direction of non-expansiveness for 2D SFTs? Obviously, we can achieve any rational direction by rotating with elements of $SL_2(\mathbb Z)$, but can we do more? More generally, what are the sets of directions that can be the set of non-expansive directions of a 2D SFT?
Hochman \cite{nexpdir} proved that for general 2D subshifts (not necessarily of finite type, or even effective), any closed set of directions can be the set of non-expansive directions, while any direction can be the unique direction of non-expansiveness. Recall that Boyle and Lind proved that the sets of non-expansive directions must be closed, so it turns out that in the case of general subshifts this necessary topological condition is also sufficient. In the case of SFTs, there is an additional necessary condition, namely that the set of non-expansive directions be \dfn{effectively closed}, which is equivalent to saying that its complement is the union of an increasing, computable sequence of open intervals. It turns out that this condition is necessary and sufficient for 2D SFTs: \begin{theorem} A set of directions $\NE$ is the set of non-expansive directions of a 2D SFT if and only if it is effectively closed. More precisely, a direction $l$ is the unique direction of non-expansiveness of a 2D SFT if and only if it is computable. \end{theorem} This answers Question~11.2 in Boyle's Open Problems for Symbolic Dynamics \cite{opsd}. Using our methods, we could easily prove Theorems~1--3 for SFTs whose unique direction of non-expansiveness is $l$, where $l$ is any computable direction. This is a stronger version of the results, which we do not prove for lack of space. In any case, once one has mastered our method, it is possible to prove various new results and variants of already proved ones. Since this method is as important as (if not more important than) some of our results, it is probably worth saying some words about its history, too. It is the so-called \dfn{fixed-point tile} or \dfn{self-simulating} method for constructing 2D SFTs. It was first described by Kurdyumov \cite{kurdyumov} in order to give a counterexample to the Positive Rates conjecture, even though only a sketch of a proof was included in that paper. It was G\'{a}cs \cite{gacs1} who elaborated Kurdyumov's idea into a complete counterexample to the Positive Rates conjecture and formalized the notion of a hierarchy of simulating SFTs (he talks about 1D cellular automata, but this does not make a big difference). Later, he significantly improved his construction and the result in a notoriously lengthy and difficult paper \cite{gacs}. Gray's reader guide to that paper \cite{gray} and the description therein of self-simulation and the problems one encounters when trying to construct a self-simulating SFT are also a very useful exposition of the ideas of G\'{a}cs and Kurdyumov. It was not until the work of Durand, Romashchenko and Shen \cite{drs} that the method became accessible to a broader mathematical audience. They work in the framework of 2D SFTs, which allows for a clearer, geometrical description of the basic ideas. G\'{a}cs' construction did not have any direction of expansiveness, because it was a non-reversible cellular automaton. Nonetheless, it had the horizontal direction as a direction of \xpr{semi-expansiveness}. On the other hand, the construction of Durand, Romashchenko and Shen had neither directions of expansiveness nor directions of \xpr{semi-expansiveness}. A large part of this thesis consists in making their construction expansive in the horizontal direction. We need to introduce some tricks in order to do this, but once we achieve it, self-simulation and a previous result of Hochman immediately give an extremely expansive aperiodic SFT.
Something similar was also done in \cite{zinoviadis1}, but the construction of that paper was significantly easier because we dealt with non-reversible cellular automata, so that we only needed a direction of \xpr{semi-expansiveness}. Our current construction can be seen as an improvement of the construction of that paper, and using it we can easily retrieve its main result, which was a characterization of the numbers that can appear as the topological entropy of a (not necessarily reversible) CA. One thing that all the constructions have in common, including ours, is that they are complicated and rather difficult to explain (for the writer) and understand (for the reader). This is unavoidable, to some degree, and the author's personal opinion is that there does not exist a \xpr{perfect} way to write them. Either the exposition is very formal, covering all details and defining every little thing, which is the road that we have chosen, or the construction is informal, in which case it is not clear what exactly the constructed SFT is, over which alphabet it is defined etc., which is the choice made by Durand, Romashchenko and Shen. Taking the middle road, as was more or less done by G\'{a}cs, does not help very much, either. Our opinion is that the best thing is to be familiar with \emph{all} the constructions and use them accordingly. On the one hand, the constructions of Durand, Romashchenko and Shen are convincing for someone already familiar with the technique and they allow one to explain a new idea concisely and efficiently, as was recently done in \cite{drs2}, while on the other hand our more formal presentation can be used to acquire mastery of the technique by dealing with all the unexpected little problems that arise during the construction and to convince those people who want to understand all the details. Let us now describe the structure of the thesis: In Chapter~\ref{s:prelim}, we give the basic definitions that we will need throughout the thesis. In Chapter~\ref{s:simul}, we define the precise notion of simulation that we will use and give some of its properties. We believe that some of the results of this chapter are of independent interest. In Chapter~\ref{c:programming}, we describe a pseudo-programming language that will be used to describe 2D SFTs in a concise way. In Chapter~\ref{construction}, we construct a family of SFTs (which depends on the parameters $S,T$) with $0$ as a direction of expansiveness which are, in some sense, universal: They can simulate every SFT with $0$ as a direction of expansiveness, provided that its alphabet size is small compared to $S,T$ and it can be computed fast compared to $S,T$. This family of SFTs is of great importance for all subsequent constructions. This is the part of the thesis where we modify the construction of Durand, Romashchenko and Shen so as to make it reversible. In Chapter~\ref{s:hierarchy}, we prove Theorems~1--4. The constructions and the proofs all follow the same pattern, but we give as many details as possible for all of them for reasons of completeness. Finally, in Chapter~\ref{sec:expdir}, we prove Theorem~5. This proof is a modification of the proof of the result in \cite{nexpdir}. We try to explain what the differences between that construction and ours are and why the changes that we make are necessary. Let us also mention that all of the aforementioned results have been obtained in collaboration with Pierre Guillon during various visits of his to Turku, as well as of the author to Marseille.
Currently, a series of joint papers containing even more applications of our method is in preparation. Theorem~1 has also appeared in \cite{zinoviadis2}, even though, for lack of space, most of the details of the construction do not appear in that paper. \chapter{Preliminaries}\label{s:prelim} \section{Basic definitions}\label{s:encoding} We will denote by $\mathbb Z$, $\mathbb N$, ${\mathbb N}_1$, $\mathbb Q$ and $\mathbb R$ the sets of integers, non-negative integers, positive integers, rational and real numbers, respectively, by $\co ij$ and $\cc ij$ the integer intervals $\{i,\ldots,j-1\}$ and $\{i,\ldots,j\}$, respectively, while $[\varepsilon,\delta]$ will denote an interval of real numbers. If $f,g \colon \mathbb N \to \mathbb N$, then we will use the classical $f \in O(g)$ notation to denote that $f(n) \le c g(n)$, for some constant $c$ and all $n \in \mathbb N$. If $f \colon X \pto Y$ is a partial function, then its \dfn{domain} $\mathcal{D}(f) \subseteq X$ is the set of elements of $X$ whose image through $f$ is defined. Two partial functions are equal when they have the same domain and they agree on their common domain. If $f \colon X \pto Y$ and $g \colon Y \pto Z$ are partial functions, then $g\circ f \colon X \pto Z$ is the partial function defined in the usual way (\textit{i.e.}, $g(f(w))$ does not exist if either $w \notin \mathcal{D}(f)$ or $f(w) \notin\mathcal{D}(g)$). A \dfn{partial permutation} is a bijection from its domain onto its range, \textit{i.e.}, an injective partial map. In the following, when defining a partial function, it will be implicit that any non-treated argument has an undefined image, and that saying that two partial functions are equal means in particular that their domains are the same. If $Z\subset X$ and $f:X\pto Y$, we may abusively consider $f\restr{Z}$ as a partial map from $X$ to $Y$ whose domain is $Z\cap\mathcal{D}(f)$. For $m\ge n$, $\vec T\defeq(T_i)_{0\le i<m}$ and $\vec t\defeq(t_i)_{0\le i< n}$ such that for all $i$, $0\le t_i<T_i$, we write $\anib[\vec T]{\vec t}\defeq\sum_{0\le i < n}t_i\prod_{0\le j < i}T_j$ for the numeric value represented by the adic representation $\vec t$ in base $\vec T$. In general, $T_i$ and $t_i$ can belong to $\mathbb R$, not necessarily to $\mathbb N$. By convention, if $\vec t$ has length $0$, then $\anib[\vec T]{\vec t}\defeq0$. Similarly, for a sequence $\seq T\defeq(T_i)_{i\in\mathbb N}$, we write $\anib[\seq T]{\vec t}\defeq\anib[T_{\co0n}]{\vec t}$. For a sequence $\seq t \defeq(t_i)_{i \in \mathbb N}$, we write $\anib[\seq T]{\seq t}\defeq\lim\anib[T_{\co0n}]{t_{\co{0}{n}}}$, when this limit exists. An \dfn{alphabet} is any finite set, whose elements are often called \dfn{symbols}. If $\A$ is an alphabet, $\A^*\defeq\bigcup_{n\in\mathbb N}\A^n$ denotes the set of finite \dfn{words} over $\A$, and $\A^{**} \defeq \bigcup_{m\in\mathbb N}{(\A^*)^m}$ the set of finite tuples of words. (Notice that the notation $\A^{**}$ is a little ambiguous, as it could also stand for the set $\bigcup_{m\in\mathbb N}{(\A^m)^*}$. Obviously, the two interpretations are isomorphic, but they are different objects.) The \dfn{empty} word is denoted by $\motvide \in \A^*$. If $w \in \A^n$, we write $w=w_0\cdots w_{n-1}$, and call $\length{w}\defeq n$ the \dfn{length} of $w$.
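As a quick illustration of the adic notation above, the following Python sketch (ours, not part of the construction) computes $\anib[\vec T]{\vec t}$ under the standard mixed-radix convention, in which digit $t_i$ carries the weight $T_0\cdots T_{i-1}$.
\begin{verbatim}
# Minimal sketch: mixed-radix ("adic") value of a digit vector t in
# base T, with len(t) <= len(T) and 0 <= t[i] < T[i] for all i.
def adic_value(t, T):
    assert len(t) <= len(T) and all(0 <= ti < Ti for ti, Ti in zip(t, T))
    value, weight = 0, 1
    for ti, Ti in zip(t, T):
        value += ti * weight   # digit t_i has weight T_0 * ... * T_{i-1}
        weight *= Ti
    return value

# Least-significant-digit-first binary: (1,0,1) in base (2,2,2) is 5.
assert adic_value([1, 0, 1], [2, 2, 2]) == 5
\end{verbatim}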
If $\vec u\in (\A^*)^m$, we write $\vec u=(u_0,\ldots,u_{m-1})$, and $\length{\vec u}\defeq(\length{u_0},\ldots,\length{u_{m-1}})$. For every $i \in \mathbb N$, we define the projection $\pi_i$ as a partial function $\pi_i \colon \A^{**} \pto \A^{*}$: $\pi_i(\vec u)=u_i$ if $\vec u \in (\A^{*})^m$ with $m>i$ (and $\pi_i(\vec u)$ is undefined otherwise). A \dfn{field} is a projection $\pi_i$ together with a label \field, written in typewriter font. The notion of fields is simply a convenient way of talking about tuples of words. The names of the fields will be chosen so as to reflect the role that the field plays in the construction. We write $\mathbb N^*\defeq\bigcup_{m\in\mathbb N}\mathbb N^m$ for the set of integer tuples of any dimension, where $m$ is the \dfn{dimension} of the tuple $\vec k\defeq(k_0,\ldots,k_{m-1})\in\mathbb N^m$. Let $\A^{\vec{k}}\defeq\A^{k_0}\times\ldots\times\A^{k_{m-1}}$; any subalphabet of $\A^{\vec{k}}$ is said to have \dfn{constant lengths}. We will mainly use the special alphabets $\haine n\defeq\{0,\ldots,n-1\}$, for $n\in\{2,3,4,5\}$. Of course, instead of $\haine5$ we could use any alphabet with five letters. However, since some letters will have a fixed role throughout the thesis, it is better to fix the notation and get used to these roles. The non-negative integers can be easily embedded into $\haine2^*$ thanks to the injection $n\mapsto\anib n$, which gives the shortest binary representation of $n \geq 1$. $\norm n\defeq\length{\anib n}=\spart{\log_2n}+1$ is the \dfn{length of $n$}. By definition, $\anib{0} \defeq \motvide$ and $\norm{0} \defeq 0$. Conversely, if $u \in \haine2^*$, then $\bina u$ is the number represented by $u$ in the binary representation system: for all $u\in\haine2^*$, $\anib{\bina u}$ is the suffix of $u$ that is obtained after removing the initial $0$s. (The \xpr{lower bar} is applied before the \xpr{top bar}.) We will also need to embed some finite sets in $\haine2^*$. For instance, we will say that $\{-1,+1\}$ is $\haine2$ by identifying $-1$ with $0$ and $+1$ with $1$. Finite alphabets of bigger cardinality can be embedded into $\haine2^k$, for some suitable $k$. Now, with a view to computing functions with many arguments, we are going to use the symbol $2$ to encode tuples into words. If $\vec u\in(\haine5^*)^m$ for some $m\in\mathbb N$, then $\Chi{\vec u}$ is defined as the concatenation \begin{equation*} \Chi{u_0}\Chi{u_1}\ldots\Chi{u_{m-1}}\in\haine3^*, \end{equation*} where $\Chi v\defeq2\double{v}$, and $v\mapsto\double v$ is some monoid injection (\textit{i.e.}, code) from $\haine5^*$ to $\haine2^*$. In this thesis, we will use the code defined by $\double0\defeq000$, $\double1\defeq001$, $\double2\defeq010$, $\double3\defeq011$, $\double4\defeq100$. Note that the structure of the encoding of word tuples depends only on $\length{\vec{u}}$. We can also define $\Chi{\seq u}\defeq\Chi{u_0}\Chi{u_1}\ldots\in\haine3^\mathbb N$ for $\seq u\in\haine5^{\mathbb N}$. Let us now prove a basic fact about $\Chi{\cdot}$. Namely, for every $\vec{k} \in \mathbb N^*$, there exists an easily computable function that gives the positions of the $2$s in encodings of $\haine5^{\vec{k}}$ and the positions of the encodings of the components of a letter. \begin{fact}\label{f:encodings} Let $M \in \mathbb N$ and $\vec{k} \in \mathbb N^M$. For all $0 \le i \le M$, let us define $l_{\vec{k},i}\defeq3\sum_{j=0}^{i-1}{k_j}+i$.
Then, for all $\vec{u} \in \haine5^{\vec{k}}$: \begin{enumerate} \item $\length{\Chi{\vec u}}=l_{\vec{k},M}$, \item ${\Chi{\vec u}}_{\co{l_{\vec{k},i}}{l_{\vec{k},i+1}}}= 2 \double{\pi_i(\vec{u})}$, for all $0 \le i < M$. \end{enumerate} \end{fact} These statements correspond to what Durand, Romashchenko and Shen mean when they say that ``the TM knows the place where such and such information is held in the encoding''. Symbol $3$ will be used in Subsection~\ref{s:turing} to encode the start and the end of the tape of a Turing machine. Symbol $4$ will be used in order to construct alphabets with constant lengths. In the computation, we indeed want words of various lengths to be able to represent the same objects. For this, we define $\sh[l]u\defeq4^{l-\length u}u$, for every $l\in\mathbb N$ and $u\in\haine4^*$ with $\length{u} \le l$ ($\sh[l]u$ is undefined otherwise). For instance, $\sh[\norm n]{\anib n}=\anib n$ for any integer $n\in\mathbb N$, and the encoding $\sh[l]\motvide=4^l$ of the empty word is a sequence of $4$s. It is clear that the partial function \[\papp {\mathbb N\times\haine4^*}{4^*\haine4^*}{(l,u)}{\sh[l]u}\] is injective (over its domain) and surjective; let us write $\hs w\in\haine4^*$ for the longest suffix in $\haine4^*$ of a word $w\in4^*\haine4^*$, in such a way that $\hs{\sh[l]u}=u$ for any $l\geq \length{u}$ and $u\in\haine4^*$. These two maps can be adapted to vectors in the obvious way: $\sh[\vec k]{\vec u}\defeq(\sh[k_0]{u_0},\ldots,\sh[k_{m-1}]{u_{m-1}})$ for any $\vec k\in\mathbb N^m$, $m\in\mathbb N$ and $\vec u\defeq(u_0,\ldots,u_{m-1})\in\haine4^{*m}$. Note that this is defined if and only if $\vec k\ge\length{\vec u}$. Similarly, $\hs{\vec w}\defeq(\hs{w_0},\ldots,\hs{w_{m-1}})$ for any $\vec w\defeq(w_0,\ldots,w_{m-1})\in(4^*\haine4^*)^m$. Recall that a partial permutation is simply an injective partial map. If $\alpha:\haine4^{**}\pto\haine4^{**}$ is a partial permutation that preserves the number of fields (\textit{i.e.}, $\alpha((\haine4^*)^l) \subseteq (\haine4^*)^l$ for all $l \in \mathbb N$), we can transform it into an equivalent permutation that also preserves the lengths: \[\pappl[\sh\alpha]{(4^*\haine4^*)^*}{(4^*\haine4^*)^*}{\vec w}{\sh[\length{\vec w}]{\alpha(\hs{\vec w})}~.}\] \begin{remark}\label{sharpization}~ \begin{itemize} \item $\sh{\alpha}$ is also a partial permutation. \item The restriction of $\alpha$ to any subalphabet is implemented by that of $\sh\alpha$ to large enough words: \[\forall \vec{u}\in\haine4^{**},\forall \vec k\ge\max\{\length{\vec{u}},\length{\alpha(\vec{u})}\},\sh\alpha(\sh[\vec k]{\vec{u}})=\sh[\vec k]{\alpha(\vec{u})}~.\] \end{itemize}\end{remark} \begin{proof} For the first part, assume that $\sh\alpha(\vec w)=\sh\alpha(\vec{w'})$. This implies that $\length{\vec w}=\length{\vec{w'}}$. In addition, $\alpha(\hs{\vec{w}})=\hs{\sh\alpha(\vec w)}=\hs{\sh\alpha(\vec w')}=\alpha(\hs{\vec{w'}})$. Since $\alpha$ is a partial permutation, this implies that $\hs{\vec{w}}=\hs{\vec{w'}}$. Therefore, $\vec{w}=\sh[\length{\vec{w}}]{\hs{\vec{w}}}=\sh[\length{\vec{w'}}]{\hs{\vec{w'}}}=\vec{w'}$. For the second part, let $\vec{u}\in\haine4^{**}$ and $\vec k\ge\max\{\length{\vec{u}},\length{\alpha(\vec{u})}\}$. Then, $\sh[\vec{k}]{\vec{u}}$ and $\sh[\vec{k}]{\alpha(\vec{u})}$ exist and $\length{\sh[\vec{k}]{\vec{u}}}=\length{\sh[\vec{k}]{\alpha(\vec{u})}}=\vec{k}$. Therefore, $\sh{\alpha}(\sh[\vec{k}]{\vec{u}})=\sh[\vec{k}]{\alpha(\hs{\sh[\vec{k}]{\vec{u}}})}=\sh[\vec{k}]{\alpha(\vec{u})}$.
\end{proof} In the rest of the thesis, we will often implicitly use Remark~\ref{sharpization}, both to construct partial permutations that preserve the lengths of the fields and to state and prove things about them. It allows us to describe the behaviour of a partial permutation $\alpha$, and then translate this result into the behaviour of $\sh{\alpha}$, provided that the lengths of the fields are sufficiently large, thus omitting the (confusing) $\hs{\cdot}$ and $\sh{\cdot}$ symbols. Let $\{i_1, \ldots, i_l\}$ be a set of fields, and $w\in\haine5^*$. Then, \begin{equation*} \emp[w]{i_1,\ldots,i_l} \defeq \set{\vec{u}}{\haine5^{**}}{\hs{\pi_{i_k}(u)}=w, \text{ for } k=1, \ldots,l} \end{equation*} is the set of all symbols that have fields $i_1,\ldots, i_l$ equal to $w$ (up to the application of $\hs{\cdot}$). If $n\in\mathbb N$, let \begin{equation*} \emp[n]{i_1,\ldots,i_l}\defeq\set{\vec u}{\haine5^{**}}{\bina{\hs{\pi_{i_k}(u)}}=n, \text{ for } k = 1, \ldots, l} \end{equation*} be the set of all symbols that have the value $n$ (in binary form) in the fields $i_1,\ldots,i_l$. \section{Computation} \subsection{Turing machines}\label{s:turing} The reader is assumed to be familiar with classical concepts in computability theory. We just fix some terminology and give a variant of a definition of Turing machines, imposing some additional technical restrictions which, however, do not restrict the computational power. A \dfn{Turing machine} (TM) is a partial (\xpr{global}) map $\mathcal{M}$ from $\haine4^\mathbb Z\times Q\times\mathbb Z$ into itself, where $Q\subset\haine2^*$ is a finite set of \dfn{states} containing the \dfn{initial} state $0$ and the \dfn{accepting} state $\motvide$, and depending on a partial \dfn{transition map} $\delta_\mathcal{M}:\haine4\times (Q\setminus\{\motvide\})\pto\haine4\times Q\times\{-1,+1\}$ such that: \[\mathcal{M}(z,q,j)=\soit{(z,q,j)&\si q=\motvide\\(z',q',j')& \text{ otherwise, where } (z'_j,q',j'-j)=\delta_\mathcal{M}(z_j,q)\\ & \text{ and } z'_i=z_i, \forall i\ne j~,}\] for any $(z,q,j)\in\haine4^\mathbb Z\times Q\times\mathbb Z$, which will sometimes be called a \dfn{machine configuration}, the first component being the \dfn{tape content}, the second the (head) \dfn{internal state}, the third the \dfn{head position}. The model of TM that we use satisfies the following assumptions, which, as can be easily seen, do not restrict the computational power of TM. \begin{itemize} \item There is only one tape, from which the TM reads the input and on which it writes the output. \item The internal states are words of $\haine2^*$ (this is just a semantic restriction). \item All machines have the same initial and accepting states, $0$ and $\motvide$, respectively. \item The global map is still defined after having accepted, and is then equal to the identity. \item There is no explicit rejecting state (instead, we use undefined transitions over non-accepting states). \item In every accepting transition, the head disappears and moves to the right. In other words, every accepting transition is of the form $\delta(a,q)=(a',\motvide,+1)$. This is a technical assumption which simplifies the construction of an IPPA that simulates $\mathcal{M}$ in Section~\ref{Jarkko}. \end{itemize} We denote by $\mathcal{M}^t$ the $t$-th power of the (global) map $\mathcal{M}$. If $\mathcal{M}^t(\pinf 3. \Chi{\vec{u}}3^{\infty},0,0)= (\pinf 3.
\Chi{\vec{u'}}3^{\infty},\motvide,j)$, for some $t \in \mathbb N$ and $j \in \mathbb Z$, then we say that $\mathcal{M}$ \dfn{halts} over (or \dfn{accepts}) \dfn{input} $\vec{u}\in\haine5^{**}$, and \dfn{outputs} $\vec{u'}\in\haine5^{**}$, and we define $f_\mathcal{M}(\vec{u})\defeq \vec{u'}$ and $t_\mathcal{M}(\vec{u})$ as the minimal $t$ for which this holds (if this never holds, or if $\pinf 3. \Chi{\vec{u}}3^{\infty}$ is rejected, then $t_\mathcal{M}(\vec{u})$ is undefined). Notice that $f_{\mathcal{M}}(\vec{u})$ is well-defined, since when the accepting state $\motvide$ appears, the machine configuration is no longer modified. We say that $\mathcal{M}$ \dfn{computes} the partial map $f_\mathcal{M}:\haine5^{**}\pto\haine5^{**}$, with time \dfn{complexity} \[\appl[t_\mathcal{M}]\mathbb N\N n{\max_{\length{\Chi{\vec{u}}}=n}t_\mathcal{M}(\vec{u})~,}\] where, by definition, the $\max$ is taken only over \emph{accepted} inputs. $t_{\mathcal{M}}$ is well-defined since there are only finitely many accepted inputs of each length. \subsection{Computability}\label{ss:comput} A partial function $f:\haine5^{**}\pto\haine5^{**}$ is called \dfn{computable} if there exists a TM $\mathcal{M}$ such that $f=f_\mathcal{M}$. Recall that integers (and finite sets) can be identified with words, hence allowing us to talk about computable maps between Cartesian products involving $\mathbb N$ and finite sets. We also say that a set $X \subseteq \haine4^{**}$ is \dfn{computable} if its characteristic function $\iota_X\colon \haine4^{**} \to \haine2$ is computable, and that it is \dfn{computably enumerable} if it is the domain of a computable function. We will say that a partial function $f:X\pto\haine5^{**}$, with $X\subset\haine5^{**}$, is \dfn{computable} if both $X$ and the extension of $f$ to $\haine5^{**}$ (by not defining images outside of $X$) are computable. A partial function $\Phi:\haine2^\mathbb N\pto\haine2^{\mathbb N}$ is called \dfn{computable} if there exists a TM $\mathcal{M}$ such that $x\in\mathcal{D}(\Phi)$ if and only if for all $n\in\mathbb N$, there exists $m\in\mathbb N$ such that $f_\mathcal{M}(x_{\co{0}{m}},n)$ is defined, in which case it is equal to $\Phi(x)_n$. Finally, by parametrizing $\mathbb Z$ with $\mathbb N$, we can talk about computable functions $\Phi:\haine2^\mathbb Z\pto\haine2^{\mathbb Z}$. An equivalent definition is that $\Phi:\haine2^\mathbb Z\pto\haine2^{\mathbb Z}$ is computable if there exists a TM $\mathcal{M}$ such that $x\in\mathcal{D}(\Phi)$ if and only if for all $n\in\mathbb N$, there exists $m\in\mathbb N$ such that $f_\mathcal{M}(x_{\co{-m}{m}},n)$ is defined, in which case it is equal to $\Phi(x)_n$. Since $\mathbb R$ can be identified with $\haine2^{\mathbb N}$, we can also talk about computable functions of real numbers. A partial function $\Psi:\mathbb R\pto\mathbb R$ is \dfn{computable} if there exists a computable function $f \colon \mathbb R \times \mathbb N \to \mathbb Q$ such that $\abs{\Psi(x)-f(x,n)} < 2^{-n}$, for all $x\in\mathcal{D}(\Psi)$ and $n\in\mathbb N$. This is the classical definition of computability for real functions and it says that we can compute better and better approximations of $\Psi(x)$.
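As an illustration of this machine model, here is a small Python sketch (ours, with hypothetical names): it encodes an input tuple with $\Chi{\cdot}$, runs the global map until the accepting state appears, and reads off $t_\mathcal{M}$; the tape is represented sparsely, with $3$ as the background symbol, matching the input convention $\pinf 3. \Chi{\vec{u}}3^{\infty}$.
\begin{verbatim}
DOUBLE = {'0': '000', '1': '001', '2': '010', '3': '011', '4': '100'}

def chi(u):
    """Chi(u_0)...Chi(u_{m-1}): each component prefixed by the symbol 2."""
    return ''.join('2' + ''.join(DOUBLE[a] for a in v) for v in u)

def step(delta, tape, state, head):
    """One application of the global map; the tape defaults to 3
    outside its written part.  State '' (the empty word) is accepting."""
    if state == '':
        return tape, state, head       # identity after acceptance
    key = (tape.get(head, 3), state)
    if key not in delta:
        return None                    # undefined transition: rejected
    symbol, new_state, move = delta[key]
    tape = dict(tape)
    tape[head] = symbol
    return tape, new_state, head + move

def run(delta, u, max_steps=10**6):
    """Iterate from state '0' at position 0 until acceptance;
    returns (final tape, t_M(u)) or None if rejected / not yet halted."""
    tape = {i: int(a) for i, a in enumerate(chi(u))}
    state, head, t = '0', 0, 0
    while state != '' and t < max_steps:
        nxt = step(delta, tape, state, head)
        if nxt is None:
            return None
        tape, state, head = nxt
        t += 1
    return (tape, t) if state == '' else None
\end{verbatim}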
If $\mathcal{M}$ is a TM, let \begin{equation*} \X_{\mathcal{M}}\defeq \set {z}{\haine2^\mathbb N}{\forall t\in\mathbb N,\mathcal{M}^t(\uinf3.z,0,0) \text{ exists and is not in } \haine4^{\mathbb Z}\times\{\motvide\}\times\mathbb Z} \end{equation*} be the set of one-sided binary sequences over which $\mathcal{M}$ runs for an infinite amount of time. We say that a subset $X\subset\haine2^{\mathbb N}$ is \dfn{effectively closed} (or $\Pi^0_1$) if $\Chi{X}=\X_{\mathcal{M}}$ for some TM $\mathcal{M}$, or equivalently if the set of words that do not prefix any sequence in it is computably enumerable. This can be extended to sets of sequences that can be encoded with words, in particular over finite alphabets: a subset $X\subset\prod_{t\in\mathbb N}\A_t$, where $\A_t$ is a finite subalphabet of $\haine5^*$, is \dfn{effectively closed} if $\Chi X=\X_{\mathcal{M}}$ for some TM $\mathcal{M}$ (we encode every finite alphabet with $\haine2^k$, for some suitable $k$ which depends on $t \in \mathbb N$). $\mathcal{M}$ is called \dfn{polynomial} if $t_{\mathcal{M}} \in O(P)$, for some polynomial $P$. A partial function $f$ is called \dfn{polynomially computable} if $f_{\mathcal{M}}=f$ for some polynomial TM $\mathcal{M}$. It is easy to see that the class of (polynomially) computable functions with this version of TM corresponds to the classical one. Analogously, $X$ is a polynomially computable set if its characteristic function $\iota_X$ is polynomially computable. We say that a function (or sequence) $f$ is \dfn{polynomially checkable} if it can be computed in time $O(P(\log f(n)))$, for some polynomial $P$. The terminology comes from the fact that even though $f$ might not be polynomially computable, its graph (\textit{i.e.}, the set of pairs element-image) is a polynomially computable set. For example, $f(n) = 2^{2^n}$ is a polynomially checkable sequence even though it is not polynomially computable. Instead of a universal TM, we use the following essentially equivalent fact: \begin{fact} There exists an injection that associates to each TM $\mathcal{M}$ a \dfn{program} $p_{\mathcal{M}} \in \haine4^{*}$ such that if we denote by $Q_p$ the state set of the TM corresponding to program $p$, then \begin{itemize} \item The language $\sett{p_{\mathcal{M}}}{\mathcal{M} \text{ is a TM}} \subseteq \haine4^*$ is polynomially decidable. \item The characteristic function $(p,q)\mapsto\iota_{Q_p}(q)$ that checks whether $q \in Q_p$ is polynomially computable. \item The \xpr{universal} transition rule \[\pappl[\delta_\U]{\haine4\times\haine4^*\times\haine4^*}{\haine4\times\haine4^*\times\{-1,+1\}}{(a,q,p_\mathcal{M})}{\delta_\mathcal{M}(a,q)~}\] is polynomially computable. \item In addition, $\card{Q_p} \le \length{p}$. (We can assume that $p$ contains a list of the states of $Q_p$.) \end{itemize} \end{fact} We will use the following notations: If $p$ is the program of a TM that computes a reversible function $f$, then $p^{-1}$ will denote the program of the inverse function $f^{-1}$ (it will always be computable in our constructions). Also, $t_p$ and $\X_p$ will be used to denote $t_{\mathcal{M}_p}$ and $\X_{\mathcal{M}_p}$, where $\mathcal{M}_p$ is the TM that corresponds to the program $p$. The first examples of polynomially computable functions, which will be most useful in the sequel, are the encodings presented in Subsection~\ref{s:encoding}. Clearly, $\sh[\cdot]\cdot$ and its (right) inverse $\hs\cdot$ are polynomially computable.
Moreover, the projections $\pi_i:\haine5^{**}\to\haine5^*$, for $i\in\mathbb N$, are polynomially computable, and so are the functions $(\vec{k},i) \to l_{\vec{k},i}$ (as defined in Fact~\ref{f:encodings}) and $\Chi{\cdot}$. \subsection{Degrees} In the following, $\mathbb M$ and $\mathbb M'$ can stand for either $\mathbb N$ or $\mathbb Z$. Two sets $X,Y \subseteq \haine2^{\mathbb M}$ are \dfn{computably homeomorphic} if there exists a computable bijection between them. We say that $d\in\haine2^{\mathbb M'}$ is \dfn{Turing-reducible} to $c\in\haine2^{\mathbb M}$ if $d = \Phi(c)$, for some computable function $\Phi$. This yields a preorder over configurations, whose equivalence classes are called \dfn{Turing degrees}. If $d$ is Turing-reducible to $c$, then, in a computational sense, $c$ is at least as complicated as $d$. A \dfn{cone} over degree $d$ is the set of Turing degrees that are at least as high as $d$. Moreover, we say that subset $Y\subset\haine2^{\mathbb M'}$ is \dfn{Medvedev-reducible} to subset $X\subset\haine2^{\mathbb M}$ if there is a computable partial function $\Phi:\haine2^{\mathbb M}\pto\haine2^{\mathbb M'}$ such that $\mathcal{D}(\Phi) \supseteq X$ and $\Phi(X)\subseteq Y$. This also yields a pre-order over sets, whose equivalence classes are called \dfn{Medvedev degrees}. Finally, we say that subset $Y\subset\haine2^{\mathbb M'}$ is \dfn{Mu\v cnik-reducible} to subset $X\subset\haine2^\mathbb M$ if, for every point of $X$, some point of $Y$ is Turing-reducible to it (but not necessarily in a uniform way, as in Medvedev-reducibility). This again yields a pre-order over sets, whose equivalence classes are called \dfn{Mu\v cnik degrees}. Medvedev and Mu\v cnik degrees of a set are an attempt to formalize the notion of how computationally difficult it is to compute a point of the set. Of course, computable homeomorphism implies having the same Turing degrees, which implies Medvedev-equivalence, which in turn implies Mu\v cnik-equivalence. We do not go into much detail, but these notions also make sense in the broader setting of effective topological spaces (see for instance \cite{gacshoyruprojas}). \section{Symbolic dynamics} $\A^{\Z^d}$ is the set of $d$-dimensional \dfn{configurations}, endowed with the product of the discrete topology, and with the \dfn{shift} dynamical system $\sigma$, defined as the action of $\mathbb Z^d$ by $(\sigma^\vec{i})_{\vec{i} \in \mathbb Z^d}$, where $\sigma^\vec{i}(x)_\vec{k}\defeq x_{\vec{i}+\vec{k}}$ for any configuration $x\in\A^{\Z^d}$ and any $\vec{i},\vec{k}\in\mathbb Z^d$. A \dfn{pattern} over a (usually finite) \dfn{support} $D\subset\mathbb Z^d$ is a map $p\in\A^D$. Two patterns $u_1\colon D_1 \to \A$ and $u_2 \colon D_2 \to \A$ are called \dfn{disjoint} if $D_1$ and $D_2$ are disjoint shapes of $\mathbb Z^d$. If $u_1,u_2$ are disjoint, let $u_1\vee u_2$ be the pattern over shape $D_1\sqcup D_2$ defined by $(u_1\vee u_2)(\vec{i})=u_{j}(\vec{i})$, if $\vec{i}\in D_j$, $j=1,2$. Inductively, we can define $\bigvee_{1\le i\le k}u_i$, when $u_1,\ldots,u_k$ are mutually disjoint patterns. Let $E,D\subset\mathbb Z^2$ be two shapes, and $u\in\A^D$ be a 2D pattern. We denote by $u_E$ the restriction of $u$ to $D\cap E$ (this is a pattern with support $D \cap E$). If $I \subseteq \mathbb Z$ and $(c_i)_{i \in I}$ is a family of configurations of $\A^{\mathbb Z}$, $|(c_i)_{i \in I}$ denotes the (possibly infinite) pattern $u \colon \mathbb Z \times I \to \A$ such that $u_{\mathbb Z \times \{i\}} = c_i$, for all $i \in I$.
Here we implicitly identify patterns on horizontal strips up to vertical translation. Formally, the domains of $u_{\mathbb Z \times \{i\}}$ and $c_i$ are not the same. If $I = \co{0}{n}$, then $|(c_0,\ldots,c_{n-1})$ is the horizontal strip of width $n$ obtained by putting $c_0, \ldots, c_{n-1}$ on top of each other (in this order). If $I = \mathbb Z$, then we obtain a configuration in $\A^{\mathbb Z^2}$. Let $x \in \A^{\Z^d}$ and $\vec S\defeq(S_0,\ldots,S_{d-1}) \in {\mathbb N}_1^{d}$. The \dfn{$\vec S$-bulking} (or higher-power representation) of $x$ is the configuration $\bulk[\vec S]x\in (\A^{S_0\times\ldots\times S_{d-1}})^{\mathbb Z^d}$ such that for any $\vec i=(i_0,\ldots,i_{d-1})\in\mathbb Z^d$, \begin{equation*} {\bulk[\vec S]x}_{\vec i}\defeq x_{\co{i_0S_0}{(i_0+1)S_0}\times\ldots\times\co{i_{d-1}S_{d-1}}{(i_{d-1}+1)S_{d-1}}}. \end{equation*} A ($d$-dimensional) \dfn{subshift} is a closed set $X\subset\A^{\Z^d}$ such that $\sigma^\vec{i}(X)=X$ for all $\vec{i}\in\mathbb Z^d$. Equivalently, $X$ is a subshift if and only if there exists a family of patterns $\mathcal{F}\subset\bigcup_{D\subfini\mathbb Z^d}{\A^{D}}$ such that \begin{equation*} X=\set x{\A^{\mathbb Z^d}}{\forall \vec{i}\in \mathbb Z^d,\forall D\subfini\mathbb Z^d,\sigma^{\vec{i}}(x)\restr{D}\notin\mathcal{F}}. \end{equation*} If $\mathcal{F}$ can be chosen finite, we say that $X$ is a \dfn{subshift of finite type} (SFT). If $\mathcal{F}$ can be chosen computably enumerable, then $X$ is called an \dfn{effective subshift}. A continuous map $\Phi$ from subshift $X$ to subshift $Y$ is a \dfn{morphism} if $\Phi\sigma^{\vec{i}}=\sigma^{\vec{i}}\Phi$ for every $\vec{i}\in\mathbb Z^d$. If it is surjective, then it is a \dfn{factor map}, and $Y$ is a \dfn{factor} of $X$ (this defines a preorder); if it is bijective, then it is a \dfn{conjugacy}, and $X$ and $Y$ are \dfn{conjugate} (this defines an equivalence relation). A subshift $Y \subseteq \A^{\mathbb Z^d}$ is called \dfn{sofic} if it is a factor of some SFT, which is then called a \dfn{cover} for $Y$. A configuration $x \in \A^{\mathbb Z^d}$ is called \dfn{periodic} with \dfn{period} $\vec{j} \in \mathbb Z^d\setminus\{\vec{0}\}$ if $\sigma^\vec{j}(x)=x$. A subshift $X$ is called \dfn{aperiodic} if it does not contain any periodic configurations. Abusing notation, we use the notations $\emp[w]{i_1,\ldots,i_l}$ and $\emp[n]{i_1, \ldots, i_l}$ (where $w \in \haine5^{*}$ and $n \in \mathbb N$) also for configurations. For example, if $c \in (\haine5^{**})^{\mathbb Z}$, we will say that $c \in \emp[w]{i_1,\ldots,i_l}$ if $c_i \in \emp[w]{i_1,\ldots,i_l}$ for all $i \in \mathbb Z$. Finally, for $N \in \mathbb N$ and $n \in \co{0}{N}$, let \begin{equation*} \per[n,N]{i_1,\ldots,i_l} \defeq \{c \in (\haine5^{**})^{\mathbb Z} \colon \bina{\hs{\pi_{i_k}(c_j)}}=j+n \mod N, \text{ for all } j \in\mathbb Z, 1 \le k \le l\} \end{equation*} be the set of all configurations such that \begin{equation*} \pi_{i_k}(c)=.\dinf{(n\ldots(N-1)01\ldots(n-1))}, \text{ for } 1 \le k \le l, \end{equation*} where, for any word $w$, $.\dinf{w}$ denotes the configuration $c$ which satisfies $c_{\co{j\length{w}}{(j+1)\length{w}}}=w$ for all $j \in \mathbb Z$. \section{Cellular automata}\label{s:ca} A (1D) \dfn{partial cellular automaton} (PCA) is a partial (\xpr{global}) continuous function $F:\A^{\Z}\pto\A^{\Z}$ whose domain is an SFT, and such that $F\sigma=\sigma F$.
Equivalently, by an extension of the so-called Curtis--Hedlund--Lyndon theorem, there exist a \dfn{neighbourhood} $V\subfini\mathbb Z$ and a partial \dfn{local rule} $f:\A^{V}\pto\A$ such that for all $z\in\A^{\Z}$, $F(z)$ is defined if and only if $f(z\restr{i+V})$ is defined for all $i\in\mathbb Z$, in which case $F(z)_i \defeq f(z\restr{i+V})$. If $V \subseteq \cc{-r}{r}$, then $r$ is called a \dfn{radius} of the PCA. The radius of a PCA is not uniquely determined. A PCA is called \dfn{reversible} (RPCA) if it is injective. In this case, it is known that there exists another RPCA, denoted by $F^{-1}$, such that $FF^{-1}$ and $F^{-1}F$ are restrictions of the identity, and $\mathcal{D}(F^{-1})=F(\A^{\Z})$ (the argument for this is similar to the one in \cite{hedlund}). In particular, there exist a so-called inverse radius and inverse local rule. If $r$ is both a radius and an inverse radius for an RPCA $F$, we call it a \dfn{bi-radius} for $F$. In the rest of the thesis, we only consider RPCA with bi-radius $1$. This is not a significant restriction, since these PCA and RPCA exhibit the whole range of computational and dynamical properties of general PCA and RPCA. For $t\in\mathbb N$, the $t^{\text{th}}$-\dfn{order range} of $F$ is the (sofic) subshift $\Omega_F^t\defeq F^t(\A^{\Z})\cap F^{-t}(\A^{\Z})$ and its \dfn{limit set} is the (effective) subshift $\Omega_F\defeq\Omega_F^\infty\defeq\bigcap_{t\in\mathbb N}\Omega_F^t$, containing all the configurations that are \emph{not} \dfn{ultimately rejected} (either in the past or the future). There is a canonical way to associate a 2D SFT $\orb F$ to an RPCA $F$: it consists of the infinite space-time diagrams of the configurations that are not ultimately rejected. Formally, $\orb F\defeq\sett{\orb[x]F}{x\in\Omega_F}$, where $\orb[x]F\defeq|(F^t(x))_{t\in\mathbb Z}\in\A^{\Z^2}$ for any $x\in\Omega_F$. One can see that $\orb F$ is conjugate to the $\mathbb Z^2$-action of $(F,\sigma)$ over $\Omega_F$. Note nevertheless that the same SFT may correspond to distinct RPCA (if the RPCA have different transient phases, \textit{i.e.}, they reject some configurations after different numbers of steps). A pattern $p\in\A^D$, with $D\subset\mathbb Z^2$, is \dfn{locally valid} for $f$ if for any $(i,t)\in D$ such that $C\defeq(i+\cc{-1}1)\times\{t-1\}\subset D$, we have $p_{(i,t)}=f(p\restr C)$. Note that, in general, this notion depends on the local rule and not only on the RPCA. By compactness, if there exist locally valid square patterns of arbitrarily large height and width, then $\orb F\ne\emptyset$, \textit{i.e.}, there are configurations which are never rejected. If $x\in F^{-t}(\A^{\Z})$, then $|(x,F(x),\ldots,F^t(x))$ is a \dfn{locally valid horizontal strip} of height $t+1$. The notion of a locally valid horizontal strip depends only on the RPCA and not on the local rule, \textit{i.e.}, it is a ``global'' notion. For every $m\in \mathbb N$ and $\vec{\delta}=(\delta_0,\ldots,\delta_{m-1})\in \{-1,0,1\}^m$, we define the shift product $\sigma^{\vec{\delta}} = \sigma^{\delta_0} \times \ldots \times \sigma^{\delta_{m-1}}$. A \dfn{partial partition} (cellular) \dfn{automaton} (PPA) is a PCA $F=\sigma^{\vec\delta}\circ\alpha$ over some alphabet $\A=\A_0\times\ldots\times \A_{m-1}$, where $\alpha$ is (the parallel synchronous application of) a partial permutation of $\A$. $-\delta_i$ is called the \dfn{direction} of field $i$.
The (counter-intuitive) \xpr{$-$} is due to the fact that the normal definition of $\sigma$ shifts everything to the left, while we are used to thinking of the positive direction as going to the right. So, if we want to have a field with speed $+1$, then we should apply $\sigma^{-1}$ to it. Every PPA is an RPCA with bi-radius $1$ and conversely every RPCA is essentially a PPA (see for instance \cite[Proposition~53]{jarkkoppa}). Note, however, that the inverse of a PPA is not, formally, exactly a PPA: the permutation is performed after the shifts, in the form $\alpha^{-1}\circ\sigma^{-\vec{\delta}}$. Nevertheless, it is conjugate, via $\alpha$, to the corresponding PPA. In order to define families of PPA that are somehow uniform, we consider the corresponding objects acting on infinite alphabets. A \dfn{partial partition automaton with infinite alphabet} (IPPA) is a partial map $F:(\haine5^{*m})^\mathbb Z\pto(\haine5^{*m})^\mathbb Z$, where $m\in\mathbb N$, $F=(\sigma^{\delta_0}\times\ldots\times\sigma^{\delta_{m-1}})\circ\alpha$, the $\sigma^{\delta_j}$ are shifts over $(\haine5^*)^\mathbb Z$ (that is, $\sigma(y)_i=y_{i+1}$ for any $y\in(\haine5^*)^\mathbb Z$ and $i\in\mathbb Z$), and $\alpha \colon \haine5^{*m} \to \haine5^{*m}$ is a partial (infinite) \dfn{permutation}. By restricting the domain and the co-domain of an IPPA to finite subsets of $\haine5^{*m}$, we obtain normal (finite) PPA. In our constructions, the permutation $\alpha$ will always be length-preserving and the restriction will be taken over an alphabet of the form $\haine5^{\vec{k}}$. If $F \colon \A^{\mathbb Z} \to \A^{\mathbb Z}$ and $G \colon \B^{\mathbb Z} \to \B^{\mathbb Z}$ are PCA, then we say that $G$ is a \dfn{factor} of $F$ if there exists a continuous map $H \colon \A^{\mathbb Z} \to \B^{\mathbb Z}$ such that $GH=HF$. If $F$ and $G$ are RPCA and $F$ factors onto $G$, then it is easy to see that $\orb{F}$ factors onto $\orb{G}$ through the map that sends $\orb[x]F$ to $\orb[H(x)]G$, for all $x \in \Omega_F$. However, the notion of factoring for RPCA is stronger, since it also takes into account the transient times of the RPCA, \textit{i.e.}, the number of steps for which the image of an ultimately rejected configuration is defined before it is rejected (which are not relevant in the corresponding 2D SFTs). Let $F_0,\ldots,F_{n-1}$ be RPCA such that $\mathcal{D}(F_i) \cap \mathcal{D}(F_j) = \emptyset$, for all $i \neq j$. Then, $\bigsqcup_{i\in \co{0}{n}}F_i$ denotes the map with domain $\bigsqcup_{i \in \co{0}{n}}\mathcal{D}(F_i)$ that agrees with $F_i$ on $\mathcal{D}(F_i)$, for all $i \in \co{0}{n}$. $\bigsqcup_{i\in \co{0}{n}}F_i$ is not always an RPCA, since there might be a configuration that is not in any $\mathcal{D}(F_i)$ but that locally looks everywhere like a configuration of the domains (which are SFTs). However, $\bigsqcup_{i\in \co{0}{n}}F_i$ is an RPCA if the $\mathcal{D}(F_i)$ are over pairwise disjoint alphabets, which will always be the case in this thesis. In this case, $\Omega_{\bigsqcup_{i\in \co{0}{n}}F_i}=\bigsqcup_{i \in \co{0}{n}} \Omega_{F_i}$ and $\orb{\bigsqcup_{i\in \co{0}{n}}F_i}=\bigsqcup_{i \in \co{0}{n}}\orb{F_i}$.
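To fix ideas, the following Python sketch (ours, on a finite cyclic configuration rather than on $\mathbb Z$) implements one step of a PPA and of its inverse; note how the inverse undoes the shifts \emph{before} applying $\alpha^{-1}$, as remarked above.
\begin{verbatim}
def ppa_step(alpha, delta, config):
    """F = shift^delta o alpha.  config: list of m-tuples (cyclic);
    alpha: dict implementing a partial permutation of m-tuples.
    A field with delta[k] = -1 moves right, i.e. has direction +1."""
    permuted = [alpha.get(cell) for cell in config]
    if None in permuted:
        return None                    # configuration outside the domain
    n, m = len(config), len(delta)
    return [tuple(permuted[(i + delta[k]) % n][k] for k in range(m))
            for i in range(n)]

def ppa_step_inverse(alpha_inv, delta, config):
    """Inverse map alpha^{-1} o shift^{-delta}: undo the shifts first,
    then apply the inverse permutation cellwise."""
    n, m = len(config), len(delta)
    unshifted = [tuple(config[(i - delta[k]) % n][k] for k in range(m))
                 for i in range(n)]
    cells = [alpha_inv.get(cell) for cell in unshifted]
    return None if None in cells else cells
\end{verbatim}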
\subsection{Expansiveness} The projective line $\Rb\defeq\mathbb R\sqcup\{\infty\}$ is seen as the set of \dfn{slopes} with respect to the \emph{vertical} direction. Here, quite unconventionally, the horizontal direction is represented by $\infty$ and the vertical one by $0$. The relevance of this choice will appear later, but in any case it does not affect any set-theoretical, topological or computability property, because the inversion map over $\Rb$ is a computable homeomorphism. The projective line $\Rb$ admits a natural effective topology if seen as the quotient of the circle by central symmetry: a subset is effectively closed if the corresponding subset of the circle is effectively closed as a subset of $[-1,1]^2$. This topology is equivalent to the one-point compactification of $\mathbb R$ and renders $\Rb$ a compact metric space. Let $X$ be a 2D subshift, $l\in\Rb$ a slope and $\lin{l}\subset\mathbb R^{2}$ the corresponding vectorial line. We say that direction $l$ is \dfn{expansive} for $X$ if there exists a bounded shape $V\subset\mathbb R^2$ such that, for all $x,y \in X$, \[x\restr{(\lin{l}+V)\cap \mathbb Z^2}=y\restr{(\lin{l}+V) \cap \mathbb Z^2}\impl x=y~.\] We denote by $\NE(X)$ the set of \dfn{non-expansive} directions (\textit{i.e.}, the set of directions that are not expansive). The terminology comes from the fact that if $l=p/q$ is rational (or infinite), then $l$ is expansive for $X$ if and only if the dynamical system $(X,\sigma^{(p,q)})$ is expansive, in the classical sense of expansive dynamical systems. Expansive directions were first introduced by Boyle and Lind \cite{expsubd} in a more general setting. The following proposition is a particular case of \cite[Theorem~3.7]{expsubd}. \begin{proposition}\label{p:atleastone} Let $X$ be a 2D subshift. Then, $\NE(X)$ is closed. In addition, $\NE(X)$ is empty if and only if $X$ is finite. \end{proposition} We say that $X$ is \dfn{extremely expansive} if $\card{\NE(X)}=1$, which is, according to Proposition~\ref{p:atleastone}, the most constrained non-trivial case. In the case of SFTs (actually, of all effective subshifts), we have an additional restriction on the set of non-expansive directions that comes from computation theory, as is usually the case; see \cite{projsft, entrsft}. A direction $l \in\Rb$ can be represented as the pair of coordinates of the intersection of the line $\lin{l}$ with the unit circle. This gives two (symmetric with respect to the origin) representations for each direction, which are computably equivalent. Computability questions about expansive directions can then be transferred to computability questions about pairs of real numbers, which we already know how to deal with. It can be noted that effectively closed subsets that do not contain $\infty$ are exactly the effectively closed subsets of $\mathbb R$. The restriction map from $\Rb$ (with the above-defined effective topology) onto $\mathbb R$ is actually computable, and it can be noted that the pre-image of an effectively closed set by a computable function is effectively closed. \begin{lemma}\label{lem:nonexpsftrestr} Let $X$ be a 2D SFT. Then, $\NE(X)$ is effectively closed. \end{lemma} In particular, if an SFT $X$ has a unique direction of non-expansiveness, then this direction must be computable. \begin{proof} The statement follows from the following two facts: First, it is semi-decidable whether a direction is expansive, \textit{i.e.}, there exists a TM that takes as input a rational direction and halts if and only if the direction is expansive.
This follows from \cite[Lemma 3.2]{expsubd}. Secondly, it is semi-decidable whether two \emph{expansive} directions belong to the same expansive component. (The expansive component of an expansive direction is the largest connected set that includes the direction and is included in the set of expansive directions. One can see that it is always an open interval.) This follows from \cite{nasu154}, as described in \cite[Appendix C]{opsd}. Having these two facts in mind, it is not difficult to see that the following algorithm enumerates a sequence of intervals whose union is the complement of $\NE(X)$: For each rational direction, check whether it is expansive. Every time you find an expansive direction, check whether it is in the same component as one of the expansive directions that you have already found. Every time this is the case, output the whole interval of directions that is between them. (A schematic rendering of this dovetailing procedure is sketched at the end of this subsection.) \end{proof} A subshift $Y$ is called \dfn{extremely-expansively sofic} if there exists an extremely expansive SFT that factors onto $Y$. Since expansive directions are not preserved through block maps, an extremely-expansively sofic subshift need not be extremely expansive itself. In fact, as we will see, there exist extremely-expansively sofic subshifts that do not have any direction of expansiveness. \begin{lemma}\label{lem:basicstuffaboutNE} Let $X_0,X_1,\ldots$ be 2D subshifts over the same alphabet $\A$. \begin{itemize} \item If $X_0\subseteq X_1$, then $\NE(X_0)\subseteq\NE(X_1)$. \item If $\NE(X_0)\cap\NE(X_1)=\emptyset$, then $X_0\cap X_1$ is a finite subshift. \item If $\bigsqcup_wX_w$ is a closed disjoint (possibly uncountable) union, then $\NE(\bigsqcup_wX_w)=\bigcup_w\NE(X_w)$. \end{itemize}\end{lemma} \begin{proof} The first claim follows immediately from the definitions. For the proof of the second claim, we have that $\NE(X_0 \cap X_1) \subseteq \NE(X_0) \cap \NE(X_1) = \emptyset$ according to the first claim. Therefore, $\NE(X_0 \cap X_1) = \emptyset$, and since $X_0 \cap X_1$ is a subshift, Proposition~\ref{p:atleastone} gives that it is finite. Finally, for the last claim, the inclusion $\NE(X_w) \subseteq \NE(\bigsqcup_wX_w)$ comes from the first point. For the other inclusion, assume $l\in\NE(\bigsqcup_wX_w)$. Then, there exist $x,y\in\bigsqcup_wX_w$ which coincide over an open half-plane $H_l \subseteq \mathbb R^2$ of slope $l$ and disagree somewhere outside it. The orbits of $x$ and $y$ under the shift action have a common limit point $z$ (shift $x$ and $y$ deeper and deeper into the half-plane on which they agree; by compactness, suitable subsequences of the two orbits converge to a common limit). Then, $z$ is in the intersection $X_x \cap X_y$ of the subshifts that contain $x$ and $y$, respectively. By disjointness, we get that $X_x=X_y=X_{w'}$, for some $w'$, which means that $l \in \NE(X_{w'})$. \end{proof} If $F$ is an RPCA, then we denote $\NE(F)\defeq\NE(\orb F)$. It is straightforward that the horizontal direction (which according to our definition is $\infty$) is expansive for $F$. It is not much more complicated to see that, if the bi-radius is $1$, then $\NE(F) \subseteq [-1,1]$ (directions around the horizontal are expansive). Conversely, it can be shown that, up to a recoding, every 2D SFT for which the horizontal direction is expansive is equal to $\orb F$, for some RPCA $F$.
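The following Python sketch renders the enumeration in the proof of Lemma~\ref{lem:nonexpsftrestr} schematically. The two semi-deciders are black boxes here (their names and signatures are ours, not from any library): \texttt{expansive(l, budget)} and \texttt{same\_component(l1, l2, budget)} are assumed to return \texttt{True} exactly when the underlying semi-decision procedures succeed within \texttt{budget} steps; the direction $\infty$ would be handled separately.
\begin{verbatim}
from fractions import Fraction
from itertools import count

def rationals():
    # a standard zig-zag enumeration of rational slopes p/q
    for q in count(1):
        for p in range(-q * q, q * q + 1):
            yield Fraction(p, q)

def complement_of_NE(expansive, same_component):
    """Enumerate (with repetitions) intervals of expansive directions
    whose union is the complement of NE(X), dovetailing the step budget
    against the enumeration of rational directions."""
    seen, gen = [], rationals()
    for budget in count(1):
        seen.append(next(gen))
        exp = [l for l in seen if expansive(l, budget)]
        for i, l1 in enumerate(exp):
            for l2 in exp[i + 1:]:
                if same_component(l1, l2, budget):
                    yield (min(l1, l2), max(l1, l2))
\end{verbatim}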
\chapter{Simulation}\label{s:simul} \section{Simulation} If $S,T\in{\mathbb N}_1$ and $Q\in\mathbb Z$, we say that RPCA $F:\A^{\Z}\pto\A^{\Z}$ \dfn{$(S,T,Q)$-simulates} RPCA $G:\B^{\Z}\pto\B^{\Z}$ if there is a partial continuous \dfn{decoding} surjection $\Phi:\A^{\Z}\pto\B^{\Z}$ such that $\sigma\Phi=\Phi\sigma^S$, $G\Phi=\Phi\sigma^QF^T$, $G^{-1}\Phi=\Phi\sigma^{-Q}F^{-T}$ and the \dfn{simulating} subshift $\rock\Phi\defeq\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t(\mathcal{D}(\Phi))$ is a disjoint union. In other words, $1$ step of $G$ is encoded into $T$ steps of $F$, up to some shift by $Q$, and the intermediary steps used are not valid encodings. We note $F\simu[S,T,Q,\Phi]G$, or when some parameters are clear from the context or not so important, $F\simu[S,T,\Phi]G$, $F\simu[S,T,Q]G$, $F\simu[S,T]G$, or $F\simu G$ (each time this symbol is used, $F$ and $G$ are meant to be RPCA). We remind the reader that according to our notations, each of the equalities $\sigma\Phi=\Phi\sigma^S$, $G\Phi=\Phi\sigma^QF^T$ and $G^{-1}\Phi=\Phi\sigma^{-Q}F^{-T}$ implies that the domains of the two partial functions involved are identical. This is in fact crucial for understanding the notion of simulation and it will be used extensively in the proofs and constructions to come. For example, this means that the equality $G\Phi=\Phi\sigma^QF^T$ does not immediately imply $G^{-1}\Phi=\Phi\sigma^{-Q}F^{-T}$, because the domains of $G^{-1}\Phi$ and $\Phi\sigma^{-Q}F^{-T}$ might be different (if we only had the equality $G\Phi=\Phi\sigma^QF^T$, it could happen that $x \in \mathcal{D}(G^{-1}\Phi)$ but $x \notin F^T\sigma^Q(\A^\mathbb Z)$). In fact, one can see that the couple of conditions $G\Phi=\Phi\sigma^QF^T$ and $G^{-1}\Phi=\Phi\sigma^{-Q}F^{-T}$ is equivalent to the triple of conditions $G\Phi=\Phi\sigma^QF^T$, $\mathcal{D}(G\Phi)=\mathcal{D}(\Phi\sigma^{-Q}F^{-T})$ and $\mathcal{D}(G^{-1}\Phi) = \mathcal{D}(\Phi F^{T} \sigma^Q)$. $F$ \dfn{exactly simulates} $G$ if $\Phi$ is actually bijective. In other words, there exists a well-defined \dfn{encoding} function $\Phi^{-1}:\B^{\Z}\to\mathcal{D}(\Phi)$. $F$ \dfn{completely simulates} $G$ if, in addition, $\Omega_F\subset\rock\Phi$. In other words, every bi-infinite orbit of $F$ will eventually encode some orbit of $G$. Actually, in our constructions we will even have the stronger property $\mathcal{D}(F^{t'})\subset\rock\Phi$, for some $t' \in \mathbb Z$. \begin{remark}\label{r:simsynchr}~\begin{enumerate} \item $\mathcal{D}(\Phi)=\sigma^S(\mathcal{D}(\Phi))$. \item $F\simu[S,T,DS]G$ if and only if $F\simu[S,T,0]\sigma^{D}G$. \item For any $s\in\co0S,t\in\co0T$, $\bulk[S]{\sigma^sF^t(\mathcal{D}(\Phi))}$ is an SFT. \item Since the union $\rock\Phi$ is disjoint, there exists a shape $U\subfini\mathbb Z$ such that for any $x\in\rock\Phi$, $x\restr U$ determines the (unique) $s\in\co0S$ and $t\in\co0T$ such that $x\in\sigma^sF^t(\mathcal{D}(\Phi))$. \end{enumerate}\end{remark} \begin{proof} The first two claims follow immediately from the definitions.
For the third claim, notice that, since $\Phi$ is continuous and $\sigma \Phi = \Phi \sigma^S$, $\mathcal{D}(\Phi)$ is the domain of a PCA over $\bulk[S]{\A^{\mathbb Z}}$, so it is an SFT. Since $\sigma$ and $F$ are invertible maps and the property of being an SFT is preserved under invertible maps, we have that $\bulk[S]{\sigma^sF^t(\mathcal{D}(\Phi))}$ is an SFT for all $s \in \co{0}{S}$ and $t \in \co{0}{T}$. The last claim follows easily from the disjointness using a classical compactness argument. \end{proof} We can prove an analogue of the Curtis--Lyndon--Hedlund theorem for decoding and encoding functions. \begin{remark}\label{r:simhedlund} The decoding function $\Phi$ admits a {neighbourhood} $V\subfini\mathbb Z$ and a partial \dfn{bulked local rule} $\phi:\A^{V}\pto\B$ such that for all $x\in\A^{\Z}$, $\Phi(x)$ is defined if and only if $\phi(x\restr{iS+V})$ is defined for any $i\in\mathbb Z$, in which case the latter is equal to $\Phi(x)_i$. \\ If the simulation is exact, the encoding function $\Phi^{-1}$ admits a {neighbourhood} $V\subfini\mathbb Z$ and a partial \dfn{unbulked local rule}, abusively denoted $\phi^{-1}:\B^{V}\pto\A^S$, such that for all $y\in\B^\mathbb Z$, $\Phi^{-1}(y)$ is defined if and only if $\phi^{-1}(y\restr{i+V})$ is defined for any $i\in\mathbb Z$, in which case the latter is equal to $\Phi^{-1}(y)_{\co{iS}{(i+1)S}}$. \end{remark} Exact complete vertical (\textit{i.e.}, $Q=0$) simulation is stronger than most notions found in the literature. In particular: \begin{itemize} \item $\orb F$ simulates $\orb G$ in the sense of \cite{drs}. \item The $\mathbb Z^2$-action $(F,\sigma)$ over the limit set $\Omega_F$ (or the 2D SFT $\orb F$) is conjugate to a suspension of $\Omega_G$ in the sense of a homeomorphism \[\appl[\Psi]{\Omega_F}{\Omega_G\times\co0S\times\co0T}x{(\Phi F^{-t}\sigma^{-s}(x),s,t),~\text{where}~F^{-t}\sigma^{-s}(x) \in \mathcal{D}(\Phi)}\] \item The $\mathbb Z^2$-action $(G,\sigma)$ over the limit set $\Omega_G$ (or the 2D SFT $\orb G$) is conjugate to the $\mathbb Z^2$-action $(F^T,\sigma^S)$ restricted to $\mathcal{D}(\Phi) \cap \Omega_F$ (see \cite{gacs}); \item $G$ is a sub-automaton of a rescaling of $F$, so that $F$ simulates $G$ according to the definition of simulation given in \cite{ollingersimulation}. While it is not necessary to formally define this notion of simulation, we can intuitively say that rescaling corresponds to the role of parameters $S$ and $T$ in our definition, while the sub-automaton condition corresponds to the decoding function $\Phi$. We notice, however, that Ollinger's definition is more general than ours, since it does not require $\rock\Phi$ to be a disjoint union, while the simulated automaton can also be rescaled. \end{itemize} But the definition above also involves the transient part: every locally valid horizontal strip of height $t+1$ for $G$ gives a locally valid horizontal strip of height $Tt+1$ for $F$. The following facts about our notion of simulation follow directly from the definition: \begin{itemize} \item Each kind of simulation is a conjugacy invariant. \item If $F$ simulates $G$\resp{exactly}, then it simulates\resp{exactly} any of its subsystems (but clearly, completeness is not preserved). If $F$ factors onto $G$, then $F \simu[1,1,0] G$ completely. \item $F\times G\simu[1,1,0]F$ completely if $G$ does not have empty domain.
Also $F\times G\simu[1,1,0]F$ exactly if $G$ includes a singleton subsystem (recall that $G$ is a PCA, so that it does not necessarily have periodic points). The simulation is simultaneously exact and complete if $G$ is a singleton system. \item The surjectivity of $\Phi$ implies that only systems with empty domain can be simulated by systems with empty domain. \end{itemize} We will mainly focus on \dfn{non-trivial} simulations: this means that $S,T>1$ and $G$ does not have empty domain. \begin{remark}\label{r:locvalid} If $F\simu[S,T,Q,\Phi]G$ non-trivially, then for all $j \in \cc{0}{T}$, $F^{j}(\mathcal{D}(G\Phi))=\sigma^{-Q}F^{-(T-j)}(\mathcal{D}(G^{-1}\Phi)) \neq \emptyset$. \end{remark} More specifically, a configuration \xpr{in the middle} of the work period, \textit{i.e.}, when $j = \ipart{T/2}$, has at least $\ipart{T/2}$ forward and backward images, or, in other words, it belongs to $\Omega_F^{\ipart{T/2}}$. The following lemma states that the limit sets correspond, in the case of complete simulation. It is a more mathematical and detailed version of the comment that we made earlier, that a valid horizontal strip of height $t+1$ in $G$ gives a valid horizontal strip of height $Tt+1$ in $F$ (provided that the strip is simulated). \begin{lemma}\label{l:simlim} Assume $F\simu[S,T,Q,\Phi]G$. \begin{enumerate} \item\label{i:simo} If $j\in\mathbb N\sqcup\{\infty\}$, then \[\rock[j]\Phi\defeq\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t\Phi^{-1}(\Omega_G^j) \] is a disjoint union and a subshift, included in $\Omega_F^{(j-1)T+1}$. In addition, $\rock[j]\Phi\supset\rock[j+1]\Phi$ and $\rock[\infty]\Phi=\bigcap_{j\in\mathbb N}\rock[j]\Phi$. \item $\Omega_F\supset\rock[\infty]\Phi$. \item\label{i:limlim} If the simulation is complete, then $\Omega_F=\rock[\infty]\Phi$. \end{enumerate}\end{lemma} \begin{proof}~\begin{enumerate} \item It is clear that $\rock[j]\Phi$ is a disjoint union and a subshift, each subset in the union being (syntactically) included in one in the expression of $\rock\Phi$. Assume that $F\simu[S,T,Q,\Phi]G$ for some $Q\in\mathbb Z$. Now, \begin{eqnarray*} \Phi^{-1}(\Omega_G^j)&=& \mathcal{D}(G^j\Phi)\cap\mathcal{D}(G^{-j}\Phi) \\&=& \mathcal{D}(\Phi\sigma^{jQ}F^{jT})\cap\mathcal{D}(\Phi\sigma^{-jQ}F^{-jT}) \\&\subset& \mathcal{D}(F^{jT})\cap\mathcal{D}(F^{-jT})=\Omega_F^{jT}~. \end{eqnarray*} Hence, for any $s\in\co0S$ and any $t\in\co0T$, $\sigma^sF^t\Phi^{-1}(\Omega_G^j)\subset\Omega_F^{jT-T+1}$. The other claims follow from the definitions. \item It is obvious from the previous point that $\bigcap_{j\in\mathbb N}\Omega_F^{jT}\supset\bigcap_{j\in\mathbb N}\rock[j]\Phi=\rock[\infty]\Phi$. \item Conversely, assume $x\in\Omega_F$, so that clearly $\forall k\in\mathbb Z,F^k(x)\in\Omega_F$. By completeness, there exist $y\in\mathcal{D}(\Phi)$ and $s\in\co0S,t\in\co0T$ such that $\sigma^sF^t(y)=x$. Disjointness and a direct induction give that for all $k\in\mathbb Z$, $F^k(y)\in F^{k\bmod T}\sigma^{Q\lfloor k / T \rfloor}(\mathcal{D}(\Phi))$. In particular, for all $j\in\mathbb Z$, $G^j\Phi(y)=\Phi F^{jT}\sigma^{jQ}(y)$ is defined.
This gives that $\Phi(y)\in\Omega_G$, so $x \in \sigma^{s}F^{t}\Phi^{-1}(\Omega_G)\subset\rock[\infty]\Phi$. \end{enumerate}\end{proof} The following remark links the periodic points of the simulating and simulated systems. It is essential for proving aperiodicity of the subshifts that we construct. The same result appears in \cite{drs,twobytwo}, even though the argument essentially goes back to the kite-and-dart tile set of Penrose. We give a slightly more general version of the usual result that also takes into consideration the shift by $Q$. \begin{remark}\label{r:penrose} If $F\simu[S,T,Q]G$ completely, then $\orb F$ admits a configuration with period $(s-lQ,t)$ if and only if $\orb G$ admits a configuration with period $(k,l)$, where $s=kS$ and $t=lT$. \end{remark} We will only use the case $Q=0$, for which it is intuitively clear that it holds. When $Q \neq 0$, one has to keep in mind that for every $T$ time steps of a configuration of $F$, the simulated configuration is shifted $Q$ steps to the left. \section{Nested simulations} In the sequel, we will be most interested in infinite sequences of simulations of the form $F_0 \simu F_1 \simu F_2 \simu \ldots$. This looks like a formidable task, since every RPCA of the sequence must contain the information about an infinite number of configurations and update this information within a determined time, but, as the results of this section will imply, an infinite sequence of simulations gives RPCA with very useful properties. The construction of these sequences forms the basic part of our constructions and will be done in the following chapters. If $\vec{S}=(S_i)_{ 0 \le i \le n-1}$ is a sequence of numbers, then $\vec{1S}$ is the sequence whose first element is equal to $1$, followed by the elements of $\vec{S}$ shifted by one position. If $\vec{S}=(S_i)_{ 0 \le i \le n-1}$ and $\vec{T}=(T_i)_{ 0 \le i \le n-1}$ are finite sequences of non-zero numbers, then $\vec{1S/T}$ is the sequence $(S_{i-1}/T_i)_{0 \le i \le n-1}$, where $S_{-1}\defeq 1$. A short calculation shows that \begin{equation*} \anib[\vec{1S/T}]{\vec{Q}}\prod T_i = \sum_{0 \le i \le n-1} \left(Q_i\prod_{0 <j<i} S_j \prod_{i < j \le n-1} T_j\right). \end{equation*} \begin{lemma}\label{l:simtrans} Simulation\resp{exact, complete, exact and complete} is a preorder. \\ More precisely, if $F_0\simu[S_0,T_0,Q_0,\Phi_0]F_1\simu[S_1,T_1,Q_1,\Phi_1]\ldots\simu[S_{n-1},T_{n-1},Q_{n-1},\Phi_{n-1}]F_n$\resp{exactly, completely} for some $n\in\mathbb N$, then $F_0\simu[S,T,Q,\Phi]F_n$\resp{exactly, completely}, where \begin{equation*} (S,T,Q,\Phi) = (\prod S_i,\prod T_i,{\anib[\vec{1S/T}]{\vec{Q}}\prod T_i},\Phi_{n-1}\cdots\Phi_0). \end{equation*} \end{lemma} The products range from $0$ to $n-1$. If there were no shifts in the simulation (\textit{i.e.}, if $Q_i=0$ for all $i$), the above statement would be more or less trivial. Even in the presence of shifts, the proof is essentially a simple verification. \begin{proof}~\begin{itemize} \item Clearly $F\simu[1,1,0,\id]F$. \item Now suppose $F\simu[S,T,Q,\Phi]G\simu[S',T',Q',\Phi']H$. Then it is clear that $\sigma\Phi'\Phi=\Phi'\sigma^{S'}\Phi=\Phi'\Phi\sigma^{S'S}$ and $H\Phi'\Phi=\Phi'\sigma^{Q'}G^{T'}\Phi=\Phi'\Phi\sigma^{QT'+SQ'}F^{T'T}$.
Moreover: \begin{eqnarray*} \bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t(\mathcal{D}(\Phi)) &\supset& \bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t\Phi^{-1}(\bigsqcup_{\begin{subarray}c0\le t'<T'\\0\le s'<S'\end{subarray}}\sigma^{s'}G^{t'}(\mathcal{D}(\Phi')))\\ &=&\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\bigsqcup_{\begin{subarray}c0\le t'<T'\\0\le s'<S'\end{subarray}}\sigma^sF^t\Phi^{-1}(\sigma^{s'}G^{t'}(\mathcal{D}(\Phi')))\\ &=&\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\bigsqcup_{\begin{subarray}c0\le t'<T'\\0\le s'<S'\end{subarray}}F^{t+t'T}\sigma^{s+s'S+t'Q}(\mathcal{D}(\Phi'\Phi))\\ &=&\bigsqcup_{\begin{subarray}c0\le t<TT'\\0\le s<SS'\end{subarray}}\sigma^sF^t(\mathcal{D}(\Phi'\Phi))\eqdef\rock{\Phi'\Phi}~. \end{eqnarray*} This proves that $F\simu[SS',TT',QT'+SQ',\Phi'\Phi]H$. \item If $\Phi$ and $\Phi'$ are bijections, then $\Phi'\Phi$ is also a bijection. \item If both simulations are complete, then by Point \ref{i:limlim} of Lemma \ref{l:simlim}, \begin{eqnarray*} \Omega_F&=&\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t\Phi^{-1}(\Omega_G)\\ &\subset&\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\sigma^sF^t\Phi^{-1}(\rock{\Phi'})\\ &=&\rock{\Phi'\Phi} \end{eqnarray*} \item A direct induction gives the expected results. \end{itemize}\end{proof} Similarly to a single simulation, which involves a decomposition of the system according to how much the grid on which the encoding is read has been shifted, a sequence of simulations involves a nested decomposition, which gives a full skeleton inside each configuration, as expressed by the following lemma. Here, and in the following, we use gothic letters to denote sequences, and the corresponding normal letters to denote the elements of the sequences. Also, if $\seq S$ is an infinite sequence and $n \in \mathbb N$, then $\seq{S}_{\co{0}{n}}$ is the finite prefix of length $n$ of $\seq{S}$. Finally, if $(\Phi_i)_{i \in \mathbb N}$ is a sequence of decoding functions, then $\Phi_{\co{0}{n}}$ will be the decoding function $\Phi_{n-1} \cdots \Phi_0$. \begin{lemma}\label{l:nonvide}~\begin{enumerate} \item\label{i:infsim} If $F_0\simu[S_0,T_0,\Phi_0]F_1\simu[S_1,T_1,\Phi_1]\ldots\simu[S_{n-1},T_{n-1},\Phi_{n-1}]F_n\simu[S_n,T_n,\Phi_n]\ldots$ and $j\in\mathbb N\sqcup\{\infty\}$, then \begin{eqnarray*} \rock[j]{\seq\Phi} &\defeq &\bigcap_{n\in\mathbb N}\rock[j]{\Phi_{\co0n}}\\ & = &\bigsqcup_{\begin{subarray}c\seq t\in\prod_{i\in\mathbb N}\co0{T_i}\\\seq s\in\prod_{i\in\mathbb N}\co0{S_i}\end{subarray}}\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{\seq{s}_{\co0n}}}F_0^{\anib[\seq T]{\seq{t}_{\co0n}}}\Phi_0^{-1}\cdots\Phi_{n-1}^{-1}(\Omega_{F_n}^j) \end{eqnarray*} is a disjoint union and a subshift. \\ In addition, $\rock[j]{\seq\Phi}\supset\rock[j+1]{\seq\Phi}$ and $\rock[\infty]{\seq\Phi}=\bigcap_{j\in\mathbb N}\rock[j]{\seq\Phi}$. \item\label{i:nonempty} If, in addition, all simulations are nontrivial, then $\rock[2]{\seq\Phi}=\rock[\infty]{\seq\Phi}\subset\Omega_{F_0}$ is uncountable. \item If the simulations (in the hypothesis of Point \ref{i:infsim}) are complete, then $\rock[\infty]{\seq\Phi}=\Omega_{F_0}$.
\item\label{i:skel} If the sequence $(\Phi_n)_{n \in \mathbb N}$ is computable, then the map $x \in \rock[\infty]{\seq\Phi} \to (s_i,t_i)_{i \in \mathbb N}$, where $(s_i,t_i)_{i \in \mathbb N}$ is the (unique) sequence such that $x\in\bigcap_n\sigma^{\anib[\seq S]{\seq{s}_{\co0n}}}F_0^{\anib[\seq T]{\seq{t}_{\co0n}}}\mathcal{D}(\Phi_{\co{0}{n}})$, is computable. \end{enumerate}\end{lemma} Point \ref{i:nonempty} implies nonemptiness of $\Omega_{F_0}$ and $\orb{F_0}$, and of any $\Omega_{F_n}$, since all those statements can be applied to the sequence starting from $n$. Point \ref{i:skel} states that we can always recover the skeleton from a valid configuration. In particular, the skeleton map is continuous. \begin{proof}~\begin{enumerate} \item By Lemma \ref{l:simtrans} and compactness, it is clear that $\rock[j]{\seq\Phi}$ is a subshift. The equality is rather easy to check. We can see that the union is disjoint: if $(\seq s,\seq t)\ne(\seq{s'},\seq{t'})$, say $(s_m,t_m)\ne(s'_m,t'_m)$, then $\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{\seq{s}_{\co0n}}}F_0^{\anib[\seq T]{\seq{t}_{\co0n}}}\Phi_0^{-1}\cdots\Phi_{n-1}^{-1}(\Omega_{F_n}^j)$ is included in $\sigma^{\anib[\seq S]{\seq{s}_{\co0m}}}F_0^{\anib[\seq T]{\seq{t}_{\co0m}}}\Phi_0^{-1}\cdots\Phi_{m-1}^{-1}(\Omega_{F_m}^j)$, which is, according to Lemma~\ref{l:simtrans}, disjoint from $\sigma^{\anib[\seq S]{\seq{s}'_{\co0m}}}F_0^{\anib[\seq T]{\seq{t}'_{\co0m}}}\Phi_0^{-1}\cdots\Phi_{m-1}^{-1}(\Omega_{F_m}^j)$ which includes \\$\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{\seq{s}'_{\co0n}}}F_0^{\anib[\seq T]{\seq{t}'_{\co0n}}}\Phi_0^{-1}\cdots\Phi_{n-1}^{-1}(\Omega_{F_n}^j)$~. \item Since for any $n\in\mathbb N$ and $m\ge n$, $F_n\simu[S_n\cdots S_{m-1},T_n\cdots T_{m-1},\Phi_{m-1}\cdots\Phi_n]F_{m}$, Point \ref{i:simo} of Lemma \ref{l:simlim} says that $\Omega_{F_n}^{T_n\cdots T_{m-1}+1}\supset\rock[2]{\Phi_{\co nm}}~.$ If the simulations are nontrivial, then $T_n\cdots T_{m-1}\to\infty$ when $n$ is fixed and $m \to \infty$, and $\Omega_{F_n}\supset\bigcap_{m\in\mathbb N}\Omega_{F_n}^{T_n\cdots T_{m-1}+1}\supset\bigcap_{m\in\mathbb N}\rock[2]{\Phi_{\co nm}}=\rock[2]{\Phi_{\co n{\infty}}}~.$ Injecting this inclusion in the definition of $\rock[\infty]{\Phi_{\co0n}}$ gives that $\rock[\infty]{\seq\Phi}\supset\bigcap_{m\ge 0}\rock[2]{\Phi_{\co0m}}\supset\rock[2]{\seq\Phi}.$ The converse is trivially true, and Point \ref{i:limlim} of Lemma \ref{l:simlim} already tells us that $\rock[\infty]{\seq\Phi}\subset\Omega_{F_0}$. \\ Moreover, since $F_n\simu[S_nS_{n+1},T_nT_{n+1}]F_{n+2}$ with $T_nT_{n+1}\ge 4$, Remark \ref{r:locvalid} gives that $\Omega_{F_n}^2$ is non-empty. Therefore, each of the uncountably many subsets in the disjoint union expressing $\rock[2]{\seq\Phi}$ is a decreasing intersection of non-empty closed sets, hence non-empty by compactness. \item If $n\in\mathbb N$ is such that $F_0\simu[S_0\cdots S_{n-1},T_0\cdots T_{n-1},\Phi_{n-1}\cdots\Phi_0]F_n$ completely, then by Point \ref{i:limlim} of Lemma \ref{l:simlim}, $\Omega_{F_0}=\rock[\infty]{\Phi_{\co0n}}$. \item This follows from repeated application of Remark~\ref{r:simsynchr} and the fact that $\Phi_{\co{0}{n}}$ is a decoding function for all $n \in \mathbb N$. \end{enumerate}\end{proof} The following extends Lemma \ref{l:nonvide} (which can be recovered by taking the $\B_i$ to be singletons). In this case, every RPCA simulates a disjoint union of RPCA, each one of which simulates a disjoint union of RPCA and so on. In this way, we obtain an \xpr{infinite tree} of simulations.
Along any branch of this tree, Lemma~\ref{l:nonvide} is true, but, more importantly, something similar is true even when we take all the (possibly uncountable) branches of this tree together. \begin{lemma}\label{l:nonvides}~\begin{enumerate} \item\label{i:uncsim} Let $(\B_n)_{n\in\mathbb N}$ be a sequence of finite alphabets, such that for any word $u\in\prod_{i<n}\B_i$ of length $n\in\mathbb N$, there exist $S_u,T_u, Q_u \in \mathbb N$, a decoding function $\Phi_u$ and an RPCA $F_u$ such that $F_u\simu[S_u,T_u,\Phi_u]\bigsqcup_{b\in\B_n}F_{ub}$. Let $\rocks[j]z{\seq\Phi}\defeq\bigcap_{n\in \mathbb N}\rock[j]{\Phi_{z_{\co0n}}}$ for all $j\in\mathbb N\sqcup\{\infty\}, z\in\prod_{i\in\mathbb N}\B_i$. Then, for any $j\in\mathbb N \sqcup \{\infty\}$ and any closed $Y\subset \prod_{i\in\mathbb N}\B_i$, $\rocks[j]Y{\seq\Phi}\defeq\bigsqcup_{z\in Y}\rocks[j]z{\seq\Phi}$ is a disjoint union and a subshift, and $\rocks[2]Y{\seq\Phi}=\rocks[\infty]Y{\seq\Phi}\subset \Omega_{F_\motvide}$. \item Besides, the set $Z\defeq\set z{\prod_{i\in\mathbb N}\B_i}{\rocks[\infty]z{\seq\Phi}\ne\emptyset}$ corresponding to nested nontrivial, non-empty simulations is closed. If the simulations are complete, then $\rocks[2]Z{\seq\Phi}=\rocks[\infty]Z{\seq\Phi}=\Omega_{F_\motvide}$. \end{enumerate}\end{lemma} In the above statement, the notation $\Phi_{z_{\co0n}}$ stands for the composition $\Phi_{z_{\co{0}{n}}}\cdots\Phi_{z_0}\Phi_{\motvide}$, which is the decoding function from $F_{z_{\co{0}{n}}}$ onto $F_{\motvide}$. \begin{proof}~\begin{enumerate} \item Point \ref{i:nonempty} of Lemma \ref{l:nonvide} gives that $\rocks[\infty]z{\seq\Phi}\ne\emptyset$ if $F_{z_{\co0n}}\simu F_{z_{\co{0}{n+1}}}$ non-trivially for any $n\in\mathbb N$, \textit{i.e.}, all these RPCA have non-empty domain. The converse is obvious. \\ By the same distributivity of decreasing intersections over unions as for Point \ref{i:infsim} of Lemma~\ref{l:nonvide}, it can be easily seen that \[\rocks[j]{Y}{\seq\Phi}=\bigcap_{n\in\mathbb N}\bigsqcup_{u\in\mathcal L_n(Y)}\bigsqcup_{\begin{subarray}c0\le t<\prod_{i<n}T_{u_{\co0i}}\\0\le s<\prod_{i<n}S_{u_{\co0i}}\end{subarray}}\sigma^sF_\motvide^t\Phi_\motvide^{-1}\Phi_{u_0}^{-1}\cdots\Phi_{u}^{-1}(\Omega_{F_u}^j)~,\] which is a decreasing intersection of finite unions of subshifts, and we have $\rocks[2]z{\seq\Phi}=\rocks[\infty]z{\seq\Phi}\subset\Omega_{F_\motvide}$ for all $z\in Y$. \item If $F_u\simu\bigsqcup_{a\in\B_n}F_{ua}$ completely, then Point \ref{i:limlim} of Lemma \ref{l:simlim} gives \[\Omega_{F_u}=\bigsqcup_{\begin{subarray}c0\le t<T_u\\0\le s<S_u\end{subarray}}F_u^t\sigma^s\Phi_u^{-1}(\bigsqcup_{a\in\B_n}\Omega_{F_{ua}})~.\] An immediate induction gives, for any $n\in\mathbb N$, \[\Omega_{F_\motvide}=\bigsqcup_{u\in\mathcal L_{n+1}(Z)}\bigsqcup_{\begin{subarray}c0\le t<\prod_{i<n}T_{u_{\co0i}}\\0\le s<\prod_{i<n}S_{u_{\co0i}}\end{subarray}}\sigma^sF_\motvide^t\Phi_\motvide^{-1}\Phi_{u_0}^{-1}\cdots\Phi_{u_{\co0n}}^{-1}(\Omega_{F_u})~.\] Being true for any $n$, this gives the result. \end{enumerate}\end{proof} Lemmas~\ref{l:nonvide} and~\ref{l:nonvides} can be seen as extensions of Lemma~\ref{l:simlim} in the case of an infinite nested simulation. The following lemma can be seen as such an extension of Remark~\ref{r:penrose}. \begin{lemma}\label{lem:aperiodichierarchy} If $F_0\simu[S_0,T_0]F_1\simu[S_1,T_1]\ldots\simu[S_{n-1},T_{n-1}]F_n\simu[S_{n},T_{n}]\ldots$ completely, with $S_n,T_n>1$ for any $n\in\mathbb N$, then $\orb{F_0}$ is aperiodic.
\end{lemma} In particular, either $\Omega_{F_n} = \emptyset$ ($= \orb{F_n}$) for all $n \in \mathbb N$, or $\Omega_{F_n}$ (and $\orb{F_n}$) is aperiodic and uncountable, for all $n \in \mathbb N$. \begin{proof} From Lemma~\ref{l:simtrans}, $F_0\simu[S_0\cdots S_{n-1},T_0\cdots T_{n-1}]F_n$ completely. By Remark~\ref{r:penrose}, $\orb{F_0}$ cannot have any nontrivial period less than $S_0\cdots S_{n-1}$ horizontally and less than $T_0\cdots T_{n-1}$ vertically. Since these two products go to infinity, there cannot exist any periodic point. \end{proof} In fact, it follows from the proof that it is enough that one of the products $\prod_{i \in \mathbb N} S_i$ and $\prod_{i \in \mathbb N} T_i$ is infinite. It is well known that a non-empty, aperiodic 2D SFT is uncountable. Lemma~\ref{l:nonvide} gives some additional information about how uncountability occurs in the case of an infinite nested simulation. \section{Expansiveness and simulation} The following lemmas highlight the relation between the notions of simulation and expansive directions. This section slightly extends Section~5 in \cite{nexpdir}. The following lemmas correspond to Lemma~5.1 and Lemma~5.3 in \cite{nexpdir}, which examine how the so-called ``shape of prediction'' evolves. This also motivates the choice of considering the horizontal direction as $\infty$, which will make many future expressions clearer. \begin{lemma}\label{lem:relsimulexp} Suppose $F\simu[S,T,Q]G$ exactly. Then $\NE(F)\supseteq\frac1{T}\left(Q+S\NE(G)\right)$. Moreover, if the simulation is complete, then $\NE(F)=\frac1{T}\left(Q+S\NE(G)\right)$. \end{lemma} In particular, $\NE(\sigma^{-Q}G)=\NE(G)+Q$ and $\NE(G^T)=\frac1T\NE(G)$. \begin{proof} Let us consider the matrix $M\defeq\left[\begin{array}{cc}S&Q\\0&T\end{array}\right]$ as acting over $\mathbb R^2$. Consider a slope $l\in\Rb$, $\lin l\subset\mathbb R^2$ the corresponding vectorial line, and $\lin l'\defeq M\lin l$ the vectorial line corresponding to slope $\frac STl+\frac QT$. Roughly, $\lin l'$ for $F$ corresponds to $\lin l$ for $G$. \begin{itemize} \item Consider a finite shape $W'\subset\mathbb R^2$, $U$ and $f$ the neighbourhood and local rule of $F$, $V$ and $\phi^{-1}$ those of $\Phi^{-1}$, as defined in Remark \ref{r:simhedlund}. Without loss of generality, we can assume that $U = \cc{-uS}{uS}$, for some $u \in \mathbb N$. Let $W\defeq M^{-1}W'+(T\cc{-u}{u} + V + \co{-Q}{0}) \times \{0\} + [-1,2[\times[0,1[$. If $l\in\NE(G)$, then there exist configurations $x \neq y\in\Omega_G$ such that $\orb[x]G\restr{\lin l+W}=\orb[y]G\restr{\lin l+W}$. Then, $\orb[\Phi^{-1}(x)]F \neq \orb[\Phi^{-1}(y)]F$, but we claim that \begin{equation*} \orb[\Phi^{-1}(x)]F\restr{\lin l'+W'}=\orb[\Phi^{-1}(y)]F\restr{\lin l'+W'}. \end{equation*} Since $W'$ was an arbitrary finite shape, this implies that $\frac STl+\frac QT \in \NE(F)$, which proves that $\NE(F)\supseteq\frac1{T}\left(Q+S\NE(G)\right)$. Let us proceed with the proof of the claim. Let $(p_1,p_2) \in \lin l'+W'$ and write $p_1\eqdef mS+r$, $p_2\eqdef nT+q$ and $n \eqdef m'S+r'$, where $m,r,n,q,m',r' \in \mathbb Z$ and $0\le r,r'<S$ and $0\le q<T$. Intuitively, we can think that $(p_1,p_2)$ belongs to the encoding of the $m$'th letter of $\sigma^{-m'Q}G^n(x)$ and of $\sigma^{-m'Q}G^n(y)$. More precisely, a straightforward computation shows that \begin{equation*} M^{-1}(p_1,p_2)=\left(m-Qm'+\frac rS-\frac{Qr'}{S}-\frac{Qq}{ST},\ n+\frac qT\right), \end{equation*} so that $(m-Qm',n) \in M^{-1}(p_1,p_2)+ [-1,2[ \times [0,1[$.
This, in turn, implies that $(m-Qm',n) + (T\cc{-u}{u}+\co{-Q}{0}+V) \times \{0\}$ is included in $\lin l+ W$, so that \begin{equation*} \orb[x]{G}\restr{(m-Qm',n)+(T\cc{-u}{u}+\co{-Q}{0}+V)\times \{0\}} = \orb[y]{G}\restr{(m-Qm',n)+(T\cc{-u}{u}+\co{-Q}{0}+V)\times \{0\}}. \end{equation*} Using the facts that $V$ is the neighbourhood of $\phi^{-1}$ and that $\Phi^{-1}$ \xpr{blows up} letters into blocks of size $S \times T$ with an additional shift of $Q$ for every vertical time step, we deduce that \begin{multline*} \orb[\Phi^{-1}(x)]{F}\restr{(\co{(m-Qm')S}{(m-Qm'+1)S}+T\cc{-uS}{uS}+\co{-QS}{0}+nQ)\times \{nT\}} = \\ =\orb[\Phi^{-1}(y)]{F}\restr{(\co{(m-Qm')S}{(m-Qm'+1)S}+T\cc{-uS}{uS}+\co{-QS}{0}+nQ)\times \{nT\}}. \end{multline*} Notice that $nQ - Qm'S = r'Q$. Now, using the fact that $T\cc{-uS}{uS}$ is a neighbourhood for $f^q$ and $\co{-QS}{0}$ for $\sigma^{-r'Q}$, we obtain that \begin{multline*} \sigma^{-r'Q}F^q\left(\orb[\Phi^{-1}(x)]{F}\right)\restr{\co{mS+r'Q}{(m+1)S+r'Q} \times \{nT\}} = \\ =\sigma^{-r'Q}F^q\left(\orb[\Phi^{-1}(y)]{F}\right)\restr{\co{mS+r'Q}{(m+1)S+r'Q} \times \{nT\}}. \end{multline*} The last equality implies that $\orb[\Phi^{-1}(x)]{F}\restr{(p_1,p_2)} = \orb[\Phi^{-1}(y)]{F}\restr{(p_1,p_2)}$, because \begin{eqnarray*} & & \sigma^{-r'Q}F^q\left(\orb[\Phi^{-1}(x)]{F}\right)\restr{\co{mS+r'Q}{(m+1)S+r'Q} \times \{nT\}} \\&=& \orb[\Phi^{-1}(x)]{F}\restr{\co{mS}{(m+1)S} \times \{nT+q\}} \\&=& \orb[\Phi^{-1}(x)]{F}\restr{\co{mS}{(m+1)S} \times \{p_2\}} \end{eqnarray*} and $p_1 \in \co{mS}{(m+1)S}$. \item Consider a finite shape $W\subset\mathbb R^2$, $U$ the synchronizing shape as defined in Remark \ref{r:simsynchr}, $V$ and $\phi$ the neighbourhood and local rule of $\Phi$ as defined in Remark \ref{r:simhedlund}, and $W'\defeq MW+(V\cup U)\times\{0\}-\co0S\times\co0T$. If $\frac STl+\frac QT\in\NE(F)$, then there exist configurations $x \neq y\in\Omega_F$ such that $\orb[x]F\restr{\lin l'+W'}=\orb[y]F\restr{\lin l'+W'}$. By Remark \ref{r:simsynchr} and completeness of the simulation, there exist common $s\in\co0S$ and $t\in\co0T$ such that $x'\defeq\sigma^{-s}F^{-t}(x)$ and $y'\defeq\sigma^{-s}F^{-t}(y)$ are in $\mathcal{D}(\Phi)$. It follows easily from the definitions that $\orb[x']F\restr{\lin l'+MW+V\times\{0\}}= \orb[y']F\restr{\lin l'+MW+V\times\{0\}}$. By injectivity of $\Phi$, $\orb[\Phi(x')]G$ and $\orb[\Phi(y')]G$ are also distinct, but we claim that they coincide in $\lin l+W$. Since $W$ is an arbitrary finite shape, this implies that $l \in \NE(G)$, which proves that $\NE(F)\subseteq\frac1{T}\left(Q+S\NE(G)\right)$. Let $(p_1,p_2) \in \lin l+W$. Then, $M(p_1,p_2)+V\times\{0\}\subset \lin{l'} +MW+V\times\{0\}$; it follows from this that \begin{equation*} \orb[x']F\restr{(p_1S+p_2Q+V,p_2T)}= \orb[y']F\restr{(p_1S+p_2Q+V,p_2T)}. \end{equation*} In addition, we have that \begin{eqnarray*} \orb[\Phi(x')]G_{(p_1,p_2)}&=& G^{p_2}\Phi(x')_{p_1} \\&=& \Phi\sigma^{p_2Q}F^{p_2T}(x')_{p_1} \\&=& \phi(\sigma^{p_2Q}F^{p_2T}(x')\restr{p_1S+V}) \\&=& \phi(\orb[x']F\restr{(p_1S+V+p_2Q,p_2T)})~. \end{eqnarray*} The same holds for $y'$, and since, as we have noticed earlier, the final expression is the same for $x'$ and $y'$, we get that $\orb[\Phi(x')]G\restr{(p_1,p_2)} = \orb[\Phi(y')]G\restr{(p_1,p_2)}$, as claimed. \end{itemize}\end{proof} Lemmas \ref{l:simtrans} and \ref{lem:relsimulexp} can be combined to obtain expansive directions in nested simulations, which will be used extensively in Section~\ref{sec:expdir}.
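As a quick numeric illustration of the affine maps $l \mapsto (Q_i + S_i l)/T_i$ that appear when Lemmas~\ref{l:simtrans} and \ref{lem:relsimulexp} are combined, the following Python sketch tracks the image of the a priori interval $[-1,1]$ through a few nested simulations; the parameters are purely hypothetical toy values. The interval shrinks by a factor $S_i/T_i$ at each level, foreshadowing the single non-expansive direction of Proposition~\ref{prop:hochman} below.
\begin{verbatim}
from fractions import Fraction

# hypothetical simulation parameters (S_i, T_i, Q_i), i = 0..3
S, T, Q = [2, 2, 2, 2], [6, 6, 6, 6], [1, 1, 1, 1]

lo, hi = Fraction(-1), Fraction(1)   # NE(F_n) lies a priori in [-1, 1]
for s, t, q in reversed(list(zip(S, T, Q))):
    # NE(F_i) is contained in (Q_i + S_i * NE(F_{i+1})) / T_i
    lo, hi = (q + s * lo) / t, (q + s * hi) / t
print(float(lo), float(hi))          # a small interval containing NE(F_0)
\end{verbatim}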
\begin{lemma}\label{lem:iterrelsimulexp} If $F_0\simu[S_0,T_0,D_0S_0]F_1\simu[S_1,T_1,D_1S_1]\ldots\simu[S_{n-1},T_{n-1},D_{n-1}S_{n-1}]F_{n}$ exactly and completely, and all these RPCA have bi-radius $1$, then \begin{equation*} \NE(F_0)\subseteq\anib[\vec{1S/T}]{\vec{S}\vec{D}}+\left(\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]=\anib[\vec{S/T}]{\vec{D}}+\left(\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]~. \end{equation*} \end{lemma} \begin{proof} We already noted that an RPCA $F_n$ with bi-radius $1$ satisfies $\NE(F_n)\subseteq[-1,1]$. From Lemma \ref{l:simtrans}, we know that $F_0\simu[S,T,Q]F_n$ exactly and completely, where $(S,T,Q) = (\prod S_i,\prod T_i,{\anib[\vec{1S/T}]{\vec S \vec D}\prod T_i})$ and from Lemma \ref{lem:relsimulexp}, we deduce that: \[\NE(F_0)=\anib[\vec{1S/T}]{\vec S \vec D}+\left(\prod_{i<n}\frac{S_i}{T_i}\right)\NE(F_n)\subseteq \anib[\vec{1S/T}]{\vec S \vec D}+\left(\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]~.\] Also, by definition we have that $\anib[\vec{1S/T}]{\vec S \vec D}=\anib[\vec{S/T}]{\vec D}$. \end{proof} In the limit case of an infinite nested simulation, we obtain the following proposition, which slightly extends Theorem~5.4 in \cite{nexpdir}. \begin{proposition}\label{prop:hochman} If $F_i \simu[S_i,T_i,D_iS_i] F_{i+1}$ exactly and completely, for all $i \in \mathbb N$, then \begin{equation*} \NE(F_0)\subseteq\anib[\seq S/\seq T]{\seq D}+\left(\inf_{n\in\mathbb N}\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]~. \end{equation*} In particular, if the simulations are non-trivial and $\prod_{i<n}S_i/T_i$ converges to $0$, then $\NE(F_0)=\{\anib[\seq S/\seq T]{\seq D}\}$. \end{proposition} \begin{proof} From Lemma \ref{lem:iterrelsimulexp}, we know that \begin{equation*} \NE(F_0)\subseteq\bigcap_{n\in\mathbb N}\left(\prod_{i<n}\frac{S_i}{T_i}\right)[-1,1]+\anib[\seq{S}_{\co{0}{n}}/\seq{T}_{\co{0}{n}}]{\seq{D}_{\co{0}{n}}}, \end{equation*} which gives the desired inclusion as $n$ goes to $\infty$. For the second claim, if all the simulations are non-trivial, then from Lemma \ref{l:nonvide} we know that $\orb{F_0}$ is uncountable, hence by Proposition \ref{p:atleastone}, it has at least one non-expansive direction. In addition, by the first claim and the assumption $\prod_{i<n}S_i/T_i \to 0$, we know that $\NE(F_0) \subseteq \{\anib[\seq S/\seq T]{\seq D}\}$, and we must actually have equality. \end{proof} \section{Explicit simulation}\label{sub:simulconv} In the previous sections of this chapter, we defined a notion of simulation and then proved some facts about this notion, which suggest that it is a good choice. However, we have not given any non-trivial example of simulation until now, nor have we explained how one could be constructed. For example, the decoding function $\Phi$ could be anything. The simulations that we construct all have the same basic \xpr{form}. We call these simulations \dfn{explicit}, because the simulated configuration is explicitly written letter by letter in the simulating configuration. In order to make this more precise, we need to give some more definitions and notation. Let us fix some fields $\texttt{Addr}$, $\texttt{Addr}_{+1}$, $\texttt{Clock}$ and $\texttt{Clock}_{+1}$ (in fact, these are just distinct field numbers onto which we project letters). These are sometimes called \emph{coordinate} fields. For $s \in \co{0}{S}$ and $t \in \co{0}{T}$, let $\gra{s}{t}{S}{T} \defeq \per[s,S]{\texttt{Addr},\texttt{Addr}_{+1}} \cap \emp[t]{\texttt{Clock},\texttt{Clock}_{+1}}$.
In $\gra{s}{t}{S}{T}$, the values of $\texttt{Addr}$ grow by $1$ modulo $S$ from left to right, the value of $\texttt{Clock}$ is constant and equal to $t$, and the origin has $\texttt{Addr}$ equal to $s$. This is the usual way to break up a configuration into blocks, with one small difference. Normally, we would only need the fields $\texttt{Addr}$ and $\texttt{Clock}$ to do this. However, since we are using PPA, we need to have some right- (or left-) moving copies of these fields in order to check their consistency. Having this in mind, we define $\gra{s}{t}{S}{T}$ in the above way, since it will make notation a little lighter later on. The union $\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\gra{s}{t}{S}{T}$ is disjoint. In addition, let $\grs{s}{S}\defeq \per[s,S]{\texttt{Addr},\texttt{Addr}_{+1}}$. In $\grs{s}{S}$, we do not care about the value of $\texttt{Clock}$ (or even whether it is constant). Clearly, $\gra{s}{t}{S}{T} \subseteq \grs{s}{S}$. For $c \in \grs{s}{S}$ and $i \in \mathbb Z$, the pattern \begin{equation*} \col{i}{c}=c_{\co{-s+iS}{-s+(i+1)S}} \end{equation*} is called a \dfn{colony} of $c$. Clearly, $(\col{i}{c})_{i \in \mathbb Z}=\bulk[S]{\sigma^{-s}(c)}$ and in $\col{i}{c}$, the value of $\texttt{Addr}$ (and $\texttt{Addr}_{+1}$) grows from $0$ to $S-1$ from left to right. This is the natural way to break a configuration into colonies of size $S$. Now, we are going to use every colony to encode one letter of the simulated configuration. For this, we have to define the appropriate decoding function. Let $\tilde{\phi} \colon (\haine5^{*})^* \pto \haine5^{**}$ be the following function, which is the basis of all the decoding functions that we will use: Let $w \in (\haine5^*)^*$ be a \emph{word} over the infinite alphabet $\haine5^*$ (we look at $w$ as a finite part of some $1$D configuration over $\haine5^*$). If $\hs{w}= \Chi{\vec{u}}3^{\length{w}-\length{\Chi{\vec{u}}}}$, where $\vec{u} \in \haine5^{**}$ (we look at $\vec{u}$ as a tuple of elements of $\haine5^*$), then we define $\tilde{\phi}(w)=\vec{u}$. Notice that $\Chi{\vec{u}} \in \haine3^*$. In other words, $w$ is equal to $\Chi{\vec{u}}$ up to appending some $3$s at the end of $\Chi{\vec{u}}$ (this gives a word in $\haine4^*$) and then adding some $4$s in front of every \emph{letter} of $\Chi{\vec{u}}3^{\length{w}-\length{\Chi{\vec{u}}}}$ (which gives a word in $(\haine5^{*})^*$). Unless $w$ has this very specific form, $\tilde{\phi}(w)$ is not defined. $\tilde{\phi}$ is well-defined because $\Chi{\cdot}$ is an injection and because $3$ does not appear as a letter of $\Chi{\vec{u}} \in \haine3^*$. A necessary condition for $\tilde\phi(w)=\vec{u}$ is that $\length{w} \geq \length{\Chi{\vec{u}}}$. Let $\field$ be a new field and $\tilde{\phi}_{\field} \colon (\haine5^{**})^* \pto \haine5^{**}$ be defined as $\tilde{\phi} \pi_{\field}$. $\tilde{\phi}_{\field}$ can read words over letters with many fields by ignoring the other fields and using $\tilde{\phi}$ on $\field$. We can extend $\tilde{\phi}$ in a natural way to a map $\tilde{\Phi} \colon (\haine5^*)^{\mathbb Z} \pto (\haine5^{**})^{\mathbb Z}$ as follows: for all $c \in (\haine5^{*})^{\mathbb Z}$ and $i \in \mathbb Z$, $\tilde{\Phi}(c)_i=\tilde{\phi}(c\restr{\co{iS}{(i+1)S}})$. Similarly, $\tilde\phi_{\field}$ can be naturally extended to a map $\tilde{\Phi}_{\field} \colon (\haine5^{**})^{\mathbb Z} \pto (\haine5^{**})^{\mathbb Z}$.
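The following Python sketch mimics $\tilde\phi$ on strings. Two ingredients are loudly hypothetical stand-ins: the injection \texttt{chi} below (3-bit binary per symbol, fields separated by a \texttt{2}) only plays the role of the real $\Chi{\cdot}$ of Fact~\ref{f:encodings}, and we read $\hs{\cdot}$ as \xpr{strip the leading $4$s of every letter and concatenate what remains}.
\begin{verbatim}
def chi(fields):
    """Hypothetical stand-in for CHI: inject a tuple of words over
    {0,...,4} into {0,1,2}* (3 bits per symbol, fields joined by '2')."""
    enc = lambda f: ''.join(format(int(d), '03b') for d in f)
    return '2'.join(enc(f) for f in fields)

def chi_inv(s):
    fields = []
    for part in s.split('2'):
        if len(part) % 3:
            return None
        syms = [str(int(part[i:i + 3], 2)) for i in range(0, len(part), 3)]
        if any(int(d) > 4 for d in syms):
            return None
        fields.append(''.join(syms))
    return tuple(fields)

def tilde_phi(w):
    """w is a list of letters (words over '01234'); defined only when w
    is chi(u) padded with 3s at the end, with 4s prepended to letters."""
    cores = [letter.lstrip('4') for letter in w]
    if any(len(c) != 1 for c in cores):
        return None
    s = ''.join(cores)
    body = s.rstrip('3')
    if '3' in body:                    # padding 3s may only sit at the end
        return None
    return chi_inv(body)

u = ('13', '4')
w = list(chi(u) + '333')               # append 3s ...
w[0], w[2] = '4' + w[0], '44' + w[2]   # ... and prepend 4s to some letters
assert tilde_phi(w) == u
\end{verbatim}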
The idea is that every configuration will be divided into colonies using the coordinate fields and then $\tilde{\phi}_{\field}$ will be used on every colony so as to obtain a letter. Putting these letters together, we obtain the simulated configuration. Formally, a decoding function $\Phi$ will be equal to $\tilde{\Phi}_\field\restr{\Sigma}$, where $\Sigma \subseteq \grs{0}{S}$, for some $S$ that is large enough. If $b_i = \tilde{\phi}_\field(\col{i}{c})$, then $\Phi(c)= \tilde{\Phi}_\field(c)=(b_i)_{i \in \mathbb Z}$. We call $b_i$ the \dfn{simulated letter} of the $i$'th colony, and the letters of $c$ are the \dfn{simulating letters}. The decoding functions that we will use in our constructions will \emph{always} be of the form ${\tilde{\Phi}_{\field}}{\restr{\Sigma}}$, where $\Sigma \subseteq \grs{0}{S}$. For such functions, we immediately obtain two of the conditions of a decoding function of a simulation: \begin{remark}\label{twoconditionsofsimulation} Let us fix a field list $\C=[\texttt{Addr},\texttt{Addr}_{+1},\texttt{Clock},\texttt{Clock}_{+1},\texttt{Tape}]$, $S \in {\mathbb N}_1$ and vectors $\vec{k}, \vec{k'} \in \mathbb N^{*}$ such that the following inequalities hold: \[\both{ k_\texttt{Addr}\ge\norm S\\ k_\texttt{Tape}\ge 1\\ S \geq \length{\Chi{\haine5^{\vec{k'}}}} ~.}\] Let $\Sigma \defeq (\haine5^{\vec{k}})^{\mathbb Z}\cap \grs{0}{S}\cap \tilde{\Phi}_{\texttt{Tape}}^{-1}((\haine5^{\vec{k'}})^{\mathbb Z})$. Then $\Phi \defeq {\tilde{\Phi}_{\texttt{Tape}}}{\restr{\Sigma}} \colon (\haine5^{\vec{k}})^{\mathbb Z} \to (\haine5^{\vec{k'}})^{\mathbb Z}$ is surjective and $\Phi \sigma^S = \sigma \Phi$. In addition, for every $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$, we are free to choose the values of the anonymous fields in any way we like in a pre-image. \end{remark} In the above remark, $\Sigma$ contains those configurations over $\haine5^{\vec{k}}$ that are well-structured (\textit{i.e.}, divided into colonies with the origin having address $0$) and such that the $i$'th colony contains the encoding of a letter of $\haine5^{\vec{k'}}$, for all $i \in \mathbb Z$. \chapter{The programming language}\label{c:programming} \section{Definitions and basic permutations} In our constructions, we want to use permutations that can be computed fast. It is not possible to formally state what fast means, but polynomially computable and, more generally, polynomially checkable permutations are fast enough. This is a common feature of all self-similar and hierarchical constructions, and the reasons why it is needed are explained very thoroughly in \cite{gray}. For our purposes, it is enough to describe a pseudo-programming language, with which we will write \xpr{programs} that are interpreted as permutations $\alpha \colon \haine5^{**} \pto \haine5^{**}$. Let us start describing this programming language: It has four types, \dfn{terms} (that are denoted $t,t'\ldots$), \dfn{valuations} (that are denoted $v,v',\ldots$), \dfn{conditions} (that are denoted $c,c',\ldots$) and \dfn{permutations} (that are denoted $\alpha,\alpha',\ldots$). Each type is semantically interpreted as a different kind of mathematical object. Terms are interpreted as maps $t \colon \haine5^{**} \pto \haine5^{*}$. They represent some word information that can be extracted from a tuple. Valuations are interpreted as functions $v \colon \haine5^{**} \pto \mathbb N$. Valuations represent numerical information that can be extracted from tuples.
Conditions are predicates over $\haine5^{**}$, or equivalently maps $q \colon \haine5^{**} \pto \{0,1\}$. Finally, permutations are, rather predictably, interpreted as (partial) permutations $\haine5^{**} \pto \haine5^{**}$, which will be used to define IPPA. Let us describe each type in more detail. We are not going to try to give a formal definition of the programming language, since it would be unnecessarily complicated. It would involve a global induction on the various types, starting from some basic objects and taking a closure under some inductive operations. Instead, we will simply list the objects that we are actually going to use in the rest of the thesis. The proofs that they are polynomially computable are often trivial and will be omitted in most cases. \paragraph{Terms} \begin{itemize} \item Every word $w \in \haine5^{*}$ is a term (understood as the constant function); \item for all $i \in \mathbb N$, the projection $\pi_i$ of the $i$'th field is a term; \item if $t$ is a term, then $\Chi{t}$ is also a term ($\Chi{t}(\vec{u})=\Chi{t(\vec{u})}$, for all $\vec{u}$ in $\haine5^{**}$); \item if $v$ is a valuation and $t$ is a term, then $t\restr{v}$ is also a term, where $t\restr{v}(\vec{u}) \defeq t(\vec{u})\restr{v(\vec{u})}$. In other words, $t\restr{v}$ uses $v$ as a pointer for $t$ and it gives the letter at the $v(\vec{u})$'th position of $t(\vec{u})$. \end{itemize} \paragraph{Valuations} \begin{itemize} \item Every natural $n \in \mathbb N$ is a valuation, understood as a constant function; \item if $t$ is a term, then $\length{t}$ is a valuation; \item for all vectors $\vec{k} \in \mathbb N^{*}$ and $i \in \mathbb N$, the function $l_{\vec{k},i}$ defined in Fact~\ref{f:encodings} is a valuation; \item if $\seq{S} \colon \mathbb N \to \mathbb N$ is a sequence of numbers and $v$ a valuation, then $S_v$ (where $S_v(\vec{u}) \defeq S_{v(\vec{u})}$) is also a valuation. (In general, the complexity of this valuation depends on the complexity of $\seq{S}$, and it is not polynomially computable if $\seq{S}$ is not.) \item Basic arithmetical operations (addition, subtraction, multiplication, etc.) of valuations are still valuations. \end{itemize} In fact, we will need the following, more general version of the third bullet: \begin{itemize} \item For all valuations $v$, vector sequences $\vec{k} \colon \mathbb N \to \mathbb N^M$ and $i \in \mathbb N$, $l_{\vec{k}_v,i}$ (where $l_{\vec{k}_v,i}(\vec{u}) \defeq l_{\vec{k}_{v(\vec{u})},i}(\vec{u})$) is also a valuation. In this version, the vector whose structure $l_{\vec{k}_v,i}$ describes depends on the input letter. Of course, if $\vec{k}$ is not a polynomially computable sequence, then neither is $l_{\vec{k}_v,i}$. \end{itemize} A \dfn{vector valuation} is a collection $\vec{v}=(v_i)_{0 \le i \le M-1}$ of valuations, for some $M \in \mathbb N$. Vector valuations are used to obtain lengths of alphabets in a polynomially computable way. \paragraph{Conditions} \begin{itemize} \item If $v_1, v_2$ are valuations, then $v_1 \geq v_2$ is a condition whose interpretation is clear; \item if $t_1, t_2$ are terms, then $t_1 = t_2$ is a condition; \item if $t,t_1$ are terms and $(Q_w)_{w \in \haine5^{*}}$ is a sequence of subsets of $\haine5^*$, then $t_1 \in Q_t$ is a condition ($\vec{u}$ satisfies $t_1 \in Q_t$ if $t_1(\vec{u}) \in Q_{t(\vec{u})}$);
\item if $t$ is a term and $i_1, \ldots, i_n$ are fields, then $\emp[t]{i_1, \ldots, i_n}$ is a condition (that is true for $\vec{u}$ if and only if $\vec{u} \in \emp[t(\vec{u})]{i_1, \ldots, i_n}$); \item $\halt{p}{v}{t}$ is a condition, where $\vec{u}$ satisfies $\halt{p}{v}{t}$ if and only if the TM defined by program $p$ does not stop within $v(\vec{u})$ steps on input $t(\vec{u})$; \item Boolean combinations of conditions are also conditions. \end{itemize} \paragraph{Permutations} \begin{itemize} \item For every condition $q$, $\chekk[q]$ is a permutation. $\chekk[q](\vec{u})$ is equal to $\vec{u}$ if $\vec{u}$ satisfies $q$, and is undefined otherwise. This is an involution. \item For every valuation $v$ and field $i \in \mathbb N$, $\incr[v;i]$ is a permutation defined in the following way: Let $\vec{u} \in \haine5^{**}$ and define $\vec{u'}$ in the following way: $u'_j\defeq u_j$ for all $j\ne i$, and $u'_i\defeq\sh[\length{u_i}]\gamma(u_i)$, where $\gamma(w)\defeq\anib{\bina w+1\bmod v(\vec u)}$ when $\bina w<v(\vec u)$ (undefined otherwise); then $\incr[v;i](\vec u)\defeq\vec{u'}$ if $v(\vec u)=v(\vec{u'})$ (undefined otherwise). Essentially, $\incr[v;i]$ adds $1$ modulo $v(\vec{u})$ to the $i$'th field of $\vec{u}$. The additional complications are due to the fact that we want this rule to always be reversible (which would not necessarily be true if $v(\vec{u'})$ were not equal to $v(\vec{u})$) and length-preserving (which is the reason that we use the strange $\gamma$ function). \item $\alpha_\U[t;\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}]$ is a permutation for every term $t$ and fields $\texttt{Tape}$, $\texttt{Head}_{-1}$, $\texttt{Head}_{+1}$. We direct the reader to Section~\ref{Jarkko} for the definition of this permutation, since it uses a permutation that is defined and examined therein. \item Let $t$ be a term and $i$ be a field such that $t$ \dfn{does not depend} on $i$. In other words, if $\vec{u},\vec{u'} \in \haine5^{**}$ and $\pi_j(\vec{u})=\pi_j(\vec{u'})$ for all $j \neq i$, then $t(\vec{u})=t(\vec{u'})$. Then, $\rite[t;i]$ is a permutation defined as follows: Let $\vec{u} \in \haine5^{**}$. $\rite[t;i](\vec{u})$ is defined if and only if $\hs{u_i}= \motvide$. In this case, all fields remain the same except for $i$, which becomes equal to $\sh[\length{u_i}]{t(\vec{u})}$. Essentially, we check that the field $i$ is empty and then write $t(\vec{u})$ on it, while preserving the lengths. The condition that $t$ does not depend on $i$ is essential to ensure reversibility. $\rite[t;i]^{-1}$ first checks that the $i$'th field is equal to $t(\vec{u})$ and then empties it, while preserving the lengths. This is a way to reversibly erase some information from a letter, namely, by comparing it with some other place of the letter where the same information is held. \item For all fields $i,i'$, $\exch[i,i']$ is a permutation defined as follows: Let $\vec{u} \in \haine5^{**}$. $\exch[i,i'](\vec{u})$ is defined if and only if $\length{u_i}=\length{u_{i'}}$. In this case, all fields are unchanged except for $i$ and $i'$, whose values are exchanged. This is a length-preserving involution. \item For every condition $q$ and permutation $\alpha$, $\algorithmicif\ q\ \algorithmicthen\ \alpha$ is a permutation. On input $\vec{u}\in \haine5^{**}$, it applies $\alpha$ if condition $q(\vec{u})$ is satisfied and $q(\vec{u})=q(\alpha(\vec{u}))$. If $q(\vec{u})$ is satisfied and $q(\vec{u}) \neq q(\alpha(\vec{u}))$, then it is not defined on $\vec{u}$ (this ensures reversibility).
Finally, if $q(\vec{u})$ is \emph{not} satisfied, it is equal to the identity. \item The composition of permutations is also a permutation. In constructions, we will denote the composition $\alpha_2 \circ \alpha_1$ by writing $\alpha_2$ below $\alpha_1$. \end{itemize} In these definitions, we check that the values of the valuations, terms and conditions that are given as parameters do not change. This is a technical point that ensures that they are interpreted as reversible functions. In all our constructions, these conditions will easily be satisfied because the valuations, terms and conditions will either be constant or depend on fields that are not modified by the rule at hand. If we were giving a complete, formal description of a language, then this would be the point where, by a large, tedious induction, we would prove that, given some natural conditions on the parameters, every permutation of the language is polynomially computable, or, more precisely, polynomially computable in its parameters (this means that its complexity is polynomial in the complexity of its parameters), and that short programs exist for the permutations. Namely, the size of the program is $O(p_{t,v, \ldots})$, where $t,v$, etc.\ are the parameters of the permutation. We can also prove that the size of a program of a permutation is approximately the same as the size of the program of its inverse. \section{Conventions about defining IPPA}\label{s:conv} In the first part of this chapter, we gave a short exposition of the programming language that will be used in the rest of the thesis in order to define permutations of $\haine5^{**}$. However, in order to define a PPA, the number of fields and the directions of the fields also have to be fixed. Recall that we want to define PPA, \textit{i.e.}, RPCA of the form $F = \sigma^{\vec\delta}\circ\alpha$, where $\vec\delta \in \{-1,0,+1\}^M$ is the shift vector and $\alpha$ is a partial permutation of $\A=\A_0 \times \ldots \times \A_{M-1}$, for some $M \in \mathbb N$. In our case, $F$ will always be the restriction of an IPPA, \textit{i.e.}, $\A$ will be equal to $\haine5^{\vec{k}}$, for some $\vec{k} \in \mathbb N^M$, and $\alpha \defeq \beta\restr{\haine5^{\vec{k}}}$ will be the restriction of some (infinite) permutation $\beta$ defined in the programming language. We will use the following conventions when constructing such PPA: \begin{itemize} \item We first give a list of so-called \dfn{explicit} field labels. Such a list will often be noted in the form $\C\defeq[\field_{e},\ldots,\field'_{e'}]$, where $e,e' \in \{-1,0,+1\}$. The subscripts $e,\ldots,e'$ correspond to the \emph{directions} of the fields (if the direction is equal to $0$, then it will be omitted). The field list is a tuple of pairwise different natural projections, that are used by the permutation, together with their directions, that will be used by the shift. (The labels of the fields will make the permutations more understandable than the corresponding indices $i,i',\ldots$ would.) The field list is not fixed, so in fact for every field list, we give a different permutation, even though they only differ in the enumeration of the fields. The permutation is assumed to reject any element of $\haine5^{**}$ that does not involve all field numbers in the list, but note that it does not reject tuples that have more fields; the so-called \dfn{anonymous} fields, that are not in the list, are not modified by the permutation (but they might be used by some other PPA with which we compose).
This allows us to define some simple PPA with few fields and then use them as \xpr{building blocks} in order to build more complicated ones, in the following sense: the complicated PPA has more fields than the simple one, but, if it does not \xpr{touch} any of its fields, its behaviour on those fields is described by the corresponding behaviour of the building block. If $\C$ and $\C'$ are two lists of field labels, then $\C \cup \C'$ is the list that contains the fields of $\C$ and $\C'$. Usually, the lists will be disjoint, so that we will use the notation $\C \sqcup \C'$. \item After giving the field list, we describe an (infinite) permutation using the programming language defined in the first part of this chapter. \item Then, we need to fix $M \in \mathbb N$ and $\vec{k} \in \mathbb N^M$. If we do not care about the existence of anonymous fields, then we always assume that $M$ is some number greater than or equal to the largest natural appearing in the field list $\C$. In this way, we ensure that the configurations will not be rejected simply because the program tries to access a field that is not there. When we do not want anonymous fields to exist (for example, when we want to achieve exactness of a simulation), then we assume that the field list $\C$ is equal to $[0,\ldots,M-1]$ and we choose this $M$ for the number of fields. In any case, after choosing $M$, we fix some vector $\vec{k}\in \mathbb N^M$ satisfying some appropriate conditions (which are case-specific). \item Finally, we need to define the directions of the fields. However, this has already been done in the definition of the field list with the use of the subscripts $e,e'$, etc. The directions of the anonymous fields can be anything. In fact, our statements will be true for \emph{all} directions of the anonymous fields, since we will not refer to them. \end{itemize} \chapter{The universal simulator}\label{construction} In this chapter, our aim is to construct an RPCA (a family of RPCA in fact, depending on some parameters) that can simulate every other RPCA that satisfies some conditions. This is done in Lemma~\ref{universal}. This RPCA is extremely helpful and it will be part of all our subsequent constructions. Since it is difficult to overstress the importance of this RPCA, we will give a step-by-step description of its construction with as many details as possible. In Section~\ref{sec:structure}, we will embed a periodic rectangular grid in every configuration. This is a standard procedure in hierarchical constructions and it will allow us to partition every configuration into colonies and use the decoding function $\tilde\Phi$. In Section~\ref{Jarkko}, we will make a slight digression and show how we can simulate any TM with an RPCA in real time. This is needed in order to preserve the expansiveness of the horizontal direction. Then, in Section~\ref{sec:singlepermutation}, we construct an RPCA to simulate an RPCA whose direction vectors are null (all its fields are still). There are some tricks involved in this phase, mainly having to do with deleting the previous simulated letter and synchronizing the computations. Then, in Section~\ref{sec:singleshift}, we construct an RPCA that can simulate any RPCA whose permutation is the identity, \textit{i.e.}, any shift. Finally, in Section~\ref{sec:universal}, we construct the universal IPPA $\unive$ that can simulate any RPCA, when it is restricted to the appropriate alphabet.
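Before going into the construction itself, the following toy Python sketch previews the space-time structure that the first section will enforce: addresses are periodic with period $S$ in space, while the clock is spatially constant and advances by $1$ modulo $T$ at every step. The concrete values of $S$ and $T$ below are arbitrary.
\begin{verbatim}
# Preview of the (Addr, Clock) annotation enforced by the rule of the
# next section: Addr grows by 1 mod S from left to right, Clock is
# constant on each row and increases by 1 mod T from row to row.
S, T = 4, 3

def annotation(s0, t0, width, steps):
    return [[((s0 + i) % S, (t0 + n) % T) for i in range(width)]
            for n in range(steps)]

for row in annotation(0, 0, 8, 4):
    print(row)
\end{verbatim}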
\section{Imposing a periodic structure}\label{sec:structure} Let $\C[\coordi]=[\texttt{Addr},\texttt{Addr}_{+1},\texttt{Clock},\texttt{Clock}_{+1}]$. \begin{itemize} \item $\texttt{Clock}$ and $\texttt{Addr}$ are meant to localize the cell in its macrocell, and they correspond to the projections involved in the definition of explicit simulation in Section~\ref{sub:simulconv}. \item $\texttt{Clock}_{+1}$ and $\texttt{Addr}_{+1}$ are used to communicate with the neighbour cells, so that consistency between the $\texttt{Clock}$ and $\texttt{Addr}$ fields is achieved. \end{itemize} \begin{algo}{coordi}{\coordi}{v_\texttt{MAddr}, v_\texttt{MClock}} \STATE{$\chekk[\bina{\pi_{\texttt{Addr}_{+1}}}=\bina{\pi_\texttt{Addr}}$ \AND $\bina{\pi_{\texttt{Clock}_{+1}}}=\bina{\pi_\texttt{Clock}}]$} \label{al:coordi:chek} \COMMENT{Check left-neighbour information coherence.} \STATE{$\incr[v_\texttt{MAddr};\texttt{Addr}_{+1}]$}\label{al:coordi:incadd} \COMMENT{Increment $\texttt{Addr}_{+1}$ so that the right neighbour can check coherence.} \STATE{$\incr[v_\texttt{MClock};\texttt{Clock}]$}\label{al:coordi:incage} \COMMENT{Update \texttt{Clock}.} \STATE{$\incr[v_\texttt{MClock};\texttt{Clock}_{+1}]$}\label{al:coordi:incage1} \COMMENT{Update $\texttt{Clock}_{+1}$.} \end{algo} By the discussion of Chapter~\ref{c:programming}, we know that $\coordi[v_{\texttt{MAddr}},v_{\texttt{MClock}};\C[\coordi]]$ is polynomially computable with respect to its parameters $v_{\texttt{MAddr}}$ and $v_{\texttt{MClock}}$. In practice, the two valuation parameters $v_{\texttt{MAddr}}$ and $v_{\texttt{MClock}}$ will be constant over the alphabet of the PPA, in which case the behaviour will be described by the following: \begin{lemma}\label{koo} Let us fix a field list $\C[\coordi]\in\mathbb N^4$ and integers $S,T\in{\mathbb N}_1$.\\ Let $F$ be the IPPA defined by the permutation $\coordi[S,T;\C[\coordi]]$ and directions $\vec{\nu}_{\coordi}$ given by the label indices, and let $\vec k\in\mathbb N^*$ be a vector satisfying: \[\both{ k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S\\ k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T ~.}\] Let $c \in (\haine5^{\vec{k}})^{\mathbb Z}$. Then, $c \in F^{-2}((\haine5^{\vec{k}})^{\mathbb Z})$ if and only if there exist $s \in \co{0}{S}$ and $t \in \co{0}{T}$ such that $c \in \gra{s}{t}{S}{T}$. In this case, $F(c) \in \gra{s}{t+1 \mod T}{S}{T}$. \end{lemma} In the previous statement, $S$ and $T$ should be understood as the \dfn{width} and \dfn{height} of the macrocells. Notice, also, that the statement holds for all vectors $\vec{k} \in \mathbb N^*$ that satisfy the inequalities, which means that there can be other fields in the alphabet. This means that if we use $\coordi$ together with other rules that do not change the values of the fields in $\C[\coordi]$, the statement of the lemma will still be true. The restrictions about the lengths of $\vec{k}$ ensure that fields are large enough that we can write the binary representation of $S$ and $T$ on them. \begin{proof} We prove the stronger claim that if $F^2(c)$ exists, then there exist $0\le s<S$ and $0\le t<T$ such that for all $n \in \mathbb Z$, $\bina{\pi_{\texttt{Addr}}(c_n)}=\bina{\pi_{\texttt{Addr}_{+1}}(c_n)}=s+n \bmod S$ and $\bina{\pi_{\texttt{Clock}}(c_n)}=\bina{\pi_{\texttt{Clock}_{+1}}(c_n)}=t$. Suppose that $\bina{\pi_{\texttt{Addr}}(c_n)} \neq \bina{\pi_{\texttt{Addr}_{+1}}(c_n)}$ or $\bina{\pi_{\texttt{Clock}}(c_n)} \neq \bina{\pi_{\texttt{Clock}_{+1}}(c_n)}$, for some $n \in \mathbb Z$. 
Then, line~\ref{al:coordi:chek} would not be defined at cell $n$, so $F(c)$ would not exist, which is a contradiction. Suppose, then, that there exists $n \in \mathbb Z$ with $\bina{\pi_{\texttt{Addr}}(c_{n+1})} \neq \bina{\pi_{\texttt{Addr}}(c_n)} +1 \bmod S$. Line~\ref{al:coordi:incadd} and the fact that $\texttt{Addr}_{+1}$ is a right-going field imply that $\bina{\pi_{\texttt{Addr}_{+1}}F(c)_{n+1}} = \bina{\pi_{\texttt{Addr}}(c_n)} +1 \bmod S$. Then, line~\ref{al:coordi:chek} is not defined at cell $n+1$ of $F(c)$ since $\bina{\pi_{\texttt{Addr}_{+1}}F(c)_{n+1}} = \bina{\pi_{\texttt{Addr}}(c_n)} +1 \bmod S \neq \bina{\pi_{\texttt{Addr}}(c_{n+1})}$. Therefore $F^2(c)$ does not exist, which contradicts the hypothesis. Similarly, we can prove that $\bina{c_n.\texttt{Clock}} = \bina{c_{n+1}.\texttt{Clock}}$, for all $n \in \mathbb Z$. Thus, the stronger claim we made at the beginning of the proof is true. If $\bina{\pi_{\texttt{Addr}}(c_0)}=s$ and $\bina{\pi_{\texttt{Clock}}(c_0)}=t$, then the previous claim implies that for all $n \in \mathbb Z$, $\bina{\pi_{\texttt{Addr}}(c_n)}=s+n \bmod S$ and $\bina{\pi_{\texttt{Clock}}(c_n)}=t$. Furthermore, since the value of $\texttt{Addr}$ is not changed by $F$ and the value of $\texttt{Clock}$ is increased by $1 \bmod T$ every time step by line~\ref{al:coordi:incage}, we have that $\bina{\pi_{\texttt{Addr}}F(c)_n}=s+n \bmod S$ and $\bina{\pi_{\texttt{Clock}}F(c)_n}=t+1 \bmod T$, for all $n \in \mathbb Z$. \end{proof} In general, when using IPPA, we have to use a similar rule every time we want to impose some horizontal restriction on the configuration. Namely, we have to use an additional right-moving (or left-moving, it does not make a difference) field, and then we need $2$ steps in order to verify that the field is constant. All of the rules we construct will factor onto $\coordi[S,T;\C[\coordi]]$, for some $S,T \in {\mathbb N}_1$. The following remark will give the disjointness condition in the definition of simulation. \begin{remark}\label{thirdconditionofsimulation} Assume that $F\colon \A^{\Z} \pto \A^{\Z}$ factors onto $\coordi[S,T;\C[\coordi]]$ through the factor map $H$ and let $\gr{s}{t}{F} \defeq H^{-1}(\gra{s}{t}{S}{T})$. Then, the union $\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}\gr{s}{t}{F}$ is disjoint and $F(\gr{s}{t}{F}) \subseteq \gr{s}{t+1 \bmod T}{F}$. Therefore, if $\Phi \colon \A^{\Z} \pto \B^{\Z}$ satisfies $\mathcal{D}(\Phi) \subseteq \gr{s}{t}{F}$, for some $s,t$, then the union $\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}F^t\sigma^s(\mathcal{D}(\Phi))$ is disjoint. \end{remark} $\gr{s}{t}{F}$ implicitly depends on the factor map $H$. However, in applications, $H$ will be equal to $\pi_{\C[\coordi]}$, so that no ambiguity arises by omitting it. \section{Simulating TM with IPPA}\label{Jarkko} The IPPA $\coordi$ allows us to divide every configuration into colonies with a periodical clock. We want to use this space-time structure in order to do computations within the \dfn{work-periods} (the \xpr{time} between two subsequent steps where the clock is $0$). We are going to introduce the elements needed for this one by one, which will hopefully make some of the ideas clearer. First, let us show how to simulate TMs in real time with PPA. For all programs $p=p_{\mathcal{M}} \in \haine4^*$, we construct an IPPA that simulates $\mathcal{M}$ in real time.
This section is inspired by \cite{morita}. Let $\C[\U]\defeq[\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}]$. The key to maintaining reversibility is to keep track of the history of the computation. Some kind of \emph{archive} of each past step is shifted in the direction opposite to the head, in order for the head to always have space to write the new history. Recall the definition of the function $\U$ from Subsection~\ref{s:turing}. Let $\gamma_\U[p] \colon (\haine4^*)^3 \pto (\haine4^*)^3$ be defined by the following transitions: $(a,h_{-1},h_{+1})$ is mapped to \begin{itemize} \item $(a',\Chi{a,q,\delta},\Chi{a,q,\delta})$, if $(h_\delta,h_{-\delta})=(q,\motvide)$ ($\texttt{Head}_{\delta}$ contains the TM head) and $\U(a,q,p)=(a',\motvide,+1)$. If $\texttt{Head}_{\delta}$ contains a head and the transition is an accepting one, then we write an encoding of the last transition on the $\texttt{Head}$ fields, modify $\texttt{Tape}$, and the TM head disappears. Here, the assumption that the TM head (which has disappeared) moves to the right is convenient to ensure injectivity. \item $(a',h_{-1}',h_{+1}')$, where $(h_{\delta'}',h_{-\delta'}')=(q',\Chi{a,q,\delta})$ if $(h_\delta,h_{-\delta})=(q,\motvide)$ and $\U(a,q,p)=(a',\motvide,\delta')$. If the transition is not an accepting one, then $\texttt{Tape}$ is modified, the TM head is written on the appropriate $\texttt{Head}$ field and on the other $\texttt{Head}$ field we write an encoding of the transition and of the position of the head before the transition. \item $(a,h_{-1},h_{+1})$ if $h_{-1},h_{+1}\notin Q_p\setminus\{\motvide\}$. If none of the $\texttt{Head}$ fields contains a TM head, then do nothing. \end{itemize} It is not difficult (by a tedious case enumeration) to see that $\gamma_\U[p]$ is a partial permutation and \emph{polynomially computable}, thanks in particular to the disjointness of $Q_p\subset\haine2^*$ and $\Chi{\haine4\times Q_p\times\{-1,+1\}}\subset2\haine3^*$. Basically, $\gamma_\U[p]$ identifies the accepting state $\motvide$ with the absence of state (for which it just performs the identity). In other cases, it prevents two (non-accepting) head states from occupying the same cell; then it applies the transition rule and sends the new state in the correct direction (depending on $\delta'$), while sending an archive of the last performed operation in the opposite direction. At the moment that the accepting state appears, it just sends two (identical) archives in opposite directions (there is no new state to send). We say that $c\in (\haine4^{**})^{\mathbb Z}$ \dfn{represents} the configuration $(z,q,j)\in \haine4^{\mathbb Z}\times Q\times\mathbb Z$ of the machine $\mathcal{M}$ corresponding to program $p$ if: \begin{itemize} \item For all $i \in \mathbb Z$, $\pi_{\texttt{Tape}}(c_i)=z_i$; \item $(\pi_{\texttt{Head}_{-1}}(c_j),\pi_{\texttt{Head}_{+1}}(c_j)) \in \{q\} \times \{\motvide\} \cup \{\motvide\} \times \{q\}$; \item For all $i\ne j$ and $\delta\in\{-1,+1\}$, $\pi_{\texttt{Head}_{\delta}}(c_i)\notin Q_p\setminus\{\motvide\}$; \item For all $i\ne j$ and $\delta\in\{-1,+1\}$, if $\pi_{\texttt{Head}_{\delta}}(c_i)\neq\motvide$, then $\delta$ has the sign of $j-i$. \end{itemize} Intuitively, this means that in $c$ there is at most one (non-accepting) head, at position $j$, no head elsewhere, and nothing (represented by $\motvide$, like the accepting state) in $\texttt{Head}_{-1}$ to its right nor in $\texttt{Head}_{+1}$ to its left. The possible archives go away from the head position.
We can thus see that the head will never move into a cell where there is an archive, so that one of the transitions of $\gamma_{\U}[p]$ will always be applicable. Formally, we have the following lemma about the behaviour of $\gamma_\U[p]$. \begin{lemma}\label{l:tmsim} Let us fix a field list $\C[\U]\in\mathbb N^3$ and a program $p=p_{\mathcal{M}} \in \haine4^*$.\\ Consider the IPPA $F$ defined by permutation $\sh{\gamma_\U[p]}$ and directions $\vec{\nu}_{\U}$ given by the label indices.\\ Let $\vec k\in\mathbb N^*$ be a vector satisfying: \[\both{ k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge \norm{\Chi{\haine4\times Q_p\times\{-1,+1\}}}\\ k_\texttt{Tape} \ge 1 ~.}\] Let $c \in (\haine5^{\vec{k}})^{\mathbb Z}$ and suppose that $\hs{c}$ represents the configuration $(z,q,j)\in \haine4^{\mathbb Z}\times Q\times\mathbb Z$ of $\mathcal{M}$. Then, for all $t \in \mathbb N$, $\hs{F^t(c)}$ represents $\mathcal{M}^{t}(z,q,j)$. \end{lemma} As in Lemma~\ref{koo}, the inequalities about the lengths of $\vec{k}$ simply state that the fields are long enough. Using Lemma~\ref{sharpization}, we will omit the $\sh{\cdot}$ and $\hs{\cdot}$ from $\sh{\gamma_\U[p]}$ and $\hs{c}$ in the following proof, since they are only used to make $F$ have constant lengths. \begin{proof} We will prove the claim for $t=1$; the general claim then follows by induction. Suppose, first, that $\mathcal{M}(z,q,j)$ does not exist. This means that $\delta_\mathcal{M}(z_j,q)$ does not exist, or equivalently, that $\U(z_j,q,p)=\U(c_j.\texttt{Tape},q,p)$ does not exist. From the definition of $\gamma_\U[p]$ and the fact that $c$ represents $(z,q,j)$, we have that ${\gamma_{\U}[p]}(c_j)$ does not exist, which implies that $F(c)$ does not exist. Suppose, then, that $\mathcal{M}(z,q,j)=(z',q',j')$ exists. This means that $\U(z_j,q,p)=(z'_j,q',j'-j)$, and $z'_i=z_i$ for any $i\ne j$. By assumption, for any $i \in \mathbb Z$, $(\pi_{\texttt{Tape}}(c_i),\pi_{\texttt{Head}_{-1}}(c_i),\pi_{\texttt{Head}_{+1}}(c_i)) = (z_i,h_{-1,i},h_{+1,i})$ for some $h_{-1,i},h_{+1,i} \in Q_p \cup \Chi{\haine4\times Q_p\times\{-1,+1\}} \cup \{\motvide\}$. Moreover, for any $i\ne j$ and $\delta\in\{-1,+1\}$, $h_{\delta,i}\notin Q_p$, so the identity rule is applied. After applying the shifts, this gives that for any $i<j-1$, \begin{equation*} (\pi_{\texttt{Tape}}F(c_i),\pi_{\texttt{Head}_{-1}}F(c_i),\pi_{\texttt{Head}_{+1}}F(c_i))=(z_i,h_{-1,i+1},h_{+1,i-1}) = (z'_i,h_{-1,i+1},\motvide) \end{equation*} with $h_{-1,i+1}\notin Q_p$, and for any $i>j+1$, \begin{equation*} (\pi_{\texttt{Tape}}F(c_i),\pi_{\texttt{Head}_{-1}}F(c_i),\pi_{\texttt{Head}_{+1}}F(c_i))=(z_i,h_{-1,i+1},h_{+1,i-1})=(z'_i,\motvide,h_{+1,i-1}) \end{equation*} with $h_{+1,i-1}\notin Q_p$. Now, assume that $(h_{-1,j},h_{+1,j})=(q,\motvide)$ and that $\U(z_j,q,p)=(z'_j,q',-1)$ (the other cases can be dealt with in a similar way). Then the transition $(z_j,q,\motvide) \to (z'_j,q',\Chi{z_j,q,-1})$ is applied by $\gamma_\U[p]$. After the application of $\gamma_\U[p]$ and the shifts, we obtain \begin{eqnarray*} (\pi_{\texttt{Tape}}F(c_j),\pi_{\texttt{Head}_{-1}}F(c_j),\pi_{\texttt{Head}_{+1}}F(c_j)) &=& (z'_j,\motvide,\motvide),\\ (\pi_{\texttt{Tape}}F(c_{j-1}),\pi_{\texttt{Head}_{-1}}F(c_{j-1}),\pi_{\texttt{Head}_{+1}}F(c_{j-1})) &=& (z'_{j-1},q',\motvide) \text{ and} \\ (\pi_{\texttt{Tape}}F(c_{j+1}),\pi_{\texttt{Head}_{-1}}F(c_{j+1}),\pi_{\texttt{Head}_{+1}}F(c_{j+1})) &=& (z'_{j+1},\motvide,\Chi{z_j,q,-1}).
\end{eqnarray*} All conditions are hence satisfied for $F(c)$ to represent $\mathcal{M}(z,q,j)$. \end{proof} Note that due to the parallel nature of IPPA, some configurations may involve several machine heads, and valid simulations may take place in parallel, provided that there is enough space between them so that the archives and the heads do not collide. For this reason, we need to give a \xpr{finite version} of the previous lemma. \begin{lemma}\label{Turing} Let us fix a field list $\C[\U] \in\mathbb N^3$ and a program $p=p_{\mathcal{M}} \in \haine4^*$.\\ Consider the IPPA $F$ defined by permutation $\sh{\gamma_\U[p]}$ and directions $\vec{\nu}_{\U}$ given by the label indices. Let $\vec k\in\mathbb N^*$ be a vector satisfying: \begin{align*} &k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge \norm{\Chi{\haine4\times Q_p\times\{-1,+1\}}}\\ &k_\texttt{Tape} \ge 1. \end{align*} Let $c \in (\haine5^{\vec{k}})^{\mathbb Z}$, $c'=\hs{c}$ and assume that there exists $n \in \mathbb N$ such that the set \begin{equation*} J\defeq\set{j}{\mathbb Z}{{(\pi_{\texttt{Head}_{-1}}(c'_j),\pi_{\texttt{Head}_{+1}}(c'_j))}\ne(\motvide,\motvide)} \end{equation*} satisfies that for any $j\neq j'\in J$, we have $\abs{j'-j}>2n$, and that for all $j \in J$, $(c'_j.\texttt{Head}_{-1},c'_j.\texttt{Head}_{+1}) \in Q_p \times \{\motvide\} \cup \{\motvide\} \times Q_p$. For $j \in J$, let $q_j \defeq c'_j.\texttt{Head}_{-1}$ if $(c'_j.\texttt{Head}_{-1},c'_j.\texttt{Head}_{+1}) \in Q_p \times \{\motvide\}$ and $q_j \defeq c'_j.\texttt{Head}_{+1}$ if $(c'_j.\texttt{Head}_{-1},c'_j.\texttt{Head}_{+1}) \in \{\motvide\} \times Q_p$. Then, $F^n(c)$ exists if and only if $(z^j,q'_j,j'_j)\defeq\mathcal{M}^n(c'.\texttt{Tape},q_j,j)$ exists, for all $j\in J$. In addition: \begin{itemize} \item $\pi_{\texttt{Tape}}F^n(c)_{\cc{j-n}{j+n}}=z^j_{\cc{j-n}{j+n}}$, for all $j \in J$; \item $\pi_{\texttt{Tape}}F^n(c)_i=\pi_{\texttt{Tape}}(c_i)$, if $i\notin J+\cc{-n}n$; \item $(\pi_{\texttt{Head}_{-1}}F^n(c)_{j'_j},\pi_{\texttt{Head}_{+1}}F^n(c)_{j'_j}) \in \{(q'_j,\motvide)\} \cup \{(\motvide,q'_j)\}$, for all $j \in J$; \item $(\pi_{\texttt{Head}_{-1}}F^n(c)_i,\pi_{\texttt{Head}_{+1}}F^n(c)_i) \notin Q_p\times \{\motvide\} \cup \{\motvide\} \times Q_p$, if $i \notin \sett{j'_j}{j \in J}$. \end{itemize} \end{lemma} \begin{proof} First note that the identity is always applied when the head is absent; in particular, it is applied outside $J+\cc{-t}t$ at time $t\in\mathbb N$ (because $F$ has radius $1$) and initially the heads are only in the positions in $J$. For all $j\in J$, let ${c'}^j$ be the configuration obtained by turning all $(c'_i.\texttt{Head}_{-1},c'_i.\texttt{Head}_{+1})$ to $(\motvide,\motvide)$ except at position $j$. According to the assumptions, ${c'}^j$ represents $(c'.\texttt{Tape},q_j,j)$. Thanks to Lemma~\ref{l:tmsim}, for all $0 \le t \le n$, $F^t({c'}^j)$ exists if and only if $\mathcal{M}^t(c'.\texttt{Tape},q_j,j)$ exists. In that case, since ${c'}^j$ coincides with $c'$ over the interval $\cc{j-2n}{j+2n}$ and since the radius is $1$, a simple induction can show that $F^t({c'}^j)$ coincides with $F^t(c')$ over the interval $\cc{j-2n+t}{j+2n-t}$. Lemma~\ref{l:tmsim} hence gives the main claim. Conversely, suppose that $F^t({c'}^j)$ is undefined for some $j\in J$, with $t\le n$ minimal. Then, $F^{t-1}({c'}^j)$ exists and, by Lemma~\ref{l:tmsim}, involves a unique (non-accepting) head, in some cell $j''\in\cc{j-t}{j+t}$. Therefore, $\gamma_\U[p](F^{t-1}({c'}^j)_i)$ is defined for any $i\ne j''$.
This means that $\gamma_\U[p](F^{t-1}({c'}^j)_{j''})$ is undefined; we have already noted that this is equal to $\gamma_\U[p](F^{t-1}(c')_{j''})$, which proves that $F^t(c')$ is undefined. \end{proof} Lemma~\ref{Turing} will be used in the following way: Every configuration will be divided into colonies by $\coordi$. Initially (when the clock is equal to $0$), inside every colony there will be exactly one TM head, at the leftmost cell of the colony. These TMs will perform some computation for a small amount of time compared to the width of the colonies (the $S$ of Lemma~\ref{koo}), so that the heads will not meet. Lemma~\ref{Turing} will immediately imply that at the end of the computation, in every colony, $\texttt{Tape}$ contains the output of the computation. Finally, the output of the computation will be copied onto some new field and then the computation will be run backwards (remember that $\gamma_{\U}[p]$ is a permutation). We are now ready to give the details of the definition of the permutation $\alpha_\U[t;\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}]$: Let $\vec{u} \in \haine5^{**}$ and define $\vec{u'}$ in the following way: \begin{equation*} (u'.\texttt{Tape},u'.\texttt{Head}_{-1},u'.\texttt{Head}_{+1})=\sh{\gamma_{\U}[t(\vec{u})]}(u.\texttt{Tape},u.\texttt{Head}_{-1},u.\texttt{Head}_{+1}), \end{equation*} and $u'_j\defeq u_j$ for all $j\notin\{\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}\}$. Then, \begin{equation*} \alpha_\U[t;\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}](\vec u)\defeq\vec{u'}, \end{equation*} if $t(\vec{u})=t(\vec{u'})$ (and it is undefined otherwise). Again, the definition gets a little more complicated due to the need to preserve the lengths, to have arbitrarily many fields and to ensure reversibility. When $t=\pi_{\texttt{Prog}}$, where $\texttt{Prog}$ is a new field, then the condition $t(\vec{u})=t(\vec{u'})$ is always satisfied, since $\texttt{Prog}$ is not one of the modified fields. \section{Computing the simulated permutation}\label{sec:singlepermutation} Let $\C[\compute]= \C[\U]\sqcup[\texttt{NTape}]$. \begin{itemize} \item $\texttt{Head}_{-1},\texttt{Head}_{+1}$ are used to simulate a TM with the rule of Section~\ref{Jarkko}. \item The output of this computation is written on $\texttt{NTape}$ and then the computation is reversed (the Bennett trick, see~\cite{bennett}).
\end{itemize} \begin{algo}{compute}{\compute}{v_\texttt{Addr},v_\texttt{Clock},v_\texttt{Alarm},t_\texttt{Prog},t_\revprog} \IF{$v_\texttt{Clock}=0$ \AND $v_{\texttt{Addr}}=0$} \STATE $\rite[0;\texttt{Head}_{-1}]$ \label{al:com:wrhead} \COMMENT{Write the machine initial state in the left head field.} \ENDIF \IF{$0\le v_\texttt{Clock}<v_\texttt{Alarm}$} \STATE $\sh{\gamma_{\U}[t_\texttt{Prog};\texttt{Tape},\texttt{Head}_{-1},\texttt{Head}_{+1}]}$ \COMMENT{Run the machine in order to compute the permutation.} \label{al:com:posforcom} \ELSIF{$v_\texttt{Clock}=v_\texttt{Alarm}$} \STATE{$\chekk[\pi_{\texttt{Head}_{-1}},\pi_{\texttt{Head}_{+1}}\notin Q_{t_\texttt{Prog}}\setminus\{\motvide\}]$} \COMMENT{Check that the computation halted.} \label{al:com:poscheckhalt} \STATE{$\rite[\texttt{Tape};\texttt{NTape}]$} \COMMENT{Copy the output onto a different tape.} \label{al:com:posbennet} \STATE{$\exch[\texttt{Head}_{-1},\texttt{Head}_{+1}]$} \COMMENT{The directions of the fields of $F_{\U}[p]^{-1}$ are opposite to those of $F_{\U}[p]$.} \label{al:com:posexch} \ELSIF{$v_\texttt{Alarm}<v_\texttt{Clock}\le2v_\texttt{Alarm}$} \STATE{$\sh{\gamma_{\U}[t_\texttt{Prog};\texttt{Tape},\texttt{Head}_{+1},\texttt{Head}_{-1}]^{-1}}$} \COMMENT{Unwind the computation in order to delete the archive.}\label{al:com:posbaccom} \ENDIF \IF{$2v_\texttt{Alarm}\le v_\texttt{Clock}<3v_\texttt{Alarm}$} \STATE $\sh{\gamma_{\U}[t_\revprog;\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}]}$ \COMMENT{Compute the inverse of the permutation, in order to recover \texttt{Tape}.} \label{al:com:negforcom} \ELSIF{$v_\texttt{Clock}=3v_\texttt{Alarm}$} \STATE{$\chekk[\pi_{\texttt{Head}_{-1}},\pi_{\texttt{Head}_{+1}}\notin Q_{t_\revprog}\setminus\{\motvide\}]$} \COMMENT{Check that the computation halted.} \label{al:com:negcheckhalt} \STATE{$\rite[\texttt{Tape};\texttt{NTape}]^{-1}$} \label{al:com:negbennet}\COMMENT{Empty $\texttt{NTape}$.} \STATE{$\exch[\texttt{Head}_{-1},\texttt{Head}_{+1}]$} \label{al:com:negexch}\COMMENT{Reverse the directions again.} \ELSIF{$3v_\texttt{Alarm}<v_\texttt{Clock}\le4v_\texttt{Alarm}$} \STATE{$\sh{\gamma_{\U}[t_\revprog;\texttt{Tape},\texttt{Head}_{+1},\texttt{Head}_{-1}]^{-1}}$} \label{al:com:negbaccom}\COMMENT{Unwind the second computation, too.} \ENDIF \IF{$v_\texttt{Clock}=4v_\texttt{Alarm}$ \AND $v_\texttt{Addr}=0$} \STATE $\rite[0;\texttt{Head}_{-1}]^{-1}$\label{al:com:delhead} \COMMENT{Erase the machine initial state.} \ENDIF \end{algo} $\compute[v_\texttt{Addr},v_\texttt{Clock},v_\texttt{Alarm},t_\texttt{Prog},t_\revprog;\C[\compute]]$ is polynomially computable with respect to its parameters. Note that, depending on the values of $v_\texttt{Addr}$ and $v_\texttt{Clock}$, only a few of these permutations are applied. In applications, the three parameters $v_\texttt{Alarm},t_\texttt{Prog},t_\revprog$ will be constant. $v_\texttt{Alarm}$ contains a natural number that controls how long the computation lasts. $t_\texttt{Prog}$ and $t_\revprog$ are interpreted as the program and the reverse program (\textit{i.e.}, the program of the inverse IPPA) of the IPPA that we want to simulate. We are now able to simulate uniformly any RPCA with \xpr{radius $0$} (null direction vector).
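The four phases of $\compute$ implement Bennett's trick. The following toy sketch mirrors the schedule on one colony: compute forward, copy the output, unwind, and repeat with the inverse program so that, at the end, only the output remains. It is purely illustrative Python: the toy machine and the naive whole-state archive are ours, whereas the actual construction archives one transition record per step and shifts it away from the head.
\begin{verbatim}
def reversal_program(n):
    """Toy machine computing alpha(b) = reversed b, one transposition per
    step (a bona fide permutation); returns None when it has accepted."""
    sched = [(i, n - 1 - i) for i in range(n // 2)]
    def step(state):
        s, t = state
        if t == len(sched):
            return None                  # accept: the head disappears
        s = list(s)
        i, j = sched[t]
        s[i], s[j] = s[j], s[i]
        return (s, t + 1)
    return step

def run(step, state, fuel):
    """Run forward, archiving the information needed to invert each step
    (here, naively, the whole previous state)."""
    archive = []
    for _ in range(fuel):
        nxt = step(state)
        if nxt is None:
            break
        archive.append(state)
        state = nxt
    return state, archive

def rewind(state, archive):
    """Inverse of `run`: consume the archive back to the initial state."""
    while archive:
        state = archive.pop()
    return state

def compute_colony(b, fuel=100):
    p = reversal_program(len(b))         # reversal is an involution, so the
    s1, hist = run(p, (list(b), 0), fuel)       # inverse program is p itself
    ntape = "".join(s1[0])               # Clock = U: copy the output to NTape
    assert rewind(s1, hist)[0] == list(b)       # Clock = 2U: archive erased
    s2, hist2 = run(p, (list(ntape), 0), fuel)  # Clock in [2U,3U): p^{-1} on NTape
    assert "".join(s2[0]) == b           # Clock = 3U: un-copy empties NTape
    s3 = rewind(s2, hist2)               # Clock in (3U,4U]: unwind again
    return "".join(s3[0])                # Tape now carries alpha(b)

print(compute_colony("stressed"))        # -> "desserts"
\end{verbatim}
Every phase of \texttt{compute\_colony} is invertible given its archive, which is what allows the real rule to be a permutation even though the simulated $\alpha$ is computed by an ordinary, irreversible TM.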
\begin{lemma}\label{behavior} Let us fix a field list $\C[\coordi] \sqcup \C[\compute] \in\mathbb N^8$, vectors $\vec{k}, \vec{k'}\in\mathbb N^*$, integers $S, T \in {\mathbb N}_1$, $t_0, U\in\mathbb N$ and programs $p,p^{-1}\in\haine4^*$ of a partial permutation $\alpha:\haine5^{**}\pto\haine5^{**}$ and its inverse $\alpha^{-1}$, respectively, and let $G$ be the IPPA corresponding to permutation $\alpha$ and null direction vector. Consider the IPPA $F$ with directions $\vec{\nu}_{\coordi \sqcup \compute}$, and permutation \begin{equation*} \coordi[S,T]\circ\compute[\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}}-t_0,U,p,p^{-1}]~, \end{equation*} and assume that the following inequalities hold: \[\both{ U\ge\max\{t_p({\haine5^{\vec{k'}}}),t_{p^{-1}}({\haine5^{\vec{k'}}})\}\\ S\ge\max\{2U,\norm{\Chi{\haine5^{\vec{k'}}}}\}\\ T\ge4U+t_0\\ k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S\\ k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T\\ k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ k_\texttt{Tape},k_\texttt{NTape}\ge1. }\] Then, $F\restr{(\haine5^{\vec k})^\mathbb Z}\simu[S,T,0,\Phi]G\restr{(\haine5^{\vec k'})^\mathbb Z}$, where $\Phi\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr\Sigma}$ and $\Sigma\defeq(\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{0}{S}{T}\cap \emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}((\haine5^{\vec{k'}})^\mathbb Z).$ \end{lemma} The number $t_0$ should be understood as a delay before this rule is applied, and $U$ as the maximal time that we allow for the (forward and backward) computation. \begin{proof} Remarks~\ref{twoconditionsofsimulation} and \ref{thirdconditionofsimulation} imply that $\Phi$ is surjective, $\Phi \sigma^S = \sigma \Phi$ and that $\mathcal{D}(\Phi)=\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}F^t\sigma^s(\mathcal{D}(\Phi))$ is a disjoint union. Therefore, in order to prove the simulation we only have to prove that $G \Phi = \Phi F^T$. This is equivalent (since we are talking about partial functions) to the fact that if $\Phi(c)=b$, then $F^T(c)$ exists if and only if $G(b)$ exists, and in that case $\Phi F^T(c)=G(b)$. We are actually going to prove the following stronger fact: \begin{fact}\label{fact:permutation} If $c \in (\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, then $F^{4U}(c)$ exists if and only if $G(b)$ exists, and in that case $F^{4U}(c) \in \gra{0}{t_0+4U}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(G(b))$. \end{fact} Since the only rule applied outside $t_0 \le \texttt{Clock} \le t_0 + 4U$ is $\coordi$, Fact~\ref{fact:permutation} implies that if $\Phi(c)=b$, then $\Phi F^T(c)=G(b)$, which concludes the proof of the lemma. For the rest of the proof, let $c \in (\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$. Suppose, first, that $F^{4U}(c)$ exists. \begin{itemize} \item $\texttt{Clock}=t_0$: Initially, $c \in \emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}$. Line~\ref{al:com:wrhead} writes the initial state of the TM on $\texttt{Head}_{-1}$ of the leftmost cell of every colony.
\item $t_0 \le \texttt{Clock} < t_0+U$: Only the permutation of line~\ref{al:com:posforcom} is applied. Together with the directions of $\vec{\nu}_{\compute}$, this implies that we apply $U$ steps of the IPPA $F_{\U}[p]$ on the configuration \\ $(\pi_{\texttt{Tape}}(c),\pi_{\texttt{Head}_{-1}}(c),\pi_{\texttt{Head}_{+1}}(c))$. This configuration has a starting TM head at the leftmost cell of every colony, and since $c \in \Phi^{-1}(b)$, we have that in the $i$'th colony the input of the TM is $\Chi{b_i}$. \item $\texttt{Clock}=t_0+U$: Line~\ref{al:com:poscheckhalt} checks that in no place of the tape does there appear a head of the TM defined by $p$. This means that all the TMs have accepted within the first $U$ steps, and since $p$ is the program of $\alpha$, we deduce that $\alpha(b_i)$ exists, for all $i \in \mathbb Z$, or equivalently that $G(b)$ exists. \end{itemize} Therefore, if $F^{4U}(c)$ exists, then $G(b)$ exists. For the other direction, suppose that $G(b)$ exists, or, equivalently, that $\alpha(b_i)$ exists, for all $i \in \mathbb Z$. \begin{itemize} \item $\texttt{Clock}=t_0$: By assumption, $c \in \emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}$. Line~\ref{al:com:wrhead} writes the initial state of the TM on the $\texttt{Head}_{-1}$ of the leftmost cell of every colony. \item $t_0 \le \texttt{Clock} < t_0+U$: Only the permutation of line~\ref{al:com:posforcom} is applied. Together with the directions of $\vec{\nu}_{\compute}$, this implies that at each step, we apply the IPPA $F_{\U}[p]$ on the configuration \\ $(\pi_{\texttt{Tape}}(c),\pi_{\texttt{Head}_{-1}}(c),\pi_{\texttt{Head}_{+1}}(c))$. There is a TM head at the leftmost position of every colony and the input in the $i$'th colony is equal to $\Chi{b_i}$. In other words, $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i$, for all $i \in \mathbb Z$. \item $\texttt{Clock}=t_0+U$: Since $\alpha(b_i)$ is defined for all $i \in \mathbb Z$, $U \geq t_p(\haine5^{\vec{k'}})$ and $S \geq 2U$, we can see that the conditions of Lemma~\ref{Turing} are satisfied with $n=U$. This means that the computation of the TM in every colony has accepted and that the output of the computation is written on the $\texttt{Tape}$ of every colony. In other words, $\tilde{\phi}_{\texttt{Tape}}(\col{i}{F^{U}(c)})=\alpha(b_i)$, for all $i \in \mathbb Z$. The check of line~\ref{al:com:poscheckhalt} is true, since by assumption all of the TMs have accepted before $\texttt{Clock}=t_0+U$, and when a TM halts, its head disappears. Line~\ref{al:com:posbennet} copies the contents of $\texttt{Tape}$ on $\texttt{NTape}$. Therefore, after the application of line~\ref{al:com:posbennet}, we have that $\tilde{\phi}_{\texttt{NTape}}(\col{i}{F^{U}(c)})=\alpha(b_i)$, for all $i \in \mathbb Z$. Finally, line~\ref{al:com:posexch} swaps the fields $\texttt{Head}_{-1}$ and $\texttt{Head}_{+1}$. This can be thought of as \xpr{reversing} the directions of these fields. We do this because we want to reverse the computation done by $F_{\U}[p]$, and in order to achieve this, it is not enough to apply $\sh{\gamma_{\U}[p]^{-1}}$, but we also need to use directions $-\vec{\nu}_{\compute}$. \item $t_0+U+1 \le \texttt{Clock} \le t_0+2U$: Only the permutation of line~\ref{al:com:posbaccom} is applied.
Together with the fact that the shift directions have been reversed and that the fields $\texttt{Head}_{-1}$, $\texttt{Head}_{+1}$ have also been exchanged in the rules of lines \ref{al:com:posforcom} and \ref{al:com:posbaccom}, this implies that the IPPA $(F_{\U}[p])^{-1}$ is applied for $U$ time steps on the configuration $(F_{\U}[p])^U(\pi_{\texttt{Tape}}(c),\pi_{\texttt{Head}_{-1}}(c),\pi_{\texttt{Head}_{+1}}(c))$. Therefore, the fields $\texttt{Tape}$, $\texttt{Head}_{-1}$ and $\texttt{Head}_{+1}$ of $F^{2U}(c)$ have returned to the values they had right after the application of line~\ref{al:com:wrhead}. In other words, the computation has been \xpr{run backwards} until the beginning, but the output of the computation is on $\texttt{NTape}$. This is the trick used by Bennett in \cite{bennett} to simulate arbitrary TMs with reversible ones. At this point, $F^{2U}(c) \in \emp[\motvide]{\texttt{Head}_{+1}}$, $\texttt{Head}_{-1}$ is empty except at the left-most cell of every colony, where it contains the initial state $0$, and finally, $\phi_{\texttt{Tape}}(\col{i}{F^{2U}(c)})=b_i$ and $\phi_{\texttt{NTape}}(\col{i}{F^{2U}(c)})=\alpha(b_i)$, for all $i \in \mathbb Z$. \item $t_0+2U \le \texttt{Clock} < t_0+3U$: Only the permutation of line~\ref{al:com:negforcom} is applied. Together with the directions of $\vec{\nu}_{\compute}$, this implies that at each step, we apply the IPPA $F_{\U}[p^{-1}]$ on the configuration \\ $(\pi_{\texttt{NTape}}F^{2U}(c),\pi_{\texttt{Head}_{-1}}F^{2U}(c),\pi_{\texttt{Head}_{+1}}F^{2U}(c))$. Notice that we use $\texttt{NTape}$ as the TM tape and we use the program $p^{-1}=t_{\revprog}$. \item $\texttt{Clock}= t_0 +3U$: Since $\alpha^{-1}(\alpha(b_i))$ is defined for all $i \in \mathbb Z$, $U \ge t_{p^{-1}}(\haine5^{\vec{k'}})$ and $S \geq 2U$, the conditions of Lemma~\ref{Turing} are satisfied with $n=U$. This implies that $\phi_{\texttt{NTape}}(\col{i}{F^{3U}(c)})=b_i$, for all $i \in \mathbb Z$. The check of line~\ref{al:com:negcheckhalt} is true, since all of the TMs have accepted before $\texttt{Clock}=t_0+3U$. Line~\ref{al:com:negbennet} applies the inverse of copying the contents of $\texttt{Tape}$ onto $\texttt{NTape}$. Since, at this point, these fields are equal in every cell, this is equivalent to emptying the field $\texttt{NTape}$ (in a reversible way, though). Therefore, after applying this permutation, $F^{3U}(c) \in \emp[\motvide]{\texttt{NTape}}$. We still have to empty the $\texttt{Head}$ fields, too. For this, we have to run the computation backwards. Line~\ref{al:com:negexch} swaps the fields $\texttt{Head}_{-1}$ and $\texttt{Head}_{+1}$, \xpr{reversing} the directions of these fields. \item $t_0+3U+1 \le \texttt{Clock} \le t_0+4U$: Only the permutation of line~\ref{al:com:negbaccom} is applied. Together with the fact that the shift directions have been reversed and that the head fields inside the rules are also exchanged, this implies that the IPPA $F_{\U}[p^{-1}]^{-1}$ is applied for $U$ time steps, undoing the $U$ steps of $F_{\U}[p^{-1}]$ performed during the previous phase. Therefore, $(\pi_{\texttt{Tape}}F^{4U}(c),\pi_{\texttt{Head}_{-1}}F^{4U}(c),\pi_{\texttt{Head}_{+1}}F^{4U}(c))=(\pi_{\texttt{NTape}}F^{2U}(c),\pi_{\texttt{Head}_{-1}}F^{2U}(c),\pi_{\texttt{Head}_{+1}}F^{2U}(c)). $ Notice that now we are using $\texttt{Tape}$ as the tape of the TM, while during the forward computation we used $\texttt{NTape}$. This is not a problem, though, because the two fields were equal at the end of the forward computation at step $3U$.
At this point, we have that $\phi_{\texttt{Tape}}(\col{i}{F^{4U}(c)})=\alpha(b_i)$, for all $i \in \mathbb Z$. Also, the $\texttt{Head}$ fields are empty, except for the $\texttt{Head}_{-1}$ field of the leftmost cell of every colony, which contains the initial state $0$ of the TM. \item Finally, line~\ref{al:com:delhead} deletes the initial state, and we get that $F^{4U}(c) \in \emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}$. \end{itemize} Therefore, we have proved that if $G(b)$ exists, then $F^{4U}(c)$ exists and $F^{4U}(c) \in (\haine5^{\vec k})^\mathbb Z \cap \gra{0}{t_0+4U}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{NTape}} \cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(\alpha(b))$, which finishes the proof of the lemma. \end{proof} In a nutshell, this is how the construction works. First, use the program $p$ to compute $\alpha$. At the end of this phase, $\texttt{Tape}$ contains $\alpha(b)$ (in the colonies). Copy $\alpha(b)$ onto $\texttt{NTape}$ and, in the second phase, run the computation backwards so as to erase all auxiliary information written by the TM during the computation. At the end of the second phase, $\texttt{Tape}$ contains $b$ and $\texttt{NTape}$ contains $\alpha(b)$. In the third and fourth phases of the construction, perform the reverse of what was done in the first two phases, while exchanging the roles of $\texttt{NTape}$ and $\texttt{Tape}$. First, use $p^{-1}$ with tape field $\texttt{NTape}$ so as to compute $\alpha^{-1}(\alpha(b))=b$, then copy $\texttt{Tape}$ onto $\texttt{NTape}$ (thus emptying $\texttt{NTape}$) and then perform the computation backwards. At the end, $\texttt{NTape}$ is again empty, $\texttt{Tape}$ contains $\alpha(b)$ and everything was done in a reversible way. Notice that for all $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$ and all $c \in \Phi^{-1}(b)$, the values of the fields in $\C[\coordi] \sqcup \C[\compute]$ of $c$ are uniquely determined. This implies that if there are no anonymous fields, or if the values of the anonymous fields were determined by the fields of $\C[\coordi] \sqcup \C[\compute]$, then the simulation is also exact. \section{Shifting}\label{sec:singleshift} Let $\C[\shift]\defeq[\texttt{Tape},\texttt{Tape}_{-1},\texttt{Tape}_{+1}]$. $\texttt{Tape}_{+1}$ and $\texttt{Tape}_{-1}$ are used to exchange the information of $\texttt{Tape}$ between colonies. In the following algorithm, $M \in {\mathbb N}_1$ has to be thought of as the number of fields in the simulated alphabet, $\vec{\nu} \in \{-1,0,+1\}^M$ as the vector of directions of the simulated IPPA, and $\vec{k'} \colon \haine5^{**} \pto \mathbb N^M$ is a vector valuation that gives the lengths of the alphabet of the simulated IPPA. $\vec{k'}$ represents the field lengths of the simulated letters, whose information is then \xpr{known} to all the letters of the simulating PPA. \begin{algo}{shift}{\shift}{M,\vec{\nu},\vec{k'},v_\texttt{Addr},v_\texttt{Clock},v_\texttt{MAddr}} \IF{$v_\texttt{Clock}=0$ \OR $v_\texttt{Clock}=v_\texttt{MAddr}$} \FOR{$0\le i<M$} \IF{$l_{\vec{k'},i}\le v_\texttt{Addr}<l_{\vec{k'},i+1}$} \label{al:shi:encoding} \STATE{$\exch[\texttt{Tape},\texttt{Tape}_{\vec{\nu}_i}]$} \label{al:shi:movetape} \COMMENT{Letters are moved to the corresponding moving fields, and back after $v_\texttt{MAddr}$ steps.} \ENDIF \ENDFOR \ENDIF \end{algo} This is polynomially computable in the parameters.
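The following toy sketch (illustrative Python; a cyclic world of four colonies instead of $\mathbb Z$, raw symbols instead of the encoding $\Chi{\cdot}$) shows why riding on a moving field for $S$ steps transports the encoding of a simulated field exactly one colony in its direction, which is the content of the next lemma.
\begin{verbatim}
S = 6                          # colony width (assumed to fit the encodings)
nu = {"x": +1, "y": -1}        # directions of the two simulated fields
offs = {"x": range(0, 3), "y": range(3, 6)}   # layout inside a colony

letters = ["abcdef", "ghijkl", "mnopqr", "stuvwx"]  # cyclic world, 4 colonies
N = S * len(letters)
tape = list("".join(letters))
move = {+1: [" "] * N, -1: [" "] * N}

def exchange():                # applied at Clock = 0 and Clock = S
    for col in range(len(letters)):
        for f, d in nu.items():
            for a in offs[f]:
                i = col * S + a
                tape[i], move[d][i] = move[d][i], tape[i]

exchange()
for _ in range(S):             # the moving fields drift one cell per step
    move[+1] = move[+1][-1:] + move[+1][:-1]    # right-going field
    move[-1] = move[-1][1:] + move[-1][:1]      # left-going field
exchange()

print(["".join(tape[c * S:(c + 1) * S]) for c in range(len(letters))])
# ['stujkl', 'abcpqr', 'ghivwx', 'mnodef']: after S steps, field "x" of
# colony i comes from colony i-1 and field "y" from colony i+1.
\end{verbatim}
Field \texttt{x} has direction $+1$ and is received from the left neighbour, field \texttt{y} from the right one, matching the convention that colony $i$ receives field $j$ from colony $i-\vec{\nu}_j$.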
\begin{lemma}\label{parshift} Let us fix a field list $\C[\coordi]\sqcup\C[\shift]\in\mathbb N^7$, an integer $M\in{\mathbb N}_1$, a direction vector $\vec{\nu} \in \{-1,0,+1\}^M$, a vector $\vec{k'} \in \mathbb N^M$, a vector $\vec{k} \in \mathbb N^{*}$ and integers $S,T\in {\mathbb N}_1$, $t_0\in\mathbb N$. Consider the IPPA $F$ defined by directions $\vec{\nu}_{\coordi \sqcup \shift}$, given by the label indices, and permutation \begin{equation*} \coordi[S,T] \circ \shift[M,\vec{\nu},\vec{k'},\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}}-t_0,S], \end{equation*} and assume that the following inequalities hold: \[\both{ S\geq\norm{\Chi{\haine5^{\vec{k'}}}}\\ T\geq t_0+S\\ k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\geq\norm S\\ k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\geq\norm T\\ k_\texttt{Tape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\geq 1 ~.}\] Then $F\restr{(\haine5^{\vec k})^\mathbb Z}\simu[S,T,0,\Phi]\sigma^{-\vec\nu}\restr{(\haine5^{\vec{k'}})^{\mathbb Z}}$, where $\Phi\defeq\tilde{\Phi}_{\texttt{Tape}}\restr\Sigma$ and $\Sigma\defeq(\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{0}{S}{T}\cap \emp[\motvide]{\texttt{Tape}_{-1},\texttt{Tape}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}((\haine5^{\vec{k'}})^\mathbb Z)$. \end{lemma} Recall that, by convention, the directions of the fields are the opposite of the shift that is actually applied. \begin{proof} Again, by the definition of $\Phi$ and Remarks~\ref{twoconditionsofsimulation} and~\ref{thirdconditionofsimulation}, we know that $\Phi$ is surjective, $\Phi \sigma^S = \sigma \Phi$ and that $\mathcal{D}(\Phi)=\bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}F^t\sigma^s(\mathcal{D}(\Phi))$ is a disjoint union. Therefore, we only have to show that $\sigma^{-\vec{\nu}} \Phi = \Phi F^T$, which is equivalent to showing, since $\sigma^{-\vec{\nu}}(b)$ is defined for all $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$, that if $\Phi(c)=b$, then $\Phi F^T(c)= \sigma^{-\vec{\nu}}(b)$. As in the proof of Lemma~\ref{behavior}, we are going to prove the following stronger fact: \begin{fact}\label{fact:shift} If $c \in (\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T}\cap\emp[\motvide]{\texttt{Tape}_{-1},\texttt{Tape}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, then $F^S(c) \in \gra{0}{t_0+S}{S}{T} \cap \emp[\motvide]{\texttt{Tape}_{-1},\texttt{Tape}_{+1}} \cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(\sigma^{-\vec{\nu}}(b))$. \end{fact} \begin{itemize} \item $\texttt{Clock}=t_0$: By assumption, $c \in \emp[\motvide]{\texttt{Tape}_{-1},\texttt{Tape}_{+1}}$. Lines~\ref{al:shi:encoding} and \ref{al:shi:movetape} copy the encodings of the fields of $b$ onto the correct moving field, in the following sense: Since $c \in \Phi^{-1}(b)$, $\pi_{\texttt{Tape}}(\col{i}{c})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\Chi{\pi_j(b_i)}$, for all $i \in \mathbb Z$ and $0 \le j < M$. Line~\ref{al:shi:movetape} moves the letter in $\texttt{Tape}$ to $\texttt{Tape}_{\vec{\nu}_j}$ when $l_{\vec{k'},j}\le \texttt{Addr}<l_{\vec{k'},j+1}$, or in other words, moves the encoding of the $j$'th field of $b_i$ onto $\texttt{Tape}_{+1}$ if $j$ is a right-moving field and onto $\texttt{Tape}_{-1}$ if $j$ is a left-moving field.
This means that after the application of line \ref{al:shi:movetape}, we have that \\ $\pi_{\texttt{Tape}_{\vec{\nu}_j}}(\col{i}{c})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1} }}=\Chi{\pi_j(b_i)}$, while $\pi_{\texttt{Tape}_{-\vec{\nu}_j}}(\col{i}{c})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\pi_{\texttt{Tape}}(\col{i}{c})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\motvide$, for all $i \in \mathbb Z$ and $0 \le j < M$. \item $t_0 \le \texttt{Clock} < t_0+S$: No permutation is applied during these time steps. Only the $\texttt{Tape}_{-1}$ and $\texttt{Tape}_{+1}$ fields are shifted in the corresponding directions. \item $\texttt{Clock} = t_0+S$: Every symbol that was part of the encoding of the $j$'th field of $b$ has travelled $S$ steps in the direction indicated by $\vec{\nu}_j$. This means that before the application of line~\ref{al:shi:movetape}, we have that $\pi_{\texttt{Tape}_{\vec{\nu}_j}}(\col{i}{F^S(c)})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\Chi{\pi_j(b_{i-\vec{\nu}_j})}$, while \\ $\pi_{\texttt{Tape}_{-\vec{\nu}_j}}(\col{i}{F^S(c)})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\pi_{\texttt{Tape}}(\col{i}{F^S(c)})\restr{\co{l_{\vec{k'},j}}{l_{\vec{k'},j+1}}}=\motvide$, for all $i \in \mathbb Z$ and $0 \le j < M$. Line~\ref{al:shi:movetape} moves the letters from the $\texttt{Tape}_{-1}$ and $\texttt{Tape}_{+1}$ fields back to $\texttt{Tape}$. Therefore, after the application of line~\ref{al:shi:movetape}, we have that $F^S(c) \in \emp[\motvide]{\texttt{Tape}_{-1},\texttt{Tape}_{+1}}$ and $\Phi(F^S(c))=\sigma^{-\vec{\nu}}(b)$, which concludes the proof of the lemma. \end{itemize} \end{proof} In this proof, it is of great importance that all the letters of $b$ have the same lengths, because this implies that the $j$'th field of every letter of $b$ is encoded at the same positions inside every colony. In fact, the reason that we deal only with alphabets of constant lengths is that this shifting procedure can work so easily. \section{Simulating any fixed rule}\label{sec:universal} In this section, we will use Lemmas~\ref{behavior} and~\ref{parshift} to construct an IPPA that can simulate non-trivially any PPA, when restricted to an appropriate finite subalphabet. Let $\C[\unive]\defeq \C[\compute] \cup \C[\shift] \in \mathbb N^{6}$ ($\C[\compute]$ and $\C[\shift]$ share the field $\texttt{Tape}$). \begin{algo}{unive}{\unive}{M,\vec{\nu},\vec{k'},v_\texttt{Addr},v_\texttt{Clock},v_\texttt{MAddr},v_\texttt{MClock},v_\texttt{Alarm},t_\texttt{Prog},t_\revprog} \STATE{$\compute[v_\texttt{Addr},v_\texttt{Clock},v_\texttt{Alarm},t_\texttt{Prog},t_\revprog]$} \label{al:uni:comp} \STATE{$\shift[M,\vec\nu,\vec{k'},v_\texttt{Addr},v_\texttt{Clock}-4v_\texttt{Alarm},v_\texttt{MAddr}]$} \label{al:uni:shift} \end{algo} This is easily seen to be polynomially computable in the parameters. Notice that $\shift$ and $\compute$ are used at \xpr{different time steps}, \textit{i.e.}, at different values of $v_\texttt{Clock}$. $\compute$ starts being used when $v_\texttt{Clock}=0$ and, by definition of $\compute$, is equal to the identity when $v_\texttt{Clock} \geq 4v_\texttt{Alarm}$, while $\shift$ starts being used when $v_\texttt{Clock}=4v_\texttt{Alarm}$ (it has a delay of $4v_\texttt{Alarm}$). Formally, this means that for every value of $v_\texttt{Clock}$, at most one of the rules $\compute$ and $\shift$ is not equal to the identity. The following lemma is the fruit of all our efforts until now. It provides an IPPA that can simulate any PPA when restricted to a sufficiently large alphabet.
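Before stating it, the net effect of one work period on the decoded configuration can be summarized in a few lines (illustrative Python; a cyclic world of already-decoded letters, with the convention of Lemma~\ref{parshift} that colony $i$ receives field $j$ from colony $i-\vec{\nu}_j$).
\begin{verbatim}
def simulated_step(b, alpha, nu):
    """One work period of `unive`, seen on the decoded configuration b
    (cyclic): Clock in [0,4U) computes alpha colony-wise, and Clock in
    [4U,4U+S] moves field j of every letter one colony in direction
    nu[j], i.e. colony i receives field j from colony i - nu[j]."""
    a = [alpha(x) for x in b]
    n = len(b)
    return [tuple(a[(i - nu[j]) % n][j] for j in range(len(nu)))
            for i in range(n)]

# Toy instance: two fields; alpha swaps them, field 0 has direction +1.
b = [("a0", "b0"), ("a1", "b1"), ("a2", "b2")]
print(simulated_step(b, lambda x: (x[1], x[0]), nu=[+1, 0]))
# [('b2', 'a0'), ('b0', 'a1'), ('b1', 'a2')]
\end{verbatim}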
\begin{lemma}\label{universal} Let us fix a field list $\C[\unive] \sqcup \C[\coordi] \in\mathbb N^{10}$, an integer $M\in{\mathbb N}_1$, programs $p,p^{-1}\in\haine2^*$ of a partial permutation $\alpha:\haine5^{**}\pto\haine5^{**}$ and its inverse $\alpha^{-1}$, respectively, a direction vector $\vec{\nu}\in \{-1,0,+1\}^M$, a vector $\vec{k'} \in \mathbb N^M$, a vector $\vec{k} \in \mathbb N^{*}$ and integers $S,T,t_0,U\in\mathbb N$. Let $G=\sigma^{-\vec\nu}\circ\alpha$ and $F$ be the IPPA defined by directions $\vec{\nu}_{\coordi \sqcup \unive}$, given by the label indices, and permutation \begin{equation*} \coordi[S,T]\circ \unive[M,\vec{\nu},\vec{k'},\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}}-t_0,S,T,U,p,p^{-1}]~, \end{equation*} and assume that the following inequalities hold: \[\both{ U\ge\max\{t_p({\haine5^{\vec{k'}}}),t_{p^{-1}}({\haine5^{\vec{k'}}})\}\\ S\ge\max\{2U,\norm{\Chi{\haine5^{\vec{k'}}}}\}\\ T\ge4U+S+t_0\\ k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S\\ k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T\\ k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ k_\texttt{Tape},k_\texttt{NTape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\ge1. }\] Then, $F\restr{(\haine5^{\vec k})^\mathbb Z}\simu[S,T,0,\Phi]G\restr{(\haine5^{\vec{k'}})^{\mathbb Z}}$ completely, where $\Phi\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma}}$ and \begin{equation*} \Sigma\defeq(\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}((\haine5^{\vec{k'}})^\mathbb Z). \end{equation*} \end{lemma} \begin{proof} Notice that line~\ref{al:uni:comp} together with $\coordi$ makes up the permutation whose behaviour is described in Lemma~\ref{behavior}, while line~\ref{al:uni:shift} and $\coordi$ make up the permutation of Lemma~\ref{parshift}. Also, as already noted, for any value of $\texttt{Clock}$, at most one permutation of lines~\ref{al:uni:comp} and~\ref{al:uni:shift} is not equal to the identity. As in the proofs of Lemmas~\ref{behavior} and~\ref{parshift}, we can easily see that we only have to show that if $\Phi(c)=b$, then $\Phi F^T(c) = G(b)$, and that this follows from the following stronger fact: \begin{fact}\label{fact:shiftandpermutation} If $c \in (\haine5^{\vec{k}})^{\mathbb Z}\cap\gra{0}{t_0}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, then $F^{4U+S}(c)$ exists if and only if $G(b)$ exists, and in that case $F^{4U+S}(c) \in \gra{0}{t_0+4U+S}{S}{T}\cap\emp[\motvide]{\texttt{NTape},\texttt{Head}_{-1},\texttt{Head}_{+1}}\cap\tilde{\Phi}_{\texttt{Tape}}^{-1}(G(b))$.
\end{fact} Indeed, according to Fact~\ref{fact:permutation}, we have that $F^{4U}(c)$ exists if and only if $\alpha(b)$ exists, or, equivalently, if and only if $G(b)$ exists, and in this case \begin{equation*} F^{4U}(c) \in \gra{0}{t_0+4U}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\\ \cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(\alpha(b)).\end{equation*} Fact~\ref{fact:shift} implies that $F^{4U+S}(c) \in \gra{0}{t_0+4U+S}{S}{T}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(\sigma^{-\vec\nu}\alpha(b)),$ which concludes the proof of the lemma, since $G(b)=\sigma^{-\vec\nu}\alpha(b)$. \end{proof} $\unive$ is a rule (in fact, a family of rules, since it depends on parameters) that can simulate any PPA. Every IPPA $F$ that we will construct later will factor onto $\unive$. They might have some additional fields for which we apply a different rule, and this rule might even take into consideration the fields of $\C[\unive]$, but none of these other rules is going to \emph{change} the fields of $\C[\unive]$. Therefore, by projecting onto $\C[\unive]$, we will immediately obtain that $F$ simulates $G$, even though the simulation might not be exact. However, if $\vec{k}$ does not have any anonymous fields, then the simulation is exact, since all the fields of $\C[\coordi] \sqcup \C[\unive]$ are uniquely determined by $\Phi$. \subsection{Satisfying the inequalities} Lemma~\ref{universal} is true only under the assumption that the following set of inequalities is satisfied: \[\both{ U\geq\max\{t_p({\haine5^{\vec{k'}}}),t_{p^{-1}}({\haine5^{\vec{k'}}})\}\\ S\geq\max\{2U,\norm{\Chi{\haine5^{\vec{k'}}}}\}\\ T\ge4U+S+t_0\\ k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\geq\norm S\\ k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\geq\norm T\\ k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\geq \length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ k_\texttt{Tape},k_\texttt{NTape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\geq 1 ~.}\] We will denote this set of inequalities by $\I(\vec{k},\vec{k'},S,T,U,t_0,p,p^{-1})$. When $t_0=0$, \textit{i.e.}, when the computation starts at $\texttt{Clock}=0$, we will omit it and write $\I(\vec{k},\vec{k'},S,T,U,p,p^{-1})$ instead. Let us explain again intuitively why each inequality is needed: \begin{itemize} \item $k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S$: The fields $\texttt{Addr}$ and $\texttt{Addr}_{+1}$ have to be large enough so that we can write the binary representation of $S$ in them. \item $k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T$: The fields $\texttt{Clock}$ and $\texttt{Clock}_{+1}$ have to be large enough so that we can write the binary representation of $T$ in them. \item $U\ge\max\{t_p({\haine5^{\vec{k'}}}),t_{p^{-1}}({\haine5^{\vec{k'}}})\}$: We have to run the TM long enough so that the computation of $p$ on the encoded letters halts. \item $S\ge\max\{2U,\norm{\Chi{\haine5^{\vec{k'}}}}\}$: The colonies have to be wide enough so that we can encode the letters of $\haine5^{\vec{k'}}$ in them. In addition, they have to be wide enough so that the heads of the computation do not \xpr{collide}. \item $T\ge4U+S+t_0$: $T$ has to be large enough so that the computation and the shifting are done before the next working period starts.
\item $k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p\cup Q_{p^{-1}})\times\{-1,+1\}}}$: The head fields have to be large enough so that the states of $\gamma_{\U}[p]$ and $\gamma_{\U}[p^{-1}]$ can be written on them. \item $k_\texttt{Tape},k_\texttt{NTape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\ge1$: Empty fields are of no use, in general. \end{itemize} \begin{remark} If $p,p^{-1}\in \haine2^*$, $\vec{k'} \in \mathbb N^M$ and $t_0 \in \mathbb N$ are fixed, then we can choose $\vec{k},U,S$ and $T$ such that the inequalities of Lemma~\ref{universal} are satisfied. \end{remark} \begin{proof} Since $\vec{k'}, p$ and $t_0$ are fixed, given $U,S,T$ we can choose $\vec{k} \defeq \vec{k}_{U,S,T}$ such that all of the inequalities except for the first three are satisfied as equalities. Then, given $S,U$ we can choose $T \defeq T_{S,U}$ such that the third inequality is satisfied as an equality. Similarly, given $U$ we can choose $S \defeq S_U$ such that the second inequality is satisfied. Finally, $U$ can be chosen independently from the rest of the parameters, since it only depends on $p$ and $\vec{k'}$, which are fixed. \end{proof} In later constructions, the choice of $U$ will not be so straightforward, as $\vec{k'}$ will depend on $\vec{k}$. In this case, we first fix $G$ and then look for the suitable values of the parameters. The situation becomes trickier when the simulated RPCA depends on the choice of the parameters, as will be the case in the following chapters. Then, we have to be careful not to fall into a circular argument. \chapter{Infinite hierarchies}\label{s:hierarchy} For every PPA $G$ and sufficiently large $S,T$, Lemma~\ref{universal} shows that it is possible to construct a PPA $F$ that $(S,T,0)$-simulates $G$. In addition, the simulation can be made exact. We also want to make it complete. The most direct way is to restrict $F$ to $\tilde{\Phi}_{\texttt{Tape}}^{-1}(\mathcal{D}(G))$, which, by definition, makes the simulation complete. However, this is not good because it is a radius-$S$ SFT condition and, if we wanted to have an infinite nested simulation, we would have to impose an infinite number of such restrictions, so that the subshift we would obtain would not be an SFT. The idea, which is the basic idea behind all hierarchical constructions, is that if the simulated alphabet is determined in an easy way by the simulating alphabet, then it is possible to design a simple IPPA that ensures that the simulating configuration is in $\tilde{\Phi}_{\texttt{Tape}}^{-1}(\mathcal{D}(G))$. \section{Son-father checks} The first thing is to check that the simulated letter, which is written in an encoded form bit by bit in $\texttt{Tape}$, has the correct structure, \textit{i.e.}, that it is the encoding of a letter with the correct number of fields and the correct lengths. $\vec{k'} \colon \haine5^{**} \pto \mathbb N^M$ is a vector valuation that gives the lengths of the simulated alphabet as a function of the lengths of the simulating alphabet. In applications, it will be easily computable from every letter of the simulating alphabet, or, in other words, the information about the structure of the simulated letter will be known to all of the letters of the simulating IPPA.
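The structure check is easy to express in code. The sketch below is illustrative Python, under the assumption that the encoding $\Chi{\cdot}$ marks the start of each field with a $2$ and pads the encoding with $3$'s on the right, which is the shape that the checks of $\chekka$ below enforce; as noted later, not every string passing this structural test is in the range of $\Chi{\cdot}$.
\begin{verbatim}
def offsets(kp):
    """l_{k',0}, ..., l_{k',M}: start of each encoded field, where field i
    occupies a '2' separator followed by kp[i] binary symbols."""
    l = [0]
    for length in kp:
        l.append(l[-1] + 1 + length)
    return l

def chekka(colony, kp):
    """True iff `colony` (a string over '0123') is structurally a valid
    encoding of a letter with field lengths kp, padded with '3'."""
    l = offsets(kp)
    for a, sym in enumerate(colony):
        if a >= l[-1]:
            ok = sym == "3"              # right padding
        elif a in l:
            ok = sym == "2"              # field separator
        else:
            ok = sym in "01"             # binary field contents
        if not ok:
            return False                 # the cell at address a rejects
    return True

print(chekka("2102013333", [2, 2]))      # True
print(chekka("2102012333", [2, 2]))      # False: separator in the padding
\end{verbatim}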
\begin{algo}{chekka}{\chekka}{M,v_\texttt{Addr},t_\texttt{Tape},\vec{k'}} \IF{$v_\texttt{Addr}\ge l_{\vec{k'},M}$} \STATE{$\chekk[t_\texttt{Tape}=3]$} \COMMENT{On the right side of the encoding, \texttt{Tape}\ is $3$.} \ENDIF \FOR{$0\le i<M$} \IF{$v_\texttt{Addr}=l_{\vec{k'},i}$} \STATE{$\chekk[t_\texttt{Tape}=2]$} \COMMENT{Field separators are at the expected positions.} \ELSIF{$l_{\vec{k'},i}<v_\texttt{Addr}<l_{\vec{k'},i+1}$} \STATE{$\chekk[t_\texttt{Tape}\in\haine2]$} \COMMENT{Proper field encodings are binary.} \ENDIF \ENDFOR \end{algo} This permutation is polynomially computable in its parameters. (In every case that we use it, it will be easily checkable that the parameters are polynomially computable.) The following lemma follows simply by inspection of the definition of $\Chi{\cdot}$ and ${\chekka}[M,v_\texttt{Addr},t_\texttt{Tape},\vec{k'}]$: \begin{lemma}\label{lem:fixalpha} Let us fix a field list $[\texttt{Addr},\texttt{Tape}]\in\mathbb N^2$, an integer $M\in{\mathbb N}_1$, a vector $\vec{k'} \in \mathbb N^M$, $S\in{\mathbb N}_1$ and a vector $\vec{k} \in \mathbb N^*$. Let $F$ be the IPPA defined by a null direction vector and permutation $\chekka[M,\bina{\pi_{\texttt{Addr}}},\pi_{\texttt{Tape}},\vec{k'}]$, and assume that the following inequalities hold: \[\both{ S \geq \norm{\Chi{\haine5^{\vec{k'}}}}\\ k_\texttt{Addr}\ge\norm S\\ k_\texttt{Tape}\ge1 ~.}\] Let $c \in (\haine5^{\vec k})^{\mathbb Z} \cap \grs{s}{S} \cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, where $s \in \co{0}{S}$ and $b \in (\haine5^{**})^{\mathbb Z}$. Then, $F(c)$ exists if and only if $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$ and in this case $F(c)=c$. \end{lemma} In other words, if a configuration is split into colonies using the $\texttt{Addr}$ field and every colony has the encoding of some letter on its $\texttt{Tape}$ tape, then $\chekka[M,\bina{\pi_{\texttt{Addr}}},\pi_{\texttt{Tape}},\vec{k'}]$ ensures that this encoded letter belongs to $\haine5^{\vec{k'}}$. We can also check that some field $i$ in the simulated letter has a prescribed prefix (given by a term $t$). \begin{algo}{hier}{\hier}{M,v_\texttt{Addr},t_\texttt{Tape},\vec{k'},i,t} \IF{$l_{\vec{k'},i}<v_\texttt{Addr}\le l_{\vec{k'},i}+\length{\Chi{t}}$} \STATE{$\chekk[t_\texttt{Tape}=\Chi{t}\restr{v_\texttt{Addr}-l_{\vec{k'},i}}]$} \ENDIF \end{algo} \begin{lemma}\label{lem:fixfield} Let us fix a field list $[\texttt{Addr},\texttt{Tape}]\in\mathbb N^2$, an integer $M\in{\mathbb N}_1$, a field $i\in\co0M$, a vector $\vec{k'}\in \mathbb N^M$, $S \in \mathbb N$, a term $t \colon \haine5^{**} \to \haine5^{*}$ and a vector $\vec{k} \in \mathbb N^*$. Let $F$ be the IPPA defined by a null direction vector and permutation $\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k'},i,t]$, and assume that the following inequalities hold: \[\both{ S \geq \norm{\Chi{\haine5^{\vec{k'}}}}\\ k_\texttt{Addr}\ge\norm S\\ k_\texttt{Tape}\ge1 ~.}\] Let $c \in (\haine5^{\vec k})^{\mathbb Z} \cap \grs{s}{S} \cap \tilde{\Phi}_{\texttt{Tape}}^{-1}(b)$, where $0 \le s <S$ and $b \in (\haine5^{\vec{k'}})^{\mathbb Z}$, and assume that $t(c_n)=t(c_{n'})\defeq t_c$ for all $n,n' \in \mathbb Z$.\\ Then, $F(c)$ exists if and only if $\pi_i(b_j)\restr{\co{0}{\norm{t_c}}}=t_c$, for all $j \in \mathbb Z$, and in this case $F(c)=c$. \end{lemma} We implicitly assume that if $l > \norm{w}$, where $w \in \A^{*}$, then $w_l=\motvide$. In other words, if all letters of $c$ have the \xpr{same idea} about what $\pi_i(b_j)$ should be, then they can check in one step that this indeed happens.
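For the prefix check, each cell compares only a single position against the constant information it carries, as in the following miniature (again illustrative Python, consistent with the toy encoding of the previous sketch; indexing conventions are simplified).
\begin{verbatim}
def hier_ok(colony, kp, i, prefix):
    """Collective prefix check in the spirit of `hier`: the cell at
    address a compares its Tape symbol with one symbol of the expected
    prefix of simulated field i."""
    start = sum(1 + length for length in kp[:i]) + 1  # skip the '2' separator
    return all(colony[start + r] == prefix[r] for r in range(len(prefix)))

# Field 1 of the well-formed colony "2102013333" (previous sketch) is "01":
print(hier_ok("2102013333", [2, 2], 1, "01"))   # True
print(hier_ok("2102013333", [2, 2], 1, "11"))   # False
\end{verbatim}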
In practice, $t$ will usually be equal to $\pi_{\field}$, where $\field$ is a horizontally constant field, so that the condition $t(c_n)=t(c_{n'})$ will be true. In this case, we just check that $\pi_\field(c_n)$ is a prefix of $\pi_{\field}(b_j)$, for all $j,n \in \mathbb Z$. Lemmas~\ref{lem:fixalpha} and \ref{lem:fixfield} correspond to what in \cite{drs} is achieved by mentioning that ``the TM knows at which place the information of every field is held''. For many people, this argument is one of the most confusing things in that construction. This is the reason why we have tried to explain this point as clearly as possible and show exactly how the cells of the simulating IPPA can collectively check that some constant information of the simulating alphabet is the same in the simulated alphabet. In fact, we use a general term $t$ in Lemma~\ref{lem:fixfield}, which essentially allows us to impose any (polynomially computable) condition on the simulated alphabet. \section{Self-simulation}\label{sself} We are now ready to construct a self-simulating RPCA. This is the simplest and first example of nested simulation. We just check that the simulated letter has the same lengths as the simulating ones and that some \xpr{hierarchical} fields (which contain the values $p,p^{-1},U,S,T$ that are fixed in Lemma~\ref{universal}) have the same value in the simulated letter as in the simulating ones (where their values are already fixed). Let $\C[\texttt{Self}]\defeq\C[\coordi]\sqcup\C[\unive]\sqcup [\texttt{MAddr},\texttt{MClock},\texttt{Alarm},\texttt{Prog},\revprog] \in \mathbb N^{15}$. $\texttt{MAddr}$ and $\texttt{MClock}$ are used to obtain the values of $v_{\texttt{MAddr}}$ and $v_\texttt{MClock}$. Similarly, $\texttt{Prog},\revprog$ and $\texttt{Alarm}$ are used to obtain the values $t_{\texttt{Prog}},t_{\revprog}$ and $v_{\texttt{Alarm}}$. All of these fields will be horizontally constant. \begin{algo}{selfs}{\texttt{Self}}{M,\vec\nu} \IF{$\bina{\pi_\texttt{Clock}}=0$} \STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:self:initial} \STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},(\length{\pi_j})_{j<M}]$} \COMMENT{Check that the lengths of the simulated letter are the same.}\label{al:self:alph} \FOR{$i\in\{\texttt{MAddr},\texttt{MClock},\texttt{Alarm},\texttt{Prog},\revprog\}$} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},(\length{\pi_j})_{j<M},i,\pi_i]$} \COMMENT{Check that the hierarchical fields of the simulated letter are the same.}\label{al:self:hiera} \ENDFOR \ENDIF \STATE{$\unive[M$,$\vec{\nu}$,$(\length{\pi_j})_{j<M}$,$\bina{\pi_\texttt{Addr}}$,$\bina{\pi_\texttt{Clock}}$, $\bina{\pi_\texttt{MAddr}}$, $\bina{\pi_\texttt{MClock}}$, $\bina{\pi_\texttt{Alarm}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$}\COMMENT{The alphabet is as expected; we can simulate.} \STATE{$\coordi[\bina{\pi_\texttt{MAddr}},\bina{\pi_\texttt{MClock}}]$} \label{al:self:unive} \end{algo} In the next lemma, we do not want to have any anonymous fields, but only those fields that are used in $\texttt{Self}$. There are $15$ fields in $\C[\texttt{Self}]$, so we take the field list $[0,\ldots,14]$, which means that we assign a number in $\co{0}{15}$ to every field in $\C[\texttt{Self}]$ in some arbitrary (but fixed) way. Once we have done this, the corresponding vector of directions is also well-defined.
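The shape of these checks, at the level of already-decoded letters, is captured by the following miniature (illustrative Python; the encoding and the colony mechanics are elided, and the concrete values are ours). The point is that the hierarchical data is compared for plain equality, so the fixed program $p$ is \xpr{fed to itself}.
\begin{verbatim}
HIER = ("MAddr", "MClock", "Alarm", "Prog", "RevProg")

def self_check(decoded, me, lengths):
    """decoded: the simulated letter read off the colony's Tape;
    me: the horizontally constant data carried by every cell of the
    colony; lengths: the field lengths of the simulating alphabet."""
    if [len(decoded[f]) for f in decoded] != lengths:
        return False                     # chekka: same field lengths
    return all(decoded[f] == me[f] for f in HIER)  # hier: same values

me = {"MAddr": "110", "MClock": "10010", "Alarm": "101",
      "Prog": "0110", "RevProg": "0110"}
decoded = dict(me)                       # the simulated letter agrees...
print(self_check(decoded, me, [3, 5, 3, 4, 4]))    # ...so it is accepted
\end{verbatim}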
\begin{lemma}\label{self} Let us fix the field list $\C[\texttt{Self}]\defeq[0,\ldots,14]$, the corresponding direction vector $\vec{\nu}_{\texttt{Self}}$, integers $S,T,U\in{\mathbb N}_1$ and a vector $\vec k\in\mathbb N^{15}$. Let $F$ be the IPPA with directions $\vec{\nu}_{\texttt{Self}}$ and permutation $\texttt{Self}[15,\vec{\nu}_{\texttt{Self}}]$ and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively. Let $F_{\vec{k},S,T,U}$ be the restriction of $F$ to the subalphabet \begin{equation*} \A_{\vec k,S,T,U}\defeq\haine5^{\vec{k}}\cap\emp[S]{\texttt{MAddr}}\cap\emp[T]{\texttt{MClock}}\cap\emp[U]{\texttt{Alarm}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}, \end{equation*} and assume that the following inequalities are satisfied: \[\both{ \I(\vec{k},\vec{k},S,T,U,p,p^{-1})\\ k_\texttt{Prog} \geq\norm p\\ k_\revprog \geq\norm{p^{-1}}\\ k_\texttt{MAddr} \geq \norm{S}\\ k_{\texttt{MClock}} \geq \norm{T}\\ k_{\texttt{Alarm}} \geq \norm{U} ~.}\] Then, $F_{\vec{k},S,T,U}\simu[S,T,0,\Phi]F_{\vec{k},S,T,U}$ completely exactly, where $\Phi\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma}}$ and \begin{equation*} \Sigma\defeq\A_{\vec k,S,T,U}^{\mathbb Z}\cap\gra{0}{0}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap\tilde{\Phi}^{-1}_{\texttt{Tape}}(\A_{\vec k,S,T,U}^\mathbb Z). \end{equation*} \end{lemma} It is important to notice that $F$ is a fixed rule that does not depend on $\vec{k},S,T,U$. Therefore, its program $p$ is a \emph{fixed} word which we can \xpr{feed} to itself by restricting the alphabet to $\emp[p]{\texttt{Prog}}$. This is the basic idea of self-simulation. Notice also that the fields for which we do a hierarchical check are exactly those that are fixed in the definition of $\A_{\vec k,S,T,U}$. We need to ensure that these fields have the correct value in the simulated letter. The correct way to do this is to check that the value in the simulated letter is in a suitable relation with the values in the letters of the simulating IPPA. Here, the relation is simply equality. Later it will be something more complicated. We will try to give as many details as possible in the following proof because it will serve as a prototype for the rest of the hierarchical simulations. \begin{proof} We have to show three things: first, that $F_{\vec{k},S,T,U}$ $(S,T,0)$-simulates $F_{\vec{k},S,T,U}$ with decoding function $\Phi$ (simulation), second, that $\Phi$ is injective (exactness) and, finally, that $\Omega_{F_{\vec{k},S,T,U}} \subseteq \mathcal{D}(\Phi)$ (completeness). For the simulation part, let $b \in \A_{\vec k,S,T,U}^{\mathbb Z}$ and $c \in \A_{\vec k,S,T,U}^{\mathbb Z}\cap\gra0{0}{S}{T}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \tilde{\Phi}^{-1}(b)$. By definition, $c$ is not rejected by the checks of lines~\ref{al:self:initial}, \ref{al:self:alph} and~\ref{al:self:hiera}. Indeed, line~\ref{al:self:initial} checks that the fields $\texttt{Head}_{-1}$, $\texttt{Head}_{+1}$, $\texttt{Tape}_{-1}$, $\texttt{Tape}_{+1}$, $\texttt{NTape}$ are empty, which is true since \begin{equation*} c \in \gra{0}{0}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}.
\end{equation*} Line~\ref{al:self:alph} checks that the lengths of the fields of $b$ and $c$ are the same, while the checks of line~\ref{al:self:hiera} check that $b$ and $c$ have the same values in the fields $\texttt{MAddr}, \texttt{MClock}, \texttt{Alarm}, \texttt{Prog}$ and $\revprog$, all of which hold by definition. Since $c$ is not rejected by these checks, $F_{\vec{k},S,T,U}$ is a subrule of \begin{equation*} \coordi[S,T]\circ \unive[15,\vec{\nu}_{\texttt{Self}},(\length{\pi_j})_{j<M},\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}},S,T,U,p,p^{-1}], \end{equation*} the inequalities of Lemma~\ref{universal} are satisfied by assumption, and $p$ is the program of $\texttt{Self}[15,\vec{\nu}_{\texttt{Self}}]$, Lemma~\ref{universal} gives that $F_{\vec{k},S,T,U}$ $(S,T,0)$-simulates $F_{\vec{k},S,T,U}$ with decoding function $\Phi$. For the exactness part, we have already noted several times that the values of the fields in $\C[\unive]$ are uniquely determined for all $c \in \Phi^{-1}(b)$. For the hierarchical fields (\textit{i.e.}, $\texttt{MAddr}, \texttt{MClock}, \texttt{Alarm}, \texttt{Prog}, \revprog$) the values are fixed for all $c \in \A_{\vec k,S,T,U}^{\mathbb Z}$. In addition, there do not exist any anonymous fields (since we chose $M=15$). Therefore, $\Phi$ is injective and the simulation is exact. In order to show that the simulation is complete, we will first show that if $c \in \gra{0}{0}{S}{T} \cap F_{\vec{k},S,T,U}^{-T}(\A_{\vec k,S,T,U}^{\mathbb Z})$, then $c \in \Phi^{-1}(\A_{\vec k,S,T,U}^{\mathbb Z})$. Indeed, line~\ref{al:self:initial} checks that $c \in \gra{0}{0}{S}{T}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}$ (in the sense that if this were not true, then $F_{\vec{k},S,T,U}$ would not be defined, so there would be a contradiction). According to Lemma~\ref{lem:fixalpha}, line~\ref{al:self:alph} checks that for every colony $\col{i}{c}$, $\pi_{\texttt{Tape}}(\col{i}{c})$ has the structure of the encoding of a letter in $\haine5^{\vec{k}}$. (We cannot immediately say that $\pi_{\texttt{Tape}}(\col{i}{c})$ is the encoding of a letter in $\haine5^{\vec{k}}$ because there are some triplets that are not used by $\Chi{\cdot}$. For example, if the first three letters in the $\texttt{Tape}$ tape are $111$, then $\pi_{\texttt{Tape}}(\col{i}{c})$ is not the encoding of a letter in $\haine5^{\vec{k}}$, even though the $2$'s and $3$'s are in the correct positions.) In addition, since $F_{\vec{k},S,T,U}^{T}(c)$ exists and the inequalities $\I(\vec{k},\vec{k},S,T,U,p,p^{-1})$ are satisfied, this means that the computation of $p$ on input $\pi_\texttt{Tape}(\col{i}{c})$ halts, therefore for all $i \in \mathbb Z$, $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i \in \haine5^{**}$. Lemma~\ref{lem:fixalpha} now implies that $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i \in \haine5^{\vec{k}}$. Finally, line~\ref{al:self:hiera} checks that $b_i \in \emp[S]{\texttt{MAddr}}\cap\emp[T]{\texttt{MClock}}\cap\emp[U]{\texttt{Alarm}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}$ by checking that the hierarchical fields of $b_i$ have the same values as the corresponding fields of the letters of $c$ (notice that the hierarchical fields are constant for the letters of $c$, so that Lemma~\ref{lem:fixfield} applies).
Summarizing, we have that $b \in \A_{\vec k,S,T,U}^{\mathbb Z}$, so that $c \in \A_{\vec k,S,T,U}^{\mathbb Z}\cap\gra{0}{0}{S}{T} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap\tilde{\Phi}^{-1}_{\texttt{Tape}}(\A_{\vec k,S,T,U}^\mathbb Z)$. Finally, if $c \in F_{\vec{k},S,T,U}^{-2T}(\A_{\vec k,S,T,U}^{\mathbb Z})$, then $F_{\vec{k},S,T,U}^t\sigma^s(c) \in \gra{0}{0}{S}{T}$, for some $s\in \co{0}{S}$ and $t \in \co{0}{T}$. Therefore, $F_{\vec{k},S,T,U}^t\sigma^s(c) \in \gra{0}{0}{S}{T} \cap F_{\vec{k},S,T,U}^{-T}(\A_{\vec k,S,T,U}^{\mathbb Z})$ so that $F_{\vec{k},S,T,U}^t\sigma^s(c) \in \Phi^{-1}(\A_{\vec k,S,T,U}^{\mathbb Z})$. This implies that \\ $\Omega_{F_{\vec{k},S,T,U}} \subseteq F_{\vec{k},S,T,U}^{-2T}(\A_{\vec k,S,T,U}^{\mathbb Z}) \subseteq \bigsqcup_{\begin{subarray}c0\le t<T\\0\le s<S\end{subarray}}F_{\vec{k},S,T,U}^t\sigma^s(\mathcal{D}(\Phi))$, which means that the simulation is also complete. \end{proof} \subsection{Satisfying the inequalities} It is not as straightforward to see that the inequalities $\I(\vec{k},\vec{k},S,T,U,p,p^{-1})$ can be satisfied as it was for Lemma~\ref{universal}, because in this case $\vec{k'}$ is equal to $\vec{k}$, which means that we cannot fix $\vec{k'}$ and then choose $\vec{k},S,T,U$ sufficiently big. \begin{remark}\label{rem:inequselfsimi} We can find $\vec{k},S,T,U$ such that the inequalities of Lemma~\ref{self} are satisfied. In addition, for all $\epsilon > 0$, $S / T$ can be made larger than $1 -\epsilon$. (Intuitively, the macro-tiles can be made as close to a square as we want.) \end{remark} \begin{proof} We have to satisfy the following inequalities: \[\both{ U\ge\max\{t_p({\haine5^{\vec{k}}}),t_{p^{-1}}({\haine5^{\vec{k}}})\}\\ S\ge\max\{2U,\norm{\Chi{\haine5^{\vec{k}}}}\}\\ T\ge 4U+S\\ k_\texttt{Addr},k_{\texttt{Addr}_{+1}}\ge\norm S\\ k_\texttt{Clock},k_{\texttt{Clock}_{+1}}\ge\norm T\\ k_{\texttt{Head}_{-1}},k_{\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ k_\texttt{Tape},k_\texttt{NTape},k_{\texttt{Tape}_{-1}},k_{\texttt{Tape}_{+1}}\ge1\\ k_\texttt{Prog}=\norm p\\ k_\revprog=\norm{p^{-1}}\\ k_\texttt{MAddr} = \norm{S}\\ k_{\texttt{MClock}} = \norm{T}\\ k_{\texttt{Alarm}} = \norm{U} ~.}\] For all $S,T,U$, let us choose $\vec{k}\defeq\vec{k}_{S,T,U}$ such that all of the inequalities except the first three are equalities. Then, $\norm{\Chi{\haine5^{\vec{k}}}}\le P_1(\log{S},\log{T},\log{U})$ and $\max\{t_p({\haine5^{\vec{k}}}),t_{p^{-1}}({\haine5^{\vec{k}}})\} \le P_2(\log{S},\log{T},\log{U})$, for some polynomials $P_1,P_2$. These follow by definition of $\Chi{\cdot}$ and $\vec{k}_{S,T,U}$ and the fact that the program $p$ is fixed and has polynomial complexity. Therefore, it is enough to find $S,T,U$ that satisfy the following inequalities: \[\both{ U\ge P_2(\log{S},\log{T},\log{U})\\ S\ge\max\{2U,P_1(\log{S},\log{T},\log{U})\}\\ T\ge 4U+S ~.}\] For all $S,U$, let us choose $T \defeq T_{S,U}=S+4U$. Also, for all $S,S_0,r$, let us choose $U \defeq U_{S,S_0,r}=\log^r(S+S_0)$. Then, the third inequality is satisfied and the other two can be written as follows: \[\bothrl{ \log^r(S+S_0)\ge &P_2(\log{S},\log(S+4\log^r(S+S_0)),\log(\log^r(S+S_0)))\\ S\ge&\max\{2\log^r(S+S_0),\\ &P_1(\log{S},\log(S+4\log^r(S+S_0)),\log(\log^r(S+S_0)))\}.}\] There exist \emph{fixed} $r,S_0 \in \mathbb N$ such that the first inequality is satisfied for all $S$, because $P_2$ is a fixed polynomial (hence its degree is a fixed number).
Let us choose such $r,S_0$. Then, if $S$ is sufficiently large, the second inequality is also satisfied (since $r,S_0$ are now fixed), because its right-hand side grows only polylogarithmically in $S$. This finishes the proof of the first claim. For the second claim, $S/T=\frac{S}{S+4\log^r(S+S_0)}$, which can be made larger than $1-\epsilon$ by choosing $S$ sufficiently large. \end{proof} \begin{corollary} There exists an RPCA $G$ such that $\orb{G}$ is non-empty, aperiodic and $\NE(G)=\{0\}$. \end{corollary} \begin{proof} Let $G \defeq F_{\vec{k},S,T,U} \colon \A_{\vec k,S,T,U}^{\mathbb Z} \to \A_{\vec k,S,T,U}^{\mathbb Z}$, for some parameters that satisfy $\I(\vec{k},\vec{k},S,T,U,p,p^{-1})$. This is possible, according to Remark~\ref{rem:inequselfsimi}. By definition, we have that $S < T$. It is not difficult to see that $G^{-1}(\A_{\vec k,S,T,U}^{\mathbb Z})$ is nonempty. Then, Lemmas~\ref{self}, \ref{lem:aperiodichierarchy}, \ref{l:nonvide} and Proposition~\ref{prop:hochman} imply that $\orb{G}$ is non-empty, uncountable, aperiodic and $\NE(G)=\{0\}$. \end{proof} This finishes the construction of an extremely-expansive, aperiodic 2D SFT. Once we have achieved self-simulation, extreme expansiveness follows immediately from Proposition~\ref{prop:hochman}. \section{Hierarchical simulation} We now want to construct more general nested hierarchical simulations, where the parameters of the simulation might vary at every simulation level. This structure is more flexible than a simple self-simulating RPCA, and it will be more useful in the various applications. Let us fix the field list $\C[\hsim] \defeq \C[\unive] \sqcup [\texttt{Level},\texttt{Prog},\revprog]$ and let $\vec{\nu}_{\hsim}$ be the corresponding vector of directions. \begin{itemize} \item $\texttt{Prog}$ and $\revprog$ are used as in the previous section. \item $\texttt{Level}$ is used to obtain the values of $v_{\texttt{MAddr}}, v_{\texttt{MClock}}$ and $v_{\texttt{Alarm}}$, not through a direct projection, as in the previous case, but in a polynomially computable way. \end{itemize} In the following, let $\seq S,\seq T, \seq U \in \mathbb N^{\mathbb N}$ be sequences of integers and let $\vec{k} \colon \mathbb N \to \mathbb N^M$ be a \emph{sequence} of vectors depending on $n$. (It gives rise to a vector valuation by using $\bina{\pi_\field}$ as the index of the sequence.)
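Before stating the rule, here is a minimal sketch (ours) of what such polynomially checkable sequences look like in practice; the concrete choices are the ones that will reappear in Remark~\ref{rem:inequhiera} below, and the point is that each sequence is produced by a fixed short program running in time polynomial in $n$.
\begin{verbatim}
# Sketch (ours) of polynomially computable parameter sequences:
# S_n = Q**(n+n0), U_n = (n+n0)**r, T_n = S_n + 4*U_n.
# The binary length of S_n is only (n+n0)*log2(Q), polynomial in n.
def make_sequences(Q=2, n0=10, r=3):
    S = lambda n: Q ** (n + n0)
    U = lambda n: (n + n0) ** r
    T = lambda n: S(n) + 4 * U(n)
    return S, T, U

S, T, U = make_sequences()
assert T(5) == S(5) + 4 * U(5)
\end{verbatim}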
\begin{algo}{hsim}{\hsim}{M,\vec\nu,\vec{k},\seq S,\seq T,\seq U} \IF{$\bina{\pi_\texttt{Clock}}=0$} \STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:hier:empty} \STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1}]$} \COMMENT{Check that the lengths of the simulated letter are correct} \label{al:hier:alph} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\texttt{Prog},\pi_{\texttt{Prog}}]$} \COMMENT{$\texttt{Prog}$ of the simulated letter is the same}\label{al:hier:prog} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\revprog,\pi_\revprog]$} \COMMENT{$\revprog$ is also the same}\label{al:hier:revprog} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\texttt{Level},\anib{\bina{\pi_\texttt{Level}}+1}]$} \COMMENT{$\bina{\texttt{Level}}$ of the simulated letter increases by $1$}\label{al:hier:lev} \ENDIF \STATE{$\unive[M$, $\vec{\nu}$, $\vec{k}_{\bina{\pi_\texttt{Level}}}$, $\bina{\pi_\texttt{Addr}}$, $\bina{\pi_\texttt{Clock}}$, $\seq{S}_{\bina{\pi_\texttt{Level}}}$, $\seq{T}_{\bina{\pi_\texttt{Level}}}$, $\seq{U}_{\bina{\pi_\texttt{Level}}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$} \COMMENT{Simulate}\label{al:hier:unive} \STATE{$\coordi[\seq{S}_{\bina{\pi_\texttt{Level}}},\seq{T}_{\bina{\pi_\texttt{Level}}}]$} \end{algo} We will now construct a nested simulation of RPCA where the simulation parameters are different at every level. \begin{lemma}\label{lem:nestsimul} Let $\seq U,\seq S,\seq T$ be polynomially checkable sequences of integers. Let us fix the field list $\C[\hsim]\defeq [0,\ldots,12]$, the corresponding \emph{fixed} direction vector $\vec{\nu}_{\hsim}$ and a polynomially checkable sequence of $13$-tuples $\seq{\vec{k}}\in(\mathbb N^{13})^\mathbb N$. Let $F$ be the IPPA with directions $\vec{\nu}_{\hsim}$ and permutation $\hsim[13,\vec{\nu}_{\hsim},\vec{k},\seq S,\seq T, \seq U;\C[\hsim]]$ and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively. For all $n \in \mathbb N$, let $F_n$ be the restriction of $F$ to the subalphabet \begin{equation*} \A_{n}\defeq \haine5^{\vec{k}_n}\cap\emp[n]{\texttt{Level}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}, \end{equation*} and assume that the following inequalities hold for all $n \in \mathbb N$: \[\both{ \I(\vec{k}_n,\vec{k}_{n+1},S_n,T_n,U_n,p,p^{-1})\\ k_{n,\texttt{Prog}}\geq\norm p\\ k_{n,\revprog}\geq\norm{p^{-1}}\\ k_{n,\texttt{Level}} \geq \norm{n} ~.}\] Then, $F_n\simu[S_n,T_n,0,\Phi_n]F_{n+1}$ completely exactly, where $\Phi_n\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma_n}}$ and $\Sigma_n \defeq \A_{n}^{\mathbb Z} \cap \gra{0}{0}{S_n}{T_n}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \tilde{\Phi}^{-1}_{\texttt{Tape}}(\A_{n+1}^{\mathbb Z})$. \end{lemma} The proof is very similar to the proof of Lemma~\ref{self}. There exist some differences, though. For example, we do not have the fields $\texttt{MAddr}, \texttt{MClock}$ and $\texttt{Alarm}$. These fields are computed with the aid of field $\texttt{Level}$, so we perform a hierarchical check for $\texttt{Level}$. Apart from that, the proof follows the same pattern.
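To make the hierarchical check of line~\ref{al:hier:lev} concrete, here is a small sketch (ours; symbols are represented as one-character strings, and we ignore the exact bit order used by $\anib{\cdot}$): every cell decodes its level $n$ from the horizontally constant $\texttt{Level}$ field, computes the expected encoding of $n+1$, and compares a single symbol of it against its own window of the simulated letter, exactly as $\hier$ does.
\begin{verbatim}
def expected_level(level_bits):
    """Encoding of bina(Level)+1, shared by all cells."""
    return format(int(level_bits, 2) + 1, "b")

def check_level(l_i, level_bits, v, t):
    """True iff tape symbol t (a one-char string) at address v is
       consistent with the expected Level prefix of the simulated
       letter, whose field i starts right after position l_i."""
    prefix = expected_level(level_bits)
    if l_i < v <= l_i + len(prefix):
        return t == prefix[v - l_i - 1]
    return True               # no constraint at other addresses

assert check_level(0, "101", 1, "1")       # 5 + 1 = 6 = "110"
assert not check_level(0, "101", 3, "1")   # third symbol must be "0"
\end{verbatim}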
Another difference, which will be important when we prove that the inequalities can be satisfied, is that the program is not fixed once we fix $M$ and $\vec{\nu}$, as in the self-similar case, but depends on $\vec{k},\seq{S},\seq{T}$ and $\seq{U}$. Therefore, its complexity also depends on these parameters. More precisely, $t_p(\A_{n}) = P(\length{\A_n}+t_{\vec{k}}(n)+t_{\seq{S}}(n)+t_{\seq{T}}(n)+t_{\seq{U}}(n))$, for some polynomial $P$ that does not depend on the parameters. This is due to the fact that the program consists of a bounded number (independent of $n$) of polynomially computable functions and a bounded number of calls to the parameters. Similarly, $\length{p}= O(\length{p_{\vec{k}}}+\length{p_{\seq{S}}}+\length{p_{\seq{T}}}+ \length{p_{\seq{U}}})$. (The same holds for $p^{-1}$.) \begin{proof} Let us fix $n \in \mathbb N$. We have to show three things: that $F_n$ $(S_n,T_n,0)$-simulates $F_{n+1}$ with decoding function $\Phi_n$ (simulation), that $\Phi_n$ is injective (exactness) and that $\Omega_{F_{n}} \subseteq \mathcal{D}(\Phi_n)$ (completeness). For the simulation part, let $b \in \A_{n+1}^{\mathbb Z}$ and $c \in \Phi_n^{-1}(b) \subseteq \A_{n}^{\mathbb Z}\cap\gra{0}{0}{S_n}{T_n} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}$. By definition, $c$ is not rejected by the checks of lines~\ref{al:hier:empty}, \ref{al:hier:alph}, \ref{al:hier:prog}, \ref{al:hier:revprog} and~\ref{al:hier:lev}. Since $c$ is not rejected by these checks, $F_{n}$ is a subrule of \begin{equation*} \coordi[S_n,T_n]\circ\unive[13,\vec{\nu}_{\unive},(\length{\pi_j})_{j<M},\bina{\pi_\texttt{Addr}},\bina{\pi_\texttt{Clock}},S_n,T_n,U_n,p,p^{-1}], \end{equation*} the inequalities of Lemma~\ref{universal} are satisfied by $\vec{k}_n$ and $\vec{k}_{n+1}$ by assumption, and $p$ is the program of $\hsim[13,\vec{\nu}_{\hsim},\vec{k},\seq S,\seq T, \seq U]$, Lemma~\ref{universal} gives that $F_{n}$ $(S_n,T_n,0)$-simulates $F_{n+1}$ with decoding function $\Phi_n$. For the exactness part, we have already noted several times that the values of the fields in $\C[\unive]$ are uniquely determined for all $c \in \Phi_n^{-1}(b)$. For the hierarchical fields (\textit{i.e.}, $\texttt{Level}, \texttt{Prog}, \revprog$) the values are fixed for all $c \in \A_{n}^{\mathbb Z}$. In addition, there do not exist any anonymous fields (since we chose $M=13$). Therefore, $\Phi_n$ is injective and the simulation is exact. For the completeness part, we will only show that if $c \in \gra{0}{0}{S_n}{T_n} \cap F_{n}^{-T_n}(\A_{n}^{\mathbb Z})$, then $c \in \Phi_n^{-1}(\A_{n+1}^{\mathbb Z})$. Having shown this, it is easy to conclude that the simulation is complete using the same argument as in the proof of Lemma~\ref{self}. Indeed, line~\ref{al:hier:empty} checks that $c \in \gra{0}{0}{S_n}{T_n}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}$. According to Lemma~\ref{lem:fixalpha}, line~\ref{al:hier:alph} checks that for every colony $\col{i}{c}$, $\pi_{\texttt{Tape}}(\col{i}{c})$ has the structure of the encoding of a letter in $\haine5^{\vec{k}_{n+1}}$. In addition, since $F_{n}^{T_n}(c)$ exists and the inequalities $\I(\vec{k}_{n},\vec{k}_{n+1},S_n,T_n,U_n,p,p^{-1})$ are satisfied, this means that the computation of $p$ on input $\pi_\texttt{Tape}(\col{i}{c})$ halts, therefore for all $i \in \mathbb Z$, $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i \in \haine5^{**}$.
Lemma~\ref{lem:fixalpha} now implies that $\tilde{\phi}_{\texttt{Tape}}(\col{i}{c})=b_i \in \haine5^{\vec{k}_{n+1}}$. Finally, lines~\ref{al:hier:lev}, \ref{al:hier:prog} and \ref{al:hier:revprog} check that $b_i \in \emp[n+1]{\texttt{Level}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}$. Summarizing, we have that $b \in \A_{n+1}^{\mathbb Z}$, so that $c \in \A_{n}^{\mathbb Z}\cap\gra0{0}{S_n}{T_n} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}\cap\tilde{\Phi}^{-1}_{\texttt{Tape}}(\A_{n+1}^\mathbb Z) = \Sigma_n$, or equivalently, that $c \in \Phi_n^{-1}(\A_{n+1}^{\mathbb Z})$. \end{proof} \subsection{Satisfying the inequalities} Let us now show that the inequalities of Lemma~\ref{lem:nestsimul} can be satisfied: \begin{remark}\label{rem:inequhiera} We can find $\vec{k} \in (\mathbb N^{13})^{\mathbb N}$ and $\seq{S},\seq{T},\seq{U} \in {\mathbb N}_1^{\mathbb N}$ such that the inequalities of Lemma~\ref{lem:nestsimul} are satisfied. In addition, the infinite product $\prod_{i\in\mathbb N} S_i/T_i$ can be made either equal to $0$ or different from $0$. \end{remark} We have to deal with two problems, which were not present in the previous cases: first, there is an infinite set of inequalities, since there is also an infinite set of RPCA, and they must all be satisfied simultaneously. Second, the size of the program and the complexity of the permutations depend on the choice of the parameters $\seq S$ and $\seq T$. \begin{proof} We have to satisfy the following inequalities, for all $n \in \mathbb N$: \[\both{ U_n\ge\max\{t_p({\haine5^{\vec{k}_{n+1}}}),t_{p^{-1}}({\haine5^{\vec{k}_{n+1}}})\}\\ S_n\ge\max\{2{U}_n,\norm{\Chi{\haine5^{\vec{k}_{n+1}}}}\}\\ T_n\ge 4{U}_n+S_n\\ k_{n,\texttt{Prog}}\geq\norm p\\ k_{n,\revprog}\geq\norm{p^{-1}}\\ k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ k_{n,\texttt{Addr}},k_{n,\texttt{Addr}_{+1}}\ge\norm{S_n}\\ k_{n,\texttt{Clock}},k_{n,\texttt{Clock}_{+1}}\ge\norm{T_n}\\ k_{n,\texttt{Tape}},k_{n,\texttt{NTape}},k_{n,\texttt{Tape}_{-1}},k_{n,\texttt{Tape}_{+1}}\ge1\\ k_{n,\texttt{Level}}=\norm{n} ~.}\] For all $n \in \mathbb N$ and all $\seq{S},\seq{T}$ and $\seq{U}$, let us choose $\vec{k}_n\defeq\vec{k}_{n,\seq{S},\seq{T},\seq{U}}$ such that the last four inequalities are satisfied as equalities. Then, we can see that \begin{equation*} \norm{\Chi{\haine5^{\vec{k}_n}}}\le P_1(\log{S_n},\log{T_n},\log{n},k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}},k_{n,\revprog},k_{n,\texttt{Prog}}), \end{equation*} for some polynomial $P_1$. We claim that $\length{p} \le c(\length{p_{\vec{k}}}+\length{p_{\seq{S}}}+\length{p_{\seq{T}}}+\length{p_{\seq{U}}})$, for some constant $c$. (The same holds for $p^{-1}$, and we can assume that the constant $c$ is the same.) This is because, as we have already noticed, the program $p$ uses a fixed number of polynomial operations and a bounded number of calls to the programs $p_{\vec{k}}$, $p_{\seq{S}}$, $p_{\seq{T}}$, $p_{\seq{U}}$. For the same reason, we have that \begin{multline*} \max\{t_p({\haine5^{\vec{k}}}),t_{p^{-1}}({\haine5^{\vec{k}}})\} \le P_2(\log{S_n},\log{T_n},\log{n},k_{n,\texttt{Head}_{-1}},\\ k_{n,\texttt{Head}_{+1}},k_{n,\revprog},k_{n,\texttt{Prog}},t_{\vec{k}}(n),t_{\seq S}(n),t_{\seq T}(n),t_{\seq U}(n)), \end{multline*} for some \emph{fixed} polynomial $P_2$ that does not depend on the parameter sequences.
Therefore, it is enough to find sequences $\vec{k},\seq{S},\seq{T},\seq{U}$ that satisfy the following inequalities, for all $n \in \mathbb N$: \[\bothrl{ k_{n,\texttt{Prog}}, k_{n,\revprog}\geq & c(\length{p_{\vec{k}}}+\length{p_{\seq{S}}}+\length{p_{\seq{T}}}+ \length{p_{\seq{U}}})\\ k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}}\geq & \length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ U_n\ge & P_2(\log{S_{n+1}},\log{T_{n+1}},\log(n+1),\\ &k_{n+1,\texttt{Head}_{-1}}, k_{n+1,\texttt{Head}_{+1}},k_{n+1,\revprog},k_{n+1,\texttt{Prog}},\\ &t_{\vec{k}}(n+1),t_{\seq S}(n+1),t_{\seq T}(n+1),t_{\seq U}(n+1))\\ S_n \ge & \max\{2U_n,P_1(\log{S_{n+1}},\log{T_{n+1}},\log(n+1),\\ & \length{p_{\vec{k}}},\length{p_{\seq{S}}},\length{p_{\seq{T}}},\length{p_{\seq{U}}})\}\\ T_n \ge & 4U_n+S_n ~.}\] Recall that in the above inequalities, $Q_p$ and $Q_{p^{-1}}$ depend on the choice of the parameter sequences. We will show two ways to do this. The first one does not give an extremely-expansive SFT, because $\prod_{i<n}S_i/T_i$ does not converge to $0$, while the second one does. \begin{enumerate} \item For all sequences $\seq S$ and $\seq U$, let us choose $T_n \defeq T_{n,\seq S,\seq U}=S_n+4U_n$. Also, for all $n_0,r$ and $Q \geq 2$, let us choose $U_{n,n_0,r}\defeq U_n=(n+n_0)^r$, $S_{n,n_0}\defeq S_n=Q^{n+n_0}$, $k_{n,\texttt{Prog}}=k_{n,\revprog}=n_0Qr$ and $k_{n,\texttt{Head}_{-1}}=k_{n,\texttt{Head}_{+1}}=n_0$ for all $n \in \mathbb N$. Then, the last inequality is satisfied by definition. In addition, for all $n_0,r,Q$, we have that $\length{p_{\seq{S}}} \le \norm{c_1n_0Q}$, $\length{p_{\seq{U}}} \le \norm{c_2rn_0}$ and $\length{p_{\seq{T}}}$, $ \length{p_{\vec{k}}} \le \norm{c_3rn_0Q}$, for some \emph{constants} $c_1,c_2,c_3$. This is true because the sequence $(n+n_0)^r$ is uniformly (polynomially) computable in $n,n_0,r$, which means that there exists an algorithm that takes as input $(n,n_0,r)$ and outputs $(n+n_0)^r$, for \emph{all} values of $n,n_0$ and $r$. If we use the program for this algorithm together with a description of $n_0$ and $r$, then we obtain a program of length bounded by $\norm{c_2rn_0}$ for the sequence $((n+n_0)^r)_{n \in \mathbb N}$. A similar argument holds for the sequence $Q^{n+n_0}$. Since this algorithm works for \emph{all} choices of $n_0,Q,r$, it means that $Q_p$ and $Q_{p^{-1}}$ are actually \emph{fixed}. In addition, all of the algorithms are polynomially computable, which means that \begin{equation*} t_{\seq S}(n), t_{\seq T}(n), t_{\vec{k}}(n) \le P_3(\log{Q^{n+n_0}}) \quad\text{and}\quad t_{\seq U}(n) \le P_4(\log((n+n_0)^r)), \end{equation*} for some \emph{fixed} polynomials $P_3,P_4$. Therefore, substituting these in the inequalities above and doing some regrouping of the terms in parentheses (which we omit), the inequalities that need to be satisfied can be written as follows: \[\bothrl{ n_0Qr \geq & c'\log(n_0Qr)\\ n_0 \geq & c'\\ (n+n_0)^r\ge & P_5(\log{Q^{n+n_0+1}},\log((n+n_0+1)^r),\log(n+1))\\ Q^{n+n_0}\ge & \max\{2(n+n_0+1)^r, P_6(\log{Q^{n+n_0+1}},\log(n+1))\} ~,}\] for some polynomials $P_5,P_6$ and a constant $c'$ that do not depend on $r,n_0$ or $Q$. Since $c'$ is fixed, the first two inequalities are true for all but a finite number of triples $n_0,Q,r$. Without loss of generality, we assume that they always hold.
We can choose $n_Q$ and $r$ such that the third inequality is true for all $n \in \mathbb N$ and all $n_0 \geq n_Q$, because its right-hand side is bounded by a fixed polynomial of $(n+n_0)$ and $r$, while the left-hand side grows like $n^r$. With $r$ fixed, we can also find $n'_Q$ such that the fourth inequality is satisfied for all $n \in \mathbb N$ and all $n_0 \geq n'_Q$, because the left-hand side grows exponentially in $(n+n_0)$ and the right-hand side only polynomially (since $r$ is fixed). By choosing $n_0=\max\{n_Q,n'_Q\}$ we can satisfy both inequalities for all $n$ at the same time. Note that $\prod_{i \in \mathbb N}S_i/T_i = \prod_{i \in \mathbb N}\left(1+4(i+n_0)^r/Q^{i+n_0}\right)^{-1} \neq 0$. Therefore, if we choose the sequences like this, we do not obtain a unique direction of non-expansiveness, but rather a cone of non-expansive directions. \item For all $n_0 \in \mathbb N$ and $Q\geq 2$, let us choose $S_{n,n_0}\defeq S_n=Q^{n+n_0}$, $T_{n,n_0}\defeq T_n=2S_n$ and $U_{n,n_0}\defeq U_n=\frac{S_n}{2Q}$, $k_{n,\texttt{Prog}}=k_{n,\revprog}=n_0Q$ and $k_{n,\texttt{Head}_{-1}}=k_{n,\texttt{Head}_{+1}}=n_0$ for all $n \in \mathbb N$. We can use a similar argument as in the previous case to show that it is enough to satisfy the following inequalities: \[\both{ n_0Q \geq \norm{Qn_0}\\ \frac{Q^{n+n_0}}{4}\ge P_3(\log{Q^{n+n_0+1}},\log(n+1))\\ Q^{n+n_0}\ge\max\{\frac{Q^{n+n_0+1}}{2Q},P_4(\log{Q^{n+n_0+1}},\log(n+1))\} ~,}\] for some polynomials $P_3,P_4$ that do not depend on $n_0$ and $Q$. Obviously, for all $Q$ these inequalities are satisfied when $n_0$ is sufficiently large. In this case, $\prod_{i \in \mathbb N}S_i/T_i= \prod_{i \in \mathbb N}S_i/2S_i = 0$, therefore the corresponding SFT is extremely expansive. \end{enumerate} \end{proof} In both cases, we have a lot of freedom in choosing the sequences. In the previous proof, we described just two of the possible choices; they are enough for the results we want to obtain, and they illustrate the basic ideas of the proof that would be needed in any other case. \section{Universality} Let $\C[\texttt{Other}]=[\texttt{OTape}_{-1},\texttt{OTape},\texttt{OTape}_{+1}]$ and let us fix the field list $\C[\intru]=\C[\hsim]\sqcup\C[\texttt{Other}]$ and the corresponding direction vector $\vec{\nu}_{\intru}$. For any $n$, consider an RPCA $G_n$ with permutation $\alpha_n \colon (\haine5^{l_n})^3 \to (\haine5^{l_n})^3$ over $\C[\texttt{Other}]$. This is not a strict restriction in itself: all RPCA can be represented in this way, up to a simple alphabet renaming and use of Remark~\ref{sharpization}. If the sequence of permutations $(\alpha_n)_{n \in \mathbb N}$ is polynomially computable, we can build a PPA that simulates $G_n$, for all $n \in \mathbb N$.
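As an illustration (ours, not tied to any particular application) of such a sequence, here is a toy polynomially computable $(\alpha_n)_{n\in\mathbb N}$: the letters of $G_n$ are words of length $l_n$, and $\alpha_n$ simply rotates the three tracks, which is clearly a (reversible) permutation of the triples.
\begin{verbatim}
# A toy polynomially computable sequence of permutations alpha_n
# acting on triples of words of length l_n (here l_n = n+1).
def alpha(n):
    l_n = n + 1
    def perm(x, y, z):
        assert all(len(w) == l_n for w in (x, y, z))
        return (z, x, y)      # rotate the three tracks
    return perm

assert alpha(1)("ab", "cd", "ef") == ("ef", "ab", "cd")
\end{verbatim}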
\begin{algo}{intru}{\intru}{M,\vec{\nu},\vec{k},\seq S,\seq T,\seq U, \seq\alpha} \STATE{$\alpha_{\bina{\texttt{Level}}}[\C[\texttt{Other}]]$} \COMMENT{$G_n$ on the $\C[\texttt{Other}] $ fields.}\label{al:intru:gn} \IF{$\bina{\pi_\texttt{Clock}}=0$} \STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:univ:empty} \STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1}]$} \label{al:intru:alph} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\texttt{Prog},\pi_{\texttt{Prog}}]$}\label{al:intru:prog} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\revprog,\pi_\revprog]$} \label{al:intru:revprog} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\bina{\pi_\texttt{Level}}+1},\texttt{Level},\anib{\bina{\pi_\texttt{Level}}+1}]$} \label{al:intru:lev} \ENDIF \STATE{$\unive[M$, $\vec{\nu}$, $\vec{k}_{\bina{\pi_\texttt{Level}}}$, $\bina{\pi_\texttt{Addr}}$, $\bina{\pi_\texttt{Clock}}$, $\seq{S}_{\bina{\pi_\texttt{Level}}}$, $\seq{T}_{\bina{\pi_\texttt{Level}}}$, $\seq{U}_{\bina{\pi_\texttt{Level}}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$} \COMMENT{Simulate}\label{al:intru:unive} \STATE{$\coordi[\seq{S}_{\bina{\pi_\texttt{Level}}},\seq{T}_{\bina{\pi_\texttt{Level}}};\C[\coordi]]$} \end{algo} The only difference between this rule and $\hsim$ is that it has $3$ additional fields (which implies that $\vec{k}$ will be chosen in $(\mathbb N^{16})^{\mathbb N}$) and that we apply $\alpha_{\bina{\texttt{Level}}}$ to the field list $\C[\texttt{Other}]$ \emph{independently} of what we do on $\C[\hsim]$. \begin{lemma}\label{lem:univppa} Let $\seq U,\seq S,\seq T$ be polynomially checkable sequences of integers and $\alpha$ a polynomially computable sequence of permutations. Let us fix the field list $\C[\intru]\defeq [0,\ldots,15]$, the corresponding \emph{fixed} direction vector $\vec{\nu}_{\intru}$ and a polynomially checkable sequence of $16$-tuples $\seq{\vec{k}}\in(\mathbb N^{16})^\mathbb N$. Let $F$ be the IPPA with directions $\vec{\nu}_{\intru}$ and permutation $\intru[16,\vec{\nu}_{\intru},\vec{k},\seq S,\seq T, \seq U;\C[\intru]]$ and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively. For all $n \in \mathbb N$, let $F_n$ be the restriction of $F$ to the subalphabet \begin{equation*} \A_{n}\defeq \haine5^{\vec{k}_n}\cap\emp[n]{\texttt{Level}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}, \end{equation*} and assume that the following inequalities hold: \[\both{ \I(\vec{k}_n,\vec{k}_{n+1},S_n,T_n,U_n,p,p^{-1})\\ k_{n,\texttt{Prog}}\geq\length p\\ k_{n,\revprog}\geq\length{p^{-1}}\\ k_{n,\texttt{Level}} \geq \norm{n}\\ k_{n,\texttt{OTape}}=k_{n,\texttt{OTape}_{-1}}=k_{n,\texttt{OTape}_{+1}}\geq l_n ~.}\] If $\Omega_{G_n} \neq \emptyset$, then $F_n$ completely $(S_n,T_n,0)$-simulates $F_{n+1}$ with decoding function $\Phi_n=\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma_n}}$, where $\Sigma_n \defeq \A_{n}^{\mathbb Z} \cap \gra{0}{0}{S_n}{T_n}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \tilde{\Phi}^{-1}_{\texttt{Tape}}(\A_{n+1}^{\mathbb Z}).$ The simulation is exact if and only if $\Omega_{G_n}$ is a singleton. In addition, if $c \in F_n^{-1}(\A_{n}^{\mathbb Z})$, then $G_n \pi_{\C[\texttt{Other}]}(c)=\pi_{\C[\texttt{Other}]}F_n(c)$.
\end{lemma} As mentioned before, the proof is very similar to the proof of Lemma~\ref{lem:nestsimul}. Therefore, we are going to omit most of the details and only stress those points where there is a difference. \begin{proof} Let us fix $n \in \mathbb N$. For the first claim, we have to show that $F_n$ $(S_n,T_n,0)$-simulates $F_{n+1}$ with decoding function $\Phi_n$ (simulation), that $\Phi_n$ is an injection if and only if $\Omega_{G_n}$ is a singleton (exactness) and that $\Omega_{F_{n}} \subseteq \mathcal{D}(\Phi_n)$ (completeness). For the simulation part, let $b \in \A_{n+1}^{\mathbb Z}$. We can find $c \in \A_{n}^{\mathbb Z}$ that simulates $b$: we choose $c \in \Phi_n^{-1}(b) \subseteq \A_{n}^{\mathbb Z}\cap\gra{0}{0}{S_n}{T_n}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}$ such that $\pi_{\C[\texttt{Other}]}(c) \in \Omega_{G_n}$ (this is possible by the assumption that $\Omega_{G_n} \neq \emptyset$). Then, it is easy to see that $c$ simulates $b$, because $\C[\texttt{Other}]$ is only \xpr{touched} by $\alpha_{\bina{\texttt{Level}}}\defeq \alpha_n$, $p$ is the program of $\intru[16,\vec{\nu}_{\intru},\vec{k},\seq S,\seq T, \seq U;\C[\intru]]$ and $G_n^{T_n}(\pi_{\C[\texttt{Other}]}(c))$ exists. For the exactness part, as usual $\pi_{\C[\hsim]}(c)$ is uniquely determined by $b$. However, $\pi_{\C[\texttt{Other}]}(c)$ can be chosen independently of $b$ to be any element of $\Omega_{G_n}$, so that the simulation is exact if and only if $\Omega_{G_n}$ is a singleton. Finally, for the completeness part, an argument almost identical to the argument in the proof of Lemma~\ref{lem:nestsimul} shows that if $c \in \gra{0}{0}{S_n}{T_n} \cap F_{n}^{-T_n}(\A_{n}^{\mathbb Z})$, then $c \in \Phi_n^{-1}(\A_{n+1}^{\mathbb Z})$. As we know, this is enough to show that the simulation is complete. The second claim, that if $c \in F_n^{-1}(\A_{n}^{\mathbb Z})$, then $G_n \pi_{\C[\texttt{Other}]}(c)=\pi_{\C[\texttt{Other}]}F_n(c)$, is straightforward from the definition of $F_n$, since the only rule that \xpr{touches} the fields $\C[\texttt{Other}]$ is $G_n$. \end{proof} \begin{remark}\label{rem:univenonempt} \begin{enumerate} \item $\Omega_{F_0} \neq \emptyset$ if and only if $\Omega_{G_n} \neq \emptyset$, for all $n \in \mathbb N$. \item If $\Omega_{F_0} \neq \emptyset$, then $F_0$ completely simulates $G_n$ for all $n \in \mathbb N$. \end{enumerate} \end{remark} \begin{proof} \begin{enumerate} \item If $\Omega_{G_n} = \emptyset$ for some $n \in \mathbb N$, then $\Omega_{F_n} = \emptyset$, so that, since $F_0$ simulates $F_n$ (by transitivity of simulation), we obtain that $\Omega_{F_0}=\emptyset$ by Lemma~\ref{lem:aperiodichierarchy}. If, on the other hand, $\Omega_{G_n} \neq \emptyset$ for all $n \in \mathbb N$, then Lemma~\ref{l:nonvide} gives that $\Omega_{F_0} \neq \emptyset$. \item If $\Omega_{F_0} \neq \emptyset$, then the second claim of Lemma~\ref{lem:univppa} implies that $F_n$ factors onto $G_n$. Since $F_0$ simulates $F_n$, for all $n \in \mathbb N$, we obtain that $F_0$ simulates $G_n$, for all $n \in \mathbb N$. \end{enumerate} \end{proof} \begin{remark}\label{rem:univextrexp} Even if $\prod_{i \in \mathbb N} S_i/T_i =0$, $F_0$ is not necessarily extremely expansive, since we might have non-expansive directions coming from the $G_n$ part.
However, in the special case where $\Omega_{G_n}$ is a singleton for all $n \in \mathbb N$, all the simulations are exact and it is straightforward to see that $\NE(F_0) = \{0\}$, because Proposition~\ref{prop:hochman} applies. \end{remark} \subsection{Satisfying the inequalities} \begin{remark}\label{rem:inequunivppa} We can find $\vec{k} \in (\mathbb N^{16})^{\mathbb N}$ and $\seq{S},\seq{T},\seq{U} \in {\mathbb N}_1^{\mathbb N}$ such that the inequalities of Lemma~\ref{lem:univppa} are satisfied and $\prod_{i \in \mathbb N}S_i/T_i=0$. \end{remark} We only state the case $\prod_{i \in \mathbb N}S_i/T_i=0$ (even though we can also make it $\neq 0$), because it is the one that we will need and use in the applications. \begin{proof} Let us write explicitly the inequalities that we need to satisfy: \[\both{ U_n\ge\max\{t_p({\haine5^{\vec{k}_{n+1}}}),t_{p^{-1}}({\haine5^{\vec{k}_{n+1}}})\}\\ S_n\ge\max\{2{U}_n,\length{\Chi{\haine5^{\vec{k}_{n+1}}}}\}\\ T_n\ge 4{U}_n+S_n\\ k_{n,\texttt{Prog}}\geq\length p\\ k_{n,\revprog}\geq\length{p^{-1}}\\ k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ k_{n,\texttt{Addr}},k_{n,\texttt{Addr}_{+1}}\ge\norm{S_n}\\ k_{n,\texttt{Clock}},k_{n,\texttt{Clock}_{+1}}\ge\norm{T_n}\\ k_{n,\texttt{Tape}},k_{n,\texttt{NTape}},k_{n,\texttt{Tape}_{-1}},k_{n,\texttt{Tape}_{+1}}\ge1\\ k_{n,\texttt{Level}} = \norm{n}\\ k_{n,\texttt{OTape}}=k_{n,\texttt{OTape}_{-1}}=k_{n,\texttt{OTape}_{+1}}= l_n ~.}\] For all $\seq{S},\seq{T}$ and $\seq{U}$, let us choose $\vec{k}_n\defeq\vec{k}_{n,\seq{S},\seq{T},\seq{U}}$ such that the last five inequalities are satisfied as equalities. Then, we have the crucial inequality \begin{equation*} \norm{\Chi{\haine5^{\vec{k}_n}}}\le P_1(\log{S_n},\log{T_n},\log{n},l_n,k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}},k_{n,\revprog},k_{n,\texttt{Prog}}), \end{equation*} where $l_n$ is the length of the letters of $\alpha_n$. The other crucial inequality of the proof of Remark~\ref{rem:inequhiera} also holds without any essential changes: \begin{multline*} \max\{t_p({\haine5^{\vec{k}}}),t_{p^{-1}}({\haine5^{\vec{k}}})\} \le P_2(\log{S_n},\log{T_n},\log{n},l_n,k_{n,\texttt{Head}_{-1}},\\ k_{n,\texttt{Head}_{+1}},k_{n,\revprog},k_{n,\texttt{Prog}},t_{\vec{k}}(n),t_{\seq S}(n),t_{\seq T}(n),t_{\seq U}(n)) \end{multline*} for some polynomial $P_2$ that does not depend on the parameters. This holds because the permutation applied consists of a bounded number of polynomial operations (recall that $\alpha$ is polynomially computable and fixed for this specific construction) and a bounded number of calls to $\seq S$, $\seq T$, $\seq U$ and $\vec{k}$. Also, since $\alpha$ is polynomially computable, $l_n$ (which is part of the output of $\alpha_n$) is also bounded by a polynomial of $n$, so that we can \xpr{remove} $l_n$ from the right-hand side of the previous inequalities and \xpr{incorporate} it in the polynomials $P_1,P_2$. From this point on, the proof is identical to the proof of Remark~\ref{rem:inequhiera}. (We are free to choose whether $\prod_{i \in \mathbb N}S_i/T_i$ is equal to $0$ or not.) \end{proof} \subsection{Domino problem} \begin{theorem} It is undecidable whether an extremely expansive SFT is empty. \end{theorem} \begin{proof} Let $\mathcal{M}$ be an arbitrary TM with program $p'$. For all $n \in \mathbb N$, we define $\alpha_n$ as follows: $l_n =1$ and $\alpha_n(0,0,0)=(0,0,0)$ if $\halt{p'}{n}{0^n}$ is true (\textit{i.e.}, if $\mathcal{M}$ does not halt within $n$ steps).
$\alpha_n$ is undefined in all other cases. $(\alpha_n)_{n \in \mathbb N}$ is a polynomially computable sequence of permutations. $\Omega_{G_n}$ is a singleton, equal to $\{\dinf 0 \}$, if and only if $\mathcal{M}$ does not halt within $n$ steps. Otherwise, $\Omega_{G_n}$ is empty. Let us construct the sequence of RPCA $(F_n)_{n \in \mathbb N}$ as in Lemma~\ref{lem:univppa}, corresponding to this sequence $\alpha$ and to sequences $\vec{k}, \seq S, \seq T, \seq U$ that satisfy the inequalities and for which $\prod_{i \in \mathbb N} S_i/T_i=0$. Then, Remark~\ref{rem:univenonempt} implies that $\Omega_{F_0}$ (equivalently, $\orb{F_0}$) is non-empty if and only if $\Omega_{G_n}$ is non-empty for all $n$, which is equivalent to saying that $\mathcal{M}$ does not halt on input $0^{\infty}$. In addition, Remark~\ref{rem:univextrexp} implies that if $\Omega_{F_0}$ is non-empty, then $\NE(F_0)=\{0\}$. Therefore, for every TM $\mathcal{M}$, we have constructed a 2D SFT $\orb{F_0}$ that is non-empty if and only if $\mathcal{M}$ does not halt on input $0^{\infty}$, and such that if $\orb{F_0} \neq \emptyset$, then $\orb{F_0}$ is extremely expansive. This concludes the proof of the undecidability. \end{proof} It follows from the previous proof that we have actually proved the following: let $A$ be the family of sets of forbidden patterns that define empty SFTs, and let $B$ be the family that defines non-empty extremely expansive SFTs (with unique direction of non-expansiveness $\infty$). There does not exist a recursively enumerable set $X$ that contains $B$ and is disjoint from $A$. In other words, if an algorithm correctly recognizes all non-empty extremely-expansive SFTs, then it must also (falsely) recognize an empty SFT. \subsection{Intrinsic universality} The second application concerns the universality properties of RPCA. \begin{theorem}\label{c:intru} For any computably enumerable set of non-empty PPA, there exists a PPA that completely simulates all of them. \end{theorem} \begin{proof} First of all, we can assume that all the PPA are over the field list $\C[\texttt{Other}]$ with the corresponding directions. This is true because we can encode, in polynomial time, all the left-moving fields into a unique left-moving field, and similarly for the other types of fields. Then, saying that a set of PPA is computably enumerable is equivalent to saying that the corresponding set of permutations that define these PPA is computably enumerable. In addition, for every computably enumerable set of PPA $X$ (over the field list $\C[\texttt{Other}]$), there exists a \emph{polynomially computable sequence} $(G_n)_{n\in\mathbb N}$ of PPA that contains exactly the elements of $X$. Equivalently, there exists a \emph{polynomially computable sequence} of permutations $(\alpha_n)_{n \in \mathbb N}$ that contains exactly the permutations of the PPA in $X$. (Let $g$ be the permutation of a fixed element of $X$. The polynomial algorithm of $(\alpha_n)_{n \in \mathbb N}$ takes as input $n$, runs the algorithm that enumerates $X$ for $n$ steps and sets $\alpha_n$ equal to the last permutation that was output. If no permutation has yet been output, then $\alpha_n$ is set equal to $g$.) If we use this sequence $\alpha$ and sequences $\vec{k}, \seq S, \seq T, \seq U$ that satisfy the inequalities to define the sequence $(F_n)_{n \in \mathbb N}$ as in Lemma~\ref{lem:univppa}, then, since by assumption $\Omega_{G_n} \neq \emptyset$ for all $n \in \mathbb N$, Remark~\ref{rem:univenonempt} implies that $F_0$ completely simulates $G_n$, for all $n \in \mathbb N$.
\end{proof} Theorem~\ref{c:intru} applies, up to a conjugacy, to computably enumerable sets of nonempty RPCA. In some sense, it gives a deterministic version of the result in \cite{lafitteweiss}. The same result is not true for the non-computably-enumerable set of all nonempty RPCA, due to an argument by Hochman \cite{hochmanuniv} and Ballier \cite{balliermedvedev}. Nevertheless, the theorem applies to the family of all reversible (complete) cellular automata, since the family of RCA is computably enumerable. Unfortunately, it gives an RPCA (partial CA) that simulates all RCA (full CA) instead of an RCA. The existence of an RCA that simulates all RCA seems to be a much more difficult question and is still open (see for instance \cite{guillaume1}). \section{Synchronizing computation}\label{s:comput} We now introduce one more trick in our construction: the encoding of an infinite sequence inside an infinite nested simulation, by encoding increasing finite prefixes of the infinite sequence inside the alphabets of the RPCA of the nested simulation. Let $\C[\syncomp]\defeq\C[\unive] \sqcup [\texttt{MHist},\texttt{MHist}_{+1},\texttt{Prog},\revprog]$. In this simulation, we do not use a field $\texttt{Level}$ in order to store the parameter $n$. Instead, it will be obtained as the length of the field $\texttt{MHist}$. $p'$ is the program of a TM. It is used to reject some nested simulation sequences, depending on the infinite sequence that is stored (through its increasing finite prefixes) in the alphabets of the RPCA. \begin{algo}{syncomp}{\syncomp}{M,\vec{\nu},\vec{k},\seq S,\seq T,\seq U,p'} \STATE{$\chekk[\pi_\texttt{MHist}=\pi_{\texttt{MHist}_{+1}}]$}\label{al:syncomp:mhistconsistency} \IF{$\bina{\pi_\texttt{Clock}}=0$} \STATE{$\chekk[\halt{p'}{\length{\texttt{MHist}}}{\texttt{MHist}}]$}\label{al:syncomp:medvedev} \STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:syncomp:empty} \STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1}]$} \COMMENT{Check that the lengths of the simulated letter are correct} \label{al:syncomp:alph} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\texttt{Prog},\pi_{\texttt{Prog}}]$} \COMMENT{$\texttt{Prog}$ of the simulated letter is the same}\label{al:syncomp:prog} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\revprog,\pi_\revprog]$} \COMMENT{$\revprog$ is also the same}\label{al:syncomp:revprog} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\texttt{MHist},\pi_{\texttt{MHist}}]$} \COMMENT{$\texttt{MHist}$ of the simulating letters is a prefix of $\texttt{MHist}$ of the simulated one}\label{al:syncomp:infinitesequence} \ENDIF \STATE{$\unive[M$, $\vec{\nu}$, $\vec{k}_{\length{\texttt{MHist}}}$, $\bina{\pi_\texttt{Addr}}$, $\bina{\pi_\texttt{Clock}}$, $\seq{S}_{\length{\texttt{MHist}}}$, $\seq{T}_{\length{\texttt{MHist}}}$, $\seq{U}_{\length{\texttt{MHist}}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$} \label{al:syncomp:unive} \STATE{$\coordi[\seq{S}_{\length{\texttt{MHist}}},\seq{T}_{\length{\texttt{MHist}}}]$} \end{algo} \begin{lemma}\label{l:mhist} Let $\seq S,\seq T,\seq U$ be polynomially checkable sequences of integers and $p'$ be the program of a TM.
Let us fix the field list $\C[\syncomp]\defeq [0,\ldots,13]$, the corresponding \emph{fixed} direction vector $\vec{\nu}_{\syncomp}$ and a polynomially checkable sequence of $14$-tuples $\seq{\vec{k}}\in(\mathbb N^{14})^{\mathbb N}$. Let $F$ be the IPPA with directions $\vec{\nu}_{\syncomp}$ and permutation \begin{equation*} \syncomp[14,\vec{\nu}_{\syncomp},\vec{k},\seq S,\seq T, \seq U,p'] \end{equation*} and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively. For all $w \in \haine2^{*}$, let $S_w\defeq S_{\length{w}}$, $T_w \defeq T_{\length{w}}$, $U_w \defeq U_{\length{w}}$ and $F_w$ be the restriction of $F$ to the subalphabet \begin{equation*} \A_{w}\defeq \haine5^{\vec{k}_{\length{w}}}\cap\emp[w]{\texttt{MHist},\texttt{MHist}_{+1}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}, \end{equation*} and assume that the following inequalities hold for all $n \in \mathbb N$: \[\both{ \I(\vec{k}_n,\vec{k}_{n+1},S_n,T_n,U_n,p,p^{-1})\\ k_{n,\texttt{Prog}}\geq\length p\\ k_{n,\revprog}\geq\length{p^{-1}}\\ k_{n,\texttt{MHist}},k_{n,\texttt{MHist}_{+1}} \geq n ~.}\] Then, $F_w\simu[S_w,T_w,0,\Phi_w]\bigsqcup_{\begin{subarray}{c}a\in \haine2\end{subarray}}{F_{wa}}$ completely exactly, where \begin{equation*} \Sigma_w \defeq \A_{w}^{\mathbb Z} \cap \gra{0}{0}{S_w}{T_w} \cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \tilde{\Phi}^{-1}(\bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa}^{\mathbb Z}), \text{ and } \Phi_w\defeq\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma_w}}. \end{equation*} \end{lemma} \begin{proof} Let $w \in \haine2^*$ and $\length{w}=n$. By definition, $S_w\defeq S_n$, $T_w\defeq T_n$ and $U_w \defeq U_n$. If $p'$ halts on input $w$ within $n$ steps, then the check of line~\ref{al:syncomp:medvedev} will reject every configuration, which means that $\mathcal{D}(F_{w}) = \emptyset$. But, in this case, $p'$ also halts on input $wa$ within $n+1$ steps, for all $a \in \haine2$, so that $\mathcal{D}(\bigsqcup_{\begin{subarray}{c}a\in \haine2\end{subarray}}{F_{wa}}) = \emptyset$, too. By definition, the empty PCA strongly, completely simulates itself for all possible choices of the simulating parameters, so that the claim is true in this case. Suppose, then, that $p'$ does not halt on input $w$ within $n$ steps. Then, the check of line~\ref{al:syncomp:medvedev} is always true, so that we can ignore it in the rest of the proof. As in the previous proofs, we have to show three things: that $F_{w}$ $(S_n,T_n,0)$-simulates $\bigsqcup_{\begin{subarray}{c}a\in \haine2\end{subarray}}{F_{wa}}$ with decoding function $\Phi_{w}$ (simulation), that $\Phi_{w}$ is injective (exactness) and that $\Omega_{F_{w}} \subseteq \mathcal{D}(\Phi_w)$ (completeness). For the simulation, it is easy to see that if $b \in \A_{wa}^{\mathbb Z}$, where $a \in \haine2$, and $c \in \Phi_w^{-1}(b)$, then $c$ is not rejected by the checks of lines~\ref{al:syncomp:mhistconsistency}, \ref{al:syncomp:empty}, \ref{al:syncomp:alph}, \ref{al:syncomp:prog}, \ref{al:syncomp:revprog} and \ref{al:syncomp:infinitesequence}. Then, simulation follows easily from the choice of the program $p$ and Lemma~\ref{universal}. Exactness is also direct. The values of all the fields of $c$ are uniquely determined by $b$ and the form of $\Phi_w$.
Completeness also follows the general pattern of the previous proofs, but there is a small difference: we can show that if $c \in \gra{0}{0}{S_n}{T_n} \cap F_{w}^{-2T_n}(\A_{w}^{\mathbb Z})$ (the difference is that we have $2T_n$ instead of $T_n$ in the exponent), then $c \in \Phi_{w}^{-1}(\bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa}^{\mathbb Z})$. This is enough to ensure completeness of the simulation. Indeed, if \begin{equation*} c \in \gra{0}{0}{S_n}{T_n} \cap F_{w}^{-T_n}(\A_{w}^{\mathbb Z}), \end{equation*} then lines~\ref{al:syncomp:empty}, \ref{al:syncomp:prog}, \ref{al:syncomp:revprog} and \ref{al:syncomp:infinitesequence} ensure that \begin{equation*} c \in \A_{w}^{\mathbb Z} \cap \gra{0}{0}{S_n}{T_n}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \Phi^{-1}((\bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa})^{\mathbb Z}). \end{equation*} Let $b \in (\bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa})^{\mathbb Z}$ be such that $c \in \Phi^{-1}(b)$. The problem is that we still cannot know that $\pi_{\texttt{MHist}}(b)$ is the same in all cells, because line~\ref{al:syncomp:infinitesequence} only checks that at every cell $i$, $\pi_{\texttt{MHist}}(c)=w$ (which we know is constant) is a prefix of $\pi_{\texttt{MHist}}(b_i)$. However, we could still have that $\pi_{\texttt{MHist}}(b_i)=w0$ and $\pi_{\texttt{MHist}}(b_j)=w1$, for some $i\neq j$. This is why we need to take $2T_n$ steps instead of $T_n$ steps. Indeed, since $F_w^{2T_n}(c)$ exists, $F^2(b)$ also exists, and line~\ref{al:syncomp:mhistconsistency} ensures that $\pi_{\texttt{MHist}}(b_i)=\pi_{\texttt{MHist}_{+1}}(b_i)=\pi_{\texttt{MHist}}(b_j)$, for all $i,j \in \mathbb Z$. The argument for this is similar to the argument used in the proof of Lemma~\ref{koo}. Therefore, $b \in \bigsqcup_{\begin{subarray}{c} a \in \haine2 \end{subarray}}\A_{wa}^{\mathbb Z}$ and this concludes the proof of the lemma. \end{proof} \subsection{Satisfying the inequalities} \begin{remark}\label{rem:inequcomputa} We can find $\vec{k} \in (\mathbb N^{14})^{\mathbb N}$ and $\seq{S},\seq{T},\seq{U} \in {\mathbb N}_1^{\mathbb N}$ such that the inequalities of Lemma~\ref{l:mhist} are satisfied and $\prod_{i\in\mathbb N} S_i/T_i =0$. \end{remark} \begin{proof} The proof is almost identical to the proof of Remark~\ref{rem:inequunivppa} and is omitted. We just make a few comments: first of all, the inequalities depend on $w \in \haine2^{*}$, but in fact, if $\length{w}=\length{w'}$, then we have exactly the same inequalities for $w$ and $w'$, so that the inequalities can be reduced to a set of inequalities that depend only on $n$. Second, notice that the check of line~\ref{al:syncomp:medvedev} is computable in polynomial time, and since the program $p'$ is fixed in advance, its contribution to $\norm{\A_w}$, $t_p$ and $\length{p}$ is constant and does not depend on the choice of parameters. Finally, we can choose $k_{w,\texttt{MHist}} \defeq n$ (where $n \defeq \length{w}$), which means that this field only contributes a polynomial of $n$ to the various inequalities, so that it can be \xpr{incorporated} into the polynomials and the problem can be reduced to the cases that have already been dealt with. \end{proof} \subsection{Realizing computational degrees} The statement of Lemma~\ref{l:mhist} falls exactly into the situation described in Lemma~\ref{l:nonvides}. For all $n \in \mathbb N$, let $\B_n=\haine2$.
Then, for all $w \in \haine2^n (= \prod_{i < n} \B_i)$, we have defined $S_w,T_w, F_w$ and $\Phi_w$ such that $F_w$ exactly, completely $(S_w,T_w,0)$-simulates $\bigsqcup_{b\in\B_n}F_{wb}$. The check of line~\ref{al:syncomp:medvedev} forces that if $z \in \prod_{i \in \mathbb N} \B_i$, then $\rocks[\infty]z{\seq\Phi}\ne\emptyset$ if and only if $\halt{p'}{\infty}{z}$ is true, or in other words, $z \in \X_{p'}$. Indeed, we have that $\mathcal{D}(F_{z_{\co{0}{n}}})=\emptyset$ for some $n$ if and only if $\halt{p'}{\infty}{z}$ is not true, or in other words, if and only if $p'$ halts on input $z$ within $n$ steps, for some $n$. Therefore, Lemma~\ref{l:nonvides} implies that $\Omega_{F_{\motvide}}= \rocks[\infty]Z{\seq\Phi} = \bigsqcup_{z\in \X_{p'}}\rocks[\infty]z{\seq\Phi}$. \begin{lemma}\label{l:comphomeo} For any effectively closed subset $Z\subset\haine2^\mathbb N$, there exists an extremely expansive RPCA $F$ and a computable, left-invertible map from $\Omega_{F}$ onto the Cartesian product $Z\times\haine2^\mathbb N$. \end{lemma} One could even prove that the computable map is two-to-one, and almost one-to-one for any reasonable (topological or measure-theoretical) notion. Also, this SFT can be effectively constructed from $Z$. \begin{proof} We construct $\syncomp[14,$ $\vec{\nu}_{\syncomp}$, $\vec{k}$, $\seq S$, $\seq T$, $\seq U$, $p']$ for some sequences that satisfy the inequalities and a program $p'$ that recognizes $Z$. In addition, assume that $\prod_{i\in\mathbb N} S_i/T_i =0$. It follows from Lemma \ref{l:nonvides} that \begin{equation*} \Omega_{F_{\motvide}}=\bigsqcup_{z \in \X_{p'}}\rocks[\infty]z{\seq\Phi}= \bigsqcup_{\begin{subarray}c z\in \X_{p'}\\\seq t\in\prod_{i\in\mathbb N}\co0{T_i}\\\seq s\in\prod_{i\in\mathbb N}\co0{S_i}\end{subarray}}\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{s_{\co0n}}}F_{\motvide}^{\anib[\seq T]{t_{\co0n}}}\Phi_{z_0}^{-1}\cdots\Phi_{z_{\co{0}{n}}}^{-1}(\Omega_{F_{z_{\co{0}{n}}}}). \end{equation*} Consider the map that associates, to each configuration $x \in \Omega_{F_{\motvide}}$, the unique triple $(z,\seq s,\seq t)$ such that $x\in\bigcap_{n\in\mathbb N}\sigma^{\anib[\seq S]{s_{\co0n}}}F_{\motvide}^{\anib[\seq T]{t_{\co0n}}}\Phi_{z_0}^{-1}\cdots\Phi_{z_{\co{0}{n}}}^{-1}(\Omega_{F_{z_{\co{0}{n}}}})$. This map is computable, since, for all $n\in\mathbb N$, $S_n$ is for instance given by $\bina{\pi_\texttt{Clock}\Phi_0\Phi_1\cdots\Phi_{n-1}}$ and $z_{\co{0}{n}}$ is given by $\pi_{\texttt{MHist}}\Phi_0\Phi_1\cdots\Phi_{n-1}$. Conversely, from the triple $(z,\seq{s},\seq{t})$, one can construct a configuration in $\Omega_{F_{\motvide}}$, as explained in \cite[Proposition 4.2]{gacs}. The result follows from the obvious computable homeomorphisms between $\Omega_F$ and $\orb F$, and between $\prod_{i\in\mathbb N}\co0{S_i}\times\co0{T_i}$ and $\haine2^{\mathbb N^2}$. Finally, $\NE(F_{\motvide}) = \{0\}$, because $\orb{F_{\motvide}}=\bigsqcup_{z \in \X_{p'}} \orb{F_{\rocks[\infty]z{\seq\Phi}}}$ and $\NE(\orb{F_{\rocks[\infty]z{\seq\Phi}}}) = \{0\}$, due to the exact complete simulation and $\prod_{i\in\mathbb N} S_i/T_i =0$. \end{proof} The following was proven in \cite{simpson} for general 2D SFT. Here, we can also restrict the set of non-expansive directions. \begin{theorem}\label{t:comphomeo} For any effectively closed subset $X$, there exists a Medvedev-equivalent extremely expansive 2D SFT whose Turing degrees are the cones above the Turing degrees of $X$. \end{theorem} A fortiori, all Medvedev (and Mu\v cnik) degrees contain an extremely expansive SFT.
\begin{proof} It is enough to notice that $\Omega_{F}$ and $\orb{F}$ are computably homeomorphic. \end{proof} The second component in the computable homeomorphism cannot easily be taken out: it is pointed out in \cite{vanierdegrees} that all aperiodic subshifts admit a cone of Turing degrees (that is, one degree and all degrees above it). Let us make some final comments: In this chapter, we are inspired by and draw mainly on the work of Durand, Romashchenko and Shen \cite{drs}. Reading that paper, one has the feeling that its construction can be done in a reversible way, except for the exchange of information. Working out the details needed to make that intuition work is (as this chapter proves) messy and sometimes even tedious, but we manage to obtain results for which there is no known alternative proof. We also feel that our construction can shed some light on the construction of Durand, Romashchenko and Shen. More specifically, we always write explicitly the inequalities that need to be satisfied, and for each one we explain at least once why it is needed. Also, we construct the rules \xpr{once and for all} and then prove that they have the desired behaviour, instead of following their more informal approach, where a rule is created and then modified, resulting in a new rule for which it is taken for granted that the previous argumentation still holds. \chapter{Expansive directions}\label{sec:expdir} In Lemma~\ref{lem:nonexpsftrestr}, we described a necessary condition for the set of non-expansive directions of an SFT: if $X$ is an SFT, then $\NE(X)$ is effectively closed. In this chapter, we are going to show that this is in fact a characterization of the sets of non-expansive directions of SFTs. \begin{theorem}\label{thm:nonexpansive} If $\NE_0 \subseteq \Rb$ is effectively closed, then there exists an SFT $X \subseteq \A^{\mathbb Z^2}$ such that $\NE(X)=\NE_0$. \end{theorem} This is mentioned as Open Problem~11.1 in Mike Boyle's Open Problems for Symbolic Dynamics \cite{opsd}. We only answer the first part of that problem, since our constructions do not have any SFT direction. The second part of the problem, concerning 2D SFTs with an SFT direction, is much more difficult to answer, since it is inextricably related to the expansiveness of RCA. It is enough to prove this for sets of non-expansive directions that are included in $[-1,1]$, or even $[0,r]$, for some $0 < r <1$, because we can cover the set of directions with a finite number of rotations of $[-1,1]$ (and $[0,r]$). Therefore, even though our use of PPA might seem problematic (since $\NE(F) \subseteq [-1,1]$ in this case), it is not. The key idea consists in constructing subshifts with a unique direction of non-expansiveness through a nested simulation of RPCA, so that we can use Proposition~\ref{prop:hochman}. This idea was introduced in \cite{nexpdir}, in a non-effective way; we will try to emphasize the obstacle that has to be overcome when trying to \xpr{SFTize} this construction. \section{Directive encoding}\label{subsection:direncoding} Proposition~\ref{prop:hochman} states that, if we manage to implement a certain kind of nested simulation, then we will obtain a subshift with a unique direction of non-expansiveness, equal to $\anib[\seq S/\seq T]{\seq D}$, where $\seq S,\seq T\in{\mathbb N}_1^\mathbb N$ and $\seq D\in\mathbb Z^\mathbb N$.
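For a concrete instance of this expansion (with the notation recalled in the next paragraph): if the nested simulations satisfy $S_n/T_n=1/3$ and $D_n=1$ for all $n\in\mathbb N$, then the unique non-expansive direction is
\begin{equation*}
\anib[\seq S/\seq T]{\seq D}=\sum_{n\in\mathbb N}D_n\prod_{0\le j\le n}\frac{S_j}{T_j}=\sum_{n\in\mathbb N}3^{-(n+1)}=\frac12.
\end{equation*}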
\cite[Lemma~5.6]{nexpdir} shows that all directions can be written in this form (when the sequences $\seq S, \seq T, \seq D$ are allowed to be chosen without any constraints). But the sequences of nested simulations that are possible with our SFT construction are more constrained: for example, the sequences $\seq S$ and $\seq T$ must be polynomially checkable. This immediately imposes some restrictions, since, for example, it implies that $S_i$ cannot grow like an exponential tower of height $i$. Such growth is not excluded in the construction of Hochman, since he takes $S$ \xpr{sufficiently large} in order to make some \xpr{error term} sufficiently small. A large part of our construction is devoted to showing that we can satisfy these restrictions at the same time, or, in other words, that the error terms can be made sufficiently small even if $\seq S$ grows relatively slowly. At the same time, we have to take care of some technical details. Let us begin the construction by giving some additional necessary definitions: To any vector $\vec\varepsilon\in\mathbb R_+^n$ and any \dfn{directive word} $\vec d\defeq(D_i,W_i)_{0\le i<n}\in(\mathbb N^2\setminus\{(0,0)\})^n$, where $n\in\mathbb N$, we associate the direction interval $\Theta_{\vec\varepsilon}(\vec d)\defeq(\prod_{0\le i<n}R_i)[-1,1]+\anib[\vec R]{\vec D}\subset\Rb$, where $R_i\defeq1/(D_i+W_i+1+\varepsilon_i)\le1/2$; recall that $\anib[\vec R]{\vec D}\defeq\sum_{0\le i<n}D_i\prod_{0\le j\le i}R_j$. It follows immediately from the definition that $\Theta_{\vec\varepsilon}(\vec d)=R_0(\Theta_{\varepsilon\restr{\co1n}}(\vec d\restr{\co1n})+D_0)$ for any $\vec d=(D_i,W_i)_{0\le i < n}$. We extend these definitions to infinite sequences in the natural way: To any sequence $\seq\varepsilon\in\mathbb R_+^\mathbb N$ and any \dfn{directive sequence} $\seq d\defeq(D_n,W_n)_{n\in\mathbb N}\in(\mathbb N^2\setminus\{(0,0)\})^\mathbb N$ we associate the direction $\theta_{\seq\varepsilon}(\seq d)\defeq\anib[\seq R]{\seq D}\in\mathbb R$, the unique element of $\bigcap_{n\in\mathbb N}\Theta_{\varepsilon\restr{\co0n}}((D_i,W_i)_{0\le i<n})$ (uniqueness follows from the fact that $R_i \le 1/2$, for all $i \in \mathbb N$). If $F_0\simu[S_0,T_0,D_0S_0]F_1\simu[S_1,T_1,D_1S_1]\ldots$ and, for all $n\in\mathbb N$, $T_n=(D_n+W_n+1+\varepsilon_n)S_n$, then observe that Proposition~\ref{prop:hochman} can be seen as saying that $\NE(F_0)=\{\theta_{\seq\varepsilon}(\seq d)\}$. This is the point of contact between this chapter and the rest of the thesis. \cite[Lemma 5.6]{nexpdir} can now be reformulated as follows. \begin{lemma}\label{lem:hochgeomstuff} For all $x \in [0,1]$, there exist a sequence $\seq\varepsilon\in\mathbb R_+^\mathbb N$ and a directive sequence $\seq d$ such that $\theta_{\seq\varepsilon}(\seq d) = x$. \end{lemma} We refine this statement in two ways: first, we will show that the sequence $\seq\varepsilon$ can be fixed in advance and, second, we will restrict the alphabet of acceptable directive sequences to $\mathbf{\mathcal{B}}\defeq\{(0,1),(1,1),(1,0)\}$. By doing that, we \xpr{lose} a small part at the right end of the interval $[0,1]$. \begin{lemma}\label{lem:geomstuff} Let $\seq\varepsilon\in[0,\sqrt2-1]^\mathbb N$ be a sequence. Then, \begin{equation*} \theta_{\seq\varepsilon}(\mathbf{\mathcal{B}}^\mathbb N) =[0,\anib[(1/(2+\varepsilon_i))_{i\in\mathbb N}]{111\ldots}].
\end{equation*} \end{lemma} Though the interval $[0,1[$ cannot be covered fully with a fixed, non-trivial sequence, the convergence of the sequence $(\varepsilon_n)_{n \in \mathbb N}$ to $0$ can be sped up suitably, in order to realize any number arbitrarily close to $1$. \begin{proof} Let us prove by induction over $n\in\mathbb N$ that for any such sequence $\seq\varepsilon$, we have that \begin{equation*} \theta_{\varepsilon\restr{\co0n}}(\mathbf{\mathcal{B}}^n)=[-\prod_{i<n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{i<n}]{1\ldots12}]\supset[0,\frac1{\sqrt2}]. \end{equation*} The base of the induction follows from $\varepsilon_0 \le \sqrt{2}-1$. Let us assume that the inductive hypothesis is true for some $n\in\mathbb N$, and prove it for $n+1$. Note that $ \anib[(1/(2+\varepsilon_i))_{i < n+1}]{1\ldots12} = \frac1{2+\varepsilon_0}\left(1+\anib[(1/(2+\varepsilon_i))_{1 \le i<n+1}]{1\ldots12}\right) $. By the induction hypothesis (which we can apply to the truncated sequence $(\varepsilon_n)_{n \geq 1}$, since it satisfies the assumption, too) and $\varepsilon_0\le\sqrt2-1$, we have $\anib[(1/(2+\varepsilon_i))_{i < n+1}]{1\ldots12}\ge\frac1{1+\sqrt2}\left(1+\frac1{\sqrt2}\right)=\frac1{\sqrt2}$. The set $\theta_{\varepsilon\restr{\co{0}{n+1}}}(\mathbf{\mathcal{B}}^{n+1})$ can be decomposed, in terms of the first directive letter, into a union of three intervals: \begin{eqnarray*} \frac1{2+\varepsilon_0}\theta_{\varepsilon\restr{\cc1n}}(\mathbf{\mathcal{B}}^n)\cup\frac1{3+\varepsilon_0}(\theta_{\varepsilon\restr{\cc1n}}(\mathbf{\mathcal{B}}^n)+1)\cup\frac1{2+\varepsilon_0}(\theta_{\varepsilon\restr{\cc1n}}(\mathbf{\mathcal{B}}^n)+1) ~. \end{eqnarray*} Following the induction hypothesis, the first interval is equal to \begin{equation*} \frac1{2+\varepsilon_0}[-\prod_{1\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{1\le i \le n}]{1\ldots12}] =[-\prod_{0\le i\le n}\frac1{2+\varepsilon_i},\frac{1}{2+\varepsilon_0}\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12}]. \end{equation*} The second interval is equal to \begin{equation*} \frac1{3+\varepsilon_0}(1+[-\prod_{1\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12}])=[\frac1{3+\varepsilon_0}(1-\prod_{1\le i\le n}\frac1{2+\varepsilon_i}),\frac1{3+\varepsilon_0}(1+\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12})]. \end{equation*} The third interval is equal to \begin{equation*} \frac1{2+\varepsilon_0}(1+[-\prod_{1\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12}])=[\frac1{2+\varepsilon_0}-\prod_{0\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{0\le i\le n}]{1\ldots12}]. \end{equation*} It is clear that the smallest point of these three intervals is $-\prod_{0\le i\le n}\frac1{2+\varepsilon_i}$ (the last two intervals are in $\mathbb R_+$), and the largest is $\anib[(1/(2+\varepsilon_i))_{0\le i\le n}]{1\ldots12}$ (the first interval is the translate of the third one by $-\frac1{2+\varepsilon_0}$, and the second one is the image of the third one under a homothecy of ratio $\frac{2+\varepsilon_0}{3+\varepsilon_0}<1$). We proved earlier that $\anib[(1/(2+\varepsilon_i))_{i < n+1}]{1\ldots12}\ge\frac1{1+\sqrt2}\left(1+\frac1{\sqrt2}\right)=\frac1{\sqrt2}$.
It follows from this that the upper bound $\frac1{2+\varepsilon_0}\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12}$ of the first interval is larger than $\frac{1}{(2+\varepsilon_0)\sqrt2}$, while the lower bound of the second interval is at most $\frac{1}{3+\varepsilon_0}$, which is at most $\frac{1}{(2+\varepsilon_0)\sqrt2}$, as can be easily verified. In other words, there is no hole between these two intervals. Using the same arguments, one can easily see that the upper bound of the second interval is $\frac1{3+\varepsilon_0}(1+\anib[(1/(2+\varepsilon_i))_{1\le i\le n}]{1\ldots12})$, which is larger than $\frac{1+1/\sqrt2}{3+\varepsilon_0}$, while the lower bound of the third interval is smaller than $\frac{1}{2+\varepsilon_0}$, which is smaller than $\frac{1+1/\sqrt2}{3+\varepsilon_0}$, since $\sqrt2-1+\frac{\varepsilon_0}{\sqrt2}>0$. There is no hole here either, and globally, we get the full interval $\left[-\prod_{0\le i\le n}\frac1{2+\varepsilon_i},\anib[(1/(2+\varepsilon_i))_{0\le i\le n}]{1\ldots12}\right]$. The proof of the statement is finished by observing that $\prod_{0\le i<n}\frac1{2+\varepsilon_i}\to0$ and $\anib[(1/(2+\varepsilon_i))_{i<n}]{1\ldots12}\to\anib[(1/(2+\varepsilon_i))_{i\in\mathbb N}]{111\ldots}$. \end{proof} \section{Computing directions} Lemma~\ref{lem:nonexpsftrestr} stated that the set of non-expansive directions of an SFT is \emph{effectively} closed, which means that there exists a program which takes as (infinite) input the description of a direction in $\Rb$ and halts (after having read finitely many bits of the input) if and only if the direction is expansive. In Subsection \ref{ss:comput}, it was suggested that a good way to represent directions in order to compute with them is by the two coordinates of some intersection with the unit circle. Each slope then has two (opposite) valid representations. When restricting to closed subsets of $\mathbb R \subseteq \Rb$ (\textit{i.e.}, when we are not talking about the horizontal direction), the notion of an effectively closed set of directions is the same with the above representation as with the usual representation of $\mathbb R$. This is due to the facts that the functions $\sin$ and $\cos$ and their inverses are computable and that the function $x \mapsto 1/x$ is uniformly continuous away from $0$. The following remark states that directive sequences give another, equivalent representation of directions. \begin{remark}\label{rem:compslope} Let $\seq\varepsilon \in \mathbb R^{\mathbb N}$ be computable. Then, $\theta_{\seq\varepsilon}$ is a computable function. \end{remark} The computation is actually uniform in $\seq\varepsilon$, in the sense that it could be considered as part of the input. \begin{proof} This follows from the fact that the diameter of $\Theta_{\varepsilon\restr{\co0n}}(\vec d)$ is at most $2^{1-n}$, since $R_i \le 1/2$, for all $i \in \mathbb N$ and every directive word $\vec d$. \end{proof} Remark~\ref{rem:compslope} implies that effectively closed sets of slopes can be equivalently described by effectively closed sets of directive sequences (a concrete sketch is given below). This is the computational description of directive sequences that we are going to use in what follows. \section{Realization of sets of non-expansive directions} Let $\C[\reali]=\C[\unive]\sqcup[\texttt{MHist},\texttt{MShift},\texttt{MShift}_{+1},\texttt{Prog},\revprog]$.
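To make Remark~\ref{rem:compslope} concrete before we specify the permutation: the following minimal Python sketch (an illustration of ours, with hypothetical names; it is not part of the construction) approximates $\theta_{\seq\varepsilon}(\seq d)$ from a finite directive prefix, with an error bounded by the radius $\prod_{i<n}R_i\le 2^{-n}$ of the corresponding direction interval.
\begin{verbatim}
def theta_prefix(d, eps):
    """Approximate theta_eps(d) from a finite directive prefix.

    d   -- list of directive letters (D_i, W_i)
    eps -- list of the corresponding eps_i
    Returns the midpoint of the direction interval Theta and its
    radius prod_i R_i, an upper bound on the approximation error.
    """
    theta, scale = 0.0, 1.0
    for (D, W), e in zip(d, eps):
        R = 1.0 / (D + W + 1 + e)   # contraction ratio R_i <= 1/2
        scale *= R                  # prod_{j <= i} R_j
        theta += D * scale          # partial sum of the anib expansion
    return theta, scale

# Constant directive sequence (1,0),(1,0),... with eps_i = 0:
# R_i = 1/2, so theta = sum_i 2^-(i+1), which tends to 1.
print(theta_prefix([(1, 0)] * 20, [0.0] * 20))  # ~(0.999999, 9.5e-07)
\end{verbatim}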
The following permutation will also be parametrized by an effectively closed set $\NE_0 \subseteq [0,1/2]$, which is represented by the program $p'$ of the TM that recognizes $\NE_0$ as a set of directive sequences. $\NE_0$ is the set of non-expansive directions that we are trying to realize. We identify the set $\mathbf{\mathcal{B}} \defeq \{(0,1),(1,1),(1,0)\}$ with $\haine3$ (through any bijection). If $a \in \mathbf{\mathcal{B}}$, then $D_a, W_a$ denote the projections of $a$ onto the first and second coordinates, respectively. In the following algorithm, $p'$ is the program of a TM that recognizes some set of non-expansive directions. \begin{algo}{reali}{\reali}{M,\vec{\nu},p',\seq{\vec{k}},\seq S,\seq U} \STATE{$\chekk[\pi_\texttt{MShift}=\pi_{\texttt{MShift}_{+1}}]$}\label{al:reali:mhistconsistency} \IF{$\bina{\pi_\texttt{Clock}}=0$} \STATE{$\chekk[\halt{p'}{\length{\texttt{MHist}}}{\texttt{MHist}}]$}\label{al:reali:medvedev} \STATE{$\chekk[\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}}]$}\label{al:reali:empty} \STATE{$\chekka[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1}]$} \label{al:reali:alph} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\texttt{Prog},\pi_{\texttt{Prog}}]$}\label{al:reali:prog} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\revprog,\pi_\revprog]$}\label{al:reali:revprog} \STATE{$\hier[M,\bina{\pi_\texttt{Addr}},\pi_\texttt{Tape},\vec{k}_{\length{\texttt{MHist}}+1},\texttt{MHist},\pi_{\texttt{MHist}}\pi_{\texttt{MShift}}]$}\label{al:reali:infinitesequence} \ENDIF \IF{$\bina{\pi_\texttt{Clock}}=0$} \STATE{$\exch[\texttt{Tape},\texttt{Tape}_{+1}]$}\label{al:reali:startshift} \ENDIF \IF{$\bina{\pi_\texttt{Clock}}=D_{\texttt{MShift}}S_{\length{\texttt{MHist}}}$} \STATE{$\exch[\texttt{Tape},\texttt{Tape}_{+1}]$}\label{al:reali:stopshift} \ENDIF \STATE{$\unive[M$, $\vec{\nu}$, $\vec{k}_{\length{\texttt{MHist}}}$, $\bina{\pi_\texttt{Addr}}$, $\bina{\pi_\texttt{Clock}}-D_{\texttt{MShift}}S_{\length{\texttt{MHist}}}$, $S_{\length{\texttt{MHist}}}$, $S_{\length{\texttt{MHist}}}(D_{\texttt{MShift}}+W_{\texttt{MShift}}+1)+4U_{\length{\texttt{MHist}}}$, $U_{\length{\texttt{MHist}}}$, $\pi_\texttt{Prog}$, $\pi_\revprog]$}\label{al:reali:unive} \STATE{$\coordi[\seq{S}_{\length{\texttt{MHist}}},\seq{S}_{\length{\texttt{MHist}}}(D_{\texttt{MShift}}+W_{\texttt{MShift}}+1)+4\seq{U}_{\length{\texttt{MHist}}}]$}\label{al:reali:coordi} \end{algo} This is like the simulation used for realizing computational degrees, except that we keep a more complicated register in the $\texttt{MHist}$ field and we use the values of $\texttt{MShift}$ to perform a ``macro-shift'' before the simulation. We also note that $\seq{T}$ is not given as a parameter of the construction. Instead, it is determined by the sequences $\seq{S}, \seq{U}$ and the value of the field $\texttt{MShift}$. \begin{lemma}\label{lem:nexpdirsimul} Let $\seq S,\seq U$ be polynomially checkable sequences of integers and $p'$ be the program of a TM. Let us fix the field list $\C[\reali]\defeq [0,\ldots,14]$, the corresponding direction vector $\vec{\nu}_{\reali}$ and a polynomially checkable sequence of $15$-tuples $\seq{\vec{k}}\in(\mathbb N^{15})^\mathbb N$.
Let $F$ be the IPPA with directions $\vec{\nu}_{\reali}$ and permutation $\reali[14,\vec{\nu}_{\reali},p',\vec{k},\seq S,\seq U]$ and $p,p^{-1}$ be the programs for this permutation and its inverse, respectively. For all $w \in \mathbf{\mathcal{B}}^*$ and $a \in \mathbf{\mathcal{B}}$, let $\vec{k}_{w,a} \defeq \vec{k}_{\length{w}}$, $S_{w,a}\defeq S_{\length{w}}$, $U_{w,a} \defeq U_{\length{w}}$, $T_{w,a} \defeq S_{w,a}(D_a+W_a+1)+4U_{w,a}$ and $F_{w,a}$ be the restriction of $F$ to the subalphabet \begin{equation*} \A_{w,a}\defeq \haine5^{\vec{k}_{\length{w}}}\cap\emp[w]{\texttt{MHist}}\cap\emp[a]{\texttt{MShift},\texttt{MShift}_{+1}}\cap\emp[p]{\texttt{Prog}}\cap\emp[p^{-1}]{\revprog}. \end{equation*} Assume that the following inequalities hold, for all $w\in\mathbf{\mathcal{B}}^*$ and $a,a' \in \mathbf{\mathcal{B}}$: \[\both{ U_{w,a}\ge\max\{t_p({\haine5^{\seq{\vec{k}}_{wa,a'}}}),t_{p^{-1}}({\haine5^{\seq{\vec{k}}_{wa,a'}}})\}\\ S_{w,a}\ge\max\{2U_{w,a},\length{\Chi{\haine5^{\vec{k}_{wa,a'}}}}\}\\ k_{w,a,\texttt{Addr}},k_{w,a,\texttt{Addr}_{+1}}\ge\norm{S_{w,a}}\\ k_{w,a,\texttt{Clock}},k_{w,a,\texttt{Clock}_{+1}}\ge\norm{T_{w,a}}\\ k_{w,a,\texttt{Head}_{-1}},k_{w,a,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ k_{w,a,\texttt{Tape}},k_{w,a,\texttt{NTape}},k_{w,a,\texttt{Tape}_{-1}},k_{w,a,\texttt{Tape}_{+1}}\ge1\\ k_{w,a,\texttt{Prog}}\geq\length p\\ k_{w,a,\revprog}\geq\length{p^{-1}}\\ k_{w,a,\texttt{MHist}} \geq\length{w}\\ k_{w,a,\texttt{MShift}}=k_{w,a,\texttt{MShift}_{+1}}\geq 1 ~.}\] Then, $F_{w,a}$ completely and exactly simulates $\bigsqcup_{a'\in \mathbf{\mathcal{B}}}{F_{wa,a'}}$ with parameters $(S_{w,a},T_{w,a},D_aS_{w,a},\Phi_{w,a})$, where $\Phi_{w,a}=\tilde{\Phi}_{\texttt{Tape}}{\restr{\Sigma_{w,a}}}$ and $\Sigma_{w,a} \defeq \A_{w,a}^{\mathbb Z} \cap \gra{0}{0}{S_{w,a}}{T_{w,a}}\cap\emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \Phi^{-1}(\bigsqcup_{a' \in \mathbf{\mathcal{B}}}\A_{wa,a'}^{\mathbb Z}).$ \end{lemma} The only difference between the following proof and the proof of Lemma~\ref{l:mhist} is that we shift all the encodings $D_aS_{w,a}$ cells (equivalently, $D_a$ macro-cells) to the right before starting the simulation. Also, it is not difficult to see that the usual inequalities hold: $t_p(\A_{n}) = t_{p^{-1}}(\A_{n})=P(\length{\A_n}+t_{\vec{k}}(n)+t_{\seq{S}}(n)+t_{\seq{T}}(n)+t_{\seq{U}}(n))$, for some polynomial $P$, and $\length{p},\length{p^{-1}} = O(\length{p_{\vec{k}}}+\length{p_{\seq{S}}}+\length{p_{\seq{T}}}+ \length{p_{\seq{U}}})$. Recall that $p'$ is fixed in advance, so it contributes only a constant as far as complexity is concerned. \begin{proof} If $p'$ halts on input $w$ within $\length{w}$ steps, then the check of line~\ref{al:reali:medvedev} will reject every configuration, which means that $F_{w,a} = \emptyset$. But, in this case, $wa$ will also be rejected by $p'$ within $\length{wa}$ steps, for all $a \in \mathbf{\mathcal{B}}$, so that $\bigsqcup_{a'\in \mathbf{\mathcal{B}}}{F_{wa,a'}} = \emptyset$, too. By definition, the empty PCA strongly, completely simulates itself for all possible choices of the simulating parameters, so that the claim is true in this case. Suppose, then, that $p'$ does not halt on input $w$ within $\length{w}$ steps. Then, the check of line~\ref{al:reali:medvedev} is always true, so that we can ignore it in the rest of the proof.
As in the previous proofs, we have to show three things: that $F_{w,a}$ $(S_{w,a},T_{w,a},D_aS_{w,a})$-simulates $\bigsqcup_{a'\in \mathbf{\mathcal{B}}}{F_{wa,a'}}$ with decoding function $\Phi_{w,a}$ (simulation), that $\Phi_{w,a}$ is injective (exactness) and that $\Omega_{F_{w,a}} \subseteq \mathcal D(\Phi_{w,a})$ (completeness). For the simulation, it is easy to see that if $b \in \A_{wa,a'}^{\mathbb Z}$, where $a' \in \mathbf{\mathcal{B}}$, and $c \in \Phi_{w,a}^{-1}(b)$, then $c$ is not rejected by the checks of lines~\ref{al:reali:mhistconsistency}, \ref{al:reali:empty}, \ref{al:reali:alph}, \ref{al:reali:prog}, \ref{al:reali:revprog} and \ref{al:reali:infinitesequence}. Then, line~\ref{al:reali:startshift} copies \emph{all} the info bits onto $\texttt{Tape}_{+1}$. During the next $S_{w,a}D_a$ steps, no permutation is applied. The only thing happening to the configuration is that the encodings that are in $\texttt{Tape}_{+1}$ travel to the right at the speed of one cell per time step. After $S_{w,a}D_a$ steps, they are copied back to the $\texttt{Tape}$ field by line~\ref{al:reali:stopshift}. Every letter has travelled exactly $S_{w,a}D_a$ cells to the right, which corresponds to $D_a$ macro-cells. Formally, $\Phi(F^{D_aS_{w,a}}(c))=\sigma^{-D_a}(b)$. Then, from Fact~\ref{fact:shiftandpermutation} and since the only rule applied from $\texttt{Clock}=D_aS_{w,a}$ on is $\coordi \circ \unive$, we obtain that $\Phi(F^{D_aS_{w,a}+S_{w,a}+4U_{w,a}}(c))=F_{wa,a'}(\sigma^{-D_a}(b))$. After $\texttt{Clock}=D_aS_{w,a}+S_{w,a}+4U_{w,a}$, nothing else changes in the configuration until $\texttt{Clock}$ becomes $0$ again. Line~\ref{al:reali:coordi} ensures that $\texttt{Clock}$ goes from $0$ to $(D_a+W_a+1)S_{w,a}+4U_{w,a}=T_{w,a}$. This concludes the proof of the simulation part. Exactness of the simulation is easy to see: the values of all the fields of $c \in \Phi_{w,a}^{-1}(b)$ are uniquely determined by $b$ and $\Sigma_{w,a}$. For the completeness, we show that if $c \in \gra{0}{0}{S_{w,a}}{T_{w,a}} \cap F_{w,a}^{-2T_{w,a}}(\A_{w,a}^{\mathbb Z})$, then $c \in \Phi_{w,a}^{-1}(\bigsqcup_{a' \in \mathbf{\mathcal{B}}}\A_{wa,a'}^{\mathbb Z})$. Indeed, if $c \in \gra{0}{0}{S_{w,a}}{T_{w,a}} \cap F_{w,a}^{-T_{w,a}}(\A_{w,a}^{\mathbb Z})$, then lines~\ref{al:reali:empty}, \ref{al:reali:prog}, \ref{al:reali:revprog} and \ref{al:reali:infinitesequence} ensure that $c \in \A_{w,a}^{\mathbb Z} \cap \gra{0}{0}{S_{w,a}}{T_{w,a}}\cap \emp[\motvide]{\texttt{Head}_{-1},\texttt{Head}_{+1},\texttt{Tape}_{-1},\texttt{Tape}_{+1},\texttt{NTape}} \cap \Phi^{-1}((\bigsqcup_{a' \in \mathbf{\mathcal{B}}}\A_{wa,a'})^{\mathbb Z}).$ Let $b \in (\bigsqcup_{a' \in \mathbf{\mathcal{B}}}\A_{wa,a'})^{\mathbb Z}$ be such that $c \in \Phi^{-1}(b)$. We still cannot know that $\pi_{\texttt{MShift}}(b_i)$ is the same for all $i \in \mathbb Z$. We deal with this problem in a similar way as in Section~\ref{s:comput}: since $F_{w,a}^{2T_{w,a}}(c)$ exists, $F^2(b)$ also exists, and line~\ref{al:reali:mhistconsistency} ensures that $\pi_{\texttt{MShift}}(b_i)=\pi_{\texttt{MShift}_{+1}}(b_i)=\pi_{\texttt{MShift}}(b_j)$, for all $i,j \in \mathbb Z$. Therefore, $b \in \bigsqcup_{a' \in \mathbf{\mathcal{B}}}\A_{wa,a'}^{\mathbb Z}$ and this concludes the proof.
\end{proof} \subsection{Satisfying the inequalities} Unlike the previous cases, the set of inequalities that we want to satisfy does not depend on $n$, but instead on a word $w \in \mathbf{\mathcal{B}}^*$ and $a,a' \in \mathbf{\mathcal{B}}$. However, we will now see that the inequalities can be translated to inequalities about the polynomially checkable sequences $\seq{S},\seq{U}$. \begin{remark} We can find $\vec{k} \in (\mathbb N^{15})^{\mathbb N}$ and $\seq{S}, \seq{U} \in \mathbb N^{\mathbb N}$ such that the inequalities of Lemma~\ref{lem:nexpdirsimul} are satisfied. In addition, we can have $\varepsilon_n \defeq 4U_n / S_n < \sqrt{2}-1$, for all $n \in \mathbb N$, and $\theta_{\seq\varepsilon}(\mathbf{\mathcal{B}}^{\mathbb N}) \supseteq [0,1/2]$. \end{remark} \begin{proof} Let $w \in \mathbf{\mathcal{B}}^*$ and $a,a' \in \mathbf{\mathcal{B}}$, and let $n \defeq \length{w}$. Let us write the inequalities again: \[\both{ U_{w,a}\ge\max\{t_p({\haine5^{\seq{\vec{k}}_{wa,a'}}}),t_{p^{-1}}({\haine5^{\seq{\vec{k}}_{wa,a'}}})\}\\ S_{w,a}\ge\max\{2U_{w,a},\length{\Chi{\haine5^{\vec{k}_{wa,a'}}}}\}\\ k_{w,a,\texttt{Prog}}\geq\length p\\ k_{w,a,\revprog}\geq\length{p^{-1}}\\ k_{w,a,\texttt{Head}_{-1}},k_{w,a,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ k_{w,a,\texttt{Addr}},k_{w,a,\texttt{Addr}_{+1}}\ge\norm{S_{w,a}}\\ k_{w,a,\texttt{Clock}},k_{w,a,\texttt{Clock}_{+1}}\ge\norm{T_{w,a}}\\ k_{w,a,\texttt{Tape}},k_{w,a,\texttt{NTape}},k_{w,a,\texttt{Tape}_{-1}},k_{w,a,\texttt{Tape}_{+1}}\ge1\\ k_{w,a,\texttt{MHist}} \geq\length{w}\\ k_{w,a,\texttt{MShift}}=k_{w,a,\texttt{MShift}_{+1}}\geq 1 ~.}\] According to the definitions, $\vec{k}_{w,a} = \vec{k}_n$, $S_{w,a}=S_n$, $U_{w,a}=U_n$ and $T_{w,a}=S_n(D_a+W_a+1)+4U_n$. Therefore, we can write the above inequalities as follows: \[\both{ U_n\ge\max\{t_p({\haine5^{\seq{\vec{k}}_{n+1}}}),t_{p^{-1}}({\haine5^{\seq{\vec{k}}_{n+1}}})\}\\ S_{n}\ge\max\{2U_{n},\length{\Chi{\haine5^{\vec{k}_{n+1}}}}\}\\ k_{n,\texttt{Prog}}\geq\length p\\ k_{n,\revprog}\geq\length{p^{-1}}\\ k_{n,\texttt{Head}_{-1}},k_{n,\texttt{Head}_{+1}}\ge\length{\Chi{\haine4\times (Q_p \cup Q_{p^{-1}})\times\{-1,+1\}}}\\ k_{n,\texttt{Addr}},k_{n,\texttt{Addr}_{+1}}\ge\norm{S_{n}}\\ k_{n,\texttt{Clock}},k_{n,\texttt{Clock}_{+1}}\ge\norm{S_n(D_a+W_a+1)+4U_n}\\ k_{n,\texttt{Tape}},k_{n,\texttt{NTape}},k_{n,\texttt{Tape}_{-1}},k_{n,\texttt{Tape}_{+1}}\ge1\\ k_{n,\texttt{MHist}} \geq n\\ k_{n,\texttt{MShift}}=k_{n,\texttt{MShift}_{+1}}\geq 1 ~.}\] Unlike the previous proofs, we cannot choose $\seq{\vec{k}}$ such that all of the inequalities except the first two are satisfied as equalities. This is because the inequalities about $\texttt{Clock}$ and $\texttt{Clock}_{+1}$ depend on $a \in \mathbf{\mathcal{B}}$ and not only on $n$. However, $D_a+W_a \le 2$, for all $a\in \mathbf{\mathcal{B}}$, so that we can replace these inequalities with $k_{n,\texttt{Clock}},k_{n,\texttt{Clock}_{+1}}\ge\norm{3S_n+4U_n}$ and show that this new set of inequalities can be satisfied. (Here, it is essential that $\mathbf{\mathcal{B}}$ is a finite set. Bounding $D_a+W_a$ from above is one of the reasons why we had to do a little more work with the directive sequences.) The rest of the proof follows the usual pattern: we choose $S_n = Q^{n+n_0}$ and $U_n=(n+n_0)^r$ for some suitable values of $n_0$, $r$ and $Q$. For the second claim, let us recall more specifically in which order $n_0$, $r$ and $Q$ are chosen.
In the proof of Remark~\ref{rem:inequhiera}, we showed that there exist $n_0$ and $r$ that work for \emph{every} $Q$. Therefore, by choosing $S_n = Q^{n+n_0}$ and $U_n=(n+n_0)^r$, for some sufficiently large $Q$, we can make $\varepsilon_n \defeq 4U_n / S_n$ smaller than $\sqrt{2}-1$ and $\anib[(1/(2+\varepsilon_i))_{i\in\mathbb N}]{111\ldots}$ larger than $1/2$. \end{proof} For all $w,a$, we have $T_{w,a}= (D_a+W_a+1)S_{w,a}+4U_{w,a}=(D_a+W_a+1+4U_{w,a}/S_{w,a})S_{w,a}=(D_a+W_a+1+\varepsilon_{w,a})S_{w,a}$, where $\varepsilon_{w,a} < \sqrt{2}-1$, so that we are in the situation described in Lemma~\ref{lem:geomstuff} and $\theta_{\seq\varepsilon}(\mathbf{\mathcal{B}}^{\mathbb N}) \supseteq [0,1/2]$. Since $\NE_0 \subseteq [0,1/2]$ by assumption, this means that every direction in $\NE_0$ is representable as $\theta_{\seq\varepsilon}(\seq d)$ for some directive sequence $\seq d \in \mathbf{\mathcal{B}}^{\mathbb N}$. \subsection{Realization} For all $z \in \X_{p'}$, we have a sequence of complete, exact simulations given by \begin{equation*} F_{z_{\co{0}{n}},z_n} \simu[S_{z_{\co{0}{n}},z_n},T_{z_{\co{0}{n}},z_n},D_{z_n}S_{z_{\co{0}{n}},z_n}] F_{z_{\co{0}{n+1}},z_{n+1}}, \end{equation*} for all $n \in \mathbb N$. Therefore, according to Lemma~\ref{l:nonvides}, we have that $\Omega_F= \bigsqcup_{z \in \X_{p'}} \rocks[\infty]z{\Phi}$ and $\orb{F_{\motvide}}=\bigsqcup_{z \in \X_{p'}} \orb{F_{\rocks[\infty]z{\Phi}}}$. In addition, we know that $\NE(\orb{F_{\rocks[\infty]z{\Phi}}})=\{\theta_{\seq\varepsilon}(z)\}$, by Lemma~\ref{lem:iterrelsimulexp}, and that every direction in $\cc{0}{1/2}$ can be represented in such a way, by Lemma~\ref{lem:geomstuff}. Finally, we know by Lemma~\ref{lem:basicstuffaboutNE} that $\NE(\orb{F_{\motvide}})= \bigsqcup_{z \in \X_{p'}} \NE(\orb{F_{\rocks[\infty]z{\Phi}}}) = \bigsqcup_{z \in \X_{p'}} \{\theta_{\seq\varepsilon}(z)\} = \NE_{p'}=\NE_0$. Therefore, for every effectively closed set of directions that is included in $[0,1/2]$ and recognized by a TM with program $p'$, we have constructed a 2D SFT with exactly this set as its set of non-expansive directions. According to our previous discussion, this is enough to realize arbitrary effectively closed sets of directions as sets of non-expansive directions of 2D SFTs. This concludes the proof of Theorem~\ref{thm:nonexpansive}. \chapter*{Conclusion and Open Questions} We have provided a general method for constructing extremely expansive 2D SFTs and we have shown that this class of SFTs has very rich computational, dynamical and geometrical properties. At the same time, our method sheds some light on the essence of self-similar and hierarchical constructions, and we hope that it might help to better understand previous works based on hierarchical constructions, especially \cite{drs}. (On the other hand, the difficulty of \cite{gacs} only partly comes from the hierarchical simulation, so our work is certainly not sufficient to explain this construction better.) Regarding future work, we believe that the following questions about extremely expansive 2D SFTs are very natural: First of all, can (a variant of) our method produce a \emph{minimal} extremely expansive SFT? Is the emptiness problem undecidable for \emph{minimal} extremely expansive SFTs? Recently, Durand and Romashchenko \cite{drs2} described a method for constructing minimal (but not extremely expansive) SFTs and answered the second question positively. It seems that their technique can be readily generalized to our framework.
Second, is it possible to realize all effective subshifts, in the sense of \cite{projsft, drs, aubrun}, with 2D extremely expansive SFTs? This would be an improvement of the result of \cite{drs, aubrun}, since it would (in some sense) further lower the dimension of the realizing subshift by one. Third, is it possible to construct extremely expansive SFT covers for square substitutions? This question goes back to the construction of Mozes \cite{mozes}, which yields SFT covers for square substitutions without any directions of expansiveness. Recently, Ollinger and Legloannec \cite{legloannec} constructed 4-way deterministic covers. We believe that the answer to this question is also positive. Finally, and this is certainly the most interesting but also the most difficult question: is it possible to use our method in order to construct reversible, self-simulating CA, \textit{i.e.}, is it possible to turn the partial rules with which we have been working in this thesis into complete rules, while at the same time keeping the good properties of self-simulation? This could find an application to the problem of the undecidability of expansiveness for reversible CA.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Human Serum Albumin (HSA) is an important transport protein that interacts in a specific manner with substrates such as fatty acids~\cite{Curry1998, Fasano2005} and pharmaceuticals.~\cite{Barbosa2005} Also, transport by HSA plays an important role for renal clearance, and the interaction of HSA with typical uremic toxins \cite{Jankowski2003} such as phenylacetic acid, indoxyl sulfate, and p-cresyl sulfate needs to be understood in detail.~\cite{Vanholder2003} Uremic toxins play an essential role in the excessive cardiovascular mortality and all-cause mortality in patients with end-stage renal disease. Elimination of these substances is essential to reduce the high cardiovascular disease risk.~\cite{Duranton2012,Duranton2013} The obvious biological importance of this transport has led to a great number of studies of the interaction of HSA with various substrates, most notably by crystallography,~\cite{Bhattacharya2000, Matsuda2014} electron paramagnetic resonance spectroscopy, \cite{Reichenwallner2013} and isothermal titration calorimetry (ITC).~\cite{Chatterjee2012, Rehman2014} Thus, it is well established that the interaction of various substrates with HSA takes place mainly on the Sudlow I and the Sudlow II site.~\cite{Sudlow1975} ITC has been used to elucidate the thermodynamics of this interaction and, by this, the number of substrates adsorbed onto the surface of the protein.~\cite{Roselin2010,Bouchemal2008,Chatterjee2012,Keswani2010,Sivertsen2014} Charge-charge interaction plays an essential role in the process of adsorption~\cite{Cooper2005, DaSilva2009, Becker2012} and a number of ITC studies explore the dependence of the adsorption constant on ionic strength.~\cite{Ball2009,Seyrek2003,Du2014,Welsch2013} Recently, Jankowski and coworkers~\cite{Boehringer2015,Devine2014} have demonstrated that raising the ionic strength in the infusion fluid leads to an improved clearance of protein-bound toxins (PBT). Thus, raising the concentration of NaCl to 600~mM led to a significantly better removal of uremic toxins such as phenylacetic acid. This result points clearly to the central importance of Coulombic interactions for the binding strength of such toxins to HSA and to proteins in general. Charged toxins such as phenylacetic acid are difficult to remove by conventional dialysis, and exploring their interaction with HSA is of central importance for devising improved techniques for dialysis.~\cite{Boehringer2015} Here we analyze the interaction of charged substrates with HSA as a function of ionic strength and temperature. As a substrate we chose a short polyelectrolyte, namely poly(acrylic acid) (PAA) with only 25 repeating units. This substrate allows us to explore the Coulombic interaction of HSA with charged molecules in general. In addition to this, synergistic effects of adjacent carboxyl groups on the binding constant can be elucidated. At the same time, PAA can be regarded as a model of charged toxins with molecular weights above 500~g/mol.~\cite{Vanholder2003} ITC is used to obtain the binding constant and the number of bound PAA molecules per protein; the temperature is varied between 25$^{\circ}$C and 37$^{\circ}$C.
A first study of the interaction of polyelectrolytes with proteins by ITC has been presented by Schaaf~\textit{et al.}, who demonstrated the general suitability of this method for the study of protein-polyelectrolyte interaction.~\cite{Ball2002} The interaction of long polyelectrolytes with proteins in aqueous solution has been the subject of longstanding research that has led to an enormous literature.~\cite{Kayitmazer2013, Cooper2005, Nfor2008} Proteins can form complex coacervates with polyelectrolytes of opposite charge in aqueous solution, and the strength of the interaction is mediated by the ionic strength in the system.~\cite{Silva2010} If the ionic strength of the system is low enough, interaction may take place even on the ``wrong side'' of the isoelectric point, that is, proteins associate with polyelectrolytes of like charge. In many cases the formation of complexes between the protein and the polyelectrolyte is followed by precipitation and phase separation that may also lead to non-equilibrium states.~\cite{Du2014,Kayitmazer2013} Here we use a very short polyelectrolyte and low concentrations to avoid multiple interactions and phase separation. The results from the present experiments, however, may be directly utilized for a better understanding of complex coacervates. Evidently, protein crystallography cannot be used to elucidate the structure of the complex between HSA and PAA. First of all, even in the bound state the non-bound segments of the polyelectrolyte will explore their conformational freedom in solution, and the entropic contribution deriving therefrom will be an important part of the resulting change of free energy. Moreover, crystals of HSA will certainly not accommodate substrates of the size of a polyelectrolyte having 25 units. In order to make progress on a detailed structural investigation, we employ coarse-grained Langevin Dynamics (LD) computer simulations with an implicit solvent, whereas the co- and counterions are treated in an explicit manner.~\cite{Clementi2000,Takada2012,Ravikumar2012} The protein is treated within the C$_\alpha$--G\={o} model, that is, each amino acid of HSA is modelled by a single bead bearing a charge or not. The polyelectrolyte PAA is also treated as a coarse-grained charged polymer. The combination of these models allows us to elucidate the details of the complex of PAA and HSA in solution with full structural resolution. In addition to this, LD simulations allow us to calculate the free enthalpy of binding $\Delta G_b$ and to compare these data with the measured values. Thus, the combination of ITC with computer simulation can be used to elucidate structural and thermodynamic details that are available by no other method. \section{Experimental}\label{Sec:Experimental} \textbf{Materials.} Poly(acrylic acid) (PAA) with M$_W$=1800~g/mol was purchased from Sigma-Aldrich (Schnelldorf, Germany) and used after several weeks of dialysis to match the pH without changing the ionic strength of the system. Human Serum Albumin (HSA) was also purchased from Sigma-Aldrich (lyophilized powder, fatty acid free, globulin free, $99\%$) with a calculated molecular weight of M$_W$=66400~g/mol; its purity was verified by SDS gel electrophoresis. The buffer 3-(N-morpholino)propanesulfonic acid (MOPS) was purchased from Sigma-Aldrich and used as received. \subsection{Isothermal Titration Calorimetry} ITC experiments were performed using a VP-ITC instrument (Microcal, Northampton, MA).
All samples were prepared in a pH 7.2 buffer solution using 10~mM MOPS and 10~mM, 40~mM, 60~mM and 90~mM NaCl to adjust the ionic strength. All samples were dialyzed against buffer solution of the corresponding pH and degassed prior to the experiment. For dialysis, the Float-a-Lyzer dialysis system by Spectrum Labs was used, with a molecular weight cut-off (MWCO) of 500-1000~Da for PAA and of 20~kDa for HSA. The samples were thermostatted and the instrument stabilized for 1~h to ensure thermal equilibrium and stability of the system. A total of 298~$\mu$L of PAA solution was titrated in 75 successive 4~$\mu$L injections, stirring at 307~rpm and with a time interval of 300-350~s between injections, into the cell containing 1.4~mL of protein solution. The concentrations of PAA and HSA were 1.3~g/L and 0.9~g/L, respectively. We chose these low concentrations in order to study the interaction of single PAA chains with only one HSA molecule. Moreover, possible complications by coacervate formation are circumvented in this way. The experiments were performed at 25, 27.5, 30, 33 and 37~$^{\circ}$C and at ionic strengths of 20~mM, 50~mM, 70~mM and 100~mM. As a first step of the ITC data analysis, the integration of the measured heat $Q$ over time is carried out to obtain the incremental heat $\Delta Q$ as a function of the molar ratio $x$ between polyelectrolyte and protein. For each experiment, the heat of dilution of PAA was measured separately by titrating the polyelectrolyte into blank buffer solution and subtracted from the adsorption heats. After correction, the resulting binding isotherms are fitted using a supplied module for Origin 7.0 (Microcal).\\ \subsubsection{Data analysis.} \label{sec:itc_analysis} For the analysis, we chose the single set of independent binding sites (SSIS) model to fit all data.~\cite{Indyk1998} This model is based on the Langmuir equation, which assumes an equilibrium between the empty adsorption sites, the proteins in solution and the occupied adsorption sites. This leads to the binding constant $K_b$: \begin{equation} K_b=\frac{\Theta}{(1-\Theta)c_{PAA}} \end{equation} where $\Theta$ denotes the fraction of sites occupied by the polyelectrolyte and $c_{PAA}$ the concentration of free polyelectrolyte. Since only the total concentration $c_{PAA}^{tot}$ of PAA in the solution is known, $c_{PAA}$ is connected to $c_{PAA}^{tot}$ as follows: \begin{equation} c_{PAA}^{tot}=c_{PAA}+N\Theta c_{prot} \end{equation} where $N$ is the number of binding sites per protein and $c_{prot}$ the total protein concentration in solution. Following these equations, the binding number $N$, the binding affinity $K_b$ and the overall measured enthalpy change $\Delta H_{ITC}$ can be obtained by fitting the isotherm. In the following, we will show that only one PAA is adsorbed onto HSA in all cases. Therefore the parameter $N$ was fixed to unity for all subsequent fits. In this case the interaction of PAA and HSA can be formulated as a conventional chemical equilibrium:\\ \begin{equation} K_b\approx\frac{c_{PAA-HSA}}{c_{PAA}\,(c^{tot}_{HSA}-c_{PAA-HSA})} \label{eq:KbN1} \end{equation} where $c^{tot}_{HSA}$ denotes the total protein concentration in the solution and $c_{PAA-HSA}$ the concentration of the PAA-HSA complex.
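To make the fitting procedure concrete, the following minimal Python sketch (an illustration of ours, not the Microcal/Origin fitting module; function names are hypothetical and scipy is assumed to be available) solves the $N=1$ mass-action equilibrium of eq.~\ref{eq:KbN1} for the complex concentration and builds the model heat per mole of injectant; dilution of the cell contents during the titration is neglected in this sketch.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def complex_conc(c_hsa, c_paa, Kb):
    # Positive root of Kb = c_complex/(c_PAA,free * c_HSA,free),
    # rewritten as a quadratic in the complex concentration.
    b = c_hsa + c_paa + 1.0 / Kb
    return 0.5 * (b - np.sqrt(b * b - 4.0 * c_hsa * c_paa))

def itc_model(x, Kb, dH, c_hsa=13.6e-6):
    # x: molar ratio c_PAA^tot/c_HSA^tot; c_hsa in mol/L
    # (0.9 g/L HSA / 66400 g/mol ~ 13.6 uM, as in the experiments).
    cb = complex_conc(c_hsa, x * c_hsa, Kb)
    # Heat per mole of injectant: dH * d[complex]/d[PAA]_tot.
    return dH * np.gradient(cb, x * c_hsa)

# x, q = np.loadtxt("isotherm.dat", unpack=True)   # molar ratio, kJ/mol
# (Kb, dH), cov = curve_fit(itc_model, x, q, p0=(1e4, 20.0))
\end{verbatim}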
Furthermore, the binding free energy $\Delta G_{b}$ can be derived via \begin{equation} \Delta G_b=-RT\cdot \ln{K_b} \label{eq:DeltaG} \end{equation} Moreover, using the following two equations, the entropy $\Delta S_b$ can either be calculated at a single temperature or extracted from the temperature dependence of $\Delta G_b$: \begin{equation} \Delta G_b = \Delta H_{ITC}-T\Delta S_b \label{eq:Gb_1} \end{equation} \begin{equation} \frac{\partial\Delta G_b}{\partial T}=-\Delta S_b \label{eq:Sb_vanthoff} \end{equation} \subsection{Theoretical Methods} \subsubsection{Computer simulation model and parameters} In our simulations we employ an implicit-water coarse-grained (CG) model, where each of the amino acids, PAA monomers, and salt ions is explicitly represented by a single interaction bead. Hence, the water is modeled by a dielectric background continuum while the salt is explicitly resolved. The dynamics of each of the beads in our simulations is governed by Langevin's equation of motion~\cite{Ermak1978} \begin{equation} m_i\frac{d^2\bm{r}_i}{dt^2} = -m_i\xi_i\frac{d\bm{r}_i}{dt} - \bm\nabla_{i}U + \bm{R}_i(t) \end{equation} where $m_i$ and $\xi_i$ are the mass and friction constant of the $i$th bead, respectively. $U$ is the system potential energy; it includes harmonic angle and bond interactions between neighbouring beads in HSA and PAA, a dihedral potential in the case of the HSA, and interatomic Lennard-Jones (LJ) interactions between all non-neighbouring beads, including ions. Coulomb interactions govern the electrostatic pair potential between all charged beads. The random force $\bm{R}_i(t)$ is a Gaussian noise process and satisfies the fluctuation-dissipation theorem \begin{equation} \langle \bm{R}_i(t) \cdot \bm{R}_j(t') \rangle = 2m_i\xi_ik_BT\delta(t-t')\delta_{ij}. \end{equation} The simulations are performed using the GROMACS 4.5.4 software package.~\cite{Hess2008} A leap-frog algorithm with a time step of 2~fs is used to integrate the equations of motion. A cubic box with a side length of $L = 30$~nm is employed and periodically replicated to generate a quasi-infinite system in the canonical ensemble. The Langevin friction is $\xi_i = 1.0$~ps$^{-1}$, which, together with the random force, keeps the temperature constant at $T$. Center-of-mass translation of the system is removed every 10 steps. The cutoff radius is set to 3.0~nm for the real-space interactions, while Particle-Mesh-Ewald~\cite{Essmann1995} (PME) summation is implemented to account for the long-range electrostatics. The PME contribution is computed in reciprocal space with an FFT grid of $\sim$0.12~nm spacing and a cubic interpolation of fourth order. The solvent is modelled as a continuous medium with a static dielectric constant of $\epsilon_r = 78.2$ and 73.4 for temperatures $T=25^{\circ}$C and 37$^{\circ}$C, respectively. All beads have mass $m_i = 1$~amu, diameter $\sigma_{LJ} = 0.4$~nm, energy well $\epsilon_{LJ} = 0.1$~k$_\text{B}$T and integer charges $q$ = 0, +1 or -1~e. The mass was chosen artificially low to enhance orientational fluctuations and sampling. Clearly, equilibrium properties, as investigated in this work, are not affected by any reasonable mass choice as long as the simulations are ergodic. The protein structure of HSA is provided by the PDB databank (ID: 1N5U).~\cite{Wardell2002} Every amino acid is described by a single bead positioned at its $C_{\alpha}$ atom.
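As an aside, the discretization of the stochastic thermostat can be made explicit. The following minimal Python sketch (our own illustration with hypothetical names; the production runs use the GROMACS integrator) performs one simple Euler-type Langevin step in which the random force is scaled such that the discretized noise obeys the fluctuation-dissipation relation above:
\begin{verbatim}
import numpy as np

def langevin_step(x, v, force, m=1.0, xi=1.0, kT=2.479, dt=0.002):
    # Units: kT in kJ/mol (approx. 298 K), dt in ps, m in amu.
    # Random force with variance 2*m*xi*kT/dt, the discrete analogue
    # of <R(t)R(t')> = 2*m*xi*kT*delta(t-t').
    R = np.random.normal(0.0, np.sqrt(2.0 * m * xi * kT / dt), x.shape)
    a = (force(x) - m * xi * v + R) / m   # Langevin equation of motion
    v = v + a * dt
    return x + v * dt, v
\end{verbatim}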
The native structure of the protein is maintained by a G\={o}-model-like force field as provided by the SMOG webtool for biomolecular simulations.~\cite{smog, calpha} All beads corresponding to basic and acidic amino acids are assigned a charge of +1$e$ or -1$e$ according to their titration state at physiological pH = 7.4; that is, ARG and LYS residues are assigned a charge of +1$e$, ASP and GLU a charge of -1$e$, and HIS is neutral. With that, the net charge of the simulated HSA is -14$e$. A single flexible PAA polyelectrolyte is modeled in a coarse-grained fashion as a sequence of $N_\text{mon}$ freely jointed beads. Each bead represents a monomer with a diameter $\sigma_{LJ}$ and carries an electric charge of $-1e$. The PAA monomers are connected by a harmonic bond potential with an equilibrium bond length $b_\text{mon} = 0.4$~nm and a force constant $k_\text{mon} = 4100$ kJ mol$^{-1}$ nm$^{-2}$. The flexibility of the PAA chain is governed by a harmonic angle potential in which the equilibrium angle between a triplet of monomers is $\phi = 120^\circ$ and the force constant is $k_\phi = 418.4$ kJ mol$^{-1}$ rad$^{-2}$. As in the experiments, we consider short PAA chains with $N_\text{mon} = 25$ monomers. The simulation box contains one HSA, while the amounts of PAA and ions are characterized by the molar ratio $x = c_{PAA}/c_{HSA}$, ranging from 1 to 10, and the salt concentration $c_{salt}$, ranging from 20 to 100~mM, respectively. \subsubsection{Binding and free energy calculations} The stoichiometry, in our case the average number of PAA chains bound to one HSA, can be determined through a calculation of the normalized density distribution function $g(r)= c(r)/c^{tot}_{PAA}$, where $r$ is the distance between the centers of mass (COM) of the HSA and the PAA and $c^{tot}_{PAA}$ is the PAA bulk concentration. Integration of $g(r)$ further leads to the PAA coordination number \begin{equation} n(r) = 4\pi c^{tot}_{PAA}\int^{r}_{0} g(r') r'^2 dr'. \end{equation} which describes how many PAA chains are bound on average within a distance $r$. For each molar ratio we simulated 120~ns in order to calculate the equilibrium coordination (binding) numbers of PAA to HSA. \begin{figure*}[!b] \center \includegraphics[width=0.3\textwidth]{Figure1a.pdf} \includegraphics[width=0.3\textwidth]{Figure1b.pdf} \includegraphics[width=0.29\textwidth]{Figure1c.pdf} \caption{ITC data of the adsorption of PAA onto HSA and the corresponding heats of dilution of PAA at pH=7.2, T=37$^\circ$C and a) I=20~mM, b) I=50~mM. c)~Binding isotherms corrected for the heat of dilution at 37$^{\circ}$C and I=20~mM \& 50~mM.} \label{Fig:ITCRAW} \end{figure*} To quantify the number of bound and released ions upon complexation, we count the average number of ions $N_i$, $i=\pm 1$, that are bound ('condensed') onto the PAA chain or onto the positive protein patches, respectively, and compare these numbers before and after PAA/HSA association. An ion is defined as 'condensed' if it is located in the first binding layer, that is, closer than a cut-off distance $r_s = 0.5$~nm to the charged bead; double counting in overlapping volumes is avoided. The average is taken over long (ca. 30~ns) trajectories in the fully separated and in the stable bound states. \begin{figure*}[!b] \center \includegraphics[width=0.32\textwidth]{Figure2a.JPG} \includegraphics[width=0.32\textwidth]{Figure2b.jpg} \caption{Determination of the parameter $N$, i.e., the number of PAA molecules adsorbed on one HSA molecule, shown at 37$^{\circ}$C for a) I=20~mM and b) I=50~mM.
Each line shows a fit with a different fixed value of $N$, marked by a different color.} \label{Fig:NfitVgl} \end{figure*} To calculate the potential of mean force (PMF) between the HSA and the PAA, we employed steered Langevin Dynamics simulations using the pull code as provided by GROMACS.~\cite{Hess2008} Here, the center of mass of the PAA is restrained in space by an external time-dependent force. This force is applied as a constraint, \textit{i.e.}, a harmonic potential, and moved with a constant pulling velocity $v_p$ to steer the particle in the prescribed direction.\cite{Isralewitz2001} After several test runs, a pulling rate of $v_p = 0.1$~nm/ns and a harmonic force constant of $K = 2500$ kJ mol$^{-1}$ nm$^{-2}$ were chosen. The simulations were performed for $\sim$100~ns. Given the pulling speed above, this simulation time is required to bring the two macromolecules from a separated state ($r \sim 11$~nm) to a final state ($r \sim 1$~nm). The standard deviation was calculated by standard block averages to specify the statistical error. The friction force $F = -m\xi_i v_p$ was subtracted from the constraint force and the result averaged within bins of discrete spacing $\Delta r$ to obtain the mean force of the interaction potential. According to the simulation setup, the mean force was integrated backwards to obtain the potential of mean force (PMF). Because the PAA was radially constrained in 3D space, the PMF has to be corrected for translational entropy~\cite{Neumann1980} by \begin{equation} G(r) = G^\text{I}(r) - 2\text{k}_\text{B}\text{T}\ln(r), \end{equation} where $G^\text{I}(r)$ is the integrated mean force. The binding affinity of the PAA can then be defined as the free energy value at the global minimum of the PMF in the stable complex, \begin{equation} \Delta G^{sim} = G(r_{min}) - G(\infty), \end{equation} where the reference $G(\infty)$ is set to zero. However, before making a comparison with the experiment, we have to consider that $\Delta G_b^{exp}$ provided by the experiment is defined as a standard free energy, which refers to the standard binding volume $V_0 = 1/C^0$ of one liter per mole.~\cite{zhou:review} Hence, the binding constant $K_b$ obtained from experiment is formulated as $K_b = e^{- \beta \Delta G^{sim}} V_0$. In our simulations we average the accessible volume $V_b$ of the COM of the PAA in the bound state. As a result, the standard binding free energy from the simulation can be obtained as~\cite{zhou:review} \begin{equation} \Delta G_b^{sim} = \Delta G^{corr} + \Delta G^{sim}, \end{equation} where $\Delta G^{corr} = -k_B T \ln(C^0 V_b)$ is the entropy correction arising from the accessible volume of the COM of the PAA in the bound~state. \section{Results and Discussion} We performed a systematic series of ITC experiments comprising four ionic strengths and five different temperatures ranging from room temperature (25$^{\circ}$C) to the physiological temperature (37$^{\circ}$C). The experiments were performed at pH 7.2 in buffer solution. This pH is well above the isoelectric point of the protein, leading to a net effective charge of -14 for the HSA used in this experiment. PAA is a weak acid with a pK$_a$ of 4.5 and hence carries a net charge of -25 at pH 7.2. Thus, we study the adsorption on the ``wrong side'' of the isoelectric point, where both the protein and the polyelectrolyte are negatively charged.
ITC is certainly the method of choice for the determination of the adsorption constant of a given substrate to HSA.~\cite{Bouchemal2008,Ball2009,Kabiri2014} However, the concentrations of the protein and of PAA are small and the evolved heat will be concomitantly small. Hence, all effects leading to spurious heat signals must be considered in detail and carefully excluded. The main problem is the adjustment of the same pH and ionic strength in both the solution of the protein and that of the polyelectrolyte. This is done by extensive dialysis, which turned out to be decisive for obtaining meaningful ITC data. Since PAA solutions are added in small portions to the solution of HSA, the heat of dilution of PAA must be determined carefully and subtracted from the raw signal. \begin{figure*}[t] \center \includegraphics[width=0.3\textwidth]{Figure3a.jpg} \includegraphics[width=0.3\textwidth]{Figure3b.pdf} \includegraphics[width=0.38\textwidth]{Figure3c.pdf} \caption{Effect of temperature. The integrated heats $Q$ for the adsorption of PAA on HSA at temperatures between 25$^\circ$C and 37$^\circ$C for a) I=20~mM and b) I=50~mM, together with the corresponding fits. For better clarity, only 3 temperatures are displayed. c) Van't Hoff analysis of the dependence of the adsorption constant on temperature. Data points are derived from the fits of the ITC data shown in a) and b). The crosses correspond to I=20~mM and the empty squares to I=50~mM.} \label{Fig:ITC_Qfit} \end{figure*} Fig.~\ref{Fig:ITCRAW} and Fig.~S1 of the SI display the raw ITC signals of PAA titrated into HSA (black curves and points) as well as the heat of dilution of PAA (green points). The signal is weakly endothermic at I=20~mM but exothermic at I=50~mM (see Fig.~\ref{Fig:ITCRAW}). Dilution is in all cases exothermic and the effect becomes stronger with increasing salt content, as expected (blue curves and points). For higher ionic strength, the heat of dilution has a dominant effect on the overall signal and determines its sign at I>20~mM. For the data analysis, the heats of dilution are subtracted from the heats of adsorption prior to fitting. Special attention must be paid to this step, as in some cases a constant residue remains after subtraction of the heat of dilution. Even though this offset is very small, usually less than 0.1~kcal/mol, it cannot be neglected because of the small overall heat. We assign this offset to a slight mismatch of pH or salt content between the titrant and the solution in the cell. In order to take this effect into account, a flat background was fitted to all isotherms after the first step of subtraction and used to correct the data in a second step. The right-hand panel of Fig.~\ref{Fig:ITCRAW} displays a set of typical results thus obtained. Evidently, the heat of adsorption is positive, and we find this for all conditions under consideration here. Hence, the driving force for the process of adsorption must be entropic. This point will be discussed in more detail below and is well borne out from the simulations, too. In order to obtain the number $N$ of PAA molecules bound to one HSA molecule, the data were first fitted using the SSIS model as described in Section~2.1.1. Fig.~\ref{Fig:NfitVgl} shows a comparison of the parameter $N$ for the two data sets of Fig.~\ref{Fig:ITCRAW}.
The colored curves showing different fixed values of $N$ reveal that the data clearly justify $N=1$ as the best choice for fitting. This observation holds for all other data sets as well. Deviations from $N=1$ are not significant and we can safely assume $N=1$ in all subsequent analysis (see eq.~\ref{eq:KbN1} in Section~2.1.1). \begin{table*} \center \begin{tabular}{llccccc} \hline \\ \renewcommand{\arraystretch}{1.8} Ionic Strength (mM) & T ($^{\circ}$C) & $\Delta H_{ITC}$ & $K_b \cdot 10^4$ & $\Delta G_b^{exp}$ & $\Delta S_b$ & $\Delta H_b$ \tabularnewline & & (kJ/mol) & (M$^{-1}$) & (kJ/mol) & (kJ/mol/K) & (kJ/mol) \\ \hline \\ \renewcommand{\arraystretch}{2.8} & 25 & 16.4$\pm$0.3 & 7.7$\pm$0.5 & -27.9$\pm$1.3 & & \tabularnewline & 27.5 & 32.8$\pm$0.6 & 5.8$\pm$0.3 & -27.4$\pm$1.4 & &\tabularnewline 20 & 30 & 34.0$\pm$0.5 & 6.9$\pm$0.3 & -28.1$\pm$1.1 & 0.17$\pm$0.01& 15$\pm$4 \tabularnewline & 33 & 45.6$\pm$0.9 & 8.1$\pm$0.5 & -28.8$\pm$1.6 & & \tabularnewline & 37 & 53.4$\pm$1.0 & 8.3$\pm$0.5 & -29.2$\pm$1.5 & &\tabularnewline & & & & & &\tabularnewline & 25 & 13.6$\pm$1.1 & 1.3$\pm$0.2 & -23.4$\pm$0.5 & & \tabularnewline & 27.5 & 14.4$\pm$0.9 & 1.6$\pm$0.2 & -24.2$\pm$0.3 & & \tabularnewline 50 & 30 & 24.8$\pm$0.7 & 1.7$\pm$0.1 & -24.6$\pm$0.1 & 0.27$\pm$0.02 & 44$\pm$8\tabularnewline & 33 & 27.6$\pm$0.9 & 1.8$\pm$0.2 & -24.9$\pm$0.2 & & \tabularnewline & 37 & 26.4$\pm$0.6 & 2.6$\pm$0.1 & -26.2$\pm$0.1 & & \tabularnewline & & & & & &\tabularnewline 70 & 37 & 21.9$\pm$0.7 & 1.0$\pm$0.1 & -23.6$\pm$0.5 & 0.12 (37$^{\circ}$C) &\tabularnewline & & & & & &\tabularnewline 100 & 37 & 35.5$\pm$11.6 & 0.3$\pm$0.1 & -20.7$\pm$1.2 & 0.10 (37$^{\circ}$C)&\tabularnewline \hline \end{tabular} \caption{Overview of the thermodynamic parameters for all fitted isotherms for the temperature series between 25$^{\circ}$C and 37$^{\circ}$C and ionic strengths between I~=~20~mM and 100~mM. As discussed in Sec.~3, all data were fitted with fixed $N=1$. $\Delta G_b^{exp}$, $\Delta S_b$ and $\Delta H_b$ were calculated according to eqs.~\ref{eq:DeltaG}, \ref{eq:Sb_vanthoff} and \ref{eq:vantHoff}, respectively. Entropies for 70~mM and 100~mM were calculated using eq.~\ref{eq:Gb_1} at 37$^{\circ}$C.} \label{Tab:Iall_N1} \end{table*} \subsection{Strength of interaction as a function of temperature} \label{sec:Temperature} Fig.~\ref{Fig:ITC_Qfit} presents two series of temperature-dependence studies for the ionic strengths I=20~mM and 50~mM. For better clarity, only 3 temperatures are displayed in these graphs. The data for two more temperatures are displayed in Figure~S1 of the Supporting Information. The data taken at both ionic strengths reveal a significant increase of the enthalpy with increasing temperature from 25$^{\circ}$C to 37$^{\circ}$C. This effect is more pronounced at I=20~mM than at I=50~mM. Additionally, the overall enthalpy of adsorption becomes weaker with increasing salt content, which points directly to the importance of electrostatic interactions for the binding of PAA onto HSA. All data are very well described by the model assuming $N = 1$. Data taken at higher salinity are noisier, but the rise of the signal with temperature is clearly discernible. Since there is no plateau in the ITC signal, the parameter $\Delta H_{ITC}$ might be overestimated by the fits. The results of these fits are listed in Table~\ref{Tab:Iall_N1} with $\Delta H_{ITC}$ and $K_b$ as fitting parameters, $N$ being fixed to unity (see above).
The free energy of binding $\Delta G_b$ was calculated from the fitting parameter $K_b$ using equation~\ref{eq:DeltaG}. The temperature dependence of the binding can now be analyzed according to van't Hoff's law: \begin{equation} \left(\frac{\partial \ln K_b}{\partial T^{-1}}\right)_p = -\frac{\Delta H_{b}}{R} \label{eq:vantHoff} \end{equation} Fig.~\ref{Fig:ITC_Qfit}~c) shows the resulting van't Hoff plot. A linear correlation between the logarithm of the binding constant $K_b$ and the inverse temperature is seen within the limits of error. The binding enthalpy $\Delta H_{b}$ can be obtained from the slope of the linear fit and $\Delta S_{b}$ from the intercept. The resulting data are gathered in Table~\ref{Tab:Iall_N1}. In general, the values of $\Delta H_{ITC}$ are larger than those resulting from the van't Hoff analysis. Similar findings have been made in a recent study of the interaction of proteins with charged microgels.~\cite{Welsch2012} Reasons may be sought in additional processes, e.g.\ the hydration of released counterions, that are not directly coupled to the process of binding (see below). Also, the heat of adsorption taken directly from the ITC data might be slightly overestimated (see above). \subsection{Dependence on ionic strength} \label{sec:ionicstrength} To study the dependence of the binding process on ionic strength, we carried out two more experiments at 37$^{\circ}$C with I=70~mM and 100~mM (see Fig.~\ref{Fig:QFit_I}; raw data are shown in Figure S2). With increasing ionic strength, the measured enthalpy approaches zero and the ITC method reaches its instrumental limits. As described above, we fix the parameter $N$ to unity for both salt concentrations. Table~\ref{Tab:Iall_N1} gathers all data obtained from these experiments. \begin{figure}[h] \center \includegraphics[width=0.35\textwidth]{Figure4.pdf} \caption{Effect of ionic strength. Isotherms are shown for a series of ionic strengths ranging from I=20~mM to 100~mM at 37$^{\circ}$C. All fits have been done with $N=1$. The thermodynamic data resulting from these fits are listed in Table~\ref{Tab:Iall_N1}.} \label{Fig:QFit_I} \end{figure} The data exhibit a very consistent decrease of the binding constant $K_b$ with increasing salt concentration (see Tab.~\ref{Tab:Iall_N1}). This observation, combined with the fact that only about one PAA molecule is adsorbed on the HSA, leads to the conclusion that the driving force of the interaction is the attractive electrostatic potential between the negatively charged PAA and patches of positive charge on the surface of the HSA molecule. These patches are known to act as multivalent counterions for the polyelectrolyte, and binding of a polyelectrolyte to such a patch leads to a release of its counterions.~\cite{Becker2011, Welsch2012, Henzler2010} This ``counterion release force'' was considered many years ago by Record and Lohman, who predicted a linear correlation between the logarithm of the binding constant and the logarithm of the salt concentration~\cite{Record1973}: \begin{equation} \frac{\mathrm{d}\ln K_b}{\mathrm{d}\ln c_{salt}}=-\Delta n_{ion} \label{eq:Lohman} \end{equation} where $K_b$ is the binding constant, $c_{salt}$ the salt concentration, and $\Delta n_{ion}$ the number of counterions released upon adsorption. This behavior is observed for the present data as well if the ionic strength is above 20~mM (see Fig.~\ref{Fig:LogK_c}).
Application of eq.~\ref{eq:Lohman} to the data in Fig.~\ref{Fig:LogK_c} yields $\Delta n_{ion}\approx 2.9\pm 0.5$; that is, approximately 3 ions are released upon binding of one PAA molecule to an HSA molecule. The deviation from linearity at low ionic strength has been observed as well by Dubin~\textit{et al.} for the interaction between bovine serum albumin and the polyanion heparin at pH 6.8.~\cite{Seyrek2003} In this regime, electrostatic interactions become long-ranged and the relative contribution of the counterion release mechanism decreases significantly. Extrapolation of the linear fit reveals that the interaction still persists at physiological ionic strength and temperature: at I=150~mM and 37$^{\circ}$C, we derive from our plot a finite binding free energy of around $-17$~kJ/mol, which only decays to small values at around 750~mM. This concentration has been found to be necessary in dialysis to remove protein-bound uremic retention solutes.~\cite{Boehringer2015} Evidently, there is still some binding under physiological conditions, and much higher salt concentrations are needed to remove the toxins from the surface of HSA. These results also show that ITC experiments can be deceptive when the enthalpies of different reactions in the system compensate in such a way that the net signal vanishes: in the present case the interaction still exists under physiological conditions but does not lead to measurable enthalpies. \begin{figure}[h] \center \includegraphics[width=0.4\textwidth]{Figure5.pdf} \caption{Binding constant versus ionic strength as listed in Table~\ref{Tab:Iall_N1}, depicted on logarithmic scales according to eq.~\ref{eq:Lohman}.~\cite{Record1973}} \label{Fig:LogK_c} \end{figure} \section{Comparison with computer simulations} All simulations presented in the following use an implicit-water coarse-grained (CG) structure-based model, in which the CG protein is held in its native structure by semi-empirical force fields.~\cite{calpha} The amino acids, the PAA monomers, and the salt ions are modelled explicitly on a single-bead level, while water is modelled as a dielectric background continuum. Similar methods have been used repeatedly to study, e.g., protein folding~\cite{Lammert2009} and the pair potential between proteins~\cite{Lund2008}. They provide a reasonable picture of the interactions of a given molecule with the amino acids localized at the surface of the protein because they retain native structure, thermal motion, and ion-induced mechanisms in a well-resolved fashion. Hence, this type of simulation provides a full microscopic picture of the interaction of PAA with HSA, in particular the identification of the sites where the PAA docks. Moreover, the simulations can be used to obtain realistic free enthalpies of binding, as will be shown below. \begin{figure}[h] \center \includegraphics[width=0.35\textwidth]{Figure6a.pdf}\hspace{1cm} \includegraphics[width=0.35\textwidth]{Figure6b.pdf} \caption{(a) Representative computer simulation snapshot of the total HSA-PAA complex. (b) Magnification of the binding site: the PAA (yellow string of beads) is bound near the Sudlow II site. The amino acid beads that directly participate in the binding (defined by being within 0.5~nm distance of the PAA on average) are depicted by the opaque spheres. The rest of the HSA structure is shown as a transparent surface plot.
Electrostatically neutral HSA beads are colored white, positive beads are green, and negative beads are red. } \label{fig:snapshot} \end{figure} Our computer simulations demonstrate that HSA binds only one PAA chain, independent of temperature, salt concentration, and molar ratio in the considered parameter ranges. This reconfirms the result obtained by ITC. A representative simulation snapshot of the bound complex with one PAA is presented in Fig.~\ref{fig:snapshot}. A thorough screening of our simulation trajectories shows that this structure is highly reproducible and is adopted during 80\% of the simulation time; it is hence a highly stable and probable configuration. Additional analysis reveals that the PAA chain spans the sub-domains II A, III A, and III B, involving the Sudlow II binding site. As expected for a negatively charged polyelectrolyte, the PAA preferentially binds the positively charged amino acids arginine (R) and lysine (K) at positions R410, R484, R485, R413, R538, K541, K199, and K195; see also the green opaque spheres in Fig.~\ref{fig:snapshot}. This is certainly a central result of the present analysis inasmuch as it defines the precise location of the binding of a highly charged molecule such as PAA. \begin{figure}[h] \center \includegraphics[width=0.48\textwidth]{Figure7.pdf} \caption{Running coordination number $n(r)$ of the PAA chains around the HSA versus their centers-of-mass distance $r$ at a temperature of 25$^\circ$C, a salt concentration of 20~mM, and molar ratios $x=c_{\rm PAA}/c_{\rm HSA}$ = 1, 2, 6, and 10, see legend. In all cases roughly one PAA chain (horizontal black dotted line) is bound to the HSA.} \label{fig:coordination} \end{figure} To demonstrate that only one PAA forms a complex with HSA, the running coordination number $n(r)$ of the PAA around the HSA is shown in Fig.~\ref{fig:coordination} for molar ratios $x=1$, 2, 6, and 10. The quantity $n(r)$ is the total number of PAA molecules around a given HSA as a function of distance. The plateau of the curves between separations of $r=4$ and 6~nm, just above the average HSA radius, demonstrates that the number of bound chains does not exceed one, irrespective of the molar ratio. Only at larger distances does $n(r)$ increase beyond unity, since there the entire solution is explored. We find qualitatively similar results for the other investigated salt concentrations and temperatures. This finding again is in direct agreement with the results of the ITC experiments (see also the normalized density profiles in Fig.~S5). It is furthermore interesting to follow the temporal evolution of the complex of HSA with PAA. The PAA chain slides along the Sudlow II site, much like a thread through an orifice. A series of snapshots combined with a movie can be found in Fig.~S4 of the supplementary information. On the one hand, this demonstrates the strong binding of the PAA at this site. On the other hand, the threading through this site leads to a strongly increased number of configurations of the complex and thus increases its entropy. This certainly favors the binding of the PAA chain at the Sudlow II site over the other positive patches on the surface of HSA. Our simulations allow us to calculate the free energy profiles (potentials of mean force) along the HSA-PAA distance coordinate. Examples of this interaction free energy $G(r)$ between a single uncomplexed HSA and one PAA at two salt concentrations are presented in Fig.~\ref{fig:pmf}.
At larger distances of approach, $r\simeq 7$~nm, a small repulsive barrier can be observed, stemming from the monopole charge repulsion which, as expected, dominates at large separations. The barrier decreases and shifts slightly to shorter distances with higher salt concentration. At about $r\simeq 6$~nm a strong attraction sets in, leading to a global minimum at about $r=2$~nm. The onset of the attraction occurs right at separations comparable to half the contour length of the PAA chain, at which one of its ends can first come into contact with the HSA surface. We never found a stable free energy minimum for the adsorption of a second PAA. This is due to the strong monopole charge repulsion and the covering of the high-potential binding spot by the first bound PAA. For the stable HSA/PAA complex, the binding free energy $\Delta G^{sim}$ can be calculated from the difference between the global minimum and the reference free energy at large distances (horizontal lines in Fig.~\ref{fig:pmf}). The values of the simulation binding free energy, corrected to yield the {\it standard} free energy of binding $\Delta G_b^{sim}$ (see Methods), are summarized in Table~\ref{Tab:FreeEnergy_theo} for various salt concentrations and temperatures. We find very good agreement for all systems, with the largest deviation, only 13\%, occurring for 50~mM salt at 37$^\circ$C. As in the experiments, the binding affinity in the simulations decreases with higher salt concentration and increases with rising temperature. The highly quantitative description by the simulations is, however, somewhat surprising given the simplicity of the underlying model and its neglect of hydration effects, and should be discussed with caution. We take it as a strong indication that relatively generic electrostatic interactions rule the complexation process and that hydration contributions (such as hydrophobic or van der Waals (vdW) attractions) are rather small. \begin{figure}[h] \center \includegraphics[width=0.48\textwidth]{Figure8.pdf} \caption{Free energy profile (or potential of mean force) $G(r)$ between the PAA and the HSA versus their centers-of-mass distance $r$ at a temperature of 25$^\circ$C and for 20~mM (red) and 50~mM (blue) salt concentrations. The binding free energy $\Delta G^{sim}$ derived from the simulation can be read off as the difference between the zero free energy reference state at far separation (horizontal black dotted line) and the global minimum representing the bound state (horizontal blue and red dotted lines).} \label{fig:pmf} \end{figure} \begin{table*} \begin{tabular}{cccccc} \hline Conditions & $\Delta G^{sim}$ (kJ/mol) & $V_b$ (nm$^3$) & $\Delta G^{corr}$ (kJ/mol)& $\Delta G_b^{sim}$ (kJ/mol) & $\Delta G_b^{exp}$ (kJ/mol) \tabularnewline \hline 20~mM, 25$^{\circ}$C& $-24.8\pm 4.0$ & 3.0 & $-1.5$ & $-26.3\pm 4.0$ & $-27.9\pm 0.2$\tabularnewline 20~mM, 37$^{\circ}$C& $-25.1\pm 3.6$ & 5.9 & $-3.1$ & $-28.2\pm 3.6$ & $-29.2\pm 0.2$\tabularnewline 50~mM, 25$^{\circ}$C& $-16.1\pm 1.0$ & 16.0 & $-5.7$ & $-21.8\pm 1.0$ & $-23.5\pm 0.5$\tabularnewline 50~mM, 37$^{\circ}$C& $-18.9\pm 0.5$ & 7.5 & $-3.9$ & $-22.8\pm0.5$ & $-26.2\pm 0.2$\tabularnewline \hline \end{tabular} \caption{The standard binding free energies $\Delta G_b^{sim}$ calculated from the simulations in comparison with the experimental values $\Delta G_b^{exp}$ for various salt concentrations and temperatures, in units of kJ/mol.
$\Delta G^{sim}$ is the direct output of the simulations, which has to be corrected by $\Delta G^{corr}$ for the binding volume $V_b$ to obtain the standard free energy of binding (see Methods).} \label{Tab:FreeEnergy_theo} \end{table*} In order to test our hypothesis introduced above, namely that the binding free energy is essentially dominated by the entropy of the counterions released from the PAA chain and/or the positive patches on the HSA, we counted the number of ions released upon complexation. In brief, we define an ion as `condensed' if it is located in the first bound layer near a charged HSA or PAA monomer, defined by a cut-off radius of 0.5~nm. The number of released ions is then calculated as the difference of the average numbers of condensed ions in the fully separated and in the stable bound state (see Fig.~S5). For a temperature of 25$^\circ$C and 20~mM and 50~mM salt concentrations, our analysis shows that on average indeed 2.5 condensed ions are released into the bulk upon complexation. This is in good agreement with the Record--Lohman analysis of the experimental data discussed above (cf.\ the discussion of Fig.~\ref{Fig:LogK_c}). Deeper inspection shows that 2 of those ions come from the PAA, where they were bound in a high-density state (see Fig.~S6 for ionic profiles around the PAA monomers). If we now consider just the PAA-condensed ions and their average concentration in the bound state, $c_{dense} \simeq 1.5 \pm 0.5$~mol/l, a favorable entropy contribution of about $k_BT \ln (c_{dense}/c_s) = 4.3\pm 0.4$ and $3.4 \pm 0.4~k_BT$ is gained per ion upon its release into 20~mM and 50~mM bulk concentrations, respectively. The total release free energies estimated by this analysis are thus roughly $-21\pm 2$ and $-17\pm 2$~kJ/mol for 20~mM and 50~mM salt, respectively, which is close to the binding free energies from both the experimental data and the simulations. Hence, the binding of PAA is to a great part ruled by the counterion release mechanism and thus by entropy. We note, however, that the matching of these numbers may be fortuitous, since other non-negligible interactions such as (repulsive) chain entropy, vdW attractions, and multipolar charge interactions beyond the bound ion layer (that is, from screening ions), all present in both simulation and experiment, have been neglected in this simple counterion release concept. The present comparison with experimental data, however, indicates that these contributions are of comparable magnitude and roughly cancel each other for the present system. Evidently, we have obtained the leading contribution, namely that of the ions directly condensed on the PAA chain. Hence, the estimate of the counterion release entropy given above should be considered a lower bound for the absolute entropy contribution. Other contributions not included in the theoretical analysis apparently lead to experimental entropies that are higher by a factor of 2-3 (cf.~Table~\ref{Tab:Iall_N1}). The good agreement between the measured and the calculated $\Delta G_b$, however, demonstrates that these additional entropic contributions are canceled by an enthalpic contribution of equal magnitude. This ``enthalpy--entropy compensation'' is well known for various processes such as solute hydration, protein folding, or protein association.~\cite{Ball2002,Kabiri2014} The present comparison of theory and experiment thus allows us to discern among these terms the leading contribution to $\Delta G_b$.
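Both estimates invoked here can be retraced with elementary arithmetic. The following short script is an illustrative sketch in Python/NumPy, not part of the original data analysis: it uses the 37$^{\circ}$C binding constants of Table~\ref{Tab:Iall_N1} for $I\geq 50$~mM, the simulation value $c_{dense}\simeq 1.5$~mol/l, and assumes that two PAA-condensed ions are released.

\begin{verbatim}
# Sketch of the Record-Lohman fit and the counterion-release estimate.
import numpy as np

R = 8.314e-3                               # gas constant in kJ/(mol K)

c_salt = np.array([0.05, 0.07, 0.10])      # ionic strength (mol/l), 37 C
Kb = np.array([2.6e4, 1.0e4, 0.3e4])       # binding constants (l/mol), Table 1

slope, intercept = np.polyfit(np.log(c_salt), np.log(Kb), 1)
print("d ln Kb / d ln c_salt = %.1f" % slope)        # ~ -3 released ions

# extrapolation to physiological ionic strength (150 mM at 310 K)
lnK = intercept + slope * np.log(0.15)
print("dG_b(150 mM) = %.1f kJ/mol" % (-R * 310.15 * lnK))   # ~ -17 kJ/mol

# entropy gain per released PAA-condensed ion (in units of k_B T at 25 C)
# and total for the 2 ions released from the PAA chain
c_dense = 1.5                              # mol/l, from the simulations
for c_s in (0.02, 0.05):
    per_ion = np.log(c_dense / c_s)        # 4.3 resp. 3.4 k_B T
    total = -2.0 * per_ion * R * 298.15    # ~ -21 resp. -17 kJ/mol
    print("c_s = %.0f mM: %.1f kT per ion, total %.0f kJ/mol"
          % (1e3 * c_s, per_ion, total))
\end{verbatim}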
\section{Conclusions} We presented a study of the binding of short PAA chains to HSA, combining calorimetry (ITC) with computer simulations. Both the ITC experiments and the simulation results show that there exists a strong attractive interaction between PAA and HSA at ``the wrong side of pI'', where both are negatively charged. The computer simulations demonstrate that the binding of the PAA takes place at the Sudlow II site. ITC measurements for a series of salt concentrations between 20~mM and 100~mM show that the dependence of the binding affinity $\Delta G_b$ on ionic strength is mainly determined by the counterion release mechanism and can be described by the Record--Lohman relation, eq.~\ref{eq:Lohman} (see Fig.~\ref{Fig:LogK_c}). The binding affinity $\Delta G_b$ decreases with increasing ionic strength until it practically vanishes at around 750~mM. Both the analysis of the experimental data and the simulations find that approximately 3 ions are released in the binding process. The binding affinity $\Delta G_b$ can be calculated from the simulations with good accuracy. Combining simulations with calorimetry is hence a powerful tool to elucidate the interaction of proteins with given substrates or possible toxins. Thus, this combination of theory and experiment is now capable of addressing biochemical problems of direct medical importance. \acknowledgement The Helmholtz Virtual Institute for Multifunctional Biomaterials for Medicine is gratefully acknowledged for financial support. The authors thank the Helmholtz Association for funding of this work through the Helmholtz Portfolio Topic ``Technology and Medicine''. X. Xu is sponsored by the China Scholarship Council (CSC). In addition, J. Jankowski was supported by the grant ``NPORE'' of the BMBF/IGSTC (01DQ13006A).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The mathematical abstraction of a pinhole camera is a projective linear map $$\mathbb R\mathrm P^3 \dashrightarrow \mathbb R\mathrm P^2, \quad \mathbf{x}\mapsto C\mathbf{x},$$ where $C\in\mathbb R^{3\times 4}$ is a matrix of rank 3. The camera is called \emph{calibrated} when $C=[R, \mathbf{t}]$, where~$R\in\mathrm{SO}(3)$ is a rotation matrix and $\mathbf{t}\in\mathbb R^3$ is a translation vector. The \emph{relative-pose problem} is the problem of computing the relative position of two cameras in 3-space; see \cite[Section 9]{hartley_zisserman_2004}. Suppose that we have two calibrated cameras given by two matrices $C_1$ and~$C_2$ of rank 3. Since we are only interested in relative positions, we can assume $C_1=[\mathrm{1}_3, \mathbf{0}]$ and $C_2=[R, \mathbf{t}]$. If $\mathbf{x}\in\mathbb R\mathrm P^3$ is a point in 3-space, $\mathbf{u}=C_1\mathbf{x}\in\mathbb R\mathrm P^2$ and $\mathbf{v}=C_2\mathbf{x}\in\mathbb R\mathrm P^2$ are called a \emph{point-correspondence}. Any point-correspondence $(\mathbf{u},\mathbf{v})$ satisfies the algebraic equation \begin{equation}\label{E_eq} \mathbf{u}^T E(R,\mathbf{t}) \mathbf{v} = 0,\quad \text{ where } E(R,\mathbf{t}) = [\mathbf{t}]_\times \, R, \end{equation} and $[\mathbf{t}]_\times$ is the matrix acting by $[\mathbf{t}]_\times \mathbf{x} = \mathbf{t}\times \mathbf{x},$ the cross-product in $\mathbb R^3$. The set of all such matrices is denoted $\widehat{\mathcal{E}} := \{E(R,\mathbf{t})\mid R \in \mathrm{SO}(3), \mathbf{t}\in \mathbb{R}^3\}$. This is an algebraic variety defined by the 10 cubic and homogeneous polynomial equations $\det(E)=0,\; 2EE^TE - \mathrm{Tr}(EE^T)E=0$; see \cite[Section 4]{FM1990}. Therefore, if $\pi: \mathbb{R}^{3\times 3} \to \mathrm{P}(\mathbb{R}^{3\times 3})\cong \mathbb R\mathrm P^8$ denotes the projectivization map, $\widehat{\mathcal{E}}$ is the cone over the projective variety \begin{equation}\label{E_eq2}\mathcal{E} = \pi(\widehat{\mathcal{E}}), \end{equation} which is called the \emph{essential variety}. In the following we view elements in $\mathbb R\mathrm P^8$ as real $3\times 3$ matrices up to scaling. The essential variety $\mathcal E$ is of dimension $5 = \dim \mathrm{SO}(3) + \dim \mathbb{R}^3 - 1$, and Demazure showed that its complexification has degree $10$; see \cite[Theorem 6.4]{demazure:inria-00075672}. Denote by $\mathbb G:=G(3,\mathbb R\mathrm P^8)$ the Grassmannian of $3$-dimensional linear spaces in $\mathbb R\mathrm P^8$. By \cref{E_eq}, every point correspondence induces a linear equation on $\mathcal E$. For 5 general point correspondences $(\mathbf{u}_1,\mathbf{v}_1),\ldots,(\mathbf{u}_5,\mathbf{v}_5)\in \mathbb R\mathrm P^2\times \mathbb R\mathrm P^2,$ the linear space $$L:=\{E\in \mathbb R\mathrm P^8 \mid \mathbf{u}_1^T E\mathbf{v}_1 = \cdots = \mathbf{u}_5^T E\mathbf{v}_5 = 0\}$$ is general in $\mathbb G$. Thus $$\# (\mathcal E\cap L)\leq 10.$$ That is, the relative pose problem can be solved by computing the real zeros of a system of polynomial equations that has 10 complex zeros in general. Once we have computed $E=E(R,\mathbf{t})$ we can recover the relative position of the two cameras from $E$. The process of recovering the relative pose of two calibrated cameras from five point correspondences is known as the \emph{5-point algorithm}. The system of polynomial equations that we need to solve as part of the 5-point algorithm has 10 complex zeros in general, but the number of real zeros depends on $L$.
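These defining properties are easy to check numerically. The following sketch in Python/NumPy is illustrative only; it relies on nothing beyond the formulas above and the norm and left-kernel computations of Section~2.

\begin{verbatim}
# Sanity check: E = [t]_x R with R in SO(3) satisfies det(E) = 0 and
# 2 E E^T E - Tr(E E^T) E = 0; for a unit vector t it moreover has norm 1
# w.r.t. <A,B> = Tr(A B^T)/2, and t spans the left-kernel of E.
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q        # random rotation in SO(3)
t = rng.normal(size=3)
t /= np.linalg.norm(t)                       # random unit translation

E = np.array([[0.0, -t[2], t[1]],
              [t[2], 0.0, -t[0]],
              [-t[1], t[0], 0.0]]) @ R       # E = [t]_x R

print(np.linalg.det(E))                                      # ~ 0
print(np.linalg.norm(2*E @ E.T @ E - np.trace(E @ E.T)*E))   # ~ 0
print(0.5 * np.trace(E @ E.T))                               # = 1
print(np.linalg.norm(t @ E))                                 # ~ 0
\end{verbatim}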
Often, one computes all complex zeros and sorts out the real ones. Whether or not this is an efficient approach depends on how likely it is to have many real zeros out of 10 complex ones. Motivated by this observation, in this paper we study the \emph{average degree} $\mean \# (\mathcal E\cap L)$ for random $L$. We write $L\sim \mathrm{Unif}(\mathbb G)$ when $L=U\cdot L_0$, with~$U\sim \mathrm{Unif}(\mathrm{O}(9))$ (the uniform distribution relative to the Haar measure on $\mathrm{O}(9)$) and $L_0\in \mathbb G$ fixed. Our first result shows that, with this uniform distribution, we expect 4 of the 10 complex intersection points to be real. \begin{theorem}\label{main1} With the distribution $\mathrm{Unif}(\mathbb{G})$ as defined above, $$\displaystyle\mean_{L\sim \mathrm{Unif}(\mathbb G)} \# (\mathcal E\cap L)=4.$$ \end{theorem} This result is in fact quite surprising, because we get an integer, though there is no reason why it should even be a rational number (see also \cite[Remark 2]{BKL18}). To work within the computer vision framework, we need a different distribution than the one used in \cref{main1}: the distribution $\mathrm{Unif}(\mathbb G)$ is $\mathrm{O}(9)$-invariant, yet linear equations of the type $\mathbf{u}^TE\mathbf{v}=0$ are not $\mathrm{O}(9)$-invariant. These special linear equations are, however, $\mathrm{O}(3)\times \mathrm{O}(3)$-invariant under the group action $(U,V).(\mathbf{u},\mathbf{v}):=(U\mathbf{u},V\mathbf{v})$. The corresponding invariant probability distribution is given by the random point $\mathbf{a}=U\cdot \mathbf{a}_0\in\mathbb R\mathrm P^2$, where $U\sim \mathrm{Unif}(\mathrm{O}(3))$ and $\mathbf{a}_0\in\mathbb R\mathrm P^2$ is fixed. We denote this by $\mathbf{a}\sim\mathrm{Unif}(\mathbb R\mathrm P^2)$. \begin{remark} The definition of $\mathrm{Unif}(\mathbb G)$ does not depend on the choice of $L_0$, and the definition of $\mathrm{Unif}(\mathbb R\mathrm P^2)$ does not depend on the choice of $\mathbf{a}_0$. \end{remark} We write $L\sim \psi$, where $L=\{E\in\mathbb R\mathrm P^8 \mid \mathbf{u}_1^T E\mathbf{v}_1 = \cdots = \mathbf{u}_5^T E\mathbf{v}_5 = 0\}\in \mathbb G$ is the random linear space given by i.i.d.\ points $\mathbf{u}_1,\mathbf{v}_1,\ldots,\mathbf{u}_5,\mathbf{v}_5\sim \mathrm{Unif}(\mathbb R\mathrm P^2)$. We have the following result. \begin{theorem}\label{main2} With the distribution $\psi$ defined above, $$\mean_{L\sim \psi} \# (\mathcal E\cap L)=\frac{\pi^3}{4} \cdot \mean \left\vert\det \begin{bmatrix} \mathbf{z}_1 & \mathbf{z}_2 &\mathbf{z}_3&\mathbf{z}_4 & \mathbf{z}_5\end{bmatrix}\right\vert,$$ where $\mathbf{z}_1,\mathbf{z}_2,\mathbf{z}_3,\mathbf{z}_4, \mathbf{z}_5\sim \mathbf{z}$ are i.i.d., $$\mathbf{z}= \begin{bmatrix} b\cdot r\cdot \sin\theta, & b\cdot r\cdot \cos \theta,& a \cdot s \cdot \sin\theta,& a \cdot s \cdot \cos\theta,& rs \end{bmatrix}^T\in\mathbb R^5$$ and $a,b,r,s\sim N(0,1)$, $\theta\sim \mathrm{Unif}([0,2\pi))$ are independent. \end{theorem} We were not able to determine the exact value of the integral in this theorem. Yet, we can independently sample $N$ random matrices of the form $\begin{bmatrix} \mathbf{z}_1 & \mathbf{z}_2 &\mathbf{z}_3&\mathbf{z}_4 & \mathbf{z}_5\end{bmatrix}$ and compute their absolute determinants. This gives an empirical average value $\mu_N$. An experiment with sample size $N=5\cdot 10^9$ gives an empirical average of $$\mu_N \approx 3.95$$ (the code for this experiment is in \Cref{sec_MC_code}).
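The following is a small-scale sketch of this experiment in Python/NumPy (the original experiment used the code in \Cref{sec_MC_code}); it samples the random vector $\mathbf{z}$ of \cref{main2} and estimates $\frac{\pi^3}{4}\cdot\mean\vert\det\begin{bmatrix}\mathbf{z}_1 & \ldots & \mathbf{z}_5\end{bmatrix}\vert$:

\begin{verbatim}
# Monte Carlo estimate of E|det[z_1 ... z_5]| as in the theorem above;
# multiplied by pi^3/4 it estimates the average number of real points.
import numpy as np

rng = np.random.default_rng(0)
N, batch = 10**6, 10**5
acc = 0.0
for _ in range(N // batch):
    a, b, r, s = rng.normal(size=(4, batch, 5))
    theta = rng.uniform(0.0, 2.0*np.pi, size=(batch, 5))
    # rows are the five components of z; columns are the i.i.d. samples z_i
    Z = np.stack([b*r*np.sin(theta), b*r*np.cos(theta),
                  a*s*np.sin(theta), a*s*np.cos(theta), r*s], axis=1)
    acc += np.abs(np.linalg.det(Z)).sum()
print(np.pi**3 / 4 * acc / N)   # ~ 3.95 for large N
\end{verbatim}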
In fact, $\mu_N$ is itself a random variable and we have $P\left(\ \vert \mu_N - \mean_{L\sim \psi} \# (\mathcal E\cap L)\vert \geq \varepsilon\ \right) \leq \frac{\pi^6}{16}\cdot \frac{\sigma^2}{N\cdot \varepsilon^2}$ by Chebychev's inequality, where $\sigma^2$ is the variance of the absolute determinant. We show in \cref{prop_var} below that $\sigma^2\leq 360$. Using this in Chebychev's inequality we get $$P\left(\ \vert \mu_N - \mean_{L\sim \psi} \# (\mathcal E\cap L)\vert \geq 0.05\ \right) \leq 0.173\%$$ (in fact, since $360$ is an extremely coarse upper bound, the true probability should be much smaller). Therefore, it is likely that~$\mean_{L\sim \psi} \# (\mathcal E\cap L)$ is strictly smaller than 4; i.e., it is likely that the expected value in \cref{main2} is less than the one in \cref{main1}. See \cref{fig:exp}. We remark that the distributions $\mathrm{Unif}(\mathbb G)$ and $\psi$ are different in the following sense. For $L\sim \mathrm{Unif}(\mathbb G)$ every linear space~$L\in \mathbb G$ has the same probability. But when $L\sim \psi$, it must be defined by $5$ linear equations that are given by rank-one matrices of size $3$. The Segre variety of rank-one matrices of size $3$ in~$\mathbb R\mathrm P^8$ has dimension~4 (see\footnote{In \cite{Landsberg2012} one can find a formula for the dimension of the complex Segre variety. The real Segre variety is Zariski dense in the complex Segre variety, so their real and complex dimensions coincide.} \cite[Section 4.3.5]{Landsberg2012}), so that a general linear space of codimension $4=9-5$ in~$\mathbb R\mathrm P^8$, spanned by 5 general $3\times 3$ matrices, intersects the Segre variety in finitely many points. There is a Euclidean open subset in~$\mathbb G$ such that this intersection has strictly fewer than 5 points. Hence, there is a measurable subset~$\mathcal W\subset \mathbb G$ such that $P_{L\sim \mathrm{Unif}(\mathbb G)}(L\in \mathcal W)>0$ but~$P_{L\sim \psi}(L\in \mathcal W)=0$. In \cref{sec_zonoid} we use a result by Vitale \cite{Vitale} to express the expected value in \cref{main2} through the volume of a certain convex body $K\subset \mathbb{R}^5$. Namely, \begin{equation}\label{main2_K} \mean_{L\sim \psi} \# (\mathcal E\cap L) = 30\pi^2 \cdot \mathrm{vol}(K), \end{equation} where $K$ is defined by its support function $h_K(\mathbf{x})= \tfrac{1}{2}\mean_{\mathbf{z}} \vert \mathbf{x}^T\mathbf{z}\vert$ and $\mathbf{z}\in\mathbb{R}^5$ is as above; $K$ is a zonoid and we call it the \emph{essential zonoid}. We use this to prove a lower bound for the expected number of real points~$\mean_{L\sim \psi} \# (\mathcal E\cap L)$ in \cref{thm: lower bound}. The two probability distributions in \cref{main1} and \cref{main2} are \emph{geometric}, meaning that they are not biased towards preferred points in $\mathbb G$ or $\mathbb R\mathrm P^2$, respectively. In applications, however, one might be interested in other distributions, like for instance taking the $\mathbf{u}_i$ and~$\mathbf{v}_i$ uniformly in a box (see \cref{ex1} below). For such a case, we do not get concrete results like \cref{main1} or \cref{main2}. Nevertheless, in \cref{main3} below we give a general integral formula for such cases that can be evaluated numerically or using Monte Carlo methods. \begin{figure}[ht] \includegraphics[width = 0.49\textwidth]{mean1.pdf} \hfill \includegraphics[width = 0.49\textwidth]{mean2.pdf} \caption{The two pie charts show the outcome of the following two experiments.
We sampled $N=1000$ random linear spaces, once with distribution $\mathrm{Unif}(\mathbb G)$ (the left chart) and once with distribution $\psi$ (the right chart). Then, we computed $\mathcal E\cap L$ by solving the system of polynomial equations with the software \texttt{HomotopyContinuation.jl} \cite{HC.jl}. The charts show the empirical distribution of real zeros and the corresponding empirical means in these experiments.} \label{fig:exp} \end{figure} \subsection*{Outline} In \cref{sec:preliminaries} we give preliminaries. We recall the integral geometry formula in projective space and study the geometry of the essential variety. In \cref{sec:volume} we prove \cref{main1} by computing the volume of the essential variety. In \cref{sec:proof_main2} we prove \cref{main2} and \cref{main3}. In the last section, \cref{sec_zonoid}, we study the essential zonoid. \subsection*{Acknowledgements} P.~Breiding, S.~Fairchild, and E.~Shehu are funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Projektnummer 445466444. We would like to thank Rainer Sinn and Rekha R.~Thomas who posed the problem of computing the expected value in \cref{main1} to the first author. Moreover, the first author thanks Sameer Agarwal for a helpful discussion on the topic. \medskip \section{Preliminaries}\label{sec:preliminaries} Let us start by setting up our notation and recording several key volume computations used throughout the paper. We consider the Euclidean space $\mathbb R^n$ with the standard metric $\langle\mathbf{x}, \mathbf{y}\rangle = \mathbf{x}^T\mathbf{y}$. The norm of a vector $\mathbf{x}\in\mathbb R^n$ will be denoted by $\Vert \mathbf{x}\Vert:=\sqrt{\langle \mathbf{x},\mathbf{x}\rangle}$ and the unit sphere by $\S^{n-1}:=\{\mathbf{x}\in\mathbb R^n \mid \Vert \mathbf{x} \Vert = 1\}$. The Euclidean volume of the sphere is \begin{equation}\label{volume_sphere} \operatorname{\mathrm{vol}}(\S^n) = \frac{2 \pi^{\frac{n+1}{2}}}{\Gamma\left(\frac{n+1}{2}\right)}.\end{equation} In particular $\operatorname{\mathrm{vol}}(\S^1) =2\pi$ and $\operatorname{\mathrm{vol}}(\S^2) = 4\pi$. The standard basis vectors in $\mathbb{R}^n$ are denoted~$\mathbf{e}_i$ for~$1\leq i\leq n$. The space of real $n\times n$ matrices $\mathbb R^{n\times n}$ is also endowed with a Euclidean structure $$\langle A,B\rangle := \frac{1}{2}\, \mathrm{Tr}(AB^T),\quad A,B\in\mathbb R^{n\times n}.$$ We denote by $\mathrm{1}_n\in\mathbb R^{n\times n}$ the identity matrix. The orthogonal group will be denoted by $\mathrm{O}(n)$, while the special orthogonal group is $\mathrm{SO}(n)$. Both the orthogonal and the special orthogonal group are Riemannian submanifolds of $\mathbb R^{n\times n}$. The volumes of these two manifolds are $$\operatorname{\mathrm{vol}}(\mathrm{O}(n)) = 2\prod_{k=1}^{n-1} \mathrm{vol}(\S^k) \quad\hbox{ and } \quad \operatorname{\mathrm{vol}}(\mathrm{SO}(n)) = \frac{1}{2}\operatorname{\mathrm{vol}}(\mathrm{O}(n));$$ see \cite[Equation (3-15)]{Howard}. For instance, $\mathrm{vol}(\mathrm{SO}(2)) = 2\pi$ and $\mathrm{vol}(\mathrm{SO}(3)) = 8\pi^2$. \subsection{Integral geometry} The \emph{real projective space} of dimension $n-1$ is defined to be $\mathbb R\mathrm P^{n-1} := (\mathbb R^{n}\setminus \{0\})/\sim$, where the equivalence relation is $\mathbf{x}\sim \mathbf{y} \Leftrightarrow \exists \lambda \in\mathbb R: \mathbf{x}=\lambda \mathbf{y}$. The projection $\pi:\S^{n-1} \to \mathbb R\mathrm P^{n-1}$ that maps $\mathbf{x}$ to its class is a $2:1$ cover.
It induces a Riemannian structure on $\mathbb R\mathrm P^{n-1}$ by declaring $\pi$ to be a local isometry. Let now $X\subseteq \mathbb R\mathrm P^{n-1}$ be a submanifold of dimension $m$ and $L\subseteq \mathbb R\mathrm P^{n-1}$ be a linear space of codimension $m$. Howard \cite{Howard} proved that for almost all $U\in \mathrm{O}(n)$ we have that $X\cap U \cdot L$ is finite and \begin{equation}\label{IG_formula} \mean_{U\sim \mathrm{Unif}(\mathrm{O}(n))} \operatorname{\mathrm{vol}}(X\cap U\cdot L) = \frac{\operatorname{\mathrm{vol}}(X)}{\operatorname{\mathrm{vol}}(\mathbb R\mathrm P^{m})}; \end{equation} see \cite[Theorem 3.8 \& Corollary 3.9]{Howard}. This formula will be used for proving \cref{main1}. \subsection{The coarea formula} The proof of \cref{IG_formula} is based on the coarea formula, which we will also need. In order to state the formula we need to introduce the normal Jacobian. Let $M, N$ be Riemannian manifolds with $\dim(M)\geq \dim(N)$ and let $F\colon M\rightarrow N$ be a surjective smooth map. Fix a point $\mathbf{x}\in M$. The \emph{normal Jacobian} $\mathrm{NJ}(F,\mathbf{x})$ of $F$ at $\mathbf{x}$ is $$\mathrm{NJ}(F,\mathbf{x})= \sqrt{\det JJ^T},$$ where $J$ is the matrix representation of the derivative $\mathrm D_\mathbf{x} F$ relative to orthonormal bases in $T_\mathbf{x} M$ and $T_{F(\mathbf{x})}N$. Then for any integrable function $h:M\to \mathbb{R}$ \begin{equation}\label{coarea_formula}\int_M h(\mathbf{x}) \,\mathrm d\mathbf{x} = \int_{\mathbf{y}\in N} \left(\int_{\mathbf{x}\in F^{-1}(\mathbf{y})} \frac{h(\mathbf{x})}{\mathrm{NJ}(F,\mathbf{x})} \,\mathrm d\mathbf{x}\right)\,\mathrm d\mathbf{y}. \end{equation} See, e.g., \cite[Section A-2]{Howard}. \subsection{The geometry of the essential variety} In this subsection, we study in more detail the geometry of the essential variety $\mathcal E$. Recall from \cref{E_eq2} that $\mathcal E$ is the projection of the cone~$\widehat{\mathcal E}$ to projective space $\mathbb R\mathrm P^8$. We can also project $\widehat{\mathcal E}$ to the sphere. This defines the \emph{spherical essential variety} $$\mathcal E_{\mathbb S} := \{E\in \widehat{\mathcal E} \mid \Vert E\Vert = 1\}.$$ Recall from \cref{E_eq} the definition of $E(R, \mathbf{t})$. \begin{lemma}\label{lemma:image_essential} The map $E: \mathrm{SO}(3)\times \S^2 \to \mathbb{R}^{3\times 3}, (R,\mathbf{t})\mapsto E(R,\mathbf{t})$ is 2:1 and $\operatorname{im}(E) =\mathcal E_{\mathbb S}$. \end{lemma} \begin{proof} Let $(R,\mathbf{t})\in \mathrm{SO}(3)\times \S^2 $. The matrix description of $[\mathbf{t}]_\times$ is $$[\mathbf{t}]_\times = \begin{bmatrix} 0 & -t_3 & t_2 \\ t_3 & 0 & -t_1 \\ -t_2 & t_1 & 0\end{bmatrix}.$$ In particular, this shows $\mathrm{Tr}\left([\mathbf{t}]_\times [\mathbf{t}]_\times^{T}\right) = 2\Vert \mathbf{t}\Vert^2 = 2$. Then, the norm squared of $E(R,\mathbf{t})$ is $$\Vert E(R,\mathbf{t})\Vert^2 = \frac{1}{2}\,\mathrm{Tr}\left(\ E(R,\mathbf{t})E(R,\mathbf{t})^T\ \right) = \frac{1}{2}\,\mathrm{Tr}\left(\ [\mathbf{t}]_\times R R^T [\mathbf{t}]_\times^T\ \right) = \frac{1}{2}\,\mathrm{Tr}\left(\ [\mathbf{t}]_\times [\mathbf{t}]_\times^{T}\ \right) = 1. $$ Therefore, $\operatorname{im}(E) =\mathcal E_{\mathbb S}$. The vector $\mathbf{t}$ spans the left-kernel of $E$. Therefore, $\mathbf{t}$ can be recovered from $E(R,\mathbf{t})$ up to scaling. 
Let $M\in\mathrm{SO}(3)$ be a matrix such that $M\mathbf{t} = \mathbf{t}$ and $M\mathbf{x} =-\mathbf{x}$ for all $\mathbf{x}$ orthogonal to $\mathbf{t}$, then we~have $M[-\mathbf{t}]_\times = [\mathbf{t}]_\times$ and we can write the following \begin{equation}\label{eq20}E(R,\mathbf{t}) = [\mathbf{t}]_\times R =[\mathbf{t}]_\times M^TM R=(M[-\mathbf{t}]_\times)^T MR= [-\mathbf{t}]_\times MR =E(MR, -\mathbf{t}). \end{equation} This means that $E$ is 2:1.\end{proof} Next, we show the invariance properties of the map $E$. For $U,V\in\mathrm{SO}(3)$ we denote $$(U,V).E := U\, E\, V^T.$$ In particular, the next lemma shows that this defines a group action on $\mathcal E_{\S}$. \begin{lemma}\label{lemma:invariance_phi} For orthogonal matrices $U,V\in\mathrm{SO}(3)$ and $(R,\mathbf{t})\in \mathrm{SO}(3)\times \S^2$ we have $$E(URV^T, U\mathbf{t}) = (U,V).E(R,\mathbf{t}).$$ \end{lemma} \begin{proof} We have $E(URV^T, U\mathbf{t}) = [U\mathbf{t}]_\times URV^T = ([U\mathbf{t}]_\times UR)V^T$. Moreover, the cross product satisfies $(U\mathbf{t})\times (U\mathbf{x})=U(\mathbf{t} \times \mathbf{x})$ for all $\mathbf{x}\in\mathbb R^3$. \end{proof} With the above lemma, we deduce the following result on $\mathcal{E}_{\S}$. \begin{corollary}\label{cor_hom_space} $\mathcal E_{\mathbb S}$ is a homogeneous space for $\mathrm{SO}(3)\times\mathrm{SO}(3)$ acting by left and right multiplication. In particular, $\mathcal E_{\mathbb S}$, and hence also $\mathcal E$, is smooth. \end{corollary} We now denote the following special matrix in $\mathcal E$: \begin{equation}\label{def_E_0} E_0 := E(\mathrm{1}_3, \mathbf{e}_1) = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0\\ \end{bmatrix} \end{equation} (recall that $\mathbf{e}_1$ denotes the first standard basis vector $(1,0,0)^T$). \begin{lemma}\label{lem_stabilizer} The stabilizer group of $E\in\mathcal E_{\S}$ under the $\mathrm{SO}(3)\times\mathrm{SO}(3)$ action has volume equal to~$2\sqrt{2} \cdot \mathrm{vol}(\mathrm{SO}(2))$. \end{lemma} \begin{proof} The stabilizer groups of $E\in\mathcal E_{\S}$ all have the same volume. We compute the stabilizer group of $E_0$. By \cref{lemma:image_essential}, $E(R,\mathbf{t})$ is 2:1 and by \cref{eq20} we have $$E_0 = E(\mathbf 1_3, \mathbf{e}_1) = E(M, -\mathbf{e}_1),$$ where $M=\left[\begin{smallmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0&-1\end{smallmatrix}\right]$. Therefore, $(U,V).E_0 = E_0$ if and only if~$U\mathbf{e}_1 = \mathbf{e}_1$ and $UV^T = 1_3$, or $U\mathbf{e}_1 = -\mathbf{e}_1$ and $UV^T = M$; i.e., $MU=V$. That is, $\mathrm{stab}(E_0)$ is realized as the image of the map $$(\tilde{U},\varepsilon)\mapsto\left(\begin{bmatrix} \varepsilon & 0 & 0\\ 0 & \varepsilon u_{11} & u_{12}\\ 0 & \varepsilon u_{21} & u_{22} \end{bmatrix},\begin{bmatrix} \varepsilon & 0 & 0\\ 0 & u_{11} & \varepsilon u_{12}\\ 0 & u_{21} &\varepsilon u_{22} \end{bmatrix}\right), \mbox{ where } \tilde{U}=\begin{bmatrix} u_{11} & u_{12}\\ u_{21} & u_{22} \end{bmatrix} \in \mathrm{SO}(2),\,\varepsilon\in\{-1,1\}.$$ The derivative of this map has normal Jacobian $\sqrt{2}^{\,\dim \mathrm{SO}(2)} = \sqrt{2}$. Thus, using the coarea formula~\cref{coarea_formula} gives $\mathrm{vol}(\mathrm{stab}(E_0)) = 2\sqrt{2} \cdot \mathrm{vol}(\mathrm{SO}(2)).$ \end{proof} Next, we compute an orthonormal basis of the tangent space $T_{E_0}\mathcal{E}$ at $E_0$. 
\begin{lemma}\label{prop_TS} An orthonormal basis of $T_{E_0}\mathcal E$ is given by the following five matrices \begin{alignat*}{3} &B_1 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ \sqrt{2} & 0 & 0\\ \end{bmatrix},\quad &&B_2 = \begin{bmatrix} 0 & 0 & 0\\ \sqrt{2} & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix},\quad && B_3 =\begin{bmatrix} 0 & 0 & \sqrt{2}\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix}, \\[0.7em] &B_4 = \begin{bmatrix} 0 & \sqrt{2} & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix},\quad &&B_5 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{bmatrix} && \end{alignat*} \end{lemma} \begin{proof} First, we observe that the five matrices above are pairwise orthogonal and all of norm one. Since $\dim \mathcal E=5$, it therefore suffices to show that $B_1,\ldots,B_5\in T_{E_0}\mathcal E = T_{E_0}\mathcal E_{\S}$. The derivatives of $E$ evaluated in $(\mathrm{1}_3, \dot \mathbf{t})$ and $(\dot R, \mathbf{e}_1)$ respectively are \begin{align*} \frac{\partial E}{\partial \mathbf{t}}(\mathrm{1}_3,\dot{\mathbf{t}}) &=E(\mathrm{1}_3, \dot{\mathbf{t}}),\quad \frac{\partial E}{\partial R}(\dot R, \mathbf{e}_1)= E(\dot R, \mathbf{e}_1). \end{align*} We have $T_{\mathbf{e}_1} \S^2 = \mathrm{span}\{\mathbf{e}_2, \mathbf{e}_3\}$ and $T_{\mathrm{1}_3}\mathrm{SO}(3)=\mathrm{span}\{F_{1,2}, F_{1,3}, F_{2,3}\}$, where $F_{i,j}=\mathbf{e}_i\mathbf{e}_j^T - \mathbf{e}_j\mathbf{e}_i^T$. Therefore, the following five matrices are in $T_{E_0}\mathcal E$: \begin{alignat}{2}\label{matrices1} E(\mathrm{1}_3, \mathbf{e}_2)=&\begin{bmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ -1 & 0 & 0\\ \end{bmatrix}, \quad E(\mathrm{1}_3,\mathbf{e}_3) =&&\begin{bmatrix} 0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix}\\[0.5em]\nonumber E(F_{1,2}, \mathbf{e}_1) =& \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ -1 & 0 & 0\\ \end{bmatrix}, \quad E(F_{1,3}, \mathbf{e}_1) =&& \begin{bmatrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix}, \quad E(F_{2,3}, \mathbf{e}_1) = \begin{bmatrix} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{bmatrix}. \end{alignat} Each of the $B_i$ above can be expressed as a linear combination of these five matrices, which shows $B_i\in T_{E_0}\mathcal E$. \end{proof} Alternatively, to prove \cref{prop_TS} we consider the derivative of the smooth surjective map $\gamma: \mathrm{SO}(3)\times \mathrm{SO}(3) \to \mathcal{E}_{\S}, (U, V)\mapsto (U, V).E_0$. Since the basis for the tangent space of~$\mathrm{SO}(3)\times \mathrm{SO}(3)$ at $(\mathrm 1_3,\mathrm 1_3)$ is given by $\{(1_3, F_{1,2}), (1_3, F_{1,3}), (1_3, F_{2,3}), (F_{1,2},1_3), (F_{1,3},1_3), (F_{2,3},1_3)\}$, the tangent space $T_{E_0}\mathcal E$ is also spanned by the following six matrices \begin{alignat}{2}\label{matrices2} E_0F_{1,2}^T=&\begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 1 & 0 & 0\\ \end{bmatrix}, \quad E_0F_{1,3}^T =&&\begin{bmatrix} 0 & 0 & 0\\ -1 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix},\quad E_0F_{2,3}^T = \begin{bmatrix} 0 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1\\ \end{bmatrix},\\[0.5em]\nonumber F_{1,2}E_0 =& \begin{bmatrix} 0 & 0 & -1\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix}, \quad F_{1,3}E_0 =&& \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix}, \quad F_{2,3}E_0 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{bmatrix}. \end{alignat} \medskip \section{The volume of the essential variety}\label{sec:volume} In this section, we prove \cref{main1}. The strategy is as follows. By \cref{cor_hom_space}, $\mathcal E$ is a smooth submanifold of $\mathbb R\mathrm P^8$. 
We can apply the integral geometry formula \cref{IG_formula} to get \begin{equation}\label{IG_in_our_case} \mean_{L\sim \mathrm{Unif}(\mathbb G)} \operatorname{\mathrm{vol}}(\mathcal E\cap L) = \frac{\operatorname{\mathrm{vol}}(\mathcal E)}{\operatorname{\mathrm{vol}}(\mathbb R\mathrm P^{5})}. \end{equation} Thus, to prove \cref{main1} we can compute the volume of $\mathcal E$. We do this in the next theorem. Notice that the result of the theorem, when plugged into \cref{IG_in_our_case}, immediately proves \cref{main1}. \begin{theorem}\label{lem:volEssential} The volume of the essential variety is $$\operatorname{\mathrm{vol}}(\mathcal{E})= 4 \cdot \mathrm{vol}(\mathbb R\mathrm P^5). $$ \end{theorem} We give two different proofs of this theorem. Since $\operatorname{\mathrm{vol}}(\mathcal{E})=\tfrac{1}{2}\,\operatorname{\mathrm{vol}}(\mathcal{E}_{\S})$, it is enough to compute the latter volume. \begin{proof}[Proof 1] By Lemma~\ref{lemma:image_essential}, we realize $\mathcal{E}_\S$ as the image of the smooth map $(R,\mathbf{t})\mapsto E(R,\mathbf{t})$, which we now view as a map onto its image. By Lemma \ref{lemma:invariance_phi}, $\mathrm{NJ} (E, (R,\mathbf{t}))$ is invariant under the action of $\mathrm{SO}(3)\times\mathrm{SO}(3)$. Applying the coarea formula \cref{coarea_formula} over the 2-element fibers of $E$, we get that $$\operatorname{\mathrm{vol}}(\mathcal{E}_{\S}) = \int_{\mathcal E_{\mathbb S}} 1\; \mathrm d E = \frac{1}{2} \int_{\mathrm{SO}(3)\times \S^2}\mathrm{NJ} (E, (R,\mathbf{t})) \; \mathrm d(R,\mathbf{t}).$$ This implies \begin{align*} \operatorname{\mathrm{vol}}(\mathcal{E}_{\S}) = & \frac{1}{2} \operatorname{\mathrm{vol}}(\mathrm{SO}(3)) \cdot \operatorname{\mathrm{vol}}(\S^2) \cdot \mathrm{NJ} (E, (\mathrm 1_3,\mathbf{e}_1))= 16\pi^3 \cdot \mathrm{NJ} (E, (\mathrm 1_3,\mathbf{e}_1)). \end{align*} Recall, $F_{i,j}=\mathbf{e}_i\mathbf{e}_j^T - \mathbf{e}_j\mathbf{e}_i^T$. With respect to the orthonormal basis $\{B_i\}$ computed in Lemma~\ref{prop_TS} and the orthonormal basis $\{(1_3,\mathbf{e}_2), (1_3,\mathbf{e}_3), ( F_{1,2}, \mathbf{e}_1), (F_{1,3},\mathbf{e}_1 ), ( F_{2,3},\mathbf{e}_1)\}$ of~$ T_{\mathrm 1_3}\mathrm{SO}(3)\times T_{\mathbf{e}_1}\S^2$, the columns of the matrix $J$ representing the derivative of $E$ at $(\mathrm 1_3,\mathbf{e}_1)$ are the images of these basis elements, written as combinations of the basis given by Lemma~\ref{prop_TS}: \begin{align*} J &= \frac{1}{\sqrt{2}}\begin{bmatrix} -1 & 0 & -1 & 0 & 0\\ 0 & 1 & 0 & 1 & 0\\ 1 & 0 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & \sqrt{2} \end{bmatrix}. \end{align*} So, we have that $\mathrm{NJ} (E, (\mathrm 1_3,\mathbf{e}_1)) = \sqrt{\det JJ^T} = \frac{1}{4}$, and consequently $\operatorname{\mathrm{vol}}(\mathcal{E}_{\S}) = 4\pi^3$. Therefore, we have $\operatorname{\mathrm{vol}}(\mathcal{E}) = 2\pi^3$. By \cref{volume_sphere}, $\mathrm{vol}(\mathbb R\mathrm P^5) = \frac{1}{2}\cdot \mathrm{vol}(\S^5) = \frac{\pi^3}{2}$, so $\mathrm{vol}(\mathcal E) = 4 \cdot \mathrm{vol}(\mathbb R\mathrm P^5)$. \end{proof} \begin{proof}[Proof 2] By \cref{cor_hom_space}, $\mathcal{E}_{\S}$ is a homogeneous space under the action of $\mathrm{SO}(3)\times \mathrm{SO}(3)$.
We therefore have the surjective smooth map $\gamma: \mathrm{SO}(3)\times \mathrm{SO}(3) \to \mathcal{E}_{\S}, (U,V)\mapsto (U,V).E_0$ with fibers that satisfy $\mathrm{vol}(\gamma^{-1}(E)) = 2\sqrt{2}\cdot \mathrm{vol}(\mathrm{SO}(2))$ for all $E\in\mathcal E_{\S}$; see \cref{lem_stabilizer}. The coarea formula from \cref{coarea_formula} implies $$\mathrm{vol}(\mathcal{E}_{\S})\cdot 2\sqrt{2}\cdot \mathrm{vol}(\mathrm{SO}(2)) = \int_{\mathrm{SO}(3)\times \mathrm{SO}(3)} \mathrm{NJ}(\gamma, (U,V))\; \mathrm d(U,V).$$ By \cref{lemma:invariance_phi}, the map $\gamma$ is equivariant with respect to the $\mathrm{SO}(3)\times \mathrm{SO}(3)$ action. This implies that the value of the normal Jacobian does not depend on $(U,V)$. Therefore, we have $\mathrm{vol}(\mathcal{E}_{\S})\cdot2\sqrt{2}\cdot \mathrm{vol}(\mathrm{SO}(2)) = \mathrm{NJ}(\gamma, (\mathrm 1_3,\mathrm 1_3)) \cdot \mathrm{vol}(\mathrm{SO}(3))^2,$ and so $$\mathrm{vol}(\mathcal{E}_{\S}) = \frac{\mathrm{vol}(\mathrm{SO}(3))^2}{2\sqrt{2}\cdot\mathrm{vol}(\mathrm{SO}(2))} \cdot \mathrm{NJ}(\gamma, (\mathrm 1_3,\mathrm 1_3)) = \frac{16\pi^3}{\sqrt{2}}\cdot \mathrm{NJ}(\gamma, (\mathrm 1_3,\mathrm 1_3)).$$ We compute the normal Jacobian. Recall the notation $F_{i,j}=\mathbf{e}_i\mathbf{e}_j^T - \mathbf{e}_j\mathbf{e}_i^T$. With respect to the orthonormal basis computed in Lemma~\ref{prop_TS} and the orthonormal basis $\{(1_3, F_{1,2}),(1_3, F_{1,3}), (1_3, F_{2,3}),(F_{1,2},1_3), (F_{1,3},1_3), (F_{2,3},1_3)\}$ for the tangent space of $\mathrm{SO}(3)\times \mathrm{SO}(3)$ at $(\mathrm 1_3,\mathrm 1_3)$, the columns of the matrix $J$ associated to the derivative of $\gamma$ at $(\mathrm 1_3,\mathrm 1_3)$ are given by writing the matrices in \cref{matrices2} with respect to the basis in Lemma~\ref{prop_TS}: \begin{align*}J &= \frac{1}{\sqrt{2}}\begin{bmatrix} 1&0&0&0&0&0\\ 0&-1&0&0&0&0\\ 0&0&0&-1&0&0\\ 0&0&0&0&1&0\\ 0&0&-\sqrt{2}&0&0&\sqrt{2} \end{bmatrix}. \end{align*} Taking the determinant, we obtain $ \mathrm{NJ}(\gamma, (\mathrm 1_3,\mathrm 1_3)) = \sqrt{\det JJ^T} = \frac{1}{\sqrt{8}},$ and hence $\mathrm{vol}(\mathcal E_{\S}) = 4\pi^3.$ As above, this implies $\mathrm{vol}(\mathcal E) = 4 \cdot \mathrm{vol}(\mathbb R\mathrm P^5)$. \end{proof} Another important notion in the context of relative pose problems in computer vision is the so-called \emph{fundamental matrix}; see, e.g., \cite[Section 9]{hartley_zisserman_2004}. While essential matrices encode the relative pose of calibrated cameras, fundamental matrices encode the relative position between uncalibrated cameras. Fundamental matrices are precisely the matrices of rank two. So, similar to \cref{lem:volEssential}, the average degree of fundamental matrices is given by the normalized volume of the manifold of rank two matrices $\mathcal F \subset \mathbb R\mathrm P^8$. The volume was computed by Beltr\'an in \cite{Beltr2009}: $\mathrm{vol}(\mathcal F) = \frac{\pi^4}{3} = 2\cdot \mathrm{vol}(\mathbb R\mathrm P^7).$ Notice that $\dim \mathcal F =7$. We get $$ \mean_{L\sim \mathrm{Unif}(\mathbb G)} \operatorname{\mathrm{vol}}(\mathcal F \cap L) = \frac{\operatorname{\mathrm{vol}}(\mathcal F)}{\operatorname{\mathrm{vol}}(\mathbb R\mathrm P^{7})} = 2. $$ (here, $L=U\cdot L_0, U\sim \mathrm{Unif}(\mathrm O(9))$, is a random uniform line in $\mathbb R\mathrm P^8$). Thus, the average degree of the manifold of fundamental matrices is 2, while the degree of its complexification is 3.
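The normal Jacobian from Proof 1 can also be cross-checked numerically. The following sketch in Python/NumPy (not part of the paper's argument) builds the five images from \cref{matrices1}, forms their Gram matrix with respect to $\langle A,B\rangle = \tfrac{1}{2}\mathrm{Tr}(AB^T)$, and recovers $\mathrm{NJ}(E,(\mathrm 1_3,\mathbf{e}_1)) = \tfrac{1}{4}$:

\begin{verbatim}
# Cross-check of NJ(E, (1_3, e1)) = 1/4: since J is square,
# det(J J^T) = det(J^T J) = det(G), where G is the Gram matrix of the
# images E(1_3,e2), E(1_3,e3), E(F12,e1), E(F13,e1), E(F23,e1)
# under the inner product <A,B> = Tr(A B^T)/2.
import numpy as np

def cross_mat(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def F(i, j):   # F_{i,j} = e_i e_j^T - e_j e_i^T (0-based indices)
    M = np.zeros((3, 3)); M[i, j], M[j, i] = 1.0, -1.0
    return M

e = np.eye(3)
imgs = [cross_mat(e[1]),              # E(1_3, e2)
        cross_mat(e[2]),              # E(1_3, e3)
        cross_mat(e[0]) @ F(0, 1),    # E(F_{1,2}, e1)
        cross_mat(e[0]) @ F(0, 2),    # E(F_{1,3}, e1)
        cross_mat(e[0]) @ F(1, 2)]    # E(F_{2,3}, e1)

G = np.array([[0.5*np.trace(A @ B.T) for B in imgs] for A in imgs])
print(np.sqrt(np.linalg.det(G)))      # 0.25
\end{verbatim}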
\medskip \section{Average number of relative poses}\label{sec:proof_main2} In this section we prove \cref{main2}. Let $\Psi:(\mathbb R\mathrm P^2)^{\times 10}\to\mathbb{R}$ be a measurable function and denote $\mathbf{p} := (\mathbf{u}_1,\mathbf{v}_1,\ldots,\mathbf{u}_5,\mathbf{v}_5)\in(\mathbb R\mathrm P^2)^{\times 10}.$ We consider the expected value $$\mu:=\mean\limits_{\mathbf{u}_1,\mathbf{v}_1,\ldots, \mathbf{u}_5,\mathbf{v}_5\sim \mathrm{Unif}(\mathbb R\mathrm P^2) \text{ i.i.d.}} \,\Psi(\mathbf{p}) \cdot \#\{E\in\mathcal E \mid \mathbf{u}_{1}^T E \mathbf{v}_{1}=\cdots =\mathbf{u}_{5}^T E \mathbf{v}_{5}=0\}.$$ For $\Psi(\mathbf{p})=1$, the constant one function, $\mu=\mean_{L\sim \psi} \# (\mathcal E\cap L)$. In the general case, $\mu$ is the expected value of $\# (\mathcal E\cap L)$ for the distribution of $\mathbf{p}$ with probability density $\Psi(\mathbf{p})$ relative to the uniform distribution. \smallskip \begin{example}\label{ex1} We regard $\mathbb{R}^2$ as a subset of $\mathbb R\mathrm P^2$ by using the embedding $\phi:\mathbb{R}^2\to\mathbb R\mathrm P^2$ such that $ \mathbf{u}:=\phi(\mathbf{y})=[\mathbf{y} : 1]$. Consider the case when $\mathbf{y}\in\mathbb{R}^2$ is chosen uniformly in the box $B:=[a,b]\times[c,d]\subset \mathbb{R}^2$. We compute the probability density of $\mathbf{u}$ relative to the uniform measure on $\mathbb R\mathrm P^2$. The probability density of $\mathbf{y}$ relative to the Lebesgue measure in $\mathbb{R}^2$ is $\frac{1}{(b-a)(d-c)}\cdot \delta_B(\mathbf{y})$, where $\delta_B(\mathbf{y})$ is the indicator function of the box $B$. Let $W\subset \phi(B)$ be a measurable subset; then $\mathbb{P}(\mathbf{u}\in W)=\mathbb{P}(\mathbf{y}\in \phi^{-1}(W))=\int_{\phi^{-1}(W)}\frac{1}{(b-a)(d-c)}\; \mathrm{d}\mathbf{y} $. Using the coarea formula \cref{coarea_formula} we express the probability of $W$ as $$ \mathbb{P}(\mathbf{u}\in W) =\int_{W} \frac{1}{(b-a)(d-c)}\cdot \frac{1}{\mathrm{NJ}(\phi,\mathbf{y})} \; \mathrm{d}\mathbf{u}. $$ Therefore, the probability density of $\mathbf{u}$ is $\left((b-a)(d-c)\cdot \mathrm{NJ}(\phi,\mathbf{y})\right)^{-1}$. Let us compute the normal Jacobian of the map $\phi$. Since we can work locally, we compute the derivative of the map $ \mathbf{y} \mapsto \mathbf{s}:= \frac{1}{\sqrt{y_1^2+y_2^2+1}}(y_1,y_2,1) \in \S^2$. The derivative of this map relative to the standard bases in $\mathbb{R}^2$ and $\mathbb{R}^3$ is expressed by the matrix $$ \frac{1}{\sqrt{y_1^2+y_2^2+1}} \begin{bmatrix} 1 & 0 \\ 0 & 1\\ 0 & 0 \end{bmatrix} + \left(\frac{\partial}{\partial y_1} \frac{1}{\sqrt{y_1^2+y_2^2+1}} \right)\begin{bmatrix} y_1 & 0 \\ y_2 &0 \\ 1 & 0 \end{bmatrix} + \left(\frac{\partial}{\partial y_2} \frac{1}{\sqrt{y_1^2+y_2^2+1}} \right)\begin{bmatrix} 0 & y_1 \\ 0& y_2 \\ 0 &1 \end{bmatrix}. $$ The tangent space of the sphere is $T_{\mathbf{s}}\S^2=\mathbf{s}^\perp$. Let $P_{\mathbf{s}}=\mathrm{1}_3-\mathbf{s} \mathbf{s}^T$ be the projection onto $\mathbf{s}^\perp$. To get the derivative relative to an orthonormal basis of $\mathbf{s}^\perp$, we have to multiply the above matrix from the left with $P_{\mathbf{s}}$. We get $$ \mathrm{NJ}(\phi,\mathbf{y}) = \frac{1}{y_1^2+y_2^2+1} \cdot \sqrt{\det M^TM}, \; \text{ where } M= P_{\mathbf{s}} \begin{bmatrix} 1 & 0 \\ 0 & 1\\ 0 & 0 \end{bmatrix}. $$ We have $\sqrt{\det M^TM}=\vert \langle \mathbf{s} , \mathbf{e}_3 \rangle \vert $.
This implies that the probability density of $\mathbf{u}$ is given by $$ \frac{1}{(b-a)(d-c)} \cdot \frac{1}{\mathrm{NJ}(\phi,\mathbf{y}) }=\frac{1}{(b-a)(d-c)}\cdot \frac{u_1^2+u_2^2+u_3^2}{u_3^2} \cdot \frac{1}{\cos{\alpha}}, $$ where $\alpha$ is the angle between the lines through $\mathbf{u}$ and $\mathbf{e}_3$. Let us write $g(\mathbf{u}):=\frac{u_1^2+u_2^2+u_3^2}{u_3^2} \cdot \frac{1}{\cos{\alpha}}$. If for $1\leq i\leq 5$ we choose independently~$\mathbf{u}_i$ from the box $[a_i,b_i]\times[c_i,d_i]$ and $\mathbf{v}_i$ from the box $[a_i',b_i']\times[c_i',d_i']$ we obtain the density $\Psi(\mathbf{p})$ with $$\Psi(\mathbf{p}) = \frac{g(\mathbf{u}_1)}{(b_1-a_1)(d_1-c_1)}\cdots \frac{g(\mathbf{u}_5)}{(b_5-a_5)(d_5-c_5)}\cdot \frac{g(\mathbf{v}_1)}{(b_1'-a_1')(d_1'-c_1')}\cdots \frac{g(\mathbf{v}_5)}{(b_5'-a_5')(d_5'-c_5')},$$ when $\mathbf{p}$ is in the product of boxes, and $\Psi(\mathbf{p})=0$ otherwise. \xqed{$\triangle$} \end{example} \bigskip We also denote by $\Psi:(\mathbb{R}^3\setminus\{0\})^{\times 10}\to\mathbb{R}$ the function defined by $\Psi(\mathbf{u}_1,\ldots,\mathbf{v}_5):=\Psi(\pi(\mathbf{u}_1),\ldots,\pi(\mathbf{v}_5)),$ where $\pi:\mathbb{R}^3\setminus\{0\}\to\mathbb R\mathrm P^2$ is the projection. It will be convenient to replace the uniform random variables in $\mathbb R\mathrm P^2$ by Gaussian random variables in $\mathbb R^3$, see \cite[Remark 2.24]{condition}: \begin{equation}\label{eq4} \mu = \mean\limits_{\mathbf{u}_1,\mathbf{v}_1,\ldots, \mathbf{u}_5,\mathbf{v}_5\sim N(0,\mathrm{1}) \text{ i.i.d.} } \,\Psi(\mathbf{p})\cdot \#\{E\in\mathcal E \mid \mathbf{u}_{1}^T E \mathbf{v}_{1}=\cdots =\mathbf{u}_{5}^T E \mathbf{v}_{5}=0\}. \end{equation} Again, $\mean_{L\sim \psi} \# (\mathcal E\cap L)$ is recovered by setting $\Psi(\mathbf{p})=1$ in \cref{eq4}. We denote the Gaussian density by $\Phi(\mathbf{p})=(2\pi)^{-15} \exp\big(-\tfrac{1}{2} \sum_{i=1}^5(\Vert \mathbf{u}_i\Vert^2 + \Vert \mathbf{v}_i\Vert^2 )\big)$. The proof of \cref{main2} consists of three steps, separated into three subsections. \subsection{The incidence variety} The incidence variety is $$\mathcal I := \{(\mathbf{p},E)\in (\mathbb R^3)^{\times 10}\times\mathcal E \mid \mathbf{u}_{1}^T E \mathbf{v}_{1}=\cdots =\mathbf{u}_{5}^T E \mathbf{v}_{5}=0\}.$$ This is a real algebraic subvariety of $(\mathbb R^3)^{\times 10}\times\mathcal E$. Recall from \cref{cor_hom_space} that $\mathrm{SO}(3)\times \mathrm{SO}(3)$ acts transitively on $\mathcal E$ by left and right multiplication. This extends to a group action on $\mathcal I$ via $(U,V).(\mathbf{p},E) := (U\mathbf{u}_1,V\mathbf{v}_1,\ldots,U\mathbf{u}_5,V\mathbf{v}_5,\ UEV^T).$ Let $E_0:=E(\mathrm 1_3, \mathbf{e}_1)$ be as in \cref{def_E_0} and let us denote the quadric $$ q(\mathbf{u},\mathbf{v}):=\mathbf{u}^T E_0\mathbf{v} = \mathbf{u}^T\begin{bmatrix}0&0&0\\ 0&0&-1\\0&1&0\end{bmatrix}\mathbf{v}=-\det \begin{bmatrix} u_2 & u_3\\ v_2&v_3\end{bmatrix}, $$ where $\mathbf{u}=(u_1,u_2,u_3)^T$ and $\mathbf{v}=(v_1,v_2,v_3)^T$. We denote its zero set by $$Q = \{(\mathbf{u},\mathbf{v})\in\mathbb R^3 \times \mathbb R^3 \mid q(\mathbf{u},\mathbf{v}) = 0\}. $$ Since $\mathcal E$ is an orbit of the $\mathrm{SO}(3)\times \mathrm{SO}(3)$ action, $\mathcal I = \bigcup_{(U,V) \in \mathrm{SO}(3)\times \mathrm{SO}(3)} \; (U,V).(Q^{\times 5}\times \{E_0\}).$ Let us denote $\widetilde{Q}:=\{(\mathbf{u},\mathbf{v})\in Q\mid \mathbf{u},\mathbf{v}\not \in \mathbb R\mathbf{e}_1\}$. This is a Zariski open subset of $Q$.
Let $$\widetilde{\mathcal I} := \bigcup_{(U,V) \in \mathrm{SO}(3)\times \mathrm{SO}(3)} \; (U,V).(\widetilde{Q}^{\times 5}\times \{E_0\}).$$ We prove that $\widetilde{\mathcal I}$ is smooth by showing that the Jacobian matrix of the system of equations $\mathbf{u}_i^TE\mathbf{v}_i=0$ for $i=1,\ldots,5$ has full rank at every point in $\widetilde{\mathcal I}$; see, e.g., \cite[Theorem A.9]{condition}. The Jacobian matrix of $q$ is the $1\times 6$ matrix $J(\mathbf{u},\mathbf{v}) := \begin{bmatrix} 0 & -v_3 & v_2& 0 & u_3 & -u_2\end{bmatrix}$. Denote \begin{equation}\label{def_A} A:=\begin{bmatrix} J(\mathbf{u}_1,\mathbf{v}_1) & & & & \\ & J(\mathbf{u}_2,\mathbf{v}_2) & & & \\ & & J(\mathbf{u}_3,\mathbf{v}_3) & & \\ & & & J(\mathbf{u}_4,\mathbf{v}_4) & \\ & & & & J(\mathbf{u}_5,\mathbf{v}_5) \end{bmatrix}\in\mathbb R^{5\times 30}. \end{equation} For $(\mathbf{p}, E_0)\in \widetilde{\mathcal I}$ the matrix $A$ has full rank: for $(\mathbf{u}_i,\mathbf{v}_i)\in\widetilde{Q}$ each row $J(\mathbf{u}_i,\mathbf{v}_i)$ is nonzero, and the rows of the block diagonal matrix $A$ have disjoint supports. Since the image of $A$ is contained in the image of the Jacobian matrix of the system $\mathbf{u}_i^TE\mathbf{v}_i=0$, $i=1,\ldots,5$, we see that the latter has full rank. Therefore, $\widetilde{\mathcal I}$ is smooth. \subsection{Computing the normal Jacobian} On $\mathcal I$ we have the two coordinate projections $\pi_1: \mathcal I \to (\mathbb R^3\setminus\{0\})^{\times 10}$ and $\pi_2:\mathcal I\to \mathcal E$. Note that $\pi_2$ is surjective, but $\pi_1$ is not, since out of the 10 complex solutions of the system of equations $\mathbf{u}_i^TE\mathbf{v}_i=0$, $i=1,\ldots,5$, there can be 0 real solutions. Let $\mathcal U:=\operatorname{im}(\pi_1)$. Notice that $\mathcal U$ is the complement of a union of open balls, hence measurable. Using \cref{eq4}, we obtain \begin{equation}\label{eq5} \mu = \int_{\mathcal U} \#\pi_1^{-1}(\mathbf{p})\;\Phi(\mathbf{p})\cdot\Psi(\mathbf{p})\;\mathrm d \mathbf{p}. \end{equation} Let us also denote $\widetilde{\mathcal U}:=\pi_1(\widetilde{\mathcal I})$. Consider a point $\mathbf{p}\in\mathcal U\setminus \widetilde{\mathcal U}$ and suppose that $(\mathbf{p}, E)\in\mathcal I$. Let $(U,V)\in \mathrm{SO}(3)\times \mathrm{SO}(3)$ be such that $(U,V).E=E_0$. Since $\widetilde{Q}$ is Zariski open in $Q$, every neighborhood of~$(U,V).\mathbf{p}$ intersects $\widetilde{Q}^{\times 5}$. Consequently, every neighborhood of $\mathbf{p}$ intersects $\widetilde{\mathcal U}$. This means that $\widetilde{\mathcal U}$ is dense in $\mathcal U$ in the Euclidean topology. Hence, in \cref{eq5} we can replace $\mathcal U$ by $\widetilde{\mathcal U}$ to get $$ \mu = \int_{\widetilde{\mathcal U}} \Phi(\mathbf{p})\cdot\Psi(\mathbf{p})\cdot \#\pi_1^{-1}(\mathbf{p})\;\mathrm d \mathbf{p}.$$ We have shown in the previous subsection that $\widetilde{\mathcal I}$ is a smooth manifold. We may therefore apply the coarea formula from \cref{coarea_formula} twice, first to $\pi_1$ and then to $\pi_2$, to get \begin{align*} \mu &=\int_{\widetilde{\mathcal I}}\,\Phi(\mathbf{p})\cdot\Psi(\mathbf{p})\cdot \mathrm{NJ}(\pi_1,(\mathbf{p}, E))\; \mathrm d(\mathbf{p}, E)\\ &= \int_{\mathcal E} \left(\int_{\pi^{-1}_{2}(E)}\, \Phi(\mathbf{p})\cdot\Psi(\mathbf{p})\cdot\frac{\mathrm{NJ}(\pi_1,(\mathbf{p}, E))}{\mathrm{NJ}(\pi_2,(\mathbf{p},E))}\; \mathrm d\mathbf{p}\right)\;\mathrm d E. \end{align*} Let now $(U,V)\in\mathrm{SO}(3)\times\mathrm{SO}(3)$ be such that $UEV^T = E_0$. It follows from \cref{lemma:invariance_phi} that $\pi_1,\pi_2$ are equivariant, which implies that $\mathrm{NJ}(\pi_{i},(\mathbf{p},E)) = \mathrm{NJ}(\pi_{i}, (U,V).(\mathbf{p},E))$, $i=1,2$.
Furthermore, the Gaussian density $\Phi(\mathbf{p})$ is also invariant under the $\mathrm{SO}(3)\times\mathrm{SO}(3)$ action. The fiber over~$E_0$ is $\pi_2^{-1}(E_0) = \widetilde{Q}^{\times 5}\times \{E_0\}$, which is open dense in $Q^{\times 5}\times \{E_0\}$. So, \begin{equation}\label{eq1} \mu = \int_{\mathcal E} \left(\int_{Q^{\times 5}}\, \Phi(\mathbf{p})\cdot\Psi((U,V).\mathbf{p})\cdot\frac{\mathrm{NJ}(\pi_1,(\mathbf{p}, E_0))}{\mathrm{NJ}(\pi_2,(\mathbf{p},E_0))}\; \mathrm d\mathbf{p}\right)\;\mathrm d E, \end{equation} where $(U,V)\in\mathrm{SO}(3)\times \mathrm{SO}(3)$ is such that $E=(U,V).E_0$. The ratio of normal Jacobians is computed next. Recall from \cref{def_A} the definition of the matrix $A\in\mathbb{R}^{5\times 30}$. For $B_1,\ldots,B_5$ the basis from \cref{prop_TS} we denote $$B := \begin{bmatrix} \mathbf{u}_1^TB_1\mathbf{v}_1 & \cdots & \mathbf{u}_1^TB_5\mathbf{v}_1\\ \vdots & \ddots & \vdots\\ \mathbf{u}_5^TB_1\mathbf{v}_5 & \cdots & \mathbf{u}_5^TB_5\mathbf{v}_5 \end{bmatrix} \in\mathbb R^{5\times 5}. $$ Then, the tangent space of $\widetilde{\mathcal I}$ at $(\mathbf{p}, E_0)$ is defined by the linear equation $A\dot{\mathbf{p}} + B\dot E=0$. Therefore, when~$B$ is invertible, $-B^{-1}A$ is a matrix representation of $\mathrm D_{(\mathbf{p},E_0)}\pi_2\circ(\mathrm D_{(\mathbf{p},E_0)}\pi_1)^{-1}$ with respect to orthonormal bases. So, \begin{equation}\label{eq2} \frac{\mathrm{NJ}(\pi_1,(\mathbf{p},E_0))}{\mathrm{NJ}(\pi_2,(\mathbf{p},E_0))} = \frac{1}{\sqrt{\vert\det(B^{-1}AA^TB^{-T})\vert}} = \frac{\vert \det(B)\vert}{\sqrt{\det(AA^T)}}. \end{equation} When $B$ is not invertible, $\mathrm{NJ}(\pi_1,(\mathbf{p},E_0))=0$ and the formula in \cref{eq2} also holds. \subsection{Integration on the quadric} We plug \cref{eq2} into \cref{eq1} and obtain $$\mu =\int_{\mathcal E} \left(\int_{Q^{\times 5}}\, \Phi(\mathbf{p})\cdot\Psi((U,V).\mathbf{p})\cdot\frac{\vert \det(B)\vert}{\sqrt{\det(AA^T)}}\; \mathrm d\mathbf{p}\right)\;\mathrm d E.$$ We denote $f(\mathbf{u},\mathbf{v}):=u_2^2 + u_3^2 + v_2^2 + v_3^2$ for $\mathbf{u}=(u_1,u_2,u_3), \mathbf{v}=(v_1,v_2,v_3)$. Then, $$\det(AA^T) = \prod_{i=1}^5 f(\mathbf{u}_i,\mathbf{v}_i).$$ We have $(\mathbf{u},\mathbf{v})\in Q$ if and only if $(u_2,u_3)$ is a multiple of $(v_2,v_3)$. Therefore, we have the following $2:1$ parametrization: \begin{align*} &\phi: \mathbb R^4\times [0,2\pi) \to Q,\; (a,b,r,s,\theta)\mapsto (\mathbf{u},\mathbf{v}),\\ &\text{ where } \mathbf{u}=(a, r\cdot \mathbf{w})^T,\quad \mathbf{v}=(b, s\cdot \mathbf{w})^T,\quad \mathbf{w}=(\cos\theta, \sin\theta)\in\S^1. \end{align*} The Jacobian matrix of $\phi$ is $$J = \begin{bmatrix} 1&0 & 0&0&0\\ 0 &0& \cos\theta& 0&-r\sin\theta \\ 0 &0& \sin\theta& 0&r\cos\theta \\ 0&1& 0&0&0\\ 0 &0& 0 & \cos\theta& -s\sin\theta \\ 0 &0& 0 & \sin\theta &s\cos\theta \end{bmatrix}\in\mathbb R^{6\times 5}.$$ Then, $\mathrm{NJ}(\phi, (a,b,r,s,\theta)) = \sqrt{\det(J^TJ)}$ and $$\det(J^TJ) = r^2 + s^2= u_2^2 + u_3^2 + v_2^2 + v_3^2 = f(\mathbf{u},\mathbf{v}).$$ Let us denote $\mathbf{a}:=(a_i,b_i,r_i,s_i,\theta_i)_{i=1}^5$. We get: \begin{equation}\label{eq6} \mu = \frac{1}{2^5}\int_{\mathcal E} \left(\int_{(\mathbb R^4\times [0,2\pi))^{\times 5}}\, \Phi(\phi(\mathbf{a}))\cdot\Psi((U,V).\phi(\mathbf{a}))\cdot \vert \det(B)\vert\; \mathrm d\mathbf{a}\right)\;\mathrm d E.
\end{equation} Notice that $\Phi(\phi(\mathbf{a})) = \tfrac{1}{(2\pi)^{5}}\,\tfrac{1}{(2\pi)^{10}}\,\exp(-\tfrac{1}{2}\sum_{i=1}^5 (a_i^2 + b_i^2 + r_i^2 + s_i^2))$ is the probability density under which $a_i,b_i,r_i,s_i$ are all standard normal and $\theta_i$ is uniform in $[0,2\pi)$ for every $i$, and all variables are independent. We can therefore rephrase \cref{eq6} as $$\mu = \frac{1}{2^5}\int_{\mathcal E} \bigg(\mean_{a_i,b_i,r_i,s_i\sim N(0,1)}\; \mean_{\theta_i\sim \mathrm{Unif}([0,2\pi)), i=1,\ldots,5} \Psi((U,V).\phi(\mathbf{a}))\cdot \vert\det(B)\vert\bigg)\;\mathrm d E.$$ The rows of $B$ are all of the form $$ \begin{bmatrix} \mathbf{u}^TB_1\mathbf{v}\\ \mathbf{u}^TB_2\mathbf{v}\\ \mathbf{u}^TB_3\mathbf{v}\\ \mathbf{u}^TB_4\mathbf{v}\\ \mathbf{u}^TB_5\mathbf{v} \end{bmatrix} = \begin{bmatrix} \sqrt{2}\, u_3v_1\\ \sqrt{2}\, u_2v_1\\ \sqrt{2}\, u_1v_3\\ \sqrt{2}\, u_1v_2\\ \ u_2v_2+u_3v_3\ \end{bmatrix} = \begin{bmatrix} \sqrt{2}\cdot b\cdot r\cdot \sin\theta \\ \sqrt{2}\cdot b\cdot r\cdot \cos \theta\\ \sqrt{2}\cdot a \cdot s \cdot \sin\theta\\ \sqrt{2}\cdot a \cdot s \cdot \cos\theta\\ rs \end{bmatrix}. $$ This shows that $\vert \det(B)\vert \sim 4 \cdot \vert \det \begin{bmatrix} \mathbf{z}_1 & \ldots & \mathbf{z}_5\end{bmatrix}\vert$, where $\mathbf{z}_1,\ldots, \mathbf{z}_5\sim \mathbf{z}$ i.i.d.\ for \begin{equation} \label{def_z} \mathbf{z}= \begin{bmatrix} b\cdot r\cdot \sin\theta \\ b\cdot r\cdot \cos \theta\\ a \cdot s \cdot \sin\theta\\ a \cdot s \cdot \cos\theta\\ rs \end{bmatrix}, \quad a,b,r,s\sim N(0,1), \quad \theta\sim \mathrm{Unif}([0,2\pi)),\quad \text{all independent}. \end{equation} We state a general integral formula. \begin{theorem}\label{main3} With the notation above, we have that the expected value $\mu = \mean\#(\mathcal E\cap L)$, where the distribution of $L$ is defined by a nonnegative measurable function $\Psi:(\mathbb R\mathrm P^2)^{\times 10}\to \mathbb{R}$, is given by $$\mu = \frac{1}{2^3}\int_{\mathcal E} \bigg(\mean_{\mathbf{a}}\; \Psi((U,V).\phi(\mathbf{a}))\cdot \vert\det\begin{bmatrix} \mathbf{z}_1 & \ldots & \mathbf{z}_5\end{bmatrix}\vert\bigg)\;\mathrm d E$$ where $(U,V)\in\mathrm{SO}(3)\times\mathrm{SO}(3)$ is such that $E=(U,V).E_0$. \end{theorem} Let us now work towards proving \cref{main2}. In the setting of \cref{main2} we have $\Psi(\mathbf{p})=1$ and thus, by \cref{main3}, $ \mean_{L\sim \psi} \# (\mathcal E\cap L) = 2^{-3}\cdot \mathrm{vol}(\mathcal E)\cdot \mean \left\vert\det \begin{bmatrix} \mathbf{z}_1 & \mathbf{z}_2&\mathbf{z}_3&\mathbf{z}_4 & \mathbf{z}_5\end{bmatrix}\right\vert. $ We have shown in \cref{lem:volEssential} that $\operatorname{\mathrm{vol}}(\mathcal E) = 4\cdot \mathrm{vol}(\mathbb R\mathrm P^5) = 2 \pi^3$. Consequently, $$\mean_{L\sim \psi} \# (\mathcal E\cap L) = \frac{\pi^3}{4} \cdot \mean \left\vert\det \begin{bmatrix} \mathbf{z}_1 & \mathbf{z}_2&\mathbf{z}_3&\mathbf{z}_4 & \mathbf{z}_5\end{bmatrix}\right\vert$$ as stated in \cref{main2}. We close this section by giving an (extremely coarse) upper bound on the variance of the random determinant. This bound is used for applying Chebyshev's inequality in the introduction. \begin{proposition}\label{prop_var} $\mathrm{Var}\left(\ \left\vert\det \begin{bmatrix} \mathbf{z}_1 & \mathbf{z}_2&\mathbf{z}_3&\mathbf{z}_4 & \mathbf{z}_5\end{bmatrix}\right\vert\ \right)\leq 360$. \end{proposition} \begin{proof} Let $D$ denote the random absolute determinant. We have $\mathrm{Var}(D)\leq \mean D^2$.
Expanding the determinant with the Laplace expansion, multiplying out the square, and taking the expected value, we see that all mixed terms (that is, all terms which are not a square) average to 0, because the distributions of $a,b,r,s$ are symmetric around 0. Bounding each of the $5!$ remaining terms by $\mean\Vert\mathbf{z}\Vert^2$, this implies $$\mean D^2 \leq 5!\cdot \mean \big[(br\sin\theta)^2 + (br\cos\theta)^2 + (as\sin\theta)^2 + (as\cos\theta)^2 + (rs)^2\big] = 5! \cdot 3 = 360,$$ where we have used that $\mean_\theta \cos^2\theta = \mean_\theta \sin^2\theta = \tfrac{1}{2}$. \end{proof} \medskip \section{The essential zonoid}\label{sec_zonoid} Vitale \cite{Vitale} showed that the expected absolute determinant of a random matrix can be expressed as the volume of a convex body. More specifically, as the volume of a \emph{zonoid}. Zonoids are limits of zonotopes in the Hausdorff topology on the space of all convex bodies, and zonotopes are Minkowski sums of line segments; see \cite{schneider14} for more details. Notice that the probability distribution of $\mathbf{z}$ from \cref{def_z} is invariant under multiplying by~$-1$; i.e., $\mathbf{z}\sim -\mathbf{z}$. In this case, based on Vitale's result, it was shown in \cite[Theorem 5.4]{BBLM2020} that~$\mean \left\vert\det \begin{bmatrix} \mathbf{z}_1 & \mathbf{z}_2&\mathbf{z}_3&\mathbf{z}_4 & \mathbf{z}_5\end{bmatrix}\right\vert = 5!\cdot \mathrm{vol}(K)$, where $K\subset\mathbb R^5$ is the convex body with support function $h_K(\mathbf{x}) = \tfrac{1}{2}\mean \vert \langle \mathbf{x},\mathbf{z}\rangle \vert$. So \begin{equation} \label{eq9}\mean_{L\sim \psi} \# (\mathcal E\cap L) = 5!\cdot \frac{\pi^3}{4}\cdot \mathrm{vol}(K). \end{equation} We call $K$ the \emph{essential zonoid}. In the remainder of this section, we bound $h_K(\mathbf{x})$ from below to find a convex body whose volume gives a lower bound for~$\mathrm{vol}(K)$. This gives, using \cref{eq9}, the following result. \begin{theorem}\label{thm: lower bound} $\displaystyle \mean_{L\sim \psi} \# (\mathcal E\cap L) \geq 0.93$. \end{theorem} \begin{remark} The value of $0.93$ is not close to the experimental value of $3.95$ from the introduction. To get a lower bound closer to $3.95$ one would need to understand the support function of $K$ at points $\mathbf{x}=(x_1,\ldots,x_5)\in\mathbb R^5$, where all entries are nonzero. In the computation below we always either have $x_1=x_2=0$ or $x_3=x_4=0$. For such points we can work with the function that maps $\mathbf{x}$ to the vector of norms $\boldsymbol \rho=(\rho_1,\rho_2,\rho_3)$, where~$\rho_1=\sqrt{x_1^2+x_2^2}, \rho_2 = \sqrt{x_3^2+x_4^2}$ and $\rho_3 = \vert x_5\vert$. However, if all entries of $\mathbf{x}$ are nonzero, the angle between the two points~$(x_1,x_2),(x_3,x_4)\in\mathbb R^2$ also plays a role, not just their norms. We were not able to find a lower bound for $h_K(\mathbf{x})$ in this case. We nevertheless prove \cref{thm: lower bound} for completeness. \end{remark} We will need the following lemma. \begin{lemma}\label{lem_expected_values} We have \begin{enumerate} \item $\displaystyle\mean_{\xi\sim N(0,\sigma^2)} \vert\xi\vert =\sigma \sqrt{\tfrac{2}{\pi}}$; \item $\displaystyle\mean_{\theta\sim \mathrm{Unif}([0,2\pi))} \vert\cos\theta\vert = \tfrac{2}{\pi}.$ \end{enumerate} \end{lemma} \begin{proof} The first formula is proved by using $\mean_{\xi\sim N(0,1)} \vert\xi\vert = 2\int_{0}^\infty x \cdot \tfrac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\;\mathrm d x = \sqrt{\tfrac{2}{\pi}},$ and then $\mean_{\xi\sim N(0,\sigma^2)} \vert\xi\vert =\sigma \mean_{\xi\sim N(0,1)} \vert\xi\vert $.
The second is $\mean \vert\cos\theta\vert = 4\int_0^{\frac{\pi}{2}} \cos\theta \cdot \tfrac{1}{2\pi}\; \mathrm d \theta = \tfrac{2}{\pi}.$ \end{proof} Let us have a closer look at the support function. \begin{align*} h_K(\mathbf{x}) &= \frac{1}{2}\mean \vert\langle \mathbf{x}, \mathbf{z}\rangle \vert\\[0.5em] &= \frac{1}{2}\mean \vert br(x_1 \sin\theta + x_2 \cos\theta) + as(x_3 \sin\theta + x_4 \cos\theta) + x_5rs\vert\\[0.5em] &= \frac{1}{2}\mean \left\vert \begin{bmatrix} a & r \end{bmatrix} \, C \, \begin{bmatrix} b \\s \end{bmatrix}\right\vert,\end{align*} where $C$ is the $2\times 2$ matrix \[C := \begin{bmatrix} 0 & x_3 \sin\theta + x_4 \cos\theta \\ x_1 \sin\theta + x_2 \cos\theta & x_5 \end{bmatrix}. \] Let $\sigma_1\geq\sigma_2\geq 0$ denote the two singular values of $C$. The Gaussian vectors $(a,r)$ and $(b,s)$ are invariant under rotations. Therefore, $h_K(\mathbf{x}) = \frac{1}{2}\mean \vert \sigma_1ab + \sigma_2rs\vert$. The law of adding Gaussians implies that for fixed $a,r$ and random $b,s$ we have $\sigma_1 ab + \sigma_2 rs\sim N(0, \sigma_1^2a^2 + \sigma_2^2r^2)$. We now keep $a,r$ fixed and take the expectation with respect to~$b,s$. This gives, using the first formula from \cref{lem_expected_values}, \begin{equation}\label{support_fct} h_K(\mathbf{x}) = \frac{1}{\sqrt{2\pi}}\,\mean_{a,r,\theta} \sqrt{\sigma_1^2a^2 + \sigma_2^2r^2}.\end{equation} For $\mathbf{x}\in\mathbb{R}^5$ let us write $$\rho_1:=\sqrt{x_1^2+x_2^2}, \quad \rho_2:=\sqrt{x_3^2+x_4^2}\quad \text{ and }\quad \rho_3:=\vert x_5\vert.$$ From \cref{support_fct} we have $h_K(\mathbf{x}) \geq \frac{1}{\sqrt{2\pi}}\,\mean_{a,\theta} \vert\sigma_1a\vert$ as $\sigma_2^2r^2\geq 0$. Since $\sigma_1$ does not depend on $a$ and $a,\theta$ are independent, this gives $h_K(\mathbf{x})\geq \frac{1}{\sqrt{2\pi}}\,\mean_{a} \vert a\vert\mean_\theta \vert\sigma_1\vert$. Using \cref{lem_expected_values} we get $$h_K(\mathbf{x}) \geq \frac{1}{\pi}\, \mean_{\theta} \sigma_1.$$ The larger singular value $\sigma_1$ can be expressed as $$\sigma_1 = \max_{\mathbf{a},\mathbf{b}\in\mathbb R^2: \Vert\mathbf{a}\Vert = \Vert\mathbf{b}\Vert = 1} \mathbf{a}^T\, C\, \mathbf{b}.$$ Therefore, $$h_K(\mathbf{x}) \geq \frac{1}{\pi}\, \mean_\theta \vert\mathbf{e}_2^T\,C\,\mathbf{e}_1\vert = \frac{1}{\pi}\, \mean_\theta \vert x_1\sin\theta + x_2\cos\theta\vert = \frac{2}{\pi^2}\cdot\rho_1;$$ the last equality by rotational invariance and \cref{lem_expected_values}. Similarly, $h_K(\mathbf{x}) \geq \tfrac{2}{\pi^2}\rho_2,$ and also~$h_K(\mathbf{x}) \geq \tfrac{1}{\pi} \rho_3$. 
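As a quick numerical sanity check of these bounds (not part of the proof), the support function $h_K(\mathbf{x})=\tfrac{1}{2}\mean\vert\langle\mathbf{x},\mathbf{z}\rangle\vert$ can be estimated by Monte Carlo, sampling $\mathbf{z}$ as in \cref{def_z}. The following minimal Python sketch (ours; the test point \texttt{x} is arbitrary) compares such an estimate with the three lower bounds $\tfrac{2}{\pi^2}\rho_1$, $\tfrac{2}{\pi^2}\rho_2$ and $\tfrac{1}{\pi}\rho_3$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_z(n):
    # z = (br sin t, br cos t, as sin t, as cos t, rs) with
    # a, b, r, s ~ N(0,1) and t ~ Unif[0, 2*pi), all independent
    a, b, r, s = rng.standard_normal((4, n))
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.stack([b*r*np.sin(t), b*r*np.cos(t),
                     a*s*np.sin(t), a*s*np.cos(t), r*s])

def h_K(x, n=10**6):
    # Monte Carlo estimate of h_K(x) = (1/2) E|<x, z>|
    return 0.5 * np.mean(np.abs(x @ sample_z(n)))

x = np.array([0.3, -0.4, 0.0, 0.0, 1.2])  # arbitrary point with x3 = x4 = 0
rho1, rho2, rho3 = np.hypot(x[0], x[1]), np.hypot(x[2], x[3]), abs(x[4])
print(h_K(x))                                            # estimated support function
print(2/np.pi**2 * rho1, 2/np.pi**2 * rho2, rho3/np.pi)  # the three lower bounds
\end{verbatim}
Up to Monte Carlo error, the estimate dominates all three bounds, as it must.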
We recall the definition of the \emph{elliptic integral of the second kind} $$\mathrm E(m):=\int_0^{\frac{\pi}{2}} \sqrt{1-m\sin^2\theta}\;\mathrm d\theta$$ and define $$F(x,y) := \tfrac{2}{\pi^2}\cdot \sqrt{x^2+y^2}\cdot \mathrm E\left(\tfrac{x^2}{x^2 +y^2}\right).$$ Then, since the largest singular value is at least the norm of any row of $C$, i.e., $\sigma_1\geq \Vert C^T\mathbf{e}_2\Vert$, we have \begin{align*}h_K(\mathbf{x}) \geq \frac{1}{\pi}\, \mean_\theta \Vert C^T\mathbf{e}_2\Vert &= \frac{1}{\pi}\, \mean_\theta \sqrt{(x_1\sin\theta + x_2\cos \theta)^2 + x_5^2}\\ &= \frac{1}{\pi}\, \mean_\theta\sqrt{\rho_1^2\cos^2\theta + \rho_3^2}\\ &=\frac{1}{2\pi^2} \int_0^{2\pi} \sqrt{\rho_1^2\cos^2\theta + \rho_3^2}\;\mathrm d\theta \\ &=\frac{2}{\pi^2}\cdot \int_0^{\frac{\pi}{2}} \sqrt{\rho_1^2\cos^2\theta + \rho_3^2}\;\mathrm d\theta = F(\rho_1,\rho_3). \end{align*} Similarly, we have $h_K(\mathbf{x}) \geq F(\rho_2,\rho_3).$ \begin{figure} \begin{tikzpicture}[scale=3] \coordinate (a1) at (0,0,0); \coordinate (a2) at (1,0,0); \coordinate (a3) at (0,1,0); \coordinate (a4) at (0,0,1); \coordinate (a5) at (0.73, 0, 0.73); \coordinate (a6) at (0, 0.72, 0.72); \coordinate (a7) at (0.86, 0, 2*0.86/3); \coordinate (a8) at (0, 0.86, 2*0.86/3); \coordinate (a9) at (2*0.85/3, 0, 0.85); \coordinate (a10) at (0, 2*0.85/3, 0.85); \coordinate (a11) at (0.966, 0, 0.966/3); \coordinate (a12) at (0, 0.966, 0.966/3); \coordinate (a13) at (0.957/3, 0, 0.957); \coordinate (a14) at (0, 0.957/3, 0.957); \draw[fill=teal!20] (a4) -- (a14) -- (a10) -- (a6) -- (a8) -- (a12) --(a3) -- (a2) -- (a11) -- (a7) -- (a5) -- (a9) -- (a13) -- (a4); \draw (a1) node[below right] {$\mathbf{0}$} node{$\bullet$}; \draw (a2) node[above right] {$\mathbf{e}_1$} node{$\bullet$}; \draw (a3) node[above right] {$\mathbf{e}_2$} node{$\bullet$}; \draw (a4) node[above left] {$\mathbf{e}_3$} node{$\bullet$}; \draw (a5) node{$\bullet$}; \draw (a6) node{$\bullet$}; \draw (a7) node{$\bullet$}; \draw (a8) node{$\bullet$}; \draw (a9) node{$\bullet$}; \draw (a10) node{$\bullet$}; \draw (a11) node{$\bullet$}; \draw (a12) node{$\bullet$}; \draw (a13) node{$\bullet$}; \draw (a14) node{$\bullet$}; \draw[dashed, thick] (a1) -- (a2); \draw[dashed, thick] (a1) -- (a3); \draw[dashed, thick] (a1) -- (a4); \draw[thick] (a4) -- (a14) -- (a10) -- (a6) -- (a8) -- (a12) --(a3); \draw[thick] (a4) -- (a13) -- (a9) -- (a5) -- (a7) -- (a11) --(a2); \draw[thick] (a5) -- (a6); \draw[thick] (a2) -- (a3); \draw[thick] (a7) -- (a8); \draw[thick] (a9) -- (a10); \draw[thick] (a11) -- (a12); \draw[thick] (a13) -- (a14); \draw[->, thick] (a2) -- (1.2,0,0); \draw[->, thick] (a3) -- (0,1.2,0); \draw[->, thick] (a4) -- (0,0,1.3); \end{tikzpicture} \caption{The polytope $P$ from \cref{def_P}.\label{fig2}} \end{figure} Let $L'\subset \mathbb R^3$ be the convex body whose support function is $$h_{L'}(\boldsymbol\rho)=\max\left\{0,\ \tfrac{2}{\pi^2}\, \rho_1,\ \tfrac{2}{\pi^2}\, \rho_2,\ \tfrac{1}{\pi}\rho_3,\ F(\rho_1,\rho_3),\ F(\rho_2,\rho_3)\right\},$$ and define $\varphi: \mathbb R^5\to\mathbb R^3_{\geq 0},\; \mathbf{x} \mapsto \boldsymbol\rho$, and $$L:=L'\cap \mathbb R^3_{\geq 0}.$$ We have thus shown that $h_K(\mathbf{x})\geq h_{\varphi^{-1}(L)}(\mathbf{x}).$ Since \begin{equation}\label{support_fct_int} K = \bigcap_{\mathbf{x} \in \mathbb R^5 \setminus \{0\}} \, \{\mathbf{y}\in\mathbb R^5 \mid \langle \mathbf{x},\mathbf{y}\rangle \leq h_K(\mathbf{x})\}, \end{equation} this means $\varphi^{-1}(L)\subset K$. For every point $\mathbf{x}\in\mathbb R^5$ with $\rho_1,\rho_2,\rho_3\neq 0$ the gradients of $\rho_1,\rho_2,\rho_3$ are orthonormal, so that $\mathrm{NJ}(\varphi,\mathbf{x})=1$.
For a fixed $\boldsymbol\rho\in\mathbb R^3_{>0}$ the fiber $\varphi^{-1}(\boldsymbol\rho)$ consists of the product of two circles (all points $\mathbf{x}$ with $\sqrt{x_1^2+x_2^2}=\rho_1$ and $\sqrt{x_3^2+x_4^2}=\rho_2$) and two values of the last coordinate ($x_5=\pm\rho_3$). Therefore, the fiber over $\boldsymbol\rho$ has volume $2\,(2\pi\rho_1)(2\pi\rho_2) = 2(2\pi)^2\,\rho_1\rho_2$. Then, by the coarea formula \cref{coarea_formula}, \begin{equation}\label{eq8}\mathrm{vol}(K)\geq \mathrm{vol}(\varphi^{-1}(L)) = \int_{\mathbb R^5} \delta_{\varphi^{-1}(L)}(\mathbf{x})\;\mathrm d\mathbf{x} = 2(2\pi)^2 \cdot \int_L \rho_1\cdot \rho_2\;\mathrm d\boldsymbol \rho, \end{equation} where $\delta_{\varphi^{-1}(L)}$ is the indicator function of the interior of $\varphi^{-1}(L)$. We have $\mathbf{0}\in L$. Since $\langle \tfrac{2}{\pi^2}\mathbf{e}_1, \boldsymbol\rho\rangle = \tfrac{2}{\pi^2} \rho_1 \leq h_{L'}(\boldsymbol\rho)$ for all $\boldsymbol\rho\neq \mathbf{0}$, we also have, by the analogue of \cref{support_fct_int} for $L'$, $$\mathbf{p}_1 := \tfrac{2}{\pi^2}\,\mathbf{e}_1\in L\quad \text{ and, similarly, }\quad \mathbf{p}_2:=\tfrac{2}{\pi^2}\,\mathbf{e}_2\in L, \quad \mathbf{p}_3:=\tfrac{1}{\pi}\,\mathbf{e}_3\in L.$$ Using \texttt{Mathematica} \cite{Mathematica} we prove that $$\lambda_1(\mathbf{p}_i+\mathbf{p}_3),\; \lambda_2(\mathbf{p}_i+\tfrac{2}{3}\mathbf{p}_3), \; \lambda_3(\tfrac{2}{3}\mathbf{p}_i+\mathbf{p}_3),\; \lambda_4(\mathbf{p}_i+\tfrac{1}{3}\mathbf{p}_3),\; \lambda_5(\tfrac{1}{3}\mathbf{p}_i+\mathbf{p}_3)\in L, \quad i=1,2,$$ where $\lambda_1=0.73, \lambda_2 = 0.86, \lambda_3 = 0.85, \lambda_4=0.966, \lambda_5 = 0.957$ and we refer to \Cref{sec_appendix} for the precise computation. By convexity,~$L$ contains the convex hull of all these points. We define \begin{align}\label{def_P} P\ :=\ & \operatorname{conv} \big(\{\mathbf 0, \,\mathbf{e}_1,\ \mathbf{e}_2,\ \mathbf{e}_3,\ \lambda_1(\mathbf{e}_1+\mathbf{e}_3)\} \\ &\cup \{\lambda_2(\mathbf{e}_i+\tfrac{2}{3}\mathbf{e}_3),\ \lambda_3(\tfrac{2}{3}\mathbf{e}_i+\mathbf{e}_3),\ \lambda_4(\mathbf{e}_i+\tfrac{1}{3}\mathbf{e}_3),\ \lambda_5(\tfrac{1}{3}\mathbf{e}_i+\mathbf{e}_3)\mid i=1,2\}\big)\nonumber \end{align} (see \cref{fig2}). The linear map $(\rho_1,\rho_2,\rho_3)\mapsto (\tfrac{2}{\pi^2}\rho_1,\,\tfrac{2}{\pi^2}\rho_2,\,\tfrac{1}{\pi}\rho_3)$ takes $P$ into $L$, so changing variables (a linear special case of the coarea formula \cref{coarea_formula}) we have \begin{equation}\label{eq10} \int_L\rho_1\cdot \rho_2\;\mathrm d\boldsymbol \rho\geq \left(\frac{2}{\pi^2}\right)^4 \cdot \frac{1}{\pi}\cdot \int_P\rho_1\cdot \rho_2\;\mathrm d\boldsymbol \rho. \end{equation} We evaluate this integral using \texttt{Mathematica} \cite{Mathematica} and get $\int_P\rho_1\cdot \rho_2\;\mathrm d\boldsymbol\rho \geq 0.0236165$ (cf. \Cref{sec_appendix}). \begin{proof}[Proof of \cref{thm: lower bound}] By \cref{eq9}, we have $\mean_{L\sim \psi} \# (\mathcal E\cap L) = 5!\cdot \frac{\pi^3}{4}\cdot \mathrm{vol}(K)$. Above we have shown $$\mathrm{vol}(K)\,\stackrel{\text{\cref{eq8}}}{\geq }\, 2(2\pi)^2 \cdot \int_L \rho_1\cdot \rho_2\;\mathrm d\boldsymbol \rho \, \stackrel{\text{\cref{eq10}}}{\geq} \, \frac{2^7}{\pi^7}\cdot \int_P\rho_1\cdot \rho_2\;\mathrm d\boldsymbol \rho\geq \frac{2^7}{\pi^7}\cdot 0.0236165.$$ So, $\mean_{L\sim \psi} \# (\mathcal E\cap L)\geq 5! \cdot \frac{2^5}{\pi^4} \cdot 0.0236165 \geq 0.93$. \end{proof} \medskip
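Both quantities entering these computations can also be checked numerically. The sketch below (ours, not part of the proof) estimates $\tfrac{\pi^3}{4}\,\mean\vert\det[\mathbf{z}_1\ \cdots\ \mathbf{z}_5]\vert$ by direct Monte Carlo sampling of \cref{def_z}, and the integral $\int_P\rho_1\rho_2\,\mathrm d\boldsymbol\rho$ by rejection sampling in the unit cube, which contains $P$; the vertex list is copied from \cref{def_P}:
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

# (pi^3/4) * E|det[z_1, ..., z_5]|: n i.i.d. 5x5 matrices with columns ~ z.
n = 200000
a, b, r, s = rng.standard_normal((4, n, 5))
t = rng.uniform(0.0, 2.0 * np.pi, (n, 5))
Z = np.stack([b*r*np.sin(t), b*r*np.cos(t),
              a*s*np.sin(t), a*s*np.cos(t), r*s], axis=1)  # shape (n, 5, 5)
print(np.pi**3 / 4 * np.abs(np.linalg.det(Z)).mean())
# should come out near the experimental value ~3.95 from the introduction

# int_P rho1*rho2 d(rho) with P = conv(vertices) from def_P, P inside [0,1]^3.
l1, l2, l3, l4, l5 = 0.73, 0.86, 0.85, 0.966, 0.957
e1, e2, e3 = np.eye(3)
verts = [np.zeros(3), e1, e2, e3, l1*(e1 + e3)]
for ei in (e1, e2):
    verts += [l2*(ei + 2/3*e3), l3*(2/3*ei + e3),
              l4*(ei + 1/3*e3), l5*(1/3*ei + e3)]
tri = Delaunay(np.array(verts))          # triangulates the convex hull, i.e. P
pts = rng.uniform(0.0, 1.0, (10**6, 3))
inside = tri.find_simplex(pts) >= 0      # membership test for P
print(np.mean(pts[:, 0] * pts[:, 1] * inside))
# should come out near the Mathematica value 0.0236165 quoted above
\end{verbatim}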
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{introduction} In the hierarchical picture of galaxy formation, large galaxies arise through the assembly of smaller aggregates \citep[e.g.,][]{whi91,die07}, and metal-deficient dwarf galaxies are possibly the closest examples we can find of the elementary primordial units from which galaxies assembled. In the downsizing paradigm, large galaxies form metals early on \citep[e.g.,][]{cow96,kau03b}, and only low metallicity dwarfs may still keep a fossil record of the pristine Inter-Stellar Medium (ISM). For one reason or another, Blue Compact Dwarf (\bcd ) galaxies seem to probe early phases of the Universe. They combine the two required ingredients, i.e., they are dwarfs having record-breaking low metallicities \citep[e.g.][]{kun00,izo05}. \bcd\ galaxies have been used to constrain, e.g., the properties of the first (Pop~III) stars that polluted the primordial ISM at the time of galaxy formation \citep[e.g.,][]{bro04}, or the primordial He abundance inherited from big-bang nucleosynthesis \citep[e.g.,][]{izo07b}. \bcd\ galaxies have been extensively studied during the last 35 years \citep[e.g.][]{sar70, sea72,sil87,dav88,pap96,tel97,cai01,gil03,gil05,amo07}. However, the way they grow up and evolve remains unknown. The intense starburst that characterizes \bcd s lasts only a few Myr \citep[e.g.,][]{mas99,thu91}. \bcd s seem to be undergoing a transient phase, but we do not know how they reach such a state, nor what happens to them afterwards. Consequently, identifying precursors and descendants of \bcd s would be a major breakthrough in deciphering the nature and the functioning of these special galaxies. It would also facilitate using them as reliable tools in cosmological studies. Evolutionary connections between different dwarf galaxies and \bcd s have been both proposed and questioned in the literature \citep[e.g.,][]{sea72,sil87,dav88,pap96,tel97,gil05,del07,amo07b}. In an attempt to complement these efforts, we carried out a search for galaxies that may be \bcd s during the periods when the major starburst is gone, i.e., quiescent~\bcd s or, for short, \qbcd s \citep[][hereafter \paperi]{san08}. We addressed the issue from a new perspective. Most \bcd s show a red low surface brightness component which should exist before the present starburst sets in and should remain once the starburst fades away \citep[][]{loo86,pap96b}. By carefully removing the starburst, this underlying component has been studied and characterized in the literature \citep[e.g.,][]{noe03,cao05,amo07,amo07b}. We searched the SDSS/DR6~database for isolated galaxies with the luminosity, color, surface brightness, and concentration characteristic of the low surface brightness component underlying the \bcd s (\paperi). Assuming that the underlying low surface brightness galaxy remains unaltered after each starburst exhaustion, the targets thus selected could be \qbcd s. The search yielded some~21500 \qbcd\ candidates, with properties suggesting that they may indeed be pre- or post-\bcd s. In particular, they have the same luminosity function as the \bcd s, although they are thirty times more numerous. The results suggested an evolutionary sequence where \bcd s undergo many short starburst phases during their lifetimes, as proposed long ago by \citet{sea72}. In between bursts, the galaxies show up as \qbcd s in a low activity state of various degrees lasting thirty times longer than the bursts. Statistically, \qbcd s should undergo a \bcd\ phase every 300~Myr, with each phase lasting some 10~Myr.
This sequence of \bcd\ and \qbcd\ phases can be maintained during several Hubble times, and the most active \qbcd s are indeed \bcd s. \paperi\ carries out the differential comparison with \bcd s by selecting the sample of \bcd s also from SDSS/DR6, and employing the same procedures used to retrieve the \qbcd s. In spite of all these appealing features, the evolutionary link between \bcd s and \qbcd s presents an important difficulty, already posed in \paperi . The \qbcd\ oxygen abundance was estimated to be 0.35~dex systematically larger than the oxygen abundance of the \bcd s. This makes the role of \qbcd s as \bcd\ precursors questionable since starbursts increase metallicity, and the putative precursors (\qbcd s) should have lower metallicity than their descendants (\bcd s). \paperi\ offered a few alternatives to resolve the difficulty, most of which were related to the infall of metal-poor gas before the starburst sets in. In addition, we speculated that the metallicity assigned to the \qbcd s may be biased, with the {\em true} \qbcd\ metallicities much lower than the observed ones, and close to the observed \bcd\ metallicities. We derive the oxygen abundance from emission lines produced in H~{\sc ii} regions, which trace the ISM in those places now going through a star-formation episode. In the case of \qbcd s, the star formation rate is quite small ($< 0.1\,M_\odot$\,yr$^{-1}$ even for the brightest ones; \S~\ref{chemical}), therefore the volume of galactic gas sampled by the measurement is very small too. The question arises as to whether the abundance of this gas is representative of the total galactic gas. If it is not, then it could explain a false overabundance of oxygen in our \qbcd\ candidates. The sampled gas may not be properly mixed with the galactic ISM and, therefore, may be self-enriched in metals by successive starbursts. The possibility that the metallicity deduced from emission lines may be contaminated by recent starbursts has been previously mentioned in the literature \citep[e.g.,][]{kun86,thu04,dal07}. The mixing of the ISM is a slow process, which leaves behind a patchy medium \citep[e.g.,][]{ten96,dea02}. The present work was originally meant to test the main conjecture in \paperi , namely, that the emission line derived metallicity overestimates the {\em true} average metallicity of the \qbcd\ gas. If so, it should be significantly larger than the metallicity of other galactic components, in particular, the metallicity of the stars. This seems to be the case (\S~\ref{metal_vs_metal}), but in the course of working it out, several other properties of \qbcd s (and \bcd s) have emerged. These results are described here in fairly broad terms, keeping in mind their potential interest outside the specific original motivation of the work. In particular, \qbcd\ galaxies are quite common (one out of every three local dwarfs; \paperi), so that their properties may also be representative of the whole class of dwarf galaxies. The paper is organized as follows: \S~\ref{spectra} summarizes the main observational properties of the Sloan Digital Sky Survey/Data Release 6 (SDSS/DR6) spectra used in our analysis. \S~\ref{class} explains the classification of spectra before averaging them out to improve the signal-to-noise ratio. By fitting the observed \qbcd\ spectra with synthetic spectra, we assign ages and metallicities to the stellar component of the galaxies (\S~\ref{metalic}).
Gas metallicities are estimated in \S~\ref{gas_metal}, with their uncertainties critically assessed in App.~\ref{appa}. \S~\ref{metal_vs_metal} presents the excess of gas metallicity with respect to the metallicity of the stellar component. Ages and stellar content are analyzed in \S~\ref{ages_sect}. Chemical evolution model galaxies able to account for the differences between stellar and nebular metallicities are discussed in \S~\ref{chemical}, where we also have to estimate Star Formation Rates (SFRs). Finally, the main results and their implications are discussed in \S~\ref{conclusions}. \section{Data set: SDSS spectra}\label{spectra} We aim at assigning metallicities to the stellar component of the \qbcd\ candidates selected in \paperi . The original galaxies were chosen from the SDSS spectroscopic catalog, so that they have redshifts from which we derive absolute magnitudes (mean redshift 0.030, with a standard deviation of 0.014). The present analysis of stellar metallicities is based on these SDSS spectra. For the sake of completeness, their main characteristics are summarized here. A more detailed account can be found in \citet{sto02}, \citet{ade08}, and also in the SDSS~website ({\tt http://www.sdss.org/dr6}). The SDSS spectrograph has two independent arms, with a dichroic separating the blue beam and the red beam at 6150 \AA . It simultaneously covers a spectral range from 3800\,\AA\ to 9200\,\AA , with a spectral resolution between 1800 and 2200. The sampling is linear in logarithmic wavelength, with a mean dispersion of 1.1\,\AA\,pix$^{-1}$ in the blue and 1.8\,\AA\,pix$^{-1}$ in the red. Repeated 15 min exposure spectra are integrated to yield a S/N per pixel $>4$ when the apparent magnitude in the $g~$bandpass is 20.2. The spectrograph is fed by fibers which subtend about 3\arcsec on the sky. Most galaxies are larger than this size and, therefore, the spectra sample only their central parts (e.g., 89\%\ of the \qbcd\ galaxies have an effective radius larger than half the fiber diameter). We retrieve the 21493 \qbcd\ spectra and the 1609 \bcd\ spectra in FITS format from the SDSS Data Archive Server. All spectra were re-sampled to a common restframe wavelength scale that matches the spectral library used in \S~\ref{metalic} \citep{san06,cen07}. We use linear interpolation to oversample the original spectra with a constant dispersion of 0.9\,\AA\,pix$^{-1}$. The spectra were normalized to the flux in the $g$~color filter (effective wavelength $\simeq$ 4825\,\AA ), a normalization factor that we compute for each spectrum using the transmission curve downloaded from the SDSS website. \section{Classification of galaxy spectra}\label{class} As we will discuss later on (\S~\ref{individual}), the S/N of the individual SDSS spectra is insufficient to estimate the metallicity of the stellar component. We improve the S/N ratio to acceptable levels by averaging similar spectra (a technique often referred to as {\em stacking}; see, e.g. \citealt{ell00}). Before averaging, the spectra have been classified into sets of similar spectra using a cluster analysis algorithm. We employ the simple {\em k-means clustering} \citep[see, e.g., ][Chapter~5]{eve95}. A number $k$ of template spectra are selected at random from the full set. Each template spectrum is assumed to be a cluster center, and each spectrum of the data set is assigned to the closest cluster center (closest in a least squares sense).
Once all spectra in the dataset have been classified, the cluster center is re-computed as the average of all spectra in the cluster. This procedure is iterated with the new cluster centers, and it stops when no spectrum is re-classified in two consecutive steps. The algorithm is simple and fast, but it yields different clusters with each random initialization -- the final cluster centers keep some memory of the original randomly chosen cluster centers. This drawback does not interfere with our purpose of selecting sub-sets of similar spectra suitable for averaging because, independently of the initialization, the clusters always contain similar spectra. The algorithm forces all spectra in a class to be similar to the cluster center and, therefore, similar among them. The number of clusters $k$ is arbitrarily chosen but, in practice, the results are insensitive to this choice since only a few clusters possess a significant number of members, so that the rest are discarded. Figure~\ref{QBCDnumbers} shows the number of elements in each class of \qbcd\ spectra resulting from applying the procedure. The classes have been sorted and labelled according to the number of elements, with Class~0 the most numerous, Class~1 the second most numerous, and so on (percentages are given in Table~\ref{table1}). Figure~\ref{QBCDclasses} shows the average spectrum corresponding to the nine most numerous classes. The classification was carried out using four spectral bandpasses containing emission lines (the bandpasses are indicated as dotted lines in Fig~\ref{QBCDclasses}, Class~0, and also in Fig.~\ref{classrms}). The use of these particular bandpasses emphasizes the contribution of the emission lines to the classification which, otherwise, would be completely dominated by the continuum. We select bandpasses throughout the full spectral range to ensure that the global trend of the continuum is considered when classifying. There seems to be a continuous variation of properties which, in the end, gives rise to a large variety of shapes: from spectra without significant emission lines (Class~3), to spectra with red continua and emission lines (Class~7), to blue continua with moderate emission lines (Class~4). The most numerous Class~0 has a blue continuum and presents emission lines.
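For illustration, a minimal \texttt{numpy} sketch of the k-means procedure described above is the following (ours, not the code actually used for the paper; it assumes the spectra are stored in a 2-D array, one row per spectrum and already normalized to the $g$-band flux, and that \texttt{bands} holds the pixel indices of the four classification bandpasses):
\begin{verbatim}
import numpy as np

def kmeans_spectra(spectra, k, bands, seed=0):
    # Plain k-means: random initial centers drawn from the data, iterate
    # assignment and center update, stop when no spectrum is re-classified.
    rng = np.random.default_rng(seed)
    x = spectra[:, bands]            # restrict to the classification bandpasses
    centers = x[rng.choice(len(x), size=k, replace=False)]
    labels = np.full(len(x), -1)
    while True:
        # assign each spectrum to the closest center (least-squares sense)
        d = ((x[:, None, :] - centers[None, :, :])**2).sum(axis=2)
        new = d.argmin(axis=1)
        if np.array_equal(new, labels):
            break                    # converged: no spectrum re-classified
        labels = new
        # recompute each center as the mean of its members; drop empty classes
        keep = [j for j in range(len(centers)) if np.any(labels == j)]
        centers = np.array([x[labels == j].mean(axis=0) for j in keep])
    return labels, centers
\end{verbatim}
The resulting classes would afterwards be sorted by the number of members, as done above.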
\begin{deluxetable*}{cccccccc} \tablecaption{Properties of \qbcd\ classes and \bcd\ classes.\label{table1}} \tablehead{ \colhead{Galaxy} &\colhead{Class}&\colhead{Fraction\tablenotemark{a}}& \colhead{Stellar}& \colhead{Stellar Age\tablenotemark{c}} &\colhead{Nebular}& \colhead{H$\alpha$ EW\tablenotemark{e}} & \colhead{[N{\sc ii}] EW\tablenotemark{f}}\\ \colhead{Type}& &\colhead{[\%]} &\colhead{Metallicity\tablenotemark{b}}&\colhead{[Gyr]}& \colhead{Metallicity\tablenotemark{d}} & \colhead{[\AA]} & \colhead{[\AA]} } \startdata \vspace*{1mm} \qbcd& 0& 36.1&-0.44$\pm0.03$& 1.7$\pm 0.2$&-0.12$\pm0.18$& 22.3& 5.2 \\ \vspace*{1mm} & 1& 14.3&-0.39$\pm0.02$& 5.2$\pm 1.4$& 0.05$\pm0.18$& 5.3& 2.5 \\ \vspace*{1mm} & 2& 9.5&-0.32$\pm0.12$& 1.1$\pm 0.6$&-0.25$\pm0.18$& 50.4& 6.9 \\ \vspace*{1mm} & 3& 8.5&-0.33$\pm0.02$&11.1$\pm 1.7$&\nodata&\nodata&\nodata\\ \vspace*{1mm} & 4& 5.3&-0.36$\pm0.08$& 1.0$\pm 0.1$&-0.26$\pm0.18$& 14.5& 1.9 \\ \vspace*{1mm} & 5& 3.5&-0.68$\pm0.28$& 1.1$\pm 3.0$&-0.40$\pm0.18$& 99.9& 7.5 \\ \vspace*{1mm} & 6& 3.4&-0.44$\pm0.11$& 1.1$\pm 0.2$&-0.25$\pm0.18$& 31.6& 4.4 \\ \vspace*{1mm} & 7& 3.1&-0.42$\pm0.04$& 4.1$\pm 1.2$& 0.05$\pm0.18$& 8.4& 3.9 \\ \vspace*{1mm} & 8& 2.5&-0.37$\pm0.07$& 2.7$\pm 1.7$&-0.01$\pm0.18$& 27.6& 10.0 \\ \vspace*{1mm} & 9& 2.0&-0.30$\pm0.02$&17.8$\pm 0.4$&\nodata&\nodata&\nodata\\ \tableline\vspace{-1.5mm}\\ \bcd& 0& 46.9&-0.33$\pm0.08$& 0.9$\pm 0.1$&-0.38$\pm0.18$& 81.2& 6.7 \\ \vspace*{1mm} & 1& 12.7&-0.34$\pm0.22$& 1.0$\pm 0.8$&-0.41$\pm0.18$&160.2& 11.7 \\ \vspace*{1mm} & 2& 7.9&-0.37$\pm0.40$& 0.9$\pm 2.5$&-0.49$\pm0.18$&241.6& 12.4 \\ \vspace*{1mm} & 3& 7.4&-0.39$\pm0.17$& 1.1$\pm 0.8$&-0.37$\pm0.18$&125.9& 10.7 \\ \vspace*{1mm} & 4& 6.5&-0.41$\pm0.21$& 0.9$\pm 0.6$&-0.51$\pm0.18$&142.0& 6.5 \\ \vspace*{1mm} & 5& 3.8&-0.34$\pm0.15$& 1.0$\pm 0.4$&-0.35$\pm0.18$& 93.7& 8.8 \\ \vspace*{1mm} & 6& 3.5&-0.34$\pm0.54$& 1.1$\pm 4.6$&-0.48$\pm0.18$&333.0& 17.1 \enddata \tablecomments{It includes those classes containing 90\% of the galaxies.} \tablenotetext{a}{Percentage of galaxies represented by the class.} \tablenotetext{b}{In logarithmic scale, referred to the solar metallicity. Errors from the Monte Carlo analysis in \S~\ref{metalic}.} \tablenotetext{c}{Errors from the Monte Carlo analysis in \S~\ref{metalic}.} \tablenotetext{d}{In logarithmic scale, referred to the solar metallicity. Its error has been taken from \citet{pet04}.} \tablenotetext{e}{Equivalent width of H$\alpha$. No data implies line in absorption.} \tablenotetext{f}{Equivalent width of [N{\sc ii}]~$\lambda$6583. No data implies line in absorption.} \end{deluxetable*} \begin{figure} \includegraphics[width=0.35\textwidth,angle=90]{f1.ps} \caption{(a) Histogram with the number of \qbcd\ galaxies corresponding to each class. (b) Normalized cumulative histogram, i.e., fraction of \qbcd\ galaxies from Class~0 up to each class~\#. It is given in percent. Note that the first ten classes include 90~\% of the \qbcd s. } \label{QBCDnumbers} \end{figure} \begin{figure*} \includegraphics[width=0.7\textwidth,angle=90]{f2.ps} \caption{Average spectra of the nine most abundant \qbcd\ classes. Together with the class identifier, the insets of the figures give the number of galaxies in the class. Wavelengths are in $\mu {\rm m}$, and the range of ordinates is different for the different plots. All spectra are normalized to the flux in the $g$ filter.
The dotted line shown together with Class~0 indicates the band-passes used to classify spectra (i.e., those wavelengths where it is zero were disregarded for classification). } \label{QBCDclasses} \end{figure*} The scatter among the spectra belonging to a class depends on wavelength, and it is largest in the intense emission lines. As illustrated in Fig.~\ref{classrms} with the Class~0 spectrum, it is of the order of 10\% in the spectral ranges with absorption lines, and it can be of the order of 50\% in the regions having strong emission lines (see the dashed line in Fig.~\ref{classrms}, which corresponds to the {\em rms} fluctuations among all the spectra in the class divided by the mean spectrum). \begin{figure*} \includegraphics[width=0.7\textwidth,angle=90]{f3.ps} \caption{ Expanded version of the spectrum of Class~0 in Fig.~\ref{QBCDclasses}. In addition to the spectrum itself (the solid line), the plot includes the $rms$ fluctuations among all the spectra in the class divided by the mean spectrum (the dashed line). The bandpasses used for classification are shown as a dotted line. The plot also labels several typical spectral features. Wavelengths are given in \AA . The large fluctuations between 6200\AA\ and 6300\AA\ are produced by telluric lines \citep[e.g., \S~4.2 in][]{gra09}. Their presence does not affect our fits (see the residuals at these wavelengths in Figs.~\ref{fits} and \ref{fitsb}). } \label{classrms} \end{figure*} The same classification procedure was also applied to the control set of \bcd s. Representative spectra of the most numerous classes are shown in Fig.~\ref{BCDclasses}, also sorted and labelled according to the number of galaxies in the class. The most conspicuous differences with respect to \qbcd\ spectra are the strength of the emission lines, and the (barely visible but always present) blue continua. The fraction of galaxies in each class is listed in Table~\ref{table1}. \begin{figure*} \includegraphics[width=0.7\textwidth,angle=90]{f4.ps} \caption{Same as Fig.~\ref{QBCDclasses} but for \bcd\ galaxies. } \label{BCDclasses} \end{figure*} \subsection{Green valley \qbcd s}\label{green_valley} One of the findings of the SDSS is the existence of a galaxy color sequence with well-defined bimodality \citep[e.g., ][]{bal04}. \paperi\ shows how \qbcd s occupy all the color sequence between the blue and the red clumps. It turns out that the classification has been able to separate galaxies in the red sequence, galaxies in the blue sequence, as well as those in between \citep[often referred to as {\em green valley} galaxies, e.g.,][]{sal07}. Figure~\ref{color_classes} shows color vs color scatter plots for the \qbcd s, the \bcd s, as well as the most usual \qbcd\ classes separately. It also includes the somewhat arbitrary boundary between the red and the blue sequences worked out in \paperi\ (the dashed line). \begin{figure*} \includegraphics[width=0.7\textwidth,angle=90]{f5.ps} \caption{ Color vs color scatter plots for the \qbcd s, the \bcd s, and the most usual \qbcd\ classes separately. The insets specify each galaxy group. The dashed line is the same in all plots, and it was worked out in \paperi\ to separate the blue and the red sequences. Note that most \qbcd s belong to the blue sequence. Almost all galaxies in the red clump correspond to Class~3. Moreover, Class~1 seems to gather most galaxies in the so-called {\em green valley} between the red and the blue sequences (cf.
the plots in the top left and the bottom right corners, which are identical except that Class~1 has been removed from the latter). } \label{color_classes} \end{figure*} The classification does a fair job in splitting the galaxies by color. Among the most numerous classes, it turns out that only Class~3 belongs to the red sequence. (Class 3 has no emission lines; see Fig.~\ref{QBCDclasses}.) In addition, we noticed that Class~1 seems to include all the {\em green valley} galaxies, i.e., the transition galaxies, central to understanding how and why galaxies move back and forth in the color sequence \citep[e.g.,][]{spr05,cat06}. The goodness of this green valley galaxy selection method can be appreciated by comparing the top left plot with the bottom right plot in Fig.~\ref{color_classes}. The two of them include the same \qbcd\ galaxies except for Class~1. A clear gap splits up the red and the blue clumps. Since different classes have different colors, and the \qbcd s present a clear color-(nebular)metallicity relationship (\paperi), different classes have different metallicities too. Scatter plots of metallicity vs color are shown in Fig.~\ref{metal_classes}. Here and throughout we use the recipe in \citet{pet04} to compute nebular metallicities from the N2 strong-line ratio (see also \S~\ref{gas_metal}). It upgrades the classical calibration by \citet[][]{den02} used in \paperi\ \citep[][and references therein]{shi05}. Late types (e.g., Class~0) are metal-poor as compared to the transition objects included in Class~1. These green valley galaxies have solar metallicity. The few galaxies in the red clump with emission lines (Class~7; see Fig.~\ref{QBCDclasses}) seem to have slightly super-solar metallicity~(Fig.~\ref{metal_classes}). \begin{figure} \includegraphics[width=0.49\textwidth]{f6.ps} \caption{ Scatter plot of oxygen abundance vs $g-r$ color for some representative \qbcd\ classes. The horizontal solid line corresponds to the solar metallicity as given by \citet{gre07}. Class~3 is not included because it lacks the emission lines needed to compute nebular abundances. } \label{metal_classes} \end{figure} The k-means clustering classification provides an automatic method to identify spectra of green valley galaxies. It works for \qbcd s; however, there is no clear reason why it should be restricted to them. It may be valid for any type of galaxy, even outside the particular set of dwarfs we are dealing with. We elaborate on this possibility in \S~\ref{conclusions}. \section{Determination of stellar metallicities and ages}\label{metalic} The stellar content of a galaxy can be studied through modeling and interpretation of the absorption features in the integrated spectrum. The analysis of line-strength indices, mainly those of the Lick system \citep{wor94,wor97}, has been the most common approach for studying the stellar metallicities and ages. Most studies have been focused on early-type galaxies, with stellar populations typically older than 1\,Gyr \citep[e.g., ][and references therein]{tra98}. The use of age-sensitive Balmer line indices, e.g., H$\beta$, and metallicity-sensitive indices, e.g., Mg$b$, allows us to lift in part the age-metallicity degeneracy affecting the stellar populations of early-type galaxies \citep[e.g.,][]{wor94,vaz99b}. However, these line strengths, and particularly the Balmer indices, are not optimal for analyzing our galaxy spectra, since they are filled-in with nebular emission (Fig.~\ref{QBCDclasses}).
Here we take a different approach, and ages and metallicities are estimated by direct comparison of the observed spectra with model spectra from stellar population syntheses. The full spectral range is used simultaneously. This alternative strategy has a respectable tradition \citep[see ][and references therein]{kol08}, and it allows us to easily overcome the problem of emission lines by masking them out. Emission lines represent only a small fraction of the spectral range, and the rest of the spectrum can be used to extract the required information (see below). We use an updated version of the models by \citet{vaz99a}, which provide spectral energy distributions of single-age, single-metallicity stellar populations (SSPs) on the basis of the stellar spectral library MILES \citep{san06,cen07}. The \miles\ spectra combine, according to a Salpeter initial mass function distribution, a suite of stellar spectra from 0.09\,$M_\odot$ to 100\,$M_\odot$. They have a resolution of $\simeq 2.3\,$\AA , a spectral range from 3540\,\AA\ to 7410\,\AA , and a dispersion of 0.9\,\AA\,pix$^{-1}$. \miles\ extends the range of ages of \citet{vaz99a}, and now it covers from 0.1\,Gyr to 17.8\,Gyr. \miles\ spectra span a range of metallicities\footnote{In the usual logarithmic scale referred to the solar metallicity $Z_\odot$, i.e., $\log(Z_s/Z_\odot)$ with $Z_s$ the fraction of mass in metals.} between $-1.7$ and $+0.2$. The grid includes 276 SSP spectra, with 46 samples equispaced in logarithmic time, and 6 steps in metallicity. The range of metallicities and ages matches well the values to be expected for \qbcd s (see \S~\ref{introduction}). As far as the wavelength sampling and wavelength coverage are concerned, \miles\ spectral resolution is comparable to SDSS (although better), but it misses the reddest $1800~$\AA\ of the SDSS spectral range. The uncovered 20\% of the SDSS spectral range is in the near IR, where the number of spectral lines decreases significantly. Keeping in mind all these variables, \miles\ meets our needs very well. The fits are carried out by direct comparison of each average profile representative of a class with all spectra in the \miles\ library, smeared to three spectral resolutions (the original one, the original one plus 2.5~\AA , and the original one plus 3.5~\AA). Considering various broadenings is required to account for stellar motions, as well as for the difference of spectral resolution between MILES SSP and SDSS. The observed spectrum is compared with each synthetic spectrum, and the closest one in a least-squares sense is chosen as the best fit. The comparison was carried out with a few constraints which try to minimize potential biases. (1)~A 100\,\AA\ running-box mean of the original spectra was removed from observed and synthetic spectra. By removing the continua, the results of the fits are not very sensitive to the extinction, a miscalibration of the spectra, the uncorrected differential refraction \citep{izo06}, and so on, which affect the continua but not so much the relative intensity of adjacent spectral lines. Moreover, it guarantees that ages and metallicities are inferred from spectral lines, with negligible contribution from the global shape of the continuum.
(2) We assume the observed spectra $O_i$ to be a linear combination of a starburst spectrum $N_i$ plus a stellar spectrum $S_i$, \begin{equation} O_i=\alpha N_i+\beta S_i, \label{first_def} \end{equation} with the subscript $i$ representing the $i$-th wavelength, and $\alpha$ and $\beta$ being two scaling constants. The starburst spectrum has strong emission lines and little continuum, therefore, one could simply neglect the core of the emission lines when carrying out the fits (i.e., one could mask out the emission lines and set $N_i=0$ in equation~[\ref{first_def}]). Here we go a step further so that the (small) contamination by $N_i$ outside emission lines is estimated and subtracted out. The decontamination procedure works as follows. At the emission line cores of $O_i$, the spectrum is dominated by $N_i$ so that, \begin{equation} O_i\ (1-w_i)=(1-w_i)(\alpha N_i+\beta S_i)\simeq (1-w_i)\,\alpha\, N_i, \label{second_def} \end{equation} with $w_i$ a properly chosen weight which is zero in the emission cores and one elsewhere, i.e., \begin{equation} w_i=\cases{0& emission lines,\cr 1 & elsewhere.} \end{equation} Using the different classes of \bcd\ spectra as proxies for $N_i$, we choose for each $O_i$ the $N_i$ \bcd\ spectrum that minimizes the appropriate merit function, \begin{equation} \chi^2=\sum_i\big[\big(O_i-\alpha N_i\big)^2\,(1-w_i)^2\big], \label{merit_lines} \end{equation} with \begin{equation} \alpha=\big[\sum_i\,O_i\,N_i\,(1-w_i)^2\big]\big/\big[\sum_jN_j^2\,(1-w_j)^2\big], \end{equation} the latter being just a least squares estimate of the scaling factor that best fits the emission lines of $O_i$ once $N_i$ is given. Note that the weight $(1-w_i)$ in equation~(\ref{merit_lines}) ensures that only the emission lines contribute to $\chi^2$. The $N_i$ and $\alpha$ thus derived allow us to compute the observed spectrum corrected for emission, $O^*_i$, \begin{equation} O_i^*=O_i-\alpha N_i=\beta S_i. \label{correction} \end{equation} The best fitting \miles\ spectrum is obtained by repeating the same procedure with $O^*_i$ but masking out the emission lines, i.e., defining the merit function for each \miles\ spectrum $S_i(t,Z_s)$ as, \begin{equation} \chi^2(t,Z_s)=\sum_i\big[\big(O^*_i-\beta S_i(t,Z_s)\big)^2\,w_i^2\big], \label{def_chi2} \end{equation} where \begin{equation} \beta=\big[\sum_i\,O^*_i\,S_i(t,Z_s)\,w_i^2\big]\big/\big[\sum_jS_j(t,Z_s)^2\,w_j^2\big]. \end{equation} The expressions explicitly include the dependence of the synthetic spectrum on the age of the starburst, $t$, and the stellar metallicity, $Z_s$, i.e., $S_i(t,Z_s)$. The weight $w_i$ in equation~(\ref{def_chi2}) cancels out the contribution of the emission lines, rendering the correction~(\ref{correction}) of secondary importance. The weights $w_i$ are assigned so as to cover the emission line cores observed in \bcd\ spectra. Examples of these weights are the (thin) solid lines in Figs.~\ref{fits} and \ref{fitsb}. The positions of the minima of these broken lines mark the wavelengths discarded from the fits. The fitting procedure described above was also applied to \bcd\ spectra. In this case we cannot correct for the starburst since \bcd s are used as template starburst spectra. We just mask out the emission lines and force $\alpha=0$ in equation~(\ref{correction}). \begin{figure*} \includegraphics[width=0.7\textwidth,angle=90]{f7.ps} \caption{ Observed average spectrum of \qbcd\ Class~0 (the thick solid line) and best fitting \miles\ synthetic spectrum (the dotted line).
The dashed line corresponds to the residuals vertically shifted by an arbitrary amount. Panels (a) and (b) show the full spectral range, whereas (c), (d), (e) and (f) zoom into details to appreciate the goodness of the fit. The weights of the fits are represented on their own scale as a thin solid line, with the minima corresponding to no contribution, i.e., to weight equals zero. The weight goes out of scale in (a) indicating the wavelengths of the three Lick metallic indexes overweighted during fitting ({\tt Fe4383}, {\tt Fe4531}, and {\tt Fe4668}). The continuum has been subtracted from both the observed and the synthetic spectra. } \label{fits} \end{figure*} \begin{figure*} \includegraphics[width=0.7\textwidth,angle=90]{f8.ps} \caption{ Same as Fig.~\ref{fits} except that the spectrum corresponds to Class~3 \qbcd , i.e., the class corresponding to the red sequence galaxies without significant emission lines. } \label{fitsb} \end{figure*} As judged from visual inspection, the best fitting model spectrum reproduces very well the observed spectra; see, e.g., Fig.~\ref{fits}. Error bars cannot be assigned using traditional methods based on the Hessian matrix of $\chi^2$, since we do not have a continuous function $\chi^2(t,Z_s)$ to compute the partial derivatives with respect to age and metallicity. We resort to Monte~Carlo simulations to assign confidence intervals. Gaussian noise with the standard deviation of the residuals is added to the best fitting model spectrum. This mock observation is analyzed as the real observation to get an age and a metallicity which, in general, differ from those of the best fitting model spectrum. The procedure is repeated 1000 times, which provides a range of ages and metallicities consistent with the best fitting model spectrum and the residual of the fit. Confidence intervals thus assigned reveal a serious problem of degeneracy in the metallicity estimate. The standard deviation of the most common \qbcd\ Class~0 turns out to be 0.25~dex, which allows for any metallicity between $-0.20$ and $-0.70$. Such degeneracy in metallicity is the young stellar population equivalent of the well known age-metallicity degeneracy appearing in early-type galaxy dating \citep[e.g.,][]{wor94b}. We managed to sort out the degeneracy problem by overweighting the contribution of those spectral bandpasses that are known to be particularly sensitive to metallicity. Specifically, the weights in the definition~(\ref{def_chi2}) are now, \begin{equation} w_i=\cases{0& emission lines,\cr W& metallicity sensitive bandpasses,\cr 1 & elsewhere,} \end{equation} with $W > 1$ for overweighting. The bandpasses were selected from the Lick index system \citep{wor94}, which was specifically designed for estimating ages and metallicities in the integrated light of stellar populations. In order to find out which Lick indexes are most sensitive to metallicity in our domain of ages, we computed the variation of the indexes with metallicity at a given age. Some results for the \miles\ library are shown in Fig.~\ref{index_selection}. Among the 21 indexes defined by \citet{wor94}, we select the three indexes in the top row because they show the largest variation with metallicity. Figure~\ref{index_selection}, bottom row, also includes three other indexes commonly used in metallicity studies of early-type galaxies \citep[e.g., they are combined to form the so-called {$[$MgFe$]$} index; see ][]{gon93,tho03}.
The range of variation is clearly smaller than that of the indexes that we select. The bandpasses of the three selected indexes, {\tt Fe4383}, {\tt Fe4531}, and {\tt Fe4668}, are indicated in Figs~\ref{fits}a and \ref{fitsb}a as the wavelengths where the thin solid line representing the weight $w_i$ goes out of scale. \begin{figure} \includegraphics[width=0.35\textwidth,angle=90]{f9.ps} \caption{Variation with metallicity of various Lick indexes in \miles\ spectra. Each curve of each plot corresponds to a constant age. We only show young populations, with ages between 0.5~Gyr (the curves of smallest equivalent widths) and 2~Gyr (the curves of largest equivalent widths). The wavelength ranges corresponding to the indexes in the top row are overweighted in our fits to break the metallicity degeneracy. Other commonly used indexes are discarded because they present less dependence on metallicity for this range of ages; see the bottom row. The index name can be found in the ordinate axis labels. The corresponding bandpasses are defined in \citet{wor94}. Metallicities are given in a logarithmic scale referred to the solar metallicity. } \label{index_selection} \end{figure} We tried various overweights ($W=10, 20, 50$ and $100$), finally choosing $W=50$, since the trial fits indicate that the inferred metallicity does not depend on the actual weight when the weights are large enough. The use of these overweights improves the metallicity estimate to a large extent. We repeated the Monte Carlo simulation described above, and the random errors of \qbcd\ Class~0 decrease by almost an order of magnitude with respect to the case where $W$ was set to one. Table~\ref{table1} includes the standard deviation for the ages and metallicities of all major \qbcd\ and \bcd\ classes. They will be employed as 1$\,\sigma$ errors in the discussions throughout the paper. The small value of these random errors has been independently corroborated by a bootstrap error estimate \citep[e.g.,][]{moo03}. A caveat is in order, though. These small errors only indicate that the best fitting \miles\ spectrum is well defined, i.e., among the \miles\ set, a few spectra reproduce the observation clearly better than the rest. Our procedure does not account for systematic errors, which may dominate the error budget (e.g., is an SSP a good description of our galaxies?). The magnitude of the systematic errors is unknown, and it is ignored in our discussions. \subsection{Self-consistency of the continua}\label{continua2} A running mean average was subtracted from both the observed and the model spectra to minimize the influence of miscalibrations (\S~\ref{metalic}). Our fits are therefore virtually blind to the galaxy continua. The question arises as to whether the ages and metallicities thus derived are consistent or not with the observed galaxy continua. In order to assign a continuum to the model spectra, one has to bring out the continuum information removed when subtracting the running mean average.
We do it by parameterizing the relationship between the actual observed spectrum, $o_i$, and the model we fit, $m_i$, including the biases that the subtraction of a continuum removes, i.e., \begin{equation} o_i=m_i\,10^{-(A_i-A_0)/2.5}+\kappa_i, \label{dust} \end{equation} where $A_i$ corresponds to extinction by dust\footnote{Both internal and due to our Galaxy, since the two of them add up when dealing with low-redshift targets.} defined as usual \citep[e.g.,][]{car89}, and $\kappa_i$ accounts for other possible differences not included in the model. (Recall that the subscript $i$ parameterizes the variation with wavelength.) Equation~(\ref{dust}) also assumes that the model spectrum includes a gray extinction given by $A_0$ (incorporated into the global scaling factor $\beta$ that we use for fitting; see equation~[\ref{def_chi2}]). As we will show, the expression~(\ref{dust}) is fully consistent with equation~(\ref{first_def}) if one removes a running mean average of the observed spectrum, $\langle o_i\rangle$. The spectrum we fit ($O_i$ in equation~[\ref{first_def}]) is \begin{equation} O_i=o_i-\langle o_i\rangle\simeq (\alpha N_i+\beta S_i)\,10^{-(A_i-A_0)/2.5}, \label{dust2} \end{equation} with the model galaxy spectrum given by \begin{equation} \alpha N_i+\beta S_i=m_i - \langle m_i\rangle. \end{equation} We have employed equation~(\ref{dust}) assuming that $A_i$ and $\kappa_i$ do not vary within the kernel that defines the running mean (i.e., $\langle A_i\rangle=A_i$, and $\langle \kappa_i\rangle=\kappa_i$). Neglecting in equation~(\ref{dust2}) terms of the order of, \begin{equation} (\alpha N_i+\beta S_i)\,(A_i-A_0), \end{equation} one ends up with, \begin{equation} O_i\simeq \alpha N_i+\beta S_i, \label{nodust} \end{equation} which is the approximation used for fitting (equation~[\ref{first_def}]). Within this approximation, one can re-write equation~(\ref{dust}) as, \begin{equation} o_i\simeq \alpha N_i+\beta S_i + \langle o_i\rangle, \label{lsfit} \end{equation} where \begin{equation} \langle o_i\rangle =\big[1-(A_i-A_0)/(2.5\log{\rm e})\big]\, \langle m_i\rangle + \kappa_i, \label{tbadded} \end{equation} is the term to be added to the best fitting synthetic spectrum, $\alpha N_i+\beta S_i $, to recover the observed spectrum with continuum, $o_i$. Equation~(\ref{tbadded}) allows us to estimate both $A_i$ and $\kappa_i$ and, consequently, to complete the best fitting model with its continuum. For lack of a better assumption, we regard $\kappa_i$ as independent of wavelength. In addition, the wavelength dependence of $A_i$ is assumed to be known except for a scaling factor, parameterized as the extinction in the Johnson $V$ band, $A_V$. The ratio $A_i/A_V$ is assumed to follow the Milky Way law by \citet{car89}, modified according to \citet{mis99} to represent the Large Magellanic Cloud, which we use as a proxy for the extinction law at low metallicity. (The conclusions below remain even if one directly takes the Milky Way extinction law.) Then the constants $\kappa_i$ and $A_V$ can be retrieved from a linear least squares fit using equations~(\ref{lsfit}) and (\ref{tbadded}) since $o_i$, $m_i$, $\langle m_i \rangle$ and $A_i/A_V$ are all known, and one can regard $A_0$ as the (wavelength) average extinction. The comparison between observations and model spectra including continuum for the first nine \qbcd\ classes is shown in Fig.~\ref{continua}. (The emission lines have been artificially taken out to better appreciate differences between observed and model continua.)
$A_V$ is forced to be non-negative, so that if a (small) negative number is found in an unconstrained fit, it is automatically set to zero. \begin{figure*} \includegraphics[width=0.7\textwidth,angle=90]{f10.ps} \caption{Similar to Fig.~\ref{QBCDclasses}, except for the ordinate scale, magnified to show the differences between the observed continua (the solid lines) and those inferred from fitting absorption lines (the dotted lines). The emission lines have been taken out to avoid overcrowding the plots. The insets provide the class number of each spectrum together with the extinction coefficient $A_V$. } \label{continua} \end{figure*} The agreement is good, in particular for the most numerous classes. Keep in mind that the fitting procedure disregards continua; yet the observed and model continua match quite well. The agreement is found for low extinctions, of only a few tenths of a magnitude, $A_V=0.18\pm 0.27$. Moreover, the most populated classes are in the low extinction range of that interval, e.g., $A_V=0.04$ for Class~0 (see the labels in Fig.~\ref{continua}). \subsection{Why we do not use spectra of individual galaxies to estimate ages and metallicities}\label{individual} Only average spectra are used in our analysis. The insufficient signal-to-noise ratio of individual spectra prevents us from assigning ages and metallicities to individual galaxies. The reason stands out clearly from the error budget analysis in \S~\ref{metalic}. The {\em rms} fluctuations of the residuals of the fits are as small as 1.5\%\ (see Figs.~\ref{fits} and \ref{fitsb}) and, even in this case, the constraints they provide are quite loose. The individual SDSS spectra have S/N $\ga 4$, and this random error alone would raise the residual of any fit to an {\em rms} $\la 25$\%. This residual is some 15 times larger than the residuals of our fits, and such a large error would make our analysis completely unreliable. One spectrum is not sufficient. Putting the same idea in other words: if the errors of the stacked spectra are similar and independent, averaging at least some $\sim 15^2\simeq 225$ spectra is required to reach the kind of residual represented in Figs.~\ref{fits} and \ref{fitsb}. \section{Gas metallicities}\label{gas_metal} The gas (or nebular) metallicities of the different classes of \qbcd s and \bcd s are estimated using strong-line empirical calibration methods. Specifically, we primarily use the so-called N2~method as provided by \citet{pet04}. We cannot employ the more accurate $T_e$~estimate because the [OIII]$\lambda$4363 line required to compute electron temperatures \citep[see, e.g.,][\S~3.1]{izo06} is much too faint in \qbcd s. Since the use of strong-line methods is always controversial \citep[e.g.,][]{sta04,sta08,shi05}, we have studied some of the potential biases that may arise. The success of empirical calibrations resides in the agreement between the physical conditions of the calibration targets and those of the galaxies to be analyzed. As we discuss in App.~\ref{appa}, the calibration by \citet{pet04} holds for a fairly large range of conditions, broad enough to encompass the different physical conditions to be expected in \qbcd s and \bcd s. Moreover, the various available strong-line calibrations give consistent results when applied to our spectra. If the observed differences between the metallicities of \bcd s and \qbcd s (\paperi , but also \S~\ref{introduction} and the forthcoming paragraphs) were an artifact of using strong-line calibrations, different calibrations should show different biases.
However, they coherently show the \qbcd s to be more metal rich than the \bcd s. Figure~\ref{mainr} presents estimates based on N2 and O3N2 as calibrated by \citet[][]{pet04}. When applied to our spectra, the two of them agree within 0.1 dex; compare the squares (N2) and the asterisks (O3N2) in Fig.~\ref{mainr}. We have also tried the S23~index as calibrated by \citet{dia00}, giving results similar to N2 and O3N2\footnote{Other methods, like P23 and P \citep{shi05}, cannot be applied because some of the required emission lines lie outside the spectral range of the SDSS spectra.}. In short, the difference of gas metallicity between \qbcd s and \bcd s does not seem to be a bias caused by using the N2~method. We do not correct for dust extinction to derive metallicities. This approximation can be readily justified since our main calibration, N2, uses two spectral lines so close in wavelength that the correction for extinction is truly negligible ($\simeq 0.0006$~dex for a one magnitude extinction). We measure the mean extinction of \qbcd s to be very small (\S~\ref{continua2}), and \citet[][]{wu08} show that the metallicity measured in a few representative BCD starbursts is not biased by extinction. The gas metallicities thus obtained are absolute -- in the end, they are based on photo-ionization modeling, which relates the number of observed photons to the number of emitting atoms in the photoionized nebula \citep[e.g.,][]{sta04}. The stellar metallicity, however, is relative to the solar metallicity. In order to compare gas and stars, the gas metallicity must be normalized to the solar value. This normalization is delicate and may bias the comparison, particularly at this moment, when a major revision of the solar metallicity scale has occurred \citep{all01,asp05b,gre07} and has not yet been consistently implemented in modeling. For this reason, we feel compelled to discuss our use of the modern oxygen abundance for gas metallicity normalization, \begin{equation} 12+\log({\rm O/H})_\odot=8.66\pm 0.05, \label{new_solar_abu} \end{equation} despite the fact that the \miles\ spectra used in our stellar metallicity estimates are based on a stellar library whose metallicities date back to pre-revision days \citep{cen07,leb04,cay01}. The modification of the solar metallicity had to do with improved modeling -- NLTE effects and realistic 3D hydrodynamical model atmospheres have been incorporated into the analysis \citep{asp05b}. Since the observed solar spectrum has not been modified, the revision simply re-labelled it with a different metallicity. For the sake of argument, assume that the spectrum of one of our galaxies has solar metallicity. Then it has the metallicity corresponding to the oxygen abundance in equation~(\ref{new_solar_abu}), rather than the metallicity originally assigned to it. Consequently, the use of \miles\ spectra to estimate stellar metallicities is consistent with using equation~(\ref{new_solar_abu}) for the solar metallicity that normalizes the absolute gas metallicity inferred from emission lines. Table~\ref{table1} lists the relative N2 gas metallicities thus computed, together with the equivalent widths of the two lines used in that estimate (H$\alpha$ and [N{\sc ii}]~$\lambda$6583). \section{Gas metallicity vs stellar metallicity}\label{metal_vs_metal} In this section we compare the stellar metallicities worked out in \S~\ref{metalic} with the gas metallicities derived from emission lines in \S~\ref{gas_metal}.
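As a concrete illustration of the emission-line side of the comparison, the N2 and O3N2 estimates used above reduce to the linear calibrations of \citet{pet04} plus the normalization in equation~(\ref{new_solar_abu}). A minimal Python sketch follows; the flux variables are hypothetical placeholders rather than part of our actual reduction:
\begin{verbatim}
import numpy as np

def oh_n2_o3n2(f_nii, f_ha, f_oiii, f_hb):
    # Strong-line oxygen abundances, Pettini & Pagel (2004);
    # no extinction correction (negligible for N2, see text).
    n2 = np.log10(f_nii/f_ha)
    o3n2 = np.log10((f_oiii/f_hb)/(f_nii/f_ha))
    oh_n2 = 8.90 + 0.57*n2      # 12 + log(O/H), N2 method
    oh_o3n2 = 8.73 - 0.32*o3n2  # 12 + log(O/H), O3N2 method
    solar = 8.66                # equation (new_solar_abu)
    return oh_n2 - solar, oh_o3n2 - solar   # [O/H]
\end{verbatim}
The two return values correspond, respectively, to the squares and the asterisks of Fig.~\ref{mainr}.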
The scatter plot of gas metallicity vs stellar metallicity is shown in Fig.~\ref{mainr}. It includes all \qbcd\ classes except Class~3, which has no emission lines. Several features are notable. First, the stellar metallicities of the most representative Classes (0 and 1) are systematically smaller than the nebular metallicities\footnote{ Since these metallicities are referred to the solar value, we can directly compare the global metallicity by mass provided by the spectral fitting procedure with the oxygen metallicity by number inferred from emission lines. They correspond to the same quantity under the implicit assumption that the relative metal abundances of our targets follow the solar composition. This should be a good approximation for dwarf galaxies \citep[e.g.,][]{mic08}. }. This result holds even when the (large) error bars of our metallicity estimates are taken into account. Figure~\ref{mainr} includes the error bars assigned to the metallicities of Class~0, and it is clear that the retrieved stellar and nebular metallicities disagree. The stellar metallicity error bar has been taken from the Monte~Carlo simulations described in \S~\ref{metalic}, and is listed in Table~\ref{table1}. As for the emission line metallicity, we take the 0.18~dex inferred by \citet{pet04} from the dispersion of the N2-based metallicity when compared with the more precise $T_e$ method. (See also App.~\ref{appa}.) Error bars are similar for the other classes. They have not been included to avoid cluttering Fig.~\ref{mainr}, but they are listed in Table~\ref{table1}. Note that, contrary to the behavior of Classes~0 and 1, Classes~2 and 4 present similar stellar and nebular metallicities. This is not so much due to a change of stellar metallicity, but to a significant decrease of the nebular metallicity. The same kind of agreement at low metallicity occurs for all \bcd\ classes. Figure~\ref{mainrc} includes the scatter plot of nebular vs stellar metallicity for the \bcd\ classes (the solid star symbols). Stellar and nebular metallicities agree in this case, ruling out serious systematic errors biasing our conclusion. To be more precise, assuming that the systematic errors in \qbcd\ and \bcd\ metallicities are similar, the \qbcd s and the \bcd s tend to have the same stellar metallicity but different nebular metallicities. Taking the most numerous Class~0 to represent them (i.e., the largest symbols with error bars in Fig.~\ref{mainrc}), the stellar metallicities $Z_s$ of \qbcd s and \bcd s are similar, \begin{equation} \log(Z_s/Z_\odot)_\qbcd\simeq\log(Z_s/Z_\odot)_\bcd, \label{equal_metal} \end{equation} and also similar to the nebular metallicity of \bcd s, \begin{equation} [{\rm O/H}]_\bcd\simeq\log(Z_s/Z_\odot)_\bcd, \label{equal_metal_2} \end{equation} which differs from the nebular metallicity of \qbcd s, \begin{equation} [{\rm O/H}]_\qbcd \simeq[{\rm O/H}]_\bcd+0.35. \label{diff_metal} \end{equation} As usual, we have employed the notation where $[{\rm O/H}]=\log({\rm O/H})-\log({\rm O/H})_\odot$, and $Z_\odot$ stands for the solar metallicity. Equations~(\ref{equal_metal}), (\ref{equal_metal_2}), and (\ref{diff_metal}) combined yield \begin{equation} [{\rm O/H}]_\qbcd\simeq \log(Z_s/Z_\odot)_\qbcd + 0.35. \label{diff_metal_2} \end{equation} All the above identities have an uncertainty of the order of 0.2~dex, which is large but does not invalidate the trends.
\begin{figure} \includegraphics[width=0.49\textwidth]{f11.ps} \caption{Emission line based metallicity vs stellar metallicity for the set of QBCD classes. The size of the symbol indicates the number of galaxies represented by the class, as specified in the inset. Boxes and asterisks correspond to two different estimates of the emission line oxygen metallicity. The numbers inside the square symbols identify the major classes. Error bars for Class~0 are shown for reference, and they are similar to those of all major classes; see Table~\ref{table1}. } \label{mainr} \end{figure} \begin{figure} \includegraphics[width=0.49\textwidth]{f12.ps} \caption{Same as Fig.~\ref{mainr} but for both \qbcd s (the square symbols) and \bcd s (the star symbols). The dashed line shows the relationship for star-forming galaxies inferred from the works of \citet{tre04} and \citet{gal05} (see the main text). The size of the symbol scales with the number of galaxies in the class. Our estimate of the Class~0 error bars is shown for reference. } \label{mainrc} \end{figure} The dashed line in Figure~\ref{mainrc} shows the relationship between nebular and stellar metallicities corresponding to a large set of SDSS star-forming galaxies. We have inferred this relationship by combining the medians of the mass-metallicity relationships by \citet{tre04} and \citet{gal05}. \citet{tre04} derive metallicities from emission lines, whereas the \citet{gal05} metallicities refer to the luminosity-weighted stellar metallicity. The emission line metallicities are referred to the solar metallicity using the solar oxygen abundance employed by \citet{cha01}, whose code underlies the \citet{tre04} estimates. Figure~\ref{mainrc} shows how the nebular metallicities are systematically larger than the stellar metallicities for galaxies of the same mass, and this difference is similar to the one we find for \qbcd s (equation~[\ref{diff_metal_2}]). We take this systematic difference as a consistency test for our metallicity determinations, since both \citet{tre04} and \citet{gal05} derive metallicities using tools different from, and more elaborate than, the ones used here. If an unknown bias is causing the differences between nebular and stellar metallicities in \qbcd s, it does not seem to be due to our specific simplifying hypotheses. The error bars employed so far correspond to 1$\,\sigma$, or a 68\% confidence level. If we use 2$\,\sigma$ instead, we cannot rule out agreement between the gas and the stellar metallicities of \qbcd s \citep[in the case of N2 based nebular metallicities, 2$\,\sigma\simeq 0.41$~dex;][]{pet04}. Note, however, that the error bars used for the nebular metallicity are rather conservative. We find that the metallicities inferred from O3N2 and N2 agree (\S~\ref{gas_metal} and Fig.~\ref{mainr}), and the scatter of the O3N2 relationship found by \citet[][]{pet04} is significantly smaller than that of the N2 method (2$\,\sigma\simeq 0.25$~dex for O3N2). Moreover, the scatter found by \citet{pet04} corresponds to individual extragalactic H~{\sc ii} regions. Part of that scatter must be random in nature, and it cancels when averaging many different regions or, as we do, many different galaxies. Only the (unknown) systematic part of the error is relevant in our case. \section{Age of the stellar component}\label{ages_sect} The absorption line spectrum fitting procedure in \S~\ref{metalic} provides ages and metallicities for the stellar component of the galaxies.
Figure~\ref{age} shows the metallicity vs age scatter plot corresponding to the two sets of galaxies, \qbcd s and \bcd s. We find that \qbcd s are systematically older than \bcd s. \qbcd s have ages in excess of 1~Gyr, whereas all \bcd s have ages below (but close to) 1~Gyr. The case of \qbcd\ Class~3 is worth mentioning separately (the oldest age in Fig.~\ref{age}, of the order of 11~Gyr). It corresponds to the \qbcd\ galaxies without emission lines (Fig.~\ref{QBCDclasses}), which form the red clump of the color sequence (Fig.~\ref{color_classes}). These properties hint at Class~3 being early-type galaxies, and the age we find is also consistent with this possibility. This very old origin of the stellar population of Class~3 is well constrained according to our error analysis -- see Table~\ref{table1}. According to the conjecture we are examining in the paper (\S~\ref{introduction}), \qbcd s undergo successive starbursts that transform them into \bcd s for short periods \citep[lasting only 10 Myr or so; see, e.g.,][]{mas99}. The number densities of \bcd s and \qbcd s require the bursting phase to appear, statistically, every 0.3 Gyr. The fact that this timescale differs from the age we assign to the \qbcd s is not at odds with the conjecture. The stellar population we observe now has been produced not just during the last starburst, but during several bursts. It is the luminosity-weighted age of these populations that we have estimated, which must exceed the age of the last starburst. The fact that the age of the stellar population of \bcd s is smaller than, but not very different from, the age of \qbcd s adds to this picture. If \bcd s and \qbcd s are basically the same galaxies, but the \bcd s happen to be in a phase of enhanced star formation activity, then the underlying stellar populations must have similar properties. This turns out to be the case. The metallicities are similar (\S~\ref{metal_vs_metal} and equation~[\ref{equal_metal}]), and the \bcd\ stellar ages are shortened due to the strength of the current starburst. \begin{figure} \includegraphics[width=0.49\textwidth]{f13.ps} \caption{Stellar metallicity vs stellar age scatter plot for both \qbcd s (the square symbols) and \bcd s (the solid star symbols). Error bars for Class 0 \qbcd s and \bcd s are included. The size of the symbols codes the number of galaxies in the class, as in Fig.~\ref{mainr}. The range of the axes corresponds to the full range of ages and metallicities spanned by the \miles\ library. Note how the assigned ages and metallicities occupy a well-defined narrow region among the possible solutions. } \label{age} \end{figure} The fact that the age of the stellar population of \bcd s is much larger than the age of a typical starburst supports the idea that \bcd s are not forming stars for the first time; they have an underlying stellar population much older than a starburst. \section{Metal enrichment and Star Formation}\label{chemical} This section analyzes the difference between the observed nebular and stellar metallicities of \qbcd s. We will find that the nebular metallicity of \qbcd s seems to be much too high to be representative of the galaxy as a whole. Were it representative, the \qbcd\ galaxies would have been forming stars during the last few Gyr at an SFR far larger than observed. As we conjectured in \paperi , the metallicity of \qbcd s inferred from emission lines probably represents a small fraction of the galactic gas, locally contaminated by recent starbursts.
In principle, the difference between stellar metallicity and gas metallicity found in \S~\ref{metal_vs_metal} may be explained in terms of the chemical enrichment of the ISM during the time span between the formation of the stars and the present epoch. Assume that the chemical enrichment has followed a closed-box evolution. (The consequences of an open-box evolution will be discussed later on.) In this case the conservation of metals imposes the following constraint between the mass of stars $M_s$, the mass of gas $M_g$, the metallicity of the stars $Z_s$, the metallicity of the gas $Z$, and the yield $y$, \begin{equation} Z\,M_g + Z_s\, M_s = y\, M_s+ Z_0\, M. \label{chem1} \end{equation} (We will use, without explicit citation, well-known results from the theory of chemical evolution of galaxies; see, e.g., \citeauthor{tin80}~\citeyear{tin80}; \citeauthor{pag97}~\citeyear{pag97}.) The left-hand side of equation~(\ref{chem1}) gives the amount of metals now existing in stars and gas, which is equal to the metals created by stars, $y\, M_s$, plus the metals existing at the beginning of the starburst, $Z_0\, M$, where $M$ stands for the total mass in the star-forming closed box, and $Z_0$ represents the initial metallicity of the gas. Equation~(\ref{chem1}) can be rewritten in a more convenient form, \begin{equation} \mu={{y+Z_0-Z_s}\over{y+Z-Z_s}}, \label{chem2} \end{equation} with $\mu$ the mass fraction of gas, \begin{equation} \mu=M_g/M. \end{equation} Also from the theory of closed-box evolution, \begin{equation} Z-Z_0=-y\ln\mu , \label{chem3} \end{equation} and \begin{equation} \mu=1-{{(1-R)\,\sfr\,t}\over{M}}, \label{chem4} \end{equation} with \sfr\ the average star formation rate during the past time interval $t$, and $R$ the fraction of stellar mass that returns to the ISM rather than being locked into stars and stellar remnants. Equations~(\ref{chem2}) and (\ref{chem3}) combined provide the difference of metallicity between gas and stars, \begin{equation} Z-Z_s=-y\,\big[{{\ln\mu}\over{1-\mu}}+1\big]. \label{chem5} \end{equation} In principle, the mass of the starburst $M$ is unknown, but our ignorance can be parameterized using a scaling factor $f$ between $M$ and the present stellar mass of the galaxy, $M_*$, i.e., \begin{equation} M=f\,M_*. \label{chem6} \end{equation} Although it is not a primary observable, the stellar mass content of a galaxy can be inferred from its observed luminosity and color. Using the calibration modeled by \citet{bel01}, the color transformations between Johnson colors and SDSS colors by \citet{jes05}, and the typical colors of the \qbcd\ candidates in \paperi , one finds a relationship between the stellar mass in solar mass units and the absolute magnitude in the SDSS $g$ band, \begin{equation} \log(M_*/M_\odot)\simeq -0.50\, g+0.35\,. \label{sfr1} \end{equation} The parameters $R$ and $y$ are constrained by stellar evolution models, so that $R\simeq 0.2$ \citep[e.g.,][]{tin80,pag97,apa04} and, for oxygen, $y\simeq 4\times 10^{-3}$ \citep[][and references therein]{dal07}. Then, given the age, the \sfr , and the mass (i.e., $f$) of a starburst, equations~(\ref{chem4}), (\ref{chem5}), and (\ref{chem6}) allow us to predict the difference of metallicity between gas and stars, which is the quantity measured in \S~\ref{metal_vs_metal}. Consequently, we can use them to estimate the mass of the starburst and/or the \sfr\ required to explain the observed metal enrichment of the gas. This is what we do next.
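The closed-box bookkeeping of equations~(\ref{chem4})--(\ref{chem6}), together with the mass calibration~(\ref{sfr1}), is simple enough to be written down explicitly. A minimal Python sketch (function and variable names are ours, and the inputs are illustrative):
\begin{verbatim}
import numpy as np

def closed_box_dz(sfr, t, g_abs, f, R=0.2, y=4e-3):
    # sfr   : average star formation rate [Msun/yr]
    # t     : duration of star formation [yr]
    # g_abs : absolute SDSS g magnitude, setting M* via eq. (sfr1)
    # f     : starburst mass in units of M*, eq. (chem6)
    # R     : returned mass fraction; y : oxygen yield
    m_star = 10.0**(-0.50*g_abs + 0.35)      # eq. (sfr1), in Msun
    m_total = f*m_star                       # eq. (chem6)
    mu = 1.0 - (1.0 - R)*sfr*t/m_total       # eq. (chem4)
    # valid while mu > 0 (gas not exhausted)
    return -y*(np.log(mu)/(1.0 - mu) + 1.0)  # eq. (chem5), Z - Z_s
\end{verbatim}
Scanning $f$ downward at fixed \sfr\ and $t$ increases the predicted $Z-Z_s$, which is the behavior exploited below in Fig.~\ref{deltaz}.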
One can estimate the present \sfr\ of the \qbcd\ galaxies from their H$\alpha$ emission. We have used the prescription in \citet[][]{ken98}, which gives the SFR as a function of the H$\alpha$ luminosity. We derive the luminosity from the observed H$\alpha$ equivalent width, and the absolute luminosity of the galaxies in the SDSS $r$ bandpass, approximately centered at the H$\alpha$ wavelength. The transformation between magnitudes and fluxes has been carried out keeping in mind that the SDSS color system is an AB system \citep{smi02}, which yields \begin{equation} {\rm SFR}\simeq \gamma\,{{W_{{\rm H}\alpha}}\over{100\,{\rm \AA}}} 10^{-0.4(r+19.0)}\, M_\odot\,{\rm yr^{-1}}, \label{present_sfr} \end{equation} with $W_{{\rm H}\alpha}$ the equivalent width in \AA , $r$ the integrated absolute $r$ magnitude, and $\gamma$ the fraction of the galactic light contributed by the starburst.\footnote{Note that the SDSS equivalent width corresponds to a spectrum taken at the center of the galaxy (\S~\ref{spectra}). Using the integrated luminosities to estimate H$\alpha$ fluxes assumes that the star-forming burst observed at the galaxy core extends evenly throughout the galaxy. The factor $\gamma$ accounts for the case where the starburst affects only a fraction $\gamma$ of the galaxy.} Figure~\ref{sfr_uno} shows the \sfr s inferred from equation~(\ref{present_sfr}) for the individual \qbcd\ galaxies in Class~0 (the square symbols). We have used $\gamma=0.5$ as a reasonable upper limit for the extent of the starburst, but using 0.1 or 1 does not change any of the conclusions discussed below. Typically, the \sfr\ is $0.1\,M_\odot\,$yr$^{-1}$ for the brightest \qbcd s, i.e., when $g\simeq -18.5$. Figure~\ref{sfr_uno} also includes the \sfr\ of \bcd s -- in this case $\gamma=1$, to acknowledge that the starburst is spread out over the whole galaxy. \begin{figure} \includegraphics[width=0.49\textwidth]{f14.ps} \caption{Star Formation Rate (SFR) vs SDSS $g$~absolute magnitude. Both \qbcd\ galaxies (Class~0; the square symbols) and \bcd\ galaxies (full set; the times symbols) are included. The dashed line represents the average \qbcd\ \sfr , i.e., $\sfr\simeq 0.25\cdot (M_*/10^{10} \,M_\odot)\,M_\odot\,{\rm yr}^{-1}$. The solid line corresponds to a \sfr\ 50 times larger. The stellar mass of the \qbcd\ galaxies derived from the $g$~magnitude is also included in the scale on top of the figure. } \label{sfr_uno} \end{figure} Figure~\ref{deltaz} shows the difference between the nebular and stellar oxygen metallicities predicted by the closed-box evolution of Class~0 \qbcd\ galaxies. We assume $t$ to be the age of the stellar population derived in \S~\ref{ages_sect}. Equations~(\ref{chem4}), (\ref{chem5}), (\ref{chem6}), (\ref{sfr1}), and (\ref{present_sfr}) were used with $f=2$ (the square symbols) and $f=0.04$ (the times symbols). The case $f=2$ represents a galaxy-wide starburst able to pollute with metals the whole galactic gas. ($f=2$ assumes the same amount of mass in gas as in stars, which is reasonable for low surface brightness dwarf galaxies like our \qbcd s. According to \citet{sta92}, they have one $M_\odot$ of H{\sc i} gas per solar luminosity, which corresponds to $f\simeq 3$.) In this case the predicted difference of metallicity is too low to account for the observed difference (\S~\ref{metal_vs_metal}), which is represented in Figure~\ref{deltaz} as a horizontal solid line.
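For completeness, the present-day \sfr\ entering this comparison is a one-line transcription of equation~(\ref{present_sfr}); a sketch in the same illustrative spirit as above:
\begin{verbatim}
def halpha_sfr(w_ha, r_abs, gamma=0.5):
    # w_ha  : H-alpha equivalent width [Angstrom]
    # r_abs : integrated absolute SDSS r magnitude
    # gamma : fraction of the galaxy involved in the burst
    return gamma*(w_ha/100.0)*10.0**(-0.4*(r_abs + 19.0))  # Msun/yr
\end{verbatim}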
The amount of metals produced at the current \sfr\ during the age of the starburst is insufficient to effectively contaminate the whole ISM of the galaxies. If, on the other hand, the same starburst pollutes only a small fraction of the galactic gas ($f=0.04$, a factor of 50 smaller than the previous case), then the predicted and the observed metallicities agree (Fig.~\ref{deltaz}, the times symbols). Consequently, the observed $Z-Z_s$ can be explained if the gas from which we infer the metallicity represents only a small fraction of the total galactic gas. \begin{figure} \includegraphics[width=0.49\textwidth]{f15.ps} \caption{Difference between nebular and stellar metallicities ($Z-Z_s$) predicted if the chemical evolution of Class~0 \qbcd s follows a closed-box model. The difference is plotted vs the absolute $g$ magnitude (bottom axis) and vs the stellar mass of the galaxy ($M_*$, top axis). The horizontal solid line corresponds to the observed difference, which can be reproduced only if a small fraction of the galactic gas is polluted with metals (see the main text). } \label{deltaz} \end{figure} Agreement between observed and model metallicities can also be reached if the \qbcd\ galaxies had an average \sfr\ during $t$ much larger than the present one. The predicted closed-box evolution depends on the ratio $\sfr/f$ rather than on \sfr\ and $f$ separately, and a decrease of $f$ is equivalent to an increase of \sfr ; see equations~(\ref{chem4}) and (\ref{chem6}). However, the required increase of \sfr\ is too high to be sensible. It would have to be 50 times larger than the observed one. This level of {\em continuous} star formation activity during the last two Gyr is unreasonably high; it is shown as a solid line in Fig.~\ref{sfr_uno} and corresponds to $\sim 5\,M_\odot\,$yr$^{-1}$ for the brightest \qbcd\ galaxies. It is larger than the already large \sfr\ observed in our \bcd\ galaxies (see Fig.~\ref{sfr_uno}, the times symbols). So far our argument has assumed closed-box chemical evolution. If the box is opened, both infall of (low-metallicity) gas from the intergalactic medium and outflows of metal-rich SN ejecta are to be expected \citep[e.g.,][and references therein]{gar02,dal07}, and both processes reduce the metallicity of the gas with respect to the predictions of the closed-box model. It then becomes even more difficult to explain the observation as a global chemical enrichment. An approximate way of treating this open-box evolution with closed-box equations consists of using effective yields inferred from observations, rather than the yield predicted by stellar evolution models \citep[e.g.][]{dal07}. These effective yields are smaller than true yields, and the difference increases with decreasing galaxy mass (probably due to the shallower gravitational potential of the galaxy). Actually, the deviations are particularly large for dwarf galaxies like our \qbcd s \citep[e.g.,][]{gar02,dal07}. If the yield $y$ is reduced, then $Z-Z_s$ decreases too -- see equation~(\ref{chem5}), keeping in mind that $\mu$ is fixed by equation~(\ref{chem4}). In short, relaxing the assumption of closed-box chemical evolution does not invalidate our conclusion, namely, that the metallicity we infer from the emission lines traces only a small fraction of the total galactic gas. So far we have put aside the behavior of the \bcd s. According to Fig.~\ref{mainrc}, they have the same nebular and stellar metallicities within error bars.
In addition, their stellar populations have ages slightly smaller than 1\,Gyr (Fig.~\ref{age}). These two properties can be easily explained if the \bcd\ galaxies are \qbcd s experiencing a major but short starburst involving fresh, well-mixed galactic gas. In this case, the observed \bcd\ SFR is significantly larger than the average SFR during the age of the stellar population. Then the chemical evolution model to be applied has $M\sim M_*$ and a \sfr\ of the order of the \sfr~of~\qbcd s, i.e., a model similar to the case labelled as $2\,M_*$ in Fig.~\ref{deltaz}. The model predicts a $Z-Z_s$ at least one order of magnitude smaller than the difference observed in \qbcd s and, therefore, in agreement with the lack of metal enrichment observed in \bcd s (the stars in Fig.~\ref{mainrc}). \section{Conclusions}\label{conclusions} We analyze the metallicity of \qbcd s, i.e., galaxies that may be Blue Compact Dwarfs (\bcd s) during their periods of quiescence, when the major starburst characteristic of \bcd s is not so dominant. The \qbcd\ candidates were selected in \paperi\ from SDSS/DR6, where we also separated a reference sample of \bcd s. The metallicity inferred from the emission lines of these \qbcd s turned out to exceed the metallicity of the \bcd s, a puzzling result if \bcd s descend from \qbcd s. Here we study whether the metallicity inferred from the emission lines of \qbcd s may not be representative of the full galactic gas, but instead reveals local enrichment by recent starbursts. In this case the metallicities of the gas and of the stars must differ significantly. The work is based on SDSS/DR6 spectra, whose signal-to-noise ratio is not sufficient for measuring stellar metallicities from the absorption lines of individual spectra. We improve the original signal-to-noise ratio by stacking observed spectra that are alike. The grouping of similar spectra was carried out by classifying the 21493 \qbcd\ galaxies using an automatic {\em k-means} classification procedure (\S~\ref{class}). The algorithm yields a small number of types of spectra, or {\em classes}, with the first ten classes containing 90\% of all spectra, and the most numerous, Class~0, having more than 36\% of them. As a by-product, this classification scheme provides a selective technique to identify galaxies in various states within the color sequence. In particular, one of our classes seems to contain only transition galaxies in the so-called {\em green valley} (Class~1, \S~\ref{green_valley}), whereas another class includes most of the red sequence galaxies (Class~3; \S~\ref{green_valley}). The typical Class~0 \qbcd\ galaxies belong to the blue sequence. So do \bcd\ galaxies. The stellar metallicities have been derived from the absorption lines using an ad-hoc procedure which fits the average profile of each class with single stellar population synthetic spectra based on the stellar library MILES (\S~\ref{metalic}). We developed our own simple but robust tool to have an intuitive control of the errors. Emission lines are masked out. The galactic continuum is also subtracted before fitting, so that only absorption line features contribute to the measurement. As inferred from our Monte-Carlo estimate of the random error budget, a direct fit of the full spectrum is good enough to assign ages, but it does not provide enough finesse to properly distinguish metallicities.
Only after overweighting particular bandpasses of the stacked spectra \citep[corresponding to some of the Lick indexes defined by][]{wor94} do we bring the formal error bars down to reasonable limits, below 0.1~dex. Gas metallicities are obtained from emission lines with errors smaller than 0.2~dex (\S~\ref{gas_metal} and App.~\ref{appa}). When the gas and the stellar \qbcd\ metallicities are compared, the gas metallicities turn out to be systematically larger than the stellar metallicities by some $\sim 0.35~$dex (\S~\ref{metal_vs_metal}). Despite the fact that this difference is not far from the formal error bars (actually, it is below the formal 2$\,\sigma$ level; \S~\ref{metal_vs_metal}), we regard it as significant for a number of reasons. First, it is systematic, so that all the main \qbcd\ classes show it. Second, it is not present in \bcd s, where stars and gas show the same metallicity within error bars (Fig.~\ref{mainr}). Third, the excess of gas metallicity with respect to stellar metallicity is implicit in the luminosity-metallicity relationships for star-forming galaxies inferred by \citet[][gas]{tre04} and \citet[][stars]{gal05} (see \S~\ref{metal_vs_metal}, and the dashed line in Fig.~\ref{mainrc}). Despite all these supportive arguments, a caveat is in order. The stellar metallicity error bars only describe statistical errors, while systematic errors may dominate the error budget. Even if these systematic errors exist and are important, they should not modify the conclusions, as they would affect both \qbcd s and \bcd s in the same way. However, one can never rule out a source of (unknown and unsuspected) systematic errors affecting \qbcd s and \bcd s differently, which would force us to reconsider the metallicity discrepancies. The fraction of \qbcd\ galactic light produced by stars increases with the metallicity, so that the fainter the emission lines, the more metal rich the gas. This result reinforces the conjecture that the emission lines come from a self-enriched ISM. The luminosity-weighted ages of \qbcd s span the full range from 1 to 10 Gyr (Fig.~\ref{age}). The most common Class~0 is in the young part of that range, with an age below 2~Gyr. The fact that the age of the stellar population of \bcd s is smaller than, but not very different from, the age of \qbcd s adds to this picture. If \bcd s and \qbcd s are basically the same galaxies, but the \bcd s happen to be in a phase of enhanced star formation activity, then the underlying stellar populations must have similar properties. Their stellar metallicities are similar (\S~\ref{metal_vs_metal}). The \bcd\ ages are smaller than the \qbcd\ ages, but this can easily be due to their luminosity-weighted average ages being reduced by the current starburst. In principle, the excess of metals in the ionized gas of \qbcd s, as revealed by their emission lines, may reflect the natural enrichment of the ISM produced by successive SN ejecta. Emission lines trace the present ISM, whereas stars sample it in the past, when the metallicity was lower. The relative enrichment depends on the age of the stars, and also on the star formation rate providing the SNe. In the case of our \qbcd s, these two quantities are tightly constrained. We have estimated the (mean) age of the starburst, and the (current) star formation rate (SFR) as inferred from the observed H$\alpha$ luminosity \citep{ken98}.
Using simple closed-box chemical evolution models, we argue that, given the age and the star-formation rate, the observed starburst is not sufficient to enrich the full galactic ISM to the observed levels. However, age and SFR can be accommodated if the enriched galactic gas represents only a small fraction of the total gas ($\sim$1/50; \S~\ref{chemical}). The assumption of closed-box evolution does not invalidate the conclusion. As we point out in \paperi , \qbcd s are quite common, representing one out of every three dwarf galaxies in the local universe. Since they are so common, it is conceivable that some of their properties are not exclusive to the \qbcd\ class, but a global property of dwarf galaxies with emission lines. In particular, the bias of metallicities inferred from emission lines may be present in all star-forming dwarf galaxies, rather than being a feature of our particular subset. We plan to explore this potential bias using the techniques developed in the paper, namely, the comparison between the metallicity estimates based on emission lines and absorption lines. Moreover, we plan to apply the classification tool in \S~\ref{class} to identify and characterize galaxy spectra corresponding to the various parts of the color sequence. The short green-valley phase is particularly interesting \citep[e.g., ][]{del07,sil08}, and we have come across a simple method for its identification. \begin{acknowledgements} Thanks are due to B.~Panter for clarifying discussions on the proper solar abundance normalization to be used with the \citet{tre04} relationship. Thanks are also due to an anonymous referee for helping us improve the argumentation. We benefitted from comments and suggestions by R.~Amor\'\i n, I.~G. de~la~Rosa, C.~Esteban, A.~Mampaso, M.~Moll\'a, E.~P\'erez-Montero, R. S\'anchez-Janssen, G.~Stasi\'nska, and J. V\'\i lchez on aspects of this paper related to their areas of expertise. This work has been partly funded by the Spanish {\em Ministerio de Educaci\'on y Ciencias}, projects AYA~2007-67965-03-01 and AYA~2007-67752-C03-01. Funding for the Sloan Digital Sky Survey (SDSS) and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, The University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. \end{acknowledgements} {\it Facilities:} \facility{Sloan (DR6, spectra)}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Since the BELLE collaboration's discovery of the charmonium-associated state $X(3872)$,\cite{Choi:2003ue} hadron spectroscopy has been reinvigorated and recast.\cite{Olsen:2017bmm} Many of the newly observed states invite identification with compositions beyond the traditional quark--antiquark meson and three-quark baryon schemes, possibilities foreseen in the foundational quark-model papers.\cite{Zweig:1981pd} Tetraquark states composed of a heavy quark and antiquark plus a light quark and antiquark have attracted much attention. All the observed candidates fit the form $c \bar c q_k \bar q_l$, where the light quarks $q$ may be $u, d, \hbox{or } s$. The putative tetraquarks typically have strong decays to $c \bar c$ charmonium + light mesons. None is observed significantly below threshold for strong decays into two heavy--light meson states $\bar c q_k + c \bar q_l$. Estia Eichten and I have examined the possibility of unconventional tetraquark configurations for which all strong decays are kinematically forbidden.\cite{Eichten:2017ffp} In the heavy-quark limit, stable---hence exceedingly narrow---$Q_iQ_j \bar q_k \bar q_l$ mesons must exist. To apply this insight, we take into account corrections for finite heavy-quark masses to deduce which tetraquark states containing $b$ or $c$ quarks might be stable. The most promising candidate is a $J^P=1^+$ isoscalar double-$b$ meson, $\mathcal{T}^{\{bb\}-}_{[\bar u \bar d]}$. I will sketch our derivation and results, emphasizing the consequences for experiment, and indicate areas in which experimental and theoretical work can be productive. \section{Heavy-quark symmetry implies stable heavy tetraquark mesons $Q_iQ_j \bar q_k \bar q_l$} One-gluon-exchange between a pair of color-triplet heavy quarks is attractive for $(QQ)$ in a color-$\mathbf{\bar{3}}$ configuration and repulsive for the color-$\mathbf{6}$ configuration. The strength of the $\mathbf{\bar{3}}$ attraction is half that of the corresponding $(Q\bar{Q})$ in a color-$\mathbf{1}$. This means that in the limit of very heavy quarks, we may idealize the color-antitriplet $(QQ)$ diquark as a stationary, structureless color charge, as depicted in the leftmost panel in Figure~\ref{fig:dhtq}. \begin{figure} \centerline{\includegraphics[width=0.22\textwidth]{AxoPRLscaledupB}\qquad {\includegraphics[width=0.22\textwidth]{AxoPRLscaledupA}\qquad {\includegraphics[width=0.22\textwidth]{AxoPRLscaledupD}\qquad {\raisebox{-9pt}{\includegraphics[width=0.22\textwidth]{AxoPRLscaledupC}}}}}} \caption[]{Schematic evolution of a $Q_iQ_j \bar q_k \bar q_l$ state as the heavy-quark masses decrease (and the mean separation between the heavy quarks increases) from left to right.} \label{fig:dhtq} \end{figure} We can separate the strong dynamics binding the diquark from the long-range color interaction by which the light antiquarks interact with each other and are bound to the diquark ``nucleus.'' For sufficiently heavy quarks $Q$, a $Q_iQ_j \bar q_k \bar q_l$ tetraquark meson is stable against strong decays, as we can show by considering possible decay modes. First, we note that dissociation into two heavy--light mesons is kinematically forbidden. 
The $\mathcal{Q}$ value for the decay is \break $\mathcal{Q} \equiv m(Q_i Q_j \bar q_k \bar q_l) - [m(Q_i \bar q_k) + m(Q_j \bar q_l)] = \Delta(q_k, q_l) - {\cfrac{1}{2}}\!\left(\cfrac{2}{3}\ensuremath{\alpha_{\mathrm{s}}}\right)^2\![1 + O(v^2)]\overline M + O(1/\overline M)$, where $\Delta(q_k, q_l)$, the contribution due to light dynamics, becomes independent of the heavy-quark masses, $\overline M \equiv (1/{m_Q}_i + 1/{m_Q}_j)^{-1}$ is the reduced mass of $Q_i$ and $Q_j$, and \ensuremath{\alpha_{\mathrm{s}}}\ is the strong coupling. The velocity-dependent hyperfine corrections, here negligible, are calculable in the nonrelativistic QCD formalism.\cite{Caswell:1985ui} For large enough values of $\overline M$, the middle term on the right-hand side dominates, so the tetraquark is stable against decay into two heavy--light mesons. What of the other possible decay channel, a doubly heavy baryon plus a light antibaryon, $(Q_iQ_j \bar q_k \bar q_l) \to (Q_iQ_j q_m) + (\bar q_k \bar q_l\bar q_m)$? For very heavy quarks, the contributions of $Q$ motion and spin to the tetraquark mass are negligible. Since the $(QQ)$ diquark is a color-antitriplet, heavy-quark symmetry tells us that $m(Q_iQ_j \bar q_k \bar q_l) - m(Q_iQ_j q_m) = m(Q_x q_k q_l) - m(Q_x \bar q_m)$. The flavored-baryon--flavored-meson mass difference on the right-hand side has the generic form $\Delta_0 + \Delta_1/{M_Q}_x$. Using the observed mass differences, $m(\Lambda_c) - m(D) = 416.87\ensuremath{\hbox{ MeV}}$ and $m(\Lambda_b) - m(B) = 340.26\ensuremath{\hbox{ MeV}}$, and choosing effective quark masses $m_c \equiv m(\ensuremath{J\!/\!\psi})/2 = 1.55\ensuremath{\hbox{ GeV}}$, $m_b \equiv m(\Upsilon)/2 = 4.73\ensuremath{\hbox{ GeV}}$, we find $\Delta_1 = 0.1766\ensuremath{\hbox{ GeV}}^2$ and $\Delta_0 =303\ensuremath{\hbox{ MeV}}$, hence the mass difference in the heavy-quark limit is $303\ensuremath{\hbox{ MeV}}$. The right-hand side is in every case smaller than the mass of the lightest antibaryon, $m(\bar p) = 938.27\ensuremath{\hbox{ MeV}}$, so no decay to a doubly heavy baryon and a light antibaryon is kinematically allowed. \emph{With no open channels in the heavy-quark limit, stable $Q_iQ_j \bar q_k \bar q_l$ mesons must exist.} To assess the implications for the real world, we must first test whether it makes sense to idealize the $(QQ)$ diquark as a tiny, structureless, color-antitriplet color source.\footnote{See Ref.~\citenum{Richard:2018yrm} for a thoughtful critical assessment.} As the separation between the heavy quarks increases, the light-antiquark cloud screens the $Q_iQ_j$ interaction, altering the $\mathbf{\bar{3}, 6}$ mix, and eventually leading to the division of the $(Q_iQ_j \bar q_k \bar q_l)$ state into a pair of heavy--light mesons. These changes are indicated in the progression from left to right in Figure~\ref{fig:dhtq}. Using a half-strength Coulomb$+$linear quarkonium potential, we verified that the rms core radii are small on the expected tetraquark scale: $\langle r^2\rangle^{1/2} = 0.28\ensuremath{\hbox{ fm}}\, (cc); 0.24\ensuremath{\hbox{ fm}}\, (bc); 0.19\ensuremath{\hbox{ fm}}\, (bb)$. This conclusion is supported by exploratory lattice QCD studies.\cite{Peters:2015tra} To ascertain whether stable tetraquark mesons might be observed, we must estimate masses of the candidate configurations.
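The two-point fit quoted above amounts to solving a $2\times2$ linear system; a minimal sketch reproducing $\Delta_0$ and $\Delta_1$ from the numbers quoted in the text:
\begin{verbatim}
import numpy as np

# Delta(M) = Delta0 + Delta1/M, fitted to the observed
# flavored-baryon minus flavored-meson mass differences.
m_q   = np.array([1.55, 4.73])      # m_c, m_b [GeV]
diffs = np.array([416.87, 340.26])  # Lc - D, Lb - B [MeV]
A = np.column_stack([np.ones(2), 1.0/m_q])
delta0, delta1 = np.linalg.solve(A, diffs)
# delta0 ~ 303 (MeV, the heavy-quark-limit difference);
# delta1 ~ 176.6 (MeV GeV), i.e. 0.177 GeV^2
\end{verbatim}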
Numerous model calculations exist in the literature,\footnote{A useful compilation appears in Table IX of Ref.~\citenum{Luo:2017eub}.} but heavy-quark symmetry makes it possible to compute the $Q_iQ_j \bar q_k \bar q_l$ tetraquark masses directly, through the relation $m(Q_iQ_j \bar q_k \bar q_l) - m(Q_iQ_j q_m) = m(Q_x q_k q_l) - m(Q_x \bar q_m)$, with due attention to spin configurations and finite-mass corrections that arise from hyperfine interactions and kinetic-energy shifts for the light degrees of freedom.\footnote{The arithmetic is made explicit in Ref.~\citenum{Eichten:2017ffp}.} Experiments have determined nearly all the information about heavy baryons and heavy--light mesons needed to evaluate the right-hand side in every case of interest, i.e., for tetraquarks based on $bb, bc, \hbox{and }cc$ diquarks.\footnote{The lifetime ($\approx 0.4\ensuremath{\hbox{ ys}}$) of the top quark is too short to permit the formation of hadrons containing $t$.} The doubly heavy baryons have been more elusive: for the moment, the strongest evidence we have is for the $\Xi_{cc}^{++}$ candidate reported by the LHC$b$ experiment at a mass of $3621.40 \pm 0.78\ensuremath{\hbox{ MeV}}$.\cite{Aaij:2017ueg} With this input, we compute the mass of the lightest $(cc)$ tetraquark as $m(\{cc\}[\bar u \bar d]) = 3978\ensuremath{\hbox{ MeV}}$, which lies $102\ensuremath{\hbox{ MeV}}$ above the threshold for decay into $D^+D^{*0}$.\footnote{An earlier sighting by the SELEX Collaboration\cite{Mattson:2002vu} of a $\Xi_{cc}^+$ candidate at $3519\ensuremath{\hbox{ MeV}}$ would imply $m(\{cc\}[\bar u \bar d]) = 3876\ensuremath{\hbox{ MeV}}$, coincident with the threshold for dissociation into a heavy-light pseudoscalar and heavy-light vector. Signatures for weak decay would include $D^+K^-\ell^+\nu$ and $\Xi_c^+\bar n$. The $D^0D^+\gamma$ channel opens at $3734\ensuremath{\hbox{ MeV}}$.} This would be a $J^P = 1^+$ axial-vector meson, symmetric in $cc$ flavor and antisymmetric in the light antiquark flavors. In the absence of comprehensive experimental information about the other doubly heavy baryons, we rely for now on model calculations of their masses~\cite{Karliner:2014gca} as inputs to our tetraquark mass calculation. Our results for the lowest-lying levels are given in Table~\ref{tab:masses}. 
\begin{table}[h] \caption[]{Expectations for ground-state tetraquark masses, decay thresholds, and $\mathcal{Q}$ values, all in MeV.} \label{tab:masses} \vspace{0.4cm} \centering \begin{tabular}{@{} lccccc @{}} State & $J^P$ & $m(Q_iQ_j \bar q_k \bar q_l)$ & Decay Channel & Threshold & $\mathcal{Q}$ \\ \hline $\{cc\}[\bar u \bar d]$ & $1^+$ & $3978$ & $D^+{D}^{*0}$ & $3876$ & $102$ \\ $\{cc\}[\bar q_k \bar s]$ & $1^+$ & $4156$ & $D^+{D}^{*+}_s$ & $3977$ & $179$ \\ $\{cc\}\{\bar q_k \bar q_l\}$ & $0^+,1^+,2^+$ & $4146,4167,4210$ & $D^+{D^0}, D^+{D}^{*0}$ & $3734, 3876$ & $412, 292, 476$\\ $[bc][\bar u \bar d]$ & \textcolor{red}{$0^+$} & $7229$ & $B^-D^+/B^0D^0$ & $7146$ & $83$\\ $[bc][\bar q_k\bar s]$ & $0^+$ & $7406$ & $B_s D$ & $7236$ & $170$ \\ $[bc]\{\bar q_k \bar q_l\}$ & $1^+$ & $7439$ & $B^*D/BD^*$ & $7190/7290$ & $249$ \\ $\{bc\}[\bar u \bar d]$ & $1^+$ & $7272$ & $B^*D/BD^*$ & $7190/7290$ & $82$\\ $\{bc\}[\bar q_k \bar s]$ & $1^+$ & $7445$ & $DB_s^*$ & $7282$ & $163$ \\ $\{bc\}\{\bar q_k \bar q_l\}$ & $0^+,1^+,2^+$ & $7461,7472,7493$ & $BD/B^*D$ & $7146/7190$ & $317,282,349$\\ $\{bb\}[\bar u \bar d]$ & $1^+$ & $10482$ & $B^-\bar{B}^{*0}$ & $10603$ & \textcolor{red}{\fbox{$\mathbf{-121}$}} \\ $\{bb\}[\bar q_k \bar s]$ & $1^+$ & $10643$ & $\bar{B}\bar{B}_s^*/\bar{B}_s\bar{B}^*$ & $10695/10691$ & \textcolor{red}{\fbox{$\mathbf{-48}$}} \\ $\{bb\}\{\bar q_k \bar q_l\}$ & $0^+,1^+,2^+$ & $10674,10681,10695$ & $B^-{B^0},B^-{B}^{*0}$ & $10559, 10603$ & $115,78, 136$ \\ \hline \end{tabular} \end{table} We find two real-world candidates for stable tetraquarks: the axial vector $\{bb\}[\bar u \bar d]$ meson, $\mathcal{T}^{\{bb\}-}_{[\bar u \bar d]}$, bound by $121\ensuremath{\hbox{ MeV}}$, and the axial vector $\{bb\}[\bar u \bar s]$ and $\{bb\}[\bar d \bar s]$ mesons, bound by $48\ensuremath{\hbox{ MeV}}$. Given the provisional doubly heavy baryon masses, we expect all the other $Q_iQ_j \bar q_k \bar q_l$ tetraquarks to lie at least $78\ensuremath{\hbox{ MeV}}$ above the corresponding thresholds for strong decay.\footnote{In model calculations, Karliner and Rosner\cite{Karliner:2017qjm} estimate somewhat deeper binding, and so point to additional $bc$ and $cc$ candidates.} We note that exploratory lattice studies also suggest that double-beauty tetraquarks should be stable.\cite{Bicudo:2015kna,Francis:2016hui} Promising final states include $\mathcal{T}^{\{bb\}}_{[\bar u \bar d]}(10482)^-\! \to \Xi^0_{bc}\bar{p}$, $B^-D^+\pi^-$, and $B^-D^+\ell^-\bar{\nu}$ (which establishes a weak decay), $\mathcal{T}^{\{bb\}}_{[\bar u \bar s]}(10643)^-\! \to \Xi^0_{bc}\bar{\Sigma}^-$, $\mathcal{T}^{\{bb\}}_{[\bar d \bar s]}(10643)^0\! \to \Xi^0_{bc}(\bar{\Lambda},\bar{\Sigma}^0)$, and so on. If they should lie near enough to threshold, the \emph{unstable} doubly heavy tetraquarks might be observed in ``wrong-sign'' (double-flavor) combinations bearing $DD, DB, \hbox{or }BB$ quantum numbers. For example, a $J^P = 1^+\; \mathcal{T}^{\{cc\}}_{[\bar d \bar s]}(4156)^{++} \!\to D^+ D_s^{*+}$ resonance would constitute \emph{prima facie evidence} for a non-$q\bar{q}$ level carrying double charge and double charm. This would be a new kind of resonance, for which no attractive force is present at the meson--meson level.
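The stability criterion behind Table~\ref{tab:masses} is simply the sign of $\mathcal{Q}$. A minimal check for three representative entries, with the masses transcribed from the table (in MeV):
\begin{verbatim}
# tetraquark mass vs lowest two-meson threshold
candidates = {
    "{cc}[ud]": (3978, 3876),    # vs D+ D*0
    "{bb}[ud]": (10482, 10603),  # vs B- B*0
    "{bb}[qs]": (10643, 10691),  # vs lower of B Bs*/Bs B*
}
for name, (m_tq, m_thr) in candidates.items():
    q = m_tq - m_thr
    print(name, q, "stable" if q < 0 else "unstable")
\end{verbatim}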
Other nearly bound candidates include $1^+\; \mathcal{T}^{\{bb\}}_{\{\bar q_k \bar q_l\}}(10681)^{0, -, --}$ ($\mathcal{Q} = +78\ensuremath{\hbox{ MeV}}$), $1^+\; \mathcal{T}^{\{bc\}}_{[\bar u \bar d]}(7272)^0$ ($\mathcal{Q} = +82\ensuremath{\hbox{ MeV}}$), $0^+\; \mathcal{T}^{[bc]}_{[\bar u \bar d]}(7229)^{0}$ ($\mathcal{Q} = +83\ensuremath{\hbox{ MeV}}$), and $1^+\; \mathcal{T}^{\{cc\}}_{[\bar u \bar d]}(3978)^+$ ($\mathcal{Q} = +102\ensuremath{\hbox{ MeV}}$). The production of stable doubly heavy tetraquarks (or their nearly bound counterparts) is undoubtedly a rare event, since it entails---for a start---the production of two heavy quarks and two heavy antiquarks. We have no rate calculation to offer, but note the large yield of $B_c$ mesons in the LHC$b$ experiment:\cite{Aaij:2014bva} $8995 \pm 103$ $B_c \to \ensuremath{J\!/\!\psi}\mu\nu_\mu X$ candidates in $2\ensuremath{\hbox{ fb}}^{-1}$ of $pp$ collisions at $8\ensuremath{\hbox{ TeV}}$, and the CMS observation\cite{Khachatryan:2016ydm} of double-$\Upsilon$ production in 8-TeV $pp$ collisions: $\sigma(pp \to \Upsilon\Upsilon+\hbox{ anything}) = 68 \pm 15\ensuremath{\hbox{ pb}}$. These suggest that the Large Hadron Collider experiments should be the first focus of searches for novel tetraquark mesons. The ultimate search instrument might be a future electron--positron Tera-$Z$ factory, for which the branching fractions~\cite{Patrignani:2016xqp} $Z \to b\bar{b} =15.12 \pm 0.05\%$ and $Z \to b\bar{b}b\bar{b} = (3.6 \pm 1.3) \times 10^{-4}$ encourage the hope of many events containing multiple heavy quarks. Two recent investigations go beyond the kinds of arguments I have presented here. Beginning from a situation in which all the constituents are taken to be heavy, so that one-gluon exchange prevails, Czarnecki and collaborators have proposed a figure of merit that governs the color-($\mathbf{\bar{3},6}$) admixture in the putative diquark system.\cite{Czarnecki:2017vco} They conclude that no stable $QQ\bar{Q}\bar{Q}$ (equal-mass) tetraquarks are to be expected in the very-heavy-quark limit, and they find support for the binding of $bb\bar{q}\bar{q}$, in agreement with our conclusions. A generalization allows them to explore how the result depends on $N_\mathrm{c}$, the number of colors. A lattice--NRQCD study of the $bb\bar{b}\bar{b}$ system reveals no tetraquark with mass below the $\eta_b\eta_b$, $\eta_b\Upsilon$, and $\Upsilon\Upsilon$ thresholds in the $J^{PC} = 0^{++}, 1^{+-}, 2^{++}$ channels.\cite{Hughes:2017xie} \section{Some tasks to advance our understanding} \emph{Homework for Experiment.} The most straightforward request is to \textit{look for double-flavor resonances of two heavy--light mesons near threshold.} The ingredients for such searches should already exist in experiments that have reconstructed many $D$, $D_s$, $B$, and $B_s$ mesons. Next, \textit{extend to $\sqrt{s} = 13\ensuremath{\hbox{ TeV}}$ the measurement of representative cross sections for final states containing two heavy quarks and two heavy antiquarks.} Then we need to \textit{discover and determine the masses of doubly-heavy baryons.} These masses are essential ``engineering information'' for our purposes, as they are needed to implement the heavy-quark--symmetry calculation of tetraquark masses.\footnote{Doubly heavy baryons are of considerable interest in their own right. A light quark bound to a doubly heavy diquark has much in common---in both color configuration and dynamics---with a heavy--light meson.
A further goal is to observe excitations of the diquark core, along with the energy levels of the bound light quark.} An important element of the study of doubly heavy baryons is to \textit{resolve the conundrum of the large mass difference between the $\Xi_{cc}^+$ and $\Xi_{cc}^{++}$} candidates reported by SELEX and LHC$b$, respectively. The ultimate experimental goal is to \textit{find stable tetraquarks through their weak decays.} \emph{Homework for Theory.} An important challenge is to \textit{develop expectations for the production of final states containing $Q_i, \bar{Q}_i, Q_j, \bar{Q}_j$, and eventually for the anticipated stable tetraquarks.} For the stable $Q_iQ_j \bar q_k \bar q_l$ states we discuss here, \textit{refine lifetime estimates beyond the simplest guess-by-analogy of $\tau \approx 1/3\ensuremath{\hbox{ ps}}$.} Extend the considerations of Refs.~\citenum{Richard:2018yrm,Czarnecki:2017vco} to \textit{understand how color configurations evolve with {$QQ$} (and $\bar{q}\bar{q}$) masses.} Continue to explore how diquarks influence hadron spectroscopy, by \textit{analyzing the stability of different body plans in the heavy-quark limit.} A notable example is a possible $(Q_iQ_j)(Q_kQ_l)(Q_mQ_n)$ dibaryon, with the color structure of $\bar{Q}_p\bar{Q}_q\bar{Q}_r$. \section{Summary} In the limit of very heavy quarks $Q$, novel narrow doubly heavy tetraquark states must exist. Heavy-quark symmetry relates the doubly heavy tetraquark mass to the masses of a doubly heavy baryon, a heavy-light-light baryon, and a heavy-light meson. In the future, when we have more complete experimental knowledge of the doubly heavy baryon spectrum, the heavy-quark--symmetry relations should provide the most reliable predictions of doubly heavy tetraquark masses. Our current mass estimates---which must rely on plausible model inputs for the doubly heavy baryon masses---lead us to expect that the lightest $J^P = 1^+$ $\{bb\}[\bar u \bar d]$, $\{bb\}[\bar u \bar s]$, and $\{bb\}[\bar d \bar s]$ states should be exceedingly narrow, decaying only through the charged-current weak interaction. The observation of these novel tetraquark mesons would herald a new form of stable matter, in which the doubly heavy color-$\mathbf{\bar 3}$ $(Q_iQ_j)$ diquark is a basic building block. Unstable $Q_iQ_j \bar q_k \bar q_l$ tetraquarks with small $\mathcal{Q}$-values may be observable as resonant pairs of heavy--light mesons in channels with double flavor: $DD, DB, BB$. \section*{Acknowledgments} I wish to say a very big thank you to the kind organizers, to our lifesaving friends of the secretariat, to the staff of the Planibel, to Kim and Van, and to all the founders of the Rencontres de Moriond. I am grateful to the Delta Institute for Theoretical Physics and Nikhef, the National Institute for Subatomic Physics, for generous hospitality in Amsterdam, where this note was prepared. I thank Estia Eichten for collaboration on the work reported here. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No.\ DE-AC02-07CH11359 with the U.S.\ Department of Energy, Office of Science, Office of High Energy Physics. \section*{References}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Analytical holographic studies pioneered the modeling of heavy-ion collisions with the emergence of collectivity~\cite{Janik:2005zt,Albacete:2008vs,Grumiller:2008va,Gubser:2008pc}. Fast hydrodynamization, implying the early applicability of hydrodynamics to a relaxing fluid, was first established numerically in dynamical out-of-equilibrium holographic shock wave collisions~\cite{Chesler:2010bi,Casalderrey-Solana:2013aba,Casalderrey-Solana:2013sxa,Chesler:2015wra,Chesler:2015bba} and further corroborated in the presence of bulk viscosity~\cite{Janik:2015waa,Buchel:2015ofa,Buchel:2015saa,Rougemont:2015wca,Attems:2016ugt,Attems:2016tby,Czajka:2018bod,Czajka:2018egm,Attems:2017zam} to model the almost perfect fluid. As the Relativistic Heavy Ion Collider is accumulating plenty of experimental data in the ongoing beam energy scan~\cite{Bzdak:2019pkr} and the upcoming experimental Facility for Antiproton and Ion Research~\cite{Friese:2006dj,Durante:2019hzd} will search the quantum chromodynamics phase diagram for the presumed critical point, theoretical studies on criticality garner new attention~\cite{Stephanov:1998dy,Stephanov:1999zu,Erlich:2005qh,DeWolfe:2010he,Athanasiou:2010kw,Alba:2017hhe,Critelli:2017oub,Brewer:2018abr,Rougemont:2018ivt,Critelli:2018osu,Du:2020bxp,Li:2020hau,Hoyos:2020hmq,Dore:2020jye,Mroczek:2020rpm,Nahrgang:2020yxm,Dexheimer:2020zzs}. As the quark-gluon plasma formed in hadronic collisions cools down, there is presumably a wide temperature range at large baryon chemical potential in which it hits the first-order phase transition line, whose endpoint is the critical point. A prime signal for such a first-order phase transition is the spinodal instability. The gauge/gravity duality~\cite{Maldacena:1997re} opens up the possibility to study the dynamics of such a phase transition, as the holographic dual~\cite{Buchel:2005nt} of the spinodal instability is the Gregory-Laflamme instability~\cite{Gregory:1993vy,Emparan:2001wn,Emparan:2006mm,Emparan:2009cs,Emparan:2009at,Figueras:2015hkb}, which is amenable to general relativity calculations. In \cite{Attems:2017ezz} we discovered a metastable inhomogeneous unstable solution, and subsequently \cite{Janik:2017ykj} found the phase separation of the spinodal instability. The spinodal instability develops in four stages: 1) exponential growth of the instability; 2) the reshaping; 3) the merger; 4) the preferred final solution~\cite{Attems:2019yqn,Bellantuono:2019wbn}. During the evolution of the spinodal instability one encounters different types of inhomogeneous states. With a field redefinition, all stages of a strong spinodal instability and the hydrodynamization of shockwave collisions near a critical point were demonstrated to be described by hydrodynamics~\cite{Attems:2018gou,Attems:2019yqn}. A recent analysis discusses the finite size effects~\cite{Bea:2020ees} of the periodic longitudinal direction on the stability or instability of the plasma. A quite different setup involving plasma balls studies the effects of the confined phase~\cite{Bantilan:2020pay}, but there is no spinodal region. The purpose of this paper is to vary criticality in order to see the dynamical effects of different first-order phase transitions. For this endeavour a Gregory-Laflamme type instability is evolved on the gravity side to induce the spinodal instability on the gauge theory side. Strictly speaking, below the critical temperature the Gregory-Laflamme instability has only unstable regions.
In the holographic construction a Gregory-Laflamme \emph{type} instability is induced by the non-trivial potential of a weakly-coupled scalar field and is unstable only in a certain temperature range. To simplify the construction I consider a setup with no conserved charges, hence the critical point or the phase transition will lie on the temperature axis. Another distinction from quantum chromodynamics, besides the class of phase transitions considered, is the deconfined nature of the cold stable phase. The strength of the holographic approach is to be able to extract and simulate universal features of a two-phase system in the limit of strong coupling. The focus of this paper is on the dynamical real-time features of the spinodal instability while approaching a critical point. Section 2 discusses the dual setup, in particular how to simulate the Gregory-Laflamme type instability on the gravity side: one evolves the Einstein equations in the bulk and reads off the boundary data describing the spinodal instability on the gauge theory side. If the first-order phase transition occurs far from (near) the critical point, one speaks of a strong (soft) first-order phase transition. Section 3 introduces a new criterion which is used to distinguish the inhomogeneous states (plateaux, peaks, valleys, and gorges) in the relevant stages of the spinodal instability (reshaping, merger, and final). This allows us in section 4 to characterize, for different degrees of criticality, the shape and surface tension of the interface seen in the preferred and settled final states of the spinodal instability. In section 5, several dynamical real-time setups of the spinodal instability with varying criticality are demonstrated, including a newly revealed dissipative process of a peak into a plateau. Finally, I discuss the implications of the varying criticality for the spinodal instability in section 6. \section{Setup} The non-conformal bottom-up model \cite{Attems:2016ugt,Attems:2017ezz} employed here is described by an Einstein--Hilbert action coupled to a scalar field with a non-trivial potential $V(\phi)$ \begin{align} S=\frac{2}{\kappa_{5}^{2}} \int d^{5} x \sqrt{-g} \left[ \frac{1}{4} \mathcal{R} - \frac{1}{2} \left( \nabla \phi \right) ^2 - V(\phi) \right] \,, \end{align} with $\kappa_5$ the five-dimensional Newton constant. The chosen potential $V(\phi)$ is conformal in the ultra-violet, where the spacetime is Anti-de Sitter (AdS), and has a minimum corresponding to an infrared fixed point in the gauge theory. The potential is derived from a superpotential~\cite{Bianchi:2001kw}: \begin{align} \label{eq:superpotential} L W(\phi)=-\frac{3}{2}-\frac{\phi^2}{2}-\frac{\phi^4}{4\phi_M^2} \,, \end{align} where $L$ is the radius of the AdS solution in the ultra-violet. The first two terms in \eqref{eq:superpotential} are fixed by AdS and the dimension of the \enquote{dual} scalar operator. The third term is responsible for the non-conformality and the appearance of a critical point. The potential has a single parameter $\phi_{\rm M}$, whose value defines the transition type. For a subcritical value of $\phi_{\rm M}$ the transition is a first-order phase transition, which then turns into a critical point, and finally, for a supercritical value, into a cross-over phase transition.
This yields the following scalar potential \begin{align} \label{eq:potential} L^2 V(\phi) = -3 -\frac{3}{2}\phi^2 - \frac{1}{3} \phi^4 + \left(\frac{1}{2\phi_{\rm M}^4} {- \frac{1}{3\phi_{\rm M}^2}}\right) \phi^6- \frac{1}{12\phi_{\rm M}^4} \phi^8 \,, \end{align} where the term $-\frac{1}{3\phi_{\rm M}^2} \phi^6$ is responsible for the critical behaviour.\\ In what follows, the numerical procedure outlined in \cite{Chesler:2013lia,vanderSchee:2014qwa,Attems:2017zam} is used and I will set the dimensionless parameter of the potential to the subcritical values\footnote{ Supercritical values of $\phi_{\rm M}$ lead to a cross-over phase transition with no spinodal instability; see~\cite{Attems:2017zam} for the relaxation channels and out-of-equilibrium properties. } \begin{align} \label{eq:phimvalues} \phi_{\rm M} = \left\{2.25, 2.3, 2.35, 2.4, 2.45\right\}. \end{align} \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=.49\textwidth]{figs/eos_phiM_2_25.pdf} & \includegraphics[width=.49\textwidth]{figs/eos_phiM_2_35.pdf}\\[0.4cm] \includegraphics[width=.49\textwidth]{figs/eos_phiM_2_4.pdf} & \includegraphics[width=.49\textwidth]{figs/eos_phiM_2_45.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{\label{equationofstates}Energy density versus temperature for the theories with different first-order phase transitions with varying criticality due to the different parameter $\phi_{\rm M} = \{2.25, 2.35, 2.4, 2.45\}$. The full green line represents the energy densities in the cold and hot stable phases, the dashed blue in the metastable regions and the dotted red in the unstable region. The vertical full gray line marks the temperature of the phase transition $T_c$. The horizontal full grey lines mark the stable energy densities $\mathcal{E}_{\text{low}}$ and $\mathcal{E}_{\text{high}}$ at the transition temperature. The horizontal dashed grey lines are the four initial energy densities $\{{\mathcal{E}}_1 , {\mathcal{E}}_2, {\mathcal{E}}_3, {\mathcal{E}}_4\}$, or just the two initial energy densities $\{{\mathcal{E}}_1 , {\mathcal{E}}_2 \}$, of the respective simulations.} \end{figure*} Each theory set by a different value of \eqref{eq:phimvalues} has a first-order phase transition. The strongest phase transition (farthest from a critical point) in this setup occurs for $\phi_{\rm M} = 2.25$, while the softest (nearest to a critical point) has $\phi_{\rm M} = 2.45$, as one approaches the theory with a critical point at $\phi_{\rm M}^* \approx 2.521$.\footnote{It is numerically not practicable to get much closer to the critical point, as due to the small range of unstable modes the spinodal instability then needs exponentially longer to kick in.} All the considered states are deconfined, similar to the characteristic quark-gluon plasma. Note that the parameter $\phi_{\rm M}$ always appears squared in the potential and therefore also in the equations of motion. Hence a small change in $\phi_{\rm M} \leq \phi_{\rm M}^*$ results in quite different theories and correspondingly in different equations of state.\\ In the trace of the stress tensor \begin{align} \label{eq:TTrace0} \left<T^{\mu}_\mu\right>= - \Lambda \left< \mathcal{O} \right> \,, \end{align} one recognizes the scale $\Lambda$, which sets the magnitude of the non-normalizable mode of the scalar field and is the source of the conformal invariance breaking.
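As a cross-check, the scalar potential \eqref{eq:potential} can be reproduced from the superpotential \eqref{eq:superpotential} via the standard five-dimensional relation $V = \tfrac{1}{2}\left(\partial_\phi W\right)^2 - \tfrac{4}{3}\,W^2$ (written in units $L=1$); the relation is quoted here as the assumed convention consistent with the two expressions above. A minimal symbolic verification:

\begin{verbatim}
import sympy as sp

phi, phiM = sp.symbols('phi phi_M', positive=True)

# Superpotential, Eq. (eq:superpotential), in units L = 1
W = -sp.Rational(3, 2) - phi**2 / 2 - phi**4 / (4 * phiM**2)

# Assumed 5D relation: V = (1/2) W'^2 - (4/3) W^2
V = sp.expand(sp.diff(W, phi)**2 / 2 - sp.Rational(4, 3) * W**2)

# The expanded result matches Eq. (eq:potential) term by term:
# -3 - 3/2 phi^2 - 1/3 phi^4
#    + (1/(2 phi_M^4) - 1/(3 phi_M^2)) phi^6 - phi^8/(12 phi_M^4)
print(sp.collect(V, phi))
\end{verbatim}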
The rescaled stress tensor given by \begin{align} ({\cal E}, P_L, P_T, \mathcal V )= \tfrac{\kappa_5^2}{2 L^3} (-T^t_t, T^z_z, T^{x_\perp}_{x_\perp}, \mathcal O )\,, \end{align} omitting the expectation value signs, introduces the local energy density $\cal E$, the longitudinal pressure $P_L$, the transverse pressure $P_T$ and the expectation value of the scalar operator $\mathcal V$. Here $z$ is the dynamical and longitudinal direction, while $x_\perp$ are the transverse infinite homogeneous directions.\\ For the theories with the values of $\phi_{\rm M}$ given in \eqref{eq:phimvalues}, I compute the relevant thermodynamical quantities. Fig.~\ref{equationofstates} shows the energy density versus temperature with additional indications of the local energy density of the hot stable phase $\mathcal{E}_{\text{high}}$, the cold stable phase $\mathcal{E}_{\text{low}}$, the transition pressure $P_c$ and the transition temperature $T_c$ (the thermodynamics of the theory with $\phi_{\rm M} =2.3$ has already been calculated~\cite{Attems:2017ezz,Attems:2019yqn}, so it is omitted in Fig.~\ref{equationofstates}). As seen in Fig.~\ref{equationofstates}, the thermodynamics depends crucially on the parameter $\phi_{\rm M}$, and each choice illustrates a different first-order phase transition. Going from $\phi_{\rm M} =2.25$ to $\phi_{\rm M} =2.45$, one notices that the first-order phase transition becomes smoother and less pronounced. This results in the shrinking of the unstable region in Fig.~\ref{equationofstates}, plotted in dotted red in the equation of state, both in the temperature and the local energy density range. \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=.49\textwidth]{figs/FoT_phiM_2_25.pdf} & \includegraphics[width=.49\textwidth]{figs/FoT_phiM_2_4.pdf}\\[0.4cm] \includegraphics[width=.49\textwidth]{figs/cs2oT_phiM_2_25.pdf} & \includegraphics[width=.49\textwidth]{figs/cs2oT_phiM_2_4.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{\label{freeenergyandcssquared}The free energy $F$ and the speed of sound squared $c_s^2$ versus temperature for the theories with different first-order phase transitions far from ($\phi_{\rm M} = 2.25$) and closer to ($\phi_{\rm M} = 2.4$) the critical point. Each phase is denoted by the same color scheme as in Fig.~\ref{equationofstates}. } \end{figure*} Table~\ref{tab:crit} lists the respective $\mathcal{E}_{\text{high}}$, $\mathcal{E}_{\text{low}}$, $P_c$ and $T_c$ for each theory with varying first-order phase transition. In section \ref{sec:crit}, I will use the transition pressure $P_c$ to determine the difference between the inhomogeneous states. At the transition temperature $T_c$ homogeneous stable states have the same equilibrium pressure $P_c$. There are several ways to compute the transition temperature $T_c$. One way is via the crossing of the free energy $F$, as illustrated in the top plots of Fig.~\ref{freeenergyandcssquared}. For the energy densities in the unstable region between $\mathcal{E}_{\text{low}}$ and $\mathcal{E}_{\text{high}}$ (the spinodal region), the speed of sound squared is negative, as plotted in the lower plots of Fig.~\ref{freeenergyandcssquared}, and the states are affected by a long-wavelength instability. \begin{table}[h!]
\begin{center} \caption{Local energy densities for the stable cold and hot phases of each theory\\and the corresponding transition pressures and temperatures.} \label{tab:crit} \begin{tabular}{l|l|l|l|l|l} \toprule $\boldsymbol{\phi_{\textrm M}}$ & $2.25$ & $2.3$ & $2.35$ & $2.4$ &$2.45$\\[0ex] \midrule $\boldsymbol{\mathcal{E}_{\text{high}} /\Lambda^4}$ & $7.2 \times 10^{-2}$ &$5.9 \times 10^{-2}$ & $4.7 \times 10^{-2}$ & $3.5 \times 10^{-2}$ & $2.5 \times 10^{-2}$\\ $\boldsymbol{\mathcal{E}_{\text{low}} /\Lambda^4}$ & $5.6 \times 10^{-5}$ & $9.4 \times 10^{-5}$ & $1.7 \times 10^{-4}$ & $3.5 \times 10^{-4}$ &$6.9 \times 10^{-4}$\\ $\boldsymbol{P_c /\Lambda^4}$ & $0.3 \times 10^{-7}$& $7.5 \times 10^{-6}$ & $0.7 \times 10^{-6}$ & $1.1 \times 10^{-6}$ &$2.9 \times 10^{-5}$\\ $\boldsymbol{T_c /\Lambda}$ & $0.251$ & $0.247$ & $0.244$ & $0.240$ & $0.236$\\ \bottomrule \end{tabular} \end{center} \end{table} Note the stark difference between the two stable local energy densities, ranging from more than three down to less than two orders of magnitude. Due to the up to more than three orders of magnitude difference between the stable cold and hot phases, and due to the large spikes in gradients during the non-linear regime of the spinodal instability, the numerical real-time treatment of this system is extremely challenging. Between $\phi_{\rm M} =2.25$ and $\phi_{\rm M} =2.45$ the value for the hot stable phase $\mathcal{E}_{\text{high}}$ more than halves, and the value for the cold stable phase $\mathcal{E}_{\text{low}}$ is approximately multiplied by twelve. The change in $\mathcal{E}_{\text{high}}$ is significant, as the system with $\phi_{\rm M} =2.25$ needs more than twice as much total integrated energy density as the one with $\phi_{\rm M} =2.45$ for a fully phase separated setup. Similarly, the change of magnitude in $\mathcal{E}_{\text{low}}$ is significant, as it sets the behaviour of the phase domain wall. Moreover, it eases or complicates the numerical treatment. Obviously, there is a trade-off between the numerical resolution needed for the stronger instability and the much longer evolution of the softer instability. The setup uses homogeneous planar black brane solutions periodic in the $z$-direction with a finite box extent $L_z$ and different initial local energy densities: ${\mathcal{E}} (t = 0) =\{{\mathcal{E}}_1 , {\mathcal{E}}_2 , {\mathcal{E}}_3, {\mathcal{E}}_4\}$. For the softer theories with $\phi_M = \{2.4, 2.45 \}$ and a smaller $\mathcal{E}_{\text{high}}$, it is sufficient to simulate setups with ${\mathcal{E}} (t = 0) =\{{\mathcal{E}}_1 , {\mathcal{E}}_2 \}$. Moreover, for the theory with the parameter $\phi_M = 2.45$ the bigger energy densities ${\mathcal{E}}_3$ and ${\mathcal{E}}_4$ are no longer in the unstable phase: \begin{align} \{{\mathcal{E}}_1 , {\mathcal{E}}_2 , {\mathcal{E}}_3, {\mathcal{E}}_4 \} = \{ 0.9, 1.2, 1.6, 1.9 \}\times 10^{-2} {\Lambda^4} \,. \end{align} In order to trigger the spinodal instability, one only has to slightly perturb these homogeneous solutions at the start of an evolution or wait for the numerical white noise to kick in. Here I use a small sinusoidal perturbation with amplitude $\Delta = 10^{-4}$ which, thanks to non-linear coupling, populates all unstable modes. This is much faster than relying on numerical noise, which needs to build up the unstable modes from amplitudes below $10^{-12}$. All the boundary data of the simulations are published for further analysis or comparison as open data in a Zenodo repository~\cite{attems_maximilian_2020_3445360}.
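For illustration, such an initial state can be sketched as follows. The box size, background energy density, and grid resolution are illustrative stand-ins rather than the parameters of a specific run, and the lowest-mode cosine profile is one possible realization of the small sinusoidal perturbation described above:

\begin{verbatim}
import numpy as np

# Illustrative stand-ins (not the values of a particular simulation):
Lz    = 107.0     # longitudinal box extent, in units of 1/Lambda
eps0  = 1.2e-2    # homogeneous initial energy density, in Lambda^4
Delta = 1.0e-4    # perturbation amplitude

z = np.linspace(0.0, Lz, 512, endpoint=False)   # periodic grid
eps_init = eps0 * (1.0 + Delta * np.cos(2.0 * np.pi * z / Lz))
\end{verbatim}

The non-linear coupling of the evolution then populates all unstable modes from this single seeded mode.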
As all the chosen initial local energy densities are in the dynamically unstable region between $\mathcal{E}_{\text{low}}$ and $\mathcal{E}_{\text{high}}$ of each theory, the plasma will be subject to the spinodal instability, whose dynamical properties are of interest here. \section{Criterion for inhomogeneous states}\label{sec:crit} The spinodal instability cools down regions of the unstable energy density until they reach the cold stable phase, and hence pushes energy density out to other regions, where it accumulates and may reach the hot stable phase. This results in various inhomogeneous states, which are to be further classified. In what follows I will define the two distinct types of maxima and minima that form due to the spinodal instability: a peak and a plateau; a gorge and a valley. \subsection{Definition: peaks versus plateaux} While the maxima domains, peaks versus plateaux, can be respectively defined by their total energy density content and the typical order of magnitude of difference between them, there is a more distinct and stringent criterion: the transverse pressure $P_T$. When the maximum of the transverse pressure at the interface equals the transition pressure $P_c$, which is the equilibrium pressure at the phase transition, the inhomogeneous state is a plateau. For the criterion, the numerical tolerance is set to $20\%$.\footnote{This is the usual tolerance criterion for the hydrodynamization process (i.e., the time when hydrodynamics applies). It can be stretched to $30\%$ or reduced to $10\%$ without much interpretational change.} Previously, to classify the simulation one had to integrate the local energy density of the state over spacetime and check both the maximum and the two minima at the sides of the state. With this criterion, one only needs to extract a single value. The criterion makes use of the homogeneous spatial directions, which in our case of planar black branes are the two perpendicular directions $x_\perp$, to distinguish between the two types of maxima and minima. The maximum of the transverse pressure of a peak is well below the critical pressure $P_c$. Therefore a peak is a local maximum.\\ Note that in contrast to the transverse pressure $P_T$, the longitudinal pressure $P_L$ is not a useful thermodynamical quantity for differentiating between the inhomogeneous states, as it correlates with the fluid velocity of the formed peak or plateau. The longitudinal $z$ direction is the dynamical direction of the simulations. Therefore $P_L$ is very useful to check how well settled the ongoing simulation is. Once there is no ongoing dynamics, the longitudinal pressure equals the critical pressure up to numerical precision at any position $z$ on the boundary of the spacetime, $P_L (z, t > t_{\rm final}) = P_c$. At late times the fluid velocity vanishes, as one reaches the static configuration. This means the longitudinal pressure is constant up to numerical precision once the system is settled. \\ For the two minima domains, the gorge and the valley, the same definition as for the maxima applies. The valley, in analogy to an extended U-shaped formation, has its transverse pressure at the critical pressure $P_c$, while the gorge, in analogy to a narrow V-shaped formation, has a transverse pressure lower than the critical pressure. The gorge corresponds to a local minimum in the local energy density above $\mathcal{E}_{\text{low}}$. In all the simulations the gorge state is rarely realized.
It happens intermittently before the merger of peaks or domains, but rarely forms in the reshaping stage, as the spinodal instability tends to cool down to the lower stable phase, forming extended valleys.\\ The presence of a plateau enforces a phase separated interface, where the transverse pressure inside the plateau attains the critical pressure $P_c$. On the contrary, the maximum of the transverse pressure of a peak does not reach $P_c$, and its local energy density might not reach the hot stable phase. The distinction is significant and easy to compute by extracting the corresponding maximum of the transverse pressure. The distinction between peaks and plateaux is of course fluid, since a peak with sufficient local energy density input can be turned into a plateau. Nevertheless, this new criterion greatly facilitates their distinction and is of interest for the study of the properties of the phase separation, especially since only plateaux exhibit the full phase separation.\\ \subsection{Final stage} \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=.51\textwidth]{figs/e_domain_phiM_2_35i_e0_15_2.pdf} \quad \hspace{-2mm} \includegraphics[width=.51\textwidth]{figs/pt_domain_phiM_2_35i_e0_15_2.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{\label{fig:domainvspeak} Superposed longitudinal profiles with $\phi_{\rm M} = 2.35$ and $L_z \Lambda = 107$ for a single plateau (continuous black, initial energy density ${\mathcal{E}}_2$) and for a single peak (dashed blue, initial energy density ${\mathcal{E}}_1$) of (left) the local energy density and (right) the transverse pressure. } \end{figure*} If the simulated box contains enough total energy density, the final stage of the spinodal instability will be an inhomogeneous phase separated solution (neglecting for now subtleties of quite small finite boxes, where finite size effects act as a regulator of the instability~\cite{Bea:2020ees}) that forms a plateau. In the special case of not enough total energy density in the system, the formed final stage is a single peak.\\ The transverse pressure of the final stage of a simulation with the initial homogeneous local energy density ${\mathcal{E}}(t=0) = {\mathcal{E}}_2$ is plotted as the black continuous curve of Fig.~\ref{fig:domainvspeak} (right). The maximum value of the transverse pressure $P_T(z \approx -4.1) \approx 11.0 \times 10^{-5}$ is well within the tolerance criterion of the critical pressure ($\approx 95\%\, P_c$); therefore it is a plateau. In contrast, the maximum of the transverse pressure of the final stage in the same theory but with less initial local energy density, ${\mathcal{E}}(t=0) = {\mathcal{E}}_1$, has a value of around $70\%\, P_c$, below the tolerance criterion; therefore its final inhomogeneous state is a peak. The final longitudinal profile of the transverse pressure is plotted as the blue dashed curve in Fig.~\ref{fig:domainvspeak} (right). The maximum in $P_T$ always corresponds to the location along the longitudinal direction $z$ of the maximum of the local energy density ${\mathcal{E}}$. As seen in Fig.~\ref{fig:domainvspeak} (left), the plateau clearly has a large extent near the hot stable phase in the local energy density, while the peak only touches it briefly around its maximum. In both cases (black continuous and blue dashed) the extended minimum in Fig.~\ref{fig:domainvspeak} (right) has the value of the critical pressure and hence is a valley.
Consequently, the simulation in continuous black is a neat example of a fully phase separated solution.\\ \subsection{Reshaping stage}\label{subsec:reshaping} \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=.51\textwidth]{figs/pt_plateau_peak_reshaping_phim_2_35_t_1600.pdf} \quad \hspace{-2mm} \includegraphics[width=.51\textwidth]{figs/pt_peaks_plateau_reshaping_phim_2_3_t_1180.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{\label{ptcriteriumreshaping} Transverse pressure $P_T$ longitudinal profile during reshaping as the solid black line and the critical pressure $P_c$ as the dashed gray line: (left) with $\phi_M = 2.35$ at $t \Lambda = 1200$, forming a peak, two valleys and a plateau, in the simulation visualized on the bottom left of Fig.~\ref{plateauformationquad}; (right) with $\phi_M = 2.3$ at $t \Lambda = 1180$, forming several peaks, a single plateau, and valleys in between, in the simulation visualized on the left of Fig.~\ref{mergers}. } \end{figure*} The plateaux are either formed directly, early on in the reshaping stage, or later on during the merger stage from several joined peaks. Again, the longitudinal extent of the simulations is assumed to be wide enough to at least fit a phase separated system with two interfaces. Examples of directly formed peaks and plateaux are given here.\\ Of course, in the early reshaping stage the distinction is by definition a lot messier than in the settled final stage, due to the ongoing dynamics. Nevertheless it is possible to distinguish between peaks and plateaux based on their transverse pressure with a relaxed criterion, by allowing the maximum of a plateau or a valley to overshoot the equilibrium transition pressure. For example, in Fig.~\ref{ptcriteriumreshaping}(left) one recognizes a single peak at the position $z \Lambda \approx -45$ and a single plateau at $z \Lambda \approx 0$ forming. It also indicates that at this particular moment the valley from $z \Lambda \approx 20$ to $z \Lambda \approx 52$ is still out-of-equilibrium. The valley between $-35 \lesssim z \Lambda \lesssim -15$ is much more equilibrated. In the simulation with a wide extent shown in Fig.~\ref{ptcriteriumreshaping}(right) one recognizes the formation of seven valleys, a single plateau at the position $z \Lambda \approx -12$, and six peaks, whose transverse pressure maxima are far lower than the transition pressure $P_c$. During its reshaping, the maximum value of a plateau will severely overshoot the transition pressure $P_c$, while the maxima of the peaks never reach $P_c$. After reshaping, in the merger stage, one observes the peaks to settle quite quickly and show only small changes over time: the maximum energy density of a peak decreases by less than $1\%$ over very long times\cite{Attems:2019yqn}. Relatedly, the transverse pressure changes by a similar amount over long periods of time. In the merger stage the formed peaks typically merge together into plateaux. Plateaux, in contrast, take a long time to settle (the more extended, the longer) and asymptotically tend to $P_c$. Therefore the relaxed inhomogeneous-state criterion works equally well during the reshaping and merger stages: by checking whether the transverse pressure of an inhomogeneous state overshoots $P_c$, or at least reaches the tolerance band around it (within $20\%$ of $P_c$), one is able to handily distinguish plateaux from peaks.
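The classification logic can be summarized in a minimal sketch, assuming the extremal transverse pressure of a state and the transition pressure $P_c$ have already been extracted from the boundary data (the helper below is hypothetical and not part of the published analysis code):

\begin{verbatim}
def classify_state(p_T_extremum, p_c, kind, tol=0.20):
    """Classify an inhomogeneous state by the transverse pressure at
    its extremum. kind: 'max' for a local maximum, 'min' for a local
    minimum. A state attaining P_c within the tolerance band is a
    plateau (for maxima) or a valley (for minima); otherwise it is a
    peak or a gorge."""
    attains_pc = p_T_extremum >= (1.0 - tol) * p_c
    if kind == 'max':
        return 'plateau' if attains_pc else 'peak'
    return 'valley' if attains_pc else 'gorge'
\end{verbatim}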
\subsection{Merger stage} \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=.51\textwidth]{figs/phase_merger_L7_e0_0_2_k_1_3depsilon_t5885} \quad \hspace{-2mm} \includegraphics[width=.51\textwidth]{figs/phase_merger_L7_e0_0_25_numnoise_8peaks_3depsilon_t10700} \end{tabular} \end{center} \vspace{-5mm} \caption{\label{mergers} Time evolution of two different mergers with $\phi_{\rm M} = 2.3$ and $L_z\Lambda \simeq 187$ longitudinal extent: (left) spacetime evolution with ${\mathcal{E}\left(t=0\right)}={\mathcal{E}}_1$ and an initial $n=1$ perturbation up to $t\Lambda = 5885$; (right) spacetime evolution with ${\mathcal{E}\left(t=0\right)}={\mathcal{E}}_2$ and numerical noise as the initial perturbation up to $t\Lambda = 10700$.} \end{figure*} An even more challenging stage for the distinction criterion is the merger stage. In the previous subsections the static final stage and the early reshaping stage, in which the inhomogeneous states form, have been discussed. Due to the static nature of the final stage and the usually crisp formation of the spinodal instability, those stages are easier to classify. In the so-called merger stage, the complicated dynamics of peaks and domains merging happens. Here one discerns that mergers happen in the same qualitative way for the different theories: peak $\leftrightarrow$ peak and peak $\leftrightarrow$ plateau mergers lead to the preferred final solution of a single phase separated plateau. It is worth pointing out that forming states with more than one plateau requires significant computing resources, due to the large longitudinal extent and the increased evolution time of this large merger and subsequent equilibration. Here I analyse for the first time two evolutions where the formed inhomogeneous states look like out-of-equilibrium or equilibrium plateau mergers yet are in fact peak $\leftrightarrow$ plateau mergers. The two merger evolutions in \fig{mergers} are difficult examples for the classification of the inhomogeneous states by the criterion. \\ In the merger of \fig{mergers}(left) at around $t \Lambda \approx 2000$, one could be tricked into seeing two plateaux merging, while the maximum of the left peak overshoots the critical pressure only briefly, after the left group of peaks merges around $t \Lambda \approx 1400$, but then distinctly settles below it. The respective maxima are each a product of mergers, and each looks deceptively massive enough to be a plateau. The right merging group of \fig{mergers}(left) never comes to settling between $1700 < t \Lambda < 1900$, but as the product of only two peaks it is very likely a peak too. Hence one can classify the mergers in \fig{mergerzooms}(left) as peak $\leftrightarrow$ peak.\\ In the merger of \fig{mergers}(right) at around $t \Lambda \approx 8750$, one could equally be tricked into seeing two $\mathcal{E}_{\text{high}}$ plateaux merging. Here the case is a bit easier to disentangle, as both maxima pick up an almost constant velocity towards each other, but settle long before the merger. The left maximum is a plateau, while the right maximum is a peak, as its transverse pressure settles at $\approx 0.75 P_c$. Consequently the evolution of \fig{mergerzooms}(right) shows the process of a peak $\leftrightarrow$ plateau merger. The merger previously demonstrated in Fig.~13 of \cite{Attems:2019yqn} happens out of equilibrium, where each plateau is still oscillating around $\mathcal{E}_{\text{high}}$ before merging.
Therefore it is a third example of a difficult classification during the merger stage. The left maximum seems to settle quite a bit below the transition pressure, while the right maximum settles around it. Accordingly, it is also a peak $\leftrightarrow$ plateau merger.\\ \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=.51\textwidth]{figs/phase_merger_L7_e0_0_2_k_1_3depsilon_t2200_zoom} \quad \hspace{-2mm} \includegraphics[width=.51\textwidth]{figs/phase_merger_L7_e0_0_25_numnoise_8peaks_3depsilon_t10700_zoom} \end{tabular} \end{center} \vspace{-5mm} \caption{\label{mergerzooms} Zoom into the respective spacetime evolutions of \fig{mergers}: (left) out-of-equilibrium peak $\leftrightarrow$ peak merger; (right) merger of a settled peak $\leftrightarrow$ plateau. } \end{figure*} Hence the newly proposed criterion is useful to classify the inhomogeneous states in all the different stages of the spinodal instability. Only during subsequent fast, violent out-of-equilibrium mergers may the state not be directly attributable. As conjectured, each evolution reaches the same preferred final state, where the extent of the ultimate plateau depends on the total energy density of the calculation. \section{Characteristics of the interface} With the clear distinction between the different states formed by the spinodal instability, we will now focus on the characteristics of the phase separation with varying criticality. \subsection{Shape} \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=.51\textwidth]{figs/e_phiM_2_35i_shape_plateau.pdf} \quad \hspace{-2mm} \includegraphics[width=.51\textwidth]{figs/e_phiM_2_45i_shape_plateau.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{\label{fig:domainshape} Longitudinal profiles of an interface with (left) $\phi_{\rm M} = 2.35$ and initial energy density ${\mathcal{E}}_3$; (right) $\phi_{\rm M} = 2.45$ and initial energy density ${\mathcal{E}}_1$, both for a single plateau in continuous black with the fitted interface shape in dashed green. } \end{figure*} As seen in Fig.~\ref{fig:domainshape}, the shape of the interface is well approximated by the function \begin{align}\label{eq:shape} {\mathcal{E}} (z) \approx \frac{\Delta {\mathcal{E}}}{2} \left[ 1 + \tanh \left( \frac{z - z_0}{b} \right) \right] + \mathcal{E}_{\text{low}} \end{align} with $\Delta {\mathcal{E}} = \mathcal{E}_{\text{high}} - \mathcal{E}_{\text{low}}$, $z_0$ the exact midpoint between $\mathcal{E}_{\text{low}}$ and $\mathcal{E}_{\text{high}}$ of the interface, and $b$ the extent of the interface. Note that the fit with Eq.~\eqref{eq:shape} is very good for the interface from the midpoint $z_0$ to $\mathcal{E}_{\text{high}}$, but is only approximate on the downside approach from $z_0$ to $\mathcal{E}_{\text{low}}$. The lower tail fit improves considerably for softer interfaces. This improvement is visible when comparing the left and right plots of Fig.~\ref{fig:domainshape}. For the interface of the domain wall with a confining vacuum~\cite{Aharony:2005bm} the $\tanh$ function is an almost perfect fit~\cite{jarvinen:20200520}.\\ \begin{table}[h!]
\begin{center} \caption{Approximate width of the plateau interface.} \label{tab:width} \begin{tabular}{l|l|l|l|l|l} \toprule $\boldsymbol{\phi_{\textrm M}}$ & $2.25$ & $2.3$ & $2.35$ & $2.4$ & $2.45$\\[0ex] \midrule $b \Lambda$ & $2.47$ & $2.75$ & $3.12$ & $3.62$ & $4.84$\\ \bottomrule \end{tabular} \end{center} \end{table} As intuitively expected from the picture of a less pronounced first-order phase transition, the interface grows with criticality. The extent almost doubles going from $\phi_{\textrm M} =2.25$ to $\phi_{\textrm M} =2.45$, as listed in Table~\ref{tab:width}. Counterintuitively, this means that the spacetime of the simulated box with the same amount of total energy density needs to be bigger for the softer phase transition to fit the phase separated solution nicely inside. Nevertheless, it is the hot stable energy density $\mathcal{E}_{\text{high}}$ in the studied theories which is the dominating thermodynamical quantity for determining how much energy a phase separated final plateau has. In all cases, roughly more than $90\%$ of the total energy density is concentrated in the domain of the plateau, while less than $10\%$ of the total energy density is found in the tails from the midpoint on. Of course, this relates to the small cold energy density $\mathcal{E}_{\text{low}}$, and the comparison also assumes the same finite longitudinal extent. The steepness of the phase transition is the dominating factor leading to only a small portion of the total energy density in the tails of the interface. \subsection{Surface tension} With increasing criticality, the pressure exerted by the interface decreases. This can be seen both from the decrease of the transverse pressure minima at the phase transition and directly from the surface tension of the interface:\\ The surface tension of the interface is by definition the excess free energy per unit area in the transverse directions $x_\perp$. The surface tension of the interface is positive. For an equilibrated homogeneous system, the free energy density per unit volume is equal to minus the transverse pressure, $F(z) = - P_T(z)$. As discussed in \cite{Attems:2019yqn}, by integrating the transverse pressure minus the transition pressure over the full extent and taking into account that the final solution has two interfaces, one gets the surface tension of the respective interface \begin{align} \sigma = \frac{1}{2} \int_0^{L_z} dz \left[ F(z) - F_c \right] = -\frac{1}{2} \int_0^{L_z} dz \left[ P_T(z) - P_c \right] \,. \end{align} As expected, any presence of an interface increases the free energy of the system.\\ \begin{table}[h!] \begin{center} \caption{Surface tension in equilibrium and the absolute value of the minima\\ of the transverse pressure for each interface.} \label{tab:surface} \begin{tabular}{r|r|r|r|r|r} \toprule $\boldsymbol{\phi_{\textrm M}}$ & $2.25$ & $2.3$ & $2.35$ & $2.4$ & $2.45$\\[0ex] \midrule $\boldsymbol{\sigma /\Lambda^3}$ & $3.9 \times 10^{-3}$ & $2.9 \times 10^{-3}$ & $1.9 \times 10^{-3}$ & $1.1 \times 10^{-3}$ & $5.4 \times 10^{-4}$\\ \midrule $|\boldsymbol{Min(P_T)|/\Lambda^4} $ & $7.9 \times 10^{-4}$ & $5.3 \times 10^{-4}$ & $3.1 \times 10^{-4}$ & $1.5 \times 10^{-4}$ & $3.4 \times 10^{-5}$\\ \bottomrule \end{tabular} \end{center} \end{table} As listed in Table~\ref{tab:surface}, each theory gives rise to a distinct value. The surface tension decreases by almost an order of magnitude from the strongest to the softest first-order phase transition.
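Assuming the settled final profile $P_T(z)$ is available on a grid (e.g., loaded from the published boundary data), the surface tension integral above reduces to a simple quadrature; the helper below is an illustrative sketch, not the published analysis code:

\begin{verbatim}
import numpy as np

def surface_tension(z, p_T, p_c):
    """sigma = -1/2 * integral_0^Lz dz [P_T(z) - P_c]; the factor
    1/2 accounts for the two interfaces of the phase separated
    final solution in the periodic box."""
    return -0.5 * np.trapz(p_T - p_c, z)
\end{verbatim}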
Likewise, the minima of the transverse pressure at each interface, as seen in Table~\ref{tab:surface}, differ by more than an order of magnitude. This indicates that the range of simulated theories varies widely in criticality. \section{Evolution of the spinodal instability} Having outlined the static properties of the inhomogeneous states with varying criticality, I will now demonstrate novel dynamical evolutions of the spinodal instability. \subsection{Comparison of the formation time} \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=.49\textwidth]{figs/3d_epsilon_phiM2_2_25_plateau_formation.pdf} & \includegraphics[width=.49\textwidth]{figs/3d_epsilon_phiM2_2_35_plateau_formation.pdf} \\[0.4cm] \includegraphics[width=.49\textwidth]{figs/3d_epsilon_peakmerger_dissipation_2_4_shift_25g4750.pdf} & \includegraphics[width=.49\textwidth]{figs/3d_epsilon_phiM2_2_45_plateau_formation.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{\label{plateauformationquad} Four spacetime evolutions of the local energy density with a plateau formation for the subcritical theories, from top left to bottom right, with $\phi_{\rm M} = \{2.25, 2.35, 2.4, 2.45\}$, initial local energy density ${\mathcal{E}} (t= 0) = \{{\mathcal{E}}_3, {\mathcal{E}}_2, {\mathcal{E}}_2, {\mathcal{E}}_2\}$ and the same longitudinal extent $L_z \Lambda \approx 107$, where the plateau forms before merging with peaks (top left and top right), the plateau forms alongside a dissipating peak (bottom left), or only a plateau directly reshapes (bottom right).} \end{figure*} The more critical the theory, meaning the closer it is to a critical point, the slower the spinodal instability. This seems apparent from the different thermodynamics, as the unstable/spinodal region shrinks and the magnitude of the negative speed of sound squared decreases too. Summing it up: the more critical the theory is, the fewer unstable modes it contains, and the unstable modes have a slower growth rate. The challenge here lies in verifying this hypothesis in the full non-linear situation of the spinodal dynamics and in showing the differences between softer and stronger phase transitions.\\ As a new measure of the dynamics of the spinodal instability, I consider the formation time, defined as the time from the start of the evolution to when the system first reaches one of the two stable phases. At the formation time the inhomogeneous system reaches either $\mathcal{E}_{\text{high}}$ or $\mathcal{E}_{\text{low}}$. As discussed in subsection \ref{subsec:reshaping}, this happens quite far from equilibrium, during the reshaping stage. When the system has a large longitudinal extent this can be more complicated, as the stable phase is only reached after the merger of several peaks (since each of them is likely to form below $\mathcal{E}_{\text{high}}$). Moreover, the formation time would then depend on the initial conditions, and specifically on the initial velocities of the merging peaks, which make them merge more slowly or quickly. The extraction of the formation time is approximate, related to the fact that one can set up scenarios specifically triggering the most unstable mode (instead of the first mode $n = 1$, which populates all unstable modes) or initialize with a stronger initial perturbation, in which cases the extracted formation time could be faster.
To extract the formation time, and for the sake of comparing different theories, one chooses a subset of simulations where the spinodal instability directly forms at least one plateau and which have similar initial homogeneous local energy densities. These setups allow for a direct comparison of the formation time of the spinodal instability with varying criticality. Since the simulations have initial energy densities below, but of similar magnitude to, $\mathcal{E}_{\text{high}}$, the formation of a plateau with $\mathcal{E}_{\text{high}}$ usually happens long before the cool down of a valley to $\mathcal{E}_{\text{low}}$. The time when the system reaches both $\mathcal{E}_{\text{low}}$ and $\mathcal{E}_{\text{high}}$ is the full phase separation time. This full separation time depends heavily on the initial state and thus needs a much higher number of different simulations for a meaningful comparison. As both the cooled and the heated regions overshoot the respective equilibrium values of $\mathcal{E}_{\text{low}}$ and $\mathcal{E}_{\text{high}}$, no tolerance criterion is applied for the computation of the formation time. In what follows I extract the direct formation time of a plateau. \begin{table}[h!] \begin{center} \caption{Formation time of the spinodal instability with varying criticality.} \label{tab:time} \begin{tabular}{l|l|l|l|l} \toprule $\boldsymbol{\phi_{\textrm M}}$ & $2.25$ & $2.35$ & $2.4$ &$2.45$ \\[0ex] \midrule $\boldsymbol{t_{\rm formation} \Lambda}$ & $880$& $1380$ & $1725$& $2660$ \\ \bottomrule \end{tabular} \end{center} \end{table} The runs visualized in Fig.~\ref{plateauformationquad} provide, from all the available simulations, the direct plateau formation for each theory with a similar initial state. The formation time of the spinodal instability in the simulated theories triples near criticality, as documented in Table~\ref{tab:time}. In all the simulations of Table~\ref{tab:time} the spinodal instability is triggered by the same small initial perturbation. Each simulation also has enough total energy density to form a plateau. Of course, the theory with the strongest phase transition, $\phi_{\rm M} =2.25$, has the fastest formation time. The softest theory near criticality, $\phi_{\rm M} =2.45$, has a threefold longer formation time. For the theory with $\phi_{\rm M} = 2.3$, no clean direct plateau formation has been simulated, hence it is not listed in Table~\ref{tab:time}. Nevertheless, its fastest extracted formation time of $t_{\rm formation} \Lambda = 1190$ fits the overall trend of the table. In the plateau mergers of Fig.~\ref{mergers} one extracts the fastest respective formation times of $t_{\rm formation} \Lambda = \{ 1460, 1420 \}$ for $\phi_{\rm M} =2.3$. Due to the formation of multiple peaks and the wait for the first mergers, these formation times are longer and not representative of the corresponding theory. In Fig.~\ref{dissipation}(left) with $\phi_{\rm M} = 2.35$, one gets roughly a formation time of $t_{\rm formation} \Lambda = 2400$, and in Fig.~\ref{dissipation}(right) with $\phi_{\rm M} = 2.4$ also of $t_{\rm formation} \Lambda = 2400$. In both cases, the slowdown relative to the extracted values in Table~\ref{tab:time} happens due to the double formation of a plateau and a peak. In conclusion, for a meaningful comparison of the formation time one needs to simulate direct single-plateau formations. Aptly, the strongest phase transition with the largest spinodal region shows the fastest formation time.
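A minimal sketch of this extraction, assuming the boundary energy density has been loaded (e.g., from the Zenodo data) into an array of shape (number of time steps, number of grid points); as stated above, no tolerance band is applied:

\begin{verbatim}
import numpy as np

def formation_time(t, eps, eps_low, eps_high):
    """First time at which the profile eps(t, z) touches a stable
    phase, i.e. max_z eps >= E_high or min_z eps <= E_low. Returns
    np.nan if neither stable phase is reached during the run."""
    hit = (eps.max(axis=1) >= eps_high) | (eps.min(axis=1) <= eps_low)
    return t[np.argmax(hit)] if hit.any() else np.nan
\end{verbatim}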
\subsection{Peak dissipation into plateaux} An open puzzle of the holographic simulations of the spinodal instability is the rigidity of the interface. While peaks and plateaux do move along the longitudinal extent, their interfaces keep a fixed shape. Even in collisions, the interface of the phase separation only wobbles in a very rigid way. Moreover, even over an extended spacetime the peaks have only a slowly varying velocity with almost no distortion of their shape. Here the first calculations of a peak dissipating into a plateau are presented, for two different theories. This shows the possibility for inhomogeneous states to change their shape.\\ \begin{figure*}[t] \begin{center} \begin{tabular}{cc} \quad \hspace{-2mm} \includegraphics[width=.49\textwidth]{figs/3d_epsilon_peakmerger_dissipation_2_35_shift_25e4750.pdf} & \includegraphics[width=.49\textwidth]{figs/3d_epsilon_plateau_peak_dissipation_phiM_2_4_shift_e0_0_15_t14500.pdf} \end{tabular} \end{center} \vspace{-5mm} \caption{\label{dissipation} Two subcritical spacetime evolutions of the local energy density demonstrating the dissipation of a peak into a plateau with $L_z\Lambda = 107$: (left) initial homogeneous ${\mathcal{E}}(t=0)= {\mathcal{E}}_4$ and $\phi_{\rm M} = 2.35$ up to $t \Lambda = 6250$; (right) initial homogeneous ${\mathcal{E}}(t=0)= {\mathcal{E}}_1$ and $\phi_{\rm M} = 2.4$ up to $t \Lambda = 14500$.} \end{figure*} This process has been newly observed and needs a finely tuned setup. The system needs to directly produce a plateau and a peak, both with no initial velocities. In Fig.~\ref{plateauformationquad}(bottom left) and in Fig.~\ref{dissipation}(left $+$ right) are shown three systems for different theories, $\phi_{\rm M} = \left\{2.35, 2.4\right\}$, and initial energies, but each with enough total energy density to form both a plateau and a peak. A similar fine-tuned setup can also be reproduced in a theory with a very strong first-order phase transition with $\phi_{\rm M} = 2.3$. Therefore this untypical dissipation process does not depend on criticality. In the simulations\cite{attems_maximilian_2020_3445360,attems_maximilian_2019_3445360} performed to date, this process has not yet been observed in large extents, where several peaks or plateaux initially form. While peaks with a slow velocity form on a regular basis, it is quite untypical for the spinodal instability to morph into quasi-static features. This happens only if an excited dominant unstable mode carries over from the first stage into the reshaping stage, forming a certain number of quasi-static peaks. Then, after some little perturbation, these peaks usually gain velocity and merge. In this tuned setup, the peak diminishes while staying at the same location. In Fig.~\ref{plateauformationquad}(bottom left) the simulation first forms both a plateau at around $t \Lambda \approx 1725$ and a peak around $t \Lambda \approx 2250$. Surprisingly, the peak stays at the same location with zero velocity, and its maximum energy density decreases first quasi-linearly over a duration of $t \Lambda \approx 2250$ and then exponentially for $t\Lambda \approx 600$, until it completely disappears. During the quasi-linear stage of the dissipation of the peak, the plateau grows in total energy density, while the valley between the peak and plateau continues to exist. The valley fills up during the exponential stage of the dissipation, precipitating the decay of the peak. Afterwards the minimum cools down again to the cold stable phase.
After $t \Lambda \approx 5100$ the last amount of local energy density joins and slightly perturbs the plateau. In Fig.~\ref{dissipation}(left) one sees first the formation of a plateau at around $t \Lambda \approx 1900$ and a stationary peak around $t \Lambda \approx 2200$. At first the dissipation happens in quasi-linear form during $t\Lambda \approx 1300$ and then exponentially over $t\Lambda \approx 700$. The first stage is quasi-linear because, on top of the dissipation, the fast equilibration of the peak after its formation proceeds. This dissipation process takes less time than the dissipation in Fig.~\ref{plateauformationquad}(bottom left), as this peak initially has approximately a quarter less total energy density and the quasi-linear dissipation is faster. Again, at the end of the full dissipation of the peak around $t \Lambda \approx 4200$, one notices a slight perturbation of the plateau as the last energy density joins. Here this perturbation is more difficult to discern in the visualization, as the plateau itself is still equilibrating from its formation. In Fig.~\ref{dissipation}(right) one notices the formation of a plateau at around $t \Lambda \approx 1500$ and then of a stationary peak at around $t \Lambda \approx 2050$. While this would be the fastest formation time for $\phi_{\rm M} = 2.4$, the available dataset has no comparable simulations for any other theories with a similar initial state, hence it was not considered in Table~\ref{tab:time}. The third case has the slowest dissipation, with the peak only losing one third of its height over a duration of more than $t \Lambda \approx 10000$. The exponential dissipative stage sets in at around $t \Lambda \approx 12500$. In this simulation the dissipation is still ongoing at $t \Lambda =14500$, and the peak will take more computing time to fully dissipate.\\ All three presented atypical cases have a plateau forming before a stationary peak, which then dissipates first in a quasi-linear and then in an exponential manner. In conclusion, this atypical setup happens for a system with enough total energy density to form a plateau and a peak, but not another inhomogeneous structure. It is striking that the dissipation happens independently of criticality, both for harder and softer phase transitions. It is the only process known that defies the rigidity of the interface. \section{Discussion} Simulating the spinodal instability far from and near a critical point, one recognizes that the harder the first-order phase transition is, the more dramatic the spinodal instability is. The shrinking extent and the increasing surface tension of the interface of the phase separation go hand in hand. New insights from the dynamical simulations include the rising formation time towards criticality, while the merger dynamics remain similar for varying degrees of criticality.\\ Looking at the holographic dynamics of the spinodal instability one remarks: the more pronounced the first-order phase transition, the more unstable modes there are, which dictate a faster spinodal instability. This translates into a much faster formation time away from criticality. With decreasing criticality the domain wall between the stable phases shrinks. As the separation in magnitude between the local energy densities $\mathcal{E}_{\text{low}}$ and $\mathcal{E}_{\text{high}}$ grows with less criticality, one needs more local energy density in the system to generate a phase separated solution. The finding of this single preferred final solution\cite{Attems:2019yqn} is now substantiated with softer and harder first-order phase transitions.
The newly proposed criterion for the distinction of these inhomogeneous states performs well in all stages of the spinodal instability. It fittingly distinguishes between the phase separated plateaux and the prevalent local maxima, the so-called peaks.\\ Moreover, the newly found atypical setup with a peak dissipating into a plateau represents progress in the understanding of the rigidity of the interface. For the first time, an inhomogeneous solution has been shown to change its shape after formation without directly merging. The peak without velocity slowly dissipates. This setup shows that the interface is not fully immovable. It is still surprising how rigid the interface is across the theories with widely different criticality. An open question remains whether, in a simulated collision with high velocities, the remnants would perhaps separate again after the collision. For now, all scenarios of the spinodal instability demonstrate peaks joining together with no side remnants. A possible scenario involves a peak crossing a plateau at high speed. It would also be interesting to be able to trigger the spinodal instability in a plasma blob created by shockwave collisions near a critical point. The numerical challenge here lies in the extreme slow-down of the dynamics at a critical point~\cite{Attems:2018gou}.\\ Although this approach to criticality used a bottom-up model, the performed simulations suggest that the qualitative physics may be quite universal in strongly coupled situations approaching a critical point, where the negative speed of sound squared induces the spinodal instability. It would be interesting to break the planar symmetry to allow dynamics in the transverse directions. While the reshaping stage dynamics may change, the simulated formation times should be similar to those of this study. Extending the simulations to include conserved charges, such as the baryon chemical potential $\mu$, would be interesting to enhance the analysis to the full $T$--$\mu$ phase diagram~\cite{Critelli:2017oub,Du:2020bxp,Dore:2020jye} and make contact with the experimental programs: a major goal for heavy-ion collisions is to determine the existence of a first-order phase transition between the hadronic and quark-gluon plasma phases in the QCD phase diagram~\cite{Fukushima:2010bq,Fukushima:2013rx}, which is predicted by numerous effective field theory models~\cite{Stephanov:2004wx}. As the first-order transition presumably spans a large temperature range, it is worthwhile to explore whether the heavy-ion experiments see a signature~\cite{Bluhm:2020mpc} of it. Very recently, a sophisticated hadronic mean-field simulation using the Vlasov equation has also shown that hadronic systems initialized in unstable regions of the phase diagram undergo spontaneous spinodal decomposition~\cite{Sorensen:2020ygf}. It will be exciting to see the upcoming comparisons with experimental data influenced by the two-phase system.\\ Previously, we have demonstrated that the spinodal instability is very well described by second-order hydrodynamics~\cite{Attems:2019yqn} with purely spatial second-order gradients included and redefined non-conformal second-order transport coefficients. It would be very interesting to develop the new hydrodynamics formulation of~\cite{Bemfica:2017wps,Kovtun:2019hdm}, which may be able to incorporate the large spatial gradients of the phase separation for a full hydrodynamical evolution.
\section*{Acknowledgments}\label{sec:ack} I am grateful for fruitful discussions with J.~Casalderrey-Solana, E.~Kiritsis and D.~Mateos; to Y.~Bea, J.~Brewer, N.~Jokela, A.~Vuorinen and M.~Zilhao for interesting exchanges; and to R.~Janik, M.~Jarvinen, M.~Stephanov and L.~Yin for general discussions on the critical point and phase transitions. I thank the Center for Supercomputing of Galicia (CESGA) for providing extensive High Performance Computing resources in the Finisterra cluster (project usciemat). I would like to thank the Institute for Nuclear Theory for their hospitality at the INT-19-1b workshop during early stages of this work. I acknowledge support through H2020-MSCA-IF-2019 ExHolo 898223. This work is also supported by the ``María de Maeztu'' Units of Excellence program MDM-2016-0692, Xunta de Galicia (Consellería de Educación) and the Spanish Research State Agency. \bibliographystyle{utphys}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} Recently, \citet{knn} established a general equilibrium framework comprising multifactor CES production functions with estimated elasticities for each of the industrial sectors; each elasticity is measured by the slope of the regression line between the growths of factor shares and factor prices, which are observed in a set of linked input--output tables. The present study is intended to extend this framework in such a way as to incorporate substitution between domestic and imported commodities (i.e., Armington elasticity) and to endogenize international trade for all commodities traded between two countries, namely, Japan and the Republic of Korea. Armington elasticity is an essential component in trade policy analyses. Previous work concerning the economic assessment of trade liberalization schemes \citep[e.g.,][]{hrt, nakajima, uratakiyota, bokep} has used computable general equilibrium (CGE) models based on the Global Trade Analysis Project (GTAP) database. While these models make use of empirically estimated elasticities, the estimates of the elasticities between the aggregated factors, which are essentially based upon time series analyses, tend to show smaller elasticities than anticipated \citep{mdb}. Notwithstanding, Armington elasticities can be large in light of the indifference between goods of the same classification but from different countries. Moreover, previous CGE models are calibrated at a single state (i.e., one-point calibration) to incorporate regionalities, except for the elasticity parameters, which are usually given a priori. From another perspective, \citet{imf} was concerned with the separability of foreign commodities, i.e., the distinction between inter- and intra-group Armington elasticities. The inter-group elasticity is the elasticity of substitution between a basket of domestic commodities and that of imports as a whole, whereas the intra-group elasticity is the elasticity of substitution between a basket of imports from one foreign country and that from another. The estimates of inter-group elasticities were larger for intermediate input sectors, whereas the intra-group elasticities were significantly lower. In the same vein, \citet{feenstra} studied the elasticity of substitution between domestic and foreign goods (i.e., macro elasticity), and between varieties of foreign goods (i.e., micro elasticity), and essentially found the opposite: the micro elasticity was significantly larger than the macro elasticity. Our approach differs from those of the previous studies in two aspects. First, all elasticities are measured based upon published statistics, i.e., linked input--output tables for Japan and for Korea and the UN Comtrade database, and are not adopted from elsewhere. Second, we construct a model that completely replicates two temporally distant state observations, rather than conducting a time series analysis to measure elasticities between aggregated factor inputs, as we are interested in shorter-term, sector-wide policy implications such as tariff liberalization. The state-replicating Armington elasticities are measured by two-point calibration. That is, we measure the elasticity that agrees with the two observed domestic--foreign shares in both physical and monetary terms. Moreover, the elasticities are measured in a two-stage nested structure, as illustrated in Fig. \ref{nesting}. \begin{figure}[t!]
\includegraphics[scale=1]{nesting2.eps} \caption{Nested structure of macro and micro Armington elasticities. A foreign commodity price is given by aggregating the partner country's and the rest of the world's (ROW's) commodity prices. The compound commodity price is given by aggregating the domestic and foreign commodities' prices. Finally, the domestic price is given by a multifactor CES aggregator (i.e., unit cost function). } \label{nesting} \end{figure} Specifically, we evaluate the compound price of each factor input $w_{i}^C$ in terms of the observable domestic and foreign factor input prices ($w_{i}^D$, $w_{i}^F$), via a CES aggregation whose macro elasticity replicates the observed domestic--foreign market shares. We then calibrate the micro elasticity by using $w_{i}^F$ and the partner country's domestic price $w_{i}^{D\prime}$ in order that the observed partner--ROW market shares are replicated. In this way, and based upon 2000--2005 linked input--output tables for Japan and Korea, we construct a multi-sectoral (395 sectors for Japan and 350 for Korea) general equilibrium model with endogenized bilateral trade, in contrast to the previous studies with a limited variety of industrial sectors. The remainder of the paper is organized as follows. In the next section, we introduce the basics of the two-point calibration of the CES elasticity parameters, i.e., macro and micro Armington elasticities, and the multifactor CES elasticity estimation by regression. In Section 3, we apply these protocols using linked input--output tables for Japan and for Korea and the UN Comtrade database. In Section 4, we integrate the domestic and trade models to construct a bilateral general equilibrium model for welfare analysis of trade liberalization. Section 5 provides concluding remarks. \section{The Model} \subsection{Macro Aggregator} Assume foreign and domestic commodities are, to a certain extent, substitutes with a constant elasticity of substitution (CES). Then, the composite product price in a country (whose index $i$ is omitted) can be evaluated by a CES aggregator of foreign and domestic commodity prices as follows: \begin{align} w^C=\left( \alpha \left(w^D\right)^{1-\varepsilon} + (1-\alpha) \left(w^F\right)^{1-\varepsilon} \right)^{\frac{1}{1-\varepsilon}} \equiv {U}\left( w^D, w^F \right) \label{one} \end{align} where $w^C$ is the composite price of a commodity in the concerned country, $w^F$ is the price of the imported foreign product (including tariff), and $w^D$ is the price of the domestic commodity. Here, the share parameter $\alpha \in (0,1)$ and the macro Armington elasticity $\varepsilon$ are subject to estimation. According to Shephard's lemma, we can obtain the cost shares by taking derivatives as follows: \begin{align} &s^D=\frac{\partial w^C}{\partial w^D} \frac{w^D}{w^C}=\alpha \left( \frac{w^D}{w^C} \right)^{1-\varepsilon} &s^F=\frac{\partial w^C}{\partial w^F} \frac{w^F}{w^C}=(1-\alpha) \left( \frac{w^F}{w^C} \right)^{1-\varepsilon} \label{two} \end{align} where $s^D$ and $s^F$ denote the market shares of the domestic and imported commodities, respectively. One may verify that $s^D + s^F=1$ by taking (\ref{one}) into account. Below we show that $\varepsilon$ can be measured by two-point calibration using two temporally distant market share observations, namely, the reference market shares $( s_<^D, s_<^F )$ and the current market shares $( s_>^D, s_>^F )$, together with the price changes in the domestic $(w_<^D, w_>^D)$ and imported $(w_<^F, w_>^F)$ commodities. 
Now, according to (\ref{two}), the identities \begin{align} &s_<^D=\alpha \left( \frac{w_<^D}{w_<^C} \right)^{1-\varepsilon} &s_<^F=(1-\alpha) \left( \frac{w_<^F}{w_<^C} \right)^{1-\varepsilon} \label{three} \end{align} must hold at the reference state and the identities \begin{align} &s_>^D=\alpha \left( \frac{w_>^D}{w_>^C} \right)^{1-\varepsilon} &s_>^F=(1-\alpha) \left( \frac{w_>^F}{w_>^C} \right)^{1-\varepsilon} \label{four} \end{align} must hold at the current state. By virtue of (\ref{three}) and (\ref{four}), $\varepsilon$ can be solved for (two-state calibrated) as follows: \begin{align} \varepsilon =1-\frac{\ln {s_>^D}/{s_<^D} - \ln {s_>^F}/{s_<^F}}{\ln {w_>^D}/{w_<^D} - \ln {w_>^F}/{w_<^F} } =1-\frac{\Delta \ln s^D - \Delta \ln s^F}{\Delta \ln w^D - \Delta \ln w^F} \label{five} \end{align} where $\Delta$ is the difference operator, i.e., the current value minus the reference value. Also, we may solve for the share parameter $\alpha$ as follows: \begin{align} \frac{\alpha}{1-\alpha} = \frac{s_<^D}{s_<^F} \left(\frac{w_<^F}{w_<^D} \right)^{ (1-\varepsilon) } = \frac{s_>^D}{s_>^F} \left(\frac{w_>^F}{w_>^D} \right)^{ (1-\varepsilon) } \label{sss} \end{align} In this way, we obtain the macro Armington aggregator (\ref{one}) that replicates both the reference and current states specified by (\ref{three}) and (\ref{four}), respectively. We also note that the compound price $w^C$ will be evaluated using (\ref{one}), and it is therefore shown in brackets in Fig. \ref{nesting}. \subsection{Micro Aggregator} Let us indicate the partner country by $P$ and the ROW by $R$. Assume that the aggregated foreign import product price $w^F$ (whose commodity index $i$ is omitted) can be expressed as a CES aggregator of the price of the commodity imported from the partner country $w^P$ and that of the commodity from the ROW $w^R$, as follows: \begin{align} w^F= \left( \beta \left( w^P \right)^{1-\eta} +(1-\beta) \left( w^R \right)^{1-\eta} \right)^{\frac{1}{1-\eta}} \equiv {V}\left( w^P, w^R \right) \label{six} \end{align} where $\beta \in (0, 1)$ is the share parameter and $\eta$ is the micro Armington elasticity, both of which are subject to estimation. Note that $w^R$ must be evaluated assuming (\ref{six}) with the calibrated parameters, while $w^F$ and $w^P$ are statistically observable.\footnote{As discussed later, the price of the commodity from the partner country $w^P$ will be measured by using the partner country's domestic price $w^{D\prime}$, the relative import barrier factor with respect to the partner country ${\mu}$, and the currency exchange factor $\nu$, i.e., $w^P=\nu {\mu} w^{D\prime}$.} Hence, the parameters are calibrated according to the two-state observation of the partner country's market share within the commodity's imports, i.e., $(s_<^P, s_>^P)$. Notice that $s_<^P + s_<^R = 1$ and $s_>^P + s_>^R = 1$ by definition. 
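Before deriving the micro counterpart, it is worth pausing on the macro calibration just obtained, which amounts to a few lines of arithmetic. The following sketch (in Python, with illustrative shares and prices rather than data from the actual tables) recovers $\varepsilon$ via (\ref{five}) and $\alpha$ via (\ref{sss}), and verifies that both observed states are replicated; the micro calibration derived next admits the same treatment.

\begin{verbatim}
import numpy as np

def calibrate_macro(sD0, sD1, wD0, wD1, wF0, wF1):
    """Two-point calibration of the macro aggregator from the
    domestic shares and the domestic/foreign prices observed at
    the reference (0) and current (1) states."""
    sF0, sF1 = 1.0 - sD0, 1.0 - sD1
    # epsilon from the log-changes of shares and prices
    num = np.log(sD1 / sD0) - np.log(sF1 / sF0)
    den = np.log(wD1 / wD0) - np.log(wF1 / wF0)
    eps = 1.0 - num / den
    # alpha from the reference state (the current state gives the same value)
    r = (sD0 / sF0) * (wF0 / wD0) ** (1.0 - eps)
    return r / (1.0 + r), eps

def domestic_share(alpha, eps, wD, wF):
    """Domestic share implied by the aggregator, via Shephard's lemma."""
    wC = (alpha * wD ** (1 - eps)
          + (1 - alpha) * wF ** (1 - eps)) ** (1 / (1 - eps))
    return alpha * (wD / wC) ** (1 - eps)

# illustrative data: the domestic share falls as the domestic price rises
alpha, eps = calibrate_macro(0.90, 0.85, 1.00, 1.10, 1.00, 1.02)
print(domestic_share(alpha, eps, 1.00, 1.00),   # replicates 0.90
      domestic_share(alpha, eps, 1.10, 1.02))   # replicates 0.85
\end{verbatim}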
The following identities must hold at the reference state, according to Shephard's lemma applied to (\ref{six}): \begin{align} &s_<^P=\beta \left( \frac{w_<^P}{w_<^F} \right)^{1-\eta} &s_<^R=(1-\beta) \left( \frac{w_<^R}{w_<^F} \right)^{1-\eta} \label{eight} \end{align} Likewise, the following identities must hold at the current state: \begin{align} &s_>^P=\beta \left( \frac{w_>^P}{w_>^F} \right)^{1-\eta} &s_>^R=(1-\beta) \left( \frac{w_>^R}{w_>^F} \right)^{1-\eta} \label{nine} \end{align} By virtue of (\ref{eight} left) and (\ref{nine} left), $\eta$ can be solved for (two-state calibrated) as follows: \begin{align} \eta =1-\frac{\ln {s_>^P}/{s_<^P} }{\ln {w_>^P}/{w_<^P} - \ln {w_>^F}/{w_<^F} } =1-\frac{\Delta \ln s^P }{\Delta \ln w^P - \Delta \ln w^F} \label{ten} \end{align} Also, we may solve for $\beta$ as follows: \begin{align} \beta = {s_<^P}\left(\frac{w_<^F}{w_<^P} \right)^{1-\eta} = {s_>^P}\left(\frac{w_>^F}{w_>^P} \right)^{1-\eta} \label{eleven} \end{align} Hence, we have the micro Armington aggregator (\ref{six}) that replicates both the reference and current states. Also note that $w^R$ will be evaluated by (\ref{six}): \begin{align*} w^R = \left( \frac{\left(w^F\right)^{1-\eta} - \beta \left(w^P\right)^{1-\eta}}{1-\beta} \right)^{\frac{1}{1-\eta}}. \end{align*} The in-bound price of the product imported from the partner country $w^P$ is evaluated by the domestic price at the partner country $w^{D\prime}$ and the barrier factor ${\mu}$ under the currency exchange factor $\nu$. The barrier factor $\mu$ captures various components such as insurance, freight, miscellaneous taxes, and tariffs. For further convenience, we may decompose ${\mu}$ into the tariff factor $1+\tau$, where $\tau$ represents the tariff rate, and the other factors, which we denote by $\rho$, as follows: \begin{align} w^P = \nu{\mu}w^{D\prime} = \nu (1+\tau)\rho w^{D\prime} \label{thirteen} \end{align} As we monitor $\nu$ and ${\mu}$ for the two states, $w^P$ can be evaluated accordingly, i.e., \begin{align} &w_<^P = \nu_< \cdot {\mu}_< \cdot w_<^{D\prime} &w_>^P = \nu_> \cdot {\mu}_> \cdot w_>^{D\prime} \label{fourteen} \end{align} \subsection{Multifactor CES Aggregator} Production of industry $j$ (index omitted) is assumed to be carried out under constant returns to scale with a multifactor CES (constant elasticity of substitution) technology, whose unit cost function can be described in the following form: \begin{align} w^D = t^{-1} \left( \sum_{i=0}^n \lambda_i \left( w_i^C \right)^{1-\sigma} \right)^{\frac{1}{1-\sigma}} \label{fifteen} \end{align} where $\lambda_{i} \in (0,1)$ and $\sigma$ are the share parameter for the $i$th input and the multifactor CES elasticity of substitution, respectively, while $t$ denotes the productivity level. While $w^D$ is observable, $w_i^C$ depends on (\ref{one}) via $w^D$ and $w^F$, which are statistically observable, and the calibrated parameters $\alpha$ and $\varepsilon$. We note below that $\sigma$ and $t$ can be estimated by regression, for each industrial sector. 
The cost share of the $i$th input $s_i$ may be represented according to Shephard's lemma by differentiating (\ref{fifteen}) as follows: \begin{align} s_i = \frac{\partial w^D}{\partial w_i^C} \frac{w_i^C}{w^D} = \lambda_i t^{-(1-\sigma)}\left( \frac{w_i^C}{w^D} \right)^{1-\sigma} \label{sixteen} \end{align} By taking the logarithm of both sides, we have \begin{align} \ln s_i = \ln \lambda_i -(1-\sigma) \ln t + (1-\sigma) \left( \ln {w_i^C} - \ln{w^D} \right) \label{seventeen} \end{align} Thus, the difference in (\ref{seventeen}) between two temporally distant states, i.e., reference and current, is given by the following formula: \begin{align} \Delta \ln s_i = -(1-\sigma) \Delta \ln t + (1-\sigma) \left( \Delta \ln {w_i^C} - \Delta \ln {w^D} \right) \label{eighteen} \end{align} Note that if $\sigma$ and $t$ are estimated by the slope and the intercept of (\ref{eighteen}), $\lambda_i$ will be determined by (\ref{sixteen}). \section{Measurements} \subsection{Armington Elasticities} A set of linked input--output tables includes sectoral transactions in both nominal and real terms. Hence, such a set of tables provides temporally distant observations of cost shares and prices (as indexes) for all factor inputs (and outputs). In this study, we use the 1995--2000--2005 linked input--output tables for both Japan \citep{miac} and Korea \citep{bok}, and we chose the year 2000 for the reference state and 2005 for the current state. In order to calibrate the macro elasticity $\varepsilon_j$ on two-state observations using (\ref{five}), we standardize all prices at the \textit{current} state and evaluate the reference state prices by the current-standardized price index (the \textit{inflator}), which we denote by $q$. Specifically, we use the following terms for calibrating the parameters: \begin{align*} &\left( w_<^D, w_>^D \right) =\left( q^D, 1 \right) &\left( w_<^F, w_>^F \right) =\left( q^F, 1 \right) \end{align*} The parameters of the macro aggregator are thus evaluated by the following formulae, based on (\ref{five}) and (\ref{sss}): \begin{align*} &\alpha = s_>^D &\varepsilon = 1+\frac{\Delta \ln s^D -\Delta \ln s^F}{\ln q^D - \ln q^F} \end{align*} In order to evaluate the micro elasticities, we need reference and current observations of the partner country's and the ROW's market shares $(s_<^P, s_>^P)$ within the foreign factor inputs. To this end, we use the 6-digit HS trade data of the UN Comtrade database \citep{comtrade}, spanning 6,376 goods, converted into the linked input--output sector classification\footnote{ Subsequent analysis will be confined to traded goods (products) while excluding services, due to data availability. } in order to obtain the market share of the partner country with respect to that of the ROW in two periods (2000 and 2005). Further, in order to calibrate the parameters of the micro aggregators, we need to specify the in-bound prices of the partner country's commodities as noted in (\ref{fourteen}). That is, we need the inflator $q^P$, while $q^F$ is observable in the linked input--output tables. \begin{align*} &\left( w_<^F, w_>^F \right) =\left( q^F, 1 \right) &\left( w_<^P, w_>^P \right) =\left( q^P, 1 \right) \end{align*} Therefore, we use the exchange rate that properly scales the two countries' price indexes. 
Specifically, (\ref{fourteen}) must be replaced by the following identities: \begin{align} &q^P = \nu_< \cdot {\mu}_< \cdot q^{D\prime} &1 = \nu_> \cdot {\mu}_> \cdot 1 \label{twenty} \end{align} Note that since $\nu_<{\mu}_< = (\nu_</\nu_>)({\mu}_</{\mu}_>)$, according to (\ref{twenty}), we may use current-standardized index numbers for the reference currency exchange factor $\nu_<$ as well as for the reference barrier factor $\mu_<$. In this way, we evaluate the in-bound partner country's commodity inflator $q^P$ by way of an inflator of the commodity produced inside the partner country $q^{D\prime}$. Then, according to (\ref{ten}) and (\ref{eleven}), the parameters of the micro aggregator are determined by the following equations: \begin{align*} &\beta = s_>^P &\eta = 1 - \frac{\Delta \ln s^P}{\ln q^F - \ln q^P} \end{align*} \begin{figure}[t!] \includegraphics[width=.495\textwidth]{JPiepsilon.eps} \includegraphics[width=.495\textwidth]{JPieta.eps} \caption{Inverse macro Armington elasticity $\varepsilon_j^{-1}$ for Japan (left) and inverse micro Armington elasticity $\eta_j^{-1}$ for Japan against Korea (right). } \label{japan} \end{figure} In Fig. \ref{japan}, we display the two-point calibrated macro and micro Armington elasticities of the 395 sectors for Japan. Note that the sectors are ordered according to Colin Clark's three-sector theory, namely, $j=1$--27 are primary, $j=28$--294 are secondary, and $j=295$--395 are tertiary sectors. The figure shows the reciprocals, since the calibrated elasticities were very large and diverse. Overall, we have very large macro Armington elasticities, meaning that the domestic and imported commodities are (almost perfect) substitutes, while some of the imported commodities of the primary sectors show some degree of complementarity. On the other hand, Japan's micro Armington elasticities relative to Korean products are relatively small, meaning that, for Japan, Korean-made commodities are somewhat different from those imported from the rest of the world. In Fig. \ref{korea}, we display the two-point calibrated macro and micro Armington elasticities of the 350 sectors for Korea. In this case, sectors are primary for $j = 1$--28, secondary for $j = 29$--282, and tertiary for $j=283$--350. The figure shows the reciprocals, since the calibrated elasticities were very large. Overall, Korea's macro and micro Armington elasticities are both smaller than those of Japan. This means that Korean industries perceive foreign-made inputs to be somewhat different from Korean-made inputs. \begin{figure}[t!] \includegraphics[width=.495\textwidth]{KRiepsilon.eps} \includegraphics[width=.495\textwidth]{KRieta.eps} \caption{Inverse macro Armington elasticity $\varepsilon_j^{-1}$ for Korea (left) and inverse micro Armington elasticity $\eta_j^{-1}$ for Korea against Japan (right). } \label{korea} \end{figure} \subsection{Multifactor CES Elasticities} We estimate multifactor CES elasticities for all production sectors according to the regression equation (\ref{eighteen}). However, in this case, we must measure $\Delta \ln w_i^C$ between current and reference states in advance, using the macro aggregator (\ref{one}) whose parameters are measured via the two-point calibration method presented previously. 
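In current-standardized terms, that calibration collapses to the expressions above. A minimal per-sector sketch (in Python, with hypothetical shares and inflators, not values taken from the linked tables) assembles $(\alpha_j, \varepsilon_j, \beta_j, \eta_j)$:

\begin{verbatim}
import numpy as np

def calibrate_sector(sD0, sD1, qD, qF, sP0, sP1, qP):
    """Current-standardized two-point calibration for one sector:
    (sD0, sD1) are the reference/current domestic shares,
    (sP0, sP1) the partner's shares within imports, and
    (qD, qF, qP) the reference-state inflators (current prices = 1)."""
    dln = lambda s0, s1: np.log(s1) - np.log(s0)
    alpha = sD1
    eps = 1.0 + (dln(sD0, sD1) - dln(1 - sD0, 1 - sD1)) \
                / (np.log(qD) - np.log(qF))
    beta = sP1
    eta = 1.0 - dln(sP0, sP1) / (np.log(qF) - np.log(qP))
    return alpha, eps, beta, eta

# hypothetical sector: the partner's price fell and its share grew
print(calibrate_sector(sD0=0.92, sD1=0.90, qD=0.97, qF=1.01,
                       sP0=0.30, sP1=0.35, qP=1.05))
\end{verbatim}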
The reference and current compound prices evaluated with respect to the price indexes (inflators) used for domestic and foreign commodities are as follows: \begin{align*} &\left( w_{<}^C, w_{>}^C \right) = \left( q^C, 1 \right) & q^C = \left( \alpha \left( q^D \right)^{1-\varepsilon} + (1-\alpha) \left( q^F \right)^{1-\varepsilon} \right)^{\frac{1}{1-\varepsilon}} \end{align*} Using these values, we estimate $\sigma$ via (\ref{eighteen}). Specifically, $1-\sigma$ is estimated by the slope of the following linear regression equation: \begin{align} \Delta \ln s_i = - (1-\sigma) \Delta \ln t + (1-\sigma) \left( \ln q^D - \ln q_i^C \right) + u_i \label{twentytwo} \end{align} where $s_i$ is the cost share of input $i$ for the concerned industrial sector, whose reference and current values are both available in a set of linked input--output tables,\footnote{Notice that an input--output coefficient of input $i$ for output $j$ represents the cost share of factor $i$ for industry $j$.} and $u_i$ is the disturbance term. Further, note that the growth of productivity, i.e., $\Delta \ln t$, is estimable from the intercept of the regression line, although that analysis is beyond the scope of this study. We must note that linked input--output tables do not provide price indexes for the primary input (comprising labor and capital), which we aggregate as a single input in this study. To address this, we use the quality-adjusted price indexes of labor and capital compiled by \citet{jip} for Japan and by \citet{kip} for Korea for the corresponding periods in order to inflate the value added observed in nominal values. In Fig. \ref{japanCES}, we report the estimated multifactor CES elasticities $\sigma$ for all sectors (left) with the corresponding statistical significance (right) for Japan. Fig. \ref{koreaCES} is the equivalent figure for Korea. We further note that the average of the estimated elasticities (ignoring statistical significance) is 1.46 for Japan and 1.53 for Korea, and these values are almost identical to those estimated by using $q_i^D$ instead of $q_i^C$ in the regression equation (\ref{twentytwo}), as reported in \citet{knn}. \begin{figure}[t!] \includegraphics[width=.495\textwidth]{JPElas.eps} \includegraphics[width=.495\textwidth]{JPpval.eps} \caption{Multifactor CES elasticities $\sigma_j$ estimated for Japan. } \label{japanCES} \end{figure} \begin{figure}[t!] \includegraphics[width=.495\textwidth]{KRElas.eps} \includegraphics[width=.495\textwidth]{KRpval.eps} \caption{Multifactor CES elasticities $\sigma_j$ estimated for Korea. } \label{koreaCES} \end{figure} \section{Bilateral Equilibrium} \subsection{Model Integration} In this section, we construct a bilateral multisectoral general equilibrium model that reflects all measured elasticities for the two countries. Let us first focus on one country's general equilibrium state of multisectoral production. We shall calibrate the share parameters at the current state in order to examine various policy shifts (such as tariff elimination) on the basis of the current state. As we have previously arranged that all current prices be the basis of price standardization, we may calibrate the share parameters $\lambda_i$ at the current state, where the productivity is standardized at unity ($t=1$), according to (\ref{sixteen}): \begin{align} &\lambda_i = a_i \label{twentythree} \end{align} Here, $a_i$ is the current state input--output coefficient (i.e., cost share) of input $i$ for the industry (output) concerned, and thus, $\sum_{i=0}^n a_i =1$. 
We may express the system of unit cost functions (\ref{fifteen}) as \begin{align*} w_1^D &= \left( a_{01} ( w_0^C )^{1-\sigma_1} +a_{11} ( w_1^C )^{1-\sigma_1} + \cdots +a_{n1} ( w_n^C )^{1-\sigma_1} \right)^{\frac{1}{1-\sigma_1}} \\ w_2^D &= \left( a_{02} ( w_0^C )^{1-\sigma_2} +a_{12} ( w_1^C )^{1-\sigma_2} + \cdots +a_{n2} ( w_n^C )^{1-\sigma_2} \right)^{\frac{1}{1-\sigma_2}} \\ &~~\vdots \\ w_n^D &= \left( a_{0n} ( w_0^C )^{1-\sigma_n} +a_{1n} ( w_1^C )^{1-\sigma_n} + \cdots +a_{nn} ( w_n^C )^{1-\sigma_n} \right)^{\frac{1}{1-\sigma_n}} \end{align*} or more concisely as \begin{align} \mathbf{w}^D = {H} \left( \mathbf{w}^C, w_0^C \right) \label{twentyfour} \end{align} The model for both countries, according to the multifactor CES aggregator (\ref{fifteen}), the macro aggregator (\ref{one}), and the micro aggregator (\ref{six}), can be expressed as follows, where $J$ and $K$ indicate Japan and Korea, respectively: \begin{align} \mathbf{w}_J^D &= {H}_J \left( \mathbf{w}_J^C \right) &\mathbf{w}_K^D &= {H}_K \left( \mathbf{w}_K^C \right) \label{twofive} \\ \mathbf{w}_J^C &= {U}_J \left( \mathbf{w}_J^D, \mathbf{w}_J^F \right) &\mathbf{w}_K^C &= {U}_K \left( \mathbf{w}_K^D, \mathbf{w}_K^F \right) \\ \mathbf{w}_J^F &= {V}_J \left( \mathbf{w}_J^P \right) &\mathbf{w}_K^F &= {V}_K \left( \mathbf{w}_K^P \right) \label{twoseven} \end{align} Note that we eliminate $w_0^C$ from the multifactor CES aggregator, since it is fixed as a constant, and $\mathbf{w}^R$ from the micro aggregators, as we assume that ROW import prices are invariable (under the small-country assumption). In order to close (integrate) the model, we must introduce a weighted converter that connects the foreign sector classification with the domestic one in terms of 6-digit HS transactions. Specifically, a sector--HS converter $z_{jk}$ that assigns a sectoral commodity $j$ to an HS item $k$ has the following form: \begin{align*} z_{jk} = \frac{x_{jk}}{\sum_{k \in j} x_{jk}} \end{align*} where $x_{jk}$ represents the amount of import of HS item $k$ that belongs to sector $j$. Representing Japan's sector--HS converter by the matrix $\mathbf{z}_J$ and Korea's by $\mathbf{z}_K$, Korea's 350 sectors can be converted into Japan's 395 sectors by $\mathbf{z}_K\mathbf{z}_J^{\intercal}$, and likewise Japan's sectors can be converted into Korea's by $\mathbf{z}_J\mathbf{z}_K^{\intercal}$, where $\intercal$ indicates transposition. Thereupon, we introduce the following identities, according to (\ref{thirteen}): \begin{align} &\mathbf{w}_J^P = \mathbf{w}_K^{D}\mathbf{z}_K\mathbf{z}_J^{\intercal}\left< \boldsymbol{\nu}_J\right>\left<\boldsymbol{\mu}_J \right> &\mathbf{w}_K^P = \mathbf{w}_J^{D}\mathbf{z}_J\mathbf{z}_K^{\intercal}\left< \boldsymbol{\nu}_K\right>\left<\boldsymbol{\mu}_K \right> \label{twoeight} \end{align} where angle brackets indicate diagonalization. Additionally, we know that $\nu \mu=1$ at the current state from (\ref{twenty}). Hence, the equilibrium solution to the bilateral integrated price system (\ref{twofive})--(\ref{twoeight}) at the current state is unity for all prices, i.e., $\mathbf{w}_J^D = \mathbf{w}_J^C= \mathbf{w}_J^F= \mathbf{w}_J^P=\mathbf{1}$ in terms of Japan's sectors and $\mathbf{w}_K^D = \mathbf{w}_K^C= \mathbf{w}_K^F= \mathbf{w}_K^P=\mathbf{1}$ in terms of Korea's sectors. \subsection{Tariff Elimination} We first calculate the equilibrium price when all tariffs levied against the partner country's commodities are eliminated in both countries. 
For this purpose, we specify the tariff rates levied at the current state, using the UNCTAD Trade Analysis Information System \citep{trains} database. Specifically, we used tariff rates evaluated as the ratios of customs duties to imported values, distributed over the linked input--output product classifications. In Fig. \ref{tariff} we display the estimated tariff rates levied against the partner country's commodities for all sectoral commodities, for both countries. \begin{figure}[t!] \includegraphics[width=.495\textwidth]{JPTR.eps} \includegraphics[width=.495\textwidth]{KRTR.eps} \caption{Tariff rates $\tau$ for Japan against Korea (left) and for Korea against Japan (right). } \label{tariff} \end{figure} Note that ``Refined sake'' (59.0\%) and ``Beef cattle'' (22.5\%) were among the higher tariff rate commodities in Japan against Korea, whereas ``Vegetables'' (53.6\%) and ``Fruits'' (37.4\%) were among the higher tariff rate commodities in Korea against Japan. Let us now consider what happens if the tariffs between the two countries were entirely eliminated over the current state. In that event, the ex ante barrier factor $\mu^*$ will equal $\rho$ instead of $\rho (1+\tau)$, in regard to (\ref{thirteen}).\footnote{Here, we are assuming the exchange factor $\nu$ to be constant.} Thus, because $\nu{\mu}=1$ at the current state according to (\ref{twenty}), ${\mu}^*$ must be evaluated as follows: \begin{align*} \nu {\mu}^* = \nu\rho =\frac{\nu{\mu}}{1+\tau} =\frac{1}{1+\tau} \equiv \theta \end{align*} and hence, we must modify (\ref{twoeight}) when evaluating tariff-eliminated bilateral general equilibrium prices, as follows: \begin{align} &\mathbf{w}_J^P = \mathbf{w}_K^{D}\mathbf{z}_K\mathbf{z}_J^{\intercal}\left< \boldsymbol{\theta}_J \right> &\mathbf{w}_K^P = \mathbf{w}_J^{D}\mathbf{z}_J\mathbf{z}_K^{\intercal}\left< \boldsymbol{\theta}_K \right> \label{thirty} \end{align} Hereafter, let us denote by $\boldsymbol{\pi}$ the tariff-eliminated bilateral general equilibrium prices. That is, \begin{align*} \boldsymbol{\pi} = \left( \boldsymbol{\pi}_J^D, \boldsymbol{\pi}_J^C, \boldsymbol{\pi}_J^F, \boldsymbol{\pi}_J^P, \boldsymbol{\pi}_K^D, \boldsymbol{\pi}_K^C, \boldsymbol{\pi}_K^F, \boldsymbol{\pi}_K^P \right) \end{align*} More specifically, $\boldsymbol{\pi}$ is the fixed point of the mapping $G: \mathbb{R}^{4(n_J+n_K)} \to \mathbb{R}^{4(n_J+n_K)}$ which comprises the functions (\ref{twofive}--\ref{twoseven}) and (\ref{thirty}), i.e.,\footnote{The numbers of sectors are $n_J=395$ for Japan and $n_K=350$ for Korea.} \begin{align} \boldsymbol{\pi} = G\left( \boldsymbol{\pi} \right) \label{threetwo} \end{align} Note that $G$ is a concave and monotone increasing mapping, because the CES aggregators $H$, ${U}$ and ${V}$ are all concave functions, and the linear functions (\ref{thirty}) are also concave (although not strictly concave). 
Thus, $G$ becomes a contraction mapping and we may solve (\ref{threetwo}) for the fixed point by recursive means \citep[see e.g.,][]{kennan, kras} from an arbitrary initial guess such as $\mathbf{1}$, i.e., \begin{align*} \boldsymbol{\pi} = \lim_{k \to \infty} G^k \left( \mathbf{1} \right) = G\left(\cdots G\left( G\left(G\left( \mathbf{1} \right) \right) \right) \cdots \right) \end{align*} \subsection{Prospective Analysis} Since we know by Shephard's lemma that the factor input can be obtained by differentiating the unit cost function, the inputs in physical units per physical unit of output for all sectors, i.e., the physical input--output coefficient matrix, can be obtained as the gradient of (\ref{twentyfour}), i.e., \begin{align*} \nabla \mathbf{w}^D = \begin{bmatrix} \frac{\partial {H}_1\left( \mathbf{w}^C, w_0^C\right)}{\partial w_0^C} & \frac{\partial {H}_2\left( \mathbf{w}^C, w_0^C\right)}{\partial w_0^C} & \cdots & \frac{\partial {H}_n\left( \mathbf{w}^C, w_0^C\right)}{\partial w_0^C} \\ \frac{\partial {H}_1\left( \mathbf{w}^C, w_0^C\right)}{\partial w_1^C} & \frac{\partial {H}_2\left( \mathbf{w}^C, w_0^C\right)}{\partial w_1^C} & \cdots & \frac{\partial {H}_n\left( \mathbf{w}^C, w_0^C\right)}{\partial w_1^C} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial {H}_1\left( \mathbf{w}^C, w_0^C\right)}{\partial w_n^C} & \frac{\partial {H}_2\left( \mathbf{w}^C, w_0^C\right)}{\partial w_n^C} & \cdots & \frac{\partial {H}_n\left( \mathbf{w}^C, w_0^C\right)}{\partial w_n^C} \end{bmatrix} = \begin{bmatrix} \nabla_0 \left( \mathbf{w}^C, w_0^C\right) \\ \nabla \left( \mathbf{w}^C, w_0^C\right) \end{bmatrix} \end{align*} where $\nabla_0{H}$ is an $n$-dimensional row vector, while $\nabla{H}$ is an $n\times n$ matrix. For later convenience, let us use the following terms to indicate the monetary input--output coefficient matrices for the current and posterior (with tariff elimination) states. \begin{align*} \textstyle 1 \nabla_0 H \left( \mathbf{1}, 1 \right) \left< \mathbf{1} \right>^{-1} &\equiv \mathbf{{a}}_0 & \textstyle \pi_0^C \nabla_0 H \left( \boldsymbol{\pi}^C, \pi_0^C \right) \textstyle \left< \boldsymbol{\pi}^C \right>^{-1} &\equiv \mathbf{\tilde{a}}_0 \\ \left< \mathbf{1} \right> \nabla H \left( \mathbf{1}, 1 \right) \left< \mathbf{1} \right>^{-1} &\equiv {\mathbf{{A}}} &\textstyle \left< \boldsymbol{\pi}^C \right> \nabla H \left( \boldsymbol{\pi}^C, \pi_0^C \right) \left< \boldsymbol{\pi}^C \right>^{-1} &\equiv \mathbf{\tilde{A}} \end{align*} Note that we set $\pi_0^C =1$ as we take the primary input $i=0$, which is not produced industrially, as the num{\'e}raire good. Also, $\mathbf{a}_0$ and $\mathbf{A}$ are the current state (observed) value-added and input--output coefficients, respectively. Below is the commodity balance in monetary terms: \begin{align} \mathbf{x} = \mathbf{A} \mathbf{x} + \mathbf{y} + \mathbf{e} - \mathbf{m} \label{threefour} \end{align} where $\mathbf{x}$ denotes domestic output, $\mathbf{y}$ domestic final demand, $\mathbf{e}$ exports, and $\mathbf{m}$ imports, all column vectors in monetary terms, while $\mathbf{A}\mathbf{x}$ represents the intermediate demand. 
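Before turning to quantities, note that solving the price system (\ref{threetwo}) is computationally straightforward. The sketch below (in Python) applies the recursion to a toy one-country system with two industries, a fixed primary input price and fixed import prices under the small-country assumption; it merely stands in for the estimated bilateral model, whose structure is analogous.

\begin{verbatim}
import numpy as np

a0  = np.array([0.4, 0.3])            # primary-input cost shares
A   = np.array([[0.3, 0.5],           # a_{ij}: input i for industry j
                [0.3, 0.2]])          # (columns sum to 1 together with a0)
sig = np.array([1.4, 1.6])            # multifactor CES elasticities
alp = np.array([0.9, 0.8])            # macro share parameters
eps = np.array([6.0, 8.0])            # macro Armington elasticities
wF  = np.array([0.95, 0.97])          # posterior import prices (tariff cut)

def G(wD):
    # macro aggregator: compound input prices from domestic/foreign prices
    wC = (alp * wD**(1 - eps) + (1 - alp) * wF**(1 - eps))**(1 / (1 - eps))
    # unit cost functions, with the primary input (numeraire) price at 1
    return np.array([(a0[j] + A[:, j] @ wC**(1 - sig[j]))**(1 / (1 - sig[j]))
                     for j in range(2)])

w = np.ones(2)                        # start from the current state
for _ in range(200):
    w_new = G(w)
    if np.max(np.abs(w_new - w)) < 1e-12:
        break
    w = w_new
print(w)                              # posterior equilibrium prices pi^D
\end{verbatim}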
Recall that we have obtained the current state foreign share of a commodity, $s_>^F=1-\alpha$, which relates the amount of import $m$ (i.e., an element of $\mathbf{m}$ whose index is omitted) to the total domestic demand, i.e., \begin{align*} {m} = (1-\alpha) \left( y + \sum_{j=1}^n a_j x_j \right) \end{align*} For further convenience, let us define $s = s_>^F=1-\alpha$ and endogenize imports with respect to the total domestic demand as follows: \begin{align*} \mathbf{m} = \left<\mathbf{s}\right> [\mathbf{A}\mathbf{x} + \mathbf{y}] \end{align*} Further, we may recall that the import from the partner country $\mathbf{m}^P$ can be replicated by the current share of the partner country's commodity $s_>^P$, which we hereafter denote $s^P$ for convenience, as follows: \begin{align*} \textstyle \mathbf{m}^P = \left< \mathbf{s}^P \right> \mathbf{m}=\left< \mathbf{s}^P \right>\left< \mathbf{s} \right> \left[ \mathbf{Ax} + \mathbf{y} \right] \end{align*} Displayed below is the commodity balance of the posterior state: \begin{align} \tilde{\mathbf{x}}= \tilde{\mathbf{A}} \tilde{\mathbf{x}} + \tilde{\mathbf{y}} + \left( \mathbf{e}^W + \tilde{\mathbf{e}}^P \right) - \left( \tilde{\mathbf{m}}^W + \tilde{\mathbf{m}}^P \right) \label{epcomb} \end{align} The posterior state values are distinguished by tildes. We assume that imports and exports are subject to change due to tariff elimination, except for the exports to the ROW. Notice that imports from the partner and the ROW are assumed to be proportional to the total domestic demands in the following manner: \begin{align} &\textstyle \tilde{\mathbf{m}}^P = \left< \tilde{\mathbf{s}}^P \right> \left< \tilde{\mathbf{s}} \right> \left[ \tilde{\mathbf{A}} \tilde{\mathbf{x}} + \tilde{\mathbf{y}} \right]=\tilde{\mathbf{e}}^{P\prime} &\textstyle \tilde{\mathbf{m}}^W = \left[ \mathbf{I} - \left< {\mathbf{s}}^P \right> \right] \left< {\mathbf{s}} \right> \left[ \tilde{\mathbf{A}} \tilde{\mathbf{x}} + \tilde{\mathbf{y}} \right] \label{mm} \end{align} As indicated above, the export to the partner country is determined by the partner country's import from its own partner, i.e., the home country.\footnote{Here, a prime is used to indicate the partner country's export to its partner country.} The import coefficients are determined by (\ref{two}) and (\ref{nine}) as follows: \begin{align*} &\tilde{s}=(1-\alpha) \left( \frac{\pi^F}{\pi^C} \right)^{1-\varepsilon} &\tilde{s}^P=\beta \left( \frac{\pi^P}{\pi^F} \right)^{1-\eta} \end{align*} The posterior value added (external inputs) total can be evaluated by the import-endogenized model in regard to the posterior commodity balance equation (\ref{epcomb}):\footnote{This model is otherwise called the Chenery--Moses type or the competitive import model.} \begin{align} \tilde{\mathbf{a}}_0 \tilde{\mathbf{x}} = \tilde{\mathbf{a}}_0 \left[ \mathbf{I}-\left[ \mathbf{I} - \left< {\mathbf{s}}^* \right> \right] \tilde{\mathbf{A}} \right]^{-1} \left[ \left[ \mathbf{I} - \left< {\mathbf{s}}^* \right> \right] \tilde{\mathbf{y}} + \mathbf{e}^W +\tilde{\mathbf{e}}^P \right] \label{endo} \end{align} where the import coefficient $\mathbf{s}^*$ is specified as follows, according to (\ref{mm}): \begin{align*} \textstyle \left< {\mathbf{s}}^* \right> =\left< \tilde{\mathbf{s}}^P \right> \left< \tilde{\mathbf{s}} \right> + \left[ \mathbf{I} - \left< {\mathbf{s}}^P \right> \right] \left< {\mathbf{s}} \right> \end{align*} We assume that an economy maximizes its final demand given the external inputs total; to this end, the compensation of increased exports to 
the partner country can be spent on whatever commodity is demanded. We incorporate such external inputs into the domestic production in such a way that the external inputs (value added) total is augmented.\footnote{It may be more natural to incorporate the export compensation into imports; but this option was not adopted on the ground that imports are endogenized with respect to domestic demand alone, as specified in (\ref{mm}).} In particular, we seek the scalar $\delta$ that maximizes the total ex ante value of the current-proportioned final demand, i.e., $\tilde{\mathbf{y}}=\left< \boldsymbol{\pi}^{D}\right> \mathbf{y} \delta$, given the ex ante total value added (\ref{endo}), which is limited to the sum of the locally existing primary factor $\ell$ $(= \mathbf{a}_0 \mathbf{x})$ and the compensation of exports to the partner country, i.e., \begin{align} \textstyle \max_{\delta} \, \mathbf{1} \tilde{\mathbf{y}}=\mathbf{1} \left< \boldsymbol{\pi}^D \right> \mathbf{y} \delta \text{~~~s.t.~~~~~} \tilde{\mathbf{a}}_0 \tilde{\mathbf{x}} \leq \ell + \mathbf{1} \left( \tilde{\mathbf{e}}^P -\mathbf{e}^P \right) \label{max} \end{align} Note that the solution of (\ref{max}) determines the posterior total domestic demands and thus the imports from the partner country, which, in turn, determine the compensation of exports to the partner's partner country via (\ref{mm}), which must enter the constraint of the partner country's problem. In other words, (\ref{max}) must be solved recursively for both countries, under the condition given by the partner country. \begin{figure}[tp!] \includegraphics[width=0.495\textwidth]{JPdy.eps} \includegraphics[width=0.495\textwidth]{JPdell.eps} \caption{Maximized increment of current-proportioned final demand $\tilde{\mathbf{y}} -\mathbf{y}$ (left) and the corresponding redistribution of the external inputs $\tilde{\mathbf{a}}_0 \tilde{\mathbf{x}} -{\mathbf{a}}_0 \mathbf{x}$ (right) for Japan.} \label{jp} \end{figure} \begin{figure}[t!] \includegraphics[width=0.495\textwidth]{KRdy.eps} \includegraphics[width=0.495\textwidth]{KRdell.eps} \caption{Maximized increment of current-proportioned final demand $\tilde{\mathbf{y}} -\mathbf{y}$ (left) and the corresponding redistribution of the external inputs $\tilde{\mathbf{a}}_0 \tilde{\mathbf{x}} -{\mathbf{a}}_0 \mathbf{x}$ (right) for Korea.} \label{kr} \end{figure} \begin{table}[tp!] \caption{Prospective analysis of tariff elimination between Japan and Korea.} \label{aaa} \begin{tabular}{rrrrr} \toprule & \multicolumn{2}{c}{Japan} & \multicolumn{2}{c}{Korea} \\ \cmidrule(l{.75em}r{.75em}){2-3}\cmidrule(l{.75em}r{.75em}){4-5} & BJPY & (BKRW) & BKRW & (BJPY) \\ \midrule Gross Domestic Product (GDP) & 505,269 & & 851,982 & \\ $\Delta$ Final Demand $\Delta y$ & 853 & 7,924 & 6,309 & 679 \\ $\Delta$ Import from Partner $\Delta m^P$ & 803 & 7,459 & 3,338 & 359 \\ $\Delta$ Export to Partner $\Delta e^{P}$ & 359 & 3,338 & 7,459 & 803 \\ \bottomrule \end{tabular} \end{table} Figs. 
\ref{jp} and \ref{kr} illustrate the increments of maximized current-proportioned final demand, i.e., $\tilde{\mathbf{y}} -\mathbf{y}$, and the corresponding redistribution of the external inputs, i.e., $\tilde{\mathbf{a}}_0 \tilde{\mathbf{x}} -{\mathbf{a}}_0 \mathbf{x}$, for Japan and Korea, respectively, under the tariff elimination between the two countries.\footnote{As regards (\ref{max}), the increment total of external inputs must be equal to the increment total of export compensation from the partner country, i.e., $\mathbf{1} (\tilde{\mathbf{a}}_0 \tilde{\mathbf{x}} -{\mathbf{a}}_0 \mathbf{x}) = \mathbf{1} (\tilde{\mathbf{e}}^P - \mathbf{e}^P )$.} Notice that BJPY stands for billion Japanese yen and BKRW for billion Korean won. The total effects are summarized in Table \ref{aaa}. The net benefit (in terms of gained final demand $\Delta y$) of tariff elimination is about 0.17\% of the current GDP (853 BJPY) for Japan, whereas it is about 0.74\% of the current GDP (6,309 BKRW) for Korea. As regards the redistribution of the external inputs, the current-proportioned final demand maximization suggests that, for Japan, sectors such as $j=302$ (House rent), $j=25$ (Fisheries), $j=352$ (Medical service (medical corporations, etc.)) and $j=329$ (Information services) should be reinforced, while sectors such as $j=65$ (Other liquors), $j=75$ (Woolen fabrics, hemp fabrics and other fabrics), $j=145$ (Miscellaneous leather products) and $j=17$ (Hogs) should be curtailed. On the other hand, the preferable policy for Korea is to reinforce sectors such as $j=71$ (Other liquors), $j=283$ (Wholesale and Retail trade), $j=19$ (Pigs) and $j=18$ (Beef cattle), and to curtail sectors such as $j=53$ (Raw sugar), $j=26$ (Fishing), $j=63$ (Canned or cured fruits and vegetables) and $j=17$ (Motor vehicle engines, chassis, bodies and parts). Table \ref{aaa} also displays the import change from the partner country $\Delta m^P$ defined below, which by definition equals the export change of the partner country to its partner country $\Delta e^{P\prime}$, or more specifically, \begin{align*} \Delta m^P_J &= \mathbf{1} \left( \tilde{\mathbf{m}}^P_J - {\mathbf{m}}^P_J \right) = \mathbf{1} \Delta{\mathbf{m}}^{P}_J =\Delta e^{P}_K \\ \Delta m^{P}_K &= \mathbf{1} \left( \tilde{\mathbf{m}}^{P}_K - {\mathbf{m}}^{P}_K \right) = \mathbf{1} \Delta{\mathbf{m}}^{P}_K =\Delta e^{P}_J \end{align*} Thus, we may look into the \textit{net export} $\Delta f$ to the partner country as follows: \begin{align*} &\Delta e_J^P - \Delta m_J^P = \Delta m_K^P - \Delta e_K^P = \Delta e_J^P - \Delta e_K^P= \Delta m_K^P - \Delta m_J^P \equiv \Delta f_{JK} \\ &\Delta e_K^P - \Delta m_K^P = \Delta m_J^P - \Delta e_J^P= \Delta e_K^P - \Delta e_J^P= \Delta m_J^P - \Delta m_K^P \equiv \Delta f_{KJ} \end{align*} where naturally $\Delta f_{JK}+\Delta f_{KJ}=0$. 
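For a single country, with the partner's posterior exports $\tilde{\mathbf{e}}^P$ held fixed, problem (\ref{max}) is linear in $\delta$ and binds at the constraint, so the inner step has a closed form; the bilateral solution is then obtained by alternating this step between the two countries until the cross-border quantities settle. Below is a sketch (in Python, with hypothetical two-sector arrays in place of the estimated coefficients) of that inner step.

\begin{verbatim}
import numpy as np

def solve_delta(a0t, At, s_star, piD, y, eW, ePt, eP, ell):
    """Largest delta satisfying  a0t xt <= ell + 1'(ePt - eP),
    where xt follows the import-endogenized balance and the
    posterior final demand yt = <piD> y delta is linear in delta."""
    n = len(y)
    M = np.linalg.inv(np.eye(n) - (np.eye(n) - np.diag(s_star)) @ At)
    c1 = a0t @ M @ ((1.0 - s_star) * (piD * y))   # slope in delta
    c2 = a0t @ M @ (eW + ePt)                     # delta-independent part
    return (ell + np.sum(ePt - eP) - c2) / c1

a0t    = np.array([0.45, 0.40])       # posterior value-added coefficients
At     = np.array([[0.25, 0.30],
                   [0.30, 0.30]])     # posterior input-output coefficients
s_star = np.array([0.10, 0.15])       # combined import coefficients
piD    = np.array([0.99, 0.98])       # posterior domestic prices
y      = np.array([50.0, 40.0])       # current final demand
eW     = np.array([5.0, 4.0])         # exports to the ROW (unchanged)
eP     = np.array([2.0, 1.5])         # current exports to the partner
ePt    = np.array([2.6, 1.9])         # posterior exports to the partner
ell    = 80.0                         # current primary-factor total
print(solve_delta(a0t, At, s_star, piD, y, eW, ePt, eP, ell))
\end{verbatim}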
\begin{table}[tp] \caption{Notable net export (top 20) from Japan to Korea $\Delta f_{JK}$.} \label{jptab} \footnotesize \begin{tabular}{rrr} \toprule sector/commodity & BJPY & BKRW \\ \midrule Fisheries & 48 & 446 \\ Sugar & 38 & 350 \\ Motor vehicle parts and accessories & 34 & 321 \\ Salted, dried or smoked seafood & 32 & 298 \\ Bottled or canned vegetables and fruits & 29 & 272 \\ Preserved agricultural foodstuffs (other than bottled or canned) & 28 & 258 \\ Other wearing apparel and clothing accessories & 18 & 164 \\ Machinery and equipment for construction and mining & 16 & 148 \\ Knitted apparel & 15 & 136 \\ Toys and games & 12 & 110 \\ Frozen fish and shellfish & 11 & 103 \\ Electric audio equipment & 11 & 99 \\ Activities not elsewhere classified & 10 & 96 \\ Rotating electrical equipment & 8 & 76 \\ Hot rolled steel & 8 & 76 \\ Synthetic fibers & 7 & 65 \\ Internal combustion engines for motor vehicles and parts & 6 & 60 \\ Machinery for service industry & 6 & 54 \\ Miscellaneous ceramic, stone and clay products & 5 & 48 \\ Trucks, buses and other cars & 5 & 44 \\ \bottomrule \end{tabular} \end{table} \begin{table}[tp!] \caption{Notable net export (top 20) from Korea to Japan $\Delta f_{KJ}$.} \label{krtab} \footnotesize \begin{tabular}{rrr} \toprule sector/commodity & BKRW &BJPY \\ \midrule Slaughtering and meat processing & 1,966 & 212 \\ Woolen fabrics, hemp fabrics and other fabrics & 1,360 & 146 \\ Other liquors & 1,350 & 145 \\ Refined sake & 647 & 70 \\ Miscellaneous leather products & 567 & 61 \\ Liquid crystal element & 434 & 47 \\ Woven fabric apparel & 322 & 35 \\ Analytical instruments, testing machine, measuring instruments & 206 & 22 \\ Pumps and compressors & 136 & 15 \\ Rolled and drawn aluminum & 134 & 14 \\ Petrochemical aromatic products (except synthetic resin) & 131 & 14 \\ Coal mining, crude petroleum and natural gas & 109 & 12 \\ Petrochemical basic products & 106 & 11 \\ Sheet glass and safety glass & 103 & 11 \\ Metal molds & 84 & 9 \\ Leather footwear & 54 & 6 \\ Integrated circuits & 53 & 6 \\ Other non-ferrous metals & 47 & 5 \\ Fruits & 46 & 5 \\ Petroleum refinery products (incl. greases) & 44 & 5 \\ \bottomrule \end{tabular} \end{table} In Table \ref{jptab} we display the positive entries of the net export from Japan to Korea $\Delta f_{JK}$.\footnote{The negative entries are the net import from Japan to Korea, or, equivalently, the net export from Korea to Japan.} Likewise, Table \ref{krtab} displays the positive entries of the net export from Korea to Japan. We may notice from these tables that a substantial amount of meat (i.e., Slaughtering and meat processing) will be exported from Korea to Japan, whereas Japan will export fish (i.e., Fisheries; Frozen fish and shellfish) to Korea, under tariff elimination. Other notable features are that Korea will net-export petrochemical products (e.g., Petrochemical aromatic products (except synthetic resin), Coal mining, crude petroleum and natural gas, Petrochemical basic products, Petroleum refinery products (incl. greases), etc.) to Japan, whereas Japan will net-export mechanical and assembled products (e.g., Motor vehicle parts and accessories, Machinery and equipment for construction and mining, Electric audio equipment, Rotating electrical equipment, etc.) to Korea. 
\section{Concluding Remarks} The highlight of this study may perhaps be the discovery of a way to calibrate the parameters of a two-input CES aggregator so that the aggregator completely replicates the two observed, temporally distant shares of inputs in both monetary and physical terms. The elasticity parameters, i.e., the Armington elasticities, that we obtained by way of this approach (i.e., two-point calibration) were found to be much larger than those observed in the previous studies based upon time series, implying almost complete substitutability between foreign and domestic commodities, which should not be too surprising. We then used the Armington aggregator functions to uncover the composite price index for each commodity, which is key to modeling production activities comprising many factor inputs, including imported commodities, for each industrial sector. As we are concerned with multisectoral production functions of multiple (more than two) factor inputs, we estimated the multifactor CES production elasticities by linearly regressing the growth of commodity-wise cost shares against the relative growths of factor prices. We used published statistics, namely, linked input--output tables and the UN Comtrade database, to measure all the concerned elasticities (i.e., multifactor CES production, and micro and macro Armington elasticities) for both Japan and Korea. The two multisectoral general equilibrium models for Japan and Korea were integrated via bilateral trading models that reflect the trade barriers between the two countries. Since the models presume constant returns in all activities, and thus interact entirely in terms of unit costs and prices, we were able to simulate the bilateral general equilibrium consequences of eliminating tariffs between the two countries without explicit consideration of physical quantities. The consequent social benefits and costs of tariff elimination were estimated by the amount of linear final demand that can potentially be enhanced under the projected structure for a given total primary input. The result implies positive effects (in terms of total net benefit) for both countries, while considerable structural change is expected to be inevitable. \begin{flushleft} {\bf Conflict of Interest:} The authors declare that they have no conflict of interest. \end{flushleft} \bibliographystyle{spbasic_x} \raggedright
\section{Introduction} Image spectrometers collect vast amounts of data which can be used for a variety of tasks. Possible applications include geological research, terrain analysis, material identification, military surveillance and many others. Fine spectral resolution can be a desired feature when it comes to detecting fingerprints in the spectral response of a scene. Such applications are enabled by the richness of data captured by multispectral and hyperspectral sensors. The problem of handling such a wealth of information naturally arises and calls for the use of compression methods. Algorithms to compress hyperspectral and multispectral images have been studied for a long time and are still an active subject of research. Onboard compression enables spacecraft to save transmission time, allowing more images to be sent to the ground stations. The design of compression algorithms for onboard applications must carefully meet the limited resources, in terms of computational power and memory, available on the spacecraft. Two main compression techniques are available in this scenario: transform coding and predictive coding. Transform coding relies on computing a linear transform of the data to achieve energy compaction and hence transmit only a few carefully chosen transform coefficients. One of the most popular approaches is JPEG2000 \cite{taubmanjpeg2000} and its multidimensional extension \cite{jpeg2000ext}. A wavelet-based 2D lossless and lossy compression algorithm has also been standardized for space applications \cite{ccsds122}. Spectral transforms to eliminate the inter-band redundancy have been the subject of intense research. There exists an optimal transform for Gaussian sources, \emph{i.e.}, the Karhunen-Lo\`{e}ve transform (KLT), but its complexity does not match the computational resources typically available for onboard compression. Hence, low-complexity approximations to the KLT have been derived, such as the Pairwise Orthogonal Transform (POT) \cite{pot}, the fast approximate KLT (AKLT) \cite{aklt} and the AKLT$_2$ \cite{aklt2}. Transform coding supports both lossless and lossy compression and allows the rate to be controlled accurately and simply, thanks to the simple relation between the rate and the quantized transform coefficients \cite{taubmanjpeg2000} \cite{rhodomain}. On the other hand, per-pixel quality control, as in near-lossless compression, is hard to obtain. A near-lossless layer can be added to a transform coder, \emph{e.g.}, as in \cite{carvajalmagli}, but this also requires implementing a decoder onboard. Transform coding also typically suffers from the problem of dynamic range expansion, which is a direct consequence of energy compaction. While it is difficult to generalize due to the availability of many different transforms and predictors, a transform generally uses many (past and future) pixels of the image to represent a given pixel, while a predictor generally employs a few pixels in a causal neighborhood, thus making it less prone to performance loss when the prediction is reset over different image areas, \emph{e.g.}, in order to achieve error resilience. Predictive coding uses a mathematical model to predict pixel values and encode only the prediction error. 
Adaptive linear prediction is often used \cite{calibrationartifacts, abrardoDSC, rizzolow, acap, asap, lastri} (\emph{e.g.}, the predictor considered in Sec.\ref{sec:123extension} relies on the LMS filter \cite{lms}, with the sign algorithm \cite{chosign} for weight update), but other methods have been devised as well, \emph{e.g.}, based on edge detection \cite{edgeprediction} or vector quantization \cite{vectorquant}. In lossless compression, the prediction residuals are written in the compressed file after entropy coding. Lossy compression instead quantizes them before entropy coding. The quantization step size determines the amount of compression and hence the information loss with respect to the original image. Near-lossless compression is readily implemented by setting a maximum quantization step size, so that the quantization error never exceeds half of it. On the other hand, rate control in a predictive coder is challenging because: i) no simple mathematical relationship exists between the rate and the quantized prediction residuals, and ii) the quality of the prediction, hence the magnitude of the residuals and ultimately the rate, depends on how coarse the quantization is; as an example, the analysis of quantizer error propagation in the feedback loop is considered in \cite{aiazzipyramid} for the case of Laplacian pyramids. These aspects are further discussed in Sec. \ref{sec:bkg}. In this paper we propose an innovative design of a rate controller for a predictive encoder. We show that the proposed method can achieve accurate control, while having complexity suitable for onboard implementation. In particular, the algorithm is designed to work in line-based acquisition mode, as this is the most typical setup of spectral imaging systems. We first describe the proposed algorithm in general terms, as it can be applied to any predictive coder. Next, we focus our attention on using it with the LMS predictor used in the CCSDS-123 standard for lossless compression \cite{ccsds123}, which is an improved version of the Fast Lossless algorithm \cite{FL}. The resulting system can be seen as an extension of the standard featuring lossless, near-lossless and rate-controlled lossy compression. The rate controller provides lossy reconstructions with progressively higher quality, up to lossless encoding, as the target rate approaches that of lossless compression. Finally, the controller can also work in a hybrid rate-controlled and near-lossless mode by specifying the maximum quantization step size that the controller is allowed to use. 
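The mechanics behind the near-lossless guarantee are worth making concrete: the residual is quantized inside the prediction loop, so the encoder and the decoder stay synchronized, and the reconstruction error of an integer signal never exceeds $(Q-1)/2$ for an odd step size $Q$. Below is a minimal sketch (in Python), with a trivial previous-sample predictor standing in for the adaptive LMS predictor of an actual encoder.

\begin{verbatim}
import numpy as np

def encode_decode(x, Q):
    """Predictive coding of a 1-D integer signal with in-loop uniform
    quantization of step size Q; returns the quantized residual
    indices (entropy-coded in a real system) and the reconstruction."""
    idx = np.zeros(len(x), dtype=np.int64)
    rec = np.zeros(len(x), dtype=np.int64)
    prev = 0
    for n in range(len(x)):
        r = int(x[n]) - prev               # residual w.r.t. the prediction
        q = int(np.round(r / Q))           # uniform scalar quantizer
        idx[n] = q
        prev = prev + q * Q                # decoder-side reconstruction
        rec[n] = prev
    return idx, rec

x = np.array([100, 104, 103, 110, 130, 131])
idx, rec = encode_decode(x, Q=5)
print(np.max(np.abs(x - rec)))             # never exceeds (Q-1)/2 = 2
\end{verbatim}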
The paper is organized as follows: in section \ref{sec:bkg} we review the literature on rate control methods; in section \ref{sec:basics} we outline the main idea and the basic steps involved in the algorithm; in section \ref{sec:blocktypes} we describe the specific steps of the algorithm; in section \ref{sec:modeB} we introduce a second version of the algorithm, achieving a more accurate control by introducing a slice-by-slice feedback mechanism exploiting the measured rate of the previously encoded slice; section \ref{hybrid} shows how the proposed rate controller can actually achieve control of both the rate and the maximum distortion, enabling a hybrid near-lossless rate control mode; section \ref{sec:123extension} proposes an extension of the CCSDS-123 standard to near-lossless and rate-controlled lossy compression; finally, in section \ref{results} we show the performance of the rate-control algorithm on some test images and compare the proposed extension of CCSDS-123 with state-of-the-art transform coding techniques. \section{Background} \label{sec:bkg} Rate control is a relatively well studied problem in the field of image and video coding, where it fits the framework of rate-distortion (RD) theory. The main task of rate-distortion optimization methods is to minimize the distortion of an encoded source (\emph{e.g.}, an image or a video sequence) subject to a constraint on the rate. This problem of carefully allocating the available resources is typically tackled by means of two techniques: Lagrangian optimization and dynamic programming. A more comprehensive review of such methods is given in \cite{OrtegaSPM}. The classical method of Lagrangian optimization was introduced by Everett \cite{everett}, and relies on defining a cost function using a Lagrange multiplier to trade off rate and distortion. In particular, assume we have a budget-constrained allocation problem, such as our rate control problem: \begin{align} \label{constr} \underset{x(i)}{\mathsf{minimize~}} \sum_{i=1}^N D_{i,x(i)} \mathsf{~~subject~to~~}\sum_{i=1}^N R_{i,x(i)} \leq R_{\mathsf{target}} \end{align} where $x(i)$ is the coding option for unit $i$. The Lagrangian formulation consists of the unconstrained minimization of $J_i = D_i + \lambda R_i$, which can be shown \cite{everett}\cite{Shoham_EfficientBitAllocation} to yield the same solution as \eqref{constr} for a suitable $\lambda = \lambda^*$. Furthermore, if the coding units are independent, the minimization can be carried out independently for each unit $i$. One of the main issues of this method is finding the appropriate value of $\lambda$ that minimizes the distortion while satisfying the rate constraint. By noticing that $\lambda=\lambda(R)$ is a monotonic function of the rate, an iterative search strategy, such as dichotomic (bisection) search, can be used to find the correct value of $\lambda$. It is often the case that the coding units exhibit some form of dependency on each other, so that the coding decisions taken for one unit may have some impact on the other units. This is notably true for prediction-based systems \cite{shapiro_optimal}, where quantization of residuals introduces noise in the prediction loop and may degrade the quality of future predictions. 
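For independent units, the Lagrangian machinery is compact enough to sketch (in Python, with synthetic per-unit operating points, not data from an actual coder): each unit independently picks the option minimizing $D + \lambda R$, and $\lambda$ is bisected, exploiting the monotonicity of the resulting rate, until the budget is met.

\begin{verbatim}
import numpy as np

def allocate(D, R, R_target, iters=60):
    """Independent-unit Lagrangian allocation: D and R have shape
    (units, options); returns the chosen option index per unit."""
    lo, hi = 0.0, 1e6
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        x = np.argmin(D + lam * R, axis=1)          # minimize J = D + lam*R
        if R[np.arange(R.shape[0]), x].sum() > R_target:
            lo = lam                                # over budget: raise lambda
        else:
            hi = lam                                # within budget: try lower
    return np.argmin(D + hi * R, axis=1)            # feasible endpoint

rng = np.random.default_rng(0)
# synthetic convex trade-offs: rate decreases, distortion increases
R = np.sort(rng.uniform(0.1, 4.0, (8, 6)), axis=1)[:, ::-1]
D = np.sort(rng.uniform(0.0, 10.0, (8, 6)), axis=1)
x = allocate(D, R, R_target=10.0)
print(x, R[np.arange(8), x].sum())
\end{verbatim}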
The interdependency of the coding choices makes this problem more difficult to tackle, and classical solutions, based on dynamic programming, typically model the dependencies using a tree or a trellis \cite{ortega_trellis} \cite{ortega_bitallocation} and find the optimal path by using Dijkstra's shortest path algorithm \cite{shortestpath} or the Viterbi algorithm \cite{viterbi}. The rate constraint can be handled by a suitable pruning of the tree. In this paper we study the problem of rate control in the context of predictive coding on board spacecraft, which poses significant constraints on the complexity of the algorithms that can be used. The previously cited methods all exhibit a complexity that is unsuitable for the scenario we are considering, or are largely inefficient (\emph{e.g.}, the standard Lagrangian approach with i.i.d. assumptions only). Onboard rate control is performed very easily in the case of systems adopting transform coding \cite{satellite_data_compression}, \emph{e.g.}, wavelet-based methods. This is due to the possibility of using an i.i.d. assumption among different coding units, which allows establishing simple relationships between the rate and the quantized transform coefficients \cite{taubmanjpeg2000} \cite{rhodomain}. However, such models do not hold in the case of predictive compression, making our task harder. Our approach uses models and independence assumptions to simplify the problem, but we are forced to introduce corrections to the output of the models due to the inevitable dependencies introduced by the propagation of errors in the feedback loop. While the proposed procedure does not generally yield the optimal solution, it is a practical algorithm that can be used in low-complexity scenarios, such as onboard compression; moreover, it indeed achieves almost optimal performance, as will be shown in Sec.\ref{RDnumericalperformance}. \section{Rate Control Algorithm} \label{sec:basics} This section outlines the framework and the basic operations performed by the proposed rate control algorithm. The main idea behind the algorithm is to adopt a model to predict the rate and the distortion of the quantized prediction residuals. In order to achieve a flexible scheme allowing an effective control of the rate for various kinds of images, ranging from hyperspectral images (many bands, but typically small spatial resolution) to multispectral images (large spatial resolution, but few bands), the algorithm partitions the image into blocks of size $BS_x \times BS_y$, where $BS_x$ and $BS_y$ are tunable parameters. Partitioning into blocks allows us to deal with the non-stationary behaviour of the image. In fact, the prediction mechanism is indeed able to eliminate slowly varying features, but sudden variations in the image content (\emph{e.g.}, discontinuities) are hardly predicted by the encoder, and consequently imply non-stationary prediction residuals. The task of the rate control algorithm is then to assign a quantization step to each block of residuals in a given spectral channel, according to the specified target rate. At the same time, this assignment affects the overall distortion introduced by the encoder and, hence, it should be chosen to keep the distortion as low as possible. In this scenario, computational complexity plays a major role in many ways. First of all, typical memory capabilities of systems for onboard image compression allow the storage of a limited number of lines of the image with all their spectral channels. 
To match this limitation, the rate control algorithm operates one slice at a time, where we denote as \emph{slice} a structure composed of one row of blocks with all their spectral channels. Moreover, as will be explained in section \ref{sec:models}, the algorithm does not even need to store all the lines in the slice but just a few of them, thus requiring very little memory. The main steps involved in the algorithm are: \begin{enumerate} \item the estimation of the variance of the unquantized prediction residuals by running the lossless predictor for a small number of lines (Sec. \ref{sec:models}); \item the $l_1$ projection algorithm to get an initial allocation of the quantization steps (Sec. \ref{sec:l1}); \item the Selective Diet algorithm for rate and distortion refinement (Sec. \ref{sec:SelectiveDiet}). \end{enumerate} \vspace*{-0.1cm} \subsection{Rate and distortion models} \label{sec:models} We now introduce the model used to describe the prediction residuals in each block. This model allows us to obtain closed-form expressions for the rate and the distortion of the quantized residuals in the block. It is commonly observed that accurate predictors tend to yield residuals with a leptokurtic (high-kurtosis) distribution, hence similar to the Laplace probability density function, which we use to model the distribution of prediction residuals: \begin{align} f_r(x) = \frac{\Lambda}{2}e^{-\Lambda\vert x \vert}, \end{align} where $\Lambda$ is related to the variance $\sigma^2$ of the distribution by $\Lambda = \sqrt{\frac{2}{\sigma^2}}$. We assume that the residuals in each block and the blocks themselves are independent of each other. This is a simplifying assumption in two ways. First, the prediction mechanism may fail to remove all the correlation among the residuals. However, this does not pose a significant problem, as we expect that most of the correlation is removed, hence making our independence assumption very close to reality; the same assumption is made in rate allocation for transform coding, where transform coefficients are often assumed to be independent. Second and more importantly, the quantization of the residuals introduces noise that propagates in the prediction loop. This leads to dependencies among the residuals and among blocks. Optimizing the allocation of the quantization step sizes taking these dependencies into account can lead to improvements, as the model becomes more accurate. However, one must resort to dynamic programming methods (\emph{e.g.}, the Viterbi algorithm) that would be far too complex for our scenario. Consequently, we have explored a simplified way of including the effect of quantization noise in our model, \emph{i.e.}, augmenting the variance of the block by an estimate of the noise variance, which corresponds to assuming that the residuals and the quantization noise are independent: \begin{align} \label{variance_estimation} \tilde{\sigma}^2 = \sigma^2 + \frac{Q^2}{12} , \end{align} where $Q$ is the quantization step used in the same block in the previous slice. We do this because the quantization step size of the current slice is not known when we need to use this model, as it is indeed the output of the rate control process. It can be noticed that $\frac{Q^2}{12}$ is the mean square error produced by uniform scalar quantization of step size $Q$ under the high-rate approximation. The rate (expressed in bits-per-pixel) is derived as the entropy of an i.i.d. 
continuous source with Laplace distribution, after quantization by means of a uniform scalar quantizer with step size $Q$: \begin{align} \label{entropy} R = -p_0 \log_2 p_0 - 2\sum_{i=1}^\infty p_i \log_2 p_i ~, \end{align} so we need the probability $p_0$ that the residual is quantized to zero and the probability $p_i$ that it is mapped to the (positive) integer $i$. For the uniform scalar quantizer we can write: \begin{align} \label{p0} p_0 = \int_{-\frac{Q}{2}}^{\frac{Q}{2}} \frac{\Lambda}{2}e^{-\Lambda \vert x \vert} dx = 1-e^{-\Lambda\frac{Q}{2}} \end{align} \begin{align} \label{pi} p_i = \int_{iQ-\frac{Q}{2}}^{iQ+\frac{Q}{2}} \frac{\Lambda}{2}e^{-\Lambda \vert x \vert} dx = \frac{1}{2}\left( e^{-\Lambda \left(iQ-\frac{Q}{2}\right)} - e^{-\Lambda \left(iQ+\frac{Q}{2}\right)} \right) \end{align} Inserting \eqref{p0} and \eqref{pi} into \eqref{entropy}, it is possible to derive \eqref{rate_usq}. \begin{figure*}[!t] \begin{align} \label{rate_usq} R(\Lambda,Q) = &-\left( 1 - e^{-\Lambda \frac{Q}{2}} \right) \log_2 \left( 1-e^{-\Lambda \frac{Q}{2}} \right) - \frac{e^{-\Lambda \frac{Q}{2}}}{\log(2)} \left[\log\left( \frac{1-e^{-\Lambda Q}}{2} \right) +\frac{\Lambda Q}{2} - \frac{\Lambda Q}{\left( 1-e^{-\Lambda Q } \right) } \right] \end{align} \begin{align} \label{dist_usq} D(\Lambda,Q) = \frac{2-\frac{1}{4}e^{-\Lambda \frac{Q}{2}} \left( \Lambda^2 Q^2 + 4\Lambda Q + 8 \right)}{\Lambda^2}&+ \frac{-\Lambda Q \left( \Lambda Q +4 \right) + e^{\Lambda Q} \left[ \Lambda Q \left( \Lambda Q -4 \right) +8 \right] -8}{4\Lambda^2} \frac{e^{-\frac{3}{2}\Lambda Q}}{1-e^{-\Lambda Q}} \end{align} \hrulefill \vspace*{4pt} \end{figure*} We use the mean squared error (MSE) as distortion metric, which can be computed as \begin{align*} D(\Lambda,Q) &= \int_{-\frac{Q}{2}}^{\frac{Q}{2}} x^2 \frac{\Lambda}{2}e^{-\Lambda \vert x \vert} dx \\ &+ 2\sum_{i=1}^\infty \int_{iQ-\frac{Q}{2}}^{iQ+\frac{Q}{2}} \left( x-iQ \right)^2 \frac{\Lambda}{2}e^{-\Lambda \vert x \vert} dx , \end{align*} thus obtaining \eqref{dist_usq}. We can notice that both the rate and the distortion are functions of the variance $\sigma^2$ of the unquantized residuals in the block and of the quantization step size $Q$, whose value is yet unknown. Each block in the slice has its own variance parameter and quantization step size. The variance must be estimated, while obtaining the quantization step size is the ultimate goal of the rate control algorithm. The variance can be estimated by running the predictor \emph{without quantizing the prediction residuals} for a certain number of lines. A small fraction of the total lines in the block is sufficient to get good estimates of the variance of the residuals. In a software implementation, this is one of the main factors affecting the final complexity, because it requires running the predictor essentially twice: a first time on a small subset of the lines, without quantization, to estimate the variances, and then, once the quantization steps have been computed, to perform the actual encoding pass, quantizing the residuals. The rate and distortion models above are used by the algorithms presented in the following subsections to find the right value of $Q$ for each block so as to match the target rate globally while keeping the distortion low.
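For illustration, the following Python sketch implements the model equations \eqref{variance_estimation}, \eqref{rate_usq} and \eqref{dist_usq}; it is a minimal reference implementation useful for checking the formulas, not part of the onboard encoder, and all function names are ours.

\begin{verbatim}
import numpy as np

def laplace_lambda(var, Q_prev=0.0):
    # Lambda of the Laplacian model; the raw variance is inflated
    # by Q^2/12, with Q the step used in the previous slice.
    return np.sqrt(2.0 / (var + Q_prev**2 / 12.0))

def rate_usq(lam, Q):
    # Closed-form rate (bits per sample) of the quantized
    # Laplacian residuals, as in the displayed rate equation.
    b = np.exp(-lam * Q / 2.0)      # e^{-Lambda*Q/2}
    a = np.exp(-lam * Q)            # e^{-Lambda*Q}
    return (-(1.0 - b) * np.log2(1.0 - b)
            - b / np.log(2.0) * (np.log((1.0 - a) / 2.0)
                                 + lam * Q / 2.0
                                 - lam * Q / (1.0 - a)))

def dist_usq(lam, Q):
    # Closed-form MSE of the quantized Laplacian residuals,
    # as in the displayed distortion equation.
    lq = lam * Q
    t1 = (2.0 - 0.25 * np.exp(-lq / 2.0)
          * (lq**2 + 4.0 * lq + 8.0)) / lam**2
    t2 = ((-lq * (lq + 4.0)
           + np.exp(lq) * (lq * (lq - 4.0) + 8.0) - 8.0)
          / (4.0 * lam**2)) * np.exp(-1.5 * lq) / (1.0 - np.exp(-lq))
    return t1 + t2
\end{verbatim}

A quick sanity check is that $p_0 + 2\sum_{i\geq 1} p_i = 1$ and that, as $Q$ grows, \texttt{dist\_usq} approaches the source variance $2/\Lambda^2$.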
\subsection{Projection onto the positive $l_1$ ball} \label{sec:l1} \begin{figure} \centerline{\includegraphics[width=0.8\columnwidth]{fig1.eps}} \vspace*{-0.3cm} \caption{The rate point corresponding to the lossless allocation of Q's is projected onto the simplex defined by the rate constraint} \label{simplex} \end{figure} The goal of the algorithm described in the following is to provide an initial solution to the allocation problem. This solution, albeit inaccurate, is a good starting point for the following algorithm (Selective Diet, explained in Sec.~\ref{sec:SelectiveDiet}). Suppose that the encoder is given a target rate for the encoded image equal to $T$ bits-per-pixel (bpp), and suppose that there are $N_B$ blocks in the current slice ($N_B$ is the number of blocks in one band times the number of bands). We define the quantity $R_{\mathsf{target}} = T \cdot N_B$ as the product of the target rate in bpp and the number of blocks in the slice (note that this quantity does not represent the actual number of bits at our disposal, since we are multiplying by the number of blocks and not the number of pixels). Ideally we would like to satisfy the rate constraint exactly, hence have \begin{align} \label{rate_simplex} \sum_{i=1}^{N_B} R(\Lambda_i,Q_i) = R_{\mathsf{target}} \end{align} where $Q_i$ is the quantization step size selected for the $i$-th block. Notice that, since the rate of each block is a positive quantity, \eqref{rate_simplex} defines a simplex in $N_B$ dimensions. We can consider an initial solution having $Q_i=1 ~~ \forall i$ (lossless encoding), with corresponding rates $R(\Lambda_i,1)$. Geometrically (see Fig. \ref{simplex}), we have a vector in an $N_B$-dimensional space whose entries are the rates $R(\Lambda_i,1)$, and we can project it onto the simplex defined by \eqref{rate_simplex}. In other words, we seek to solve the following optimization problem, where we slightly abuse notation by using boldface to indicate $N_B$-dimensional vectors and making the $R$ function operate component-wise: \begin{align} \label{projection} \hat{\mathbf{R}} = \arg\min_{\mathbf{R}} \Vert \mathbf{R} - R(\mathbf{\Lambda},\mathbf{1}) \Vert_2 \mathsf{~~~subject~to~~~} \Vert \mathbf{R} \Vert_1 = R_{\mathsf{target}} \end{align} Problem \eqref{projection} is a continuous problem, whereas quantization step sizes are odd-integer-valued\footnote{Using odd-valued quantization step sizes is known to provide lower distortion for the same maximum error \cite{WuOdd}.}. After solving \eqref{projection} we need to search for the value of $\hat{Q}_i$ such that $R(\Lambda_i,\hat{Q}_i)$ is closest to $\hat{R}_i$. Any search method, such as linear search or binary search, can be used for this purpose. Projection onto a simplex is a special case of projection onto the $l_1$ ball, since the simplex is the positive part of the $l_1$ ball. $l_1$ projection algorithms have been the subject of great interest in recent years due to the surge in research on sparse methods. The field of compressed sensing \cite{donoho2006cs} spawned from the discovery that $l_1$-penalized regressors can reconstruct a sparse signal exactly from a small number of random measurements; hence, many reconstruction algorithms \cite{spgl1} include steps involving projections onto the $l_1$ ball. We refer to the algorithm proposed in \cite{l1projections} to address the specific problem of projection onto the simplex. The algorithm has been shown to have $\mathcal{O}(N_B\log N_B)$ complexity.
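As an illustration, a NumPy sketch of the projection (in the style of \cite{l1projections}) followed by the search for the nearest odd step could read as follows; \texttt{rate\_usq} is the model function sketched in Sec.~\ref{sec:models}, and the helper names and the search bound \texttt{Q\_max} are our own choices.

\begin{verbatim}
def project_to_simplex(v, z):
    # Euclidean projection of v onto {w : w_i >= 0, sum_i w_i = z};
    # O(N log N) because of the sort.
    mu = np.sort(v)[::-1]                 # descending order
    cum = np.cumsum(mu)
    j = np.arange(1, v.size + 1)
    rho = j[mu - (cum - z) / j > 0][-1]   # last index kept positive
    theta = (cum[rho - 1] - z) / rho
    return np.maximum(v - theta, 0.0)

def nearest_odd_Q(lam_i, r_hat, Q_max=255):
    # Linear search for the odd step whose model rate is closest
    # to the projected rate r_hat (binary search would also do).
    Qs = np.arange(1, Q_max + 1, 2)
    return Qs[np.argmin(np.abs(rate_usq(lam_i, Qs) - r_hat))]

# Initial allocation: project the lossless rates, then map back to Q.
# R_hat = project_to_simplex(rate_usq(lam, 1.0), R_target)
# Q_hat = [nearest_odd_Q(l, r) for l, r in zip(lam, R_hat)]
\end{verbatim}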
Being a continuous approximation to an integer-valued problem, the allocation returned by the projection algorithm can only provide a rough approximation to the desired rate. Nevertheless, it is expected to be close to a good solution, hence it is possible to improve it by making local modifications. This is the task performed by the Selective Diet algorithm. \setlength{\textfloatsep}{10pt}{ \begin{algorithm} \caption{Projection algorithm to solve \eqref{projection}} \begin{algorithmic} \State Sort $R(\mathbf{\Lambda},\mathbf{1})$ into $\mathbf{\mu}$ in descending order \State Find $\rho = \mathrm{max} \left\lbrace j : \mu_j - \frac{1}{j}\left( \sum_{i=1}^{j} \mu_i - R_{\mathsf{target}} \right) > 0 \right\rbrace$ \State Define $\theta = \frac{1}{\rho} \left( \sum_{i=1}^{\rho} \mu_i - R_{\mathsf{target}} \right)$ \State Find $\mathbf{w}$ such that $w_i = \mathrm{max} \left\lbrace R(\Lambda_i,1) - \theta ,0 \right\rbrace$ \State Find $\mathbf{\hat{Q}} = R^{-1}(\mathbf{\Lambda},\mathbf{w})$ \end{algorithmic} \end{algorithm}} \vspace*{-0.3cm} \subsection{Selective Diet} \label{sec:SelectiveDiet} Selective Diet tries to solve an integer optimization problem that consists in lowering the distortion of the encoded slice while satisfying the constraint on its final rate. The algorithm is a local search method, similar in flavour to other discrete optimization methods such as hill climbing \cite{russellartificial} or meta-heuristics like tabu search \cite{tabu}. At a high level, the algorithm is primarily concerned with finding a solution that meets the rate specification as closely as possible, while promoting solutions having low distortion. It does so by making local adjustments to the solution provided by the $l_1$ projector, hence the need for a good initialization point. A graphic visualization of a single iteration is shown in Fig. \ref{SDscheme}. In this section, for convenience of explanation, we shall represent the blocks in the current slice as nodes in a chain. It is possible to modify the chain by making adjustments to the nodes, namely by changing the quantization step size assigned to each node. Only local adjustments are allowed: the quantization step of each node can only be increased by 2 or decreased by 2. We shall call \emph{+2 level} an assignment of $Q_i+2$, where $Q_i$ is the current value of the quantization step, called \emph{default level}, and \emph{-2 level} an assignment equal to $Q_i-2$. A chain can be formed by choosing one of those three levels for each and every node. Consistently with this notation, we will call \emph{+2}/\emph{default}/\emph{-2} \emph{chain} a chain made only of nodes in the +2/default/-2 level. The ultimate goal of Selective Diet is creating a chain that meets the rate constraint and has low distortion. Let us now introduce a lemma at the basis of the local adjustments made to the default chain. \begin{lemma} \label{SDlemma} Suppose that the default chain satisfies $\sum_{i=1}^{N_B} R(\Lambda_i,Q_i) = R_\mathsf{target}$; then, if there exists a new chain, different from the default one, satisfying $\sum_{j=1}^{N_B} R(\Lambda_j,Q_j) = R_\mathsf{target}$, it must contain nodes from both the $+2$ and $-2$ levels. \end{lemma} \begin{proof} By contradiction, suppose that a chain meeting the rate constraint exists and is composed of nodes from the $+2$ and default levels only. However, $R(\Lambda_i,Q_i^{(+2)}) < R(\Lambda_i,Q_i^{(def)})$, so it must be that $\sum_{i=1}^{N_B} R(\Lambda_i,Q_i^{(ch)}) < R_\mathsf{target}$.
Hence the rate constraint is not met and such a chain does not exist. Similarly, suppose that a chain meeting the rate constraint exists and is composed of nodes from the $-2$ and default levels only. However, $R(\Lambda_i,Q_i^{(-2)}) > R(\Lambda_i,Q_i^{(def)})$, so it must be that $\sum_{i=1}^{N_B} R(\Lambda_i,Q_i^{(ch)}) > R_\mathsf{target}$. Hence the rate constraint is not met and such a chain does not exist. Therefore, a chain meeting the target can exist only if it uses nodes from both the $+2$ and $-2$ levels. \end{proof} Relying on this lemma, even when the rate is already exact, the algorithm must try to move some nodes to the -2 and +2 levels in order to optimize the distortion. The starting point is to consider the -2 chain as the new candidate output chain, since it has the lowest distortion. Obviously, selecting the -2 chain causes an increase in the rate, which must be compensated to meet the target. In order to reduce the rate, moving back towards the target, some nodes are assigned to the +2 level. With each node we associate a cost function that captures the trade-off between the gain in rate reduction and the loss in quality due to switching from the -2 to the +2 level. The following cost function, modelling the trade-off with a Lagrange multiplier, is used: \begin{align} \label{SDcost} J_i &= \left[ D(\Lambda_i,Q_i^{(-2)}) - D(\Lambda_i,Q_i^{(+2)}) \right] \notag \\ &+ \lambda \left[ R(\Lambda_i,Q_i^{(-2)}) - R(\Lambda_i,Q_i^{(+2)}) \right] ~~~~ i \in [1,N_B] \end{align} The nodes are sorted by decreasing value of this cost function, and this is the order in which the nodes are selected to be assigned to the +2 level. Specifically, one node at a time is added to the +2 level until the rate reaches $R_\mathrm{target}$. The new chain is then formed by the nodes that remained at the -2 level and the nodes that were demoted to the +2 level. This chain is taken as the new default chain for a new iteration of the algorithm, in order to try to further improve the distortion. Notice that, even if in a single iteration the algorithm selects nodes from the +2 and -2 levels only, successive iterations can reach any value of $Q$, thus considering all possible odd values of the quantization step as possible choices for any block. The algorithm is run in a greedy manner, stopping when the distortion stops improving. We have experimentally observed that the algorithm requires very few iterations (typically fewer than 10). It should be noted that the $l_1$ projector may occasionally provide an initial solution that is not close enough to the target rate. We address this issue in the following way: if $\sum_{i=1}^{N_B} R(\Lambda_i,\hat{Q}_i) \leq 0.99 R_\mathrm{target}$, the solution of the $l_1$ projector is underutilizing the available rate, so Lemma \ref{SDlemma} does not hold and we run an iteration of Selective Diet with only the default and -2 chains. Instead, when the rate exceeds the target, running Selective Diet in the standard fashion suffices to reduce it back to the target, so no modification is made. Finally, the value of $\lambda$ controls the tradeoff between the reduction in rate and the increase in distortion when adding a node to the +2 level. The optimal value of $\lambda$ would let us choose the nodes that maximize the gain in rate while minimizing the increase in distortion.
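A minimal sketch of a single iteration, reusing the model functions above (the naming is ours, and the $-2$ level is clamped at $Q=1$ so that the steps stay odd and positive), could look as follows:

\begin{verbatim}
def selective_diet_iteration(lam, Q, R_target, lmbda=50.0):
    # Start from the -2 chain, then move blocks to the +2 level,
    # in decreasing order of the cost J_i, until the slice rate
    # comes back down to the target.
    Q_minus = np.maximum(Q - 2, 1)    # -2 level
    Q_plus = Q + 2                    # +2 level
    J = (dist_usq(lam, Q_minus) - dist_usq(lam, Q_plus)
         + lmbda * (rate_usq(lam, Q_minus) - rate_usq(lam, Q_plus)))
    chain = Q_minus.copy()            # candidate chain: all nodes at -2
    for i in np.argsort(-J):          # decreasing J_i
        if rate_usq(lam, chain).sum() <= R_target:
            break
        chain[i] = Q_plus[i]          # demote node i to the +2 level
    return chain
\end{verbatim}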
However, finding the optimal value would be computationally very demanding, so we resort to initializing $\lambda$ to an empirically determined value ($\lambda=50$) that we observed to perform well over the whole test image set. This value is adjusted dynamically by the algorithm, halving it every time an increase in the overall distortion is observed in place of a decrease, and rerunning the optimization with the new value. It is also possible to devise a lower-complexity solution that does not adjust $\lambda$ and does not repeat the optimization procedure, at the price of lower performance. The complete algorithm is summarized in Algorithm \ref{SDiet}. \begin{figure} \vspace*{-0.2cm} \centerline{\includegraphics[width=0.9\columnwidth]{fig2.eps}} \caption{One iteration of Selective Diet tries to reduce the quantization step size by 2, but, due to the resulting increase in rate, the step size is actually increased by 2 for some blocks, chosen as the best tradeoff between increase in distortion and gain in rate. Note that blocks of the default chain have different step sizes, although the chain is depicted as a straight line for convenience.} \label{SDscheme} \end{figure} \begin{algorithm} \caption{Selective Diet} \begin{algorithmic} \Require $\mathbf{Q_g}$, $\lambda=50$, $N_{iter}$ \For{$iter=1 \to N_{iter}$} \State Set \textbf{default} = $\mathbf{Q_g}$ , {$\mathbf{Q}^{(+2)}$} = \textbf{Q+2}, $\mathbf{Q}^{(-2)}$ = \textbf{Q-2} \State Set output chain $\mathbf{Q_g} = \mathbf{Q}^{(-2)}$ \State Compute the excess rate $R_{diff} = \sum R(\mathbf{Q_g}) - R_{target}$, \emph{i.e.}, the rate that must be shed to reach the target \State Sort the nodes in {$\mathbf{Q}^{(+2)}$} by decreasing value of $J_i = \left( D_i^{(-2)} - D_i^{(+2)} \right) + \lambda \left( R_i^{(-2)} - R_i^{(+2)} \right)$ \State $i=1$ \While{$\sum R(\mathbf{Q_g}) - R_{target} > 0$} \State Replace the corresponding node in $\mathbf{Q_g}$ with the $i$-th node in the sorted {$\mathbf{Q}^{(+2)}$} \State $i=i+1$ \EndWhile \If{$iter \neq 1$} \If{Distortion did not decrease AND inner iterations not exceeded} \State Set $\lambda \gets \lambda/2$ and repeat current iteration \Else \State Proceed to next iteration \EndIf \EndIf \EndFor \end{algorithmic} \label{SDiet} \end{algorithm} \section{Block Classification} \label{sec:blocktypes} The previous section outlined the basic operations of the rate control algorithm. We have discussed how models can be used to predict the rate and distortion of quantized blocks of prediction residuals. We have also introduced the $l_1$ projector and the Selective Diet algorithm, which exploit the models to solve the problem of allocating quantization step sizes to the blocks so as to achieve the desired rate with low distortion. However, some improvements can be made in order to introduce additional features and handle issues not accounted for by the models; in this section we describe how blocks can be classified into three distinct classes to address them. In particular, each block is assigned one of three types, labelled $\mathrm{NORMAL}$, $\mathrm{INFTY}$ and $\mathrm{SKIP}$. The $\mathrm{NORMAL}$ type is for regular blocks not falling into any of the other categories, whose handling by the algorithm is just as described so far. The $\mathrm{INFTY}$ type is for blocks that are estimated to have a very low variance of the prediction residuals (\emph{e.g.}, $\sigma^2 < 0.1$). This happens for blocks in which the original image is very uniform, so that most of the residuals are zero or close to zero.
The rate spent on these blocks is mostly determined by the quantization noise in the prediction loop, but this is not detected during variance estimation, because estimation is run in a lossless fashion and thus produces no quantization noise. This means that the simplifying assumption of \eqref{variance_estimation} does not hold. Underestimating the variance results in very inaccurate estimates of the rate of those blocks and in an improper allocation of the quantization steps, potentially affecting other blocks due to the propagation of quantization errors. Therefore, in the algorithm we exclude $\mathrm{INFTY}$ blocks from the projection and Selective Diet steps in order to avoid feeding those algorithms with misleading information. $\mathrm{INFTY}$ blocks are then treated separately. After the projector returns its initial solution, whenever an $\mathrm{INFTY}$ block is encountered in the slice, it is assigned the same $Q$ as the closest $\mathrm{NORMAL}$ block in the same band. If no $\mathrm{NORMAL}$ block has been encountered yet and it is not the first slice, then the same $Q$ as the block in the same position in the previous slice is used. Otherwise, if it is the first slice, $Q = 1$ is used. If the last encountered block is not a $\mathrm{NORMAL}$ block but a $\mathrm{SKIP}$ block, then the current $\mathrm{INFTY}$ block becomes a $\mathrm{SKIP}$ block. Except when the block becomes $\mathrm{SKIP}$, the target rate is updated for the Selective Diet algorithm. It is assumed that the rate of the $\mathrm{INFTY}$ block is driven by quantization noise, so the target rate is updated as \begin{align} R_\mathrm{target} \leftarrow R_\mathrm{target} - R\left(\sqrt{\frac{24}{Q^2}},Q\right) \end{align} Optionally, $\mathrm{SKIP}$ blocks can be generated as a way to perform a further rate-distortion optimization by deciding to ``skip'' a block, \emph{i.e.}, to set the prediction residuals of all samples in the block to zero and signal this with a 1-bit flag, if the predicted increase in distortion is low compared to the rate saving obtained by not encoding the block at all. However, skipping may introduce significant noise in the prediction loop, so the amount of skipped blocks must be controlled. Block skipping is useful only at low rates; therefore, $\mathrm{SKIP}$ blocks can be generated only when the target rate is below 1 bpp, and a fixed percentage of blocks is skipped, as a function of the target rate. This percentage increases as the target rate decreases, according to the following rule: \begin{align} p_s = \begin{cases} (1 - T)^3 &\mbox{if }T \leq 1\\ 0 &\mbox{otherwise } \end{cases} \end{align} In order to choose which blocks must be skipped, the blocks in the current slice are sorted by decreasing value of $\Lambda$ and the first blocks in the sorted order are skipped. \section{Feedback-based mode} \label{sec:modeB} The rate control algorithm outlined so far is completely model-based, meaning that no information about the real rate of the encoded slices is used. We shall refer to this method as \textsc{MODE A} of the algorithm. A more accurate control can be achieved by adding a feedback mechanism that modifies the target rate for future slices based on the actual rate used to encode the previous slices. In particular, \textsc{MODE B} of the algorithm measures how many bits have been used to encode the previous slices and adjusts the input target rate for the next slices so as to achieve the global target rate.
Note that we do not want to increase the complexity of the system; hence, we do not perform a multi-pass encoding of the same slice, but rather correct the target for future slices. Although \textsc{MODE B} does not increase the complexity and can achieve more accurate control, it might lower the rate-distortion performance of the encoded image. To see this, let us consider a toy case in which the image is made of two slices having the same rate-distortion function. The global rate-distortion curve for the whole image is convex, but, by adjusting the rate on a slice-by-slice basis, we operate on two distinct points of the curve, and the final rate-distortion point lies on the straight line joining the two operating points, certainly above the convex curve. Hence, per-slice oscillations in the target rate introduce some suboptimality, which is more severe the farther apart the operating points of each slice lie in the rate-distortion plane. \textsc{MODE B} adopts a least-mean-square tracking approach to determine the target rate for the next slice, after measuring the rate produced by the encoding of the current slice. The target update formula is derived to take two issues into account. First, the inaccuracies in the rate controller make the actual output rate different from the target; thus, we want to estimate the input-output relationship of the controller and track it in case of non-stationary behaviour. Second, we would like to count how many bits were used up to the current slice and modify the target rate accordingly: by the number of bits that we saved and would like to spend on the next slices or, vice versa, by the number of bits that we overspent. The goal is to assign all of, but no more than, the bit budget at our disposal, by spending it on or saving it from the remaining slices. The final rate update formula, to be motivated hereafter, is: \begin{align} \label{newtargetbpp} T_{new}[n+1] &= \eta[n+1] + \frac{c[n+1]}{\tau}\cdot \frac{1}{\bar{w}[n]} \end{align} with \vspace*{-0.1cm} \begin{align} c[n+1] &= \sum_{k=0}^n \left( T - y[k] \right) = c[n] + T - y[n] \\ \eta[n+1] &= \eta[n] + \bar{w}[n]\left[ T - y[n] + \frac{c[n]}{\tau} \right] \label{NTupdate} \\ \bar{w}[n] &= \frac{1}{\vert \mathcal{I} \vert} \sum_{k\in \mathcal{I}} w[k] \end{align} where $y[n]$ is the actual rate produced by encoding slice $n$, $T_{new}[n+1]$ is the target rate specified for the $(n+1)$-th slice, which is the next slice to be coded, and $T$ is the original target rate for the whole image (and the initial condition for $T_{new}$). $c[n]$, which we call ``residual budget'', stores how much deviation in rate from $T$ has been accumulated up to slice $n$. The $\tau$ factor used in the formulas plays the role of a time constant, ideally distributing the residual budget over $\tau$ future slices. It can be noticed that equation \eqref{costtrackingbudget} reduces to just a tracking term when $\tau=+\infty$. Also notice that, for $\tau=2$, the residual budget term in \eqref{costtrackingbudget} is exactly $(c[n+1])^2$. $\bar{w}[n]$ is the ratio between output and input rate, averaged over the $\vert \mathcal{I}\vert$ previous slices identified by the set $\mathcal{I}$, where $\vert \mathcal{I}\vert$ denotes the cardinality of the set. As we shall see, different choices of $\mathcal{I}$ are possible and yield different results.
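In code, one pass of the update after encoding slice $n$ could be sketched as follows (the naming is ours; the parameter \texttt{memory} selects the set $\mathcal{I}$ discussed next, with \texttt{memory=1} corresponding to $\mathcal{I}=\{n\}$ and \texttt{memory=len(w\_hist)} to the full history):

\begin{verbatim}
def mode_b_update(T, y, eta, c, w_hist, tau=5.0, memory=1):
    # T: global target rate; y: measured rate of slice n;
    # eta, c: controller state; w_hist: past ratios w[k] = y[k]/T_new[k].
    w_bar = np.mean(w_hist[-memory:])   # average over the set I
    eta_new = eta + w_bar * (T - y + c / tau)
    c_new = c + T - y                   # residual budget update
    T_new = eta_new + c_new / (tau * w_bar)
    return T_new, eta_new, c_new
\end{verbatim}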
As special cases, we notice that, when $\mathcal{I}=\lbrace n \rbrace$, the algorithm does not average over previous slices, hence it is best suited for highly non-stationary scenarios, while, when $\mathcal{I}=\left\lbrace0,1,\ldots,n\right\rbrace$, the algorithm uses all the history for averaging, yielding the best performance for stationary scenarios. The following results establish the rate control performance in these special cases. The wide-sense stationarity (WSS) assumption that we make in the proofs has been verified to be a rather good model, since it basically means that the non-ideal behaviour of the rate controller, which we are trying to correct, has certain regularity properties. We remark that experimental results showed that, when $w[n]$ is WSS, the output rate of the memory-1 method converges to $T$ but the residual budget converges to a non-zero value proportional to the variance of $w[n]$. This is why we advocate the long-memory method when we expect stationary behaviour. However, the memory-1 method (or a method with limited memory) is better at tracking non-stationarities, thanks to Theorem \ref{costmin}. In the proofs we will denote $\frac{c[n+1]}{\tau}\cdot \frac{1}{\bar{w}[n]} = \xi$ for brevity. \begin{proposition}[\bf{Convergence of long-memory method}] Let the rate controller obey the input-output relationship $y[n] = w[n]T_{new}[n]$, with $w[n]$ a wide-sense stationary random process with mean $\mathbb{E}\left[ w[n] \right] = \mu$ and $\mathbb{E}\left[ \left( w[n+l]-\mu \right) \left( w[n]-\mu \right) \right] = 0 , \forall l\neq 0$. Let $T_{new}$ be updated as in \eqref{newtargetbpp} with $\mathcal{I}=\left\lbrace0,1,\ldots,n\right\rbrace$. Then, \begin{align} \lim_{n\rightarrow +\infty} \mathbb{E}\left[ y[n] \right] &= T \tag{Convergence to target} \\ \lim_{n\rightarrow +\infty} \mathbb{E}\left[ c[n] \right] &= 0 \tag{Convergence to zero residual budget} \end{align} \end{proposition} \begin{proof} We will not give a formal proof of this result, but just a sketch. We notice that the sequence of averages over $n$ samples $\bar{w}[n]$ has a limit $\lim_{n\rightarrow \infty} \bar{w}[n] = \mathbb{E}\left[ w[n] \right] = \mu$ thanks to the ergodicity of $w[n]$. We suppose that it reaches this limit value quickly, and thus we approximate $\bar{w}[n] \approx \mu$ for all $n>n_0$. Using this fact and performing some algebraic manipulations on \eqref{NTupdate} and \eqref{newtargetbpp}, similar to those done in the proof of Theorem \ref{convergence}, we obtain the following recursion \begin{align*} T_{new}[n+1] &= \frac{\mu}{\tau}T + \left( 2 - \mu w[n] - \frac{w[n]}{\mu\tau} \right) T_{new}[n] \\ &- \left( 1-\mu w[n]-\frac{w[n]}{\mu\tau} + \frac{\mu^2}{\tau} \right) T_{new}[n-1] \end{align*} Hence \begin{align*} \mathbb{E}\left[ y[n+1] \right] &= \mu \left( 2 - \mu^2 - \frac{1}{\tau} \right) \mathbb{E}\left[ T_{new}[n] \right] \\ &- \mu \left( 1-\mu^2-\frac{1}{\tau} + \frac{\mu^2}{\tau} \right) \mathbb{E}\left[ T_{new}[n-1] \right] + \frac{\mu^2}{\tau}T \end{align*} We take the limit on both sides to get \begin{align*} y^* &= \left( 2 -\mu^2-\frac{1}{\tau} -1+\mu^2 + \frac{1}{\tau} - \frac{\mu^2}{\tau} \right) y^* + \frac{\mu^2}{\tau}T \\ y^* &= \lim_{n\rightarrow +\infty} \mathbb{E}\left[ y[n] \right] = T.
\end{align*} Similarly, the residual budget term can be shown to follow \begin{align*} c[n] &= \frac{1}{\frac{\mu}{\tau} + \frac{1}{\mu\tau}- \frac{1}{\mu\tau}} \cdot \\ & \Big[ T_{new}[n] - \left( 1-\mu w[n-1] - \frac{w[n-1]}{\mu\tau} \right) T_{new}[n-1] \\ &- \left(\mu+\frac{1}{\mu\tau} \right)T \Big] + T - w[n-1] T_{new}[n-1] \end{align*} Hence, \begin{align*} &\lim_{n\rightarrow +\infty} \mathbb{E}\left[ c[n] \right] = \frac{1}{\frac{\mu}{\tau} + \frac{1}{\mu\tau}- \frac{1}{\mu\tau}} \cdot \\ & \left[ \frac{T}{\mu} - \left( 1-\mu^2-\frac{1}{\tau} \right) \frac{T}{\mu} - \left( \mu+\frac{1}{\mu\tau} \right)T \right] + T - \mu\frac{T}{\mu} = 0 \end{align*} \end{proof} \begin{theorem}[\bf{Convergence of memory-1 method}] \label{convergence} Let the rate controller obey the input-output relationship $y[n] = w\cdot T_{new}[n]$, with $w^2 < 2$, and let $T_{new}$ be updated as in \eqref{newtargetbpp} with $\mathcal{I}=\lbrace n \rbrace$. Then, \begin{align} \lim_{n\rightarrow +\infty} y[n] &= T \tag{Convergence to target} \\ \lim_{n\rightarrow +\infty} c[n] &= 0 \tag{Convergence to zero residual budget} \end{align} \end{theorem} \begin{proof} \begin{align} \label{yproof} y[n] &= w \left( \eta[n] + \frac{c[n]}{\tau w} \right) \notag \\ &= \left( 1- w^2 - \frac{1}{\tau} \right)y[n-1] + w^2T + \frac{T}{\tau} + w^2\frac{c[n-1]}{\tau} \end{align} However, from the definition of $c[n]$: \begin{align} \label{cproof} c[n] = c[n-1] + T - y[n-1] \end{align} We can solve \eqref{yproof} for $c[n-1]$ and insert it into \eqref{cproof}. \begin{align} \label{ccrucial} c[n] &= \frac{\tau}{w^2} \left( y[n] - \left( 1-w^2-\frac{1}{\tau} \right)y[n-1] -w^2T - \frac{T}{\tau} \right) \notag \\ &+ T - y[n-1] \end{align} We can recall that \begin{align*} y[n+1] &= \left( 1- w^2 - \frac{1}{\tau} \right)y[n] + w^2T + \frac{T}{\tau} + w^2\frac{c[n]}{\tau} \\ &= \left( 2-w^2-\frac{1}{\tau} \right)y[n] \\&- \left( 1+\frac{1-\tau}{\tau}w^2 - \frac{1}{\tau} \right)y[n-1] + \frac{w^2}{\tau}T \end{align*} The general solution to this difference equation, considering the initial conditions $y[0]=wT$ and $y[1]=wT+w^2T-w^3T+\frac{T-wT}{\tau}$, is \begin{align*} y[n] &= \frac{T(1-w)}{\tau w^2-1} \left( 1-\frac{1}{\tau} \right)^n \\ &+ \left[ Tw - T - T \frac{1-w}{\tau w^2-1} \right] \left( 1-w^2 \right)^n + T \end{align*} It is easy to check the limit: \begin{align*} \lim_{n\rightarrow \infty} y[n] = T , \end{align*} provided that $w^2 < 2$. Moreover, we can take the limit of \eqref{ccrucial} to check budget convergence: \begin{align*} \lim_{n\rightarrow \infty} c[n] = \frac{\tau}{w^2} \left( T - \left( 1-w^2-\frac{1}{\tau} \right)T -w^2T - \frac{T}{\tau} \right) = 0 \end{align*} \end{proof} \begin{theorem}[\bf{Cost minimization of memory-1 method}] \label{costmin} Let the rate controller obey the input-output relationship $y[n] = w[n]T_{new}[n]$, and let $T_{new}$ be updated as in \eqref{newtargetbpp} with $\mathcal{I}=\lbrace n \rbrace$.
Then, update \eqref{NTupdate} is a gradient descent step towards the minimization of \begin{align} \label{costtrackingbudget} J = \underbrace{ \Bigg( T - y[n] \Bigg)^2 }_{\textsc{tracking}} + \underbrace{ \Bigg( T - y[n] + \frac{2c[n]}{\tau} \Bigg)^2 }_{\textsc{budget}} \end{align} \end{theorem} \begin{proof} \begin{align*} J &= T^2 + y^2[n] -2Ty[n] + 4\frac{c^2[n]}{\tau^2} + T^2 + y^2[n]\\ &- 4\frac{c[n]}{\tau}y[n] + 4\frac{c[n]}{\tau}T - 2Ty[n]\\ &= 2T^2 + 2w^2[n]\xi^2 - 2Tw[n]\xi + 4\frac{c^2[n]}{\tau^2} - 4\frac{c[n]}{\tau}w[n]\xi\\ &+ 4\frac{c[n]}{\tau}T -2T\xi w[n] + 2w^2[n]\eta^2[n] + 4w^2[n]\eta[n]\xi\\ &- 2Tw[n]\eta[n] - 4\frac{c[n]}{\tau}w[n] \eta[n] - 2T\eta[n]w[n] \end{align*} \begin{align*} \frac{\mathrm{d}J}{\mathrm{d}(\eta[n])} &= 4w^2[n]\eta[n] + 4\xi w^2[n] \\ &- 2Tw[n] - 4\frac{c[n]}{\tau}w[n] - 2Tw[n] \end{align*} The gradient descent update equation is \begin{align*} \eta[n+1] &= \eta[n] -\alpha \frac{\mathrm{d}J}{\mathrm{d}(\eta[n])} \\ &= \eta[n] - 4\alpha w[n] \left( y[n] - T - \frac{c[n]}{\tau} \right) \end{align*} Thus, setting $\alpha=\frac{1}{4}$ we obtain \eqref{NTupdate}. \end{proof} \section{Hybrid near-lossless rate control} \label{hybrid} \begin{figure*} \subfigure[No $\mathrm{CLIP}$]{ \includegraphics[width=0.32\textwidth]{fig3a.eps}} \subfigure[$\mathrm{CLIP}=21$]{ \includegraphics[width=0.32\textwidth]{fig3b.eps}} \subfigure[$\mathrm{CLIP}=11$]{ \includegraphics[width=0.32\textwidth]{fig3c.eps}} \caption{AVIRIS sc0\_raw. (a) rate$=3.0052$ bpp, MAD=30, SNR=62.25 dB; (b) rate$=3.0046$ bpp, MAD=10, SNR=62.85 dB; (c) rate$=2.9968$ bpp, MAD=5, SNR=63.39 dB} \label{avirisclip} \end{figure*} The proposed rate control algorithm opens the way to an interesting hybrid operating mode in which one can simultaneously constrain the target rate and the maximum distortion. This significantly differs from traditional operating modes, in which one can either specify the rate but has no control over the per-pixel maximum error (as typically happens in rate-controlled transform coding approaches) or specify the maximum error but has no control over the rate (as is easily done in near-lossless predictive schemes). The implementation of such a hybrid mode is trivial with the proposed rate controller, because it is sufficient to limit the maximum quantization step size allowed in the $l_1$ projector and in Selective Diet. If this specification is compatible with finding an allocation of quantization step sizes that yields the prescribed target rate, then the algorithm successfully controls both the rate and the maximum distortion. Fig. \ref{avirisclip} reports the results of some experiments (see Secs. \ref{sec:123extension}-\ref{results} for more details on the test image) that graphically show the impact of constraining the maximum quantization step size (called $\mathrm{CLIP}$) on the distribution of quantization steps and on the rate and quality of the encoded image. In this case the controller successfully provides the desired rate even with the very demanding constraint $\mathrm{CLIP}=11$. Also, notice the improvement in terms of MAD and SNR obtained by the hybrid mode. The higher SNR obtained by enforcing a constraint on the maximum error should not be surprising. In fact, the $l_1$ projector and Selective Diet alone have no guarantee of optimality, and enforcing an additional constraint shrinks the solution space, eliminating suboptimal allocations.
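A sketch of how the constraint enters the model-rate step search (our naming; the same cap also applies to the $\pm 2$ moves of Selective Diet) is given below: with odd steps and integer data, capping $Q$ at $\mathrm{CLIP}$ bounds the per-sample error by $(\mathrm{CLIP}-1)/2$.

\begin{verbatim}
def nearest_odd_Q_clipped(lam_i, r_hat, clip):
    # Same search as nearest_odd_Q, but the admissible odd steps
    # are capped at CLIP, bounding the max error to (CLIP - 1)/2.
    Qs = np.arange(1, clip + 1, 2)
    return Qs[np.argmin(np.abs(rate_usq(lam_i, Qs) - r_hat))]
\end{verbatim}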
Finally, if the user were to demand $\mathrm{CLIP}=5$, she would actually get MAD=2, but the controller would be unable to provide the target rate of 3 bpp, yielding $4.0586$ bpp instead. \section{Extension of CCSDS-123 to near-lossless and lossy compression with rate control} \label{sec:123extension} \subsection{Review of CCSDS-123} The Consultative Committee for Space Data Systems (CCSDS) has recently developed the CCSDS-123 recommendation, intended for the lossless compression of multispectral and hyperspectral images. CCSDS-123 is based on the Fast Lossless compression algorithm \cite{FL} \cite{calibrationartifacts}, which is a predictive method. The algorithm computes a local sum $\sigma_{z,y,x}$, obtained from a causal neighborhood of the pixel. A weighted combination of the local sums in the $P$ previous bands yields the predicted pixel value. The algorithm adapts the weights using the sign algorithm \cite{chosign}, which is a low-complexity solution for the implementation of a least-mean-square filter. Let $s_{z,y,x}$ denote the pixel value at position $(x,y,z)$; the encoder then computes: \begin{align*} \hat{d}_{z,y,x} = \mathbf{W}_{z,y,x}^T \mathbf{U}_{z,y,x} = \mathbf{W}_{z,y,x}^T \left[\begin{array}{c} 4s_{z,y-1,x}-\sigma_{z,y,x}\\ 4s_{z,y,x-1}-\sigma_{z,y,x}\\ 4s_{z,y-1,x-1}-\sigma_{z,y,x}\\ 4s_{z-1,y,x}-\sigma_{z-1,y,x}\\ \vdots\\ 4s_{z-P,y,x}-\sigma_{z-P,y,x} \end{array}\right] \end{align*} A \emph{scaled predicted sample} $\tilde{s}_{z,y,x}$ is calculated from $\hat{d}_{z,y,x}$. The prediction residual is computed as $\Delta_{z,y,x} = s_{z,y,x} - \left\lfloor \frac{\tilde{s}_{z,y,x}}{2} \right\rfloor$ and then mapped to a positive integer $\delta_{z,y,x}$ to be entropy encoded. For further details, we refer the reader to the CCSDS-123 Blue Book \cite{ccsds123} and to the paper by Aug\'{e} \emph{et al.} \cite{ccsds123performance} for a more thorough explanation of the encoder parameters and their impact on performance. \subsection{Near-lossless extension} Extending the compression mechanism to near-lossless encoding simply requires the introduction of a quantizer in the prediction loop. In particular, we use a uniform scalar quantizer to quantize the prediction residual $\Delta_{z,y,x}$ into $\hat{\Delta}_{z,y,x} = \mathrm{sgn}\left( \Delta_{z,y,x} \right)\cdot \left\lfloor \frac{\vert \Delta_{z,y,x} \vert + (Q-1)/2}{Q} \right\rfloor$. The quantized value is then mapped to a positive integer and sent to the entropy coding stage. In order to maintain synchronization with the decoder, we must use the dequantized value $Q \hat{\Delta}_{z,y,x}$ for the weight update. The near-lossless encoder uses a single quantization step size for the whole image. \subsection{Rate-controlled lossy extension} The rate-controlled version of the algorithm uses the proposed rate control method to assign a different quantization step size to each block in the image. Assuming that the encoder proceeds in Band Interleaved by Line (BIL) order, the rate control procedure is called whenever the current pixel belongs to the first band and is at the beginning of a new slice (\emph{i.e.}, position $z=0$, $y=k\cdot BS_y$, $x=0$). As explained in the previous sections, the rate controller first encodes $\mathrm{ESTLINES}$ lines (with all their spectral bands) in a lossless mode in order to estimate the variance of the prediction residuals.
Once the variance is estimated and the allocation of quantization steps is performed, the encoder backtracks to position $(0,k\cdot BS_y,0)$, discarding all the weight updates done in the meantime, and starts the actual encoding pass of the slice. Similarly to the near-lossless mode, the encoder now computes the quantized prediction residuals $\hat{\Delta}_{z,y,x}$, but employing the quantization steps calculated by the controller for each block. It is important to notice that the chosen quantization step sizes must be written in the header of the compressed file for use at the decoder side. In order to keep the overhead low, we propose to use a differential encoding strategy adopting the Exp-Golomb code \cite{expgolomb}. Differential encoding amounts to encoding only the differences between successive quantization steps and, since they are expected to be close to each other, some compression is obtained. A simple universal code such as the Exp-Golomb code of order zero is then used to compress the differences. Finally, formulas \eqref{rate_usq} and \eqref{dist_usq} can be implemented by means of lookup tables. It can be noticed that the rate depends only on $\Lambda Q$ and that the distortion can be rewritten as the product of a function of $\Lambda Q$ and $Q^2$. We have verified that two lookup tables of roughly $45000$ integer values each are sufficient to ensure the correct behavior of the algorithm. The values in the rate table can be represented using 14 bits per value, while the distortion values need 13 bits. The total memory occupation of the two tables is thus about 152 kB. \vspace*{-0.25cm} \subsection{Range encoder} \label{sec:range} The CCSDS-123 recommendation defines an adaptive coding approach using Golomb-Power-of-2 codes, mainly due to their low complexity and good performance, as well as the existence of an earlier standard (CCSDS 121.0-B \cite{ccsds121}) using the Rice coding algorithm, embedded in the block-adaptive mode. We propose a different entropy coding stage based on the range coder \cite{martinrange}, which is essentially a simplified arithmetic encoder. Such a coder is needed in order to achieve rates lower than 1 bpp, as the minimum codeword length of a Golomb code is 1 bit. Moreover, a higher-performance entropy coder improves the effectiveness of the rate controller by limiting the suboptimality introduced at this stage. For efficiency reasons, the proposed range coder keeps four separate models per band for the prediction residuals, as described in \cite{losslessLUT}. \vspace*{-0.1cm} \section{Numerical results} \label{results} We have performed extensive tests on images extracted from the corpus defined by the MHDC working group of the CCSDS for the performance evaluation and testing of compression algorithms. A total of 47 images is used to generate the ensemble statistics, while for brevity we report numerical results only for a smaller subset. The whole corpus comprises images of various nature, from ultraspectral images captured by the IASI and AIRS sensors, through hyperspectral images captured by the CASI, SFSI, AVIRIS and Hyperion sensors, to multispectral images captured by the MODIS, Landsat, Vegetation, MSG, Pleiades and SPOT5 sensors. Table \ref{tab:testimages} reports details about the images used in the tests and the number of bands $P$ used for prediction.
The images with the \textsc{NUC} suffix have undergone Non-Uniformity Correction, \emph{i.e.}, a form of compensation of the different gains of the lines of the image, performed by means of a median filter, as described in \cite{pot}. The tests have multiple goals. First, we want to analyze the accuracy of the rate control algorithm, assessing how close the actual rate of the compressed image is to the specified target. Second, we study the rate-distortion performance of the algorithm by drawing the full rate-distortion curve in order to compare it against the rate-distortion curve obtained by the near-lossless version of the encoder. The latter is known to be the optimal quantization step selection for a Gaussian source, but does not provide rate control, although many rate-distortion points are indeed achievable. We use this curve as an upper performance bound in order to estimate how close the proposed rate control algorithm can get to the ideal solution. Finally, we compare the performance of the proposed extension of CCSDS-123 to lossy compression with rate control against a state-of-the-art transform coder intended for onboard compression. \begin{table} \caption{Test Images} \label{tab:testimages} \centering \begin{tabular}{c c c c c} Image & Rows & Columns & Bands & $P$ \\ \hline \hline \textsc{AVIRIS sc0\_raw} & 512 & 680 & 224 & 15 \\ \hline \textsc{AIRS gran9} & 135 & 90 & 1501 & 10 \\ \hline \textsc{CASI-t0477f06-nuc} & 1225 & 406 & 72 & 2\\ \hline \textsc{CRISM-sc167-nuc} & 510 & 640 & 545 & 3 \\ \hline \textsc{CRISM-sc182-nuc} & 450 & 320 & 545 & 3 \\ \hline \textsc{CRISM-sc214-nuc} & 510 & 640 & 545 & 3 \\ \hline \textsc{frt00009326\_07\_vnir} & 512 & 640 & 107 & 3 \\ \hline \textsc{Geo\_Sample\_Flatfielded} & 1024 & 256 & 242 & 10 \\ \hline \textsc{M3targetB-nuc} & 512 & 640 & 260 & 3 \\ \hline \textsc{M3targetB} & 512 & 640 & 260 & 3 \\ \hline \textsc{MODIS-MOD01\_250m} & 8120 & 5416 & 2 & 1 \\ \hline \textsc{MODIS-MOD01\_500m} & 4060 & 2708 & 5 & 4 \\ \hline \textsc{MODIS-MOD01day} & 2030 & 1354 & 14 & 2 \\ \hline \textsc{MODIS-MOD01night} & 2030 & 1354 & 17 & 4 \\ \hline \textsc{montpellier} & 224 & 2456 & 4 & 3 \\ \hline \textsc{mountain} & 1024 & 1024 & 6 & 5 \\ \hline \textsc{t0477f06\_raw} & 1225 & 406 & 72 & 2 \\ \hline \textsc{toulouse\_spot5\_xs\_extract1} & 1024 & 1024 & 3 & 3 \\ \hline \textsc{vgt\_1b} & 10080 & 1728 & 4 & 3 \\ \hline \end{tabular} \end{table} \subsection{Complexity considerations} Before presenting the experimental performance of the proposed algorithm, we analyze its computational complexity both theoretically and on a real implementation. The lossless version of the compression algorithm is quite similar to the CCSDS-123 recommendation, with the exception of the entropy coding stage, now replaced by the range coder. Its complexity and that of the near-lossless scheme are therefore just marginally higher than that of CCSDS-123. The rate control algorithm has three main sources of complexity: \begin{itemize} \item the estimation of the variance of the unquantized prediction residuals; \item the $l_1$ projector; \item the Selective Diet optimization algorithm. \end{itemize} We remarked in Sec. \ref{sec:l1} that the $l_1$ projector has $\mathcal{O}(N_B\log N_B)$ complexity, essentially due to the sorting procedure. The Selective Diet algorithm also has a sorting step as its main source of complexity. After the blocks in the current slice are sorted according to the value of the cost function, a linear scan is performed to optimize the quantization step sizes.
This basic operation is repeated for $N_{iter}$ iterations, hence with good approximation we can say that Selective Diet has $\mathcal{O}(N_{iter}(N_B\log N_B + N_B))$ complexity. However, the number of required iterations is typically very low (around 5 to 10) and can be bounded by a predefined value. We also profiled our C-language implementation of the compression algorithm, comparing lossless encoding against rate-controlled encoding in terms of running times. We used the \emph{AVIRIS sc0\_raw} image for this test, as it is one of the biggest in the dataset. Rate control was set to $3$ bpp with \textsc{MODE A}. The running time of the lossless encoder was $72.62$ seconds, while the rate-controlled encoder took $80.48$ seconds. The time spent writing to file was excluded from both measurements in order to avoid any bias due to the different file sizes. It can be noticed that the overhead of the rate controller is around $10\%$. Careful profiling of the code shows that $65\%$ of this overhead ($5.11$ s) is due to variance estimation and only $22\%$ ($1.73$ s) to optimization ($l_1$ projector and Selective Diet); the remaining $13\%$ is due to other inefficiencies in the code, which is not heavily optimized. This result confirms our intuition, presented in Sec. \ref{sec:models}, that variance estimation is the main source of complexity, so the number of lines (\textsc{ESTLINES}) used for this task must be chosen carefully. All the results presented in this paper were obtained with \textsc{ESTLINES}$=2$. \subsection{Accuracy of rate control} \begin{figure*}[ht!] \addtolength{\subfigcapskip}{-0.2cm} \vspace*{-0.1cm} \centering \subfigure[1 bpp]{ \includegraphics[width=0.3\textwidth]{fig4a.eps}} \subfigure[2 bpp]{ \includegraphics[width=0.3\textwidth]{fig4b.eps}} \subfigure[3 bpp]{ \includegraphics[width=0.3\textwidth]{fig4c.eps}} \vspace*{-0.2cm} \caption{Histograms of output rates for mode B} \label{histograms} \end{figure*} \begin{table*} \caption{Output rates} \centering \begin{tabular}{c| c| c| c c c c} Image & Size (lines$\times$pixels$\times$bands) & Mode & \emph{1 bpp} & \emph{2 bpp} & \emph{3 bpp} & \emph{4 bpp}\\ \hline \multirow{2}{*}{\textsc{AVIRIS sc0\_raw}} & \multirow{2}{*}{$512\times680\times224$} & A & 0.951 & 1.963 & 2.955 & 3.959 \\ & & B & 1.004 & 1.995 & 3.006 & 3.994\\ \hline \multirow{2}{*}{\textsc{AIRS gran9}} & \multirow{2}{*}{$135\times90\times1501$} & A & 0.948 & 1.939 & 2.963 & 3.971 \\ & & B & 0.959 & 1.976 & 2.962 & 3.962\\ \hline \multirow{2}{*}{\textsc{CASI-t0477f06-nuc}} & \multirow{2}{*}{$1225\times406\times72$} & A & 0.881 & 1.944 & 2.924 & 3.981 \\ & & B & 0.999 & 1.994 & 2.995 & 3.988\\ \hline \multirow{2}{*}{\textsc{CRISM-sc167-nuc}} & \multirow{2}{*}{$510\times640\times545$} & A & 0.678 & 1.706 & 2.677 & 3.690 \\ & & B & 1.003 & 1.993 & 2.986 & 3.991 \\ \hline \multirow{2}{*}{\textsc{CRISM-sc182-nuc}} & \multirow{2}{*}{$450\times320\times545$} & A & 0.680 & 1.698 & 2.691 & 3.696 \\ & & B & 1.002 & 1.991 & 2.985 & 3.973 \\ \hline \multirow{2}{*}{\textsc{frt00009326\_07\_vnir}} & \multirow{2}{*}{$512\times640\times107$} & A & 0.607 & 1.427 & 2.231 & 3.140 \\ & & B & 1.000 & 1.999 & 3.001 & 3.994 \\ \hline \multirow{2}{*}{\textsc{Geo\_Sample\_Flatfielded}} & \multirow{2}{*}{$1024\times256\times242$} & A & 0.912 & 2.070 & 3.124 & 3.998 \\ & & B & 0.988 & 1.987 & 2.983 & 3.971 \\ \hline \multirow{2}{*}{\textsc{M3targetB-nuc}} & \multirow{2}{*}{$512\times640\times260$} & A & 0.889 & 1.974 & 3.043 & $3.834^{(*)}$ \\ & & B & 1.000
& 1.997 & 2.998 & $3.834^{(*)}$ \\ \hline \multirow{2}{*}{\textsc{MODIS-MOD01\_250m}} & \multirow{2}{*}{$8120\times5416\times2$} & A & 0.909 & 1.997 & 2.939 & 3.839 \\ & & B & 1.014 & 2.009 & 3.006 & 4.004 \\ \hline \multirow{2}{*}{\textsc{MODIS-MOD01day}} & \multirow{2}{*}{$2030\times1354\times14$} & A & 1.042 & 2.045 & 2.996 & 3.985 \\ & & B & 1.014 & 2.005 & 2.998 & 3.986 \\ \hline \multirow{2}{*}{\textsc{montpellier}} & \multirow{2}{*}{$224\times2456\times4$} & A & 0.959 & 2.122 & 3.123 & 4.105 \\ & & B & 1.025 & 2.035 & 3.030 & 4.032 \\ \hline \multirow{2}{*}{\textsc{mountain}} & \multirow{2}{*}{$1024\times1024\times6$} & A & 0.735 & 1.935 & 2.970 & $3.793^{(*)}$ \\ & & B & 1.002 & 2.003 & 3.003 & $3.793^{(*)}$ \\ \hline \multirow{2}{*}{\textsc{t0477f06\_raw}} & \multirow{2}{*}{$1225\times406\times72$} & A & 1.138 & 1.971 & 2.935 & 3.987 \\ & & B & 1.016 & 1.994 & 2.995 & 3.993 \\ \hline \multirow{2}{*}{\textsc{toulouse\_spot5\_xs\_extract1}} & \multirow{2}{*}{$1024\times1024\times3$} & A & 0.714 & 1.815 & 2.805 & 3.802 \\ & & B & 1.010 & 2.002 & 2.999 & 3.997 \\ \hline \multirow{2}{*}{\textsc{vgt\_1b}} & \multirow{2}{*}{$10080\times1728\times4$} & A & 0.630 & 1.813 & 2.878 & 3.914 \\ & & B & 1.009 & 2.004 & 3.002 & 4.001 \\ \hline \end{tabular} \\ \vspace*{0.4cm} (*) : lossless \label{tab:rates} \end{table*} In this section we show some results concerning the accuracy of the rate controller in terms of output rate. The tests are conducted for various target rates and for the two operating modes of the algorithm: A and B. The predictor defined in the CCSDS-123 standard is used in the full prediction mode and with neighbour-oriented local sums. Square blocks of size $16\times16$ are used, but the variance of the unquantized prediction residuals is obtained by running the lossless encoder on 2 lines only. This makes it possible to buffer only two spectral lines at any given time, avoiding the need for large onboard memory buffers. Table \ref{tab:rates} reports a selection of the test images and the output rates obtained for the specified target rates. While full rate-distortion results are reported later, this test aims at assessing how accurately a given target rate is obtained. It can be noted that operating mode A is typically less accurate than mode B. Nonetheless, it can still achieve very good accuracy in many cases and, as explained in Sec. \ref{RDnumericalperformance}, it potentially has better rate-distortion characteristics. Mode B always has remarkably good accuracy, thanks to the information on the actual number of bits used to encode the previous slices. Moreover, it can be seen that the algorithm performs equally well on both hyperspectral and multispectral images. Furthermore, Fig. \ref{histograms} reports histograms of the actual rate obtained by mode B on a total of 47 images belonging to the test set of the CCSDS. The bin width is 1\% of the target rate. It should be noticed that, for the histogram at 3 bpp, some of the images were encoded losslessly at a rate lower than the target; hence, they have not been considered in the histograms. Notice that many images in the test set reach an accuracy of 1\% or better. We remark that the rate control results are consistent throughout this large test set and only a few images failed to be encoded with good accuracy. This is due to the severe noise affecting those images, which causes the predictor to perform poorly; consequently, the prediction residuals exhibit large deviations from the assumed model.
\subsection{Rate-distortion performance} \label{RDnumericalperformance} \begin{figure*}[ht!] \subfigure[]{ \includegraphics[width=0.32\textwidth]{fig5a.eps}} \subfigure[]{ \includegraphics[width=0.32\textwidth]{fig5b.eps}} \subfigure[]{ \includegraphics[width=0.32\textwidth]{fig5c.eps}} \caption{Rate-SNR curves. (a) \emph{AIRS gran9} , (b) \emph{CRISM-sc182-nuc}, (c) \emph{vgt1\_1b}} \label{rateditortion} \end{figure*} In this section we study the rate-distortion performance of the encoder and, in particular, we focus on the suboptimality of the rate controller with respect to a near-lossless encoding of the images. The problem with near-lossless compression is that, apart from the lack of rate control, only certain rates can be achieved, due to the choice of a single quantization step for the whole image. At high rates, this causes the rate-distortion points to be quite far apart from each other (\emph{e.g.}, by as much as $0.5$ bpp), hence not allowing very flexible choices of the rate-distortion operating point. On the other hand, rate control achieves very fine granularity, and any rate-distortion point, from low rates up to lossless compression, can be used. Figure \ref{rateditortion} shows the rate-SNR curves obtained for near-lossless compression, rate control with mode A and rate control with mode B for some test images. The following definition of SNR is used throughout the paper: \begin{align*} \textsc{SNR} = 10\log_{10} \frac{\sum_{i=1}^{N_{pixels}}x_i^2}{\sum_{i=1}^{N_{pixels}}(x_i-\hat{x}_i)^2} \end{align*} where $x_i$ and $\hat{x}_i$ are the $i$-th pixel in the original image and in the decoded image, respectively. As already explained in Sec. \ref{sec:modeB}, the great accuracy in the rate achieved by mode B is paid for in terms of slightly lower rate-distortion performance. However, it is remarked that, when the encoder is run relying on the rate control only, the greater accuracy of mode B often results in better quality than that provided by mode A, which often yields a rate lower than the target. Nevertheless, it can be noticed that the rate-distortion curves for both mode A and mode B are quite close to the near-lossless performance. As an example, for \emph{AIRS gran9} the gap is only about $0.2$ dB at 2 bpp. For \emph{frt00009326\_07\_vnir} the gap at 2 bpp is 0.2 dB for mode A and 0.4 dB for mode B. We report image \emph{vgt1\_1b} as one of the worst cases of rate-SNR performance, where mode A loses about 1.5 dB with respect to near-lossless encoding and mode B about 1.8 dB, again at 2 bpp. We also remark that the curves were obtained without constraining the maximum distortion, which can significantly improve performance, as shown in Sec. \ref{hybrid}. \subsection{Comparison with transform coding} The CCSDS-122 standard \cite{ccsds122} defines a transform coder employing the Discrete Wavelet Transform and a low-complexity Bit Plane Encoder for the compression of 2D imagery. An extension of this standard to multiband imagery, obtained by including a spectral transform, has been implemented and is publicly available online \cite{deltasoftware}. The implementation combines the CCSDS-122 encoder with the POT spectral transform \cite{pot}. The proposed system is run using the memory-1 mode B of rate control (slice-by-slice feedback) with $\tau=5$, with full prediction mode and neighbor-oriented local sums, while the transform system performs the rate allocation by means of the reverse waterfill algorithm \cite{taubmanjpeg2000}.
We remark that the availability of the rate controller for the predictive system makes a direct comparison possible, in which both systems work in a pure rate-controlled fashion, specifying a target rate and letting the encoder make all the coding decisions automatically. The proposed rate controller is operated using $16\times 16$ blocks and $\mathrm{ESTLINES}=2$, meaning that only two lines out of 16 are used for the estimation of the variance of the unquantized prediction residuals. On the other hand, the transform coding system buffers 8 lines, thus requiring more memory. Table \ref{hydra_vs_delta} reports a comparison between the two systems, highlighting the best results in bold. The proposed predictive system is competitive against transform coding, typically providing superior quality, both in terms of SNR and in terms of maximum absolute distortion (MAD), for the same rate. Other quality metrics, such as the maximum spectral angle (MSA) and the average spectral angle (ASA), have been studied in the literature \cite{quality_metrics}, but we omit them for reasons of brevity; such metrics follow the same trends observed for SNR and MAD, respectively. We observe that, at lower rates, the proposed algorithm achieves significant gains in terms of MAD even when the SNR gain is small, or in the few cases where the transform coder is more effective. We also report (Table \ref{hydra_gains}) the mean and median gains in terms of SNR and MAD obtained by the proposed algorithm on the whole corpus of images. We choose to report the median gain, as well as the mean, because some outliers bias the mean gain statistics owing to the very large gain obtained by the proposed system. It is sometimes the case that the proposed system reaches lossless quality at the desired rate, while the transform coder does not. Such cases are excluded from the computation of the SNR gain, as it would be infinite. We can notice that the higher gains are achieved at higher rates, confirming the typical behaviour of predictive encoders with respect to transform encoders. Finally, we report a visual comparison (Fig. \ref{visual_comp}) on a cropped portion of the first band of the \emph{vgt1\_1b} test image. The two algorithms are compared at the same rate of 2 bpp. Although it is difficult to see the differences with the naked eye on paper, the figures reporting the magnitude of the error clearly show that the proposed predictive approach consistently achieves smaller deviations from the original image. Also, notice that, despite the block-based approach of the proposed algorithm, the scalar quantization of the prediction residuals does not produce blocking artifacts. \begin{figure*} \centerline{ \includegraphics[width=0.185\textwidth]{fig6a.eps} \includegraphics[width=0.185\textwidth]{fig6b.eps} \includegraphics[width=0.185\textwidth]{fig6c.eps} \includegraphics[width=0.185\textwidth]{fig6d.eps} \includegraphics[width=0.185\textwidth]{fig6e.eps}} \caption{Visual comparison of a crop of \emph{vgt1\_1b}, band 1. From left to right: original image, predictive approach, transform approach, absolute error for predictive, absolute error for transform} \label{visual_comp} \end{figure*} \vspace*{-0.2cm} \section{Conclusions} In this paper we have presented a rate control algorithm for the onboard compression of hyperspectral and multispectral images, designed to work with predictive encoders and suitable for implementation on spacecraft.
While rate control is easy to perform in the case of transform coding, the predictive coding paradigm poses significant challenges. We have proposed a scheme based on modelling the rate and distortion of non-overlapping blocks of the image and optimizing the assignment of quantization step sizes over slices of the image. Extensive tests have shown that the algorithm can control the output rate with excellent accuracy. Moreover, rate control solves one of the issues of near-lossless compression, \emph{i.e.}, the scarcity of operating points at high rates. In fact, the availability of a rate controller allows the user to choose any rate, depending on their specific needs. We have also proposed an extension of the CCSDS-123 standard to deal with lossy, near-lossless and hybrid near-lossless rate-controlled compression in a single package. The resulting architecture is competitive with the transform coding approach, significantly outperforming it at all rates from 1 bpp up to lossless compression. \begin{table*}[htbp] \caption{PREDICTIVE (CCSDS-123 + Rate Control B) vs. TRANSFORM (CCSDS-122 + POT + Reverse Waterfill)} \centerline{ \begin{tabular}{c|c|ccc|cc} \multicolumn{ 2}{c}{} & \multicolumn{ 3}{c}{PREDICTIVE} & \multicolumn{ 2}{c}{TRANSFORM} \\ IMAGE & TARGET (bpp) & RATE (bpp) & SNR (dB) & MAD & SNR (dB) & MAD \\ \hline & 1.00 & 1.004 & 46.15 & \bf{247} & \bf{48.87} & 433 \\ \textsc{aviris\_sc0} & 2.00 & 1.995 & \bf{55.93} & \bf{35} & 55.02 & 107 \\ $512\times680\times224$ & 3.00 & 3.006 & \bf{62.28} & \bf{30} & 59.69 & 41 \\ & 4.00 & 3.994 & \bf{69.49} & \bf{3} & 65.03 & 21 \\ \hline & 1.00 & 0.951 & \bf{47.79} & \bf{10} & 44.24 & 167 \\ \textsc{CRISM-sc214-nuc} & 2.00 & 1.938 & \bf{56.26} & \bf{4} & 52.72 & 45 \\ $510\times640\times545$ & 3.00 & 2.920 & \bf{62.09} & \bf{2} & 60.33 & 7 \\ & 4.00 & 3.818 & \bf{88.15} & \bf{1} & 65.32 & 3 \\ \hline & 1.00 & 1.001 & \bf{44.00} & \bf{63} & 37.16 & 1278 \\ \textsc{M3targetB} & 2.00 & 1.998 & \bf{54.25} & \bf{10} & 46.38 & 243 \\ $512\times640\times260$ & 3.00 & 2.997 & \bf{60.08} & \bf{8} & 58.52 & 32 \\ & 4.00 & 3.929 & \bf{69.55} & \bf{2} & 64.88 & 7 \\ \hline & 1.00 & 1.016 & \bf{29.36} & \bf{255} & 25.87 & 752 \\ \textsc{MODIS-MOD01\_500m} & 2.00 & 2.009 & \bf{39.32} & \bf{100} & 36.54 & 244 \\ $4060\times2708\times5$ & 3.00 & 3.005 & \bf{47.25} & \bf{37} & 42.99 & 168 \\ & 4.00 & 4.002 & \bf{54.08} & \bf{16} & 49.77 & 53 \\ \hline & 1.00 & 1.004 & 42.19 & \bf{88} & \bf{43.60} & 618 \\ \textsc{MODIS-MOD01night} & 2.00 & 2.002 & \bf{52.49} & \bf{32} & 51.10 & 276 \\ $2030\times1354\times17$ & 3.00 & 3.001 & \bf{59.84} & \bf{12} & 56.51 & 277 \\ & 4.00 & 4.000 & \bf{65.78} & \bf{5} & 61.48 & 47 \\ \hline & 1.00 & 1.024 & \bf{28.67} & \bf{137} & 27.16 & 670 \\ \textsc{montpellier} & 2.00 & 2.035 & \bf{37.23} & \bf{42} & 33.46 & 635 \\ $224\times2456\times4$ & 3.00 & 3.030 & \bf{44.60} & \bf{18} & 39.88 & 92 \\ & 4.00 & 4.032 & \bf{51.22} & \bf{7} & 45.44 & 47 \\ \hline & 1.00 & 1.010 & 23.80 & \bf{25} & \bf{24.26} & 74 \\ \textsc{toulouse\_spot5\_xs\_extract1} & 2.00 & 2.002 & \bf{31.88} & \bf{7} & 30.37 & 36 \\ $1024\times1024\times3$ & 3.00 & 2.999 & \bf{38.53} & \bf{4} & 35.98 & 14 \\ & 4.00 & 3.997 & \bf{44.30} & \bf{2} & 41.29 & 7 \\ \hline & 1.00 & 1.009 & \bf{31.07} & \bf{83} & 27.93 & 372 \\ \textsc{vgt1\_1b} & 2.00 & 2.004 & \bf{40.18} & \bf{27} & 37.05 & 231 \\ $10080\times1728\times4$ & 3.00 & 3.002 & \bf{47.31} & \bf{11} & 44.02 & 64 \\ & 4.00 & 4.001 & \bf{53.52} & \bf{5} & 49.76 & 15 \\ \hline
\end{tabular}} \vspace*{-0.0cm} \label{hydra_vs_delta} \end{table*} \begin{table*}[htbp] \caption{MEAN AND MEDIAN GAIN} \centerline{ \begin{tabular}{c| c c| c c } RATE (bpp) & MEAN SNR GAIN (dB) & MEDIAN SNR GAIN (dB) & MEAN MAD GAIN & MEDIAN MAD GAIN \\ \hline 1.00 & 1.55 & 1.51 & 727 & 404 \\ \hline 2.00 & 2.82 & 2.17 & 323 & 117 \\ \hline 3.00 & 3.46 & 2.93 & 123 & 29 \\ \hline 4.00 & 6.60 & 4.31 & 54 & 7 \\ \hline \end{tabular}} \label{hydra_gains} \end{table*} \section{Acknowledgements} We would like to thank Ian Blanes from the Universitat Aut\`{o}noma de Barcelona for his valuable support in using the Delta software developed by the Group on Interactive Coding of Images.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \IEEEPARstart{W}ith the advancement of vehicular connectivity and autonomy, Connected and Automated Vehicles (CAVs) have the potential to operate in a safer and more time- and fuel-efficient manner \cite{vahidi2018energy}. With Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication, the controller has access to real-time look-ahead information including the terrain, infrastructure and surrounding vehicles. Intuitively, with connectivity technologies, controllers can plan a speed profile that allows the ego vehicle to intelligently pass more signalized intersections in green phases with fewer changes in speed. This is formulated as the eco-driving problem, which aims to minimize the weighted sum of the fuel consumption and the travel time between two designated locations by co-optimizing the speed trajectory and the powertrain control strategy \cite{sciarretta2015optimal, jin2016power}. This field of research has gained significant momentum in the last decade. \cite{ozatay2014cloud,jin2016power, han2019fundamentals,sun2020optimal} address the eco-driving problem for vehicles with a single power source, whereas \cite{mensing2012vehicle,guo2016optimal, olin2019reducing} study the problem for hybrid electric powertrain architectures. The latter involves modeling multiple power sources and devising optimal control algorithms that synergistically split the power demand to efficiently utilize the electric energy stored in the battery. Maamria et al. \cite{maamria2018computation} systematically compared the computational requirements and the optimality of different formulations. Meanwhile, differences also exist in the complexity of the driving scenarios considered. In \cite{ozatay2014cloud, olin2019reducing}, the eco-driving problem is formulated without considering real-time traffic light variability. \cite{jin2016power,asadi2010predictive, sun2020optimal, guo2016optimal, guo2021ecodriving} explicitly model Signal Phase and Timings (SPaTs) and formulate and solve the eco-driving problem with optimal control techniques. In this work, the eco-driving problem of Connected and Automated Hybrid Electric Vehicles (CAHEVs) with the capability of passing traffic lights autonomously is studied. Recently, the use of Deep Reinforcement Learning (DRL) in the context of eco-driving has attracted considerable attention. DRL provides a train-offline, execute-online methodology in which the policy is learned from historical data or from interaction with simulated environments. Shi et al. \cite{shi2018application} modeled conventional vehicles with an ICE as a simplified model and implemented Q-learning to minimize the $\mathrm{CO_2}$ emissions at signalized intersections. Li et al. \cite{li2019ecological} applied an actor-critic algorithm to the ecological Adaptive Cruise Control (ACC) problem in car-following mode. Guo and Wang \cite{guo2021integrated} proposed an MPC-initialized Proximal Policy Optimization with Model-based Acceleration (PPOMA) for the problem of active signal priority control for trams. Pozzi et al. \cite{pozzi2020ecological} designed a velocity planner that considers signalized intersections and the hybrid powertrain configuration using Deep Deterministic Policy Gradient (DDPG). Zhu et al. \cite{zhu2021deep} formulated the eco-driving problem as a Partially Observable Markov Decision Process (POMDP) and approached it with PPO.
While the strategies based on Model-Free Reinforcement Learning (MFRL) in these studies show improvements in average fuel economy and reductions in onboard computation, the methodology has a fundamental drawback. To teach the agent to drive under complex driving scenarios while satisfying all the constraints from the powertrain and traffic rules, a complex and often cumbersome rewarding/penalizing mechanism needs to be designed. Furthermore, under such a setup, the agent learns to satisfy constraints only by minimizing the expected cost. For scenarios that are rare yet catastrophic, the scale of the cost penalizing constraint violations needs to be significantly larger than the learning objective itself \cite{shi2018application}. As a result, such an extrinsic rewarding mechanism lengthens the design period and degrades the final performance. This paper proposes a safe model-based off-policy reinforcement learning algorithm to solve the eco-driving problem for a connected and automated mild HEV. The first contribution of this work, from the eco-driving application perspective, is the elimination of extrinsic reward design from the eco-driving problem through a Model-based Reinforcement Learning (MBRL) algorithm. The algorithm integrates RL with trajectory optimization, which incorporates the constraints from the powertrain dynamics, vehicle dynamics and traffic rules in a constrained optimization formulation. The performance of the agent is meanwhile improved by the terminal cost function learned through the RL mechanism. The second contribution of this work, from the RL algorithm perspective, is the development of Safe Model-based Off-policy Reinforcement Learning (SMORL), a safety-oriented model-based off-policy Q-learning algorithm for systems with known dynamics. The algorithm has three unique features compared to prior and current MBRL implementations. First, SMORL is off-policy, as opposed to \cite{lowrey2018plan, karnchanachari2020practical,thananjeyan2020safety,thananjeyan2020abc}. While the use of the model in MBRL increases the sample efficiency \cite{wang2019benchmarking}, the collection of each individual transition becomes more computationally expensive, as it commonly requires solving an online optimization problem, as opposed to evaluating a feedforward policy in MFRL. With the use of experience replay \cite{lin1992self} in off-policy learning, the historical data can be used to greatly reduce the overall training time. To obtain the value function from the Q function, an actor is explicitly trained as in the Twin Delayed Deep Deterministic policy gradient algorithm (TD3) \cite{fujimoto2018addressing}. Second, the distributional mismatch between the actor and critic \cite{levine2020offline} in MBRL is explicitly addressed with Batch Constrained Q-learning (BCQ) \cite{fujimoto2019off} to improve training performance and stability. Third, the long-term feasibility of the policy is considered by extending the safe set \cite{rosolia2019sample,thananjeyan2020safety} to a higher-dimensional setting using deep unsupervised learning. The remainder of the paper is organized as follows. Sec. \ref{sec: environment and formulation} presents the simulation environment and the eco-driving problem formulation. Sec. \ref{sec: background} introduces the preliminaries of the mathematical concepts, and Sec. \ref{sec: proposed method} presents the main algorithm, SMORL. Sec. \ref{sec: implementation details} explains the detailed implementation of SMORL on the eco-driving problem, and Sec.
\ref{sec: results} shows the training details and benchmarks the performance. \section{Eco-driving for CAHEVs} \label{sec: environment and formulation} \subsection{Environment} As collecting real-world driving data is expensive and potentially unsafe, a model of the environment is developed for training and validation purposes. The environment model, named EcoSim, consists of a Vehicle Dynamics and Powertrain (VD\&PT) model and a microscopic traffic simulator. Fig. \ref{fig: environment} shows EcoSim and its interaction with the controller and the learning algorithm. The controller commands three control inputs, namely, the Internal Combustion Engine (ICE) torque, the electric motor torque and the mechanical brake torque. The component-level torques collectively determine the HEV powertrain dynamics, the longitudinal dynamics of the ego vehicle and its progress along the trip. As in \cite{asadi2010predictive}, it is assumed that the ego vehicle is equipped with Dedicated Short Range Communication (DSRC) sensors, and SPaTs from signalized intersections become available once they are within a 500 $m$ range. The DRL agent utilizes the SPaT from the upcoming traffic light while ignoring the SPaT from any other traffic light, regardless of availability. Specifically, the controller receives the distance to the upcoming traffic light, its current status and its SPaT program as part of the observation. Finally, a navigation application with Global Positioning System (GPS) is assumed to be available on the vehicle, such that the locations of the origin and destination, the remaining distance and the speed limits along the entire trip are available at every point during the trip. \begin{figure}[] \centering \includegraphics[width=\columnwidth]{Figures/model-based_framework.pdf} \caption{The Structure of The Environment Model} \label{fig: environment} \end{figure} \subsubsection{Vehicle and Powertrain Model} \label{sec: veh_dynamisc} A forward-looking dynamic powertrain model is developed for fuel economy evaluation and control strategy verification. In this work, a P0 mild-hybrid electric vehicle (mHEV) is considered, equipped with a 48V Belted Starter Generator (BSG) performing torque assist, regenerative braking and start-stop functions. The engine is modeled by low-frequency quasi-static nonlinear maps based on steady-state engine test bench data provided by the supplier. The map of instantaneous fuel consumption $\dot{m}_{\mathrm{fuel}}$ is a function of the engine angular velocity $\omega_{\mathrm{eng}}$ and the engine torque $T_{\mathrm{eng}}$, and the maps of the torque limits $T_{\mathrm{eng}}^{\mathrm{min}}$ and $T_{\mathrm{eng}}^{\mathrm{max}}$ are functions of the engine angular velocity $\omega_{\mathrm{eng}}$. The battery $SoC$ and voltage $V_{\mathrm{batt}}$ are governed by a zeroth-order equivalent circuit model: \begin{gather} I_t = \frac{V_{\mathrm{OC}}(SoC_t) - \sqrt{V_{\mathrm{OC}}^2(SoC_t) -4 R_0(SoC_t) P_{\mathrm{bsg},t}}}{2R_0(SoC_t)} \label{eq: I_batt},\\ SoC_{t+1} = SoC_t -\frac{\Delta t}{C_{\mathrm{nom}}} (I_t + I_{\mathrm{aux}}), \end{gather} where $t$ is the discretized time index, and $\Delta t$ is the time discretization, set to $1~s$ in this study. The power consumed by the auxiliaries is modeled by a calibrated constant current bias $I_{\mathrm{aux}}$. The cell open circuit voltage $V_{\mathrm{OC}}$ and internal resistance $R_0$ are maps of $SoC$ provided by a battery pack supplier.
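For illustration, the equivalent-circuit update above can be written in a few lines of NumPy; the lookup-table values below are purely illustrative placeholders, and $C_{\mathrm{nom}}$ and $I_{\mathrm{aux}}$ are assumed calibration constants.
\begin{verbatim}
import numpy as np

# Illustrative V_OC(SoC) and R_0(SoC) tables; the real values come
# from the battery pack supplier.
SOC_GRID = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
VOC_GRID = np.array([44.0, 46.0, 47.5, 49.0, 50.5])      # V (48V pack)
R0_GRID = np.array([0.060, 0.052, 0.050, 0.051, 0.055])  # Ohm

def battery_step(soc, p_bsg, dt=1.0, c_nom=28.0 * 3600.0, i_aux=5.0):
    # One-step zeroth-order equivalent-circuit update.
    # p_bsg: BSG electrical power [W], negative when regenerating.
    v_oc = np.interp(soc, SOC_GRID, VOC_GRID)
    r0 = np.interp(soc, SOC_GRID, R0_GRID)
    current = (v_oc - np.sqrt(v_oc**2 - 4.0 * r0 * p_bsg)) / (2.0 * r0)
    return soc - dt / c_nom * (current + i_aux)
\end{verbatim}
Note that a negative $P_{\mathrm{bsg}}$ (regeneration) yields a negative current and hence an increasing $SoC$.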
The vehicle dynamics model is based on the road-load equation: \begin{gather} \begin{aligned} {v}_{\mathrm{veh},t+1}= & v_{\mathrm{veh}, t} + \Delta t \biggl( \frac{T_{\mathrm{out},t}-T_{\mathrm{brk},t}}{MR_\mathrm{w}} - \dfrac{C_\mathrm{d}\rho_\mathrm{a} A_\mathrm{f}v_{\mathrm{veh},t}^2}{2M} \\ & - g \cos{\alpha}\, C_\mathrm{r}v_{\mathrm{veh},t} - g\sin{\alpha} \biggr) \label{eq: V_veh_state} \end{aligned} \end{gather} Here, the four terms inside the brackets on the right-hand side are associated with the forward propulsion force, the aerodynamic drag, the tire rolling resistance and the road grade, respectively. $T_{\mathrm{brk}}$ is the brake torque applied at the wheels, $C_{\mathrm{d}}$ is the aerodynamic drag coefficient, $\rho_\mathrm{a}$ is the air density, $A_{\mathrm{f}}$ is the effective aerodynamic frontal area, $C_\mathrm{r}$ is the rolling resistance coefficient, and $\alpha$ is the road grade. Besides the aforementioned models, which are directly associated with either the states or the objective of the eco-driving Optimal Control Problem (OCP) formulation, the BSG, torque converter and transmission are also modeled in this study. The BSG is modeled as a quasi-static efficiency map to compute the BSG torque $T_\mathrm{bsg}$ and power output $P_{\mathrm{bsg}}$. A torque converter model is developed to compute the losses during the traction and regeneration modes. The transmission model is based on a static gearbox, and its efficiency $\eta_{\mathrm{trans}}$ is scheduled as a nonlinear map of the gear number $n_\mathrm{g}$, the transmission input shaft torque $T_{\mathrm{trans}}$ and the transmission input speed $\omega_{\mathrm{trans}}$. The detailed mathematical models of these components can be found in \cite{zhu2021deep}. The forward vehicle model was calibrated and validated using experimental data from a chassis dynamometer. Vehicle velocity, battery $SoC$, gear number, engine speed and fuel consumption were used to evaluate the model against the experimental data. Fig. \ref{fig: veh_vel_soc_fuel_validation_FTP} shows sample results from the model verification over the FTP-75 regulatory drive cycle. The results indicate that the vehicle velocity and $SoC$ are accurately predicted by the model. The mismatches in the battery $SoC$ can be attributed to the assumptions made in the simplified battery model, such as modeling the electrical auxiliary loads as a constant current bias. Further, the final value of the fuel consumption estimated by the model over the FTP-75 drive cycle is within 4\% of the actual fuel consumption, which verifies that the model can be used for energy and fuel prediction over real-world routes. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Figures/FTP_V_veh_SOC_comp_SI.pdf} \caption{Validation of Vehicle Velocity, $SoC$ and Fuel Consumed over FTP Cycle.} \label{fig: veh_vel_soc_fuel_validation_FTP} \end{figure} \subsubsection{Traffic Model} \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Figures/DRL_Training_Map_rand_color.png} \caption{Map of Columbus, OH as the Traffic Environment for Training} \label{fig: map_of_columbus} \end{figure} A large-scale microscopic traffic simulator is developed in the open-source software Simulation of Urban Mobility (SUMO) \cite{SUMO2018} as part of the environment. To recreate realistic mixed urban and highway trips for training, the map of the city of Columbus, OH, US is downloaded from the online database OpenStreetMap \cite{OpenStreetMap}.
The map contains the length, shape, type and speed limit of the road segments and the detailed program of each traffic light at signalized intersections. Fig. \ref{fig: map_of_columbus} highlights the area that is covered in the study. In the shaded area, 10,000 random passenger car trips, each with a total distance randomly distributed between 5 $km$ and 10 $km$, are generated as the training set. Another 100 trips, indicated by the origins (red markers) and destinations (blue markers) in Fig. \ref{fig: map_of_columbus}, are generated following the same distribution as the testing set. In addition, the departure time of each trip follows a geometric distribution with success rate $p=0.01$. The variation among the trips used for training leads to a learned policy that is less subject to local minima and agnostic to specific driving conditions (better generalizability) \cite{heess2017emergence}. \subsection{Optimization Formulation} \label{sec: eco-driving formulation} In the eco-driving problem, the objective is to minimize the weighted sum of the fuel consumption and the travel time between two designated locations. The optimal control problem is mathematically formulated as follows: \begin{subequations}\label{eq:OCP_formulation} \begin{align} \min_{\left\lbrace u_t \right\rbrace_{t=1}^\infty}\: &\mathbb{E} \left[ \sum_{t=1}^\infty \left[\lambda\dot{m}_{\text{fuel},t} + (1-\lambda) \right] \Delta t \cdot \mathbb{I}\left[s_t < s_{\text{total}}\right] \right]\\ \text{where} \:&u_t = \left[T_{\text{eng},t}, T_{\text{bsg},t}, T_{\text{brk},t}\right]^T \\ \text{s.t.} \: & SoC_{t+1} = f_{\text{batt}}(v_{\text{veh},t}, SoC_t, u_t) \label{eq:battery dynamics} \\ & v_{\mathrm{veh},t+1} = f_{\text{veh}}(v_{\text{veh}, t}, SoC_t, u_t) \\ & T_{\mathrm{eng}}^{\min}(\omega_{\mathrm{eng},t}) \leq T_{\mathrm{eng},t} \leq T_{\mathrm{eng}}^{\max}(\omega_{\mathrm{eng},t}) \label{eq:S2_con_Teng} \\ & T_{\mathrm{bsg}}^{\min}(\omega_{\mathrm{bsg},t}) \leq T_{\mathrm{bsg},t} \leq T_{\mathrm{bsg}}^{\max}(\omega_{\mathrm{bsg},t}) \label{eq:S2_con_Tbsg} \\ & I^{\min} \leq I_t \leq I^{\max} \label{eq:S2_con_I}\\ & SoC^{\text{min}} \leq SoC_t \leq SoC^{\text{max}} \label{eq: soc_constraint}\\ & SoC_T \geq SoC^{\mathrm{T}} \label{eq: terminal_soc_constraint}\\ & 0 \leq v_{\mathrm{veh},t} \leq v_{\mathrm{lim},t} \label{eq: speed_limit}\\ & (t, s_t) \notin \mathcal{S}_{\mathrm{red}}. \label{eq: traffic_constraint} \end{align} \end{subequations} Here, $\dot{m}_{\mathrm{fuel},t}$ is the instantaneous fuel consumption, and $\lambda$ is a normalized weight on the fuel consumption. $\omega_{\mathrm{eng}}$ and $\omega_{\mathrm{bsg}}$ are the engine and BSG angular velocities, respectively; they are static functions of the vehicle speed $v_{\mathrm{veh}}$ and the gear number $n_\mathrm{g}$. $f_{\mathrm{batt}}$ and $f_{\mathrm{veh}}$ are the battery and vehicle dynamics, respectively, introduced in Sec. \ref{sec: veh_dynamisc}. Eqns. \eqref{eq:S2_con_Teng} to \eqref{eq:S2_con_I} are the constraints imposed by the powertrain components. Eqn. \eqref{eq: soc_constraint} and Eqn. \eqref{eq: terminal_soc_constraint} are the constraints on the instantaneous battery $SoC$ and the terminal $SoC$ for charge sustaining, respectively. Here, the subscript $T$ represents the time at which the vehicle reaches the destination. $SoC^\mathrm{min}$, $SoC^\mathrm{max}$ and $SoC^\mathrm{T}$ are commonly set to 30\%, 80\% and 50\% \cite{olin2019reducing,deshpande2021real}. Eqn.
\eqref{eq: speed_limit} and \eqref{eq: traffic_constraint} are the constraints imposed by the traffic conditions. The set $\mathcal{S}_{\mathrm{red}}$ is the set of time-position pairs for which the traffic light at the corresponding location is in the red phase \cite{zhu2021gpu}. The problem is formulated as an infinite horizon problem in which the stage cost becomes zero once the system reaches the goal set, i.e. the traveled distance $s_t$ is greater than or equal to the total distance of the trip $s_{\mathrm{total}}$ while the terminal $SoC_T$ is kept greater than or equal to $SoC^{\mathrm{T}}$. In addition, any time the vehicle violates the traffic light constraint, i.e. Eqn. \eqref{eq: traffic_constraint}, the trip is considered a failure and the goal set is not reached. To solve the aforementioned optimization formulation as an OCP, an MBRL algorithm is proposed, and the preliminaries of the algorithm are included in the next section. \section{Preliminaries on MBRL}\label{sec: background} A nonlinear, stochastic, time-invariant system is considered in this work: \begin{equation} \begin{aligned} x_{t+1} &= f(x_t,u_t,w_t)\\ x_t &\in \mathcal{X} \subseteq \mathbb{R}^n, \: t \in \mathbb{N}_{+}\\ u_t &\in \mathcal{U}(x_t) \subseteq \mathbb{R}^m, \: t \in \mathbb{N}_{+}\\ w_t &\in \mathcal{W} \subseteq \mathbb{R}^p, \: t \in \mathbb{N}_{+}. \label{eq: system dynamics} \end{aligned} \end{equation} Here, $x_t$, $u_t$ and $w_t$ are the state, control and uncertainty at time $t$. $\mathcal{X}$ and $\mathcal{U}$ are the feasible sets for the states and inputs, respectively. The uncertainties are assumed to be independent and identically distributed (i.i.d.). Let $\pi:\mathcal{X} \rightarrow \mathcal{U}$ be a feasible deterministic policy and $\Pi$ be the set of all feasible deterministic policies. The objective of the OCP is to reach the goal set $\mathcal{G}\subseteq \mathbb{R}^n$ while finding the optimal policy $\pi^*$ that minimizes the expectation of the discounted sum of costs: \begin{equation} \begin{gathered} \pi^* = \argmin_{\pi \in \Pi}\eta(\pi), \; \text{where} \\ \eta(\pi) = \mathbb{E}_{w_t}\left[ \sum_{t=0}^\infty \gamma^{t}c\left(x_t,u_t\right)\right], \\ \text{with } u_t = \pi(x_t). \end{gathered} \end{equation} Here, $\gamma$ is the discount factor, which prioritizes immediate costs and ensures that the sum over the infinite horizon remains finite. As in \cite{thananjeyan2020abc}, the following assumption is made. \begin{assumption}[Costs] The cost is zero for states inside the goal set $\mathcal{G}$ and positive for states outside, i.e. $\exists \epsilon > 0$ such that $c(x,u)>\epsilon\,\mathbb{I}_{\mathcal{G}^C}(x)$, where $\mathbb{I}$ is the indicator function and $\mathcal{G}^C$ is the complement of the goal set $\mathcal{G}$. \end{assumption} As in \cite{borrelli2017predictive,rosolia2019sample,thananjeyan2020abc}, the following definitions are given. \begin{definition}[Robust Control Invariant Set] A set $\mathcal{C}\subseteq\mathcal{X}$ is said to be a robust control invariant set for the system in Eqn. \eqref{eq: system dynamics} if, for all $x(t) \in \mathcal{C}$, there exists a $u(t)\in \mathcal{U}$ such that $f(x(t),u(t), w(t))\in \mathcal{C}$, for all $w(t)\in \mathcal{W}$ and $t\in \mathbb{N}_+$.
\end{definition} \begin{definition}[Robust Successor Set $\textit{Suc}(\mathcal{S})$] For a given set $\mathcal{S}$, its robust successor set $\text{Suc}(\mathcal{S})$ is defined as \begin{align} \begin{aligned} \text{Suc}(\mathcal{S})=&\left\lbrace x'\in \mathbb{R}^n: \exists x \in \mathcal{S}, \exists w \in \mathcal{W} \right. \\ & \quad \quad \quad \quad \: \left. \text{such that } x' = f(x, \pi(x), w) \right\rbrace. \end{aligned} \end{align} \end{definition} \begin{definition}[Robust Reachable Set $\mathcal{R}_N(x_0^j)$] For a given initial state $x_0^j$, the $N$-step robust reachable set $\mathcal{R}_N(x_0^j)$ of the system defined in Eqn. \eqref{eq: system dynamics} under a closed-loop policy $\pi$ at iteration $j$ is defined recursively as \begin{equation} \begin{aligned} \mathcal{R}_{i+1}^{\pi}(x_0^j) &= \text{Suc}(\mathcal{R}_i^{\pi}(x_0^j)) \cap \mathcal{X},\\ \mathcal{R}_0^\pi(x_0^j) &= x_0^j, \end{aligned} \end{equation} where $i = 0, 1, \dots, N-1$. \end{definition} \begin{definition}[Safe Set] The safe set $\mathcal{SS}^j$ contains the full evolution of the system at iteration $j$, \begin{align} \mathcal{SS}^j=\left\lbrace \bigcup_{k=0}^\infty \mathcal{R}_k^{\pi} (x_0^j) \bigcup \mathcal{G}\right\rbrace. \label{eq: safe set} \end{align} \end{definition} As shown in \cite{rosolia2019sample}, the exact form of the safe set in Eqn. \eqref{eq: safe set} is a robust control invariant set. As calculating its exact form is intractable, especially for high-dimensional nonlinear systems, it is, in practice, approximated as \begin{align} \widetilde{\mathcal{SS}}^j = \bigcup_{k\in \mathcal{M}^j} x^k, \end{align} where $x^k=\lbrace x_t^k:t\in \mathbb{N}_+\rbrace$ is the trajectory at iteration $k$, and $\mathcal{M}^j = \lbrace k\in [0,j): \lim_{t\rightarrow \infty} x_t^k\in\mathcal{G}\rbrace$ is the set of indices of the trajectories that were successfully driven to the goal. As the safe set in this work is constantly evolving during training, the iteration index $j$ is omitted in the remainder of this work. For any policy $\pi$, the value function $V^\pi:\mathcal{X}\rightarrow \mathbb{R}$, the Q function $Q^\pi:\mathcal{X}\times \mathcal{U} \rightarrow \mathbb{R}$ and the advantage function $A^\pi:\mathcal{X}\times \mathcal{U} \rightarrow \mathbb{R}$ are defined as follows: \begin{align} V^{\pi}(x_t) &= \begin{cases} \mathbb{E}_{\pi}\left[\displaystyle{\sum_{i=t}^\infty}\gamma^{i-t}c\left(x_i,u_i\right)|x_t\right], & x_t \in \mathcal{SS}\\ \infty, & \text{otherwise.} \end{cases}\\ Q^{\pi}(x_t,u_t) &= \begin{cases} \mathbb{E}_{\pi}\left[\displaystyle{\sum_{i=t}^\infty}\gamma^{i-t}c\left(x_i,u_i\right)|x_t,u_t\right], & \begin{aligned}&x_t \in \mathcal{SS}\\ &u_t \in \mathcal{U} \end{aligned} \\ \infty, & \text{otherwise.} \end{cases}\\ A^{\pi}(x_t,u_t) &=Q^{\pi}(x_t,u_t) - V^{\pi}(x_t). \end{align} \section{Proposed Method} \label{sec: proposed method} In this work, an off-policy model-based deep reinforcement learning algorithm with an approximated safe set is proposed.
At any given time $t$ during policy execution, the following trajectory optimization problem with a receding horizon of $H$ steps is solved: \begin{equation} \begin{aligned} \min_{\left\lbrace \tilde{u}_k \right\rbrace_{k=t}^{t+H-1}} & \mathbb{E}\left[ \sum_{k=t}^{t+H-1} \gamma^{k-t}c(\tilde{x}_k,\tilde{u}_k) + \gamma^H V^{\pi} (\tilde{x}_{t+H}) \right] \\ \text{s.t.} \:& \tilde{x}_{k+1} = f(\tilde{x}_k,\tilde{u}_k,w_k)\\ & \tilde{x}_t = x_t \\ & \tilde{x}_k \in \mathcal{X}, \: k = t, \dots, t+H-1 \\ & \tilde{x}_{t+H} \in \widetilde{\mathcal{SS}}\\ & \tilde{u}_k \in \mathcal{U}, \: k = t, \dots, t+H-1, \end{aligned} \label{eq: traj optim val func} \end{equation} where $\tilde{x}$ and $\tilde{u}$ denote the states and control actions of the predicted trajectory. Compared to the formulation in \cite{lowrey2018plan}, the states $\tilde{x}_k$ and actions $\tilde{u}_k$ are explicitly constrained to be within the feasible region over the receding horizon, and the terminal state $\tilde{x}_{t+H}$ is constrained to be within the safe set $\widetilde{\mathcal{SS}}$. In the presence of uncertainties in the dynamic system, solving the exact form of the above stochastic optimization problem can be challenging. In \cite{chua2018deep, thananjeyan2020safety, thananjeyan2020abc}, the Cross Entropy Method (CEM) \cite{chua2018deep} is used to solve the problem with unknown dynamics as a chance-constrained problem. In Section \ref{sec: eco-driving formulation}, techniques to simplify and solve this optimization for the eco-driving problem will be discussed. As most of the model-based deep reinforcement learning methods with trajectory optimization in the literature learn the value function as the terminal cost for the MPC \cite{chua2018deep, lowrey2018plan, thananjeyan2020abc, karnchanachari2020practical}, the learning algorithm becomes on-policy. While the trajectory optimization increases the sample efficiency and helps exploration \cite{lowrey2018plan}, solving the trajectory optimization problem makes each data sample more computationally expensive. As a result, the training wall time is not necessarily reduced. In this work, off-policy Q-learning \cite{watkins1989learning} is instead adopted. To use the learned Q function in the trajectory optimization, the following problem needs to be solved: \begin{align} \min_{\left\lbrace \tilde{u}_k \right\rbrace_{k=t}^{t+H}} & \mathbb{E}\left[ \sum_{k=t}^{t+H-1} \gamma^{k-t}c(\tilde{x}_k,\tilde{u}_k) + \gamma^H Q^{\pi}_\theta (\tilde{x}_{t+H},\tilde{u}_{t+H}) \right],\label{eq: traj optim q func} \end{align} where $Q_\theta^{\pi}$ is the approximated Q function parametrized by $\theta$. Compared to solving Eqn. \eqref{eq: traj optim val func}, solving Eqn. \eqref{eq: traj optim q func} requires one extra computational step: \begin{align} V^\pi(\tilde{x}_{t+H}) = \min_{\tilde{u}_{t+H}} Q_\theta^{\pi} (\tilde{x}_{t+H},\tilde{u}_{t+H}). \label{eq: q to v} \end{align} Depending on the dimension of the problem, solving Eqn. \eqref{eq: q to v} can be computationally intractable, especially for online control. Several algorithms, e.g. DDPG \cite{lillicrap2015continuous}, TD3 \cite{fujimoto2018addressing} and the dueling network \cite{wang2016dueling}, have been proposed to obtain the value function from the Q function. In this work, the off-policy actor-critic algorithm TD3 is used, since it reduces overestimation and is shown to be more stable than DDPG.
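In practice, once the actor is trained, Eqn. \eqref{eq: q to v} reduces to a single forward pass, $V^\pi(x) \approx Q^\pi_\theta\left(x, \pi_\phi(x)\right)$. A minimal PyTorch sketch follows; the network sizes are illustrative placeholders.
\begin{verbatim}
import torch
import torch.nn as nn

def mlp(sizes):
    # Fully connected network with ReLU hidden activations.
    layers = []
    for i in range(len(sizes) - 2):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU()]
    layers.append(nn.Linear(sizes[-2], sizes[-1]))
    return nn.Sequential(*layers)

state_dim, action_dim = 88, 3
critic = mlp([state_dim + action_dim, 200, 100, 50, 1])  # Q_theta
actor = mlp([state_dim, 200, 100, 50, action_dim])       # pi_phi

def value(x):
    # V(x) = min_u Q(x, u), approximated by evaluating Q at the
    # actor's action, avoiding an inner optimization at every query.
    with torch.no_grad():
        return critic(torch.cat([x, actor(x)], dim=-1))
\end{verbatim}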
Specifically, with the sample $(x_j, u_j, c_j, x'_j)$ from the experience replay buffer $\mathcal{D}$ \cite{lin1992self}, the target for the Q function during training is constructed as follows: \begin{gather} y_j = c_j + \gamma \max_{i=1,2}Q_{\theta'_i}(x'_j,u'_j),\\ u'_j = \pi_{\phi'}(x'_j) \label{eq: td3_actor}. \end{gather} Here, $Q_{\theta_1}$ and $Q_{\theta_2}$ are two independently trained critic networks, $Q_{\theta'_1}$ and $Q_{\theta'_2}$ are the corresponding target networks, and $\pi_{\phi}$ and $\pi_{\phi'}$ are the actor network and its target network, respectively. Note that, as the objective here is cost minimization, the pessimistic target takes the maximum over the two critics. The critics are then updated following \begin{equation} \begin{gathered} \theta_i \leftarrow \theta_i - \alpha \nabla_{\theta_i} \left[\dfrac{1}{N}\sum_{j=1}^N\left(y_j-Q_{\theta_i}(x_j,u_j)\right)^2 \right], \: i = 1, 2, \label{eq: critic update} \\ \left\lbrace(x_j, u_j, c_j, x'_j) \sim \mathcal{D} \right\rbrace_{j=1}^N, \end{gathered} \end{equation} where $\alpha$ is the learning rate and $N$ is the batch size. In the off-policy learning algorithm used here, the behavior policy is the trajectory optimization, in which the state and action constraints within the receding horizon are satisfied thanks to the constrained optimization formulation. However, the trained actor $\pi_\phi$ makes decisions solely based on the Q function. The resulting mismatch between the distribution of state-action pairs induced by the actor $\pi_\phi$ and that collected by the behavior policy causes extrapolation errors that lead to unstable training \cite{levine2020offline}. In the eco-driving problem, for instance, the trajectory optimization ensures that the power is solely generated by the ICE when the battery $SoC$ is at the lower limit $SoC^{\mathrm{min}}$. Accordingly, no state-action pair combining low $SoC$ and high motor torque can be collected, which leads to extrapolation in the Q function near that region. The error can eventually cause unstable training or inferior performance. To address the extrapolation error induced by the mismatch in distributions, Batch Constrained Q-learning (BCQ) \cite{fujimoto2019off}, originally proposed for offline reinforcement learning, is used. Here, a generative model $G_\omega(x)$, specifically a Variational Autoencoder (VAE) \cite{kingma2013auto}, is trained to resemble the state-action distribution in the experience replay buffer. The background on the VAE and its training objective are covered in Appendix \ref{appendix: vae}. Note that samples from the generative model, $u' \sim G_\omega(x')$, should ideally match the distribution collected by the behavior policy. Instead of selecting the action following Eqn. \eqref{eq: td3_actor}, the action is now selected as \begin{equation} \begin{gathered} u'_j = \hat{u}_{j,k^*}, \quad k^* = \argmin_{k=1,\dots,n} \left[\max_{i=1,2} Q_{\theta_i'}(x'_j, \hat{u}_{j,k})\right],\\ \hat{u}_{j,k} = u_{j,k} + \xi_\phi(x'_j, u_{j,k}, \Phi), \quad \left\lbrace u_{j,k} \sim G_\omega(x'_j) \right\rbrace_{k=1}^n. \end{gathered} \end{equation} Here, $n$ is a hyperparameter specifying the number of actions sampled from the generative model. The action $u'_j$ used in the target value for the Q function is the best among the $n$ sampled candidates. Note that there is no longer an actor network mapping from state to action. Instead, to ensure that the agent can learn on top of the actions sampled from the generative model imitating the behavior policy in the experience buffer, a perturbation network $\xi_\phi$, whose output is clipped to $[-\Phi,\Phi]$, is trained.
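A minimal PyTorch sketch of this target-action selection is given below; the decoder, perturbation network and target critics are assumed callables with the illustrative signatures stated in the comments.
\begin{verbatim}
import torch

def select_target_actions(x_next, decode, xi, q1_t, q2_t,
                          n=10, phi=30.0):
    # Assumed signatures (illustrative):
    #   decode(x, n)           -> (B, n, u_dim) VAE action samples
    #   xi(x, u)               -> (B, n, u_dim) raw perturbations
    #   q1_t(x, u), q2_t(x, u) -> (B, n) target-critic costs
    B = x_next.shape[0]
    u = decode(x_next, n)
    x_rep = x_next.unsqueeze(1).expand(-1, n, -1)
    u_hat = u + torch.clamp(xi(x_rep, u), -phi, phi)  # clip to [-Phi, Phi]
    q = torch.max(q1_t(x_rep, u_hat), q2_t(x_rep, u_hat))  # pessimistic
    k_star = q.argmin(dim=1)           # best candidate per state (cost)
    return u_hat[torch.arange(B), k_star]
\end{verbatim}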
The perturbation network $\xi_{\phi}$ is updated via the deterministic policy gradient theorem \cite{silver2014deterministic} as \begin{align} \phi \leftarrow \phi - \alpha \nabla_\phi\left[\dfrac{1}{N} \sum_{j=1}^N Q_{\theta_1}(x_j, u_j + \xi_\phi(x_j, u_j, \Phi))\right]. \label{eq: perturb update} \end{align} To reduce the accumulating error from bootstrapping, all the target networks are updated at a slower rate as \begin{align} \theta'_i &\leftarrow \tau \theta_i + (1-\tau) \theta'_i, \quad i = 1, 2, \label{eq: critic target update}\\ \phi' &\leftarrow \tau \phi + (1-\tau) \phi', \end{align} where $\tau$ is a constant on the order of $10^{-3}$ to $10^{-1}$. In Eqn. \eqref{eq: traj optim val func}, the terminal state of the rollout is constrained to lie within the safe set. Since the safe set approximates a robust control invariant set, this constraint ensures that there exist policies that can safely drive the terminal state to the goal set. In \cite{thananjeyan2020safety, thananjeyan2020abc}, the safe set is approximated by kernel density estimation, which typically works well only for low-dimensional problems. Here, we extend the approximation to the high-dimensional setting by using deep generative models. Following the notion in \cite{thananjeyan2020safety}, the safe set is approximated as \begin{align} \widetilde{\mathcal{SS}} = \lbrace x:p_\psi(x) \geq \delta \rbrace, \end{align} where $p_\psi:\mathcal{X}\rightarrow[0,1]$, parametrized by $\psi$, is the probability that a state is inside the safe set, and the constant $\delta$ regulates how exploratory the controller is. Note that the generative model used for the safe set approximation needs to model the probability explicitly but can be slow in sampling, whereas the generative model resembling the distribution of state-action pairs in the experience replay needs to be fast in sampling, while an explicit probability is not required. Due to these considerations, an autoregressive model with Long Short-Term Memory (LSTM) \cite{karpathy2015visualizing} is used. The description of the model as well as its training objective is included in Appendix \ref{appendix: autoregressive lstm}. In Sec. \ref{sec: implementation details}, the use of the autoregressive model in the eco-driving application is motivated by the fact that the dimension of the problem becomes large once the future conditions are sampled discretely. In summary, Safe Model-based Off-policy Reinforcement Learning (SMORL) is proposed. The algorithm builds on SAVED \cite{thananjeyan2020safety} and extends it to an off-policy algorithm with the methods proposed in BCQ. The detailed step-by-step algorithm is given in Algorithm \ref{algorithm: smorl}. \begin{algorithm*}[] \SetAlgoLined Initialize the Q-networks $Q_{\theta_1}$, $Q_{\theta_2}$ independently, and duplicate the target networks $Q_{\theta'_1}$, $Q_{\theta'_2}$. \\ Initialize the perturbation network $\xi_\phi$, its target network $\xi_{\phi'}$ and the VAE $G_\omega=\left\lbrace E_{\omega_1}, D_{\omega_2} \right\rbrace$.\\ Initialize the experience replay buffer $\mathcal{D}$.\\ Collect $N_0$ successfully executed trajectories with a baseline controller and initialize the safe set $\widetilde{\mathcal{SS}}$.\\ \For{$n_{iter}\in\{1,\dots,N_{iter}\}$}{ \While{$j^{th}$ trajectory NOT finished}{ Select control action $u_t$ by solving the trajectory optimization in Eqn.
\eqref{eq: traj optim val func}.\\ Sample a mini-batch of $N$ transitions $(x, u, c, x')$ from $\mathcal{D}$.\\ For each transition, sample $n$ actions $u'_j$ from $G_\omega(x')$ and $n$ perturbations from $\xi_\phi(x',u',\Phi)$. \\ Update the critic networks $Q_{\theta_1}$, $Q_{\theta_2}$ following Eqn. \eqref{eq: critic update} and the target networks $Q_{\theta'_1}$, $Q_{\theta'_2}$ following Eqn. \eqref{eq: critic target update}. \\ Update the perturbation network $\xi_\phi$ following Eqn. \eqref{eq: perturb update}. \\ Update the VAE $G_\omega$ by maximizing Eqn. \eqref{eq: vae update}. } \If{$x_T\in\mathcal{G}$}{ Push the trajectory $\left\lbrace (x_t,u_t,c_t,x_{t+1}) \right\rbrace_{t=1}^T$ to $\mathcal{D}$.\\ Update the safe set $\widetilde{\mathcal{SS}}$ with mini-batches sampled from $\mathcal{D}$ following Eqn. \eqref{eq: autoregressive update}. } } \caption{Safe Model-based Off-policy Reinforcement Learning (SMORL)} \label{algorithm: smorl} \end{algorithm*} \section{Implementation Details} \label{sec: implementation details} \subsection{Trajectory Optimization} Specific to the eco-driving problem, the state $x_t$ is defined as an 88-dimensional vector. A description of the states is given in Tab. \ref{tab: state_action_space}. Here, the first seven elements of the state vector are the battery $SoC$, the vehicle speed $v_{\mathrm{veh}}$, the current speed limit $v_{\mathrm{lim}}$, the upcoming speed limit $v'_{\mathrm{lim}}$, the distance to the upcoming traffic light $d_{\mathrm{tfc}}$, the distance to the next speed-limit change $d'_{\mathrm{lim}}$ and the total remaining distance $d_{\mathrm{rem}}$. The remaining 81 elements are the sampled status of the upcoming traffic light over the next 80 seconds, $x_{\mathrm{tfc}}$. For example, if the upcoming traffic light has 20 seconds remaining in the current red phase and will remain green for the rest of the 80 seconds, the first 21 elements of the sampled traffic light status are 0, and the rest are set to 1. Compared to the manually extracted feature representation in \cite{zhu2021deep}, the sampled representation reduces the discontinuity and results in better performance. \begin{table}[] \centering \caption{The State and Action Spaces of the Eco-driving Problem} \label{tab: state_action_space} \begin{tabular}{c|c|p{5cm}} & Variable & \multicolumn{1}{c}{Description}\\ [3pt] \hline \multirow{8}{*}{$\mathcal{X}$} & $SoC \in \mathbb{R}$ & Battery $SoC$ \\[3pt] & $v_{\text{veh}}\in \mathbb{R}$ & Vehicle velocity\\[3pt] & $v_{\text{lim}}\in \mathbb{R}$ & Speed limit at the current road segment\\[3pt] & $v'_{\text{lim}}\in \mathbb{R}$ & Upcoming speed limit\\[3pt] & $d_{\text{tfc}}\in \mathbb{R}$ & Distance to the upcoming traffic light\\[3pt] & $d'_{\text{lim}}\in \mathbb{R}$ & Distance to the road segment at which the speed limit changes\\[3pt] & $d_{\text{rem}}\in \mathbb{R}$ & Remaining distance of the trip\\[3pt] & $x_{\text{tfc}} \in \left\lbrace 0, 1 \right\rbrace^{81}$ & Sampled status of the upcoming traffic light\\[3pt] \hline \multirow{3}{*}{$\mathcal{U}$} & $T_{\text{eng}}\in \mathbb{R}$ & Engine torque\\[3pt] & $T_{\text{bsg}}\in \mathbb{R}$ & Motor torque \\[3pt] & $T_{\text{brk}}\in \mathbb{R}$ & Equivalent brake torque \end{tabular} \end{table} As the vehicle considered in this study is assumed equipped with connected features, e.g.
advanced mapping and V2I connectivity, and surrounding vehicles are not included in the study, it is assumed that the ego vehicle can deterministically predict the driving conditions within the receding horizon. Sun et al. \cite{sun2020optimal} suggest that, by formulating the problem as a chance-constrained or a distributionally robust optimization problem, uncertainties in SPaT can be considered without additional computational load. In Eqn. \eqref{eq:OCP_formulation}, the receding horizon $H$ is in the time domain. While it is easier to incorporate time-based information, such as the SPaT received from V2I communication, in the time domain, an iterative dynamic look-ahead process is required to process any distance-based route feature, such as speed limits, grade, and traffic light and stop sign locations. For example, the controller requires the speed limits as constraints to generate the speed trajectory, while the speed limits themselves change based on the distance traveled along that trajectory. In this study, the value and Q functions are learned in the time domain for ease of integration with the time-based traffic simulator, while the trajectory optimization is conducted in the spatial domain. As SPaTs and speed limits do not depend on the decisions made by the ego vehicle in the spatial domain, they are incorporated into the optimization problem as constraints, and only the vehicle speed, the battery $SoC$ and the time at which the vehicle reaches a given distance are considered as the state in the trajectory optimization. Define the optimization state $z\in \mathcal{Z} \subseteq \mathbb{R}^3$ as \begin{align} z_s=\begin{bmatrix}v_{\mathrm{veh},s}, SoC_s, t_s\end{bmatrix}^T. \end{align} Here, $s$ is the index in the discretized spatial domain with $\Delta s = 10~m$, and the dynamics of $z$ in the time and spatial domains are related by \begin{align} \dfrac{\Delta z}{\Delta s} = \dfrac{\Delta z}{\Delta t}\dfrac{\Delta t}{\Delta s} = \dfrac{\Delta z}{\Delta t}\dfrac{1}{v_{\mathrm{veh}}}. \end{align} As a result, the trajectory optimization is formulated as \begin{subequations} \begin{align} \min_{\left\lbrace \tilde{u}_k \right\rbrace_{k=s_t}^{s_t+H_s-1}} \: &\sum_{k=s_t}^{s_t+H_s-1} \gamma^{t_k} c(\tilde{z}_k,\tilde{u}_k) + \gamma^{t_{H_s}} V^\pi(\mathcal{G}(x_t,z_{H_s}))\\ \text{where:} \: & \notag \\ c(\tilde{z}_k, \tilde{u}_k) = &\left( \lambda \dot{m}_{\mathrm{fuel},k} + (1-\lambda) \right) \dfrac{\Delta s}{v_{\mathrm{veh},k}} \cdot\mathbb{I}\left[s_k < s_{\mathrm{total}}\right]\\ \text{s.t.} \: & SoC_{k+1} = f_{\mathrm{batt},s}\left(\tilde{z}_k, \tilde{u}_k\right) \\ & v_{\mathrm{veh},k+1} = f_{\mathrm{veh},s}\left(\tilde{z}_k, \tilde{u}_k\right)\\ & T_{\mathrm{eng}}^{\min}(\omega_{\mathrm{eng},k}) \leq T_{\mathrm{eng},k} \leq T_{\mathrm{eng}}^{\max}(\omega_{\mathrm{eng},k}) \\ & T_{\mathrm{bsg}}^{\min}(\omega_{\mathrm{bsg},k}) \leq T_{\mathrm{bsg},k} \leq T_{\mathrm{bsg}}^{\max}(\omega_{\mathrm{bsg},k}) \\ & I^{\min} \leq I_k \leq I^{\max} \\ & SoC^{\text{min}} \leq SoC_k \leq SoC^{\text{max}}\\ & 0 \leq v_{\mathrm{veh},k} \leq v_{\mathrm{lim},k}\\ & (t_k, s_k) \notin \mathcal{S}_{\mathrm{red}}\\ & \mathcal{G}(x_t, z_{H_s}) \in \widetilde{\mathcal{SS}}. \end{align} \end{subequations} Here, $s_t$ is the spatial index corresponding to the distance the ego vehicle has traveled at time $t$, and $H_s=20$ is the number of prediction steps in the spatial domain, making the total prediction horizon 200 $m$.
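Concretely, the change of independent variable amounts to rescaling each time-domain update by $\Delta t = \Delta s / v_{\mathrm{veh}}$; a minimal sketch, with \texttt{f\_time} an assumed time-domain update of the vehicle and battery dynamics, is:
\begin{verbatim}
def spatial_step(z, u, f_time, ds=10.0, v_min=0.1):
    # z = (v_veh, soc, t): optimization state in the spatial domain.
    # f_time((v, soc), u, dt) -> (v_next, soc_next) is an assumed
    # time-domain update of the vehicle and battery dynamics.
    v_veh, soc, t = z
    dt = ds / max(v_veh, v_min)  # guard against standstill
    v_next, soc_next = f_time((v_veh, soc), u, dt)
    return (v_next, soc_next, t + dt)
\end{verbatim}
The remaining symbols of the formulation are defined next.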
$\mathcal{G}: \mathcal{X} \times \mathcal{Z} \rightarrow \mathcal{X}$ is the function that takes the full state $x_t$ and the terminal optimization state $z_{H_s}$ and determines the predicted terminal full state $\tilde{x}_{t+t_{H_s}}$. For example, suppose there are 15 seconds left in the current green phase and $t_{H_s}$ in the optimization state is 10 seconds, i.e. it takes 10 seconds for the ego vehicle to travel the next 200 $m$; then there will be 5 seconds left in the current green phase at the end of the prediction horizon. The trajectory optimization problem is solved by Deterministic Dynamic Programming (DDP) \cite{sundstrom2009generic}. The optimal deterministic policy $\mu_k^*: \mathcal{Z} \rightarrow \mathcal{U}$, $k=1, 2, \dots, H_{s}-1$, along with the optimal cost-to-go function $\mathcal{J}_k: \mathcal{Z} \rightarrow \mathbb{R}$, $k=1, 2, \dots, H_s$, can be calculated through the backward recursion \begin{subequations}\label{eq: dp update} \begin{gather} \mathcal{J}_{H_s}(z) = V^\pi\left(\mathcal{G}(x_t,z)\right) + \mathcal{P}_N(z) \label{eq: dp terminal cost}, \\ \mathcal{F}_k(z,u) = c(z,u) + \mathcal{P}_k(z) + \mathcal{J}_{k+1}(f_k(z,u)),\\ \mu_k^*=\argmin_{\mu_k} \mathcal{F}_k(z, \mu_k(z)) \label{eq: dp optimal policy},\\ \mathcal{J}_{k}(z) = \mathcal{F}_k(z, \mu^*_k(z)) \label{eq: dp optimal value}. \end{gather} \end{subequations} Here, $\mathcal{F}: \mathcal{Z}\times \mathcal{U}\rightarrow \mathbb{R}$ is the cost-to-go associated with taking a given immediate action and following the optimal policy thereafter. $\mathcal{P}_k:\mathcal{Z}\rightarrow\mathbb{R}$ and $\mathcal{P}_N:\mathcal{Z}\rightarrow\mathbb{R}$ are penalty functions introduced to ensure that no constraint is violated in the predicted trajectory. Solving Eqn. \eqref{eq: dp update} is computationally intensive yet highly parallelizable. Considering that onboard GPUs are readily available nowadays on self-driving vehicles, the real-time capable CUDA-based Parallel DDP (PDDP) solver from \cite{zhu2021gpu} is used in this work. In cases where the stochasticity within the prediction horizon cannot be ignored, other gradient-free optimization methods, such as CEM or the random shooting method \cite{nagabandi2018neural}, can be used as the trajectory optimizer. \subsection{Q Learning} Fig. \ref{fig: value_graph} shows the architecture of the neural networks associated with the Q-learning. Upon receiving the state vector, the sampled traffic light status $x_{\mathrm{tfc}}$ is fed to a pre-trained autoencoder, a Multilayer Perceptron (MLP) of size $(81, 100, 5, 100, 81)$, for dimensionality reduction. The remaining states, along with the actions, are concatenated with the latent states from the encoder and subsequently fed into another MLP of size $(200, 100, 50)$ to output the Q function for the critic and the perturbation for the actor. The critic and actor do not share parameters in this work. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Figures/value_graph.pdf} \caption{The Network Architecture of the Value and Q Functions.} \label{fig: value_graph} \end{figure} To accelerate the training and improve generalizability, the state of the vehicle is randomized every 50 steps in simulation. When the domain randomization occurs, the battery $SoC$ and the vehicle velocity $v_{\mathrm{veh},t}$ are sampled from the uniform distributions $\text{Uniform}(SoC^{\mathrm{min}}, SoC^{\mathrm{max}})$ and $\text{Uniform}(0, v_{\mathrm{lim},t})$, respectively.
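A minimal sketch of this resampling step follows; the function and variable names are illustrative, and the bounds follow Eqn. \eqref{eq: soc_constraint} and the current speed limit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def randomize_vehicle_state(v_lim, soc_min=0.30, soc_max=0.80):
    # Resample battery SoC and vehicle speed; applied every 50
    # simulation steps during training.
    soc = rng.uniform(soc_min, soc_max)
    v_veh = rng.uniform(0.0, v_lim)
    return soc, v_veh
\end{verbatim}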
To guarantee feasibility during the trip, the domain randomization is disabled within $200~m$ of signalized intersections and within $1000~m$ of the final destination. \subsection{Safe Set Approximation} In the eco-driving problem, two types of constraints can induce feasibility issues, namely, the battery terminal $SoC$ constraint and the constraints imposed by traffic rules at signalized intersections. For the first case, the goal set is not reached when the vehicle is near the destination and can no longer charge the battery back to $SoC^\mathrm{T}$ in the remaining distance. For the second case, the trip is considered failed when the vehicle breaks the traffic rules at signalized intersections; infeasibility occurs when the vehicle speed is too high and there is not enough distance to brake to a stop in front of a red traffic light or a stop sign. A conservative low-speed controller that only uses the ICE is used to collect the initial data for the experience replay buffer. During training, only samples from the trips that reach the goal set without violating any constraints are added to the experience replay buffer. In the eco-driving problem, the sampled traffic light status $x_{\mathrm{tfc}}$ is binary, while the other state variables are continuous. As the PDDP solver also discretizes the continuous state space, we consider the loss of accuracy under the same discretization acceptable. The discretized states, in one-hot form in each dimension, are fed sequentially into an LSTM network with 50 units, as shown in Fig. \ref{fig: autoregressive_lstm}. The outputs of the LSTM are then masked according to the number of categorical classes in each dimension. Finally, the softmax operator ensures that the outputs form a proper conditional probability distribution. As an alternative to the LSTM, Causal 1D Convolutional Neural Networks (Causal Conv1D) \cite{oord2016wavenet} were also implemented as the network for the autoregressive model. The key difference is that the states in all dimensions can be fed into Causal Conv1D in parallel, whereas each dimension needs to be fed into the LSTM sequentially. For applications with long sequences, Causal Conv1D can be more efficient and accurate \cite{bai2018empirical}. For this specific problem, Causal Conv1D showed no noticeable advantage over the LSTM in either accuracy or inference speed. As a result, the LSTM is chosen, as it has fewer hyperparameters. As the receding horizon ($200~m$) in this study is longer than the critical braking distance \cite{zhu2021deep}, the vehicle will never violate any constraints imposed by signalized intersections. Nevertheless, using the safe set to constrain the terminal state is still essential, for the following reason. At the last step of the receding horizon, the value function $V^\pi(\tilde{x}_{t+t_{H_s}})$ needs to be evaluated numerically for the trajectory optimization. Since only the data from safely executed trips are added to the buffer and there is no penalty mechanism for constraint violations, the estimate of the critic network is valid only within the safe set and is subject to extrapolation error outside of it. Although the long receding horizon ensures the feasibility of the actual trajectory regardless of the use of a safe set, the training is subject to instability and the learned performance can significantly deteriorate without the safe set constraint. This effect is shown in Fig. \ref{fig: safe_set_effect}.
Here, the two subplots on top show the optimized trajectories with and without the use of a safe set, respectively. The three curves in each plot are the trajectories from the optimizer at three consecutive seconds. During the first two seconds, the vehicle is more than $200~m$ away from the traffic light, and thus the constraint from Eqn. \eqref{eq: traffic_constraint} is not considered in the trajectory optimization. The subplots on the bottom show the safety status of the terminal state, in the dimensions of the vehicle velocity and the time at which the vehicle reaches the end of the receding horizon, before the signalized intersection appears in the receding horizon. Here, green means the state is considered safe, i.e. $p_\psi(x) \geq \delta$, and red otherwise. Although the actual trajectories, with or without a safe set, can slow down in time to avoid running the red light thanks to the sufficiently long receding horizon, the terminal state without the safe set constraint has a speed of $20~m/s$ with only $20~m$ left before a red light, which is unsafe. Meanwhile, comparing the two bottom subplots, the terminal velocity constrained by the safe set progressively decreases as the vehicle approaches the intersection in the red phase. In addition, given that the speed limit here is set to $v_\mathrm{lim}=22~m/s$, any state with a velocity higher than $22~m/s$ is considered unsafe. It can be noticed that the red region in the top-right corner is incorrectly considered unsafe (a false positive). This is because, by optimality, the agent rarely crosses an intersection in the green phase at low speed; therefore, these false-positive regions do not affect the performance. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Figures/safe_set_effect.pdf} \caption{The Effect of the Safe Set on Trajectory Optimization.} \label{fig: safe_set_effect} \end{figure} As a summary of the implementation details, the hyperparameters are listed in Tab. \ref{tab: hyperparameters}. \begin{table}[] \caption{Hyperparameters of the Q Learning and Safe Set Estimation} \label{tab: hyperparameters} \centering \begin{tabular}{l|l} Parameter & Value \\ \hline Weighting factor between fuel and time, $\lambda$ & 0.45 \\ Discount factor, $\gamma$ & 0.995 \\ Optimizer & Adam \\ Learning rate, $\alpha$ & 1e-4 \\ Experience buffer size & 2e5 \\ Batch size, $N$ & 256 \\ Target network update rate, $\tau$ & 1e-3 \\ Exploration rate, $\epsilon$ & 0.2 \\ Perturbation range in physical units, $\Phi$ & 30 Nm \\ Actions sampled from the VAE decoder, $n$ & 10 \\ Steps per domain randomization & 50 \\ LSTM size for the safe set & 50 \\ \end{tabular} \end{table} \section{Results}\label{sec: results} Both the PDDP optimizer and the neural network training require a GPU. To obtain the results shown below, the training took 24 hours on a node with an NVIDIA Volta V100 GPU and a $2.4~GHz$ Intel CPU at the Ohio Supercomputer Center \cite{OhioSupercomputerCenter1987}. As domain randomization is used during training, 5 trips out of the 1000 randomly generated trips are repeatedly selected every 25 training episodes to evaluate the performance of the controller and to quantify the progress of the training. During the evaluation, domain randomization and $\epsilon$-greedy exploration are both deactivated. Fig. \ref{fig: learning_curve} shows the evolution of the total costs, fuel economy and average speed over the 5 evaluation trips.
Compared to the model-free on-policy method in \cite{zhu2021deep}, which takes 80,000 episodes to converge, the sample efficiency of the off-policy model-based method is significantly improved. In the meantime, with the constrained optimization formulation and the safe set, the agent quickly learns to respect the constraints imposed by the terminal $SoC$ and the signalized intersections. Furthermore, the fact that the agent does not need any extrinsic penalty for constraint violations and is still capable of learning to operate within the safe region significantly simplifies the design and tuning process compared to previous reinforcement learning attempts on eco-driving \cite{zhu2021deep,pozzi2020ecological, li2019ecological}. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Figures/learning_curve.pdf} \caption{The Evolution of the Total Costs, Fuel Economy, and Average Speed of the 5 Evaluation Trips.} \label{fig: learning_curve} \end{figure} Statistically, the performance of the agent trained with SMORL is compared against four other strategies: a baseline strategy with the Enhanced Driver Model (EDM) representing a human driver and a heuristically calibrated energy management module \cite{gupta2019enhanced}; the hierarchical optimal controller using Approximate Dynamic Programming (ADP) in \cite{deshpande2021real}; the model-free DRL (MFDRL) agent proposed in \cite{zhu2021deep}; and the wait-and-see (WS) solution. The WS solution assumes the speed limits and the sequences of all traffic lights of the entire trip are known \textit{a priori}, and it is solved by the PDDP solver as well. Despite being non-causal and computationally intractable for online implementation, this solution serves as an upper bound for the causal control strategies. All five strategies are evaluated on the 100 random trips shown in Fig. \ref{fig: map_of_columbus}, and the fuel economy, average speed and variance of the battery $SoC$ are listed in Tab. \ref{tab: statistical_comparison}. \begin{table}[] \centering \caption{Fuel Economy, Average Speed and SoC Variance for Baseline, ADP, Model-free DRL, SMORL and WS Solutions} \label{tab: statistical_comparison} \begin{tabular}{c|c|c|c|c|c} \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{Baseline} & \multicolumn{1}{c|}{ADP} & \multicolumn{1}{c|}{MFDRL} & \multicolumn{1}{c|}{SMORL} & \multicolumn{1}{c}{WS} \\ \hline \begin{tabular}[c]{@{}c@{}}Fuel Economy\\ mpg\end{tabular} & 32.4 & 39.5 & 40.8 & 41.6 & 47.5\\ \hline \begin{tabular}[c]{@{}c@{}}Speed Mean\\ $m/s$\end{tabular} & 14.1 & 13.9 & 12.5 & 14.0 & 14.5\\ \hline \begin{tabular}[c]{@{}c@{}}$SoC$ Variance\\ $\%^2$\end{tabular} & 12.1 & 21.6 & 18.2 & 52.6 & 22.6\\ \end{tabular} \end{table} Here, compared to the baseline strategy, the SMORL agent consumes $21.8\%$ less fuel while maintaining a comparable average speed. The benefit in fuel economy is achieved by avoiding unnecessary acceleration events and by exploiting a wider range of the battery capacity, as indicated by the higher $SoC$ variance. The SMORL agent outperforms both the previously trained MFDRL strategy and the non-learning-based ADP strategy in average speed and fuel economy. The performance improvement over MFDRL is primarily due to two aspects. First, the online trajectory optimization solved by PDDP guarantees global optimality within the receding horizon, which is more accurate and reliable than the actions generated by the one-step stochastic policy from neural networks as in MFDRL.
Second, the fact that there is no extrinsic penalty to assist constraint satisfaction ensures that the agent focuses on learning only the objective of the OCP formulation, i.e. the weighted sum of the trip time and fuel consumption, instead of a carefully designed yet delicate surrogate learning objective. In Fig. \ref{fig: density plot}, the average vehicle speed and the fuel economy of each trip are plotted against the traffic light density. As the WS solution calculates the globally optimal solution with the knowledge of the full trip, it is able to navigate among the traffic lights accordingly, as indicated by its fuel economy, which, perhaps surprisingly, increases with the traffic light density. This can be attributed to the fact that, when there are more traffic lights, the vehicle is forced to operate at lower speeds and thus in lower fuel consumption conditions. On the other hand, as the baseline driver has a limited line-of-sight \cite{gupta2019enhanced} and the ADP, MFDRL and SMORL controllers have a limited DSRC sensing range, the fuel economy decreases as the traffic light density increases. Nevertheless, as indicated by the slope of the fitted curve, the fuel economy of SMORL is less affected by the increase of the traffic light density compared to the baseline and ADP controllers. \begin{figure}[] \centering \includegraphics[width=1\columnwidth]{Figures/density_plot.pdf} \caption{The Variation of the Average Speed and the Fuel Economy against Traffic Light Density for the Baseline, SMORL and WS Solutions.} \label{fig: density plot} \end{figure} Fig. \ref{fig: comparison individual trip} shows the comparison among the baseline, ADP, SMORL and WS solutions on a specific testing trip. For this specific trip, while the differences in trip time are within $3 s$, SMORL consumes $24.7\%$ and $11.0\%$ less fuel compared to the baseline and the ADP strategies, respectively. While SMORL demonstrates some merits similar to the WS solution, it remains inferior to the WS solution primarily because only the SPaT information from the upcoming intersection is available to the controller. Additional comparisons among the four strategies can be found in Appendix \ref{appendix: additional comparison}. The ablation study for the key components of the algorithm is shown in Appendix \ref{sec: ablation}. \begin{figure}[] \centering \includegraphics[width=1\columnwidth]{Figures/comparison_dp_mbdrl.pdf} \caption{The Trajectory Comparison among the Baseline, ADP, SMORL and WS Solutions.} \label{fig: comparison individual trip} \end{figure} \section{Conclusion}\label{sec: conclusion} In this paper, a safe model-based off-policy reinforcement learning algorithm, SMORL, is proposed. The algorithm is applied to the eco-driving problem for CAHEVs. Compared to previous model-free attempts on eco-driving in the literature, the method does not require any extrinsic rewarding mechanism and thus greatly simplifies the design process while improving the final performance. With the online constrained optimization formulation and the approximate safe set, the learned strategy is capable of satisfying the constraints within the prediction horizon and restricting the state to the approximate safe set, which is an approximation to the robust control invariant set. The performance of the strategy trained with SMORL is compared to a baseline strategy representing human drivers' behavior over 100 randomly generated trips in Columbus, OH, US. With a comparable average speed, the strategy from SMORL consumes approximately $22\%$ less fuel.
While the demonstration of the algorithm is on the eco-driving problem, we believe it can be applied to many other real-world problems, in particular to those with well-studied system dynamics, such as robotics and autonomous driving. Future studies include extending the SMORL algorithm to account for the presence of leading vehicles, as well as the integration and verification of the algorithm in a demonstration vehicle. \appendices \section{Variational Autoencoder}\label{appendix: vae} Let $X = \left\lbrace x_i \right\rbrace_{i=1}^N$ be a data set and $Z$ represent a set of low-dimensional latent variables. The objective is to maximize the marginal log-likelihood: \begin{equation} \begin{aligned} \log p(X) = \sum_{i=1}^N \log p(x_i) &= \sum_{i=1}^N \log \int_z p(x_i|z)p(z) dz\\ &= \sum_{i=1}^N \log \mathbb{E}_{z\sim p(z)} \left[ p(x_i|z) \right]. \end{aligned} \label{eq: marginal loglikelihood} \end{equation} As Eqn. \eqref{eq: marginal loglikelihood} is in general intractable, its variational lower bound is instead maximized: \begin{align} \begin{aligned} \mathcal{L}(\omega_1, \omega_2, X) = & -D_\mathrm{KL}\left(q_{\omega_1}(z|X)|| p(z)\right) \\ & + \mathbb{E}_{q_{\omega_1}(z|X)} \left[\log p_{\omega_2}(X|z)\right]. \end{aligned} \label{eq: vae update} \end{align} Here, $D_{\mathrm{KL}}$ is the Kullback–Leibler (KL) divergence, and $p(z)$ is the prior distribution, typically assumed to be a multivariate normal distribution. $q_{\omega_1}(z|X)$ is the posterior distribution parametrized by $\omega_1$. To analytically evaluate the KL divergence, the posterior is typically constructed as $\mathcal{N}(z|\mu_{\omega_1}(X), \Sigma_{\omega_1}(X))$. From a coding theory perspective, $q_{\omega_1}(z|X)$ and $p_{\omega_2}(X|z)$ can be considered as a probabilistic encoder and a probabilistic decoder, respectively. To compute $\nabla_{\omega_1} \mathcal{L}(\omega_1, \omega_2, X)$, the policy gradient theorem \cite{williams1992simple} or the reparametrization trick \cite{williams1992simple, kingma2013auto} can be used. The latter is often used in VAEs as it typically leads to a lower variance. In practice, the encoder $q_{\omega_1}(z|X)$ and the decoder $p_{\omega_2}(X|z)$ can be any function approximators. An implementation of a VAE as the generative model to sample actions can be found at \url{https://github.com/sfujim/BCQ}. In this work, the latent space dimension is selected to be 5, and the encoder and the decoder are both MLPs with 2 layers of 300 hidden units. \section{Autoregressive Model with LSTM}\label{appendix: autoregressive lstm} For any probability distribution, the joint distribution can be factorized into a product of conditional probabilities as follows: \begin{equation} \begin{aligned} &p(x) = \prod_{i=1}^K p(x^{(i)}|x^{(1)}, ..., x^{(i-1)}), \\ \rightarrow &\log p(x) = \sum_{i=1}^K \log p(x^{(i)}|x^{(1)}, ..., x^{(i-1)}), \end{aligned} \end{equation} where $x^{(i)}$ is the $i^{th}$ dimension of the discrete input vector and $K$ is the dimensionality of the input vector. As shown in Fig. \ref{fig: autoregressive_lstm}, the input vector in one-hot form is fed to the LSTM network in sequence. The $i^{th}$ output of the LSTM network after the softmax operation then gives the conditional probability $p(x^{(i)}|x^{(1)}, ..., x^{(i-1)})$.
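The following is a minimal PyTorch sketch of such an autoregressive model, consistent with the single-layer LSTM with a hidden size of 50 used in this work. The class name, the zero start token, and the assumption that every dimension takes one of \texttt{num\_classes} discrete values are illustrative choices rather than the exact implementation.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoregressiveLSTM(nn.Module):
    # Models p(x) = prod_i p(x^(i) | x^(<i)) for a K-dimensional
    # discrete vector whose entries each take num_classes values.
    def __init__(self, num_classes, hidden_size=50):
        super().__init__()
        self.num_classes = num_classes
        self.lstm = nn.LSTM(input_size=num_classes,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def log_prob(self, x):
        # x: (batch, K) tensor of integer class indices.
        onehot = F.one_hot(x, self.num_classes).float()
        # Shift right with a zero start token so that step i
        # only conditions on dimensions 1, ..., i-1.
        start = torch.zeros_like(onehot[:, :1, :])
        inputs = torch.cat([start, onehot[:, :-1, :]], dim=1)
        h, _ = self.lstm(inputs)             # (batch, K, hidden)
        logits = self.head(h)                # (batch, K, classes)
        logp = logits.log_softmax(dim=-1)
        # Sum log p(x^(i) | x^(<i)) over the K dimensions.
        return logp.gather(-1, x.unsqueeze(-1)).squeeze(-1).sum(1)

# Training minimizes the negative log-likelihood over buffer
# samples, equal to the KL objective up to a constant:
#   loss = -model.log_prob(batch).mean()
\end{verbatim}

The safe set indicator $p_\psi(x) \geq \delta$ is then obtained by exponentiating the returned log-probability and comparing it against the threshold.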
The model is trained by minimizing the KL divergence between the data distribution sampled from the experience replay buffer and the modeled distribution: \begin{equation} \begin{aligned} & \min_{\psi} \: D_{\mathrm{KL}}\left[p^*(x)||p_\psi(x)\right]\\ =& \min_{\psi} \: \mathbb{E}_{x\sim p^*(x)}\left[-\log p_\psi(x) \right] + \text{constant.} \label{eq: autoregressive update} \end{aligned} \end{equation} \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Figures/autoregressive_rnn_model.pdf} \caption{The Network Architecture of the Recurrent Autoregressive Model.} \label{fig: autoregressive_lstm} \end{figure} In this work, the LSTM network has a single layer with a hidden size of 50. \section{Additional Comparison among Strategies} \label{appendix: additional comparison} Here, we show the comparison on two additional trips. The trip shown in Fig. \ref{fig: comparison individual trip high density} contains a large number of signalized intersections. As indicated by Fig. \ref{fig: density plot}, the gap between SMORL and the baseline and the gap between the wait-and-see solution and SMORL are both amplified by the high traffic light density. The trip shown in Fig. \ref{fig: comparison individual trip low density} has a very low traffic light density and higher speed limits. In such a case, the difference between SMORL and the wait-and-see solution becomes less noticeable. Meanwhile, SMORL was still able to consume less fuel by using the capacity of the battery more efficiently. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Figures/comparison_dp_mbdrl_urban.pdf} \caption{Comparison for the High-density Low-speed Scenario.} \label{fig: comparison individual trip high density} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{Figures/comparison_dp_mbdrl_highway.pdf} \caption{Comparison for the Low-density High-speed Scenario.} \label{fig: comparison individual trip low density} \end{figure} \section{Ablation} \label{sec: ablation} In this part, we compare the full SMORL algorithm with four intermediate algorithms. All the algorithms presented below use trajectory optimization solved via the PDDP solver. Tab. \ref{tab: ablation} shows the differences in configuration and compares the final trained performance over the 100 trips used for testing. Here, we see that the safe set and BCQ both have a positive impact on the trained performance. In fact, the combination of TD3 and trajectory optimization (Config. 2) does not show any significant improvement over trajectory optimization only (Config. 1). In addition, without the use of the safe set, the controller depletes the battery $SoC$ to $SoC^\mathrm{min}$ by the end of the trip, as the terminal state constraint cannot be enforced without the help of an extrinsic penalty.
\begin{table}[] \caption{Ablation Study for SMORL} \label{tab: ablation} \centering \begin{tabular}{c|c|c|c|c|c} Config. & \begin{tabular}[c]{@{}c@{}}Safe\\ Set\end{tabular} & Q-learning & \begin{tabular}[c]{@{}c@{}}Fuel\\ Economy\\ $mpg$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Average\\ Speed\\ $m/s$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Normalized\\ Cost\end{tabular} \\ \hline 1 & & None & 43.1 & 11.2 & 100 \\ \hline 2 & & TD3 & 39.1 & 13.9 & 88.0\\ \hline 3 & \CheckmarkBold & TD3 & 39.9 & 13.3 & 90.5 \\ \hline 4 & & BCQ & 38.5 & 14.3 & 87.2 \\ \hline 5 & \CheckmarkBold & BCQ & 41.6 & 14.0 & 86.5 \end{tabular} \end{table} \section*{Acknowledgment} The authors acknowledge the support from the United States Department of Energy, Advanced Research Projects Agency-Energy (ARPA-E) NEXTCAR project (Award Number DE-AR0000794) and the Ohio Supercomputer Center. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \subsection{Preliminaries on Set-labeling} For all terms and definitions not defined in this paper, we refer to \cite{FH}, and for more about graph labeling, we refer to \cite{JAG}. Unless mentioned otherwise, all graphs considered here are simple, finite and have no isolated vertices. All sets mentioned in this paper are finite sets of non-negative integers. We denote the cardinality of a set $A$ by $|A|$. Let $\mathbb{N}_0$ denote the set of all non-negative integers. For all $A, B \subseteq \mathbb{N}_0$, the sum of these sets, denoted by $A+B$, is defined by $A + B = \{a+b: a \in A, b \in B\}$. The set $A+B$ defined above is known as the {\em sum set} of the sets $A$ and $B$. The following are the major concepts introduced in \cite{GS0}. Let $A$ and $B$ be the set-labels of two adjacent vertices of a given graph $G$. Two ordered pairs $(a,b)$ and $(c,d)$ in $A\times B$ are said to be {\em compatible} if $a+b=c+d$. If $(a,b)$ and $(c,d)$ are compatible, then we write $(a,b)\sim (c,d)$. Clearly, $\sim$ is an equivalence relation. A {\em compatibility class} of an ordered pair $(a,b)$ in $A\times B$ with respect to the integer $k=a+b$ is the subset of $A\times B$ defined by $\{(c,d)\in A\times B:(a,b)\sim (c,d)\}$ and is denoted by $[(a,b)]_k$ or $\mathsf{C}_k$. If $(a,b)$ is the only element in the compatibility class $[(a,b)]_k$, then it is called a {\em trivial class}. The compatibility classes which contain the maximum possible number of elements are called {\em saturated classes}. The compatibility class that contains the maximum number of elements is called a {\em maximal compatibility class}. \begin{proposition}\label{P-CardCC} \cite{GS0} The maximum possible cardinality of a compatibility class in $(A,B)$ is $n$, where $n=\min(|A|,|B|)$. That is, the cardinality of a saturated class in $(A,B)$ is $\min(|A|,|B|)$. \end{proposition} The number of distinct compatibility classes in $A\times B$ is called the {\em compatibility index} of the pair of sets $(A,B)$ and is denoted by $\mho_{(A,B)}$. \subsection{Integer Additive Set-Indexers} An {\em integer additive set-indexer} (IASI, in short) is defined in \cite{GA} as an injective function $f:V(G)\rightarrow 2^{\mathbb{N}_0}$ such that the induced function $f^{+}:E(G) \rightarrow 2^{\mathbb{N}_0}$ defined by $f^{+} (uv) = f(u)+ f(v)$ is also injective. A graph $G$ which admits an IASI is called an IASI graph. The cardinality of the set-label of an element (vertex or edge) of a graph $G$ is called the {\em set-indexing number} of that element. \begin{lemma}\label{L-3} \cite{GS0} Let $f$ be an IASI of a graph $G$ and let $u,v$ be two vertices of $G$. Then, $f^{+}(uv)= f(u)+f(v)=\{a+b:a\in f(u), b\in f(v)\}$. Hence, the set-indexing number of the edge $uv$ is $|f^{+}(uv)| = \mho_{(f(u),f(v))}$. \end{lemma} An IASI is said to be {\em $k$-uniform} if $|f^{+}(e)| = k$ for all $e\in E(G)$. That is, a connected graph $G$ is said to have a $k$-uniform IASI if all of its edges have the same set-indexing number $k$. The vertex set $V$ of a graph $G$ is said to be {\em $l$-uniformly set-indexed} if all the vertices of $G$ have the set-indexing number $l$. A {\em strong IASI} is defined in \cite{GS2} as an IASI $f$ such that $|f^{+}(uv)|=|f(u)|\, |f(v)|$ for all $u,v\in V(G)$. A graph which admits a strong IASI may be called a {\em strong IASI graph}. A strong IASI is said to be a {\em strongly uniform IASI} if $|f^{+}(uv)|=k$ for all $u,v\in V(G)$ and for some positive integer $k$.
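Since these notions are purely combinatorial, they are straightforward to verify computationally. The following short Python sketch computes the sum set and the compatibility classes of a pair of set-labels; the example sets are arbitrary choices for illustration and are not taken from any result above.

\begin{verbatim}
from itertools import product

def sum_set(A, B):
    # Sum set A + B = {a + b : a in A, b in B}.
    return {a + b for a in A for b in B}

def compatibility_classes(A, B):
    # Group the ordered pairs of A x B by the value k = a + b;
    # each group is one compatibility class C_k.
    classes = {}
    for a, b in product(sorted(A), sorted(B)):
        classes.setdefault(a + b, []).append((a, b))
    return classes

A = {1, 2, 3}   # set-label of a vertex u
B = {4, 5, 6}   # set-label of an adjacent vertex v
classes = compatibility_classes(A, B)

# The compatibility index equals the number of distinct sums,
# i.e. the set-indexing number of the edge uv.
assert len(classes) == len(sum_set(A, B))
# A saturated class contains min(|A|, |B|) elements.
assert max(len(c) for c in classes.values()) == min(len(A), len(B))
\end{verbatim}

Here the sum set has $|A|+|B|-1=5$ elements, as expected for two AP-sets with the same common difference.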
\subsection{Arithmetic Integer Additive Set-Indexers} The studies on arithmetic IASIs of graphs, made in \cite{GS9}, \cite{GS10} and \cite{GS11}, have established the following concepts. Since the elements in the set-labels of all elements of $G$ are in arithmetic progression, each set-label must contain at least three elements. By the term {\em arithmetically progressive set} (AP-set, in short), we mean a set whose elements are in arithmetic progression. We call the common difference of the set-label of an element of a given graph the {\em deterministic index} of that element. Let $f:V(G)\to 2^{\mathbb{N}_0}$ be an IASI on $G$. For any vertex $v$ of $G$, if $f(v)$ is an AP-set, then $f$ is called a {\em vertex-arithmetic IASI} of $G$. A graph that admits a vertex-arithmetic IASI is called a {\em vertex-arithmetic IASI graph}. For an IASI $f$ of $G$, if $f^+(e)$ is an AP-set for all $e\in E(G)$, then $f$ is called an {\em edge-arithmetic IASI} of $G$. A graph that admits an edge-arithmetic IASI is called an {\em edge-arithmetic IASI graph}. An IASI is said to be an {\em arithmetic integer additive set-indexer} if it is both vertex-arithmetic and edge-arithmetic. That is, an arithmetic IASI is an IASI $f$ under which the set-labels of all elements of a given graph $G$ are AP-sets. A graph that admits an arithmetic IASI is called an {\em arithmetic IASI graph}. If the set-labels of all vertices of a graph $G$ are AP-sets but the set-labels of its edges are not AP-sets, then the corresponding IASI is called a {\em semi-arithmetic IASI}. If the set-labels of all elements of a graph $G$ are AP-sets with the same common difference $d$, then the corresponding IASI is called an {\em isoarithmetic IASI}. An arithmetic IASI $f$ of a graph $G$ under which the deterministic indices $d_i$ and $d_j$ of the set-labels $f(v_i)$ and $f(v_j)$ of any two adjacent vertices $v_i$ and $v_j$ of $G$ satisfy the condition $d_j=kd_i$, where $k$ is a positive integer such that $1< k \le |f(v_i)|$, is called a {\em biarithmetic IASI}. \begin{theorem}\label{T-AIASI-g} \cite{GS9} A graph $G$ admits an arithmetic IASI if and only if, for any two adjacent vertices in $G$, the deterministic index of one is a positive integral multiple of the deterministic index of the other and this integer is less than or equal to the cardinality of the set-label of the latter. \end{theorem} \section{Semi-Arithmetic IASIs} \begin{definition}{\rm A vertex-arithmetic IASI $f$ of a graph $G$ under which the deterministic indices $d_i$ and $d_j$ of the set-labels $f(v_i)$ and $f(v_j)$ of any two adjacent vertices $v_i$ and $v_j$ of $G$ satisfy the condition $d_j=kd_i$, where $k$ is a positive integer greater than $|f(v_i)|$, is called a {\em semi-arithmetic IASI of the first kind}.} \end{definition} \begin{definition}{\rm A vertex-arithmetic IASI $f$ of a graph $G$, under which the deterministic indices $d_i$ and $d_j$ of the set-labels $f(v_i)$ and $f(v_j)$ of any two adjacent vertices $v_i$ and $v_j$ of $G$ are not multiples of each other, is called a {\em semi-arithmetic IASI of the second kind}.} \end{definition} \begin{theorem}\label{T-SAIASI1} Every first kind semi-arithmetic IASI of a graph $G$ is a strong IASI of $G$. \end{theorem} \begin{proof} Let $d_i$ and $d_j$ be the deterministic indices of two adjacent vertices $v_i$ and $v_j$ in $G$, where $d_i<d_j$ and $d_j=k\,d_i$ with $k>|f(v_i)|$. Assume that $f(v_i)=\{a_r = a+rd_i:0 \le r <m\}$ and $f(v_j)=\{b_s=b+s\,k\,d_i:0\le s <n\}$.
Now, arrange the terms of $f^+(v_iv_j)=f(v_i)+f(v_j)$ in rows and columns as follows. For $b_s\in f(v_j)$, $0\le s<n$, arrange the terms of $f(v_i)+b_s$ in the $(s+1)$-th row in such a way that equal terms of different rows come in the same column of this arrangement. Then, the common difference between consecutive elements in each row is $d_i$. Since $k>|f(v_i)|$, the difference between the final element of any row and the first element of its succeeding row is always greater than $d_i$. Therefore, no element is repeated in this arrangement, and hence the total number of elements in $f(v_i)+f(v_j)$ is $|f(v_i)|\, |f(v_j)|$. Hence, $f$ is a strong IASI. \end{proof} Recall the following result proved in \cite{GS0}. \begin{proposition}\label{T-SIASI1a} \cite{GS0} If $f$ is a strong IASI defined on a graph $G$, then for each adjacent pair of vertices $u$ and $v$ of $G$, each compatibility class of the pair of set-labels $f(u)$ and $f(v)$ is a trivial class. \end{proposition} \begin{corollary} Let $f$ be a first kind semi-arithmetic IASI of a graph $G$ and let $v_i$ and $v_j$ be two adjacent vertices in $G$. Then, all the compatibility classes in $f(v_i)\times f(v_j)$ are trivial classes. \end{corollary} \begin{proof} Since $f$ is a first kind semi-arithmetic IASI, by Theorem \ref{T-SAIASI1}, $f$ is a strong IASI. Then, by Proposition \ref{T-SIASI1a}, all compatibility classes in $f(v_i)\times f(v_j)$ are trivial classes. \end{proof} An interesting question that arises in this context is about the existence of uniform semi-arithmetic IASIs. The following results establish a necessary condition and a necessary and sufficient condition for a first kind semi-arithmetic IASI to be uniform. \begin{proposition}\label{P-SAIASI2} If $f$ is a first kind semi-arithmetic IASI of a graph $G$, then no edge of $G$ has a prime set-indexing number. \end{proposition} \begin{proof} Let $f$ be a first kind semi-arithmetic IASI of a graph $G$. Then, by Theorem \ref{T-SAIASI1}, $f$ is a strong IASI. Therefore, for any two adjacent vertices $u$ and $v$ of $G$, $|f^+(uv)|=|f(u)|\,|f(v)|$. If the edge $uv$ has a prime set-indexing number, say $p$, then $|f(u)|$ and $|f(v)|$ divide $p$. Therefore, either $|f(u)|=1$ or $|f(v)|=1$, which is a contradiction to the fact that every set-label has at least three elements. Hence, no edge of $G$ can have a prime set-indexing number. \end{proof} \begin{theorem}\label{T-SAIASI3} A first kind semi-arithmetic IASI of a graph $G$ is a uniform IASI if and only if either $G$ is bipartite or $V(G)$ is uniformly set-indexed. \end{theorem} \begin{proof} Let $f$ be a first kind semi-arithmetic IASI defined on a graph $G$. For a positive integer $l$, assume that $f$ is an $l$-uniform IASI. Let $v_i$ and $v_j$ be any two adjacent vertices of $G$ such that $|f(v_i)|=m$ and $|f(v_j)|=n$. Then, $m\,n =l$. Since $f$ is $l$-uniform, every vertex that is adjacent to the vertex $v_i$ must have the set-indexing number $n$, and every vertex that is adjacent to the vertex $v_j$ must have the set-indexing number $m$. That is, in general, all the vertices adjacent to a vertex having set-indexing number $m$ must have the set-indexing number $n$, and all the vertices adjacent to a vertex having set-indexing number $n$ must have the set-indexing number $m$. If $m=n$, then our proof is complete. If $m\ne n$, then let $X$ be the set of all vertices of $G$ having set-indexing number $m$ and let $Y$ be the set of all vertices of $G$ having set-indexing number $n$. Since $m^2\ne l$, no two vertices in $X$ can be adjacent.
Similarly, since $n^2\ne l$, no two vertices in $Y$ can be adjacent either. Therefore, $(X,Y)$ is a bipartition of $G$. Conversely, assume that either $G$ is bipartite or $V(G)$ is uniformly set-indexed. If $V(G)$ is $n$-uniformly set-indexed, then $|f^+(uv)|=|f(u)|\,|f(v)|=n^2$ for all adjacent $u,v\in V(G)$. That is, $f$ is $n^2$-uniform. Now assume that $V(G)$ is not uniformly set-indexed. Then, by hypothesis, $G$ is bipartite. Let $(X,Y)$ be a bipartition of $G$. For a positive integer $d$, label all the vertices in $X$ by distinct $m$-element AP-sets with common difference $d$, and label all the vertices in $Y$ by distinct $n$-element AP-sets with common difference $k\,d$, where $k$ is a positive integer such that $k> \max_{v_i\in X}|f(v_i)|$. Then, $f$ is a first kind semi-arithmetic IASI and, by Theorem \ref{T-SAIASI1}, is a strong IASI. Therefore, every edge of $G$ has the set-indexing number $mn$. \end{proof} What is the condition required for a second kind semi-arithmetic IASI to be a strong IASI? The following theorem provides an answer to this question. \begin{theorem}\label{T-AIASI-A} Let $f$ be a semi-arithmetic IASI defined on $G$. Also, let $|f(v_j)|=q\,|f(v_i)|+r$, $0<r<|f(v_i)|$. Then, $f$ is a strong IASI if and only if $q>|f(v_i)|$ or the deterministic indices $d_i$ and $d_j$ of the two set-labels $f(v_i)$ and $f(v_j)$, respectively, are relatively prime. \end{theorem} \begin{proof} First assume that $q>|f(v_i)|$. We arrange the elements of $f(v_i)+f(v_j)$ into rows and columns such that the sums of the elements of $f(v_i)$ with the $s$-th element of $f(v_j)$, $1\le s\le |f(v_j)|$, form the $s$-th row of the new arrangement. Since $q>|f(v_i)|$, in this arrangement, all the elements in the $(s+1)$-th row will be greater than all the elements of the $s$-th row. That is, all elements in this arrangement are distinct. Hence, $f$ is a strong IASI. Now, assume that $q\le |f(v_i)|$. Then, by hypothesis, $\gcd(d_i,d_j)=1$. Therefore, $r$ cannot be a divisor of $d_i$. Then, no two elements of $f(v_i)+f(v_j)$ can belong to the same compatibility class. Hence, $f$ is a strong IASI. Conversely, assume that $f$ is a strong IASI of $G$. Then, every compatibility class $\mathsf{C}_{(a,b)}$ in $f(v_i)\times f(v_j)$ is a trivial class. This condition holds when $q>|f(v_i)|$. Let $q\le |f(v_i)|$. If $\gcd(d_i,d_j)=t\ne 1$, then $t|d_i$ and hence $t|r$. Therefore, for some integers $q_1, q_2$, we have $q_1\,d_i=q_2\,r$, $q_1<q_2$. Hence, some terms in $f(v_i)+f(v_j)$ are the same, which is a contradiction to the fact that $f$ is a strong IASI. Hence, $\gcd(d_i,d_j)=1$. \end{proof} We note that an arithmetic IASI with arbitrary differences does not have saturated classes. In the following theorem, we find the number of elements in a maximal compatibility class for a second kind semi-arithmetic IASI. \begin{theorem} Let $f$ be an arithmetic IASI with arbitrary differences on a graph $G$. Let $|f(v_j)|=q\,|f(v_i)|+r$. Also, let $q_1$ and $q_2$ be positive integers such that $q_1\,d_i=q_2\,r$. Then, the number of elements in a maximal compatibility class of $f(v_i)\times f(v_j)$ is $\lfloor \frac{|f(v_j)|}{q_1} \rfloor$. \end{theorem} \begin{proof} We use the same notations as in Theorem \ref{T-AIASI-A}. Let $|f(v_i)|=m$ and $|f(v_j)|=n$. Arrange the elements of $f(v_i)+f(v_j)$ into rows and columns such that the sums of the elements of $f(v_i)$ with the $s$-th element of $f(v_j)$, $1\le s\le |f(v_j)|$, form the $s$-th row of the new arrangement.
By Theorem \ref{T-AIASI-A}, a compatibility class contains two or more elements if $q\le |f(v_i)|$ and $\gcd(d_i,d_j)\neq 1$. Hence, there exist some positive integers $q_1$ and $q_2$ such that $q_1\,d_i=q_2\,r$, $q_1<q_2$. If $q_1<n$, then some values appear in the arrangement $\lfloor \frac{n}{q_1} \rfloor$ times. \end{proof} Analogous to the corresponding theorems for other types of IASI graphs, we observe the following result. \begin{proposition} Any subgraph of a semi-arithmetic IASI graph admits a semi-arithmetic IASI. \end{proposition} \begin{definition}{\rm \cite{DBW} For a given graph $G$, its line graph $L(G)$ is a graph such that each vertex of $L(G)$ represents an edge of $G$ and two vertices of $L(G)$ are adjacent if and only if their corresponding edges in $G$ are incident on a common vertex in $G$.} \end{definition} \begin{proposition} The line graph $L(G)$ of a semi-arithmetic IASI graph never admits a semi-arithmetic IASI. \end{proposition} \begin{proof} The set-labels of the edges of a semi-arithmetic IASI graph $G$ are not AP-sets. Hence, the vertices in $L(G)$ corresponding to the edges in $G$ do not have AP-sets as their set-labels. Therefore, $L(G)$ does not admit a semi-arithmetic IASI. \end{proof} \begin{definition}{\rm \cite{MB} The {\em total graph} of a graph $G$, denoted by $T(G)$, is the graph having the property that a one-to-one correspondence can be defined between its points and the elements (vertices and edges) of $G$ such that two points of $T(G)$ are adjacent if and only if the corresponding elements of $G$ are adjacent (either if both elements are edges or if both elements are vertices) or they are incident (if one element is an edge and the other is a vertex).} \end{definition} \begin{proposition} The total graph $T(G)$ of a semi-arithmetic IASI graph never admits a semi-arithmetic IASI. \end{proposition} \begin{proof} The set-labels of the edges of a semi-arithmetic IASI graph $G$ are not AP-sets. Therefore, the vertices in $T(G)$ corresponding to the edges in $G$ do not have AP-sets as their set-labels. Hence, $T(G)$ does not admit a semi-arithmetic IASI. \end{proof} \begin{definition}{\rm \cite{FH} By an {\em edge contraction operation} in $G$, we mean an operation in which an edge, say $e$, is removed and its two incident vertices, $u$ and $v$, are merged into a new vertex $w$, where the edges incident to $w$ each correspond to an edge incident to either $u$ or $v$.} \end{definition} \begin{proposition} A graph obtained by contracting an edge of a semi-arithmetic IASI graph $G$ does not admit a semi-arithmetic IASI. \end{proposition} \begin{proof} Let $e=uv$ be an arbitrary edge of a semi-arithmetic IASI graph $G$ and let $G'=G\circ e$ be the graph obtained by removing the edge $e$ and identifying the two vertices $u$ and $v$ to form a new vertex $w$. It is customary to assign the set-label of $e$ to the new vertex $w$. Therefore, the set-label of $w$ is not an AP-set. Hence, $G'$ does not admit a semi-arithmetic IASI. \end{proof} \begin{definition}{\rm \cite{KDJ} Let $G$ be a connected graph and let $v$ be a vertex of $G$ with $d(v)=2$. Then, $v$ is adjacent to two vertices $u$ and $w$ in $G$. If $u$ and $w$ are non-adjacent vertices in $G$, then delete $v$ from $G$ and add the edge $uw$ to $G-\{v\}$. This operation is known as an {\em elementary topological reduction} on $G$.} \end{definition} \begin{proposition} Let $G$ be a semi-arithmetic IASI graph and let $v$ be an arbitrary vertex of $G$ with $d(v)=2$ that is not contained in any triangle of $G$.
Let $G'=(G-v)\cup\{uw\}$, where $u$ and $w$ are the vertices adjacent to $v$ in $G$. Then, $G'$ admits a semi-arithmetic IASI if and only if the deterministic index of one of $u$ or $w$ is a positive integral multiple of the deterministic index of the other, where this integer is greater than the cardinality of the set-label of the latter. \end{proposition} \begin{proof} Let $v$ be an arbitrary vertex of $G$ with $d(v)=2$ that is not contained in any triangle of $G$. Since $d(v)=2$, $v$ is adjacent to two vertices, say $u$ and $w$, in $G$. Now, remove the vertex $v$ from $G$. Then, the edges $uv$ and $vw$ are eliminated. Join the vertices $u$ and $w$ and let $G'=(G-v)\cup\{uw\}$. Since $G$ is semi-arithmetic, $G'$ is semi-arithmetic if and only if the set-label of the edge $uw$ is not an AP-set. This is possible only when the deterministic index of one of $u$ or $w$ is a positive integral multiple of the deterministic index of the other, with this integer greater than the cardinality of the set-label of the latter. Therefore, $G'$ admits a semi-arithmetic IASI if and only if this condition holds. \end{proof} \begin{definition}{\rm \cite{RJT} A {\em subdivision} of a graph $G$ is the graph obtained by adding vertices of degree two into its edges.} \end{definition} \begin{proposition} A subdivision of a semi-arithmetic IASI graph $G$ does not admit a semi-arithmetic IASI. \end{proposition} \begin{proof} Let $e=uv$ be an arbitrary edge of a semi-arithmetic IASI graph $G$. Let $G'$ be a subdivision of $G$ obtained by introducing a new vertex $w$ into the edge $e$. Therefore, the edge $uv$ is replaced by the two edges $uw$ and $vw$ in $G'$. It is customary to assign the set-label of $uv$ to the new vertex $w$. Therefore, the set-label of $w$ is not an AP-set. Hence, $G'$ does not admit a semi-arithmetic IASI. \end{proof} \section{Conclusion} In this paper, we have discussed some characteristics of graphs which admit a certain type of IASI called a semi-arithmetic IASI, and we have formulated conditions for some graph classes to admit semi-arithmetic IASIs. Certain problems in this area are still open. The IASIs under which the vertices of a given graph are labeled by different standard sequences of non-negative integers are also worth studying. The problems of establishing the necessary and sufficient conditions for various graphs and graph classes to have certain IASIs still remain unsettled. All these facts highlight a wide scope for further studies in this area.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The formation and evolution of galaxies is governed by a wide range of physical processes spanning from sub-parsec to giga-parsec scales. These include the growth of large-scale structure, cooling and heating of astrophysical plasmas, and the formation of stars and central black holes along with associated energy return processes collectively known as feedback. As observations of galaxies improve at an ever-advancing pace, a key goal of modern astrophysics is to understand how such observations can be used to constrain the balance of underlying physical processes driving galaxy evolution, particularly in regards to feedback processes that are currently among its least well-understood aspects. Modern galaxy formation models are generally built on the premise that feedback at the very lowest masses is dominated by photoionisation from the metagalactic ultraviolet background, feedback at scales above this but below $L^\star$ is driven primarily by energy and momentum from young stars and supernovae, and feedback in massive galaxies predominantly owes to energetic release from accretion disks around supermassive central black holes~\citep[see][for a review]{Somerville:2015}. While this framework has been broadly successful at reproducing many key observed characteristics of the galaxy population at a range of cosmic epochs, the physical understanding of most of these small-scale feedback processes remains coarse and heuristic~\citep[see][for a review]{Naab:2017}. Galaxy formation models have now progressed to a point where numerous models can reproduce similar core sets of observations, but they often do so with significantly different underlying physical models and assumptions. To discriminate among these physical drivers, it becomes important to develop modeling methodologies that are as well-motivated and realistic as possible, and to test such models against the widest possible suite of observations quantifying the stellar, gas, metal, and black hole properties of galaxies along with their surrounding gas. Cosmological-scale simulations that model galaxy growth and feedback dynamically within evolving large-scale structure are an increasingly valuable tool for testing and constraining galaxy formation physics, owing both to rapidly advancing computational power and the commensurate ability to concurrently model a large range of scales and physical processes. State of the art models now simultaneously predict the co-evolution of stars, interstellar media, black holes, and circum-galactic gas, enabling a holistic approach towards testing the input physics against observations across a wide range of scales, environments, and wavelengths. Modern cosmological-scale simulations such as Illustris~\citep{Vogelsberger:2014,Genel:2014}, Magneticum~\citep{Hirschmann:2014}, Horizon-AGN~\citep{Dubois:2014,Volonteri:2016,Kaviraj:2017}, Eagle~\citep{Schaye:2015}, MassiveBlack~\citep{Khandai:2015}, Blue Tides~\citep{Feng:2016}, Romulus~\citep{Tremmel:2017}, and Illustris-TNG~\citep{Springel:2018} have implemented ever-improving sub-grid models aimed at more successfully reproducing the stellar, gaseous, and black hole contents of galaxies in bulk, while numerous associated hierarchically-situated zoom simulations using more detailed input physics can examine the internal structural and dynamical properties of galaxies with increasing fidelity. 
The {\sc Mufasa}\ simulation project~\citep{Dave:2016} has added to the pantheon of such simulations, employing several novel approaches that distinguish it from others. First, it utilises meshless finite mass (MFM) hydrodynamics as implemented in the {\sc Gizmo}\ code~\citep{Hopkins:2015,Hopkins:2018}, which offers important accuracy advantages over Smoothed Particle Hydrodynamics (SPH) and, owing to the mass-conserving nature of its gas elements, greater ease of analysis compared to adaptive mesh refinement ({\sc Ramses}) and moving mesh ({\sc Arepo}) codes. Second, rather than employing simple parameterisations or a cooling shutoff to drive galactic outflows, the kinetic mass outflow rate is taken directly from the very high-resolution Feedback in Realistic Environments~\citep[FIRE;][]{Hopkins:2014,HopkinsFIRE:2018,Muratov:2015} zoom simulations, providing a synergy between ISM-resolving simulations of individual galaxies and cosmological-scale simulations of galaxy populations. An aspect where {\sc Mufasa}\ was physically less well motivated than other state of the art simulations was in its depiction of black hole growth and AGN feedback. {\sc Mufasa}\ did not directly grow black holes and utilise the accretion energy for feedback to quench galaxies. Instead, following \citet{Gabor:2015}, {\sc Mufasa}\ implemented a heuristic ``quenching feedback'' in which diffuse gas within massive halos was prevented from cooling. This energy was envisioned to be supplied by an AGN, but {\sc Mufasa}\ did not explicitly model black hole accretion and the interaction of its energy release with surrounding gas. The halo mass scale above which quenching feedback was applied was taken from the best-fit analytic equilibrium model of \citet{Mitra:2015,Mitra:2017}, and evolved slowly upwards with redshift. Despite its simplicity, this prescription displayed impressive successes at reproducing a red sequence and massive central galaxy properties in excellent accord with observations~\citep{Dave:2017b}, albeit with some non-trivial discrepancies such as over-quenching satellites, particularly in the outskirts of large halos~\citep{Rafieferantsoa:2018}. This demonstrated that a model based primarily on starvation of the central galaxy via ``radio mode" feedback~\citep{Croton:2006,Bower:2006} is able to quench the galaxy population in a hydrodynamic simulation in broad agreement with observations. While this model represented an interesting numerical experiment, it would clearly be valuable to implement a more physically-motivated black hole growth and feedback model that retains and perhaps even extends the successes of {\sc Mufasa}'s more heuristic model. This is the primary goal of the {\sc Simba}\ project. As the descendant of {\sc Mufasa}, {\sc Simba}\ marries two lines of investigation to achieve this. First, it builds on the successful {\sc Mufasa}\ model, including its representation of star formation-driven feedback and other modern features. To this, {\sc Simba}\ adds a novel and promising model for black hole growth: torque-limited accretion~\citep{Hopkins:2011,Angles:2013,Angles:2015,Angles:2017}. In this model, black hole growth is regulated by the ability of gas in the inner disk to lose angular momentum via disk instabilities.
\citet{Hopkins:2011} developed an analytic formalism for this, tested and calibrated using sub-pc scale numerical simulations, which yielded a formula that connects the infall rate of material onto the black hole accretion disk with properties of the inner galactic disk. They showed that even at $\sim 1$~kpc resolution typical of cosmological simulations, their gravitational torque accretion formula provides a significantly better match to the measured accretion rate in their high-resolution simulations than employing the canonical Bondi accretion formula used in all other current cosmological black hole growth simulations. \citet{Angles:2013} explored the torque-limited accretion model via the post-processing of zoom simulations, and \citet{Angles:2015} extended this approach to cosmological simulations. Their most significant result was that, unlike Bondi accretion, torque-limited accretion does not require the black hole to self-regulate its own growth. In particular, torque-limited accretion naturally results in black holes growing along the observed galaxy-black hole scaling relations, even without any black hole feedback. There is one free parameter in the model which represents the fraction of material entering the accretion disk that accretes onto the black hole; a plausible choice of $\sim 10$\% provided a good match to data, insensitive to the choice of the (uncertain) black hole seed mass. \citet{Angles:2017} extended these previous works to self-consistently incorporate torque-limited accretion into {\sc Gizmo}, along with bipolar black hole winds, and demonstrated that the results obtained without feedback were reproduced in this case -- in particular, the inclusion of feedback self-consistently confirmed the primary result obtained in the post-processed case that black hole--galaxy scaling relations arise naturally without the black hole self-regulating its own growth. {\sc Simba}\ builds on this work to employ the torque-limited black hole accretion model of \citet{Angles:2017} when accreting from cold or star-forming gas, in order to self-consistently grow black holes within galaxies during the simulation run. The use of torque-limited black hole growth is unique among current cosmological simulations. {\sc Simba}\ also includes Bondi accretion, but only from hot gas when present close to the black hole since it is the physically appropriate model in that case. The second part of {\sc Simba}'s new black hole model involves a novel sub-grid prescription for active galactic nuclei (AGN) feedback. AGN feedback connects flows coming off the black hole accretion disk to energy release on scales of tens or hundreds of kpc. To model this transfer of energy from small to large scales, {\sc Simba}\ utilises kinetic outflows with outflow parameters based on observed AGN feedback. While there is still no well-defined theoretical consensus on the generation of black hole outflows and jets, recent observational progress has been rapid, showing that AGN can drive molecular and ionised outflows with velocities of $\sim 1000\;{\rm km}\,{\rm s}^{-1}$ or more~\citep{Sturm:2011,Greene:2012,Maiolino:2012,Liu:2013,Perna:2017a}, and jets at velocities up to $\sim 10^4\;{\rm km}\,{\rm s}^{-1}$ and more~\citep{Fabian:2012}. 
Generally, high-velocity jets are observed to arise from early-type galaxies hosting massive black holes with low accretion rates relative to their Eddington rates ($f_{\rm Edd}\la$ a few percent), while lower-velocity outflows typically arise in systems with higher $f_{\rm Edd}$~\citep{Best:2012,Heckman:2014}. Extreme systems such as bright quasars often show both types of outflows. {\sc Simba}'s black hole outflows are parameterised to broadly follow such observed trends. {\sc Simba}\ employs kinetic feedback (i.e. gas element kicks) for both feedback modes, with the kick velocity ranging from many hundreds of $\;{\rm km}\,{\rm s}^{-1}$ in low-mass, fast-accreting black holes, up to many thousands of $\;{\rm km}\,{\rm s}^{-1}$ for slower-accreting black holes. A key unique feature is that {\sc Simba}'s kinetic feedback is purely bipolar, with the ejection vector given by the angular momentum of the inner disk. This direction is relatively stable over galaxy dynamical timescales. To be conservative in minimising black hole feedback impact on the galaxy interstellar medium, we employ an opening angle of zero. This is in contrast to other simulations that successfully reproduce massive galaxy properties using Bondi accretion, which employ either spherical thermal input~\citep[e.g. EAGLE;][]{Schaye:2015} or randomise the kinetic feedback's direction on short timescales~\citep[e.g. Illustris-TNG;][]{Weinberger:2017}. {\color{black} Horizon-AGN employed Bondi accretion with a two-mode feedback scheme~\citep{Dubois:2012}, but still used spherical thermal feedback during the high-Eddington growth phase, as later also done in Illustris-TNG. More detailed isolated elliptical simulations have also highlighted the importance of radiative mechanical feedback~\citep{Gan:2014} that can reproduce observations of AGNs in ellipticals such as their duty cycle~\citep{Gan:2019}.} The reason {\sc Simba}\ is able to be successful with a genuinely bipolar model likely traces back to its accretion model: Torque-limited accretion does not require self-regulation of black hole growth, whereas Bondi accretion requires quasi-spherical energy distribution close to the black hole in order to self-regulate its own growth. In {\sc Simba}'s case, the energy input sphericalises at large distances, sufficient in massive halos to quench inflow into the galaxy by keeping the halo gas hot. In this way, {\sc Simba}'s accretion and feedback models work together to build a sub-grid description of AGN feedback that is more closely connected to observations of AGN winds and jets. In addition to kinetic AGN feedback, {\sc Simba}\ also includes X-ray feedback input into surrounding gas. The importance of this feedback channel has been emphasised in zoom simulations by \citet{Choi:2012}, showing that it can potentially drive the quenching of massive galaxies. We adapt this model to operate under the lower-resolution conditions present in {\sc Simba}'s cosmological-scale runs, and show that it plays a minor but non-trivial role in quenching the most massive galaxies. {\sc Simba}\ is the first cosmological-volume simulation to include such X-ray feedback. Another novel aspect of {\sc Simba}\ is that it includes a model for on-the-fly dust production and destruction, broadly following \citet{McKinnon:2017}'s implementation into {\sc Arepo}, in which the dust is passively advected with the gas.
We include dust production from Type II supernovae (SNe) and Asymptotic Giant Branch (AGB) stars, and further growth via condensation from metals, while destruction can occur from sputtering, consumption by star formation, or SNe shocks. The fraction of metals locked into dust can be substantial, leading to significant changes in the predicted mass-metallicity relations. In {\sc Mufasa}, we found it necessary to reduce the SN yields arbitrarily by a factor of two in order to match the observed gas-phase mass-metallicity relation, but in {\sc Simba}\ we can reproduce this as well or better without such arbitrary factors, since a substantial fraction of the metals ends up locked in dust. In this paper, we describe the simulation methodology in {\sc Simba}\ which, besides the new black hole model, also makes various other minor improvements to {\sc Mufasa}\ (\S\ref{sec:sim}). We then present a range of observational comparisons to predicted stellar, gas, and metal properties, analogous to a sampling of results presented for {\sc Mufasa}\ in a series of recent papers~\citep{Dave:2016,Dave:2017a,Dave:2017b,Rafieferantsoa:2018}, paying particular attention to black hole and massive galaxy properties which represent the most direct test of {\sc Simba}'s new AGN feedback model (\S\ref{sec:results}). We show that {\sc Simba}\ reproduces many observations comparably well or better than {\sc Mufasa}, but now with a more realistic and self-consistent description of black hole growth and feedback. We then examine variants of {\sc Simba}'s AGN feedback model in order to isolate the impact of its various components (\S\ref{sec:variants}), showing that the high-velocity jet feedback is crucial for producing a quenched massive galaxy population. Finally, we summarize our results in \S\ref{sec:summary}. \section{Simulation Methodology}\label{sec:sim} \subsection{Code and input physics}\label{sec:code} The {\sc Simba}\ simulations utilise much of the framework of the {\sc Mufasa}\ simulations as described in \citet{Dave:2016}, but there are a number of updates and additions based on recent theoretical and observational results, in addition to the major change of modeling black hole growth and feedback as well as dust. Here we recap the main features of the model, and then describe in more detail the new aspects of {\sc Simba}. {\sc Simba}\ utilises a forked version of the {\sc Gizmo}\ cosmological gravity plus hydrodynamics solver~\citep{Hopkins:2015,Hopkins:2018}, in its Meshless Finite Mass (MFM) version. This code, based on {\sc Gadget-3}~\citep{Springel:2005}, evolves dark matter and gas elements together including gravity and pressure forces, handling shocks via a Riemann solver with no artificial viscosity. It performs very well in numerous standard hydrodynamics tests including strong shocks, rotating disks, and cold blobs moving through hot media~\citep{Hopkins:2015}. It also preserves the mass within each fluid element during the evolution, thereby enabling detailed tracking of gas flows. It thus marries the advantages of a particle-based code, such as adaptivity in space and time, with the hydrodynamics accuracy of a Riemann solver-based mesh code, without the imposition of a Cartesian mesh that can cause difficulties with Galilean invariance and rotating shear flows. Radiative cooling and photoionisation heating are modeled using the {\sc Grackle-3.1} library~\citep{Smith:2017}, including metal cooling and non-equilibrium evolution of primordial elements.
This is an updated version of {\sc Grackle-2.1} used in {\sc Mufasa}, offering two advantages: First, the adiabatic and radiative terms are evolved together during the cooling sub-timestep, unlike the previous operator-split approach where first the system was evolved adiabatically over the full timestep and then cooling was applied; this results in more accurate and stable thermal evolution particularly in the stiff regions of the cooling curve. Second, it includes self-shielding self-consistently during the simulation run, based on the \citet{Rahmati:2013} prescription in which the ionising background strength is attenuated depending (primarily) on gas density. A spatially-uniform ionising background is assumed as specified by \citet{Haardt:2012}, modified to account for self-shielding (A. Emerick, priv. comm.). These changes do not make a noticeable difference to the resulting galaxy population, but they do improve the accuracy of the baryonic thermal evolution which may be particularly important within circum-galactic gas. Furthermore, computing the neutral hydrogen content of gas particles is now done self-consistently within the code rather than via a post-processed application of self-shielding~\citep{Dave:2017a}. As in {\sc Mufasa}, we use an H$_2$-based star formation rate, where the H$_2$ fraction is computed based on the sub-grid model of \citet{Krumholz:2011} based on the metallicity and local column density, with minor modifications as described in \citet{Dave:2016} to account for variations in numerical resolution. The star formation rate is given by the H$_2$ density divided by the dynamical time: SFR$=\epsilon_*\rho_{H2}/t_{\rm dyn}$, where we use $\epsilon_*=0.02$~\citep{Kennicutt:1998}. The chemical enrichment model tracks eleven elements (H,He,C,N,O,Ne,Mg,Si,S,Ca,Fe) during the simulation, with enrichment tracked from Type II supernovae (SNe), Type Ia SNe, and Asymptotic Giant Branch (AGB) stars. {\color{black} The yield tables employed are the same as in {\sc Mufasa}, namely \citet{Nomoto:2006} for SNII yields, \citet{Iwamoto:1999} for SNIa yields, and AGB star enrichment following \citet{Oppenheimer:2006}. However,} we no longer apply an arbitrary reduction of yields by a factor of 2 that was previously needed to match the mass-metallicity relation, and instead lock individual metals into dust; we detail the dust model implementation in \S\ref{sec:dust}. Type Ia SNe and AGB wind heating are also included as in {\sc Mufasa}, along with ISM pressurisation at a minimum level as required to resolve the Jeans mass in star-forming gas as described in \citet{Dave:2016}. The model for star formation-driven galactic winds closely follows that in {\sc Mufasa}; we continue to use decoupled two-phase winds, with 30\% of wind particles ejected ``hot" i.e. with a temperature set by the supernova energy minus the wind kinetic energy. However, we make a significant update to the mass loading factor scaling with stellar mass. {\sc Mufasa}\ used the scalings taken from \citet{Muratov:2015}, who computed the outflow rates based on mass advection across a boundary at one-quarter of the virial radius in the FIRE zoom simulations. \citet{Angles:2017b} used similar FIRE simulations, but instead tracked individual particles in order to quantify the mass outflow rates out of the star-forming region, thus providing a more direct measurement of the amount of material leaving the ISM. 
This yields a cumulative mass loading factor versus stellar mass as shown in Figure~\ref{fig:eta_mstar}, which is well fit by a broken power law at $M_0=5.2\times 10^9 M_\odot$: \begin{equation} \eta(M_*) = \begin{cases} 9\Big(\frac{M_*}{M_0}\Big)^{-0.317},& \text{if } M_*<M_0\\ 9\Big(\frac{M_*}{M_0}\Big)^{-0.761},& \text{if } M_*>M_0\\ \end{cases} \end{equation} and is independent of redshift. This has a similar slope to \citet{Muratov:2015} below $M_0$ but with roughly double the amplitude, and is much steeper above $M_0$. {\color{black} It is also similar to the assumed mass loading factor in Illustris-TNG~\citep{Pillepich:2018}. Similar to TNG, {\sc Simba}\ employs a flat $\eta(M_*)$ below an $M_*$ corresponding to 16 star particles ($M_*\leq 2.9\times 10^8 M_\odot$ for the $100h^{-1}{\rm Mpc}$ run), otherwise poorly-resolved forming galaxies are unable to grow owing to excessive feedback.} As in {\sc Mufasa}, we further apply a reduction in $\eta$ at high redshifts, in order to allow for early galaxy growth in poorly resolved situations. {\sc Mufasa}\ was found to underproduce $z>6$ galaxies, and hence we strengthen the suppression factor at $z>3$ from $(a/0.25)$ to $(a/0.25)^{f_a}$, where $a$ is the expansion factor. We tune the value of $f_a$ based on the resolution of the simulation, since the origin of the lack of early galaxy formation owes to poor resolution. Testing has shown that we obtain converged results that match $z\ga 6$ observations (shown later) if we use $f_a=2$ at our largest ($100h^{-1}{\rm Mpc}$) volume's resolution, $f_a=1.5$ at $8\times$ higher mass resolution, and so on. Fortunately, because galaxy growth is very rapid at high redshifts, this choice makes little difference to galaxy predictions at $z\la 3$, i.e. over most of cosmic time. Note that the FIRE simulations do not make strong predictions for $\eta(M_*)$ at $z\ga 3$ owing to the limited dynamic range covered by their zooms at early epochs, so this choice is not in obvious conflict with using FIRE scalings at lower redshifts. However, the ad hoc nature of this correction means that results for galaxy stellar growth at high redshifts from {\sc Simba}\ should be considered as tuned rather than predictive. \begin{figure} \includegraphics[width=\columnwidth]{WindLoading_vs_Mstar.pdf} \vskip-0.1in \caption{Mass loading factor $\eta$ versus stellar mass $M_*$ from a suite of FIRE simulations analysed via particle tracking in \citet{Angles:2017b}. The points show values measured at various redshifts, while the orange line is the best-fit relation. {\color{black} The gray solid line shows the \citet{Muratov:2015} scaling and the dashed and dotted gray lines show the mass loading factors used in the Illustris and Illustris-TNG simulations ($\eta(M_{\rm halo})$ fitting functions from \citealt{Pillepich:2018}, converted to $\eta(M_*)$ using the $M_*$--$M_{\rm halo}$ relation of \citealt{Moster:2013}).}} \label{fig:eta_mstar} \end{figure} {\color{black} A new feature in {\sc Simba}\ is metal-loaded winds. When a wind particle is launched, it extracts some metals from nearby particles to represent the local enrichment by the supernovae driving the wind.
The metallicity added to the wind particle is given by \begin{equation} dZ = f_{\rm SNII} y_{\rm SNII}(Z) / {\rm MAX}(\eta,1) \end{equation} where $f_{\rm SNII}=0.18$ is the stellar mass fraction lost to supernovae (assumed to be instantaneous in {\sc Simba}), $y_{\rm SNII}(Z)$ is the metal-dependent Type II SN yield for each species, and $\eta$ is the mass loading factor. This amount is subtracted from nearby gas in a kernel-weighted manner. If there are not enough metals nearby (as can happen early on), then $dZ$ is reduced appropriately. In all circumstances, the total metal mass is conserved. The metal loading factor (i.e. the ejected metallicity relative to that of the surrounding gas) can be a factor of two or larger when the ISM has a metallicity $\ll y_{\rm SNII}$, but is more typically around 10--20\%, in broad agreement with metal loading estimates from zoom simulations~\citep{Muratov:2017,Christensen:2018}.} {\sc Simba}\ continues to use the wind velocity scalings from \citet{Muratov:2015} as in {\sc Mufasa}, since the scaling follows the expected quasi-linear scaling of wind speed with escape velocity as observed~\citep[e.g.][]{Martin:2005}, and also because a full analysis of the velocity distribution of outflowing gas is not available from FIRE. In {\sc Mufasa}, the amplitude was taken as a tunable parameter, and set to $2v_{\rm circ}$ at $v_{\rm circ}=200\;{\rm km}\,{\rm s}^{-1}$. Owing to the increase in the mass loading factor for low-mass galaxies, we find that a somewhat lower value of the wind velocity is required to compensate, so in {\sc Simba}\ we reduce the normalisation to 1.6: \begin{equation} v_w = 1.6 \Big(\frac{v_{\rm circ}}{200 \;{\rm km}\,{\rm s}^{-1}}\Big)^{0.12} v_{\rm circ} + \Delta v(0.25R_{\rm vir}) \end{equation} where $\Delta v(0.25R_{\rm vir})$ is the velocity corresponding to the potential difference between the launch point and one-quarter of the virial radius \citep[see][]{Dave:2016}. A related new aspect of {\sc Simba}\ is that we limit the wind kinetic energy to the available supernova energy by attenuating the wind speed when needed, although this only has a noticeable effect in small galaxies at very early epochs. {\sc Simba}\ uses an on-the-fly approximate friends-of-friends (FOF) finder applied to stars and dense gas, as described in \citet{Dave:2016}, in order to compute galaxy properties such as $M_*$, and as in {\sc Mufasa}\ obtains $v_{\rm circ}$ using a scaling based on the baryonic Tully-Fisher relation. Besides algorithmic changes to improve parallel performance, the only change to this is that the FOF finder now also groups black holes into galaxies. \subsection{Black hole growth} The most significant change in {\sc Simba}\ relative to {\sc Mufasa}\ is that black holes are seeded and grown during the simulation, and the accretion energy is used to drive feedback that serves to quench galaxies. In this section we describe {\sc Simba}'s two-mode accretion model. The first mode closely follows the torque-limited accretion model presented in \citet{Angles:2017}, and we refer the reader there for full details. The second mode uses Bondi accretion, but solely from the hot gas component. In future work (Angl\'es-Alc\'azar et al., in prep.) we will show that this contribution from Bondi accretion is sub-dominant for all but the highest mass black holes.
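To summarise how the two channels described below are combined in practice, the following minimal Python sketch evaluates the total accretion rate, including the Eddington caps specified later in this section. The function names, the cgs unit choices, and the application of the caps before the radiative-efficiency factor are illustrative assumptions rather than the actual simulation code.

\begin{verbatim}
import numpy as np

# Physical constants in cgs units.
G = 6.674e-8         # gravitational constant [cm^3 g^-1 s^-2]
C = 2.998e10         # speed of light [cm s^-1]
M_P = 1.673e-24      # proton mass [g]
SIGMA_T = 6.652e-25  # Thomson cross-section [cm^2]

def mdot_eddington(m_bh, eta=0.1):
    # Standard Eddington accretion rate for a radiative
    # efficiency eta: 4 pi G M m_p / (eta sigma_T c).
    return 4.0 * np.pi * G * m_bh * M_P / (eta * SIGMA_T * C)

def bh_accretion_rate(mdot_torque, mdot_bondi, m_bh, eta=0.1):
    # Torque-limited accretion is capped at 3x Eddington and
    # Bondi accretion at 1x Eddington; the total is reduced by
    # the radiative efficiency, mirroring the total-rate
    # equation given in the text.
    mdot_edd = mdot_eddington(m_bh, eta)
    mdot_torque = min(mdot_torque, 3.0 * mdot_edd)
    mdot_bondi = min(mdot_bondi, mdot_edd)
    return (1.0 - eta) * (mdot_torque + mdot_bondi)
\end{verbatim}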
\subsubsection{Torque-limited accretion from cold gas}

We model the gas inflow rate $\dot{M}_{\rm Torque}$ driven by disk gravitational instabilities from galactic scales down to the accretion disk surrounding the central black hole following \citet{Hopkins:2011}:
\begin{equation}\label{eq:torque}
\begin{split}
\dot{M}_{\rm Torque} \; \approx \; \epsilon_{\rm T} \, f_{\rm d}^{5/2} \times \left ( \frac{M_{\rm BH}}{10^{8}\,{\rm M_{\odot}}} \right )^{1/6} \left ( \frac{M_{\rm enc}(R_{0})}{10^{9}\,{\rm M_{\odot}}} \right ) \\
\times \left ( \frac{R_{0}}{100\,{\rm pc}} \right )^{-3/2} \left (1 + \frac{f_{0}}{f_{\rm gas}} \right )^{-1} \, {\rm M_{\odot}\,yr^{-1}},
\end{split}
\end{equation}
where $f_{\rm d}$ is the disk mass fraction (including both stars and gas), $M_{\rm enc}(R_{0})$ is the total gas+stellar mass, $f_{\rm gas}$ is the gas mass fraction in the disk component, $f_{0} \approx 0.31 \, f_{\rm d}^{2} \, (M_{\rm d}(R_{0})/10^{9}{\rm M_{\odot}})^{-1/3}$, and all quantities are evaluated within a distance $R_{0}$ of each black hole enclosing the nearest 256 gas elements, with an upper limit $R_{0} \leq 2\,h^{-1}{\rm kpc}$ (comoving) imposed throughout the simulation. Evaluating this equation for $\dot{M}_{\rm Torque}$ requires separating the spheroidal and disk components within $R_{0}$, which we do by means of the kinematic decomposition implemented in \citet{Angles:2017}; {\color{black} see the Appendix of \citet{Angles:2015} for further tests of this.} Unlike previous work, we evaluate torque-limited accretion only for the cold ($T<10^{5}$~K) gas within the black hole kernel, since the model relies on instabilities in a cold gaseous disk to drive mass inflow. {\color{black} We consider all ISM gas to be in the cold phase, where ISM gas is taken to be gas that has been artificially pressurised in order to resolve the Jeans mass as described in \citet{Dave:2016}. In our $100h^{-1}{\rm Mpc}$ run this corresponds to gas with a hydrogen number density $n_{\rm H}>0.13$~cm$^{-3}$ and within 0.5~dex of the pressurised ISM temperature floor.}

The normalization factor $\epsilon_{\rm T} \equiv \epsilon_{\rm m} \times \alpha_{\rm T}$ encapsulates processes that affect the radial transport of gas on unresolved scales (e.g. nuclear star formation and stellar feedback, and mass loss in winds from the accretion disk), where $\alpha_{\rm T} = 5$ is the original normalization of $\dot{M}_{\rm Torque}$ in \citet{Hopkins:2011} and we set $\epsilon_{\rm m} = 0.1$ to match the normalization of the local $M_{\rm BH}$--$M_\star$ relation as in \citet{Angles:2017}. One can view $\alpha_{\rm T}$ as corresponding to an efficiency of transport of material from the inner galactic disk onto the black hole accretion disk, and $\epsilon_m$ as the efficiency of transport from the accretion disk onto the black hole, for which 10\% is a canonical value. However, $\alpha_{\rm T}$ is itself fairly uncertain, and in the end the meaningful subgrid parameter is only the combination $\epsilon_{\rm T}$.
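As an illustration, equation~\ref{eq:torque} can be evaluated directly. The following sketch (masses in $M_\odot$, $R_0$ in pc) is ours, not the production implementation:
\begin{verbatim}
# Sketch of the torque-limited accretion rate (eq. above);
# illustrative only, not the production Gizmo code.
EPS_M, ALPHA_T = 0.1, 5.0
EPS_T = EPS_M * ALPHA_T

def mdot_torque(m_bh, m_enc, m_disk, f_gas, r0_pc):
    f_d = m_disk / m_enc                    # disk (stars+gas) fraction
    f_0 = 0.31 * f_d**2 * (m_disk / 1e9) ** (-1.0 / 3.0)
    return (EPS_T * f_d**2.5
            * (m_bh / 1e8) ** (1.0 / 6.0)
            * (m_enc / 1e9)
            * (r0_pc / 100.0) ** (-1.5)
            / (1.0 + f_0 / f_gas))          # Msun / yr

# e.g. a 1e8 Msun BH in a gas-rich nuclear disk:
print(mdot_torque(1e8, 1e9, 5e8, 0.5, 100.0))
\end{verbatim}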
\subsubsection{Bondi accretion from hot gas}

Hot gas can also accrete onto black holes, but in this case Bondi accretion is a more appropriate physical mechanism since the hot gas is typically more spherically distributed. Thus we also account for Bondi accretion, but only from non-ISM gas with a temperature of $T>10^{5}$~K. Bondi-Hoyle-Lyttleton accretion is computed via the standard \citet{Bondi:1952} formula:
\begin{equation}
\dot{M}_{\rm Bondi} = \epsilon_m \frac{4\pi G^2 M_{\rm BH}^2 \rho}{(v^2+c_s^2)^{3/2}}
\end{equation}
where $\rho$ is the mean density of hot ($T>10^{5}$~K) gas computed within the black hole accretion kernel, $c_s$ is the kernel-averaged sound speed of the hot gas, and $v$ is the kernel-averaged velocity of the hot gas relative to the black hole. In practice, we neglect the gas relative velocity and set $v=0$, since the dynamics of the black hole particle is controlled by the repositioning algorithm (\S\ref{sec:seeds}). We do not include any boost factor, since hot gas is likely distributed quite smoothly over the size of the black hole kernel. For consistency with the gravitational torque rate, we suppress Bondi accretion by the same efficiency $\epsilon_{\rm m} = 0.1$.

\subsubsection{Numerical implementation}

The total accretion rate for a given black hole is then
\begin{equation}\label{eq:mdot}
\dot{M}_{\rm BH} = (1 - \eta) \times \, (\dot{M}_{\rm Torque} + \dot{M}_{\rm Bondi}),
\end{equation}
where we adopt a constant radiative efficiency $\eta = 0.1$ \citep[e.g.][]{YuTremaine:2002}. We apply an overall limit to the accretion rate, based on the given black hole's Eddington accretion rate. For torque-limited accretion, we apply a limit of 3 times the Eddington rate, based on the idea that non-spherical accretion can potentially exceed Eddington, as occasionally observed particularly at higher redshifts~\citep{Martinez:2018} and consistent with recent accretion disk simulations \citep[e.g.][]{Jiang:2014}. For Bondi accretion, we apply a strict Eddington limit, since this is intended to represent quasi-spherical accretion from hot gas where the Eddington limit is directly applicable.

Numerically, we follow \citet{SpringelBH:2005} and track separately the physical black hole mass, which grows continuously according to equation~\ref{eq:mdot}, and the dynamical black hole particle mass, which grows by stochastically accreting a fraction $f_{\rm m}$ of the mass of gas particles within $R_0$ with a probability that statistically satisfies mass conservation and the desired mass outflow rate in AGN winds (see below). A time step limiter is imposed on black hole particles such that black holes do not grow by more than 0.1\% of their current mass in a single time step.
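A minimal sketch of the combined accretion rate, including the mode-dependent Eddington caps described above, is as follows (cgs units; the Eddington rate expression is the standard one and is our assumption, since it is not written out above):
\begin{verbatim}
import numpy as np

G = 6.674e-8                 # gravitational constant [cgs]
EPS_M, ETA_R = 0.1, 0.1      # suppression and radiative efficiencies

def mdot_bondi(m_bh, rho_hot, cs_hot):
    """Bondi rate from hot (T > 1e5 K) gas only; v = 0 since the
    BH is repositioned rather than moving dynamically."""
    return EPS_M * 4.0 * np.pi * G**2 * m_bh**2 * rho_hot / cs_hot**3

def mdot_edd(m_bh):
    """Standard Eddington rate, 4 pi G m_p M_BH / (eta sigma_T c)."""
    m_p, sigma_T, c = 1.673e-24, 6.652e-25, 2.998e10
    return 4.0 * np.pi * G * m_p * m_bh / (ETA_R * sigma_T * c)

def mdot_bh(mdot_torque, mdot_b, m_bh):
    """Total BH growth rate with per-mode Eddington caps:
    torque-limited capped at 3x Edd, Bondi at 1x Edd."""
    medd = mdot_edd(m_bh)
    return (1.0 - ETA_R) * (min(mdot_torque, 3.0 * medd)
                            + min(mdot_b, medd))
\end{verbatim}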
``low excitation" radio galaxies) above and below roughly a percent of the Eddington rate~\citep{Best:2012}, with the former tending to be found in lower-mass, bluer host galaxies and the latter in massive early-types. {\sc Simba}'s AGN feedback model is designed to directly mimic the energy injection into large-scale surrounding gas from these two modes, using purely bipolar feedback with velocities and temperatures taken as much as possible from AGN outflow observations. We also include X-ray heating from black holes broadly following the model introduced by \citet{Choi:2012}, modified to operate at the lower resolution of {\sc Simba}'s cosmological simulations. We now describe these subgrid models in more detail. \subsubsection{Kinetic feedback} For the high-$f_{\rm Edd}$ mode outflows, we choose an outflow velocity based on ionised gas linewidth observations of X-ray detected AGN from SDSS by \citet[see Fig. 8 of][]{Perna:2017a}, which we parameterise in terms of the black hole mass $M_{\rm BH}$ (in $M_\odot$) as \begin{equation} v_{\rm w,EL} = 500+500(\log{M_{\rm BH}}-6)/3\;\;{\rm km}\,{\rm s}^{-1}; \label{eq:vradiative} \end{equation} we refer to these as radiative AGN winds. The gas is ejected without modifying its temperature, meaning that it is ejected at the ISM temperature given by our ISM pressurisation model. This is generally consistent with observations suggesting typical electron temperature of $\sim 10^4$K for the ionised gas outflows~\citep{Perna:2017b}. This is broadly similar to the AGN feedback model implemented into {\sc Gizmo}\ by \citet{Angles:2017}, except here with a variable outflow velocity. If the Eddington ratio is below $f_{\rm Edd}<0.2$, we begin to transition to a jet mode with a velocity that becomes increasingly strong as $f_{\rm Edd}$ drops, as follows: \begin{equation} v_{\rm w,jet} = v_{\rm w,EL} + 7000 \log{(0.2/f_{\rm Edd})}\;\;{\rm km}\,{\rm s}^{-1}, \label{eq:vjet} \end{equation} with a cap to the velocity increase at $7000\;{\rm km}\,{\rm s}^{-1}$. In this way, full speed jets are achieved only once $f_{\rm Edd}\la 0.02$. To trigger jet mode, we also include an additional criterion requiring $M_{\rm BH}>M_{\rm BH,lim}$, motivated by observations showing that radio jets only arise in galaxies with velocity dispersions corresponding to black holes with $M_{\rm BH}\ga 10^8M_\odot$~\citep{Barisic:2017}. To be conservative we choose $M_{\rm BH,lim}$ lower than this, namely $M_{\rm BH,lim}=10^{7.5}M_\odot$. Physically, this mass limit is implemented in order to prevent small black holes that temporarily have low accretion rates from driving high-powered jets. Based on observations of AGN outflows and the inferred momentum and energy input \citep[e.g.][]{Fiore:2017,Ishibashi:2018}, we set the amount of material ejected in the AGN winds in order to obtain a momentum input of $ \dot{P}_{\rm out} = 20\,L/c$, where $L = \eta \, \dot{M}_{\rm BH} \, c^2$ is the bolometric luminosity of the AGN, $\eta=0.1$, and $c$ is the speed of light. This value is kept constant for both modes, resulting in the mass loading factor in AGN winds scaling inversely with the outflow velocity. 
For our parameter choices, a black hole with $M_{\rm BH} = 10^9\,M_\odot$ in the high-$f_{\rm Edd}$ mode injects outflows with $v_{\rm w,EL} = 1000\,\;{\rm km}\,{\rm s}^{-1}$, mass loading $\dot{M}_{\rm out,EL} / \dot{M}_{\rm BH} \approx 600$, and kinetic energy efficiency $\dot{E}_{\rm kin,EL} \approx 0.03\,L$, while in the low-$f_{\rm Edd}$ mode at full jet speed it reaches $v_{\rm w,jet}=8000\;{\rm km}\,{\rm s}^{-1}$, $\dot{M}_{\rm out,jet} / \dot{M}_{\rm BH} \approx 75$, and $\dot{E}_{\rm kin,jet} \approx 0.3\,L$. Particles are selected to be ejected randomly from within the black hole accretion kernel, with probability
\begin{equation}\label{eq:stoch2}
p_{j} \; = \; \frac{ 1 - f_{\rm m}}{f_{\rm m}} \times \frac{w_{j}}{m_j} \times \dot{M}_{\rm BH} \, \Delta t,
\end{equation}
where $w_{j}$ is a kernel weight ($\Sigma_j \, w_j = 1$) and $f_{\rm m}$ is the fraction of mass accreted by the black hole. The desired mass loading factor relative to the black hole accretion rate ($\dot{M}_{\rm out} / \dot{M}_{\rm BH} = (1 - f_{\rm m}) / f_{\rm m}$) is achieved by setting $f_{\rm m}$ such that:
\begin{equation}\label{eq:facc}
\frac{\dot{P}_{\rm out}}{L/c} \, = \, 20 \, = \, \frac{v_{\rm w}}{\eta \,c} \, \left ( \frac{1 - f_{\rm m}}{f_{\rm m}} \right ).
\end{equation}

All outflows are ejected in a purely bipolar fashion. That is, we eject gas elements in a direction parallel or anti-parallel to the angular momentum vector of the inner disk that we use to compute the black hole accretion (typically, the 256 nearest gas particle neighbours to the black hole). We assume zero opening angle for all winds; this is probably conservative for the radiative mode winds, as their opening angles are likely to be wider, but for the jet mode it is a good approximation to observed highly collimated jets. Even in the case of initially spherical radiative winds, it is likely that there is substantial collimation from the inner galaxy disk on scales that we cannot resolve in our cosmological runs, so the assumption of collimated winds is likely closer to correct. Since the wind particles are launched from their current location, this results in a collimated outflow with a small but finite extent ($\la 1$~kpc). We note that the outflow direction can precess owing to variations in the inner disk, but is in practice typically stable over tens to hundreds of Myr. Hence any effect of ``sphericalising'' the jet energy input on super-galactic scales is done self-consistently via the hydrodynamic interactions of the outflows with ambient gas at larger scales.

Since jets are observed to carry very hot gas, we raise the temperature of jet mode (only) outflows to the virial temperature of the halo, specifically $T_{\rm vir}=9.52\times 10^7 (M_{\rm halo}/10^{15}M_\odot)^{1/3}$~K~\citep{Voit:2005}. This choice is motivated by observations showing that jets contain mostly synchrotron-emitting plasma, and eventually thermalise their energy into surrounding hot gas at around $T_{\rm vir}$~\citep{Fabian:2012}. The extra energy input required for this is typically less than a few percent of the jet kinetic energy, so it does not figure significantly into the overall energy budget.

We apply a short hydrodynamic and radiative cooling decoupling time of $10^{-4}t_H$ to the outflowing wind gas elements, where $t_H$ is the Hubble time at launch. This avoids further entrainment within the unresolved ISM close to the black hole, since the mass loading is already accounted for via the assumption of constant momentum input of $20L/c$. This also avoids some numerical inaccuracies from high Mach number shocks in very dense gas. We note that for the jet mode, this can result in a decoupled distance of up to tens of kpc at the present epoch. Hence the jet energy begins to be deposited at a distance comparable to the extent of observed radio lobes. {\sc Gizmo}\ employs a \citet{Durier:2012} timestep limiter in order to ensure proper interactions of the high-speed winds and their surrounding gas as they recouple.
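The numbers quoted above follow directly from the momentum condition of equation~\ref{eq:facc}. A short sketch that reproduces them (constants are standard; function names are ours):
\begin{verbatim}
import numpy as np

C = 2.998e5          # speed of light [km/s]
ETA_R = 0.1          # radiative efficiency
P_BOOST = 20.0       # momentum input, Pdot_out = 20 L/c

def v_wind(m_bh, f_edd):
    """AGN wind speed [km/s] from eqs. for v_w,EL and v_w,jet."""
    v = 500.0 + 500.0 * (np.log10(m_bh) - 6.0) / 3.0
    if f_edd < 0.2 and m_bh > 10**7.5:       # jet mode ramp-up
        v += min(7000.0, 7000.0 * np.log10(0.2 / f_edd))
    return v

def mass_loading(v_w):
    """Mdot_out/Mdot_BH = (1-f_m)/f_m such that Pdot = 20 L/c."""
    return P_BOOST * ETA_R * C / v_w

for f_edd in (0.5, 0.02):
    v = v_wind(1e9, f_edd)
    ml = mass_loading(v)
    e_kin = 0.5 * ml * v**2 / (ETA_R * C**2)  # Edot_kin / L_bol
    print(f"f_edd={f_edd}: v={v:.0f} km/s, "
          f"loading={ml:.0f}, Ekin/L={e_kin:.2f}")
# reproduces ~(1000 km/s, 600, 0.03) and ~(8000 km/s, 75, 0.27)
\end{verbatim}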
Our model has similarities to the two-mode thermal and kinetic AGN feedback model employed in Illustris-TNG~\citep{Weinberger:2017}. The main differences are as follows: (i) Illustris-TNG uses Bondi accretion rather than torque-limited accretion for cold, rotationally supported gas. (ii) Illustris-TNG uses spherical thermal feedback at high $f_{\rm Edd}$ rather than kinetic feedback. This may owe to the fact that torque-limited accretion does not require self-regulation, while Bondi accretion does~\citep{Angles:2013}, and hence {\sc Simba}\ can employ non-spherical feedback during the growth phase and yield black holes consistent with observed scaling relations. (iii) At low-$f_{\rm Edd}$, Illustris-TNG randomises the direction of the jets at each timestep, rather than always ejecting jets perpendicular to the inner disk. Our approach seems more physically motivated, since jets are not known to dramatically precess on timescales of $\sim$Myr (though they may occasionally do so over hundreds of Myr). (iv) Illustris-TNG uses, at maximum, 200\% of the AGN bolometric luminosity (assuming a 10\% radiative efficiency), whereas for our model, the maximum is approximately a third of the bolometric luminosity. Despite these and other minor differences, the use of two-mode AGN feedback as in Illustris-TNG and {\sc Simba}\ seems to be a reasonably successful approach in state-of-the-art AGN feedback models.

\subsubsection{X-ray feedback}

We include energy input into surrounding gas from X-rays emitted by the accretion disk, as motivated and discussed in \citet{Choi:2012}. Specifically, we compute the volume heating rate owing to X-rays following equation~12 of \citet{Choi:2012}, assuming (as they did) a radiative efficiency of 0.1. We only apply this heating when jet mode is activated, as the lower velocity winds typically arise in more gaseous blue galaxies for which radiative losses would be severe~\citep{Best:2012}. To be more explicit, we assume that more gas-rich galaxies are able to absorb and radiate away the X-ray energy, so we implement a gas fraction threshold such that we only apply X-ray heating if $f_{\rm gas} < 0.2$, where $f_{\rm gas}=M_{\rm gas}/M_*$ as computed by our galaxy finder, and we only include X-ray heating in galaxies with full-velocity jets.

The X-ray heating is applied to gas within the black hole accretion kernel, scaled inversely with the square of the distance between the gas elements and the black hole, including Plummer softening based on the gas's smoothing length in order to mitigate large energy deposition in gas close to the black hole. For non-ISM gas, we directly increase the gas's temperature according to the heating flux at the gas's position. For ISM gas, because heat deposited into the low-resolution, pressurised ISM that we assume in {\sc Simba}\ would be quickly radiated away and is not physically well motivated, we instead apply half of the X-ray energy as a radial outwards kick; the remainder is added as heat. We further limit the total energy input in both kinetic and thermal forms to the overall available heating energy; if, while looping over BH neighbours, the X-ray energy input exceeds this value, then no further X-ray heating is done for that black hole at that timestep. The X-ray heating has a fairly minimal effect on the galaxy mass function, but it provides an important additional energy input to more fully quench massive galaxies, as we discuss in \S\ref{sec:variants}.
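A sketch of how this deposition might look in code is below. The inverse-square kernel weighting and the halved kick/heat split for ISM gas follow the description above; the function and argument names are ours, and the total heating energy itself (equation~12 of \citealt{Choi:2012}) is assumed to be computed elsewhere.
\begin{verbatim}
import numpy as np

def xray_weights(r, eps):
    """Distance weighting within the BH kernel: inverse square
    with Plummer softening (eps ~ each gas smoothing length)."""
    w = 1.0 / (r**2 + eps**2)
    return w / w.sum()

def deposit_xray(E_xray, r, eps, mass, is_ism):
    """Return (heat energy, kick speed) per gas element.
    ISM gas gets half its share as a radial outward kick and
    half as heat; non-ISM gas is heated directly.
    Illustrative only, not Simba's internal API."""
    dE = E_xray * xray_weights(r, eps)
    dE_heat = np.where(is_ism, 0.5 * dE, dE)
    # kick speed from 0.5*m*dv^2 = 0.5*dE  ->  dv = sqrt(dE/m)
    dv_kick = np.where(is_ism, np.sqrt(dE / mass), 0.0)
    return dE_heat, dv_kick
\end{verbatim}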
\subsection{Black hole seeding and dynamics}\label{sec:seeds}

We use the on-the-fly FOF algorithm to seed black holes in galaxies dynamically during the simulation \citep[e.g.][]{DiMatteo:2008,Angles:2017}. If a galaxy reaches a stellar mass $M_{*} > \gamma_{\rm BH} \times M_{\rm seed}$ and it does not already contain a black hole particle, then the star particle closest to the center of mass of the galaxy is converted into a black hole particle. For our fiducial simulations, we employ $M_{\rm seed} = 10^4$\,M$_{\odot}/h$~and $\gamma_{\rm BH} = 3\times10^5$, which places black holes in galaxies with $M_{*} \gtrsim 10^{9.5}$\,M$_{\odot}$. This somewhat high stellar mass threshold for black hole seeding is motivated by recent simulations from the FIRE project, showing that stellar feedback strongly suppresses black hole growth in low mass galaxies by evacuating the nuclear gas reservoir on $<100$\,pc scales \citep{Angles:2017c}. A qualitatively similar effect was also found in EAGLE~\citep{Bower:2017,McAlpine:2018} and {\sc Ramses}-based simulations~\citep{Dubois:2015,Habouzit:2017}, though their use of Bondi accretion may inhibit the growth of low mass black holes even in the absence of resolved stellar feedback (owing to the strong dependence $\dot{M}_{\rm Bondi} \propto M_{\rm BH}^{2}$). Owing to poorer cosmological resolution, as well as {\sc Simba}'s decoupled kinetic winds, which explicitly avoid interaction of star formation feedback with ISM gas, {\sc Simba}\ does not reproduce this effect self-consistently. Hence we simply seed black holes in the regime where they are expected to grow more efficiently. We note that our results are insensitive to the exact choice of $M_{\rm seed}$ and stellar mass threshold~\citep{Angles:2015}.

We assume that dynamical friction is efficient enough to maintain black holes near the host galaxy's center. At every time step, black hole particles are repositioned to the location of the potential minimum within the FOF host group, if it is found within a distance $<4\times R_0$, where $R_{0}$ is the size of the black hole kernel used to compute the accretion rate. The black hole particle velocity is then set to the center of mass velocity of the FOF group. While current cosmological large-volume simulations cannot self-consistently model the dynamics of black holes within galaxies, this algorithm is sufficient to capture the mass growth and feedback of ``well-behaved'' central black holes \citep[see][for an attempt to include sub-grid dynamic friction for black holes in cosmological simulations]{Tremmel:2017}. Any two black holes located within $R_{0}$ are allowed to merge instantaneously if their relative velocity is lower than three times their mutual escape velocity.
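The seeding and merging criteria reduce to two simple checks, sketched below (assuming $h=0.68$ to convert the seed mass; names ours, not the production code):
\begin{verbatim}
M_SEED = 1e4 / 0.68      # seed mass [Msun], assuming h = 0.68
GAMMA_BH = 3e5           # seeding threshold multiplier

def needs_seed(mstar, has_bh):
    """Seed a BH (convert the star particle nearest the galaxy's
    centre of mass) once M* > gamma_BH * M_seed ~ 10^9.5 Msun."""
    return (not has_bh) and mstar > GAMMA_BH * M_SEED

def can_merge(separation, v_rel, v_esc, r0):
    """Two BHs within the accretion kernel merge instantaneously
    if v_rel is below 3x their mutual escape velocity."""
    return separation < r0 and v_rel < 3.0 * v_esc

print(needs_seed(5e9, has_bh=False))  # True: above ~4.4e9 Msun
\end{verbatim}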
\subsection{Dust production, growth and destruction}\label{sec:dust}

{\sc Simba}\ includes a dust physics module to track the lifecycle of cosmic dust. In this implementation, dust is passively advected following the gas particles. This treatment is essentially accurate, as gas drag usually decelerates grains on time scales too short to resolve in our simulations, especially when radiation pressure is weak. We additionally assume all dust grains have the same physical properties, with a fixed radius $a = 0.1~\mu$m. We ignore active dust cooling, which will be included in future work.

Dust is produced by condensation of metals from the ejecta of SNe and AGB stars. We follow the prescription of \citet{Dwek:1998}, with updated condensation efficiencies based on recent studies. In the following, $m_{i,d}^j$ refers to the dust mass of the $i$th element (C, O, Mg, Si, S, Ca, Fe) produced by the $j$th stellar process (Type II SNe or AGB stars), whereas $m_{i,{\rm ej}}^j$ refers to the mass of ejecta from the $j$th process. The mass of dust produced by AGB stars with a carbon-to-oxygen mass ratio C/O $>$ 1 is expressed as
\begin{equation}
m_{i,d}^{\rm AGB}=
\begin{cases}
\delta_{\rm C}^{\rm AGB} (m_{C,{\rm ej}}^{\rm AGB} - 0.75 m_{O,{\rm ej}}^{\rm AGB}), & i = {\rm C}\\
0, & {\rm otherwise,}
\end{cases}
\end{equation}
where $\delta_i^{\rm AGB}$ is the condensation efficiency of element $i$ for AGB stars. The mass of dust produced by AGB stars with a carbon-to-oxygen mass ratio C/O $<$ 1 is expressed as
\begin{equation}
m_{i,d}^{\rm AGB}=
\begin{cases}
0, & i = {\rm C}\\
16 \sum \limits_{j=\rm{Mg,Si,S,Ca,Fe}} \delta_j^{\rm AGB} m_{j, {\rm ej}}^{\rm AGB}/\mu_j, & i = {\rm O}\\
\delta_i^{\rm AGB} m_{i, {\rm ej}}^{\rm AGB}, & {\rm otherwise,}
\end{cases}
\label{eq:2}
\end{equation}
where $\mu_j$ is the mass of element $j$ in atomic mass units. The mass of dust produced by Type II SNe is described as
\begin{equation}
m_{i,d}^{\rm SNII}=
\begin{cases}
16 \sum \limits_{j=\rm{Mg,Si,S,Ca,Fe}} \delta_j^{\rm SNII} m_{j, {\rm ej}}^{\rm SNII}/\mu_j, & i = {\rm O}\\
\delta_i^{\rm SNII} m_{i, {\rm ej}}^{\rm SNII}, & {\rm otherwise,}
\end{cases}
\label{eq:3}
\end{equation}
where $\delta_i^{\rm SNII}$ is the condensation efficiency of element $i$ for Type II SNe. We take a fixed dust condensation efficiency $\delta_i^{\rm AGB}=0.2$ based on the theoretical models of \citet{Ferrarotti:2006}. Guided by the computations of \citet{Bianchi:2007}, we choose the dust condensation efficiency of Type II SNe $\delta_i^{\rm SNII}=0.15$ to match the low-metallicity end of the observed relation between dust-to-gas mass ratios (DGR) and gas-phase metallicities \citep{Remy-Ruyer:2014}. We omit the condensation of Type Ia SNe ejecta, as recent work suggests that Type Ia SNe are not significant sources of dust production \citep[see][]{Nozawa:2011,Dwek:2016,Gioannini:2017}. This is different from \citet{McKinnon:2016} and \citet{Popping:2017}, where Type Ia SNe are assumed to have the same condensation efficiency as Type II SNe.

Once dust grains are produced, they can grow by accreting gas-phase metals. Following \citet{Dwek:1998}, the growth rate of the dust mass can be expressed as:
\begin{equation}
\left( \frac{{\rm d} M_{\rm dust}}{{\rm d} t} \right)_{\rm grow}=\left( 1-\frac{M_{\rm dust}}{M_{\rm metal}} \right) {\left( \frac{M_{\rm dust}}{\tau_{\rm accr}} \right)},
\end{equation}
where $M_{\rm metal}$ is the total mass of dust and local gas-phase metals.
Following \citet{Hirashita:2000} and \citet{Asano:2013}, which assume that accretion is a two-body collisional process, the accretion time scale $\tau_{\rm accr}$ is
\begin{equation}
\tau_{\rm accr} = \tau_{\rm ref} \left( \frac{\rho_{\rm ref}}{\rho_g} \right) \left( \frac{T_{\rm ref}}{T_g} \right) {\left(\frac{Z_\odot}{Z_g} \right)},
\end{equation}
where $\rho_g$, $T_g$ and $Z_g$ are the local gas density, temperature and metallicity, respectively, and $\rho_{\rm ref}$, $T_{\rm ref}$ and $Z_\odot$ are the corresponding reference values. We take $\rho_{\rm ref} = 100$~H~atoms~cm$^{-3}$, $T_{\rm ref}=20$~K and $\tau_{\rm ref} = 10$~Myr. Unlike \citet{McKinnon:2017}, we include the multiplier $({Z_\odot}/{Z_g})$, which is integral to reproducing the observed relation between the dust-to-gas ratio and gas-phase metal abundance (\S\ref{sec:dustprop}).

Dust grains can be eroded by collisions with thermally excited gas, especially in hot halos. A number of works have calculated the thermal sputtering rate in detail (e.g. \citealt{Barlow:1978,Draine:1979,Tielens:1994}). In this work, we adopt the analytic approximation to the rate of change of grain radii from \citet{Tsai:1995} (also adopted by \citealt{McKinnon:2017} and \citealt{Popping:2017}), described as
\begin{equation}
\left( \frac{{\rm d} a}{{\rm d} t} \right)_{\rm sp} = -\frac{a}{\tau_{\rm sp}},
\end{equation}
where the sputtering time scale
\begin{equation}
\begin{aligned}
\tau_{\rm sp} & = a \left| \frac{{\rm d} a}{{\rm d} t} \right|^{-1}\\
&\sim (0.17{\rm Gyr})\left( \frac{a}{0.1 \mu m} \right) \left( \frac{10^{-27}{\rm g\ cm^{-3}}}{\rho_g} \right)\left[ \left( \frac{T_0}{T_g}\right)^{\omega}+1 \right],
\end{aligned}
\end{equation}
where $\omega = 2.5$ controls the low-temperature scaling of the sputtering rate and $T_0 = 2 \times 10^6$~K is the temperature above which the sputtering rate flattens. The corresponding dust mass changes as
\begin{equation}
\left( \frac{{\rm d} M_{\rm dust}}{{\rm d} t} \right)_{\rm sp} = -\frac{M_{\rm dust}}{\tau_{\rm sp}/3}.
\end{equation}

Because SN blast waves are not resolved in our simulations, we implement an additional dust destruction mechanism by SN shocks, which destroy grains via inertial and thermal sputtering \citep{Dwek:1980,Seab:1983,McKee:1987,McKee:1989}. We follow the prescription outlined by \citet{McKinnon:2016} in this work. The rate of change of the dust particle mass due to SN destruction is
\begin{equation}
\left( \frac{{\rm d} M_{\rm dust}}{{\rm d} t} \right)_{\rm de} = -\frac{M_{\rm dust}}{\tau_{\rm de}},
\end{equation}
where the characteristic time scale $\tau_{\rm de}$ is
\begin{equation}
\tau_{\rm de} = \frac{M_g}{\epsilon \gamma M_s},
\end{equation}
where $M_g$ is the local gas mass, $\epsilon = 0.3$ is the efficiency with which grains are destroyed in SN shocks \citep{McKee:1989}, $\gamma$ is the local SN II rate, and $M_s$ is the mass of local gas shocked to at least 100~km~s$^{-1}$. Considering that our simulations are unable to resolve the multi-phase ISM, we apply the Sedov-Taylor solution to a homogeneous medium of $n_{\rm H}=0.13$~H~atoms~cm$^{-3}$ (the minimum SF threshold density of our simulations) and obtain
\begin{equation}
M_s = 6800\ E_{\rm SNII,51} \left( \frac{v_s}{100\ {\rm km\ s^{-1}}} \right)^{-2},
\end{equation}
where $E_{\rm SNII,51}$ is the energy released by a SN II in units of $10^{51}$~erg, and $v_s\sim 100$~km~s$^{-1}$ is the shock wave speed.
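The growth and sputtering timescales above can be sketched as follows (reference values as quoted; illustrative only, not the production code):
\begin{verbatim}
TAU_REF, RHO_REF, T_REF = 10.0, 100.0, 20.0  # Myr, H/cm^3, K

def tau_accr(rho_g, T_g, Z_g, Z_sun=0.0134):
    """Grain growth timescale [Myr] from the accretion eq. above."""
    return TAU_REF * (RHO_REF / rho_g) * (T_REF / T_g) * (Z_sun / Z_g)

def tau_sp(a_um, rho_g_cgs, T_g):
    """Thermal sputtering timescale [Gyr], Tsai & Mathews (1995)."""
    return (0.17 * (a_um / 0.1) * (1e-27 / rho_g_cgs)
            * ((2e6 / T_g) ** 2.5 + 1.0))

# dust mass evolves as dM/dt = +(1 - M_d/M_Z) M_d/tau_accr (growth)
# and dM/dt = -3 M_d/tau_sp (sputtering; mass goes as radius cubed)
print(tau_accr(rho_g=100.0, T_g=20.0, Z_g=0.0134))  # 10 Myr at refs
print(tau_sp(0.1, 1e-27, 1e7))                      # ~0.17 Gyr, hot gas
\end{verbatim}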
We additionally destroy dust, as well as molecular hydrogen, completely in hot winds and during star formation (\S\ref{sec:code}) and {\color{black} in any gas that is impacted by AGN X-ray heating or jets (\S\ref{sec:bh}). This is done instantaneously, with all dust mass and metals being returned to the gaseous phase. Note that we do not do this for cold star-forming winds or AGN winds in the high-Eddington mode, so these outflows carry molecular gas and dust out of the galaxy. We leave for future work an investigation into whether this reproduces observations of AGN-driven molecular outflows~\citep[e.g.][]{Sturm:2011} and circum-galactic dust~\citep[e.g.][]{Peek:2015}.}

\subsection{Runs and analysis}

The primary {\sc Simba}\ runs have $1024^3$ dark matter particles and $1024^3$ gas elements. We are running four volumes: $100h^{-1}{\rm Mpc}$ down to $z=0$, $50h^{-1}{\rm Mpc}$ to $z=1$, $25h^{-1}{\rm Mpc}$ to $z=2$, and $12.5h^{-1}{\rm Mpc}$ to $z=5$. All runs have identical input physics, begin at $z=249$, and assume a \citet{Planck:2016} concordance cosmology of $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, $\Omega_b=0.048$, $H_0=68\;{\rm km}\,{\rm s}^{-1}\;{\rm Mpc}^{-1}$, $\sigma_8=0.82$, and $n_s=0.97$. Other parameters such as the minimum gravitational softening length and mass resolutions are listed in Table~\ref{tab:sims}. In this paper we will only present results from the main $100h^{-1}{\rm Mpc}$ run, as the other runs are at various stages of completion.

We will also explore parameter space and compare to our previous {\sc Mufasa}\ simulations using $50h^{-1}{\rm Mpc}$ runs with $2\times 512^3$ particles that match {\sc Mufasa}'s size. We run a full physics {\sc Simba}\ simulation at this resolution, and in order to directly assess the impact of our new quenching feedback modules, namely jet and X-ray feedback, we also run a ``No-jet'' simulation where these modules are turned off, and a ``No-Xray'' run where jets are kept on but X-ray feedback is turned off. All other input physics in these runs, including stellar feedback and radiative mode black hole feedback, remains identical to that in {\sc Simba}.

\begin{figure*}
\includegraphics[width=\columnwidth]{Tmap_m50n512_s27_sT_41.pdf}
\includegraphics[width=\columnwidth]{Tmap_m50n512_s27_sT_77.pdf}
\caption{Temperature map projected through a random 10~Mpc/h slice from a 50 Mpc/h {\sc Simba}\ volume, at $z=2$ (left) and $z=0$ (right). At $z=2$, warm-hot gas traces large-scale filaments, with energetic bipolar outflows from jets evident at the nodes where the most massive galaxies and black holes reside. At $z=0$, high-speed AGN outflows have shocked the IGM gas throughout much of this volume to well beyond the virial radii of halos, with cooler dense filamentary structures penetrating the hot gas.}
\label{fig:tmap}
\end{figure*}

\begin{table*}
\centering
\caption{The {\sc Simba}\ simulation suite.}
\label{tab:sims}
\begin{tabular}{lcccccc}
\hline
Name & $L_{\rm box}^a$ & $\epsilon_{\rm min}^b$ & $z_{\rm end}^c$ & $m^d_{\rm gas}$ & $m^e_{\rm DM}$ & $M^f_{\rm *,min}$\\
\hline
m100n1024 & 100 & 0.5 & 0 & $1.82\times 10^7$ & $9.6\times 10^7$ & $5.8\times 10^8$\\
m50n1024 & 50 & 0.25 & 1 & $2.28\times 10^6$ & $1.2\times 10^7$ & $7.3\times 10^7$\\
m25n1024 & 25 & 0.125 & 2 & $2.85\times 10^5$ & $1.5\times 10^6$ & $9.1\times 10^6$\\
m12.5n1024 & 12.5 & 0.0625 & 5 & $3.56\times 10^4$ & $1.88\times 10^5$ & $1.14\times 10^6$\\
\hline
\end{tabular}
\\$^a$ Box length in comoving $h^{-1}{\rm Mpc}$.
\\$^b$ Minimum gravitational softening length in comoving $h^{-1}{\rm kpc}$.
\\$^c$ Ending redshift (all begin at $z=249$).
\\$^d$ Initial gas element mass resolution in $M_\odot$.
\\$^e$ Dark matter particle mass resolution in $M_\odot$.
\\$^f$ Minimum stellar mass of a resolved galaxy in $M_\odot$.
\end{table*}

To analyse the simulation outputs, we employ a suite of tools as described below. First, galaxies are identified using a friends-of-friends galaxy finder, assuming a spatial linking length of 0.0056 times the mean inter-particle spacing (equivalent to twice the minimum softening length). In our tests, this gives very similar results to the more comprehensive Spline Kernel Interpolative Denmax (SKID) galaxy finder. Galaxy finding is applied to all stars and black holes, plus all gas elements with a density above the minimum SF threshold density of $n_{\rm H}=0.13$~H~atoms~cm$^{-3}$; this captures all the stars and molecular gas in galaxies. Black holes are assigned to the galaxy to which they are most gravitationally bound; large galaxies can have many black holes. We take the central black hole to be the most massive black hole in the galaxy, and use this when we discuss black hole masses. In most cases, the other black holes are very small and add no significant black hole mass compared to the central one.

Because significant amounts of neutral hydrogen can lie in an extended configuration beyond the star-forming region of galaxies, we assign \ion{H}{I} to galaxies in a separate step. To do this, we consider all gas elements with \ion{H}{I} fractions above 0.001, and assign them to the galaxy to which they are most gravitationally bound, i.e. the galaxy for which the gas element's kinetic energy relative to the galaxy's center of mass velocity, plus its gravitational potential energy from the galaxy at the gas element's location, is minimised.

Halos are identified on the fly during the simulation run using a 3-D friends-of-friends algorithm within {\sc Gizmo}, which is identical to the one in {\sc Gadget-3}\ written by V. Springel. The linking length is taken to be 0.2 times the mean inter-particle spacing. We do not identify or consider sub-halos in this work. Galaxies and halos are cross-matched in post-processing using the {\sc yt}-based package {\sc Caesar}, which outputs a single hdf5 catalogue containing all galaxy and halo information with many key properties pre-computed, as well as particle lists of individual galaxies and halos so that any other properties can be computed via user-written {\sc python} scripts.

\begin{figure}
\includegraphics[width=0.45\textwidth]{gal_282_crop.png}
\includegraphics[width=0.45\textwidth]{gal_101_crop.png}
\vskip-0.1in
\caption{Examples of the molecular gas (left) and stellar (right) surface density distributions in star-forming disk galaxies with $M_*\approx 4.7\times 10^{10}M_\odot$ at $z=0$ (top four panels) and $z=2$ (bottom four panels), showing face-on and edge-on views. At $z=0$ there is a thin, well-ordered disk in both \ion{H}{i}\ and H$_2$, while the high-$z$ galaxy is clumpier and thicker.}
\label{fig:disks}
\end{figure}

All results shown here are obtained from the {\sc Caesar} catalogs generated from simulation snapshots at specified redshifts. We output 151 snapshots to $z=0$, 105 to $z=1$, and 78 to $z=2$. Each snapshot is $\approx$250~GB in size, and the {\sc Caesar} catalogues are typically $\sim$15~GB each.
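The most-bound \ion{H}{I} assignment described above can be sketched as a self-contained helper (our illustration, not {\sc Caesar}'s API; the galaxy potentials are passed in as callables):
\begin{verbatim}
import numpy as np

def assign_HI(gas_pos, gas_vel, gas_mass, gas_fHI,
              gal_vcom, gal_phi, f_min=1e-3):
    """Assign HI mass of each gas element (f_HI > 1e-3) to the
    galaxy to which it is most bound. gal_phi[i] is a callable
    returning galaxy i's (negative) potential at a position."""
    m_HI = np.zeros(len(gal_vcom))
    for pos, vel, m, fHI in zip(gas_pos, gas_vel, gas_mass, gas_fHI):
        if fHI < f_min:
            continue
        # binding energy relative to each galaxy: kinetic + potential
        e = [0.5 * m * np.sum((vel - vc) ** 2) + m * phi(pos)
             for vc, phi in zip(gal_vcom, gal_phi)]
        m_HI[int(np.argmin(e))] += fHI * m   # most-bound galaxy wins
    return m_HI
\end{verbatim}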
\section{Results}\label{sec:results}

\begin{figure*}
\includegraphics[width=6.2in]{mfssfr_m100n1024.pdf}
\vskip-0.1in
\caption{Stellar mass function evolution from $z=6\rightarrow 0$, compared to observations as indicated in the legends. Green band shows the results from all {\sc Simba}\ galaxies, with the spread computed from jackknife resampling of the 8 simulation sub-octants. Red and blue dashed lines show the mass functions of central galaxies below and above sSFR$=10^{-1.8+0.3z}$Gyr$^{-1}$, respectively. Cyan dotted line shows the results from EAGLE for comparison.}
\label{fig:mfwind}
\end{figure*}

In this section we provide a comprehensive suite of predictions from {\sc Simba}\ for a range of key global galaxy properties. The purpose is to ascertain how well {\sc Simba}\ reproduces observed galaxy stellar and gas properties that have historically provided stringent constraints on feedback models in previous simulations, and thereby demonstrate the suitability of {\sc Simba}\ as a platform to study detailed galaxy evolution.

To begin, we show in Figure~\ref{fig:tmap} a projected temperature map from the (50 Mpc/h)$^3$, $512^3$ {\sc Simba}\ simulation. The slice shown is 10~Mpc/h thick, and is arbitrarily chosen to contain representative structures in the volume. At $z=2$, the familiar Cosmic Web is evident as traced out by warmer gas arising from mild shock heating on filamentary structures. Closer inspection reveals the earliest AGN jets becoming active, with characteristic bipolar outflows that are typically perpendicular to the large-scale filaments. But these large early black holes are sparse, and most of the IGM is unaffected by feedback. In contrast, by $z=0$ (right panel), a significant volume of the IGM has been heated to high temperatures from AGN feedback, and the hot bubbles encroach upon regions untouched by AGN feedback containing the canonical warm filaments. These bubbles are reasonably spherical since they arise from clustered massive galaxies, each ejecting jets that are relatively stable in direction but quickly overlap with neighbouring outflows. In some cases individual bipolar jets and the resulting bow shocks can still be picked out. Such a dramatic impact on the IGM may have significant consequences for the ionisation state of diffuse neutral hydrogen and the statistics of Lyman alpha forest absorbers \citep{Kollmeier:2014}, as well as the diffuse IGM pressure measurable via the Sunyaev-Zel'dovich effect~\citep{Lim:2018}; we will explore these in future work. In this paper, we focus on the demographics of the galaxy population predicted by {\sc Simba}.

Figure~\ref{fig:disks} shows some examples of individual galaxies. We choose a Milky Way-sized disk galaxy at $z=0$, with $M_*\approx 4.7\times 10^{10}M_\odot$ and SFR$=1.3\ M_\odot$yr$^{-1}$, and show the face-on (upper row) and edge-on (lower row) views, in both H$_2$ surface density (left) and stellar mass surface density (right). The $z=2$ galaxy shown in the bottom four panels has essentially the same $M_*$, but with SFR$=45\ M_\odot$yr$^{-1}$ that is typical of a main sequence galaxy at Cosmic Noon. The $z=0$ disk is a grand design spiral, with a thin cold gas distribution. There is a small central hole in cold gas that owes to the AGN feedback from its $6\times 10^7M_\odot$ black hole accreting at $0.005\,M_\odot$yr$^{-1}$.
The stellar distribution does not show the spiral structure owing to the relatively low resolution of {\sc Simba}, compared to zooms or higher-resolution simulations such as Illustris-TNG and EAGLE. The $z=2$ system shows more prominent star forming clumps and a thicker gas distribution, and is overall more compact (note the scale bar). While {\sc Simba}'s numerical resolution smooths out many of the detailed internal features, this shows that it still produces galaxies with features broadly resembling star-forming disk galaxies in the real Universe. We do not show more massive quenched examples, but as expected they tend to be elliptical in their stellar morphology, with little cold gas.

\subsection{Galaxy stellar mass functions}

Since galaxies are a collection of stars, the most basic property of a galaxy is its stellar mass. Given that the concordance cosmological model strongly constrains the halo mass function, the galaxy stellar mass function (GSMF) thus characterises the efficiency by which halos convert their baryons into stars. It is well established that (under the abundance-matching ansatz) the stellar-to-halo mass ratio drops quickly towards low and high masses away from the peak at $L^*$~\citep[e.g.][]{Moster:2013,Behroozi:2013}, and current models attribute this to self-regulation by star formation-driven feedback below $L^*$ and quenching of galaxies due to AGN feedback above $L^*$~\citep{Somerville:2015}. Since the GSMF is reasonably well measured over much of cosmic time~\citep[albeit with non-trivial systematic uncertainties;][]{Mobasher:2015}, it represents a stringent test for the key feedback modules of a galaxy formation model. Indeed, modern simulations, including {\sc Simba}, tend to use the $z=0$ GSMF as a primary constraint to tune feedback models.

Figure~\ref{fig:mfwind} shows the GSMF at $z=0.1,1,2,3,4,6$ from {\sc Simba}\ (green lines). Observational data are shown at $z=0$ from \citet{Bernardi:2017}. At $z=1,2,3$ we show observations from \citet{Tomczak:2014} combining CANDELS and zFOURGE data, while at $z=4,6$ we show observations based on CANDELS from \citet{Song:2016}. We also show the GSMF of central galaxies only, subdivided into star-forming (SF) and quenched (Q) samples at a specific SFR$=10^{-1.8+0.3z}$Gyr$^{-1}$. Error bars are shown from jackknife re-sampling over eight simulation sub-octants. Finally, we show the results from the EAGLE simulation as the dotted cyan line at selected redshifts.

{\sc Simba}\ produces generally good agreement with the observed GSMFs at all redshifts, overall comparably well to EAGLE. There is excellent agreement at $z\geq 3$, especially given the systematic uncertainties in stellar mass determinations at higher redshifts \citep{Mobasher:2015}. At $z=2$, there starts to be a slight excess at the massive end in {\sc Simba}. This may owe to insufficient quenching of the most massive galaxies, or may represent an underestimate of the observed GSMF owing to selection effects in the rest-optical surveys used for the GSMF determinations, which can miss massive dusty galaxies. At lower redshifts, there is a clear truncation at the massive end, but a mild overproduction of the most massive galaxies remains all the way to $z=0$. Like EAGLE, {\sc Simba}\ {\color{black} under-predicts the GSMF around $M^\star$ by a factor of up to two}; matching this region was an advantage of {\sc Mufasa}\ that is unfortunately not retained in {\sc Simba}.
This highlights that it continues to be a challenge to achieve such a sharp turndown in the GSMF using a physically-motivated AGN feedback model.

The overproduction of the most massive galaxies could owe to a number of effects. First, there are numerical uncertainties in quantifying the most massive systems, because they tend to have large extended envelopes of stars and many satellites that, owing to poor resolution, can be overmerged into the central object either during the dynamical evolution or during the post-processing galaxy finding stage. These tend to artificially boost the mass in the simulated massive galaxies. One way to mitigate this is to compare the stellar mass within fixed apertures to data, which \citet{Schaye:2015} showed using EAGLE can significantly reduce the mass of $M_*\ga 10^{11}M_\odot$ objects. There are also increased observational uncertainties at the massive end. For instance, it is a matter of debate as to how much of the surrounding stars should be classified as part of the central galaxy and how much should be intracluster light; this can strongly impact the stellar mass~\citep{Kravtsov:2018}. There is also the issue of the stellar initial mass function (IMF) -- stellar population~\citep{Conroy:2013} and dynamical~\citep{Cappellari:2013} studies suggest that the most massive galaxies have bottom-heavy IMFs relative to Milky Way-like galaxies, which can result in the stellar mass being underestimated by a factor of 2 or more for the most massive systems. Finally, there is an issue particular to this {\sc Simba}\ run: it turns out that in the 100$h^{-1}{\rm Mpc}$ volume, the largest halo at $z=0$ has a virial mass of $M_{\rm halo}=1.16\times 10^{15}M_\odot$, about 50\% larger than expected for this volume; this may contribute to the excess of the very most massive galaxies. Hence although at face value there is some disagreement at the massive end in comparing {\sc Simba}\ with recent observations, more work must be done to determine whether these discrepancies reflect a significant failing of {\sc Simba}'s input physics.

\begin{figure*}
\subfloat{\includegraphics[width=0.45\textwidth]{mainseq_z0.jpg}}
\subfloat{\includegraphics[width=0.45\textwidth]{mainseq_z2.jpg}}
\caption{Star formation rate--stellar mass relation at $z\approx0$ (left) and $z\approx 2$ (right). Points show {\sc Simba}\ galaxies, colour-coded by their black hole to stellar mass ratio. The thick green line shows the running median to star-forming galaxies (i.e. above the horizontal dotted yellow line). Observations at $z=0$ from GSWLC-X2 are shown as the grey hexbins, with the black dashed line showing the median to galaxies using the same sSFR cut as shown for {\sc Simba}. The error bars show the $1\sigma$ spread around the running median value, typically $0.3-0.4$~dex. At high-$z$ we show the black dashed line as the best-fit relation for $2<z<2.5$ galaxies from \citet{Whitaker:2014}. Results from EAGLE are shown as the magenta dotted line for comparison. {\sc Simba}\ reproduces the star-forming main sequence at both redshifts reasonably well, especially accounting for systematics in high-$z$ sSFR determinations, though in small galaxies it appears to overpredict the SFR at $z\sim 0$ and underpredict at $z\sim 2$.}
\label{fig:mainseq}
\end{figure*}

Examining the SF vs. Q samples, we see that massive quenched galaxies begin to appear in significant numbers at $z\la 2$.
By $z=1$ they outnumber the SF galaxies among the most massive galaxies with $M_*\ga 10^{11}M_\odot$, and by $z=0$, they dominate at $M_*\ga 2\times 10^{10}M_\odot$. The quenched population grows quickly at low redshifts, and the number of massive star-forming galaxies drops quickly from $z\sim 1$ onwards, in broad agreement with observations~\citep[e.g.][]{Bell:2004}. There are a few very small quenched centrals, but this is likely an artifact of the friends-of-friends halo finder.

In summary, to within systematic uncertainties, {\sc Simba}\ produces a GSMF that is in quite good agreement with observations across most of cosmic time, with the possible exception of the $z=0$ massive end. {\sc Simba}\ passes this primary check at a level comparable to that seen for {\sc Mufasa}, EAGLE, and Illustris-TNG~\citep{Pillepich:2018}. In no small part, this owes to these various models tuning feedback parameters to match such data, but even the fact that such a tuning is now possible is a recent and important step forward for cosmological hydrodynamic simulations. It does mean that the growth of galaxies' stellar component over time is no longer a strong discriminant between current galaxy formation models. Instead, for this we must rely on the many other predicted observables that are not used to tune the models. We now examine some of these predictions for {\sc Simba}.

\subsection{Star formation rate--stellar mass relation}

Another key barometer of galaxy formation models is the star formation rate--stellar mass (SFR$-M_*$) relation. Unlike the GSMF that is often used as a primary constraint on models, SFR$-M_*$ is not, making it more of a true prediction of models. The SFR$-M_*$ relation consists of a star-forming ``main sequence'' of galaxies, and a population of quenched galaxies falling below the main sequence that dominates at high masses at later epochs. Getting the balance of these populations in accord with observations over cosmic time, as well as predicting their growth rates, has traditionally been difficult for cosmological simulations. During Cosmic Noon, it has long been seen that cosmological models tend to underpredict the main sequence amplitude~\citep{Daddi:2007,Dave:2008,narayanan12a,Sparre:2015,Somerville:2015}, typically by a factor of $2-3$. Fixing this requires rather substantially changing the star formation histories, not just the overall SFRs, since a multiplicative constant on the SFR will tend to move galaxies along the relation rather than increase its amplitude. There are also potential observational systematics that may lead to overestimated SFRs owing to one or more of many possible factors, such as galaxies being dominated by harder-ionising stellar populations at high-$z$, or having a more top-heavy initial mass function.

Figure~\ref{fig:mainseq} shows the specific SFR$-M_*$ relation at $z=0.1$ (left) and $z=2.3$ (right) for {\sc Simba}\ galaxies. {\color{black} The SFRs are computed as instantaneous SFRs from the gas elements, which corresponds well to the SFR computed from young star particles when averaged over several tens of Myr.} The running median (green curve) includes star-forming galaxies only ({\sc Simba}-SF), defined as before by sSFR$>10^{-1.8+0.3z}$Gyr$^{-1}$ (dotted horizontal yellow line). {\color{black} The error bars show the $1\sigma$ spread around the median value in each bin.} Points are colour-coded by the ratio of black hole to stellar mass, with magenta points having higher $M_{BH}/M_*$; points at $M_*\la 10^{10}M_\odot$ in cyan have no or very small growing black holes, as we will discuss later. Galaxies with very low or zero SFR are plotted near the bottom for visibility. Observations at low-$z$ from the GALEX-SDSS-WISE Legacy Catalog \citep[GSWLC;][]{Salim:2016,Salim:2018} are shown as grey hexbins, and the running median to the star-forming galaxies with the same criterion as above is shown as the black dashed line{\color{black}, along with error bars showing the $1\sigma$ spread around the median}. At high-$z$, we show the median sSFR$-M_*$ relation measured for $2<z<2.5$ galaxies by \citet{Whitaker:2014}. Finally, results from EAGLE are shown as the magenta dotted line.
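For reference, the star-forming versus quenched classification used here and in the GSMF splits amounts to a one-line redshift-dependent cut (our sketch):
\begin{verbatim}
def is_star_forming(ssfr_gyr, z):
    """SF vs. quenched split used throughout:
    sSFR > 10^(-1.8 + 0.3 z) per Gyr."""
    return ssfr_gyr > 10.0 ** (-1.8 + 0.3 * z)

print(is_star_forming(0.1, 0.0))  # True: z=0 cut is ~0.016 / Gyr
\end{verbatim}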
At $z=0$ (left panel), {\sc Simba}\ nicely reproduces the observed GSWLC main sequence slope and amplitude at $M_*\ga 10^{10}M_{\odot}$. Below this mass, {\sc Simba}\ shows noticeably higher SFRs. This mass corresponds to the onset of massive black holes, as shown by the growing number of magenta-coloured points with higher black hole mass for their $M_*$. Indeed, there is a very strong trend that the galaxies that are quenching are specifically the ones with a high $M_{BH}/M_*$ ratio; massive galaxies left on the main sequence at $z\sim 0$ in {\sc Simba}\ are only those that for some reason have not grown their black hole as rapidly. {\color{black}A similar trend is seen in EAGLE~\citep{Matthee:2019}, which arises owing to a spread in halo formation times~\citep{Davies:2019}.} We will investigate the detailed reasons for this dichotomy in {\sc Simba}\ in future work, but for now we note the tight connection between quenching and black holes already appearing in {\sc Simba}, which will be a recurring theme throughout this paper. {\color{black} The average slope of sSFR$-M_*$ for star-forming galaxies over the entire mass range plotted is $-0.27$, which is in reasonable agreement with observations~\citep[e.g.][]{Noeske:2007,Speagle:2014}. The scatter around the main sequence in {\sc Simba}\ is $0.3-0.4$~dex, with a mild tendency to drop with $M_*$; this is very comparable to that seen in the GSWLC data.}

\begin{figure}
\includegraphics[width=0.48\textwidth]{mainseqhist_z0.pdf}
\vskip-0.05in
\caption{Histogram of sSFR in three bins of stellar mass. Solid lines show the results for {\sc Simba}\ at $z=0.1$, while dotted lines show $z\sim0.1$ observations from GSWLC-D2. All galaxies with sSFR$<10^{-2.5}$~Gyr$^{-1}$ are placed in the lowest bin. There is good agreement, particularly in the quenched fractions in more massive galaxies, though {\sc Simba}\ produces somewhat too high sSFRs at low-$M_*$.}
\label{fig:mainseqhist}
\end{figure}

At $z=2.3$ (right panel), the {\sc Simba}\ main sequence generally tracks the observed one from \citet{Whitaker:2014}, but is low in amplitude by $\approx\times 2$. This continues the trend in models that the main sequence at Cosmic Noon remains too low, though not quite as strongly as in some previous models. However, \citet{Leja:2018} points out that more sophisticated SED fitting applied to the latest datasets can lead to a systematic increase in the inferred $M_*$ while lowering the SFR, resulting in a combined $\approx 0.3$~dex lower sSFR compared to previous determinations.
If confirmed, then at face value this would bring {\sc Simba}'s (and other models') simulated main sequence into agreement with $z\sim 2$ observations at long last.

{\color{black} Finally, we show in Figure~\ref{fig:mainseqhist} histograms of the specific SFR, broken up into mass bins of $10^9<M_*<10^{10}M_\odot$, $10^{10}<M_*<10^{11}M_\odot$, and $M_*>10^{11}M_\odot$. Solid lines show the results from {\sc Simba}\ at $z=0.1$, while dotted lines show identically selected galaxies from the GSWLC-D2 catalog~\citep{Salim:2018}. Galaxies with sSFR$\leq 10^{-2.5}$~Gyr$^{-1}$ have been placed in the lowest sSFR bin. There is an overall bimodal distribution, with low-mass galaxies being predominantly star-forming, while massive galaxies are almost uniformly quenched. There is impressive agreement in the lowest sSFR bin, showing that {\sc Simba}\ reproduces the quenched fractions well at various $M_*$. However, the low-mass galaxies in {\sc Simba}\ have somewhat too high sSFR values, reflecting the same excess as seen in Figure~\ref{fig:mainseq}.}

In summary, {\sc Simba}\ generally reproduces the main sequence of star-forming galaxies as seen at $z\approx 0,2$, to within current systematic uncertainties. Potential disagreements from data lie mostly at lower masses, where the observations are less certain and more subject to selection effects. The success at $M_*\ga 10^{10}M_\odot$ is encouraging because it suggests that the balance of quenching and quenched galaxies at this transition mass near $M^\star$ is being reproduced roughly correctly in {\sc Simba}. There is also a strong trend that quenched galaxies at a given $M_*$ tend to have larger fractional black hole masses, which is an interesting prediction that can be tested in future samples of black hole mass measurements in sub-$M^\star$ galaxies.

{\color{black}
\subsection{Global star formation rate evolution}

The evolution of the cosmic SFR density (SFRD) has long been a key test for cosmological galaxy formation models. While proper model comparisons to data can be challenging owing to the variety of selection effects in SFR measurements over time, the recent compilation by \citet{Madau:2014} has provided a homogenised database for the SFRD that can be more robustly compared.

\begin{figure}
\includegraphics[width=0.48\textwidth]{madau_t.pdf}
\vskip-0.1in
\caption{Star formation rate density evolution versus age of the Universe in {\sc Simba}\ (curve), compared to the observational compilation of \citet{Madau:2014} (black points and best-fit line).}
\label{fig:madau}
\end{figure}

Figure~\ref{fig:madau} shows the comparison of the cosmic SFRD as a function of cosmic age in {\sc Simba}\ versus the \citet{Madau:2014} compilation. The {\sc Simba}\ SFRD values include all the star formation in the volume at each epoch, but we have checked that including only star formation in resolved galaxies ($M_*>5.8\times 10^8 M_\odot$) makes a negligible difference. The overall shape of the predicted SFRD versus time is in good agreement with observations. {\sc Simba}\ matches the present-day SFRD very well, and generally reproduces the order-of-magnitude rise in SFRD towards the peak at $z\sim 2$. There is a slight tendency for {\sc Simba}\ to form more stars globally at earlier epochs, with the peak shifted very slightly towards higher redshift compared to the best-fit line from \citet{Madau:2014}.
The peak SFRD at $z\sim2$ is also slightly lower than observed, following the trend shown in Figure~\ref{fig:mainseq} that {\sc Simba}\ has a slightly lower main sequence than observed. Despite these minor differences, the overall shape and amplitude is in very good agreement with observations, comparable to the agreement seen versus other recent simulations such as Illustris-TNG~\citep{Pillepich:2018}.
}

\subsection{Neutral and molecular gas fractions}

\begin{figure}
\includegraphics[width=3.5in]{fgevol_m100n1024.jpg}
\vskip-0.2in
\caption{Molecular (top) and neutral (bottom) gas fractions $M_{H2}/M_*$ and $M_{HI}/M_*$ as a function of $M_*$. The points show $z=0$ values from {\sc Simba}\ colour-coded by the deviation in sSFR from the star-forming main sequence -- bluer points have higher-than typical SFR, redder have lower. A running median at $z=0$ is shown as the cyan dashed line. For comparison we show the running medians at $z=1,2$ (green, magenta lines). Observations of $f_{H2}$ from xCOLDGASS~\citep{Saintonge:2017} are shown in the top panel, and observations of $f_{HI}$ from GASS~\citep{Catinella:2012} are shown in the bottom panel. {\sc Simba}\ predicts gas fraction scalings in good agreement with data, and predicts a small but significant amount of gas even in the most massive quenched systems.}
\label{fig:fgas}
\end{figure}

{\sc Simba}\ tracks the neutral (\ion{H}{i}) and molecular (H$_2$) hydrogen separately during its evolution, via sub-grid prescriptions to account for molecular gas production and destruction, and approximate self-shielding that results in neutral gas. Thus {\sc Simba}\ lends itself to testing against a complementary set of constraints: the scaling relations of \ion{H}{i}\ and H$_2$ gas fractions versus $M_*$. Recent millimetre and radio data have greatly expanded our knowledge of gas contents for low-$z$ galaxies, with constraints at higher $z$ promising continued rapid advancement in the near future.

Figure~\ref{fig:fgas} shows the scaling relations for H$_2$ (top) and \ion{H}{i}\ (bottom) mass fractions, versus $M_*$. The points show individual galaxies at $z=0$ colour-coded by their sSFR deviation from the main sequence at that $M_*$ ($\Delta$sSFR), where the main sequence is defined by fitting a running median to the star-forming galaxies. The running median of the gas fractions is shown as the cyan dashed line. Observations are shown from the mass-selected GASS \ion{H}{i}\ survey~\citep{Catinella:2012} and the follow-up xCOLDGASS survey~\citep{Saintonge:2017} that obtained H$_2$ masses from CO emission measurements. We further show the median predicted trends at $z=1$ (green dashed) and $z=2$ (magenta dashed), to illustrate how these quantities evolve.

Overall, {\sc Simba}\ does an excellent job of reproducing the trends in both molecular and neutral gas fractions with stellar mass. There is a hint that the amplitude of both is low by about 0.1~dex, but given observational uncertainties such as the CO-to-H$_2$ conversion factor which is poorly determined particularly in the low-mass ($M_*<10^{10}M_\odot$) regime \citep[e.g.][]{narayanan12a,bolatto13a}, as well as theoretical uncertainties in the approximate way that self-shielding is applied, galaxy gas contents can be considered to be a remarkably successful prediction of {\sc Simba}.

We note that our massive galaxies have a non-trivial amount of cold gas, despite their very low sSFR.
This was not the case in {\sc Mufasa}~\citep{Dave:2017a}, where the most massive galaxies were devoid of essentially any cold gas. Recent observations seem to suggest that, perhaps surprisingly, many massive quenched galaxies contain substantial cold gas fractions of up to a percent or more~\citep[e.g.][]{Young:2014}, which is consistent with {\sc Simba}'s predictions. The observed efficiency of star formation from this molecular gas is, however, low. {\sc Simba}\ qualitatively reproduces this trend, possibly because the cold gas generally sits in a more extended configuration where the densities are not as high. Since the star formation rate is proportional to $\rho^{1.5}$, this means that even if the gas has high molecular content, its low density will curtail star formation relative to the same gas being in a compact configuration. The origin and fate of this cold dense gas are unclear; it could be a transient phase brought in by satellites, or else a stable phase maintained in a more diffuse configuration owing to the presence of hot gas and AGN feedback. We will examine the exact nature of cold gas in quenched galaxies in future work. Finally, the colours of the {\sc Simba}\ points in Figure~\ref{fig:fgas} indicate the deviation of a given galaxy from the main sequence. There is a clear trend that galaxies that are more gas-rich at a given $M_*$ have a higher SFR. This is unsurprising for H$_2$ in our models, given that star formation is tied to the molecular content. It is somewhat more surprising to see this for the \ion{H}{i}\ fraction, but such a correlation was also seen fairly strongly in {\sc Mufasa}~\citep{Dave:2017a}, though not as strongly as for H$_2$~\citep{Rafieferantsoa:2019}. Such trends have also been noted in observations~\citep{Bothwell:2013} and in EAGLE~\citep{Lagos:2016}. \subsection{Gas-phase and stellar mass-metallicity relations} \begin{figure*} \centering \subfloat{\includegraphics[width=0.5\textwidth]{mzr_z0.jpg}} \subfloat{\includegraphics[width=0.5\textwidth]{mzr_z2.jpg}} \hfill \vskip-0.1in \caption{Gas-phase mass-metallicity relation at $z=0$ (left) and $z=2.3$ (right) from {\sc Simba}. Points are colour-coded by deviation in sSFR from the star-forming main sequence. Running median values are shown as the green lines. At low-$z$, best fits to observations from \citet{Tremonti:2004,Andrews:2013,Yates:2019} are shown as the black lines. At $z\approx 2.3$, observations are shown from the MOSDEF survey~\citep{Sanders:2015}. {\sc Simba}\ reproduces the gas-phase MZR well at both redshifts, with a noticeable second-parameter dependence on SFR particularly at low-$z$.} \label{fig:mzr} \end{figure*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Zstellar.jpg} \hfill \vskip-0.15in \caption{Stellar mass--stellar metallicity relation at $z=0$ from {\sc Simba}, with a running median shown as the dashed yellow line. Points are colour-coded by specific SFR. Observations are shown from \citet{Gallazzi:2005,Panter:2008}. {\sc Simba}\ reproduces the stellar metallicities of galaxies fairly well, although it appears that low-mass star-forming galaxies tend to have somewhat higher metallicities than typically observed.} \label{fig:Zstellar} \end{figure} {\sc Simba}\ tracks the production and distribution of various heavy elements through several nucleosynthetic channels. Produced metals can be carried out from galaxies via outflows, which in {\sc Simba}\ are typically mildly enriched compared to the mean ISM metallicity (see \S\ref{sec:code}).
Additionally, {\sc Simba}\ locks individual metals into dust, removing them from the gas phase. Hence predictions for the relationship between galaxy stellar mass and metallicity, which is observed to be among the tightest relations known for galaxies, test how numerous aspects of {\sc Simba}\ work together to establish galaxy metallicities. Metals can be associated with gas, stars, or dust. Measurements of the gas-phase metallicity reflect a balance between relatively pristine inflow and ejection of enriched material via outflows, and thus provide a direct constraint on the mass outflow rate in gas-rich (star-forming) galaxies~\citep{Finlator:2008}. The stellar metallicity is measured from stellar atmospheric absorption lines that reflect the accumulated metals from both gas and dust that ended up locked in the stars. The inclusion of a dust production and destruction model can therefore in principle decouple the stellar and gas-phase metallicities. Here we present predictions for the gas-phase and stellar metallicity scaling relations from {\sc Simba}. Figure~\ref{fig:mzr} shows the gas-phase mass-metallicity relation (gMZR) at $z=0$ (left) and $z=2.3$ (right). The gas-phase metallicity is computed as the SFR-weighted oxygen abundance in all galaxy gas particles, normalised to the solar value of 1.34\% \citep{Asplund:2009}. Points show central galaxies colour-coded by their deviation from the main sequence, as in Figure~\ref{fig:fgas}; black points are satellite galaxies. A running median for star-forming galaxies (sSFR$>10^{-1.8+0.3z}$~Gyr$^{-1}$) is shown as the dashed green line. Fits to observations at $z=0$ are shown from strong emission line fitting \citep[black dashed]{Tremonti:2004}, stacked measurements of direct metallicities \citep[black dot-dashed]{Andrews:2013}, and individual semi-direct metallicities~\citep[black dotted]{Yates:2019}. Observations at $z=2.3$ are shown from the MOSDEF survey~\citep{Sanders:2015}. {\sc Simba}\ predicts a gas-phase mass-metallicity relation that agrees quite well with observations, lying generally within the range of current observational determinations. The metallicities may be slightly too high at the highest masses, but this turns out to be strongly dependent on the assumed cut for star-forming galaxies; a more stringent cut would lower the massive-end fit, highlighting the sensitivity of MZR predictions there to precise selection effects. {\sc Mufasa}\ produced a gMZR that was slightly too steep, in addition to having an amplitude that roughly agreed with data only because of an arbitrary halving of the SNII yields~\citep{Dave:2017a}. In {\sc Simba}, metals are locked into dust, increasingly so at higher metallicities and hence larger masses. This likely leads to a suppression of the gas-phase metallicity in massive galaxies and thus a flatter gMZR, as well as a lower amplitude. We will examine the impact of dust on the metal content of galaxies in more detail in future work (Li et al., in prep.). Since $z=2.3$, {\sc Simba}\ produces more metal evolution at the low-mass end, in general agreement with observations suggesting that the most metal-rich galaxies are in place at early epochs~\citep[e.g.][]{Zahid:2014}. Star-forming galaxy metallicities are in good agreement with the MOSDEF data, suggesting that the amount of metal evolution from $z\sim 2\rightarrow 0$ is approximately correct in {\sc Simba}.
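As a concrete illustration of the measurement defined above, the following minimal sketch computes an SFR-weighted metallicity for a single galaxy. The per-particle arrays are hypothetical, and for simplicity the total metal mass fraction stands in for the oxygen-based abundance used in the text:
\begin{verbatim}
import numpy as np

Z_SOLAR = 0.0134    # solar metal mass fraction (Asplund et al. 2009)

def gas_phase_metallicity(z_gas, sfr_gas):
    """SFR-weighted gas metallicity of one galaxy, in log solar units.

    z_gas   : metal mass fractions of the galaxy's gas particles
    sfr_gas : instantaneous SFRs of those particles (the weights)
    """
    zbar = np.average(z_gas, weights=sfr_gas)   # SFR-weighted mean
    return np.log10(zbar / Z_SOLAR)
\end{verbatim}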
{\sc Simba}\ also shows a clear second-parameter dependence on the specific SFR, such that at a given $M_*$, galaxies with lower sSFR tend to have higher metallicities. This has been noted observationally as the Fundamental Metallicity Relation~\citep[FMR;][]{Mannucci:2010,Lara-Lopez:2010}. The existence of the FMR remains somewhat controversial~\citep{Salim:2014,Sanchez:2017,Cresci:2018}, but a careful analysis of the MOSDEF data has revealed such a trend at $z\sim 2$~\citep{Sanders:2018}. The trend is quite obvious at $z=0$, with the bluest galaxies clearly having the lowest metallicities, but is not quite so evident at $z=2.3$ except at the massive end where a population of quenched galaxies has appeared. Finally, the small black points showing the satellites tend to lie above the mean relation, in qualitative agreement with data~\citep{Pasquali:2012}. Figure~\ref{fig:Zstellar} shows the mass-weighted stellar metallicity as a function of $M_*$ at $z=0$. Points show centrals colour-coded by sSFR, with satellites in black. The yellow dashed line shows a running median. Observations are shown from \citet{Gallazzi:2005} and \citet{Panter:2008}. {\sc Simba}\ nicely reproduces the stellar MZR for massive, quenched galaxies. At lower masses, the star-forming population dominates, and these galaxies have a stellar MZR that is typically slightly higher, with larger scatter, than expected from an extrapolation of the massive galaxy relation. This owes to the fact that these galaxies have continued to form stars after their more massive counterparts have quenched. However, no such feature is evident in the observations, and hence {\sc Simba}\ produces a low-mass stellar MZR that is somewhat too high compared to observations. In summary, {\sc Simba}\ does a reasonable job reproducing observed galaxy metallicities, both stellar and gas phase, and the evolution out to Cosmic Noon. The fact that no arbitrary normalisation was required as in {\sc Mufasa}\ is a step forward, suggesting that {\sc Simba}\ is locking metals into dust in a realistic manner; we will explore this more in \S\ref{sec:dustprop}. \subsection{Galaxy photometric sizes} \begin{figure} \includegraphics[width=0.5\textwidth]{halfmass_m100n1024.jpg} \vskip-0.1in \caption{$R$-band 2-D projected half-light radii of {\sc Simba}\ galaxies at $z=0$, as a function of $M_*$. Points are colour-coded by sSFR. We fit median relations to the red and blue sub-samples, delineated by $10^{-1.8}$~Gyr$^{-1}$, as the cyan and magenta dashed lines, respectively. Observational relations are shown from \citet{Zhang:2017} from SDSS, split into red and blue galaxies. {\sc Simba}\ broadly reproduces the sizes of star-forming galaxies, but fails to show the observed trend that quiescent galaxies have a much steeper slope and are much more compact at low masses. } \label{fig:halfmass} \end{figure} Modern cosmological simulations typically have sufficient resolution to resolve the size of galaxies, even if the detailed structure of the ISM remains unresolved. Illustris highlighted the ability of simulations to produce galaxies populating the full range of the Hubble sequence~\citep{Vogelsberger:2014}. For EAGLE, galaxy sizes provided a key constraint on their star formation feedback implementation~\citep{Schaye:2015}, namely that they employ a steeper dependence of the star formation rate on the density in dense gas in order to prevent galaxies from being overly compact.
{\sc Simba}\ did not use sizes to tune the feedback model, so the sizes instead provide a test of it. To conduct a fair comparison to observed sizes, we compute projected galaxy half-{\it light} radii in the $R$-band. We obtain $R$-band luminosities from the \citet{Bruzual:2003} models interpolated to the age and metallicity of each star particle. The radius is determined for each galaxy by averaging 2-D half-light projections along the $x$, $y$, and $z$ axes. Figure~\ref{fig:halfmass} shows the galaxy half-light sizes at $z=0$ from {\sc Simba}\ versus stellar mass. We colour-code the points by sSFR as before, and fit separate running medians for quenched ({\sc Simba}-Q) and star-forming ({\sc Simba}-SF) populations (magenta and cyan lines), where we divide the two populations at sSFR$=10^{-1.8}$~Gyr$^{-1}$ as before. For comparison we show observations from SDSS~\citep{Zhang:2017}. The SDSS sample has been subdivided into red and blue galaxies, albeit with a criterion based on photometry, not sSFR. Star-forming galaxy sizes in {\sc Simba}\ show an amplitude and scaling with $M_*$ that agrees quite well with the observed slope, which is encouraging. There is a suggestion that low-mass galaxies are too small, but this occurs in the mass range where the number of particles is below a few hundred, and given that the sizes are light-weighted, stochasticity can give rise to smaller-than-expected sizes. We note that a stellar mass-weighted size does not show this drop-off at low masses. But for well-resolved star-forming galaxies, the sizes are in quite good agreement with data. This is an important success that did not require any specific tuning of the feedback model. In contrast to the star-forming systems, {\sc Simba}\ shows quenched galaxy sizes that are quite discrepant with observations. Massive galaxy sizes are in reasonable agreement with data, but the lowest-mass quiescent galaxies are up to $\sim\times 3$ larger than the comparable sample in SDSS, showing that the size--mass trend for passive galaxies is incorrect in {\sc Simba}; indeed, in {\sc Simba}\ the low-mass passive galaxies are actually larger than the star-forming ones, which is opposite to the observed trend. There are a number of potential reasons for this. The large number of stellar orbits in older quiescent galaxies tends to puff out the distribution numerically. The discrepancy could also represent a failing in physics, if for instance low-mass galaxies are preferentially quenched via some rapid mode such as merging, violent disk instability, or stripping that is simultaneously associated with compactification~\citep[e.g.][]{Tacchella:2016}. {\color{black} Alternatively, it could be a failing of the feedback physics associated with quenching low-mass galaxies.} We can test this issue directly with higher-resolution runs once they are complete. For now, we note that the sizes of small quenched galaxies in {\sc Simba}\ are a clear failing of the current model, in contrast to its success at reproducing the sizes of star-forming galaxies. \subsection{Halo gas fractions} \begin{figure} \includegraphics[width=0.45\textwidth]{hotgasfrac.jpg} \vskip-0.1in \caption{Gas fractions as a function of $M_{500}$ at $z=0$. Black points show the total baryon fraction within the halo, and red and blue points show the gas fractions subdivided at $10^5$K. Purple hexbins show an observational compilation of hot gas fractions from \citet{McCarthy:2017}, to be compared to the red points.
{\sc Simba}\ does a reasonable job of reproducing the observed trend of hot gas fraction with $M_{500}$, which has been difficult for previous simulations to achieve without tuning.} \label{fig:hotgasfrac} \end{figure} In {\sc Simba}, AGN feedback provides the primary energy input that serves to quench massive galaxies in large halos. Such energy input can concurrently have a strong impact on the amount and thermal state of hot gas within those halos. In particular, it can evacuate gas even from fairly sizeable halos, somewhat by entraining gas in jets but mostly by depositing heat that results in the gas becoming unbound from the halo. These processes result in halo gas fractions that deviate strongly from the mean cosmic baryon fraction, a departure that can be measured in real systems via X-ray emission from intra-group and intra-cluster gas. Such observations thus provide an important constraint on the AGN feedback model. The hot gas fraction as a function of halo mass has been a challenging constraint for modern cosmological AGN feedback models to reproduce~\citep{McCarthy:2017}. In Illustris, it was found that the AGN feedback mechanism over-evacuated hot gas from group-sized halos compared to observations~\citep{Genel:2014}, which provided one motivation for the new AGN feedback model in Illustris-TNG~\citep{Weinberger:2018}. {\color{black} Nonetheless, while closer than Illustris, TNG somewhat overpredicts the observed hot gas fractions~\citep{Barnes:2018}. EAGLE likewise overpredicts the hot gas fractions, while the {\sc Bahamas} simulation suite was able to match this with mild tuning~\citep{McCarthy:2017}.} Here we examine this constraint for {\sc Simba}. Figure~\ref{fig:hotgasfrac} shows the baryon fractions as a function of halo mass ($M_{500}$) from {\sc Simba}. $M_{500}$ is the mass within $R_{500}$, the radius enclosing a mean density of 500 times the critical value, centred on the most bound halo particle. Black points show the total baryon fraction, red points show hot gas ($T>10^5$K) fractions, and blue show cold gas ($T<10^5$K). Lines of matching colour show the running median values. Note that the black points include the stellar (and black hole) contribution, which is not explicitly shown. A compilation of observations from \citet{McCarthy:2017} is shown as the purple hexbins. All fractions have been scaled to $\Omega_b=0.048$, so a halo at unity has its cosmic share of baryons. At $10^{12}M_\odot$, halos have about 40\% of the cosmic baryon fraction within $R_{500}$, with a large scatter. This dips to $\sim 30\%$ at $10^{12.5-13}M_\odot$, before rising again towards large halo masses. The dip owes to jet AGN feedback, which we have checked by comparing to the No-jet test run, which shows baryon fractions around 90\% for all halos over this mass range (and a much flatter trend of hot gas fraction versus halo mass). This shows that the energy input required to quench galaxies can cause substantial evacuation of group-sized halos, as has been noted in e.g. Illustris~\citep{Genel:2014}. This strong evacuation has important implications for using these systems as probes of cosmology, which we will explore in future work. The hot baryon fraction is shown in red, which can be compared to the observations shown in purple. In massive systems, the total baryons are dominated by this hot phase. Most of this hot gas is near the virial temperature, so the results are insensitive to the exact value of the cut at $10^5$K.
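The halo-by-halo bookkeeping behind Figure~\ref{fig:hotgasfrac} can be sketched as follows; the input arrays for the particles within one halo's $R_{500}$ are hypothetical, while the $10^5$\,K cut and the $\Omega_b=0.048$ scaling follow the text:
\begin{verbatim}
import numpy as np

OMEGA_B, OMEGA_M = 0.048, 0.3      # assumed cosmological parameters
F_COSMIC = OMEGA_B / OMEGA_M       # cosmic baryon fraction

def halo_gas_fractions(m500, m_gas, t_gas, m_star, m_bh, tcut=1e5):
    """Baryon, hot- and cold-gas fractions within R500, scaled so
    that a halo with its cosmic share of baryons returns unity."""
    hot = np.sum(m_gas[t_gas > tcut])     # hot gas mass
    cold = np.sum(m_gas[t_gas <= tcut])   # cold gas mass
    baryons = hot + cold + np.sum(m_star) + np.sum(m_bh)
    norm = m500 * F_COSMIC
    return baryons / norm, hot / norm, cold / norm
\end{verbatim}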
Comparing to observations, we see that {\sc Simba}'s halos have hot baryon fractions that are in good agreement with data in the overlapping mass range, in both amplitude and scatter. We note that there was no tuning done to obtain this agreement. The halo hot baryon fraction is thus a non-trivial success of {\sc Simba}'s AGN feedback model, and shows that {\sc Simba}\ evacuates halo baryons in a manner that is concordant with observations. In Borrow et al. (in prep.) we will examine quantitatively where these evacuated baryons end up. \subsection{Black hole mass vs stellar mass} \begin{figure} \includegraphics[width=0.5\textwidth]{mbhms_m100n1024.jpg} \vskip-0.1in \caption{$M_{\rm BH}-M_*$ relation at $z=0$ in {\sc Simba}. Points show galaxies, with centrals colour-coded by specific SFR, and satellites as the grey points. Observations are shown from \citet{Kormendy:2013} for comparison with bulge-dominated (redder) systems, while \citet{Bentz:2018} shows the relationship more appropriate for spiral star-forming systems at lower $M_*$. {\sc Simba}\ broadly reproduces these observed relations in its appropriate galaxy populations.} \label{fig:mbhms} \end{figure} The canonical relation that highlights the connection between galaxies and their central supermassive black holes is the relationship between the black hole mass and the galaxy bulge mass or stellar velocity dispersion~\citep[e.g.][]{Magorrian:1998,Kormendy:2013,McConnell:2013,Graham:2016,Bentz:2018}. Modern galaxy formation models that track black holes typically have free parameter(s) that are tuned to match these relations; in {\sc Simba}, this is set by the accretion efficiency tuned to $\epsilon_m=10$\%, while most other cosmological simulations (based on Bondi accretion) tune the AGN feedback efficiency. In previous works, \citet{Angles:2015} and \citet{Angles:2017} showed that the $M_{\rm BH}-M_*$ relation emerged naturally from the torque-limited accretion model, without or with AGN feedback, respectively. But these studies were done via post-processing or without star formation feedback. Here we examine whether the full physics model in {\sc Simba}\ likewise reproduces the relationship between black hole mass and galaxy properties. Figure~\ref{fig:mbhms} shows the black hole mass--stellar mass relation at $z=0$ for {\sc Simba}\ galaxies. Central galaxies are shown colour-coded by specific SFR, while satellite galaxies are indicated by grey points. The relationship for galaxy bulges is shown from \citet{Kormendy:2013} as the magenta dashed line; this is an appropriate comparison sample for bulge-dominated galaxies, which are expected to be the quiescent systems (redder points). Meanwhile, \citet{Bentz:2018} assembled a sample of reverberation-mapped galaxies, and found the steeper relation shown as the blue dotted line. In their case, the lower-mass systems are predominantly late-type galaxies, while the most massive systems are early-type. Hence in the region plotted, the \citet{Bentz:2018} sample is probably best compared to later-type, and therefore star-forming, systems (bluer points). We note that all observational relations show a large scatter, typically at least 0.3~dex, which is not represented on this plot. {\sc Simba}\ black holes generally lie in the range of observations. Although we tuned $\epsilon_m=0.1$ in order to obtain the correct amplitude of the relation, the slope of the relation is not tunable, particularly in terms of different galaxy sub-samples.
Hence the match between the quiescent galaxy slope and that of the bulge-dominated galaxy black holes, and likewise between the star-forming galaxies and the lower black hole masses at a given $M_*$, represents a genuine success of the model. We note that there is some disagreement on whether the late-type galaxies have a steeper slope or the same slope but offset to lower black hole masses~\citep[e.g.][]{Graham:2016,Savorgnan:2016,Bentz:2018}, but {\sc Simba}\ predictions are broadly compatible with either scenario. At the low-mass end, consistent with \citet{Angles:2015}, torque-limited accretion grows black holes very quickly once the galaxy stellar mass exceeds $3\times 10^9M_\odot$, which is where we choose to seed the black holes in {\sc Simba}. Hence the rapid rise is not directly physical but a numerical artifact of our seeding prescription, though it is intended to mimic the physical effect of black hole growth suppression due to early star formation seen in e.g. FIRE \citep{Angles:2017c}. Also, we note that we attempt to keep black holes fixed to the centre of the galaxy potential well, but in dense regions this does not always work owing to the shallow potential wells in poorly resolved galaxies, so black holes can move between galaxies and thus merge. We continue to test approaches to handle this better, given the limited resolution of cosmological runs. Overall, {\sc Simba}\ predicts a relationship between black hole and stellar masses in agreement with observations. In upcoming work we will examine black hole scaling relations in more detail, but for now, the good agreement corroborates the idea that black holes in {\sc Simba}\ grow in accord with observations, and thus the feedback energy released by black holes and used by {\sc Simba}\ to quench galaxies is plausible. \subsection{Dust Properties}\label{sec:dustprop} \begin{figure} \includegraphics[width=0.5\textwidth]{mfdust_m100n1024.pdf} \vskip-0.1in \caption{Dust mass function from {\sc Simba}\ at $z=0$ (bottom panel) and $z=2$ (top), shown as the green shaded regions. The DMF is split into star-forming and quenched samples, shown as blue and red dashed lines, respectively. Observations are shown from \citet{Dunne:2011} and \citet{Clemens:2013} at $z=0$ and from \citet{Dunne:2003} at $z=2$; differences between the $z=0$ determinations owe to differing assumptions for inferring dust masses from far-IR fluxes. {\sc Simba}\ reproduces the observed shape of the DMF, and is in the range of observed amplitudes, modulo systematic uncertainties.} \label{fig:mfdust} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth]{dusttogas_m100n1024.jpg} \vskip-0.1in \caption{Dust-to-gas ratio as a function of gas-phase metallicity at $z=0$ in {\sc Simba}\ galaxies, colour-coded by specific SFR. Observations are shown as crosses from \citet{Remy-Ruyer:2014}. {\sc Simba}\ reproduces the observed trend and amplitude in dust-to-gas ratios.} \label{fig:dusttogas} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth]{dtm_ms_0_hex.jpg} \vskip-0.1in \caption{Metal mass fraction locked in dust, as a function of $M_*$, for star-forming galaxies in {\sc Simba}. The plot is colour-coded by the mean sSFR within each hexbin. Except for the smallest galaxies, typically one-third of the metals are locked in dust.} \label{fig:dtm} \end{figure} {\sc Simba}\ includes a model to form and destroy dust from metals within the ISM of galaxies during the simulation run. As a basic check on the production of dust, here we examine two measurables tracking dust in galaxies: the dust mass function and the dust-to-gas ratio.
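Before turning to the comparisons, we note that a mass function such as the DMF is simply tallied as a number density per dex; a minimal sketch, with hypothetical input arrays and volume, is:
\begin{verbatim}
import numpy as np

def mass_function(masses, volume_cmpc3, bins):
    """dn/dlogM [cMpc^-3 dex^-1] from an array of masses [Msun].

    bins : edges in log10(M/Msun)
    """
    counts, edges = np.histogram(np.log10(masses), bins=bins)
    centres = 0.5 * (edges[1:] + edges[:-1])
    return counts / volume_cmpc3 / np.diff(edges), centres

# e.g. split by sSFR to get star-forming and quenched DMFs:
# phi_sf, logm = mass_function(mdust[ssfr > 10**-1.8], vol, bins)
\end{verbatim}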
Figure~\ref{fig:mfdust} shows the $z=0$ (bottom panel) and $z=2$ (top) dust mass function (DMF) from {\sc Simba}\ (green line), versus two $z=0$ observational determinations from \citet{Dunne:2011} and \citet{Clemens:2013}, and a $z=2$ determination from \citet{Dunne:2003}. At $z=0$, {\sc Simba}\ agrees well with \citet{Dunne:2011}, but not \citet{Clemens:2013}. The difference between the two can be traced to their assumptions for the dust mass opacity coefficient used to infer the dust mass from far-IR data; \citet{Clemens:2013} showed that under the same assumption for this quantity, the two results agree. Hence given current uncertainties in inferring dust masses, it is probably premature to use the DMF as a strong constraint on models. But {\sc Simba}'s DMF is at least within the ballpark of currently observed values, with good agreement in the overall DMF shape. {\color{black} Unsurprisingly, the DMF is dominated by star-forming galaxies (blue dashed line), which is as observed~\citep{Beeston:2018}. The $z=2$ DMF is compared to observations from \citet{Dunne:2003}, and shows a deficit of $\sim\times 3$ in the number density of galaxies at a given dust mass. {\color{black}We note that the observational DMF by \cite{Dunne:2003} is from surveys of sub-mm sources with large beam sizes, which could result in multiple objects being blended within one beam, thereby overestimating their dust masses.} If one regards the \citet{Clemens:2013} results at $z=0$ as more accurate, as confirmed by \citet{Beeston:2018}, then the shortfall in the predicted DMF is very similar at both redshifts. This suggests that the evolution in dust masses in {\sc Simba}\ is viable, but the overall dust production falls short, or else destruction is too efficient. It may be possible to remedy this with differing choices of dust parameters; we are exploring this. We note that our $z=0$ DMF agrees well with the predictions from cosmological simulations of \citet{McKinnon:2017}, owing in part to tuning of each model, but our $z=2$ DMF is significantly higher than theirs.} Figure~\ref{fig:dusttogas} shows the $z=0$ dust-to-gas ratio (DGR) as a function of gas-phase metal abundance. Points are colour-coded by sSFR. {\sc Simba}\ is in good agreement with the data shown \citep[crosses]{Remy-Ruyer:2014}, showing a slope of increasing DGR with metallicity as observed. In massive quenched systems, the DGR drops quickly. {\color{black}Finally, Figure~\ref{fig:dtm} shows the fraction of metals locked into dust at $z=0$ for star-forming galaxies. The green line shows the running median. For galaxies with $M_*\ga 10^{9.5}M_\odot$, the fraction is typically 30-40\%, but drops significantly towards lower masses. This mostly explains why the mass-metallicity relation agrees with observations in {\sc Simba}\ without the {\it ad hoc} reduction of the yields by $\times 2$ as in {\sc Mufasa}. Low-sSFR galaxies also have fewer metals locked into dust, as AGN feedback returns dust-locked metals to the gas phase.} We will examine galaxy dust content and evolution in significantly more detail in forthcoming work (Li et al., in prep.), but these preliminary comparisons suggest that {\sc Simba}'s dust tracking model yields plausible galaxy dust contents. \section{AGN feedback variations}\label{sec:variants} AGN feedback is believed to be responsible for quenching galaxies. {\sc Simba}\ includes three different forms of AGN feedback: radiative-mode AGN winds, AGN jets, and X-ray feedback.
In this section we examine the importance of these various modules in producing a quenched galaxy population, by running simulations with AGN jets and X-ray feedback off, and with only X-ray feedback off. We always include radiative AGN winds. For these tests we use $50h^{-1}{\rm Mpc}$, $2\times 512^3$ simulations run to $z=0$, with {\sc Simba}\ input physics and parameters except for the AGN feedback variations. \begin{figure} \includegraphics[width=0.45\textwidth]{mfvar_m50n512.pdf} \caption{Stellar mass function evolution from $z=2\rightarrow 0$ in $50h^{-1}{\rm Mpc}$, $512^3$ test runs with different AGN feedback variants: original {\sc Simba}\ (green solid), jet and X-ray feedback both turned off (No-jet; blue dashed), only X-ray feedback turned off (No-Xray; red dashed). For comparison we show the main $100h^{-1}{\rm Mpc}$ {\sc Simba}\ run (green dashed) reproduced from Figure~\ref{fig:mfwind}, as well as selected observations as indicated. Turning on just the jet feedback (No-Xray) results in a substantial truncation of the GSMF that does not occur without jets (No-jet).} \label{fig:mfvar} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{mbhms_m50n512_s50nojet.jpg} \includegraphics[width=0.45\textwidth]{mbhms_m50n512_s50nox.jpg} \vskip-0.1in \caption{$M_{\rm BH}-M_*$ relations in test simulations with jet and X-ray black hole feedback turned off (No-jet, top panel), and jets on but X-ray feedback turned off (No-Xray, bottom). By comparing to Figure~\ref{fig:mbhms}, the jet feedback is seen to enact most of the quenching, but the X-ray feedback is important for fully quenching the most massive galaxies.} \label{fig:mbhmsvar} \end{figure} Figure~\ref{fig:mfvar} shows the GSMF for the full {\sc Simba}\ physics run, a {\it No-Xray} run turning off only X-ray feedback, and a {\it No-jet} run turning off both jet and X-ray feedback, at $z=2,1,0$. Observations are overplotted as described in Figure~\ref{fig:mfwind}. We do not show $z\geq 3$ results because these variants' GSMFs are indistinguishable there. Looking at the $z=0$ panel, without jets the GSMF is strongly overproduced at high masses. Turning on jets (the No-Xray run) results in much better agreement with the full {\sc Simba}\ run; adding X-ray feedback on top of the jets makes only a small further change to the GSMF. The redshift evolution shows that the impact of jets is fairly minor at $z\sim 2$ in terms of the GSMF, only impacting the very largest few galaxies. The importance of jets in truncating the GSMF grows steadily with time, to the point where, without jets, the number density of $M_*=10^{12}M_\odot$ galaxies would be an order of magnitude higher, in strong disagreement with data. These results clearly show that the main driver in truncating the massive end of the GSMF in {\sc Simba}\ is AGN jet feedback. A more detailed view of how AGN feedback impacts both stellar and black hole growth can be obtained by examining the $M_{\rm BH}-M_*$ relation in these variants, shown in Figure~\ref{fig:mbhmsvar}, with galaxies colour-coded by specific SFR as in Figure~\ref{fig:mbhms}. For clarity we show only central galaxies. Comparing the {\it No-jet} version (top panel) to the original {\sc Simba}\ in Figure~\ref{fig:mbhms} highlights several key points. As expected, the sub-$M^\star$ objects show little difference in the trends, in either $M_{\rm BH}$ or sSFR. However, for massive galaxies, the full {\sc Simba}\ run shows significantly lower sSFR and somewhat higher $M_{\rm BH}$, particularly for the most massive galaxies.
This demonstrates more explicitly that the AGN jet feedback is crucial for quenching galaxies. Interestingly, the {\it No-jet} run still shows a few quenched galaxies at high $M_{\rm BH}$ around $M^\star$. These are clearly correlated with the presence of a massive black hole, and would not occur in a model with no AGN feedback at all. This arises from the fact that we still have radiative AGN winds in our {\it No-jet} run. These become effective around $M^\star$ because it is at the corresponding halo mass that a significant hot gaseous halo begins to form~\citep{Keres:2005,Gabor:2012}. The AGN energy can then be deposited into the hot gas, providing a mechanism for quenching the galaxy by shutting off the fuel supply~\citep{Dekel:2009}. So why do radiative winds cease to be effective at higher masses? An examination of the energetics shows the reason. In {\sc Simba}'s AGN feedback model the momentum input is assumed to be constant, which means that the AGN feedback energy scales linearly with the wind velocity (since $\dot{E}=\frac{1}{2}\dot{p}\,v$ at fixed momentum flux $\dot{p}$). Since {\sc Simba}'s black hole accretion rates are a quite weak function of $M_*$~(Thomas et al., in prep.) while the number of hot halo baryons is growing, this means that the energy injected per hot halo baryon is dropping with the halo mass. The logarithmic increase in velocity with $M_{\rm BH}$ (eq.~\ref{eq:vradiative}) is too slow to compensate for this, so one quickly ends up in a situation where the energy injection is insufficient to keep the hot halo baryons near the virial temperature. What is required is a strong increase in the outflow velocity, and hence energy input, in this halo mass regime. This is why high-velocity AGN jets are crucial for quenching massive galaxies. The black hole masses in the {\it No-jet} run also appear to be significantly lower. However, this can primarily be explained by the fact that $M_*$ values in this simulation are substantially higher, which moves galaxies leftwards in the $M_{\rm BH}-M_*$ diagram; the black hole masses themselves are not substantially different. The relative roles of Bondi vs. torque-limited accretion in growing black holes across the full mass range over cosmic time will be examined more fully in a forthcoming paper~(Angl\'es-Alc\'azar et al., in prep.). The {\it No-Xray} run is fairly similar to the full {\sc Simba}\ run, but there is a slight but noticeable increase in the sSFR of the massive galaxies. This is not enough to contribute significant mass growth, hence the GSMF is only modestly affected, but the resulting sSFRs are higher than typical observed values for massive red and dead ellipticals. This suggests that the X-ray feedback is important for fully quenching massive galaxies in accord with observations, even if it does not play a leading role in regulating mass growth. \section{Summary}\label{sec:summary} We have introduced the new {\sc Simba}\ suite of cosmological galaxy formation simulations, and explored predictions from a 100 Mpc/h box run with $1024^3$ dark matter and $1024^3$ gas elements. The most novel aspect of {\sc Simba}\ is its implementation of black hole growth via torque-limited accretion, and two-mode black hole feedback via bipolar kinetic outflows. {\sc Simba}\ further includes numerous updates over its predecessor simulation {\sc Mufasa}, including a dust production and destruction model. In this paper we present comparisons to a range of different observational probes measuring the stellar mass, star formation rate, neutral and molecular gas, black hole, and dust properties in {\sc Simba}.
We show that, in all cases, {\sc Simba}\ produces galaxies that are in quite reasonable agreement with observations. While our feedback parameters were generally chosen to follow observations or expectations from high-resolution simulations, some of these observations were used to further tune these parameters. However, many of them were not, and these represent model predictions that demonstrate the viability of {\sc Simba}\ as a platform for studying cosmological-scale galaxy evolution. Here are our main findings: \begin{itemize} \item {\sc Simba}\ produces a stellar mass function evolution that is in very good agreement with data across all masses at all cosmic epochs, although it may overproduce the massive end slightly by $z=0$. Quenched galaxies grow substantially in number at $z\la 2$, and by $z=0$ they dominate at $M_*>2\times 10^{10}M_\odot$. \item {\sc Simba}'s star-forming main sequence is in good agreement with observations at $z=0$, and is low at $z\approx 2$ by only a factor of two, which is explainable via observational systematics. {\color{black} Predicted quenched fractions at $z=0$ as a function of $M_*$ are in good agreement with observations. } Galaxies at a given $M_*$ that quench first in {\sc Simba}\ have preferentially larger black holes. \item {\sc Simba}\ gas fractions, both neutral and molecular, show a dropping trend with $M_*$ that is in good agreement with observations. Gas fractions evolve downwards with time, but even at $z=0$, massive quenched galaxies still typically have some cold gas. \item The gas-phase and stellar mass--metallicity relations generally agree with observations at $z=0$ and $z\sim 2$. The MZR evolves upwards by a factor of $\sim\times 3$ for our smallest systems, but at $M_*\ga 10^{11}M_\odot$ there is little evolution. \item Galaxy photometric projected sizes are in good agreement with observations for star-forming systems, but are too large for quenched systems, particularly at lower masses. \item There is substantial evacuation of baryons from halos at group scales, with Local Group-sized objects retaining typically only a third of their cosmic baryon fraction. Hot halo gas fractions show a rising trend with halo mass in good agreement with data. \item The black hole mass--stellar mass relation shows a population of quenched galaxies that agrees well with observations of bulge-dominated systems, while star-forming galaxies at a given $M_*$ have lower black hole masses. \item {\sc Simba}\ predicts a $z=0$ dust mass function and dust-to-gas ratios in agreement with observations, with more massive star-forming galaxies having higher ratios. At a given metallicity, galaxies that have higher SFR have higher dust-to-gas ratios. {\color{black} Roughly one-third of metals are locked in dust.} \end{itemize} These results demonstrate that {\sc Simba}\ broadly reproduces a wide range of observed galaxy properties including their stellar content, star formation rates, gas content, metallicities, sizes, black hole properties, and dust properties. Clearly there are many other more detailed constraints that could be tested, and in subsequent work we aim to examine these in more detail. It is important to note that {\sc Simba}\ also displays some aspects that are in conflict with observations. It fails to produce as sharp a knee in the $z=0$ stellar mass function as is observed. It produces low-mass quenched galaxy sizes that are larger than for star-forming systems, opposite to the observed trend in SDSS.
There are suggestions that {\sc Simba}\ overproduces the stellar metallicities as well as the specific SFRs in low-mass present-day star-forming systems. Finally, in many low-$z$ scaling relations (e.g. gas content and metallicity vs. $M_*$) there is an abrupt break in the typical properties of galaxies above and below $M_*\approx 2\times 10^{10}M_\odot$, which qualitatively agrees with observations but quantitatively may be too sharp. This rapid transition contrasts with the too-gradual turn-down in the GSMF, suggesting a tension in how {\sc Simba}\ (and similar cosmological models) quench massive galaxies at low redshifts. Despite these minor issues, {\sc Simba}\ provides a state-of-the-art platform for studying galaxy evolution across a range of masses, environments, and cosmic epochs, and promises to yield numerous new insights into the physical drivers of galaxy evolution over cosmic time. \section*{Acknowledgements} The authors acknowledge helpful discussions with Josh Borrow, Weiguang Cui, Shuiyao Huang, Katarina Kraljic, and Neal Katz. We thank Philip Hopkins for making {\sc Gizmo}\ public, and providing our group with early access. We thank Robert Thompson for developing {\sc Caesar}, and the {\sc yt} team for development and support of {\sc yt}. RD acknowledges support from the Wolfson Research Merit Award program of the U.K. Royal Society. DAA acknowledges support by a Flatiron Fellowship. The Flatiron Institute is supported by the Simons Foundation. DN and QL were supported in part by NSF Award AST-1715206 and HST Theory Award 15043.0001. \textcolor{black}{ This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility. The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.} \bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In the $\rm \Lambda$CDM framework, the initial density perturbations grow by gravity and form dark matter haloes (DMHs), and galaxies are formed in DMHs through gas cooling. DMHs, and hence galaxies, become more massive and larger through the accretion of matter and mergers with other DMHs. The most massive and largest DMHs in today's universe are galaxy clusters. The DMH mass of galaxy clusters is typically $\gtrsim 10^{14}\, M_{\sun}$ (e.g. \citealp{Kravtsov_Borgani_2012,Overzier2016}) and a mature cluster hosts hundreds to thousands of galaxies. The properties of cluster galaxies are largely different from those of field galaxies. For example, at $z<1$, cluster galaxies are dominated by quiescent and/or elliptical galaxies with old stellar populations, while most field galaxies are star-forming galaxies like spirals (e.g. \citealp{Dressler1980,Goto2003,Bower1998}). Part or all of these differences are thought to be caused by environmental effects: ram-pressure stripping \citep{Gunn1972}, galaxy interactions, galaxy harassment \citep{Moore1998}, etc. When and how these differences were established is key to understanding the role of environments in galaxy formation. For this purpose, galaxies in clusters in early evolutionary stages should be investigated. Progenitors of local clusters at $z\gtrsim2$ are called proto-clusters. They are defined as the whole structure that will collapse into a cluster by $z=0$ (e.g. \citealp{Overzier2016}). A proto-cluster typically extends to more than 20 comoving Mpc at $z\sim 2$ \citep{Chiang2013,Muldrew2015} and an even larger area at higher redshift, being split into a number of DMHs and unbound regions. Among those substructures, we define the ``core" of the proto-cluster as the most massive DMH\footnote{Massive systems with $M_\mathrm{DMH}>10^{14}\,M_{\sun}$ at $z\gtrsim 2$ are sometimes called high-redshift clusters. However, they will also grow through the accretion of matter from the surrounding regions until $z=0$. In this sense, they are also regarded as massive proto-cluster cores.}. The mass of cores has a large scatter ($\sim 1\,\mathrm{dex}$) at $z\sim 2$, even if the descendant mass at $z=0$ is fixed \citep{Muldrew2015}. The relationship between the properties of galaxies and their location in proto-clusters is important for understanding cluster galaxy formation. \citet{Muldrew2018} have studied galaxy evolution in proto-clusters by applying a semi-analytic galaxy evolution model to N-body simulations. They have found that galaxies in core regions have different properties from those in the field and in the rest of the proto-cluster regions: a more top-heavy stellar mass function (SMF) and a higher fraction of quiescent galaxies, especially for low-mass galaxies. A similar trend of the SMF has been reported in \citet{Muldrew2015,Lovell2018}. Recently, several proto-cluster cores have been found, and they show a wide variety of star formation activity. \citet{Shimakawa2018} have found that $\mathrm{H \alpha}$ emitters in the densest regions of a proto-cluster at $z\sim 2.5$, which are regarded as cores, are more massive and more actively star-forming than those in the remaining regions of the same proto-cluster. Some cores are dominated by (dusty) star-forming galaxies unlike local mature clusters \citep{Wang2016, Oteo2018, Miller2018}, while massive cores with red sequence galaxies, which are similar to local clusters, have also been found \citep{Newman2014,Cooke2016,Lee-Brown2017,Willis2020}.
Such variations may reflect different evolutionary stages of cores. Most of the reported cores are biased towards possible progenitors of the most massive, Coma-like clusters ($M_\mathrm{DMH}>10^{15}\, M_{\sun}$ by $z=0$). Therefore, to reveal the whole picture of galaxy evolution in proto-cluster cores, we need a large sample of cores including less massive ones. Systematic proto-cluster searches have been done by various techniques. One such method, the fixed-aperture method, searches for an overdensity of high redshift galaxies (i.e. Lyman break galaxies (LBGs), line emitters, photometric redshift galaxies, etc.) over a given aperture (e.g. \citealp{Chiang2014,Toshikawa2018}). This method can successfully identify the whole region of a proto-cluster (e.g. \citealp{Chiang2015,Diener2015}). However, because this method uses an aperture ten times larger than the size of cores, it is difficult to isolate cores with it. Moreover, because LBGs and line emitters are typically star-forming galaxies, overdensities of such populations provide a biased view of proto-cluster galaxies. Another method is to use biased tracers. Some galaxy populations, like high-redshift radio galaxies and quasars, are frequently located in dense environments \citep{Hatch2011,Hatch2014}. Therefore, one can use such objects as beacons of proto-clusters (\citealp{Venemans2007,Wylezalek2013,Wylezalek2014,Cooke2014}). However, it is unclear whether these objects can trace proto-clusters completely \citep{Lovell2018, Uchiyama2018}. Because the lifetime of quasars, $10^{6}$ to $10^{8}$ years \citep{Martini2004}, is relatively short, they may miss some fraction of proto-clusters. Furthermore, the feedback of active galaxies suppresses the formation of surrounding galaxies \citep{Uchiyama2019}, possibly resulting in a biased picture of galaxy formation in proto-clusters. In this study, we propose a new method to find proto-cluster cores at $z\sim 2$, the epoch when massive cores appear \citep{Chiang2017}, and use it in the Cosmic Evolution Survey (COSMOS; \citealp{Scoville2007}) field. The extended Press-Schechter model\footnote{To calculate the extended Press-Schechter model, we use a FORTRAN code written by Takashi Hamana. The code is found at \url{http://th.nao.ac.jp/MEMBER/hamanatk/OPENPRO/index.html}} predicts that a DMH whose mass is $\gtrsim 2-3 \times 10^{13}\, M_{\sun}$ at $z\sim 2$ typically evolves into the cluster mass regime, $\gtrsim10^{14}\, M_{\sun}$, by $z=0$. Therefore, we regard DMHs with $\gtrsim 2-3 \times 10^{13}\, M_{\sun}$ at $z\sim 2$ as proto-cluster cores and search for such massive systems. The stellar-to-halo mass relation implies that galaxies with larger stellar masses are hosted by more massive DMHs. According to the abundance matching technique, the typical stellar mass of central galaxies hosted by DMHs with $M_{\mathrm{DMH}}\gtrsim 10^{13}\, M_{\sun}$ is $M_{*}\gtrsim 10^{11}\, M_{\sun}$ (e.g. \citealp{Behroozi2013}). However, DMHs which host central galaxies with $M_{*}\gtrsim 10^{11}\, M_{\sun}$ cover a wide range of DMH mass ($10^{12}-10^{14}\, M_{\sun}$). This means that single massive galaxies alone cannot be used to isolate DMHs as massive as proto-cluster cores. A multiple system of massive galaxies is a possible tracer of a proto-cluster core.
\citet{Bethermin2014} have studied the clustering of BzK-selected galaxies at $1.5<z<2.5$, finding that close pairs (separations below $20\arcsec$) of massive ($M_{*}>10^{11}\, M_{\sun}$) quiescent galaxies as well as massive main-sequence galaxies with strong star formation ($>200\, M_{\sun}/\mathrm{yr}$) are possible progenitors of clusters. The host DMH mass of the former at $z\sim 2$ is $5.5_{-4.5}^{+5.1}\times 10^{13}\, M_{\sun}$, which is massive enough for these systems to be regarded as cores. Using a galaxy sample with spectroscopic redshifts, \citet{Diener2013} have explored candidate galaxy groups within $500\, \mathrm{kpc}$ in projected distance and $700\, \mathrm{km/s}$ in velocity difference at $1.8<z<3.0$. In comparison with mock galaxy catalogues, they have found that the candidate groups contain one third of the progenitors of present-day clusters, although they are mainly the progenitors of less massive systems ($10^{13}-10^{14}\, M_{\sun}$). Moreover, there is a significant overdensity not only of the spectroscopic redshift sample but also of a photometric redshift sample with $M_{*}\geq 10^{10}\, M_{\sun}$ around the candidate groups. These results lead to the expectation that pairs of massive galaxies are hosted by more massive DMHs than isolated massive galaxies. Thus, we use a pair of massive galaxies as a tracer of proto-cluster cores. We define the term ``pair" as a multiple system of massive galaxies whose extent is consistent with the size of a proto-cluster core. We refer to not only associations of two massive galaxies but also those of more than two as ``pairs". Since most of the ``pairs" identified in this paper consist of just two galaxies, we adopt this naming convention. To avoid possible selection bias, we use both star-forming galaxies and quiescent galaxies to find pairs. The structure of this paper is as follows. In Section 2, we describe the data and galaxy samples used in this study. In Section 3, we introduce the method to find proto-cluster cores and show the results. We also compare the results with the IllustrisTNG simulation to evaluate the effectiveness of our method. In Section 4, we examine properties of member galaxies in the core candidates, focusing on the stellar mass function and the fraction of quiescent galaxies. Section 5 is devoted to a summary and conclusions. Throughout this paper, we assume a flat $\mathrm{\Lambda}$CDM cosmology with $(\Omega_\mathrm{m},\, \Omega_\mathrm{\Lambda},\, h,\, \sigma_\mathrm{8},\, n_{0})=(0.3,\, 0.7,\, 0.7,\, 0.81,\, 0.9)$. We use the notations $\rm cMpc$ and $\rm pMpc$ to indicate comoving and physical scales, respectively. We assume a \citet{Chabrier2003} initial mass function. \section{Data and sample selection} \subsection{The COSMOS2015 catalogue} We use data from the COSMOS2015 galaxy catalogue (\citealp{Laigle2016}; COSMOS2015 hereafter). COSMOS2015 contains deep and multi-wavelength photometry, from near-ultraviolet (NUV) to far-infrared, of the COSMOS field. In this paper, we only use objects in the central $\sim 1.5\, \mathrm{deg}^{2}$ region covered by the UltraVISTA-DR2. We also limit our sample to galaxies with $m(K_\mathrm{s})\leq 24.0$. This magnitude cut is chosen so that the detection completeness is homogeneous over the UltraVISTA field. From the catalogue, we extract the following quantities: photometric redshift (photo-\textit{z}), stellar mass, and a galaxy classification flag, an indicator of star formation activity (star-forming or quiescent).
In the catalogue, the \texttt{LEPHARE} code \citep{Arnouts2002,Ilbert2006} has been used to compute photo-\textit{z}'s and perform spectral energy distribution (SED) fitting, and the \textit{NUV-r} vs \textit{r-J} colour-colour plane has been used to classify galaxies \citep{Williams2009}: quiescent galaxies are defined as those with $M_\mathrm{NUV}-M_\mathrm{r}>3(M_\mathrm{r}-M_\mathrm{J})+1$ and $M_\mathrm{NUV}-M_\mathrm{r}>3.1$. At $1.5\leq z\leq 3.0$, this parent sample contains 60080 (167844) galaxies in total after (before) the magnitude cut. Among them, 57353 galaxies are classified as star-forming galaxies while the remaining 2727 galaxies are quiescent galaxies. \subsection{Sample selection} \label{sec:sample_selection} To identify massive DMHs, we use 1742 galaxies with $1.5\leq z\leq 3.0$ and $\log(M_{*}/M_{\sun})\geq 11$. We refer to the galaxies in this sample as ``massive galaxies (MGs)" hereafter. This sample accounts for 3\% of the parent sample at $1.5\leq z\leq 3.0$. For the cross-correlation analysis described in Section~\ref{sec:clustering}, a relatively large galaxy sample is needed. From the COSMOS2015 catalogue, we select galaxies with $1.5\leq z\leq 3.0$ and $10.2<\log(M_{*}/M_{\sun})<11$, whose total number is 16149. We refer to the galaxies in this sample as ``tracer galaxies" hereafter. This sample accounts for 27\% of the parent sample at $1.5\leq z\leq 3.0$. To examine properties of member galaxies of proto-cluster cores, we use all 86374 galaxies at $1.25\leq z\leq 3.25$. We refer to the galaxies in this sample as ``general galaxies" hereafter. These samples are summarised in Table~\ref{tab:sample}. \begin{table*} \centering \caption{Galaxy samples used in this paper.} \label{tab:sample} \begin{tabular}{lccccc} \hline sample & redshift & stellar mass cut & total & star-forming & quiescent \\ & & $[M_{\sun}]$ & [\#] & [\#] & [\#] \\ \hline parent sample & $1.5\leq z\leq 3.0$ & - & $60080$ & $57353$ & $2727$ \\ [2pt] massive galaxies (MGs) & $1.5\leq z\leq 3.0$ & $M_{*}\geq 10^{11}$ & $1742$ & $1207$ & $535$ \\ [2pt] tracer galaxies & $1.5\leq z\leq 3.0$ & $10^{10.2}< M_{*}<10^{11}$ & $16149$ & $14329$ & $1820$ \\ [2pt] \hline general galaxies & $1.25\leq z\leq 3.25$ & - & $86374$ & $82012$ & $4362$ \\ \hline \end{tabular} \begin{tablenotes}[normal] \item \textit{Note.} A magnitude cut of $m(K_\mathrm{s})\leq 24.0$ is applied to all samples. \end{tablenotes} \end{table*} \section{Construction of a proto-cluster core sample} In this section, we describe the method to identify proto-cluster core candidates and how to estimate their DMH mass. \subsection{Candidates for proto-cluster cores} \subsubsection{Pair finder} \label{sec:pair_finder} We use pairs of MGs to search for proto-cluster cores. In this study, a ``pair of MGs" refers to a multiple system of MGs whose size is consistent with that of proto-cluster cores, $\sim 0.3\, \mathrm{pMpc}$. To identify such systems, we apply the following procedure to the MGs: \begin{enumerate} \item We pick one galaxy and count its neighbour galaxies within $\Delta \theta \leq 30\arcsec$ and $\Delta z \leq 0.12$ of that galaxy.\\ \item If the number of galaxies thus counted, including the galaxy itself, is more than one, we regard all of them as member galaxies of a ``pair".\\ \item The three dimensional position of the pair is defined as the average position of the member galaxies of the pair. \end{enumerate} We set $30\arcsec$ as the maximum separation of member galaxies.
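A minimal implementation of this procedure might look as follows. This is only a sketch, assuming arrays of RA/Dec in degrees and photo-\textit{z}'s for the MG sample; it uses a small-angle approximation for the separation, and duplicate detections of the same pair seeded from different member galaxies would still need to be merged:
\begin{verbatim}
import numpy as np

def find_pairs(ra, dec, z, max_sep=30.0, max_dz=0.12):
    """Find 'pairs': systems of >= 2 MGs within max_sep [arcsec]
    and |dz| <= max_dz of a seed galaxy."""
    pairs = []
    for i in range(len(ra)):
        # projected separation in arcsec (small-angle approximation)
        dra = (ra - ra[i]) * np.cos(np.radians(dec[i]))
        sep = np.hypot(dra, dec - dec[i]) * 3600.0
        near = (sep <= max_sep) & (np.abs(z - z[i]) <= max_dz)
        members = np.where(near)[0]      # includes the seed itself
        if len(members) >= 2:
            pairs.append({'members': members,
                          'ra': ra[members].mean(),
                          'dec': dec[members].mean(),
                          'z': z[members].mean()})
    return pairs
\end{verbatim}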
This $30\arcsec$ limit is slightly smaller than the angular size of a core with $M_\mathrm{DMH}\sim 2\times 10^{13}\, M_{\sun}$, $\sim 36\arcsec \sim 0.3\, \mathrm{pMpc}$ in radius, reducing the probability of chance projection. We also set $0.12$ as the maximum redshift difference among members, considering the uncertainty in photo-\textit{z} estimates in the COSMOS2015 catalogue. Since $\Delta z =0.12$ corresponds to about $170\, \mathrm{cMpc}$ at $z=2$, which is much larger than the size of a core, detected pairs may be contaminated by false pairs due to chance projection. We discuss this in Section~\ref{sec:true_pair_fraction}. \subsubsection{Detected core candidates} \label{sec:pair} Applying our pair finder to the 1742 MGs, we identify 75 pairs as proto-cluster core candidates. Their sky positions are shown in Fig.~\ref{fig:pair_skydist}. While the majority (66 pairs) have only two MGs, 9 pairs have three or four members, plotted as star symbols. The redshift distribution of the 75 pairs is shown in Fig.~\ref{fig:pair_Nz} with that of the MG sample. The average redshift of the pairs, 1.85, is lower than that of the MG sample, 2.03. This difference may reflect the fact that there are more massive virialized systems at lower redshifts. We note that our core candidates contain a very massive ($M_\mathrm{DMH}\sim 10^{14}\, M_{\sun}$) core at $z\sim 2.5$ which has been spectroscopically confirmed by \citet{Wang2016}. \begin{figure} \includegraphics[width=\columnwidth]{./pair_m11_skydist_20200427.pdf} \caption{Sky distribution of the pairs of MGs colour-coded by redshift according to the colour bar. Pairs containing only two MGs are plotted as circles (66 pairs), while those containing three or four are plotted as star symbols (9 pairs). Grey dots are MGs (1742 in total). (A colour version of this figure is available in the online journal.)} \label{fig:pair_skydist} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{./Nz_pair_sample_20200220.pdf} \caption{The redshift distribution of MGs (orange) and pairs (blue). The histogram of MGs is normalised so that the total number matches that of pairs. The average redshifts of MGs and pairs are shown by dotted and dashed lines, respectively. (A colour version of this figure is available in the online journal.)} \label{fig:pair_Nz} \end{figure} \subsection{Clustering analysis} \label{sec:clustering} We use clustering analysis to estimate the average DMH mass of the core candidates obtained in Section~\ref{sec:pair}. Since we have only 75 candidates, we apply a cross-correlation technique. \subsubsection{The auto-correlation function of tracer galaxies} We first calculate the two-point angular auto-correlation function (ACF) of the tracer galaxy sample. We use an estimator of the ACF proposed by \citet{Landy1993}: \begin{equation} \label{ACF_estimator} \omega_\mathrm{ACF}(\theta)=\frac{DD(\theta)-2DR(\theta)+RR(\theta)}{RR(\theta)}, \end{equation} where $DD(\theta),\ DR(\theta)$, and $RR(\theta)$ are the normalised number counts of galaxy-galaxy, galaxy-random, and random-random pairs whose separations are $\theta$, respectively. We use $2\times 10^{5}$ random points uniformly distributed over the entire area where the data exist. We assume that the errors in the ACF come from the Poisson error in the $DD(\theta)$ term, \begin{equation} \varepsilon_\mathrm{ACF}=\frac{1+\omega_\mathrm{ACF}(\theta)}{\sqrt{DD_{0}(\theta)}}, \end{equation} where $DD_{0}(\theta)$ is the raw number count of galaxy-galaxy pairs.
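Given precomputed pair counts per angular bin, this estimator and its Poisson error reduce to a few lines. A sketch follows, where the normalised counts $DD$, $DR$, $RR$ and the raw counts $DD_{0}$ are assumed to have been computed elsewhere:
\begin{verbatim}
import numpy as np

def landy_szalay(dd, dr, rr, dd_raw):
    """Angular ACF and its Poisson error in each theta bin.

    dd, dr, rr : normalised galaxy-galaxy, galaxy-random and
                 random-random pair counts
    dd_raw     : raw (unnormalised) galaxy-galaxy pair counts
    """
    w = (dd - 2.0 * dr + rr) / rr        # Landy & Szalay (1993)
    err = (1.0 + w) / np.sqrt(dd_raw)    # Poisson error on DD
    return w, err
\end{verbatim}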
We assume that the ACF can be described by a power-law: \begin{equation} \label{omega_model} \omega_\mathrm{model}(\theta)=A_{\omega}\theta^{-\beta}, \end{equation} where $A_{\omega}=\omega(1\arcsec)$ is the amplitude of the ACF. We fix $\beta$ to the fiducial value 0.8 (e.g. \citealp{Peebles1975,Ouchi2003}). When we apply the estimator in Equation~\eqref{ACF_estimator} to observational data of a finite survey area, the ACF is negatively biased due to the integral constraint (IC; \citealp{Groth1977}): \begin{equation} \omega_\mathrm{obs}(\theta) = \omega_\mathrm{true}(\theta) - \mathrm{IC}, \end{equation} where $\omega_\mathrm{obs}$ is the ACF derived from the observational data and $\omega_\mathrm{true}$ is the true ACF. Following \citet{Roche1999}, we calculate this term using random points: \begin{equation} \mathrm{IC} = \frac{\sum_{\theta} RR(\theta)\cdot \omega_\mathrm{model}(\theta)}{\sum_{\theta} RR(\theta)}=\frac{\sum_{\theta} RR(\theta)\cdot A_{\omega}\theta^{-\beta}}{\sum_{\theta} RR(\theta)}. \end{equation} We derive $\mathrm{IC} = 0.0027A_{\omega}$ in the COSMOS field. We fit $\omega(\theta)$ over $40\arcsec-2000\arcsec$, correcting for the IC. We then calculate the spatial two-point correlation function $\xi(r)$: \begin{equation} \xi(r) = \left(\frac{r}{r_{0}}\right)^{-\gamma}, \end{equation} where $r_{0}$ is the correlation length and $\gamma$ is the slope of the power-law. The spatial correlation function $\xi(r)$ is linked to the angular correlation function $\omega(\theta)$ via the Limber transform \citep{Peebles1980,Efstathiou1991}: \begin{align} \beta &= \gamma - 1,\\ \label{a_omega_limber} A_{\omega} &= r_{0}^{\gamma} B \left(\frac{1}{2}, \frac{\gamma-1}{2} \right) \frac{\int_{0}^{\infty}{dz N(z)^{2}F(z)D_{\theta}(z)^{1-\gamma}g(z)}}{\left[\int_{0}^{\infty}{dz N(z)}\right]^{2}},\\ g(z) &= \frac{H_{0}}{c}(1+z)^{2}\left\{1+\Omega_\mathrm{m}z+\Omega_\mathrm{\Lambda}[(1+z)^{-2}-1]\right\}^{1/2}, \end{align} where $B$ is the beta function, $N(z)$ is the redshift distribution of galaxies used to derive the ACF and $D_{\theta}(z)$ is the angular diameter distance. $F(z)$ describes the redshift evolution of $\xi(r)$, which is modelled as $F(z)=[(1+z)/(1+\Bar{z})]^{-(3+\Bar{\epsilon})}$ with $\Bar{\epsilon}=-1.2$ \citep{Roche1999}, where $\Bar{z}$ is the average redshift of the sample. Then we define the linear bias parameter of galaxies $b_\mathrm{g}$, which represents the relative strength of galaxy clustering compared to dark matter at a large scale ($8\, \mathrm{cMpc}/{\it h_\mathrm{100}}$): \begin{equation} \label{bias} b_\mathrm{g} = \sqrt{\frac{\xi_\mathrm{g}\left(r=8\, \mathrm{cMpc}/ {\it h_\mathrm{100}}\right)}{\xi_\mathrm{DM}\left(r=8\, \mathrm{cMpc}/{\it h_\mathrm{100}}\right)}}, \end{equation} where $\xi_\mathrm{DM}(r)$ is the spatial correlation function of dark matter. We assume the model of \citet{Eisenstein1999} for the matter power spectrum. To calculate $\xi_\mathrm{DM}(r)$, we use a \texttt{python} toolkit for cosmological calculations called \texttt{COLOSSUS} \citep{Diemer2018}. In this way, the bias parameter of the tracer galaxies is derived from Equation~\eqref{bias}. We assume that the bias parameter of galaxies approximates that of the underlying DMHs on large scales. \subsubsection{The cross-correlation function between cores and tracers} The cross-correlation technique is often applied when the sample size is small. 
We calculate the two-point angular cross-correlation function (CCF) between the core candidates and the tracer galaxies using the following estimator: \begin{equation} \label{CCF_estimator} \omega_\mathrm{CCF}(\theta)=\frac{D_\mathrm{s}D_\mathrm{t}(\theta)-D_\mathrm{s}R(\theta)-D_\mathrm{t}R(\theta)+RR(\theta)}{RR(\theta)}, \end{equation} where $D_\mathrm{s}D_\mathrm{t}(\theta)$, $D_\mathrm{s}R(\theta)$ and $D_\mathrm{t}R(\theta)$ are the normalised number counts of pair-tracer, pair-random, and tracer-random pairs whose separations are $\theta$, respectively. Since the sample sizes of tracers and random points are much larger than that of pairs, we assume that the errors in the CCF come from the Poisson error in the $D_\mathrm{s}D_\mathrm{t}(\theta)$ term: \begin{equation} \varepsilon_\mathrm{CCF}=\frac{1+\omega_\mathrm{CCF}(\theta)}{\sqrt{D_\mathrm{s}D_{\mathrm{t}_{0}}(\theta)}}, \end{equation} where $D_\mathrm{s}D_{\mathrm{t}_{0}}(\theta)$ is the raw number count of pair-tracer pairs. We fit the CCF using Equation~\eqref{omega_model} and derive its amplitude. Then, we calculate the correlation length of the spatial CCF in almost the same way as for the ACF. Instead of Equation~\eqref{a_omega_limber}, we use the following equation \citep{Croom1999}: \begin{equation} A_{\omega} = r_{0}^{\gamma} B \left(\frac{1}{2}, \frac{\gamma-1}{2} \right) \frac{\int_{0}^{\infty}{dz N_\mathrm{s}(z)N_\mathrm{t}(z)F(z)D_{\theta}(z)^{1-\gamma}g(z)}}{\left[\int_{0}^{\infty}{dz N_\mathrm{s}(z)}\right] \cdot \left[\int_{0}^{\infty}{dz N_\mathrm{t}(z)}\right]}, \end{equation} where $N_\mathrm{s}$ and $N_\mathrm{t}$ are the redshift distributions of pairs and tracer galaxies, respectively. For the term $F(z)$, we use the average redshift of pairs. After that, we derive the bias parameter of the cross-correlation from Equation~\eqref{bias}. With the bias parameters of tracer galaxies ($b_\mathrm{t}$) and the cross-correlation ($b_\mathrm{st}$), we estimate that of core candidates by: \begin{equation} b_\mathrm{s} = \frac{b_\mathrm{st}^{2}}{b_\mathrm{t}}. \end{equation} We use $b_\mathrm{s}$ to calculate the average mass of the core-hosting DMHs with the relation between the bias parameter $b$ and the ``peak height" in the linear density field, $\nu$, presented in \citet{Tinker2010}. Here, the peak height $\nu$ is defined as: \begin{equation} \nu = \frac{\delta_\mathrm{c}}{\sigma(M)}, \end{equation} where $\delta_\mathrm{c}= 1.686$ is the critical density for spherical collapse, and $\sigma(M)$ is the linear matter standard deviation on the Lagrangian scale of the halo. For this calculation, we use the python toolkit \texttt{COLOSSUS}. \subsubsection{DMH mass of the core candidates} \label{sec:DMH_mass} Fig.~\ref{fig:ACF_CCF} shows the ACF and the CCF thus obtained. A signal is clearly detected for both correlation functions. From these correlation functions, we estimate the average DMH mass of the core candidates; we also estimate the average DMH masses of isolated (i.e. non-pair) MGs in a similar manner (Fig.~\ref{fig:Mdh}). We confirm that the core candidates are hosted by very massive haloes with $M_\mathrm{DMH}=2.6_{-0.8}^{+0.9}\times 10^{13}\, M_{\sun}$, which is within our target mass range. We also find that this value is larger than the DMH masses of isolated MGs with $\log(M_{*}/M_{\sun})\geq 11.0$ and $\log(M_{*}/M_{\sun})\geq 11.3$ by 1.3 dex and 0.4 dex, respectively, indicating that pairs of MGs can trace more massive haloes than their isolated counterparts. 
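As an illustration of this last step, the \citet{Tinker2010} $b$--$\nu$ relation can be inverted numerically with \texttt{COLOSSUS}; the sketch below assumes its public interface, and the cosmology and mass definition shown are placeholders rather than the exact choices of this work:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from colossus.cosmology import cosmology
from colossus.lss import bias

cosmology.setCosmology('planck15')  # placeholder cosmology

def mass_from_bias(b_s, z, mdef='200m'):
    # Halo mass (Msun/h) whose Tinker et al. (2010) bias equals b_s at z.
    f = lambda logM: bias.haloBias(10.0**logM, z=z, mdef=mdef,
                                   model='tinker10') - b_s
    return 10.0**brentq(f, 10.0, 16.0)
\end{verbatim}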
\begin{figure} \includegraphics[width=\columnwidth]{./ACF_CCF_20200513.pdf} \caption{The ACF of tracer galaxies (grey) and the CCF between pairs of MGs and tracers (blue) after correction of the IC. The ACF and CCF are fitted by a power-law with $\beta=0.8$, shown by dashed and solid lines, respectively, where the blue shaded region corresponds to the 1$\sigma$ error around the best fit power-law to the CCF. The error in the ACF fit, $\pm 0.045$, is not shown. The fitting range $40\arcsec<\theta<2000\arcsec$ is shown by the yellow shading. (A colour version of this figure is available in the online journal.)} \label{fig:ACF_CCF} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{./Mdh_comp_temp_20200111.pdf} \caption{The mass of DMHs estimated by clustering analysis. A blue star indicates pairs of MGs. For comparison, the DMH masses of isolated MGs are also plotted. ``Isolated (all)" and ``isolated (most massive)" (orange) refer to non-pair galaxies whose stellar masses are larger than $10^{11}\, M_{\sun}$ and $10^{11.3}\, M_{\sun}$, respectively. In addition, we show the DMH mass of ``true pairs" assuming that the fraction of true pairs is 54\% (green) as calculated in Section~\ref{sec:true_pair_fraction}. (A colour version of this figure is available in the online journal.)} \label{fig:Mdh} \end{figure} \subsection{The fraction of true pairs and the intrinsic DMH mass} \label{sec:true_pair_fraction} Since we use a photo-\textit{z} galaxy catalogue, the detected pairs of MGs may be contaminated by false pairs due to chance projection. Although we cannot tell which pairs are true systems without spectroscopic follow-up observations, we can statistically estimate the fraction of ``true pairs". Following the method introduced in \citet{Bethermin2014}, we estimate this fraction as a function of the maximum angular separation using the ACF of the MGs. In general, the ACF of galaxies is expressed by the sum of two components, the one-halo term and the two-halo term: \begin{equation} \omega_\mathrm{ACF}(\theta) = \omega_\mathrm{1h}(\theta) + \omega_\mathrm{2h}(\theta), \end{equation} where $\omega_\mathrm{1h}$ and $\omega_\mathrm{2h}$ are the one-halo and two-halo terms, respectively. The one-halo term comes from galaxy pairs hosted by the same haloes and the two-halo term originates from pairs hosted by different haloes. Therefore, we can estimate the fraction of true pairs by evaluating the relative strength of the one-halo term. The fraction of true pairs whose separation is less than $\theta$ can be calculated as: \begin{align} \label{f_true} f_\mathrm{true}(\theta) &= \frac{\int_{0}^{\theta}\omega_\mathrm{1h}(\theta^{\prime})\theta^{\prime} d\theta^{\prime}} {\int_{0}^{\theta} \left[1+\omega_\mathrm{ACF}(\theta^{\prime})\right] \theta^{\prime} d\theta^{\prime}}. \end{align} We first calculate the ACF of the MGs. Then, we derive the two-halo term assuming that this term can be described as: \begin{equation} \omega_\mathrm{2h}(\theta) = b^{2} \omega_\mathrm{DM}(\theta), \end{equation} where $b$, the normalisation, is the bias parameter and $\omega_\mathrm{DM}(\theta)$ is the angular ACF of dark matter calculated from the matter power spectrum. We fit $\omega_\mathrm{2h}(\theta)$ over $40\arcsec-2000\arcsec$, correcting for the IC. Finally, we use Equation~\eqref{f_true} to derive $f_\mathrm{true}$. 
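Numerically, Equation~\eqref{f_true} reduces to two one-dimensional integrals over the binned ACF, e.g. (a \texttt{Python} sketch with hypothetical array names):
\begin{verbatim}
import numpy as np

def f_true(theta, w_acf, w_2h):
    """Cumulative true-pair fraction within the largest theta given,
    from the measured ACF and the fitted two-halo term."""
    w_1h = w_acf - w_2h                           # one-halo term
    num = np.trapz(w_1h * theta, theta)           # int w_1h(t) t dt
    den = np.trapz((1.0 + w_acf) * theta, theta)  # int [1 + w_acf(t)] t dt
    return num / den
\end{verbatim}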
Here we consider an additional correction of $\omega$: the ACF signal becomes weaker when the redshift window becomes larger. While the redshift window is 0.24 in our pair finder algorithm, that in this analysis is 1.5 ($1.5<z<3.0$). To correct for this effect, we multiply $\omega$ by 4.78, the typical ratio of $\omega_\mathrm{ACF}(\Delta z = 0.24)$ to $\omega_\mathrm{ACF}(1.5<z<3.0)$. Fig.~\ref{fig:f_true} shows the ACF of the MGs and $f_\mathrm{true}$. In our pair finder we adopt $30\arcsec$ as the maximum separation, resulting in $f_\mathrm{true}=54\%$. Since isolated MGs have a weaker clustering signal than real pairs, the contamination by false pairs reduces the clustering signal of pairs. We estimate the bias of real pairs, $b_\mathrm{true}$, and hence the intrinsic DMH mass of cores with the following relation \citep{Bethermin2014}: \begin{equation} b_\mathrm{pair}^{2} = f_\mathrm{true}\,b_\mathrm{true}^{2} + (1-f_\mathrm{true})\,b_\mathrm{c}^{2}, \end{equation} where $b_\mathrm{pair}$ is the bias parameter of the core candidates obtained in Section~\ref{sec:clustering} and $b_\mathrm{c}$ is the bias parameter of contaminants. We approximate $b_\mathrm{c}$ by the bias of the MGs. The intrinsic DMH mass is found to be $4.0_{-1.5}^{+1.8}\times 10^{13}\,M_{\sun}$, which is shown in Fig.~\ref{fig:Mdh} with the label ``corrected". Using the Millennium Simulation, \citet{Muldrew2015} have shown that the most massive progenitor haloes at $z=2$ of present-day $M_\mathrm{DMH}=1\times 10^{14}\, M_{\sun}$ clusters have a median mass of $1.4\times 10^{13}\, M_{\sun}$, with a $1\sigma$ scatter of $0.22$ dex. The mean DMH mass of the cores exceeds this median value even before contamination correction. Then we estimate the descendant DMH mass of the cores using the extended Press-Schechter model. We assume that all the cores are located at $z=1.85$. The descendant masses are shown in Fig.~\ref{fig:eps_descendant_halomass} as blue and green shades for masses before and after correction of contamination, respectively. We find that the host haloes of the cores can grow to $1\times 10^{14}\ M_{\sun}$ by $z=0$, comparable to the mass of a Virgo- or Fornax-like cluster \citep{Chiang2013}. \begin{figure} \includegraphics[width=\columnwidth]{./ACF_ftrue_z15to3_m11_omegacorr478_20200217.pdf} \caption{\textit{Top panel}: The ACF of MGs. Black points show the observed ACF. A dashed line shows the two-halo term of the ACF, derived from fitting in $40\arcsec< \theta <2000\arcsec$. \textit{Bottom panel}: The fraction of true pairs as a function of pair separation. The blue solid line shows $f_\mathrm{true}$ for pairs whose separations are smaller than $\theta$, which is given by Equation~\eqref{f_true}, while the orange dashed line is $f_\mathrm{true}$ at a given $\theta$. (A colour version of this figure is available in the online journal.)} \label{fig:f_true} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{./Mdh_growth_20200217.pdf} \caption{The descendant mass of the DMHs hosting pairs calculated by the extended Press-Schechter model from $z=1.85$ to $z=0$. Shaded regions show the $1 \sigma$ scatters and dashed lines are the modes. Blue and green colours correspond to initial masses of $2.62\times 10^{13}\, M_{\sun}$ and $3.96 \times 10^{13}\, M_{\sun}$, respectively. The latter is the intrinsic mass after correction of contamination (see Section~\ref{sec:true_pair_fraction}). 
(A colour version of this figure is available in the online journal.)} \label{fig:eps_descendant_halomass} \end{figure} \subsection{The number density of cores} \label{sec:number_density} To check how completely our pair-finding method finds massive DMHs, we compare the number density of our core candidates to that derived from the halo mass function. Assuming that all of the most massive DMHs host a single pair of MGs, we first calculate the minimum mass of DMHs which host a pair ($M_\mathrm{min}$) as follows: \begin{equation} b_\mathrm{obs} = \frac{ \int^{\infty}_{M_\mathrm{min}} b(M)\, \frac{dn(M)}{dM}\, dM }{ \int^{\infty}_{M_\mathrm{min}} \frac{dn(M)}{dM}\, dM }, \end{equation} where $\frac{dn(M)}{dM}$ is the halo mass function and $b(M)$ is the bias parameter as a function of halo mass. Here, we adopt the \citet{Sheth1999} halo mass function. Then we calculate the number density of DMHs which are more massive than $M_\mathrm{min}$. In Table~\ref{tab:number_density}, we summarise the number density of each population. Our core candidates have a lower number density than that estimated from the halo mass function by a factor of 2.5 (3.5) with (without) true pair correction, resulting in $\sim 40\% \ (30\%)$ completeness. We further explore the completeness as a function of DMH mass in the next section using the IllustrisTNG simulation. \begin{table} \centering \caption{The number density of cores, DMHs and clusters.} \label{tab:number_density} \begin{tabular}{lccc} \hline objects & redshift & $M_\mathrm{min}\,[M_{\sun}]$ & $n\,[\mathrm{cMpc^{-3}}]$ \\ \hline cores & $1.5\leq z\leq 3.0$ & $1.6\times 10^{13}$ & $2.8 \times 10^{-6}$ \\ [2pt] DMHs$^a$ & $1.5\leq z\leq 3.0$ & $1.6\times 10^{13}$ & $1.0\times 10^{-5}$ \\ [2pt] cores (true)$^b$\ & $1.5\leq z\leq 3.0$ & $2.5\times 10^{13}$ & $1.5\times 10^{-6}$ \\ [2pt] DMHs$^a$ & $1.5\leq z\leq 3.0$ & $2.5\times 10^{13}$ & $4.0\times 10^{-6}$ \\ [2pt] local clusters & $z=0$ & $1.0\times 10^{14}$ & $1.5\times 10^{-5}$ \\ \hline \end{tabular} \begin{tablenotes}[normal] \item \textit{Notes.} $^a$The number density of DMHs calculated by the halo mass function. $^b$A true pair fraction of $54\%$ is considered. \end{tablenotes} \end{table} \subsection{Comparison with the IllustrisTNG} \label{sec:illustris} In this paper, we assume that pairs of MGs are typically hosted by more massive DMHs than isolated MGs. In the previous section, we confirmed this hypothesis in a statistical manner with observational data. However, as shown in Section~\ref{sec:number_density}, our method may not be able to find all massive DMHs. To evaluate the effectiveness of pairs as tracers of cores, we need to know the mass distribution of pair-host DMHs, the fraction of massive DMHs which host pairs, and the fraction of pair-host DMHs which can actually grow into $M_\mathrm{DMH}\geq 10^{14}\, M_{\sun}$ at $z=0$. Since observational data do not tell us individual halo masses, we employ a mock galaxy catalogue of the IllustrisTNG project for this purpose. The IllustrisTNG project is a series of cosmological magnetohydrodynamical simulations of galaxy formation and evolution including various baryonic processes: star formation, stellar evolution, chemical enrichment, primordial and metal-line cooling of the gas, stellar feedback, and black hole formation, growth and feedback \citep{Pillepich2018a,Weinberger2017}. The simulations consist of three runs with different box sizes, and each run has three different resolutions. 
We use results from TNG300, which has the largest volume, $\sim( 205\, \mathrm{cMpc}/h)^3$, among the three runs. Thanks to the large volume, TNG300 is suitable for investigating properties of rare objects like galaxy clusters. Among the three TNG300 runs, we select the one with the highest mass resolution, TNG300-1, and use the halo (group) and galaxy (subhalo) catalogues as well as merger trees (i.e. the merger histories of individual haloes). A detailed description of the simulations can be found in the IllustrisTNG presentation papers \citep{Naiman2018,Springel2018,Pillepich2018c,Marinacci2018,Nelson2018}. First, from the mock galaxy catalogue of $z=2$ (snapshot 33), we extract the positions and stellar masses of galaxies. Then we select galaxies with $\log(M_{*}/M_{\sun})\geq 11$ and apply the pair finder to them. Instead of the angular separation criterion in Section~\ref{sec:pair_finder}, we require that three-dimensional separations be $< 0.3\, \mathrm{pMpc}$ (a code sketch is given below). We identify 103 pairs from 2092 massive mock galaxies. The number of independent pair-host haloes is 100 because some pairs are hosted by the same haloes. In the top panel of Fig.~\ref{fig:Illustris-pair} we show the relation between the stellar masses of central galaxies and their host DMH masses\footnote{We approximate DMH masses by $M_{200}$, which represents the total mass enclosed by a sphere whose mean density is 200 times the critical density of the universe.}. Small stars and dots denote DMHs which host pairs and isolated centrals, respectively. For pair-host DMHs, we plot the largest stellar mass among each pair. For a series of stellar mass bins with a width of 0.2 dex, we calculate median DMH masses. Large stars and circles show the median masses of pair and isolated central host DMHs, respectively. We find that at a fixed stellar mass, the median mass of DMHs which host a pair is larger by 0.15 to 0.3 dex than that of DMHs hosting isolated centrals. This suggests that pairs of MGs are effective tracers of the most massive DMHs in the universe at $z\sim2$. We also show the fraction of DMHs which host a pair as a function of halo mass in the bottom panel of Fig.~\ref{fig:Illustris-pair}. Blue triangles show the pair-host fraction of DMHs more massive than a given mass and orange circles represent the differential fraction. DMH masses estimated from clustering analysis and $M_\mathrm{min}$ obtained in Section~\ref{sec:number_density} are also plotted as solid and dashed lines, respectively. At the mass of $M_\mathrm{min}$ with (without) true pair correction, the cumulative pair-host fraction is $\sim 50\%\ (30\%)$, consistent with the completeness calculated from the halo mass function. Furthermore, the pair-host fraction monotonically increases with halo mass. These results mean that pairs of MGs can effectively trace DMHs which are massive enough to be regarded as proto-cluster cores. Finally, we investigate the fraction of pair-host haloes at $z=2$ that can evolve into $\geq 10^{14}\, M_{\sun}$ at $z=0$. Tracing merger histories of pair-host haloes, we find that 100 independent pair-host haloes at $z=2$ become 89 independent haloes at $z=0$, indicating that mergers reduce the number of pair-host haloes by $\sim 10\%$. Among those descendants, 63 haloes are more massive than $10^{14}\, M_{\sun}$, which are regarded as clusters. This means that the purity of pair-host haloes as tracers of proto-cluster cores is $63\%$. In the simulation box, there are 280 clusters. Therefore, the completeness of pairs as tracers of $z=0$ clusters is $23\%$. 
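The three-dimensional pair selection applied to the mock galaxies above can be sketched with a k-d tree (\texttt{Python}; inputs hypothetical, box periodicity neglected for brevity):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def find_pairs_3d(pos_pmpc, max_sep_pmpc=0.3):
    """pos_pmpc: (N, 3) proper-Mpc positions of log(M*/Msun) >= 11
    mock galaxies. Returns the set of index pairs closer than
    max_sep_pmpc (pass boxsize=... to cKDTree for periodic boxes)."""
    tree = cKDTree(pos_pmpc)
    return tree.query_pairs(r=max_sep_pmpc)
\end{verbatim}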
We further investigate the completeness for $z=0$ clusters in terms of their mass. Following \citet{Chiang2014}, we divide $z=0$ clusters into three types according to their mass: Fornax-like ($M_\mathrm{DMH}=1-3 \times 10^{14}\,M_{\sun}$), Virgo-like ($M_\mathrm{DMH}=3-10 \times 10^{14}\,M_{\sun}$) and Coma-like ($M_\mathrm{DMH}>1 \times 10^{15}\,M_{\sun}$) clusters. At $z=0$, the numbers of descendants of pair-host haloes (and all haloes in the simulation box) classified as Fornax-like, Virgo-like and Coma-like clusters are 38 (235), 22 (42), 3 (3), respectively, resulting in $16\%$, $52\%$ and $100\%$ completeness for each type. This suggests that pairs of MGs are good tracers not only of the progenitor haloes of the most massive clusters but also of those of Virgo-like clusters. In Figure~\ref{fig:Illustris-descendant}, we show the DMH masses of pair-host haloes and their descendants at $z=0$. At fixed $M_\mathrm{DMH}(z=0)$, the masses of progenitors have a 1$\sigma$ scatter of $0.2-0.4$ dex, which is similar to the value found by \citet{Muldrew2015}. This relatively large scatter implies that there are various paths of halo mass growth. For each type of cluster, we check the fraction of DMHs which become more than ten times more massive from $z=2$ to $z=0$. For Fornax-like, Virgo-like and Coma-like clusters, these fractions are roughly 15\%, 60\% and 100\%, respectively, suggesting that the progenitors of more massive clusters tend to grow more rapidly after $z=2$. \begin{figure*} \includegraphics[width=2\columnwidth]{./Mh_Mstar_fpair_20200217.pdf} \caption{\textit{Top panel}: The relation between stellar mass and host DMH mass for massive galaxies ($\log(M_{*}/M_{\sun})\geq 11$) in the IllustrisTNG300-1. Small cyan stars refer to pairs of massive galaxies with separations smaller than $0.3\, \mathrm{pMpc}$ while dots show isolated central galaxies. Large stars and circles show corresponding median values in 0.2 dex stellar mass bins. A blue dotted line is the average $M_\mathrm{DMH}$ of pair-host DMHs in IllustrisTNG300-1. \textit{Bottom panel}: The fraction of DMHs which host pairs of massive galaxies as a function of halo mass. Blue triangles show the pair-host fraction of DMHs more massive than a given mass and orange circles represent the differential fraction. Solid and dashed lines show the average $M_\mathrm{DMH}$ of the pairs estimated by clustering analysis and $M_\mathrm{min}$ obtained in Section~\ref{sec:number_density}, where blue and green colours show, respectively, before and after true pair correction. (A colour version of this figure is available in the online journal.)} \label{fig:Illustris-pair} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{./Illustris_descendant_mass_comp_20200430.pdf} \caption{\textit{Top panel}: The DMH masses of pair-host haloes and those of the descendants at $z=0$. Blue squares, green circles and magenta diamonds show the masses of progenitor haloes of Fornax-like, Virgo-like and Coma-like clusters, respectively. Progenitors whose descendants are less massive than $10^{14}\, M_{\sun}$ are shown by black triangles. Grey dashed lines show the ratio of $M_\mathrm{DMH}(z=0)$ to $M_\mathrm{DMH}(z=2)$. \textit{Bottom panel}: The completeness of pairs of massive galaxies as tracers of $z=0$ clusters in four mass bins. The meanings of the symbols are the same as those in the top panel. The black dotted line shows the completeness in the whole mass range above $1\times10^{14}\,M_{\sun}$, which is 0.23. 
(A colour version of this figure is available in the online journal.)} \label{fig:Illustris-descendant} \end{figure} \section{Properties of member galaxies of proto-cluster cores} We examine the stellar mass function (SMF) and the quiescent fraction for galaxies in the detected cores. Since the COSMOS2015 catalogue is a photo-\textit{z} sample, we subtract field galaxies statistically as described below. \subsection{Field subtraction and the field stellar mass function} We extract all galaxies down to $\log(M_{*}/M_{\sun})=9.0$ in cylindrical regions around the 75 cores with a radius of $\Delta r=0.3\,\mathrm{pMpc}$ and a line-of-sight length $\Delta z=0.5$. We adopt this relatively large $\Delta z$ value so as not to miss low-mass galaxies near the mass limit, which have much larger photo-\textit{z} uncertainties than $\log(M_{*}/M_{\sun})\geq 11.0$ galaxies. The galaxies in these cylindrical regions are contaminated by field galaxies. We perform field subtraction in the following manner. First, we calculate the SMFs of field galaxies by dividing the galaxy sample of $\log(M_{*}/M_{\sun})\geq 9.0$ into 20 redshift bins of range $1.25<z<3.25$ and width $\Delta z=0.1$. For each redshift bin, we also compute the total cosmic volume occupied by the cylindrical regions around the cores. Then, multiplying the field SMFs by these cosmic volumes, we estimate the total number of contamination galaxies falling within the 75 cylindrical regions as a function of stellar mass. Finally, we subtract this mass function of contaminants from the raw counts around the cores. We also need a field SMF averaged over $1.5<z<3.0$ to compare with the SMF of member galaxies. Because the redshift distribution of the core sample is slightly different from that of the general galaxy sample, we calculate this field SMF as: \begin{equation} \label{phi_field} \Phi_\mathrm{field} = \frac{\sum_{i}{n(z_{i})\Phi_{\mathrm{field},\, i}}}{\sum_{i}{n(z_{i})}}, \end{equation} where ${z_{i}}$ is the $i$-th redshift bin, $n(z_{i})$ is the number of cores at ${z_{i}}$, and $\Phi_{\mathrm{field},\,i}$ is the field SMF at $z_{i}$. \subsection{The stellar mass function} The SMFs of galaxies in the cores and that of the field galaxies are shown in the top panel of Fig.~\ref{fig:SMF}. To calculate the former, we assume that DMHs hosting a pair are spheres with a radius of $0.3\,\mathrm{pMpc}$. Completeness correction as a function of stellar mass is not considered. In Fig.~\ref{fig:SMF}, grey, blue and red lines refer to the SMFs of total galaxies, star-forming galaxies and quiescent galaxies, respectively. For comparison, we calculate the SMFs around isolated MGs with stellar masses of $\log(M_{*}/M_{\sun})\geq 11.3$ and $\log(M_{*}/M_{\sun})\geq 11.0$ in the same way as that for pairs. Note that above $\log(M_{*}/M_{\sun})=11$, the SMFs are positively biased because there are at least two (one) MGs in each core (around each isolated MG), which were used to identify them. It is found that the SMFs of total and star-forming galaxies in the cores as well as around isolated MGs have a flat shape below $\log(M_{*}/M_{\sun})=11$, where the SMFs are not affected by selection bias. We also find that the normalisation of the SMF of the cores is roughly twice as large as those of the two classes of isolated MGs, meaning that the pairs reside in denser environments. 
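Schematically, the construction of the core SMF (the field subtraction described in the previous subsection plus the volume normalisation above) can be sketched in \texttt{Python} as follows; array names are hypothetical and proper/comoving volume conversions are glossed over:
\begin{verbatim}
import numpy as np

def core_smf(raw_counts, field_smf_zbins, cyl_vol_zbins,
             n_cores=75, r_pmpc=0.3, dlogm=0.25):
    """raw_counts: counts per stellar-mass bin, summed over all cylinders;
    field_smf_zbins: (n_z, n_mass) field SMFs; cyl_vol_zbins: (n_z,)
    volumes occupied by the cylinders in each redshift bin."""
    # expected contaminants per mass bin: field SMF x volume x bin width
    contam = (field_smf_zbins * cyl_vol_zbins[:, None]).sum(axis=0) * dlogm
    members = raw_counts - contam            # statistical field subtraction
    v = n_cores * (4.0 / 3.0) * np.pi * r_pmpc**3  # spheres of 0.3 pMpc
    return members / (v * dlogm)             # SMF per volume per dex
\end{verbatim}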
To discuss the shapes of the SMFs and the galaxy formation efficiency in the cores, we also calculate the ratio between the SMF of member galaxies and that of field galaxies for each star formation class. The normalisations of the SMFs of the cores are roughly two to three orders of magnitude higher than those of the field galaxies. We therefore normalise the ratio of the SMFs by total mass: \begin{equation} \label{SMF_unitmass} \frac{N_\mathrm{core}}{N_\mathrm{field}} = \frac{\Phi_\mathrm{core}}{\Phi_\mathrm{field}} \frac{\rho_\mathrm{crit}\Omega_\mathrm{m}V_\mathrm{core}}{M_\mathrm{core}}, \end{equation} where $\rho_\mathrm{crit}$ is the critical density of the universe in our cosmology, $V_\mathrm{core}$ is the average comoving volume and $M_\mathrm{core}$ is the DMH mass of the cores. The results are plotted in the bottom panel of Fig.~\ref{fig:SMF}. We find that this ratio increases with stellar mass. In other words, the member galaxies of proto-cluster cores have a more top-heavy SMF than field galaxies. This result is qualitatively consistent with simulations \citep{Lovell2018,Muldrew2018}. We note that the SMFs of field galaxies are not exactly the same among the three panels because the redshift distributions $n({z_{i}})$ of the corresponding massive galaxy populations are different. See the definition of $\Phi_\mathrm{field}$ in Equation~\eqref{phi_field}. We also find that the ratio of the SMFs is below unity, albeit marginally, at $\log(M_{*}/M_{\sun})\lesssim 10$ and above unity at $\log(M_{*}/M_{\sun})\gtrsim 10$, meaning that in core regions, the formation of low-mass galaxies may be suppressed while that of high-mass galaxies is enhanced compared to the field. Destruction of low-mass galaxies by mergers and/or tidal disruption \citep{Martel2012} is a possible cause of the lower formation efficiency of low-mass galaxies. Another possibility is the suppression of star formation in low-mass galaxies. As described in Section~\ref{sec:fq}, low-mass galaxies in the cores have a higher quiescent fraction than their field counterparts, which supports this interpretation. For high-mass galaxies, the high-density environment of proto-cluster cores may enhance their formation through the early formation of large DMHs and/or more frequent mergers \citep{Muldrew2018}. Trends similar to those seen in the SMFs of the cores have been found in several observational studies which focus on both global and local environments. At high redshift, $z\sim 2.5$, \citet{Shimakawa2018} have found that the SMF of $\mathrm{H \alpha}$ emitters in the densest regions of a proto-cluster is more top-heavy than that in less dense regions in terms of a clear excess of high-mass galaxies ($\log(M_{*}/M_{\sun})>10.5$), although they have not been able to find a clear difference at the low-mass end. Together with the evidence that high-mass galaxies in the densest regions are more actively star-forming, they have concluded that the formation of massive galaxies has been accelerated in the densest parts of a proto-cluster. Our SMF for star-forming galaxies in the cores is qualitatively consistent with these results, implying an enhancement of the star formation of high-mass galaxies in the cores. Differences in the SMFs between mature clusters and fields have also been reported at $z\lesssim 1.5$. \citet{VanderBurg2013,VanderBurg2018} have shown that cluster galaxies have more top-heavy SMFs at $0.5<z<1$ primarily because of a shallower low-mass end slope, especially for quiescent galaxies. 
\citet{Nantais2016} have reported that in clusters at $z\sim 1.5$, the SMF of quiescent galaxies with low stellar masses ($\log(M_{*}/M_{\sun})\lesssim10.5$) contributes roughly 50\% to the total SMF, compared with only 20\% in the field. They interpret this as environmental quenching of low-mass galaxies, although they do not find a clear difference in the shape of the SMF of total galaxies between clusters and fields. At $\log(M_{*}/M_{\sun})\sim 10$, our SMF of quiescent galaxies in the cores shows higher $\Phi_\mathrm{core}/\Phi_\mathrm{field}$ values compared to more massive bins. This may imply that cores at $z\sim 2$ are similar to mature clusters at $z\lesssim 1.5$ in terms of a higher fraction of low-mass quiescent galaxies than in fields. We should note that the SMF of quiescent galaxies has negative values at the lowest-mass bins ($\log(M_{*}/M_{\sun})\lesssim 9.5$), possibly due to small-number statistics. The effect of local environment on galaxy formation has been studied by many papers. Using the Bayesian-motivated $N$-th nearest neighbour as an environment measure \citep{Cowan2008}, \citet{Kawinwanichakij2017} have shown that quiescent galaxies are likely to reside in denser environments than star-forming ones even at fixed stellar mass at $0.5<z<2.0$. The same trend has also been reported by \citet{Malavasi2016}, who used the number density of galaxies within a cylindrical region as an environment measure. These results are qualitatively consistent with our results. At lower redshift ($0.55<z<1.3$), \citet{Tomczak2017} have found a strong dependence of the shape of the SMFs on local environment. They have used Voronoi tessellation \citep{Darvish2015} as an environment measure and shown that galaxies in denser environments have more top-heavy SMFs than those in the field for both star-forming and quiescent galaxies, which is similar to what we find. With the same environmental measure as \citet{Kawinwanichakij2017}, \citet{Papovich2018} have argued that there are no major differences in the shape of the SMF for either star-forming or quiescent galaxies between high- and low-density environments at $1.5<z<2.0$. However, they have also pointed out that the SMFs of star-forming galaxies at $\log(M_{*}/M_{\sun})\sim 10.5$ in dense environments show an excess, which is also seen in our SMF. These past studies are broadly consistent with ours when the differences in the definition of local environment are taken into account. Note that proto-cluster cores are extremely high-density regions where the 3D galaxy density is two orders of magnitude higher than the cosmic average, as found in the comparison of the SMFs between cores and fields. In any case, proto-cluster cores are the most promising places to detect environmental dependence in the early universe. \begin{figure*} \includegraphics[width=2\columnwidth]{./SMF_comp_unitmass_200200121.pdf} \caption{\textit{Top panel}: The stellar mass functions (SMFs) of galaxies in the cores (left), around the most massive ($\log(M_{*}/M_{\sun})\geq 11.3$) isolated galaxies (middle), and around massive ($\log(M_{*}/M_{\sun})\geq 11.0$) isolated galaxies (right). Detection incompleteness has not been corrected. Grey, blue and red colours mean the SMFs of all galaxies, star-forming galaxies, and quiescent galaxies, respectively. Grey shaded regions show the mass range suffering from selection bias. \textit{Bottom panel}: Same as top panels but divided by the field SMFs and normalised by total mass using Equation~\eqref{SMF_unitmass}. A black dotted line indicates unity. 
This normalisation is only valid for the cores. Arrows indicate negative values. (A colour version of this figure is available in the online journal.)} \label{fig:SMF} \end{figure*} \subsection{The quiescent fraction} \label{sec:fq} We measure the quiescent fraction for galaxies in the cores. Here, the quiescent fraction $f_\mathrm{q}$ is defined as \begin{equation} f_\mathrm{q}=\frac{N_\mathrm{q}}{N_\mathrm{total}}, \end{equation} where $N_\mathrm{total}$ and $N_\mathrm{q}$ are the numbers of total and quiescent galaxies, respectively. As in the previous section, we also compute $f_\mathrm{q}$ for galaxies around the two classes of isolated MGs for comparison. The results are shown in Fig.~\ref{fig:fq}. It is found that all three environments have a higher quiescent fraction than the field. In each panel, the quiescent fraction of member galaxies is higher than in the field at $\log(M_{*}/M_{\sun})\lesssim 10.6$ while it is almost the same at $\log(M_{*}/M_{\sun})\gtrsim 10.6$. This probably reflects the fact that satellite galaxies around massive centrals are more likely to be quenched than isolated galaxies even at $z\sim 2$ \citep{Kawinwanichakij2016,Ji2018}. Interestingly, $f_\mathrm{q}$ in the cores is higher than in the other two environments. In Table \ref{tab:fq_wholemass}, we summarise $f_\mathrm{q}$ in the whole mass range below $10^{11}\, M_{\sun}$, where galaxy number counts are not directly affected by selection bias. The $f_\mathrm{q}$ of galaxies in the cores is $17_{-4}^{+4}\%$, which is $3.3_{-0.8}^{+0.8}$ times higher than that of field galaxies, while that of galaxies around isolated MGs with $\log(M_{*}/M_{\sun})\geq11.0\,(11.3)$ is $11_{-2(4)}^{+2(4)}\%$, which is $2.4_{-0.4\,(0.8)}^{+0.4\,(0.8)}$ times higher than field galaxies. This suggests that proto-cluster cores are more evolved systems than DMHs hosting isolated MGs. The value of $f_\mathrm{q}$ has been examined for several individual clusters at $1.6<z<1.8$, and values much higher than ours have been reported: $f_\mathrm{q}\gtrsim 30\%$ at $\log(M_{*}/M_{\sun})\lesssim10.5$ and $f_\mathrm{q}\gtrsim 80\%$ at $\log(M_{*}/M_{\sun})\gtrsim10.5$ (\citealp{Newman2014,Cooke2016,Lee-Brown2017}). These differences may partly come from the fact that the clusters in these previous studies are more massive ($M_\mathrm{DMH}\gtrsim 8\times10^{13}\, M_{\sun}$) and thus more evolved systems than our cores. Part of the differences may also be due to cluster-to-cluster variation because these studies are each based on only a single cluster. We then calculate the environmental quenching efficiency (QE): \begin{equation} QE=\frac{f_\mathrm{q}^\mathrm{member}-f_\mathrm{q}^\mathrm{field}}{1-f_\mathrm{q}^\mathrm{field}} = 1-\frac{f_\mathrm{sf}^\mathrm{member}}{f_\mathrm{sf}^\mathrm{field}}, \end{equation} where $f_\mathrm{q}^\mathrm{member}$ and $f_\mathrm{q}^\mathrm{field}$ ($f_\mathrm{sf}^\mathrm{member}$ and $f_\mathrm{sf}^\mathrm{field}$) are the quiescent (star-forming) fraction of galaxies in the environment in question and in the field. This quantity describes what fraction of star-forming galaxies in the field would be additionally quenched if they were in the given environment. The QE for the cores, $0.13_{-0.04}^{+0.04}$, is higher than that for the isolated MGs with $\log(M_{*}/M_{\sun})\geq11.0\,(11.3)$, $0.07_{-0.02\,(0.04)}^{+0.02\,(0.04)}$. In Fig.~\ref{fig:comp_QE}, we plot the QE measurement of the cores (blue pentagon) together with those of known clusters in the literature. 
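As a quick consistency check, inserting the whole-mass-range fractions of Table~\ref{tab:fq_wholemass} for the cores into the definition above gives
\begin{equation*}
QE = \frac{0.17-0.052}{1-0.052} \approx 0.12,
\end{equation*}
which matches the quoted $0.13_{-0.04}^{+0.04}$ within the rounding of $f_\mathrm{q}$.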
\citet{Quadri2012} and \citet{Cooke2016} have each calculated the QE for a single cluster at $z\sim 1.6$, using galaxies with $\log(M_{*}/M_{\sun})\geq10$ and $10\leq \log(M_{*}/M_{\sun})\leq 10.7$, respectively. \citet{Nantais2017}, \citet{Rodriguez2019} and \citet{Balogh2014} have measured QEs using 14, 24 and 10 clusters at various redshifts. \citet{Nantais2017} and \citet{Rodriguez2019} have used galaxies with $\log(M_{*}/M_{\sun})\geq10.3$ and $\log(M_{*}/M_{\sun})\geq10.0$, respectively. For the QE of \citet{Balogh2014}, we plot the result for $\log(M_{*}/M_{\sun})=10.5$. To classify galaxies into star-forming or quiescent, all the above studies have used a colour-colour diagram based on \citet{Williams2009}. \citet{Contini2020} have calculated the QE for clusters in an analytic galaxy formation model. They define clusters as DMHs with $\log(M_\mathrm{DMH}/M_{\sun})>14.2$, and use galaxies with $\log(M_{*}/M_{\sun})\geq 9.5$. They define quiescent galaxies as those with a lower specific star formation rate than the inverse of the Hubble time. As a general trend, the QE becomes lower with increasing redshift. A qualitatively similar trend has been found for the QE of galaxies in locally dense environments \citep{Peng2010,Kawinwanichakij2017,Chartab2020}. One needs to be careful when comparing individual QE values directly, because the QE data in Fig.~\ref{fig:comp_QE} are heterogeneous in terms of the identification of clusters, the selection method of galaxies, and the stellar mass range used to calculate QEs. For a detailed comparison, we focus on the result of \citet{Nantais2017} shown by grey stars. They have found that the QE changes dramatically around $z\sim 1.5$, from $QE\sim 0.16$ at $z\sim 1.6$ to $QE\sim 0.62$ at $z\sim 1.3$. To compare the QE for the cores with those obtained by \citet{Nantais2017}, we calculate it again by using galaxies in the mass range of $\log(M_{*}/M_{\sun})>10.3$ (orange diamond). We find that the QE of the cores is positive, meaning that some mechanisms of environmental quenching are already at work in $z\sim 2$ cores. In addition, the QE of the cores is almost the same as that of the mature clusters in \citet{Nantais2017} at $z\sim 1.6$, although the DMH mass of the cores is one order of magnitude smaller than those of the $z\sim 1.6$ clusters. This result supports a scenario in which cluster environments do not quench galaxies significantly until $z\sim 1.5$, when a whole proto-cluster region starts to collapse, although excess quenching is already seen in cores. We note that at $z\sim 1.6$ the descendant mass of the cores does not reach $10^{14}\, M_{\sun}$, meaning that our cores may not be the progenitors of the $z\sim 1.6$ clusters. 
\begin{table} \centering \caption{The quiescent fraction ($f_\mathrm{q}$) and the environmental quenching efficiency (QE) of member galaxies in cores and around two classes of massive isolated galaxies, and those of corresponding field galaxies.} \label{tab:fq_wholemass} \begin{tabular}{lcccc} \hline objects & $f_\mathrm{q}^\mathrm{member}$ & $f_\mathrm{q}^\mathrm{field}$ & $f_\mathrm{q}^\mathrm{member}$/$f_\mathrm{q}^\mathrm{field}$ & $QE$ \\ \hline core & $0.17_{-0.04}^{+0.04}$ & $0.052_{-0.001}^{+0.001}$ & $3.3_{-0.8}^{+0.8}$ & $0.13_{-0.04}^{+0.04}$ \\ [6pt] iso\_{11.3}$^a$ & $0.11_{-0.04}^{+0.04}$ & $0.045_{-0.001}^{+0.001}$ & $2.4_{-0.8}^{+0.8}$ & $0.07_{-0.04}^{+0.04}$ \\ [6pt] iso\_{11.0}$^b$ & $0.11_{-0.02}^{+0.02}$ & $0.045_{-0.001}^{+0.001}$ & $2.4_{-0.4}^{+0.4}$ & $0.07_{-0.02}^{+0.02}$ \\ \hline \end{tabular} \begin{tablenotes}[normal] \item \textit{Notes.} $^a$Isolated MGs ($\log(M_{*}/M_{\sun})\geq11.3$). $^b$Isolated MGs ($\log(M_{*}/M_{\sun})\geq11.0$). In the calculation, we exclude galaxies with $\log(M_{*}/M_{\odot})\geq 11$ to avoid possible selection biases. \end{tablenotes} \end{table} \begin{figure*} \includegraphics[width=2\columnwidth]{./fq_comp_20200117.pdf} \caption{The quiescent fraction ($f_\mathrm{q}$) in the cores (left), around the most massive ($\log(M_{*}/M_{\sun})\geq 11.3$) isolated galaxies (middle), and around massive ($\log(M_{*}/M_{\sun})\geq 11.0$) isolated galaxies (right) plotted as blue symbols. The $f_\mathrm{q}$ of field galaxies is also plotted in each panel (grey symbols). In the mass range of $\log(M_{*}/M_{\sun})>11$, which is coloured in grey, $f_\mathrm{q}$ is affected by selection bias. An arrow indicates $f_\mathrm{q}<0$ due to field subtraction. (A colour version of this figure is available in the online journal.)} \label{fig:fq} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{./QE_comp_20200507.pdf} \caption{Environmental quenching efficiency (QE), defined as $QE=(f_\mathrm{q}^\mathrm{member}-f_\mathrm{q}^\mathrm{field})/(1-f_\mathrm{q}^\mathrm{field})$, as a function of redshift. A blue pentagon and an orange diamond are the QEs of the cores calculated for galaxies with $9.0<\log(M_{*}/M_{\sun})<11$ and $\log(M_{*}/M_{\sun})>10.3$, respectively. The other grey symbols are QEs in the literature. Stars, upward triangles, a downward triangle, an open circle and an open square are the QEs for cluster environments presented in \citet{Nantais2017,Rodriguez2019,Balogh2014,Cooke2016,Quadri2012}, respectively. Open symbols indicate QEs for individual clusters. A grey dashed line shows the QE calculated for clusters in an analytic galaxy formation model \citep{Contini2020}. (A colour version of this figure is available in the online journal.)} \label{fig:comp_QE} \end{figure} \section{Summary and Conclusions} We have searched for proto-cluster cores at $z\sim 2$ in $\sim 1.5\, \mathrm{deg}^{2}$ of the COSMOS field by using pairs of MGs ($\log(M_{*}/M_{\sun})\geq11$) as tracers, and examined properties of member galaxies in the cores. The main results are as follows. \begin{enumerate} \item We find 75 pairs of MGs whose separations are $<30\arcsec$, among which 54\% are estimated to be real. \item A clustering analysis finds that the average mass of DMHs hosting the pairs is $2.6_{-0.8}^{+0.9}\times 10^{13}\, M_\mathrm{\sun}$, and $4.0^{+1.8}_{-1.5}\times 10^{13}\, M_\mathrm{\sun}$ after contamination correction. 
Using the extended Press-Schechter model, we also calculate the descendant DMH mass and confirm that the pairs are typically progenitors of Virgo- or Fornax-like clusters. \item The IllustrisTNG simulation shows that pairs of MGs are good tracers of DMHs which are massive enough to be regarded as proto-cluster cores. At a fixed stellar mass, the median mass of DMHs which host pairs is larger by 0.15 to 0.3 dex than those of DMHs which do not. We also find that more than 50\% of DMHs more massive than $2.6\times 10^{13}\, M_{\sun}$ host pairs, which is consistent with the completeness estimated from the halo mass function. Since the pair-host fraction is a monotonically increasing function of $M_\mathrm{DMH}$, the most massive DMHs can be traced by pairs at $z=2$. We trace merger trees from $z=2$ to $z=0$ to identify descendants of pair-host haloes. We find that 100 independent DMHs which host pairs at $z=2$ become 89 independent DMHs at $z=0$. At $z=0$, the numbers of descendants of pair-host haloes (and all haloes in the simulation box) classified as Fornax-like, Virgo-like and Coma-like clusters are 38 (235), 22 (42), 3 (3), respectively, resulting in $16\%$, $52\%$ and $100\%$ completeness for each type. This suggests that pairs of MGs can trace the progenitors of both the most massive clusters and less massive ones. \item The member galaxies of the cores have a more top-heavy SMF than the field except for quiescent galaxies. When normalised by total mass, the ratio of SMFs between cores and the field is below unity at $\log(M_{*}/M_{\sun})\lesssim 10$ and above unity at $\log(M_{*}/M_{\sun})\gtrsim 10$. The low ratio at $\log(M_{*}/M_{\sun})\lesssim 10$, if real, may indicate that low-mass galaxies in cores are more likely than field galaxies to be prevented from forming stars, or to be destroyed by mergers and/or tidal disruption. On the other hand, the star formation of high-mass galaxies may be enhanced by the early formation of massive DMHs and/or more frequent mergers. These trends are similar to those of the SMFs in previous studies focusing on known (proto-)clusters and local high-density regions. \item The quiescent fraction of the member galaxies in the cores is higher than that of the field at $\log(M_{*}/M_{\sun})\lesssim 10.6$. The quiescent fraction averaged over the whole mass range $9<\log(M_{*}/M_{\sun})<11$ is $0.17_{-0.04}^{+0.04}$, which is three times higher than that of the field. We also calculate the environmental quenching efficiency (QE) and find that the QE in the cores is comparable to that of mature clusters at $z\sim 1.6$ in the literature. This supports a scenario in which cluster environments do not quench galaxies significantly until $z\sim 1.5$, when a whole proto-cluster region starts to collapse, although excess quenching is already seen in cores. \end{enumerate} We have statistically shown that proto-cluster cores at $z\sim 2$ have similar properties to mature clusters at $z\lesssim 1.5$ in terms of an excess of high-mass galaxies and a higher fraction of low-mass quiescent galaxies. These results suggest that stellar mass assembly and quenching are accelerated as early as $z\sim 2$ in proto-cluster cores. To investigate other properties further, spectroscopic confirmation of the individual cores is needed. Our core sample presents good targets for spectroscopic surveys like the Subaru Prime Focus Spectrograph survey \citep{Takada2014}. If we derive precise redshifts of member galaxies, we can calculate individual DMH masses from their velocity dispersions. 
We can also probe star formation activity in detail with spectroscopic data, and thus trace the formation history of cluster galaxies \citep{Harikane2019}. The method presented in this paper can be applied to other survey data with stellar mass and photo-\textit{z} estimates. Therefore, by combining it with wide-field surveys like the Subaru Hyper Suprime-Cam survey (HSC-SSP), we can construct a much larger core sample over a wide redshift range. \section*{Acknowledgements} We would like to thank Drs. Haruka Kusakabe, Taku Okamura, Mr. Takahiro Sudoh and Ms. Hinako Goto for helpful comments and discussions. We would also like to thank the anonymous referee for very constructive comments. Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under ESO programme ID 179.A-2005 and on data products produced by TERAPIX and the Cambridge Astronomy Survey Unit on behalf of the UltraVISTA consortium. We acknowledge the team of the IllustrisTNG project (\url{https://www.tng-project.org/}). We use the following open source software packages for our analysis: \texttt{numpy} \citep{numpy:2011}, \texttt{pandas} \citep{pandas:2010}, \texttt{scipy} \citep{scipy:2001}, \texttt{astropy} \citep{astropy:2013,astropy:2018} and \texttt{matplotlib} \citep{matplotlib:2007}. RM acknowledges a Japan Society for the Promotion of Science (JSPS) Fellowship in Japan. This work is supported in part by JSPS KAKENHI Grant Numbers JP19K03924 (KS) and JP18J40088 (RM). \bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Describing a solid in terms of its magnetic properties requires the knowledge of an effective spin model which displays the same interesting physical properties as the many-electron Hamiltonian whose exact solution would give the complete description of the system. The determination of the form of the effective spin model and of the strength of the interactions between the constituent spins starting from the initial electronic model is, in general, a complicated many-body problem \cite{Auslender, Lichtenstein87, Stocks98, Katsnelson00, Katsnelson02, Bruno03, Udvardi03, Katsnelson04, Katsnelson10, Secchi13, Szilva13, Secchi15}. We have recently derived expressions for the parameters of the magnetic interactions within an extended (multi-orbital) Hubbard model \cite{Secchi15}, in the presence of arbitrary relativistic couplings affecting the electronic degrees of freedom (such as spin-orbit, magnetic anisotropy, Zeeman coupling with an external magnetic field). The formulas presented in Ref.\cite{Secchi15}, after neglecting the vertices of two-electron Green's functions, are expressed in terms of single-electron (but fully interacting) Green's functions $G$ and the single-electron (hopping) Hamiltonian $T$. The use of the representation via $T$ \cite{Katsnelson10, Secchi15} for computations related to real materials requires the additional step of a tight-binding parametrization, which is implemented only in some methods of electronic structure calculations. On the other hand, a presentation of the formulas in terms of Green's functions $G$ and self-energies $\Sigma$ would make them more suitable for implementation via Dynamical Mean Field Theory (DMFT) \cite{Metzner89, Georges96, Kotliar06}, since \emph{any} DMFT calculation deals with $G$ and $\Sigma$. Writing the parameters in a way that explicitly exhibits self-energies, analogous to what was done in Refs.\cite{Katsnelson00, Katsnelson02, Secchi13}, also allows to explicitly include the approximation of local self-energy, which is the key assumption of DMFT. We here present the adaptation of the formulas for the exchange tensor to this scheme. \section{Method and discussion} We consider the extended multi-orbital Hubbard Hamiltonian \cite{Kanamori63, Hubbard65, Kugel82, Lichtenstein98, Georges13, Secchi15}, \begin{align} \hat{H} = \sum_{o, \sigma, m} \sum_{o', \sigma', m'} \hat{\phi}^{\dagger}_{o, \sigma, m} T^{o, \sigma, m}_{o', \sigma', m'} \hat{\phi}^{o', \sigma', m'} + \hat{H}_V, \label{Hamiltonian} \end{align} where the field operator $\hat{\phi}^{\dagger}_{o , \sigma , m }$ creates an electron with quantum numbers $\lbrace o, \sigma, m \rbrace$: $o$ refers to a set of the orbital indices (for a basis of localized Wannier wave functions, these are the atom index $a$, the principal atomic quantum number $n$ and the angular momentum quantum number $l$: $o \equiv \lbrace a, n, l \rbrace$), while $\sigma \in \lbrace \uparrow, \downarrow \rbrace$ and $m \in \lbrace -l , -l + 1, \ldots, l \rbrace$ are the third components of the intrinsic-spin and orbital angular momenta, respectively. Local angular momenta are measured with respect to local reference frames, which depend on $o$ and might not be collinear \cite{Secchi15}. The single-particle Hamiltonian matrix $T^{o, \sigma, m}_{o', \sigma', m'}$ is completely arbitrary, so it can include any relativistic single-electron terms (Zeeman coupling, spin-orbit, magnetic anisotropies). 
The interaction Hamiltonian $\hat{H}_V$ is assumed to be rotationally invariant \cite{Secchi15}. The goal in Ref.\cite{Secchi15} was to map the model given by Eq.\eqref{Hamiltonian} onto an effective model of classical spins $\boldsymbol{e}_o$ including up to (arbitrary) quadratic interactions, with Hamiltonian \begin{align} H_{\mathrm{spin}} = \sum_o \boldsymbol{e}_o \cdot \boldsymbol{\mathcal{B}}_o + \frac{1}{2} \sum_{o, o'} \sum_{\alpha, \alpha'} e_{o ,\alpha} e_{o', \alpha'} \mathcal{H}_{o o'}^{\alpha \alpha'} , \label{spin model} \end{align} determined by the exchange tensor $\mathcal{H}_{o o'}^{\alpha \alpha'} = \mathcal{H}_{o' o}^{\alpha' \alpha}$ (here and in the following $\alpha$ and $\alpha'$ are used to denote the space coordinates, e.g. $x, y, z$) and the effective magnetic field $\boldsymbol{\mathcal{B}}_o$. It is convenient to decompose the exchange tensor into the three vectors $\boldsymbol{\mathcal{J}}_{o o'} = \boldsymbol{\mathcal{J}}_{o' o}$ (anisotropic exchange), $\boldsymbol{\mathcal{D}}_{o o'} = - \boldsymbol{\mathcal{D}}_{o' o}$ (Dzyaloshinskii-Moriya interaction), and $\boldsymbol{\mathcal{C}}_{o o'} = \boldsymbol{\mathcal{C}}_{o' o}$ (symmetric non-diagonal exchange), defined as \begin{align} \mathcal{J}^{\alpha}_{o o'} \equiv \mathcal{H}^{\alpha \alpha}_{o o'}, \quad \mathcal{D}^{\alpha}_{o o'} \equiv \frac{1}{2} \sum_{\alpha' \alpha''} \varepsilon^{\alpha \alpha' \alpha''} \mathcal{H}^{\alpha' \alpha''}_{o o'}, \quad \mathcal{C}^{\alpha}_{o o'} \equiv \frac{1}{2} \sum_{\alpha' \alpha''} \left| \varepsilon^{\alpha \alpha' \alpha''} \right| \mathcal{H}^{\alpha' \alpha''}_{o o'} , \end{align} where $\varepsilon^{\alpha \alpha' \alpha''}$ is the completely anti-symmetric tensor of rank 3. The Heisenberg model is obtained as the particular case in which $\mathcal{H}_{o o'}^{\alpha \alpha'} \equiv \delta^{\alpha \alpha'} \mathcal{J}_{o o'}$. To perform the mapping, in Ref.\cite{Secchi15} we have derived the response of the thermodynamic potential of the electronic system under small spatially-dependent rotations of the spin quantization axes associated with each orbital spinor denoted by $o$, up to second order in the rotation angles. The derivation of such response involves path integration over the fermionic fields after the introduction of auxiliary bosonic degrees of freedom which express the amplitudes of rotations from an initial spin configuration; the coefficients of the interactions between the remaining bosons are put in correspondence with the parameters of the spin model \eqref{spin model} by imposing that the thermodynamic potential of the spin system after the spin rotations is equal to that of the electrons. Excluding the vertex contributions, the parameters of the spin model are expressed in terms of single-electron Green's functions (which of course include interaction effects) and the single-particle part of the electronic Hamiltonian, $T$. This procedure is similar to the one previously adopted in Refs.\cite{Katsnelson00, Katsnelson02} for the case of quenched orbital moments, but in Ref.\cite{Secchi15} we have considered rotations of the local total spins $\hat{\boldsymbol{S}}_o = \hat{\boldsymbol{l}}_o + \hat{\boldsymbol{s}}_o$, where $\hat{\boldsymbol{l}}_o$ and $\hat{\boldsymbol{s}}_o$ are, respectively, the orbital and intrinsic angular momenta associated with the states $o$. 
More precisely, we have considered rotations in the space of the single-particle eigenfunctions of $\hat{\boldsymbol{S}}_o^2$ and $\hat{S}^z_o$, analogously to Ref.\cite{Katsnelson10}, while in Refs.\cite{Katsnelson00, Katsnelson02} the rotations affected the space of eigenfunctions of $\hat{\boldsymbol{s}}_o^2$ and $\hat{s}^z_o$. This allowed us to obtain formulas for the exchange tensor that can be separated into contributions coming from the interactions between spin-spin, orbital-orbital, or spin-orbital degrees of freedom of the electrons. It should be noted that this possibility is not applicable within Density Functional Theory (DFT) formulations, where observables are expressed in terms of the charge density and the intrinsic-spin density. The possibility of rotating local total spins is related to the representation of the electronic Hamiltonian in terms of localized wave functions, which implies a higher number of degrees of freedom with respect to DFT (related to the fact that the set of localized states would be over-complete in theory, or not even complete in practice due to truncation). The computation of the magnetic parameters via DMFT is greatly simplified if they are formulated in terms of single-particle Green's functions and self-energies $\Sigma$ in magnetically ordered states, since this avoids the initial step of a tight-binding parameterization of the single-electron Hamiltonian $T$. To remove $T$ and introduce $\Sigma$, we use the equations of motion for Matsubara Green's functions (Dyson equations), which we write in general matrix notation as \begin{align} & \left( \omega - \mathrm{i} \mu \right) G(\mathrm{i} \omega) + \mathrm{i} T \cdot G(\mathrm{i} \omega) = 1 - \Sigma(\mathrm{i} \omega) \cdot G(\mathrm{i} \omega) , \nonumber \\ & \left( \omega - \mathrm{i} \mu \right) G(\mathrm{i} \omega) + \mathrm{i} G(\mathrm{i} \omega) \cdot T = 1 - G(\mathrm{i} \omega) \cdot \Sigma(\mathrm{i} \omega) . \label{Matsubara} \end{align} These equations hold for the Matsubara Green's functions defined according to the following convention: \begin{align} G^1_2(\tau ) \equiv - \mathrm{i} \left< \mathcal{T} \hat{\psi}^1(\tau) \, \hat{\psi}^{\dagger}_2 \right> \equiv \frac{1}{\beta} \sum_{\omega} G^1_2(\mathrm{i} \omega) \mathrm{e}^{- \mathrm{i} \omega \tau } , \end{align} where $\omega = (2 n + 1) \pi / \beta$ is a fermionic Matsubara frequency. As a particular case, the single-electron density matrix is given by \begin{align} \rho \equiv - \mathrm{i} \, G(\tau = 0^-) = - \mathrm{i} \, \frac{1}{\beta} \sum_{\omega} \mathrm{e}^{\mathrm{i} \omega 0^+} G(\mathrm{i} \omega) . \end{align} We now have to distinguish between the magnetic parameters that can be computed from the second-order response in the rotation angles and those which are computed from the first-order response. 
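Before doing so, we note a practical point about evaluating Matsubara sums such as the one defining $\rho$: the convergence factor $\mathrm{e}^{\mathrm{i} \omega 0^{+}}$ makes a naive truncated sum converge slowly, which is commonly handled by subtracting a high-frequency tail whose sum is known analytically. A minimal \texttt{Python} illustration for a single level, written in the standard convention $G(\mathrm{i}\omega) = 1/(\mathrm{i}\omega - \varepsilon)$ rather than the one used above, is:
\begin{verbatim}
import numpy as np

def occupation(eps, beta=10.0, n_max=2000):
    """Occupation of a level at energy eps from a truncated Matsubara
    sum, regularised by subtracting the 1/(i w) tail (whose sum with
    the convergence factor equals 1/2)."""
    n = np.arange(-n_max, n_max)
    iw = 1j * (2 * n + 1) * np.pi / beta  # fermionic Matsubara frequencies
    g = 1.0 / (iw - eps)
    return (g - 1.0 / iw).sum().real / beta + 0.5

# check against the Fermi function
eps, beta = 0.3, 10.0
assert abs(occupation(eps, beta) - 1.0 / (np.exp(beta * eps) + 1.0)) < 1e-3
\end{verbatim}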
From Ref.\cite{Secchi15}, we note that the former terms can all be written in terms of the following quantity: \begin{align} F^{\mathrm{XY}}_{o \alpha, o' \alpha'} \equiv \, & - \delta_{ o o' } \frac{1}{2} \mathrm{Tr}_{m, \sigma} \left( \left\{ S_{o \alpha}^{\mathrm{X}} ; S_{o \alpha'}^{\mathrm{Y}} \right\} \cdot \left\{ \rho ; T \right\}^o_o \right) \nonumber \\ & + \mathrm{Tr}_{m, \sigma} \left( S^{\mathrm{X}}_{o \alpha} \cdot T^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot \rho^{o'}_o + S^{\mathrm{Y}}_{o' \alpha'} \cdot T^{o'}_o \cdot S^{\mathrm{X}}_{o \alpha} \cdot \rho^o_{o'} \right) \nonumber \\ & + \frac{1}{\beta} \sum_{\omega} \mathrm{e}^{\mathrm{i} \omega 0^+} \mathrm{Tr}_{m, \sigma} \Big\{ S^{\mathrm{X}}_{o \alpha} \cdot \left[ G(\mathrm{i} \omega ) \cdot T \right]^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot \left[ G(\mathrm{i} \omega ) \cdot T \right]_o^{o'} \nonumber \\ & - S^{\mathrm{X}}_{o \alpha} \cdot G(\mathrm{i} \omega )^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot \left[ T \cdot G(\mathrm{i} \omega ) \cdot T \right]_o^{o'} \nonumber \\ & - S^{\mathrm{X}}_{o \alpha} \cdot \left[ T \cdot G(\mathrm{i} \omega ) \cdot T \right]^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot G(\mathrm{i} \omega )_o^{o'} \nonumber \\ & + S^{\mathrm{X}}_{o \alpha} \cdot \left[ T \cdot G(\mathrm{i} \omega ) \right]^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot \left[ T \cdot G(\mathrm{i} \omega ) \right]_o^{o'} \Big\} , \label{F} \end{align} where X, Y $\in \lbrace \mathrm{spin}, \mathrm{orb} \rbrace$ refer to either spin- or orbital-related terms, that is, \begin{align} S^{\mathrm{spin}}_{o \alpha} \equiv s_{o \alpha} \equiv \frac{1}{2} \sigma_{o \alpha}, \quad \quad S^{\mathrm{orb}}_{o \alpha} \equiv l_{o \alpha}, \end{align} where $s_{o \alpha}$ is an intrinsic spin matrix ($\sigma_{o \alpha}$ is a Pauli matrix), while $l_{o \alpha}$ is an orbital angular momentum matrix. In Eq.\eqref{F} we have used the notation $\left\{ A ; B \right\} \equiv A \cdot B + B \cdot A$ to denote the anti-commutator of the matrices $A$ and $B$; in the following we will also make use of $\left[ A ; B \right] \equiv A \cdot B - B \cdot A$ to denote the commutator. From the Dyson equations \eqref{Matsubara}, we have (the frequency arguments of Green's functions $G$ and self-energies $\Sigma$ are implicit): \begin{align} & T \cdot G = - \mathrm{i} \left[ 1 - \left( \omega - \mathrm{i} \mu \right) G - \Sigma \cdot G \right] , \quad G \cdot T = - \mathrm{i} \left[ 1 - \left( \omega - \mathrm{i} \mu \right) G - G \cdot \Sigma \right] , \nonumber \\ & T \cdot G \cdot T = - \mathrm{i} T + \Sigma - \Sigma \cdot G \cdot \Sigma + \left( \omega - \mathrm{i} \mu \right) \left( 1 - \Sigma \cdot G - G \cdot \Sigma \right) - \left( \omega - \mathrm{i} \mu \right)^2 G , \nonumber \\ & \left[ T ; \rho \right] = \mathrm{Tr}_{\omega} \left[ \Sigma ; G \right] , \label{Matsubara 3} \end{align} where we have introduced the notation \begin{align} \frac{1}{\beta} \sum_{\omega} \mathrm{e}^{\mathrm{i} \omega 0^+} f(\mathrm{i} \omega) \cdot g(\mathrm{i} \omega) \equiv \mathrm{Tr}_{\omega} (f \cdot g).
\end{align} Applying Eqs.\eqref{Matsubara 3} to Eq.\eqref{F}, we obtain \begin{align} F^{\mathrm{XY}}_{o \alpha, o' \alpha'} = & \frac{1}{2} \delta_{o o'} \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \Big( \left\{ S^{\mathrm{X}}_{o \alpha} ; S^{\mathrm{Y}}_{o \alpha'} \right\} \cdot \left\{ \Sigma ; G \right\}_o^{o} \Big) \nonumber \\ & - \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \Big\{ S^{\mathrm{X}}_{o \alpha} \cdot \left[ G \cdot \Sigma \right]^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot \left[ G \cdot \Sigma \right]_o^{o'} \nonumber \\ & + S^{\mathrm{X}}_{o \alpha} \cdot \left[ \Sigma \cdot G \right] ^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot \left[ \Sigma \cdot G \right] _o^{o'} - S^{\mathrm{X}}_{o \alpha} \cdot \left[ \Sigma \cdot G \cdot \Sigma \right]^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot G_o^{o'} \nonumber \\ & - S^{\mathrm{X}}_{o \alpha} \cdot G^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot \left[ \Sigma \cdot G \cdot \Sigma \right]_o^{o'} \Big\} \nonumber \\ & - \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \Big( S^{\mathrm{X}}_{o \alpha} \cdot \Sigma^o_{o'} \cdot S^{\mathrm{Y}}_{o' \alpha'} \cdot G_o^{o'} + S^{\mathrm{Y}}_{o' \alpha'} \cdot \Sigma_o^{o'} \cdot S^{\mathrm{X}}_{o \alpha} \cdot G^o_{o'} \Big) . \label{before DMFT} \end{align} We then consider the magnetic parameters determined from the first-order response. From Eqs.(68) of Ref.\cite{Secchi15}, we see that these are $\mathcal{B}_o^x$, $\mathcal{B}_o^y$, $\mathcal{D}_{o o'}^x$, $\mathcal{D}_{o o'}^y$, $\mathcal{C}_{o o'}^x$, and $\mathcal{C}_{o o'}^y$. The first-order response term (in the RHS of Eqs.(68) of Ref.\cite{Secchi15}) can be written as \begin{align} \mathcal{V}^{\mathrm{X}}_{o \alpha} & = \mathrm{i} \, \mathrm{Tr}_{m, \sigma} \left( S^{\mathrm{X}}_{o \alpha} \cdot \left[ \rho ; T \right]^o_o \right) = \mathrm{i} \, \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \left( S^{\mathrm{X}}_{o \alpha} \cdot \left[ G ; \Sigma \right]^o_o \right) \nonumber \\ & = \mathrm{i} \, \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \sum_{o'} \left[ S^{\mathrm{X}}_{o \alpha} \cdot \left( G^o_{o'} \cdot \Sigma^{o'}_o - \Sigma^o_{o'} \cdot G^{o'}_o \right) \right] . \label{first} \end{align} By separating local and non-local terms, as well as taking into account the symmetries of the latter, it is then possible to identify the remaining parameters of the spin model. It should be noted that the parameters obtained with this procedure are \emph{not equivalent} to those expressed in terms of the single-electron Hamiltonian $T$ in Refs.\cite{Secchi15} and \cite{Katsnelson10}. Both sets of definitions respect the defining equations (68) of Ref.\cite{Secchi15}, but the present ones are more directly applicable in a DMFT implementation. In the next section we list the resulting formulas for the magnetic parameters.
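As an elementary illustration of the frequency trace $\mathrm{Tr}_{\omega}$ and of the role of the convergence factor $\mathrm{e}^{\mathrm{i}\omega 0^+}$, the following minimal numerical sketch (in Python, written in the standard textbook convention $G(\mathrm{i}\omega)=1/(\mathrm{i}\omega-\epsilon)$ for a single level, rather than in the convention adopted above) recovers the Fermi occupation from a truncated Matsubara sum. The $0^+$ regularization is implemented by summing the slowly decaying $1/(\mathrm{i}\omega)$ tail analytically (its regularized sum is $1/2$) and the remainder, which decays as $1/\omega^2$, numerically.
\begin{verbatim}
import numpy as np

# Single fermionic level at energy eps, inverse temperature beta.
beta, eps, nmax = 10.0, 0.3, 100000
n = np.arange(-nmax, nmax)               # symmetric range of indices
iw = 1j * (2 * n + 1) * np.pi / beta     # fermionic frequencies i*omega_n

# (1/beta) sum_w e^{i w 0+} G(i w) with G = 1/(i w - eps):
# the 1/(i w) tail is replaced by its exact regularized sum 1/2,
# and the remainder (decaying as 1/w^2) is summed directly.
occ = 0.5 + np.sum(1.0 / (iw - eps) - 1.0 / iw).real / beta

print(occ)                               # ~ 0.047426
print(1.0 / (np.exp(beta * eps) + 1.0))  # Fermi function, same value
\end{verbatim}
In production implementations, the same tail handling is applied to the high-frequency expansions of $G$ and $\Sigma$ entering the traces of Eq.\eqref{before DMFT}.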
\section{Results} \subsection{Dzyaloshinskii-Moriya interaction} The Dzyaloshinskii-Moriya parameters, given by the vector $\boldsymbol{\mathcal{D}}_{o o'}$, are written as \begin{align} & \left( \mathcal{D}_{o o'}^{x} \right)^{\mathrm{spin}} = \frac{ \mathrm{i} }{2 } \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \Big[ s_{o x} \cdot \left( G^o_{o'} \cdot \Sigma^{o'}_o - \Sigma^o_{o'} \cdot G^{o'}_o \right) \nonumber \\ & \quad \quad \quad \quad\quad - s_{o' x} \cdot \left( G^{o'}_o \cdot \Sigma^o_{o'} - \Sigma^{o'}_o \cdot G^o_{o'} \right) \Big] , \nonumber \\ & \left( \mathcal{D}_{o o'}^{x} \right)^{\mathrm{orb}} = \frac{ \mathrm{i} }{2 } \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \Big[ l_{o x} \cdot \left( G^o_{o'} \cdot \Sigma^{o'}_o - \Sigma^o_{o'} \cdot G^{o'}_o \right) \nonumber \\ & \quad \quad \quad \quad\quad - l_{o' x} \cdot \left( G^{o'}_o \cdot \Sigma^o_{o'} - \Sigma^{o'}_o \cdot G^o_{o'} \right) \Big] , \end{align} \begin{align} & \left( \mathcal{D}_{o o'}^{y} \right)^{\mathrm{spin}} = \frac{ \mathrm{i} }{2 } \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \Big[ s_{o y} \cdot \left( G^o_{o'} \cdot \Sigma^{o'}_o - \Sigma^o_{o'} \cdot G^{o'}_o \right) \nonumber \\ & \quad \quad \quad \quad\quad - s_{o' y} \cdot \left( G^{o'}_o \cdot \Sigma^o_{o'} - \Sigma^{o'}_o \cdot G^o_{o'} \right) \Big] , \nonumber \\ & \left( \mathcal{D}_{o o'}^{y} \right)^{\mathrm{orb}} = \frac{ \mathrm{i} }{2 } \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \Big[ l_{o y} \cdot \left( G^o_{o'} \cdot \Sigma^{o'}_o - \Sigma^o_{o'} \cdot G^{o'}_o \right) \nonumber \\ & \quad \quad \quad \quad\quad - l_{o' y} \cdot \left( G^{o'}_o \cdot \Sigma^o_{o'} - \Sigma^{o'}_o \cdot G^o_{o'} \right) \Big] , \end{align} \begin{align} & \left( \mathcal{D}_{o o'}^z \right)^{\mathrm{spin-spin}} = \frac{1 }{2} \! \left( F_{o x, o' y}^{\mathrm{spin}, \, \mathrm{spin}} - F_{o' x, o y}^{\mathrm{spin}, \, \mathrm{spin}} \right) , \nonumber \\ & \left( \mathcal{D}_{o o'}^z \right)^{\mathrm{orb-orb}} = \frac{1 }{2} \! \left( F_{o x, o' y}^{\mathrm{orb}, \, \mathrm{orb}} - F_{o' x, o y}^{\mathrm{orb}, \, \mathrm{orb}} \right) , \nonumber \\ & \left( \mathcal{D}_{o o'}^z \right)^{\mathrm{spin-orb}} = \frac{1 }{2} \! \left( F_{o x, o' y}^{\mathrm{spin}, \, \mathrm{orb}} + F_{o x, o' y}^{\mathrm{orb}, \, \mathrm{spin}} - F_{o' x, o y}^{\mathrm{spin}, \, \mathrm{orb}} - F_{o' x, o y}^{\mathrm{orb}, \, \mathrm{spin}} \right) . 
\end{align} \subsection{Symmetric off-diagonal interactions} The symmetric off-diagonal interaction parameters, given by the vector $\boldsymbol{\mathcal{C}}_{o o'}$ with $o \neq o'$, are written as \begin{align} & \left( \mathcal{C}_{o o'}^{x} \right)^{\mathrm{spin}} = \frac{ \mathrm{i} }{2 } \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \Big[ s_{o x} \cdot \left( G^o_{o'} \cdot \Sigma^{o'}_o - \Sigma^o_{o'} \cdot G^{o'}_o \right) \nonumber \\ & \quad \quad \quad \quad\quad + s_{o' x} \cdot \left( G^{o'}_o \cdot \Sigma^o_{o'} - \Sigma^{o'}_o \cdot G^o_{o'} \right) \Big] , \nonumber \\ & \left( \mathcal{C}_{o o'}^{x} \right)^{\mathrm{orb}} = \frac{ \mathrm{i} }{2 } \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \Big[ l_{o x} \cdot \left( G^o_{o'} \cdot \Sigma^{o'}_o - \Sigma^o_{o'} \cdot G^{o'}_o \right) \nonumber \\ & \quad \quad \quad \quad\quad + l_{o' x} \cdot \left( G^{o'}_o \cdot \Sigma^o_{o'} - \Sigma^{o'}_o \cdot G^o_{o'} \right) \Big] , \end{align} \begin{align} & \left( \mathcal{C}_{o o'}^{y} \right)^{\mathrm{spin}} = - \frac{ \mathrm{i} }{2 } \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \Big[ s_{o y} \cdot \left( G^o_{o'} \cdot \Sigma^{o'}_o - \Sigma^o_{o'} \cdot G^{o'}_o \right) \nonumber \\ & \quad \quad \quad \quad\quad + s_{o' y} \cdot \left( G^{o'}_o \cdot \Sigma^o_{o'} - \Sigma^{o'}_o \cdot G^o_{o'} \right) \Big] , \nonumber \\ & \left( \mathcal{C}_{o o'}^{y} \right)^{\mathrm{orb}} = - \frac{ \mathrm{i} }{2 } \mathrm{Tr}_{m, \sigma} \mathrm{Tr}_{\omega} \Big[ l_{o y} \cdot \left( G^o_{o'} \cdot \Sigma^{o'}_o - \Sigma^o_{o'} \cdot G^{o'}_o \right) \nonumber \\ & \quad \quad \quad \quad\quad + l_{o' y} \cdot \left( G^{o'}_o \cdot \Sigma^o_{o'} - \Sigma^{o'}_o \cdot G^o_{o'} \right) \Big] , \end{align} \begin{align} & \left( \mathcal{C}_{o o'}^z \right)^{\mathrm{spin-spin}} = - \frac{1 }{2} \left( F_{o x, o' y}^{\mathrm{spin}, \, \mathrm{spin}} + F_{o' x, o y}^{\mathrm{spin}, \, \mathrm{spin}} \right) , \nonumber \\ & \left( \mathcal{C}_{o o'}^z \right)^{\mathrm{orb-orb}} = - \frac{1 }{2} \left( F_{o x, o' y}^{\mathrm{orb}, \, \mathrm{orb}} + F_{o' x, o y}^{\mathrm{orb}, \, \mathrm{orb}} \right) , \nonumber \\ & \left( \mathcal{C}_{o o'}^z \right)^{\mathrm{spin-orb}} = - \frac{1 }{2} \left( F_{o x, o' y}^{\mathrm{spin}, \, \mathrm{orb}} + F_{o x, o' y}^{\mathrm{orb}, \, \mathrm{spin}} + F_{o' x, o y}^{\mathrm{spin}, \, \mathrm{orb}} + F_{o' x, o y}^{\mathrm{orb}, \, \mathrm{spin}} \right) . \end{align} \subsection{Local off-diagonal anisotropy} The local off-diagonal anisotropy is given by the vector $\boldsymbol{\mathcal{C}}_{o o}$.
We have \begin{align} & \left( \mathcal{C}_{o o}^z \right)^{\mathrm{spin-spin}} = - F_{o x, o y}^{\mathrm{spin,} \, \mathrm{spin}} , \nonumber \\ & \left( \mathcal{C}_{o o}^z \right)^{\mathrm{orb-orb}} = - F_{o x, o y}^{\mathrm{orb,} \, \mathrm{orb}} , \nonumber \\ & \left( \mathcal{C}_{o o}^z \right)^{\mathrm{spin-orb}} = - F_{o x, o y}^{\mathrm{spin,} \, \mathrm{orb}} - F_{o x, o y}^{\mathrm{orb,} \, \mathrm{spin}} , \end{align} \begin{align} & \left( \mathcal{C}_{oo}^x \right)^{\mathrm{spin}} = \mathrm{i} \, \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \left( s_{o x} \cdot \left[ G_o^o ; \Sigma_o^o \right] \right) - \left( \mathcal{B}_o^y \right)^{\mathrm{spin}} , \nonumber \\ & \left( \mathcal{C}_{oo}^x \right)^{\mathrm{orb}} = \mathrm{i} \, \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \left( l_{o x} \cdot \left[ G_o^o ; \Sigma^o_o\right] \right) - \left( \mathcal{B}_o^y \right)^{\mathrm{orb}} , \end{align} \begin{align} & \left( \mathcal{C}_{oo}^y \right)^{\mathrm{spin}} = - \mathrm{i} \, \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \left( s_{o y} \cdot \left[ G_o^o ; \Sigma^o_o\right] \right) - \left( \mathcal{B}_o^x \right)^{\mathrm{spin}} , \nonumber \\ & \left( \mathcal{C}_{oo}^y \right)^{\mathrm{orb}} = - \mathrm{i} \, \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \left( l_{o y} \cdot \left[ G_o^o ; \Sigma^o_o\right] \right) - \left( \mathcal{B}_o^x \right)^{\mathrm{orb}} , \end{align} where the components of the effective magnetic field are \begin{align} & \boldsymbol{\mathcal{B}}_o^{\mathrm{spin}} \equiv g_{1/2} \mu_{\mathrm{B}} \boldsymbol{B}_o \left[ \mathrm{Tr}_{ \sigma} \left( \boldsymbol{s}_{o } \mathrm{Tr}_{m } \rho^{o }_{o } \right) \cdot \boldsymbol{u}^z_o \right] , \nonumber \\ & \boldsymbol{\mathcal{B}}_o^{\mathrm{orb}} \equiv g_l \mu_{\mathrm{B}} \boldsymbol{B}_o \left[ \mathrm{Tr}_{m} \left( \boldsymbol{l}_{o } \mathrm{Tr}_{ \sigma} \rho^{o }_{o } \right) \cdot \boldsymbol{u}^z_o \right] , \label{B sep} \end{align} with $g_{1/2}$ and $g_l$ being the intrinsic-spin and orbital $g$-factors, respectively, and $\boldsymbol{B}_o$ the value of the external magnetic field acting at the position of the orbitals $o$. \subsection{Anisotropic exchange interactions} \label{exch sep} The anisotropic exchange parameters are given by the vector $\boldsymbol{\mathcal{J}}_{o o'}$ with $o \neq o'$. They are written as \begin{align} & \left( \mathcal{J}^x_{o o'} \right)^{\mathrm{spin-spin}} = F_{o y, o' y}^{\mathrm{spin}, \, \mathrm{spin}} , \nonumber \\ & \left( \mathcal{J}^x_{o o'} \right)^{\mathrm{orb-orb}} = F_{o y, o' y}^{\mathrm{orb}, \, \mathrm{orb}} , \nonumber \\ & \left( \mathcal{J}^x_{o o'} \right)^{\mathrm{spin-orb}} = F_{o y, o' y}^{\mathrm{spin}, \, \mathrm{orb}} + F_{o' y, o y}^{\mathrm{spin}, \, \mathrm{orb}} , \end{align} \begin{align} & \left( \mathcal{J}^y_{o o'} \right)^{\mathrm{spin-spin}} = F_{o x, o' x}^{\mathrm{spin}, \, \mathrm{spin}} , \nonumber \\ & \left( \mathcal{J}^y_{o o'} \right)^{\mathrm{orb-orb}} = F_{o x, o' x}^{\mathrm{orb}, \, \mathrm{orb}} , \nonumber \\ & \left( \mathcal{J}^y_{o o'} \right)^{\mathrm{spin-orb}} = F_{o x, o' x}^{\mathrm{spin}, \, \mathrm{orb}} + F_{o' x, o x}^{\mathrm{spin}, \, \mathrm{orb}}, \end{align} and the terms related to $\mathcal{J}^{z}_{o o'}$ are given by the averages of the respective terms contributing to $\mathcal{J}^{x}_{o o'}$ and $\mathcal{J}^{y}_{o o'}$, since $\mathcal{J}^{z}_{o o'} = \left( \mathcal{J}^{x}_{o o'} + \mathcal{J}^{y}_{o o'} \right) / 2$. 
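Purely as bookkeeping, the following minimal Python sketch shows how the symmetrizations listed in this section assemble the parameters from the quantity of Eq.\eqref{F}. The arrays \texttt{F[('X','Y')][o,a,op,ap]}, holding the numbers $F^{\mathrm{XY}}_{o\alpha,o'\alpha'}$ with $\alpha\in\lbrace x,y\rbrace$ encoded as $\lbrace 0,1\rbrace$, are an assumed input (their computation requires the Green's functions and self-energies of the actual electronic problem); the random arrays below only make the sketch runnable.
\begin{verbatim}
import numpy as np

X, Y = 0, 1  # indices encoding the coordinates x and y

def assemble(F, o, op):
    # Assemble D^z, C^z and the anisotropic exchange for a pair (o, o')
    # from precomputed F^{XY}_{o alpha, o' alpha'} (assumed input).
    Fss, Foo = F[('spin', 'spin')], F[('orb', 'orb')]
    Fso, Fos = F[('spin', 'orb')], F[('orb', 'spin')]
    Dz = {'spin-spin': 0.5 * (Fss[o, X, op, Y] - Fss[op, X, o, Y]),
          'orb-orb':   0.5 * (Foo[o, X, op, Y] - Foo[op, X, o, Y]),
          'spin-orb':  0.5 * (Fso[o, X, op, Y] + Fos[o, X, op, Y]
                              - Fso[op, X, o, Y] - Fos[op, X, o, Y])}
    Cz = {'spin-spin': -0.5 * (Fss[o, X, op, Y] + Fss[op, X, o, Y]),
          'orb-orb':   -0.5 * (Foo[o, X, op, Y] + Foo[op, X, o, Y]),
          'spin-orb':  -0.5 * (Fso[o, X, op, Y] + Fos[o, X, op, Y]
                               + Fso[op, X, o, Y] + Fos[op, X, o, Y])}
    # J^x comes from the (y, y) entries, J^y from (x, x), and J^z is their
    # average; the spin-spin channel is shown, the others are analogous.
    Jx, Jy = Fss[o, Y, op, Y], Fss[o, X, op, X]
    return Dz, Cz, (Jx, Jy, 0.5 * (Jx + Jy))

rng = np.random.default_rng(0)           # dummy input for illustration only
F = {k: rng.normal(size=(2, 2, 2, 2)) for k in
     [('spin', 'spin'), ('orb', 'orb'), ('spin', 'orb'), ('orb', 'spin')]}
print(assemble(F, 0, 1))
\end{verbatim}
Note that only the $x$ and $y$ components of $F$ enter the parameters listed above, as is evident from the formulas.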
It is instructive to consider the particular case where the self-energy is local not only in (atom) position space, but also diagonal in the principal quantum number ($n$) and orbital angular momentum ($l$) indices (we recall that $o \equiv \lbrace a, n, l \rbrace$), so that \begin{align} \Sigma^o_{o'} \approx \delta^o_{o'} \Sigma_o ; \label{DMFT approx} \end{align} we obtain \begin{align} & \left( \mathcal{J}^x_{o o'} \right)^{\mathrm{spin-spin}} \approx - \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \Big\{ \left[ s_{o y} ; \Sigma_o \right] \cdot G^o_{o'} \cdot \left[ s_{o' y} ; \Sigma_{o'} \right] \cdot G_o^{o'} \Big\} , \nonumber \\ & \left( \mathcal{J}^{x}_{o o'} \right)^{\mathrm{orb-orb}} \approx - \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \Big\{ \left[ l_{o y} ; \Sigma_o \right] \cdot G^o_{o'} \cdot \left[ l_{o' y} ; \Sigma_{o'} \right] \cdot G_o^{o'} \Big\} , \nonumber \\ & \left( \mathcal{J}^{x}_{o o'} \right)^{\mathrm{spin-orb}} \approx - \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \Big\{ \left[ s_{o y} ; \Sigma_o \right] \cdot G^o_{o'} \cdot \left[ l_{o' y} ; \Sigma_{o'} \right] \cdot G_o^{o'} \nonumber \\ & \quad \quad \quad \quad \quad \quad \quad + \left[ s_{o' y} ; \Sigma_{o'} \right] \cdot G_o^{o'} \cdot \left[ l_{o y} ; \Sigma_{o} \right] \cdot G^o_{o'} \Big\} , \end{align} \begin{align} & \left( \mathcal{J}^y_{o o'} \right)^{\mathrm{spin-spin}} \approx - \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \Big\{ \left[ s_{o x} ; \Sigma_o \right] \cdot G^o_{o'} \cdot \left[ s_{o' x} ; \Sigma_{o'} \right] \cdot G_o^{o'} \Big\} , \nonumber \\ & \left( \mathcal{J}^{y}_{o o'} \right)^{\mathrm{orb-orb}} \approx - \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \Big\{ \left[ l_{o x} ; \Sigma_o \right] \cdot G^o_{o'} \cdot \left[ l_{o' x} ; \Sigma_{o'} \right] \cdot G_o^{o'} \Big\} , \nonumber \\ & \left( \mathcal{J}^{y}_{o o'} \right)^{\mathrm{spin-orb}} \approx - \mathrm{Tr}_{\omega} \mathrm{Tr}_{m, \sigma} \Big\{ \left[ s_{o x} ; \Sigma_o \right] \cdot G^o_{o'} \cdot \left[ l_{o' x} ; \Sigma_{o'} \right] \cdot G_o^{o'} \nonumber \\ & \quad \quad \quad \quad \quad \quad \quad + \left[ s_{o' x} ; \Sigma_{o'} \right] \cdot G_o^{o'} \cdot \left[ l_{o x} ; \Sigma_{o} \right] \cdot G^o_{o'} \Big\} . \end{align} In this highly symmetric case, also assuming collinear magnetic states and that Green's functions and self-energies are diagonal in spin space, one obtains the isotropic exchange parameter \begin{align*} \left( \mathcal{J}^x_{o o'} \right)^{\mathrm{spin-spin}} = \left( \mathcal{J}^y_{o o'} \right)^{\mathrm{spin-spin}} = \left( \mathcal{J}^z_{o o'} \right)^{\mathrm{spin-spin}} \equiv \left( \mathcal{J}_{o o'} \right)^{\mathrm{spin-spin}} \end{align*} as \begin{align} \left( \mathcal{J}_{o o'} \right)^{\mathrm{spin-spin}} \approx \mathrm{Tr}_{\omega} \mathrm{Tr}_{m } \left( \Sigma_o^{S} \cdot G^{o \uparrow}_{o' \uparrow} \cdot \Sigma^S_{o'} \cdot G_{o \downarrow}^{o' \downarrow} + \Sigma_o^{S} \cdot G^{o \downarrow}_{o' \downarrow} \cdot \Sigma^S_{o'} \cdot G_{o \uparrow}^{o' \uparrow} \right) , \end{align} where $\Sigma_o^{S} \equiv \left(\Sigma_o^{\uparrow} - \Sigma_o^{\downarrow} \right) / 2$, which is consistent with Eq.(109) of Ref.\cite{Secchi15} and with the previous literature \cite{Katsnelson00, Katsnelson02, Secchi13}. \subsection{Local exchange interactions (diagonal anisotropy)} The local exchange interaction parameters, or diagonal components of the local magnetic anisotropy, are given by $\boldsymbol{\mathcal{J}}_{o o}$. 
Within our rotational procedure, it is possible to determine only two of the three parameters as a function of the third one. Putting $\left( \alpha, \bar{\alpha} \right) = \left( x, y \right)$ or $\left( y, x \right)$, we obtain \begin{align} & \left( \mathcal{J}_{o o}^{\bar{\alpha}} - \mathcal{J}_{o o}^z \right)^{\mathrm{spin-spin}} = F_{o \alpha, o \alpha}^{\mathrm{spin,} \, \mathrm{spin}} + \left( \mathcal{B}^{z}_{o} \right)^{\mathrm{spin}} + \frac{1}{2} \sum_{o' \neq o} \left( \mathcal{J}^{x}_{o o'} + \mathcal{J}^{y}_{o o'} \right)^{\mathrm{spin-spin}} , \nonumber \\ & \left( \mathcal{J}_{o o}^{\bar{\alpha}} - \mathcal{J}_{o o}^z \right)^{\mathrm{orb-orb}} = F_{o \alpha, o \alpha}^{\mathrm{orb,} \, \mathrm{orb}} + \left( \mathcal{B}^{z}_{o} \right)^{\mathrm{orb}} + \frac{1}{2} \sum_{o' \neq o} \left( \mathcal{J}^{x}_{o o'} + \mathcal{J}^{y}_{o o'} \right)^{\mathrm{orb-orb}} , \nonumber \\ & \left( \mathcal{J}_{o o}^{\bar{\alpha}} - \mathcal{J}_{o o}^z \right)^{\mathrm{spin-orb}} = F_{o \alpha, o \alpha}^{\mathrm{spin,} \, \mathrm{orb}} + F_{o \alpha, o \alpha}^{\mathrm{orb,} \, \mathrm{spin}} + \frac{1}{2} \sum_{o' \neq o} \left( \mathcal{J}^{x}_{o o'} + \mathcal{J}^{y}_{o o'} \right)^{\mathrm{spin-orb}} , \end{align} where the non-local exchange terms are given in Section \ref{exch sep}, and the magnetic field components are defined in Eqs.\eqref{B sep}. \section{Conclusions} We have provided the formulas for the general exchange tensor expressing the quadratic magnetic interactions in strongly correlated systems, in a version that can be implemented via DMFT. The formulas allow one to compute the effects due to intrinsic-spin and orbital degrees of freedom of the electrons on an equal footing (the orbital magnetic moments are not quenched), and to distinguish between the spin, orbital and spin-orbital interactions that contribute to the exchange tensor. The obtained formulas represent the extension to the relativistic case and the generalization to unquenched orbital magnetic moments of the well-known formulas for spin-only exchange interactions \cite{Katsnelson00}, which are recovered as a particular case. We remark that effects due to the non-locality of self-energies in position space are included in our theory both as presented in Ref.\cite{Secchi15} and as presented here; although they cannot be computed within DMFT, a possible approach to include them is via the Dual-Fermion scheme \cite{Rubtsov08}. \section*{Acknowledgements} This work is supported by the European Union Seventh Framework Programme under grant agreement No.281043 (FEMTOSPIN) and by the Deutsche Forschungsgemeinschaft under grant SFB-668.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The topological type of a normal complex surface singularity $(X,o)$ is determined by its link (an oriented smooth connected 3--manifold), or, equivalently, by the dual graph of any good resolution (a connected graph with a negative definite intersection form \cite{GRa,Mu}, which serves also as a plumbing graph of the link \cite{neumann}). The main question we target is the following: \begin{quest} If one fixes a topological type (say, a minimal good resolution graph) and varies the possible analytic types supported on this fixed topological type, then what are the possible values of the geometric genus $p_g$? \end{quest} A slightly more concrete version is formulated as follows: \begin{prob} Associate combinatorially an integer ${\rm MAX}(\Gamma)$ to any (resolution) graph $\Gamma$, such that for any analytic type supported by $\Gamma$ one has $p_g\leq {\rm MAX}(\Gamma)$, and furthermore, for some analytic structure one has equality. Moreover, define ${\rm MIN}(\Gamma)$ analogously, by the symmetric property. \end{prob} A possible topological lower bound for $p_g$ can be constructed as follows. Fix a resolution $\tX\to X$ and, for any divisor $l$ supported by the exceptional divisor, set $\chi(l):= -(l, l-Z_K)/2$, where $Z_K$ is the canonical cycle (see below) and $( \, , \, )$ denotes the intersection form. Set also ${\rm min}\chi:=\min_{l}\chi(l)$. Then ${\rm min}\chi$ is a topological invariant computable from $\Gamma$; Wagreich considered the expression $p_a(X,o)=1-\min\chi$, and called it the `arithmetical genus' \cite{Wael}. Moreover, for any analytic structure, whenever $p_g>0$, one also has (see e.g. \cite[p. 425]{Wael}) \begin{equation}\label{eq:min} 1-{\rm min}\chi \leq p_g. \end{equation} Indeed, one verifies that ${\rm min}\chi$ can be realized by an effective cycle $l_0>0$ (see e.g. \cite{Book}). Then from the cohomological long exact sequence associated with $0\to \cO_{\tX}(-l_0)\to\cO_{\tX}\to \cO_{l_0}\to 0$ one has $$p_g+\chi(l_0)=\dim H^0(\cO_{\tX})/H^0(\cO_{\tX}(-l_0))+h^1(\cO_{\tX}(-l_0))\geq 1$$ (since $H^0(\cO_{\tX})/H^0(\cO_{\tX}(-l_0))$ contains the class of constants). Inequality (\ref{eq:min}) is sometimes sharp: e.g. for elliptic singularities (when ${\rm min}\chi=0$) Laufer proved that for the generic analytic structure one has indeed $p_g=1-{\rm min}\chi=1$ \cite{la.me}. For various generalizations of (\ref{eq:min}) (inequalities which involve, besides $\min\chi$ and $p_g$, some other analytic invariants as well) see e.g. \cite[(2.6)]{Tomari86} or \cite[Prop. 8]{KNF}. However, the authors do not know whether the above bound (\ref{eq:min}) is always optimal: \begin{quest} Does there exist for any $\Gamma$ an analytic structure with $p_g=1-{\rm min}\chi$?\end{quest} A possible upper bound for $p_g$ is constructed as follows \cite{nem.lattice}. Let $\{E_i\}_{i\in\cI}$ denote the set of irreducible exceptional curves, and for simplicity {\it we will assume that each $E_i$ is rational}. For any effective cycle $Z>0$ let $\cP(Z)$ be the set of monotone computation sequences $\gamma=\{l_k\}_{k=0}^t$ of cycles supported on the exceptional curve with the following properties: $l_0=0$, $l_t=Z$, and $l_{k+1}=l_k+ E_{i(k)}$ for some $i(k)\in \cI$. Associated with such $\gamma$ we define $$S(\gamma):=\sum_{k=0}^{t-1} \max\{0, (E_{i(k)},l_k)-1\}.$$ Set also ${\rm Path}(Z):=\min _{\gamma\in\cP(Z)} S(\gamma)$. Then for any analytic structure supported on $\Gamma$ one has \begin{equation}\label{eq:maxZ} h^1(\cO_Z)\leq {\rm Path}(Z).
\end{equation} Indeed, from the exact sequence $0\to \cO_{E_{i(k)}}(-l_k)\to \cO_{l_{k+1}}\to \cO_{l_k}\to 0$ we get \begin{equation*} h^1(\cO_{l_{k+1}})-h^1(\cO_{l_k})\leq h^1(\cO_{E_{i(k)}}(-l_k)) = \max\{0, (E_{i(k)},l_k)-1 \} \ \ \ \ \ (0\leq k<t), \end{equation*} hence the inequality follows by summation. Since $p_g=h^1(\cO_{\lfloor Z_K \rfloor})=h^1(\cO_Z)$ for any $Z\geq \lfloor Z_K\rfloor $ when $Z_K\ge 0$, it is natural to define ${\rm Path}(\Gamma):= \min_{Z\geq \lfloor Z_K\rfloor } {\rm Path}(Z)$. It satisfies \begin{equation}\label{eq:max} p_g \leq {\rm Path}(\Gamma ). \end{equation} The computation of ${\rm Path}(\Gamma)$ is rather hard. In \cite{nem.lattice} (see also \cite{N-Sig}) it is related to the Euler characteristic of the `path lattice cohomology' of $\Gamma$. In the next statement we collect some families of singularities for which (\ref{eq:max}) is sharp. \begin{thm} In this statement we consider singularities with rational homology sphere link. In the following cases $p_g={\rm Path}(\Gamma)$ (hence these analytic families realize the maximal $p_g$ on their topological type): -- \ weighted homogeneous normal surface singularities \cite{Book} (in fact, for star-shaped graphs with all $E_i$ rational, ${\rm Path}(\Gamma)$ equals the topological expression of Pinkham valid for $p_g$ \cite{pinkham}), -- \ superisolated hypersurface singularities \cite{N-Sig}, -- \ isolated hypersurface Newton--nondegenerate singularities \cite{N-Sig}, -- \ rational singularities \cite{Book}, -- \ Gorenstein elliptic singularities \cite{Book}. \end{thm} One can expect that the realization $p_g={\rm Path}(\Gamma)$ holds even more generally. However, the main aim of the present article is to show that the upper bound (\ref{eq:max}) in general is not sharp: for a certain graph $\Gamma$ the bound ${\rm Path}(\Gamma)$ cannot be realized. Surprisingly, the very same example proves some additional statements as well (the third part is motivated by the `conviction' that a `large' $p_g$ is usually realized simultaneously with a `small' maximal ideal cycle): \begin{thm}\label{t:pg<path} There exists a numerically Gorenstein topological type for which -- \ $p_g<{\rm Path}(\Gamma)$ for any analytic type supported on $\Gamma$; -- \ even if an analytic type realizes the maximal $p_g$ (among all analytic types supported on the topological type under discussion) it is not necessarily Gorenstein; -- \ even if an analytic type realizes the maximal $p_g$, the maximal ideal cycle is not necessarily the Artin cycle. \end{thm} Our fixed topological type, which has the above properties, is given by the minimal good graph of Figure 1. \begin{figure} \begin{picture}(300,45)(30,0) \put(125,25){\circle*{4}} \put(150,25){\circle*{4}} \put(175,25){\circle*{4}} \put(200,25){\circle*{4}} \put(225,25){\circle*{4}} \put(150,5){\circle*{4}} \put(200,5){\circle*{4}} \put(125,25){\line(1,0){100}} \put(150,25){\line(0,-1){20}} \put(200,25){\line(0,-1){20}} \put(125,35){\makebox(0,0){$-3$}} \put(150,35){\makebox(0,0){$-1$}} \put(175,35){\makebox(0,0){$-13$}} \put(200,35){\makebox(0,0){$-1$}} \put(225,35){\makebox(0,0){$-3$}} \put(160,5){\makebox(0,0){$-2$}} \put(210,5){\makebox(0,0){$-2$}} \end{picture} \caption{The graph $\Gamma$} \label{fig:gamma} \end{figure} In the next statements we assume that $\X$ has the resolution graph $\Gamma$ of Figure 1 and that $\tX$ is its minimal good resolution. Let $\zmi$ be the Artin cycle and $\zma$ the maximal ideal cycle introduced by S. S.-T. Yau \cite{Yau1} (see \defref{d:cycles}).
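The combinatorial invariants of $\Gamma$ used below can also be checked by machine. The following minimal Python sketch verifies ${\rm min}\chi=-1$ by brute force; it assumes (correctly, for this graph) that the minimum of $\chi$ is attained in the box $0\le l\le Z_K$, where $Z_K$ is the canonical cycle displayed below.
\begin{verbatim}
import numpy as np
from itertools import product

# Intersection matrix of Gamma (Fig. 1); assumed vertex order:
# E0(-13), E1(-3), E2(-2), E3(-2), E4(-3), E5(-1), E6(-1),
# where E6 meets E0, E1, E2 and E5 meets E0, E3, E4.
A = np.array([[-13, 0, 0, 0, 0, 1, 1],
              [  0,-3, 0, 0, 0, 0, 1],
              [  0, 0,-2, 0, 0, 0, 1],
              [  0, 0, 0,-2, 0, 1, 0],
              [  0, 0, 0, 0,-3, 1, 0],
              [  1, 0, 0, 1, 1,-1, 0],
              [  1, 1, 1, 0, 0, 0,-1]])
ZK = np.array([3, 5, 7, 7, 5, 14, 14])     # canonical cycle of Gamma

def chi(l):
    l = np.asarray(l)
    return -(l @ A @ l - ZK @ A @ l) // 2  # chi(l) = -(l^2 - Z_K.l)/2

# Brute-force search over the box 0 <= l <= Z_K (takes a few seconds):
print(min(chi(l) for l in product(*(range(c + 1) for c in ZK))))  # -1
\end{verbatim}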
For this graph one has ${\rm min}\chi=-1$ and ${\rm Path}(\Gamma)=4$. The first equality follows from \cite[Example 4.4.1]{nem.lattice}, or by using (\ref{eq:min}), $\chi(\zmi)=-1$ and the existence of an analytic structure with $p_g=2$. The second equality follows again from \cite{nem.lattice} (see also the description of the $\chi$--function for graphs with two nodes in \cite{laszlo}). Nevertheless, we will verify it below as well. With these notations we prove the following. \begin{thm}[Cf. \sref{s:pre}, \sref{s:<4}] For any analytic structure one has $\coeff_{E_0}(\zma)\le 2$ (where $E_0$ is the $(-13)$-curve), and $p_g\X\le 3$. If $\X$ is Gorenstein, then $p_g\X=3$. \end{thm} \begin{thm} Any analytic structure satisfies one of the following properties: \begin{enumerate} \item $\zma=\zmi$, $p_g\X=3$, and $\X$ is a non-Gorenstein Kodaira singularity (cf. \thmref{t:ma=mi}). \item $\zma=2\zmi$ and $\X$ is of splice type (hence Gorenstein with $p_g\X=3$, cf. \thmref{t:splice}). \item $2\zmi\leq \zma < 3\zmi$ (there are three cases, see below), $p_g\X=2$ and $\X$ is not Gorenstein (cf. \thmref{t:gen} and Section \ref{s:nez2z}). \end{enumerate} \end{thm} \begin{cor} The following are equivalent: \begin{enumerate} \item $\zma=\zmi$; \item $\X$ is a Kodaira singularity. \end{enumerate} \end{cor} \begin{cor} The following are equivalent: \begin{enumerate} \item $\zma=2\zmi$, $p_g\X=3$; \item $\X$ is of splice type (complete intersection); \item $\X$ is Gorenstein. \end{enumerate} \end{cor} For Kodaira (or Kulikov) singularities see \cite{kar.p,Stevens85}; for splice singularities see \cite{nw-HSL}. \begin{rem} (1) In general, a Gorenstein singularity with integral homology sphere link and with $\zmi^2=-1$ is not necessarily of splice type. An example can be found in \cite[4.6]{supiso} (where the minimal good graph is even star-shaped). (2) For the two cases with $p_g=3$ (non--Gorenstein Kodaira and splice complete intersection) we provide precise realizations; however, for the $p_g=2$ cases we will not give the realizations (e.g. equations) in this article. (3) The next table lists all the possible analytic structures supported by $\Gamma$ with some of their key properties. $E$ is the exceptional curve of the minimal resolution. For the notation $E_i^*$ see Section \ref{s:2}. \[ \begin{array}{c|c|c|c|c|c|c} \hline \zma & \text{Gorenstein} & p_g & h^1(\cO_E(-E)) & h^1(\cO_E(-2E)) & \mult & \emb \\ \hline \zmi & \text{No (Kodaira)} & 3 & 1 & 0 & 3 & 4 \\ \hline 2\zmi & \text{Yes (splice)}& 3 & 0 & 1 & 4 & 4 \\ \hline 2\zmi \ \mbox{or} & \text{No} & 2 & 0 & 0 & 6 & 7 \\ E^*_1\ \mbox{or} \ E^*_4 & & & & & & \\ \hline \end{array} \] (4) In most of the proofs we use `computation sequences'. Computation sequences were introduced and deeply exploited by Laufer; they constitute a powerful piece of machinery in the theory of surface singularities. The present manuscript supports this fact as well. \end{rem} \begin{rem} After we finished our manuscript, the referee drew our attention to the excellent article \cite{Konno-yaucycles} of K. Konno, which we were not aware of. We thank the referee for this information. Indeed, our proofs and arguments and some of the statements have overlaps with the results of this article, which contains several important results regarding the key cycles of a resolution of a normal surface singularity.
After this information, however, we decided not to change the structure (and the proofs) of our statements; in this way the present manuscript remains (more or less) self-contained and more readable. In this Remark we wish to list some of the overlaps and give credit to \cite{Konno-yaucycles}. (This list certainly covers only the overlaps, and not the huge amount of results of \cite{Konno-yaucycles}.) In \cite{Konno-yaucycles} the author studies singularities with $Z_{min}^2=-1$. Our main example belongs to this family too; in fact, it even belongs to the simplest class of `essentially irreducible $Z_{min}$' of Konno. For example, in the `essentially irreducible $Z_{min}$' case, the fact that $p_g\leq 4$ when $Z_{min}^2=-1$ and $\chi(Z_{min})=\min \chi= -1$ is shown in Theorem 3.9 of \cite{Konno-yaucycles}. Furthermore, \cite[Th. 3.9]{Konno-yaucycles} also states that the singularity must be a double point whenever $p_g=4$. (This overlaps with the first part of our Theorem \ref{t:pg3}.) Also, the calculations of the present note in the Gorenstein case (\S\ref{s:2},I) are very similar to \cite[Th. 3.11]{Konno-yaucycles}, which might even slightly shorten the proof of our Theorem \ref{t:splice}. A related statement can be found also in \cite[Lemma 3.4]{Konno-yaucycles}. \end{rem} \begin{acknowledgement} The second author thanks the R\'enyi Institute of Mathematics, Budapest, Hungary, for the warm hospitality during his visit. \end{acknowledgement} \section{Preliminaries}\label{s:pre} Let $\X$ be a normal complex surface singularity and $\pi\:\tX\to X$ a resolution with exceptional set $E$. Let $\{E_i\}_{i\in \cI}$ denote the set of irreducible components of $E$. We denote by $\Gamma$ the resolution graph of $\X$. The group of cycles is defined by $L:=\sum _{i\in \cI}\Z E_i$. We abbreviate the intersection number $(D,D')$ to $DD'$. For any function $f\in H^0(\cO_{\tX})$, $f\ne 0$, let $(f)_E$ denote the exceptional part of $\di (f)$, namely, $(f)_E=\sum_{i\in \cI}\ord_{E_i}(f\circ \pi)E_i \in L$. A divisor $D$ on $\tX$ is said to be nef (resp. anti-nef) if $DE_i\ge 0$ (resp. $DE_i\le 0$) for all $i\in \cI$. We write $h^i( * )=\dim_{\C}H^i(*)$. Moreover, for an effective cycle $l\in L$ we write $H^i(l):=H^i(\cO_{l})$, and $\chi(l)$ denotes the Euler characteristic $\chi(\cO_l)=h^0(l)-h^1(l)$. By the Riemann--Roch formula, for a divisor $D$ on $\tX$, \[ \chi(\cO_l(D))=h^0(\cO_l(D))-h^1(\cO_l(D))=\chi(l)+Dl=-(l^2-Z_Kl)/2+Dl, \] where $Z_K$ denotes the canonical cycle (see \defref{d:cycles}). The expression $\chi(l) = -(l^2-Z_Kl)/2 $ is extended to any $l\in L$. \begin{defn}\label{d:cycles} We define the (minimal) Artin cycle $\zmi$, the maximal ideal cycle $\zma$, and the cohomological cycle $\zco \in L$ as follows: \begin{enumerate} \item $\zmi=\min\defset{Z>0}{\text{$Z$ is anti-nef}}$. \item $\zma=\min\defset{(f)_E}{f\in \m_{X,o}}$, where $\m_{X,o}$ is the maximal ideal of $\cO_{X,o}$. \item $\zco=\min\defset{Z>0}{h^1(\cO_Z)=p_g\X}$ if $p_g\X>0$; $\zco=0$ if $p_g\X=0$. \item The canonical cycle $Z_K\in L\otimes \Q$ is defined by $K_{\tX}E_i=-Z_KE_i$ for all $i\in \cI$. If $Z_K\in L$, then $\X$ or $\Gamma$ is said to be numerically Gorenstein. \end{enumerate}\end{defn} For the existence of the unique cohomological cycle on any resolution (with the property $h^1(Z)<p_g$ for any $Z\not\geq \zco$) see Reid \cite[\S 4.8]{chap}. One has $\zco\leq \lfloor Z_K\rfloor$. Recall that $\X$ is Gorenstein if and only if $-Z_K\sim K_{\tX}$ (linear equivalence on $\tX$).
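To make \defref{d:cycles} concrete for the graph of \figref{fig:gamma}, the following minimal Python sketch computes $\zmi$ by Laufer's algorithm and $Z_K$ from the adjunction relations $Z_KE_i=E_i^2+2$ (valid since all $E_i$ are rational); the vertex ordering is the same assumed ordering as in the previous sketch.
\begin{verbatim}
import numpy as np

# Vertex order: E0(-13), E1(-3), E2(-2), E3(-2), E4(-3), E5(-1), E6(-1);
# E6 meets E0, E1, E2 and E5 meets E0, E3, E4 (Fig. 1).
A = np.array([[-13, 0, 0, 0, 0, 1, 1],
              [  0,-3, 0, 0, 0, 0, 1],
              [  0, 0,-2, 0, 0, 0, 1],
              [  0, 0, 0,-2, 0, 1, 0],
              [  0, 0, 0, 0,-3, 1, 0],
              [  1, 0, 0, 1, 1,-1, 0],
              [  1, 1, 1, 0, 0, 0,-1]], dtype=float)

# Laufer's algorithm: start from some E_i and, while Z.E_j > 0 for some j,
# replace Z by Z + E_j; the procedure stops exactly at the Artin cycle.
z = np.zeros(7); z[0] = 1
while (A @ z > 0).any():
    z[np.argmax(A @ z)] += 1
print(z)                            # Z_min = (1, 2, 3, 3, 2, 6, 6)

# Z_K solves Z_K.E_i = E_i^2 + 2; it comes out integral, i.e. Gamma is
# numerically Gorenstein.
zk = np.linalg.solve(A, np.diag(A) + 2)
print(np.round(zk))                 # Z_K = (3, 5, 7, 7, 5, 14, 14)
\end{verbatim}
Both outputs agree with the cycles displayed below, and one checks $\zmi^2=-1$ and $\chi(\zmi)=-1$ directly from these vectors.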
\begin{rem} Let $k$ be a positive integer. (1) If $\zma=k\zmi$, $\tX$ is the minimal resolution, and $\cO_{\tX}(-\zma)$ has no base point, then the same equality holds on any resolution. (2) If $\zma=k\zmi$ on a resolution, then the same equality holds on the minimal resolution. \end{rem} \begin{thm}[{Konno \cite[\S 3]{Konno-coh}}] \label{t:chc} (1) If $\X$ is Gorenstein and $p_g\X\ge 2$, then $p_g\X> p_a(\zmi)=1-\chi(\zmi)$. (2) Assume that $\X$ is numerically Gorenstein and $Z_K\ge 0$. Then $\X$ is Gorenstein if and only if $Z_K=\zco$. \end{thm} Next, assume that the link of $\X$ is a $\Q$-homology sphere and that the graph $\Gamma$ is numerically Gorenstein. It is not hard to verify that in the numerically Gorenstein case ${\rm Path}(\Gamma)={\rm Path}(Z_K)$ (a detailed proof can be found in \cite{Book}). The next results analyse certain cases in which the inequality $p_g\X\le {\rm Path}(\Gamma)$ from (\ref{eq:max}) is strict. \begin{thm} \label{t:bd} Assume that $\Gamma$ is numerically Gorenstein and $Z_K> \zco$ for some analytic structure $(X,o)$ (that is, $(X,o)$ is not Gorenstein). If one of the following properties holds: (1) either the map $\{\gamma\in \cP(Z_K)\,:\, S(\gamma)={\rm Path}(\Gamma)\}\to \{E_i\}_{i\in\cI}, \ \ \gamma\mapsto E_{i(t-1)}$, is surjective, or (2) the support $|Z_K-\zco|$ is $E$, \noindent then $p_g(X,o)<{\rm Path}(\Gamma)$. \end{thm} \begin{proof} We prove that if $p_g={\rm Path}(\Gamma)$ and the surjectivity (1) holds, then $\zco=Z_K$; since $Z_K>\zco$ by assumption, this yields the desired contradiction. Indeed, the assumption $p_g={\rm Path}(\Gamma)$ implies that along any path $\gamma$ with $S(\gamma)={\rm Path}(\Gamma)$, at every step where a jump is possible, that is, where $E_{i(k)}l_k-1>0$, the value of $h^1$ necessarily grows by this amount. On the other hand, for any choice of $\gamma$, $l_{t-1}$ has the form $Z_K-E_{i(t-1)}$. Since $l_{t-1} E_{i(t-1)}-1= 2\chi(E_{i(t-1)})-1=1$, the assumption $p_g={\rm Path}(\Gamma)$ implies that necessarily $h^1(Z_K-E_{i(t-1)})<h^1(Z_K)=p_g$. By the surjectivity (1), this must be the case for any $E_i$, that is, $h^1(Z_K-E_{i})<h^1(Z_K)=p_g$ for any $i\in\cI$. This shows that $\zco=Z_K$. Suppose now that condition (2) holds; it guarantees $\zco\leq Z_K-E_i$ for every $i\in\cI$. Fix $\gamma\in \cP(Z_K)$, $\gamma=\{l_k\}_{k=0}^t$, with $S(\gamma)={\rm Path}(\Gamma)$. Let $\gamma'$ be the shorter path $\gamma'=\{l_k\}_{k=0}^{t-1}$. Then, by a computation similar to the one above, $S(\gamma')=S(\gamma)-1$. Hence, by (\ref{eq:maxZ}), $p_g=h^1(\zco)\leq h^1(\cO_{Z_K-E_{i(t-1)}})\leq S(\gamma')<S(\gamma)={\rm Path}(\Gamma)$. \end{proof} \begin{ass} From now on, we assume that the minimal good resolution graph $\Gamma$ of $\X$ is as in \figref{fig:gamma}.
\end{ass} The cycles $\zmi$ and $Z_K$ are shown in the next picture: \begin{picture}(300,45)(10,0) \put(25,25){\circle*{4}} \put(50,25){\circle*{4}} \put(75,25){\circle*{4}} \put(100,25){\circle*{4}} \put(125,25){\circle*{4}} \put(50,5){\circle*{4}} \put(100,5){\circle*{4}} \put(25,25){\line(1,0){100}} \put(50,25){\line(0,-1){20}} \put(100,25){\line(0,-1){20}} \put(25,35){\makebox(0,0){$2$}} \put(50,35){\makebox(0,0){$6$}} \put(75,35){\makebox(0,0){$1$}} \put(100,35){\makebox(0,0){$6$}} \put(125,35){\makebox(0,0){$2$}} \put(60,5){\makebox(0,0){$3$}} \put(110,5){\makebox(0,0){$3$}} \put(225,25){\circle*{4}} \put(250,25){\circle*{4}} \put(275,25){\circle*{4}} \put(300,25){\circle*{4}} \put(325,25){\circle*{4}} \put(250,5){\circle*{4}} \put(300,5){\circle*{4}} \put(225,25){\line(1,0){100}} \put(250,25){\line(0,-1){20}} \put(300,25){\line(0,-1){20}} \put(225,35){\makebox(0,0){$5$}} \put(250,35){\makebox(0,0){$14$}} \put(275,35){\makebox(0,0){$3$}} \put(300,35){\makebox(0,0){$14$}} \put(325,35){\makebox(0,0){$5$}} \put(260,5){\makebox(0,0){$7$}} \put(310,5){\makebox(0,0){$7$}} \end{picture} One easily verifies that $\chi(\zmi)=-1$, hence $h^1(\zmi)=2$, which implies $p_g\geq 2$. (In fact, ${\rm min}\chi$ is also $-1$, cf. \cite[4.4.1]{nem.lattice}.) For any path $\gamma=\{l_k\}_k$ we say that $\gamma$ has a simple jump at $k$ if $E_{i(k)}l_k=2$. Let us prove first that for the above graph one has ${\rm Path}(\Gamma)\leq 4$. For this we have to construct a path with (at most) four simple jumps. We start with $l_0=0$; then we add a base element, say the $(-13)$--vertex $E_0$. Then there exists a `Laufer computation sequence' starting from $E_0$ and ending with $\zmi$, determined by Laufer's algorithm (for the Artin cycle) \cite{la.ra}, which has exactly two simple jumps, and at all the other steps $E_{i(k)}l_k=1$. Next, we add a base element (say $E_5$, one of the $(-1)$--base cycles) to $\zmi$. Then, again, there is a computation sequence starting with $\zmi+E_5$ and ending with $2\zmi$ with exactly one simple jump and at all the other steps $E_{i(k)}l_k=1$. Finally, constructed in a similar way, there is an increasing sequence starting with $2\zmi$ and ending with $Z_K$ such that there are two steps with $E_{i(k)}l_k=0$ (including the very first one), one simple jump, and at all the other steps $E_{i(k)}l_k=1$. (Since $\chi(Z_K-E_i)=1>\chi(Z_K)=0$, a jump must necessarily appear.) This shows that ${\rm Path}(\Gamma)\leq 4$, hence for any analytic structure $p_g\leq 4$. In Section \ref{s:Kod} we show (using also the fact, proved in Section \ref{s:<4}, that $p_g<4$) that the Kodaira analytic structure satisfies $p_g=3$ and $\zco \le 2\zmi\leq Z_K-E$ (cf. (\ref{eq:zcoh})). Hence, by Theorem \ref{t:bd}, ${\rm Path}(\Gamma)=4$. Moreover, analysing the long exact cohomological sequences at each step along the paths considered above, we obtain that \begin{equation}\label{eq:h1s} \left\{ \begin{array}{l} h^1(\zmi)=2, \\ h^1(2\zmi)\leq h^1(\zmi)+1,\\ h^1(Z_K)=p_g\leq h^1(2\zmi)+1.\end{array}\right. \end{equation} Furthermore, the reader is invited to verify (by constructing the corresponding paths) that the above sequence--construction procedure has the following additional property as well. For any $i\in\cI$, there is a sequence starting with $2\zmi$ and ending with $Z_K$, with all the properties listed above, and which ends with $E_{i}$ (that is, at the very last step we have to add $E_i$). Therefore, Theorem \ref{t:bd} and (\ref{eq:h1s}) read as follows.
\begin{cor}\label{cor:Gorenstein} If there exists a singularity $(X,o)$ with graph $\Gamma$ (as in Figure 1) and $p_g=4$, then $(X,o)$ must be Gorenstein and necessarily $h^1(m\zmi)=m+1$ for $m=1,2,3$. (Note that $3\zmi \ge Z_K$.) \end{cor} This will be an important ingredient in proving that $p_g=4$ is not realized. In the rest of this section, we assume that $\pi\: \tX\to X$ is the {\it minimal} resolution. Then $E$ is an irreducible curve with $E^2=-1$ and two ordinary cusps; it corresponds to the $(-13)$--curve in \figref{fig:gamma}. One verifies the following facts. \begin{equation}\label{eq:chiE} h^1(E)=2, \quad \chi(\cO_E(-nE))=n-1, \quad \chi(nE)=(n^2-3n)/2 \; \text{ for } n\ge 0. \end{equation} From the exact sequence \begin{equation}\label{eq:0-1} 0 \to \cO_{\tX}(-E)\to \cO_{\tX}\to \cO_E \to 0, \end{equation} we have \begin{equation}\label{eq:pg-2} h^1(\cO_{\tX}(-E))=p_g\X-2. \end{equation} By the adjunction formula, we obtain that $Z_K=3E$. By the Grauert-Riemenschneider vanishing theorem, $H^1(\cO_{\tX}(-3E))=0$. Therefore, the exact sequence $ 0\to \cO_{\tX}(-3E)\to \cO_{\tX}(-2E)\to \cO_{E}(-2E)\to 0, $ implies \begin{equation}\label{eq:2E}\left\{ \begin{array}{ll} (a) \ \ \dim \frac{H^0(\cO_{\tX}(-2E))}{H^0(\cO_{\tX}(-3E))}=\dim H^0(\cO_E(-2E))\geq \chi(\cO_E(-2E))=1,\\ \\ (b) \ \ \ h^1(\cO_{\tX}(-2E))=h^1(\cO_E(-2E)).\end{array}\right. \end{equation} Hence, the definition of $\zma$ and (\ref{eq:2E})(a) imply the following. \begin{prop}\label{p:zle2e} $\zma\le 2E$ on the minimal resolution. \end{prop} \section{A singularity with $p_g\ge4$ does not exist} \label{s:<4} The aim of this section is to prove the following. \begin{thm}\label{t:pg3} For all analytic structures $(X,o)$ supported on $\Gamma$ one has $2\le p_g\X\le 3$. If $\X$ is Gorenstein, then $p_g\X=3$. \end{thm} The proof consists of several steps. Notice that the second part follows from \eqref{eq:min} and Theorem \ref{t:chc}, since $1-\chi(\zmi)=2$ (provided that we verify that $p_g\leq 3$). Hence we need to prove that $p_g=4$ cannot occur. To do this, we assume that $p_g\X=4$ for some $(X,o)$ and we will deduce a contradiction. By Corollary \ref{cor:Gorenstein}, $(X,o)$ is necessarily Gorenstein. Let $\tX$ be the minimal resolution. Then $K_{\tX}=-Z_K=-3E$. Moreover, by Corollary \ref{cor:Gorenstein} again, in the minimal good resolution $h^1(m\zmi)=m+1$ for $m=1,2,3$. Hence in the minimal resolution (e.g. by a Leray spectral sequence argument) \begin{equation}\label{eq:h1s2} h^1(mE)=m+1 \ \ (m=1,2,3). \end{equation} From \eqref{eq:pg-2} $h^1(\cO_{\tX}(-E))=2$, and from \eqref{eq:2E} we also have $h^1(\cO_{\tX}(-2E))=1$, since $h^1(\cO_E(-2E))=h^0(\cO_E)=1$ by duality. From the exact sequence \begin{equation}\label{eq:E2E} 0 \to \cO_E(-E)\to \cO_{2E}\to \cO_E \to 0, \end{equation} we also obtain $h^1(\cO_E(-E))=1$. So $h^0(\cO_E(-E))=1$ since $\chi(\cO_E(-E))=0$. Since $h^1(\cO_{\tX}(-2E))-h^1(\cO_{\tX}(-E))+h^1(\cO_E(-E))=0$, from the exact sequence \begin{equation}\label{eq:1-2} 0 \to \cO_{\tX}(-2E)\to \cO_{\tX}(-E)\to \cO_E(-E) \to 0, \end{equation} $H^0(\cO_{\tX}(-E))\to H^0(\cO_E(-E))\cong \C$ is surjective. Therefore, $\zma=E$. Let $s\in H^0(\cO_E(-E))$ be the image of a general function $f\in H^0(\cO_{\tX}(-E))$. Consider the exact sequence \[ 0\to \cO_E(-E) \xrightarrow{\times s} \cO_E(-2E) \to \cO_P(-2E)\to 0, \] where $P\in E$ is the zero of $s$. Since $\deg \cO_E(-E)=1$, $P$ is a nonsingular point of $E$.
Since $h^1(\cO_E(-2E))=1=h^1(\cO_E(-E))$, we get that $H^0(\cO_E(-2E))\to H^0(\cO_P(-2E))$ is surjective, hence $P$ is not a base point of $H^0(\cO_E(-2E))$. Furthermore, since $H^0(\cO_{\tX}(-2E))\xrightarrow{r} H^0(\cO_E(-2E))$ is surjective, there exists a function $g\in H^0(\cO_{\tX}(-2E))$ such that $r(g)(P)\ne 0$ and $(g)_E=2E$. We can choose local coordinates $x,y$ at $P$ such that $E=\{x=0\}$, $f=xy$, $g=x^2$. Then $\m_{X,o}\cO_{\tX}=(x,y)\cO_{\tX}(-E)$ at $P$; equivalently, $\m_{X,o} \cO_{\tX}=\m_P\cO_{\tX}(-E)$. Hence $\mult\X=-E^2+1=2$. Now, it is well known that a normal surface singularity with multiplicity two is necessarily a hypersurface of suspension type: $\X=(\{z^2+h(x,y)=0\}, o)$ in suitable local coordinates. However, this is impossible by the following proposition and by the fact that the splice diagram of $\Gamma$ is \begin{picture}(300,40)(50,0) \put(125,25){\circle*{4}} \put(150,25){\circle*{4}} \put(200,25){\circle*{4}} \put(225,25){\circle*{4}} \put(150,5){\circle*{4}} \put(200,5){\circle*{4}} \put(125,25){\line(1,0){100}} \put(150,25){\line(0,-1){20}} \put(200,25){\line(0,-1){20}} \put(145,30){\makebox(0,0){$3$}} \put(158,30){\makebox(0,0){$7$}} \put(193,30){\makebox(0,0){$7$}} \put(206,30){\makebox(0,0){$3$}} \put(155,17){\makebox(0,0){$2$}} \put(205,17){\makebox(0,0){$2$}} \end{picture} \begin{prop} \cite{NWCasson} \ Assume that the link of $\{z^n+h(x,y)=0\}$ is an integral homology sphere. Then the following facts hold. \begin{enumerate} \item $h$ is irreducible; \item Assume that the splice diagram of $h$ is the following (for details see \cite{EN}): \begin{picture}(400,55)(40,20) \put(60,60){\circle*{4}} \put(100,60){\circle*{4}} \put(150,60){\circle*{4}} \put(250,60){\circle*{4}} \put(300,60){\circle*{4}} \put(100,30){\circle*{4}} \put(150,30){\circle*{4}} \put(250,30){\circle*{4}} \put(300,30){\circle*{4}} \put(92,65){\makebox(0,0){$a_1$}} \put(142,65){\makebox(0,0){$a_2$}} \put(240,65){\makebox(0,0){$a_{s-1}$}} \put(292,65){\makebox(0,0){$a_s$}} \put(109,65){\makebox(0,0){$1$}} \put(159,65){\makebox(0,0){$1$}} \put(259,65){\makebox(0,0){$1$}} \put(309,65){\makebox(0,0){$1$}} \put(108,50){\makebox(0,0){$p_1$}} \put(158,50){\makebox(0,0){$p_2$}} \put(262,50){\makebox(0,0){$p_{s-1}$}} \put(308,50){\makebox(0,0){$p_s$}} \put(200,60){\makebox(0,0){$\cdots$}} \put(100,30){\line(0,1){30}} \put(150,30){\line(0,1){30}} \put(250,30){\line(0,1){30}} \put(300,30){\line(0,1){30}} \put(60,60){\line(1,0){115}} \put(225,60){\vector(1,0){110}} \end{picture} \noindent Then $(a_ip_i,n)=1$ for all $i$.
\item The splice diagram of $\{z^n+h(x,y)=0\}$ is \begin{picture}(400,55)(40,20) \put(60,60){\circle*{4}} \put(100,60){\circle*{4}} \put(150,60){\circle*{4}} \put(250,60){\circle*{4}} \put(300,60){\circle*{4}} \put(100,30){\circle*{4}} \put(150,30){\circle*{4}} \put(250,30){\circle*{4}} \put(300,30){\circle*{4}} \put(92,65){\makebox(0,0){$a_1$}} \put(142,65){\makebox(0,0){$a_2$}} \put(240,65){\makebox(0,0){$a_{s-1}$}} \put(292,65){\makebox(0,0){$a_s$}} \put(109,65){\makebox(0,0){$n$}} \put(159,65){\makebox(0,0){$n$}} \put(259,65){\makebox(0,0){$n$}} \put(309,65){\makebox(0,0){$n$}} \put(108,50){\makebox(0,0){$p_1$}} \put(158,50){\makebox(0,0){$p_2$}} \put(262,50){\makebox(0,0){$p_{s-1}$}} \put(308,50){\makebox(0,0){$p_s$}} \put(200,60){\makebox(0,0){$\cdots$}} \put(100,30){\line(0,1){30}} \put(150,30){\line(0,1){30}} \put(250,30){\line(0,1){30}} \put(300,30){\line(0,1){30}} \put(60,60){\line(1,0){115}} \put(225,60){\line(1,0){110}}\put(335,60){\circle*{4}} \end{picture} \end{enumerate} \end{prop} \section{The case $\zma=\zmi$}\label{s:Kod} Proposition 2.7 and Lemma 2.9.1 of \cite[\S 2]{kar.p} guarantee the existence of a normal complex surface singularity $\X$ with minimal good resolution graph $\Gamma$ on which $\zma=\zmi$. Indeed, let us construct an `extended' graph $\Gamma^e$ by gluing a $(-1)$--vertex to the $(-13)$--vertex of $\Gamma$ by a new edge. In this way we get a negative semi--definite graph. By a theorem of Winters \cite{Winters} there exists a family of projective curves $h_W:W\to (\C,0)$ such that $W$ is smooth, the central fiber is encoded by $\Gamma^e$, and the nearby fibers are smooth. Let $\tX$ be a suitable small neighbourhood of the union of the central curves indexed by $\Gamma$. Then this union of curves can be contracted by Grauert's theorem \cite{GRa} to get a singularity $(X,o)$, and $\tX$ serves as its minimal good resolution, on which the restriction $h$ of $h_W$ is a function with $(h)_E=\zmi$. An analytic type constructed in this way is called Kodaira \cite{kar.p} (or Kulikov \cite{Stevens85}). We shall prove the following. \begin{thm}\label{t:ma=mi} If $\zma=\zmi$ on the minimal good resolution, then $\X$ is necessarily a non-Gorenstein Kodaira singularity with $p_g\X=3$, $\emb\X=4$ and $\mult\X=3$. Furthermore, such an $\X$ is the total space of a one-parameter family of the curve singularity defined by $\rank\begin{pmatrix} z_1 & z_2 &z_3 \\ z_2 &z_3 & z_1^2 \end{pmatrix}<2$ in $(\C^3,0)$. \end{thm} \begin{proof} We note that $\zma=E$ on the minimal resolution if and only if $\zma=\zmi$ on the minimal good resolution, because if $\di(f)=E+H$ on the minimal resolution, then $H$ intersects $E$ transversally. Assume that $\tX$ is the minimal resolution and that $\zma=E$. Note that $H^1(\cO_{\tX}(-nE))=0$ for $n\ge 3$ by the vanishing theorem (cf. \cite{Gi}). Then $\X$ is a Kodaira singularity by \cite[2.9.1]{kar.p} and $\cO_{\tX}(-E)$ has no fixed component. Hence $\dim H^0(\cO_{\tX}(-E))/H^0(\cO_{\tX}(-2E))\ne 0$. From the exact sequence \eqref{eq:1-2}, we have \begin{multline*} h^1(\cO_{\tX}(-E)) \ge h^1(\cO_E(-E))=h^0(\cO_E(-E))\\ \ge \dim_{\C} H^0(\cO_{\tX}(-E))/H^0(\cO_{\tX}(-2E)) \ge 1. \end{multline*} Since by \thmref{t:pg3} $p_g\X\le 3$, in fact we have $p_g\X=3$ by \eqref{eq:pg-2}, and all the inequalities above are equalities. Hence, via (\ref{eq:E2E}), \begin{equation}\label{eq:zcoh} \zco=2E.\end{equation} By \thmref{t:chc}, $\X$ is not Gorenstein. Since $H^1(\cO_{\tX}(-3E))=0$, it follows from \cite[3.1]{OWYrees} (cf.
also the exact sequence from (\ref{eq:1-2})) that $1=h^1(\cO_{\tX}(-E))>h^1(\cO_{\tX}(-nE))$ for $n\ge 2$. In particular, $h^1(\cO_{\tX}(-nE))=0$ for $n\ge 2$. Thus we obtain that $H^0(\cO_{\tX}(-nE)) \to H^0(\cO_E(-nE))$ is surjective for $n\ge 0$ and $h^0(\cO_E(-nE))=n-1$ for $n\ge 2$ by \eqref{eq:chiE}. Let us compute the multiplicity of $\X$. Since $h^0(\cO_E(-E))=h^0(\cO_E(-2E))=1$, $\cO_{\tX}(-E)$ and $\cO_{\tX}(-2E)$ have a base point $P$. Take a general section $s\in H^0(\cO_E(-E))$, and consider the exact sequence \[ 0 \to \cO_E(-2E)\xrightarrow{\times s} \cO_E(-3E) \to \cO_P(-3E) \to 0. \] Then $H^0(\cO_E(-3E)) \to H^0(\cO_P(-3E))$ is surjective since $h^0(\cO_E(-2E))=1$ and $h^0(\cO_E(-3E))=2$. Since $H^0(\cO_{\tX}(-3E))\xrightarrow{r} H^0(\cO_E(-3E))$ is surjective, $\cO_{\tX}(-3E)$ has no base point. Hence a general function $g\in H^0(\cO_{\tX}(-3E))$ satisfies $r(g)(P)\ne 0$ and $(g)_E=3E$. As in \sref{s:<4}, for suitable coordinates $x,y$ at $P$, $\m_{X,o} \cO_{\tX}=(y,x^2)\cO_{\tX}(-E)$, where $E=\{x=0\}$. Taking the blowing up $\phi_1\:X_1\to \tX$ at the base point $P$, we have a new base point $Q\in X_1$ such that $\m_{X,o}\cO_{X_1}=\m_Q\cO_{X_1}$. Let $\phi_2\:X_2\to X_1$ be the blowing up at the base point $Q$. Let $E_i\subset X_i$ be the exceptional set of $\phi_i$, $Z_1=\phi_1^*E+E_1$, and $Z_2=\phi_2^*Z_1+E_2$. Then the maximal ideal cycle on $X_2$ is $Z_2$ and $\cO_{X_2}(-Z_2)$ has no base point. Hence $\mult \X=-Z_2^2=3$. Since $\emb\X\le \mult\X+1=4$ (cf. \cite{Abhy}), and $\X$ is not Gorenstein, we have $\emb\X=4$, because any hypersurface is Gorenstein. Let $h\in \m_{X,o}$ be a general function. Then \[ \mult(\{h=0\},o)=\mult\X, \quad \emb(\{h=0\},o)=\emb\X-1. \] By the formula of Morales \cite[2.1.4]{mo.rr}, \[ \delta((\{h=0\},o))=-(Z_KZ_2+Z_2^2)/2=2=\emb((\{h=0\},o)) -1. \] Hence $(\{h=0\},o)$ is a partition curve $Y(3)$ in the sense of \cite[\S 3]{b-c.rat}. This ends the proof of the theorem. \end{proof} \begin{ex} We give defining equations of a Kodaira singularity with graph $\Gamma$. Let us recall \cite[Example 6.3]{o.numGell}. Let $(X',o)\subset (\C^4,o)$ be a singularity defined by $$ \rank \begin{pmatrix} x & y & z \\ y-3w^2 & z+w^{3} & x^2+6wy-2w^3 \end{pmatrix}<2. $$ It is a numerically Gorenstein elliptic singularity. It shares the topological type of the hypersurface singularity $(Y_{2},o):=\{x^2+y^3+z^{13}=0\}\subset (\C^3,o)$, for which $p_g(Y_2,o)=2$; however, $p_g(X',o)=1$. The exceptional set $E'$ of the minimal resolution of $(X',o)$ consists of two rational curves $E'_1$ and $E_2'$ with $E_1'^2=-1$, $E_2'^2=-2$, $E_1'E_2'=1$, and $E_1'$ has an ordinary cusp. The maximal ideal cycle is $2E'_1+E_2'$. The affine piece $V_1\subset \C^5$ of the partial resolution (see \cite[Example 6.3]{o.numGell}) of $(X',o)$ is defined by the equations $$ sx= y-3w^2, \quad sy= z+w^{3}, \quad sz=x^2+6wy-2w^3. $$ Consider the vanishing orders of the coordinate functions along the exceptional set $E'$ on $V_1$. Then the order of $s$ is zero, and the order of $w$ is less than those of $x,y,z$. Hence $\zma=(w)_{E'}$. Note that $H:=\di(w)-(w)_{E'}$ intersects $E_1'\setminus E_2'$ transversally.
The graph of $\di (w)$ on the minimal good resolution is as follows (the arrow corresponds to the strict transform of $H$): \begin{picture}(200,60)(-10,20) \put(60,60){\circle*{4}} \put(100,60){\circle*{4}} \put(150,60){\circle*{4}} \put(100,30){\circle*{4}} \put(190,60){\circle*{4}} \put(60,68){\makebox(0,0){$(4)$}} \put(60,52){\makebox(0,0){$-3$}} \put(150,68){\makebox(0,0){$(2)$}} \put(100,68){\makebox(0,0){$(12)$}} \put(90,30){\makebox(0,0){$(6)$}} \put(160,30){\makebox(0,0){$(1)$}} \put(190,68){\makebox(0,0){$(1)$}} \put(108,52){\makebox(0,0){$-1$}} \put(108,35){\makebox(0,0){$-2$}} \put(158,52){\makebox(0,0){$-7$}} \put(190,52){\makebox(0,0){$-2$}} \put(100,30){\line(0,1){30}} \put(150,30){\line(0,1){30}} \put(150,60){\vector(0,-1){30}} \put(60,60){\line(1,0){130}} \end{picture} Let $\phi\: \X \to (X',o)$ be the double cover of $X'$ branched along $w=0$, namely, $\cO_{X,o}=\cO_{X',o}\{t\}/(t^2-w)$. Then $\X$ is defined by $$ \rank \begin{pmatrix} x & y & z \\ y-3t^4 & z+t^6 & x^2+6t^2y-2t^6 \end{pmatrix}<2. $$ By the method of \cite[III. Appendix 1]{nem.5}, $\X$ has the resolution graph $\Gamma$, and $(t)_E=\zma=\zmi$. \end{ex} \section{The case $\zma=2\zmi$}\label{s:2} Assume that $\tX$ is the minimal good resolution and $\zma=2\zmi$ on $\tX$. We express the irreducible components of $E$ as $E_0, \dots, E_6$ as below. \begin{picture}(200,60)(0,20) \put(60,60){\circle*{4}} \put(100,60){\circle*{4}} \put(140,60){\circle*{4}} \put(180,60){\circle*{4}} \put(100,30){\circle*{4}} \put(180,30){\circle*{4}} \put(220,60){\circle*{4}} \put(60,68){\makebox(0,0){$E_1$}} \put(60,52){\makebox(0,0){$-3$}} \put(140,68){\makebox(0,0){$E_0$}} \put(140,52){\makebox(0,0){$-13$}} \put(220,52){\makebox(0,0){$-3$}} \put(180,68){\makebox(0,0){$E_5$}} \put(100,68){\makebox(0,0){$E_6$}} \put(90,30){\makebox(0,0){$E_2$}} \put(190,30){\makebox(0,0){$E_3$}} \put(220,68){\makebox(0,0){$E_4$}} \put(108,52){\makebox(0,0){$-1$}} \put(108,35){\makebox(0,0){$-2$}} \put(170,52){\makebox(0,0){$-1$}} \put(170,35){\makebox(0,0){$-2$}} \put(100,30){\line(0,1){30}} \put(180,30){\line(0,1){30}} \put(60,60){\line(1,0){160}} \end{picture} The cycle $E_i^*\in L$ is defined by $E_i^*E_i=-1$, $E_i^*E_j=0$ for all $j\ne i$. (In general, $E_i^*$ is an element of $L\otimes \Q$. In our case, $E_i^*\in L$ since the intersection matrix is unimodular.) E.g., $\zmi=E_0^*$. From the exact sequence \[ 0\to \cO_{\tX}(-2\zmi)\to \cO_{\tX}(-\zmi)\to \cO_{\zmi}(-\zmi)\to 0 \] we have \begin{equation}\label{eq:2-1} h^1(\cO_{\tX}(-2\zmi))-h^1(\cO_{\tX}(-\zmi))=\chi(\cO_{\zmi}(-\zmi))=0. \end{equation} Note that this equality holds whenever $\zma\ge 2\zmi$. \vspace{2mm} \noindent {\bf I. \ The Gorenstein case.} \ \begin{thm}\label{t:splice} Assume that $\zma=2\zmi$ on the minimal good resolution and that $\X$ is Gorenstein. Then $\X$ is of splice type and the ``leading forms'' of the splice diagram equations are given by \[ z_1^2z_2+z_3^2+z_4^3, \quad z_1^3+z_2^2+z_4^2z_3, \] where $z_i$ corresponds to the end $E_i$. Furthermore, $\mult\X=4$ and $\cO_{\tX}(-\zma)$ has no base points. \end{thm} The graph $\Gamma$ satisfies the semigroup condition, and we read off the above defining equations from \cite{nw-HSL}. If $\X$ is of splice type, we have $\mult\X=2\cdot 2=4$, because the tangent cone is defined by the regular sequence $z_3^2$, $z_2^2$. Furthermore, $\cO_{\tX}(-\zma)$ has no base points since $-\zma^2=4$ (or, by analysing the divisors $E_1^*$ and $E_4^*$ of $z_1$ and $z_4$).
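(The dual cycles $E_i^*$ appearing here are also immediate to produce by machine: with the intersection matrix $A$ of $\Gamma$ in the vertex ordering $E_0,\dots,E_6$ assumed in the earlier sketches, $E_i^*$ is the $i$-th column of $-A^{-1}$. The following minimal Python sketch also checks the unimodularity used after the definition of $E_i^*$.)
\begin{verbatim}
import numpy as np

# Vertex order E0(-13), E1(-3), E2(-2), E3(-2), E4(-3), E5(-1), E6(-1);
# E6 meets E0, E1, E2 and E5 meets E0, E3, E4.
A = np.array([[-13, 0, 0, 0, 0, 1, 1],
              [  0,-3, 0, 0, 0, 0, 1],
              [  0, 0,-2, 0, 0, 0, 1],
              [  0, 0, 0,-2, 0, 1, 0],
              [  0, 0, 0, 0,-3, 1, 0],
              [  1, 0, 0, 1, 1,-1, 0],
              [  1, 1, 1, 0, 0, 0,-1]])

print(round(np.linalg.det(A)))    # -1: the intersection form is unimodular
Estar = -np.linalg.inv(A)         # column i is the dual cycle E_i^*
print(np.round(Estar[:, 0]))      # E_0^* = Z_min = (1, 2, 3, 3, 2, 6, 6)
\end{verbatim}
With these cycles at hand, we return to the proof of \thmref{t:splice}.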
Therefore, it is sufficient to prove that the end curve condition is satisfied (see \cite{nw-ECT}). Since $\X$ is Gorenstein, we have $p_g\X=3$ by \thmref{t:pg3}. Therefore, from \eqref{eq:pg-2} and \eqref{eq:2-1},
\begin{equation}\label{eq:44}h^1(\cO_{\tX}(-\zmi))=h^1(\cO_{\tX}(-2\zmi))=1.\end{equation}
\begin{lem}\label{l:e4} Let $Z=E_4^*$. Then $\cO_{\tX}(-Z)$ has no fixed component. In particular, there exists a function $f\in H^0(\cO_{\tX}(-Z))$ such that $\di (f)=Z+H$, where $H$ is non-exceptional and $HE=HE_4=1$ (that is, $H$ is a `cut' of $E_4$), and hence the end curve condition at $E_4$ is satisfied.
\end{lem}
\begin{proof}
If $\cO_{\tX}(-Z)$ has a fixed component, then every component of $E-E_4$ is also a fixed component, because for any cycle $D>0$ the minimal anti-nef cycle $D'$ with $D'\ge D$ satisfies $H^0(\cO_{\tX}(-D'))= H^0(\cO_{\tX}(-D))$ (and if $D'>Z$ then $D'\geq Z+E$ too). We will show that $E_6$ cannot be a fixed component. Since $Z>\zmi$ (hence $h^1(\cO_Z)\geq h^1(\cO_{\zmi})=2$), $\zco=Z_K$ and
$
C:=Z_K-Z=E_0+E_1+E_2+2E_6>0,
$
we obtain that $h^1(\cO_Z)=2$. Thus
\begin{equation}\label{eq:11}
h^1(\cO_{\tX}(-Z))\ge p_g\X-h^1(\cO_Z)=1.
\end{equation}
Consider the exact sequences
\begin{align*}
&0\to \cO_{\tX}(-Z-(C-E_6))\to \cO_{\tX}(-Z-E_6)\to \cO_{C-2E_6}(-E_6)\to 0,\\
&0\to \cO_{\tX}(-Z-C)\to \cO_{\tX}(-Z-(C-E_6))\to \cO_{E_6}(-(C-E_6))\to 0.
\end{align*}
Since $h^1(\cO_{C-2E_6}(-E_6))=h^1(\cO_{\tX}(-Z-C))=0$ and $h^1(\cO_{E_6}(-(C-E_6)))=1$, we obtain
\begin{equation}\label{eq:22}
1\ge h^1(\cO_{\tX}(-Z-E_6)).
\end{equation}
Therefore, (\ref{eq:11}) and (\ref{eq:22}) imply that $h^1(\cO_{\tX}(-Z))\ge h^1(\cO_{\tX}(-Z-E_6))$.
\noindent This fact and the exact sequence
\[
0\to \cO_{\tX}(-Z-E_6)\to \cO_{\tX}(-Z)\to \cO_{E_6}\to 0
\]
show that the restriction map $H^0(\cO_{\tX}(-Z)) \to H^0(\cO_{E_6})$ is non-trivial. Hence $E_6$ cannot be a fixed component.
\end{proof}
\begin{lem}\label{l:e3*} Let $Z=E_3^*$. Then $\cO_{\tX}(-Z)$ has no fixed component.
\end{lem}
\begin{proof}
As in the proof of the previous lemma, it is enough to verify that $E_6$ is not a fixed component. There exists a computation sequence $\{Z_k\}_{k=0}^t$ from $Z_0= Z+E_6$ to $Z_t=Z_K+\zmi+E_5+E_3$ with $Z_{k+1}=Z_k+E_{i(k)}$ and $Z_kE_{i(k)}>0$, where we add the base elements $E_1$, $E_2$, $E_0$, and $E_6$ in this order. Then $Z_3E_{i(3)}=2$; at all the other steps $Z_kE_{i(k)}=1$. From the exact sequences
\[
0 \to \cO_{\tX}(-Z_{k+1})\to \cO_{\tX}(-Z_k)\to \cO_{E_{i(k)}}(-Z_k)\to 0,
\]
we obtain that $h^1(\cO_{\tX}(-Z_K-\zmi-E_5-E_3))+1\ge h^1(\cO_{\tX}(-Z-E_6))$. But, by a similar exact sequence, which connects $Z_K+\zmi$ with $Z_t$ (by adding $E_5$ and $E_3$ in this order), $h^1(\cO_{\tX}(-Z_K-\zmi-E_5-E_3))= h^1(\cO_{\tX}(-Z_K-\zmi))$, which is zero by Kodaira type vanishing. Hence
\begin{equation}\label{eq:11b}
1\ge h^1(\cO_{\tX}(-Z-E_6)).
\end{equation}
Let $D=E_0+E_1+E_2+2E_6$. Then $D$ is a minimally elliptic cycle on its support, and thus $h^1(\cO_D)=1$. Since $\cO_{\tX}(-E_4^*)$ has no fixed component, one has $H^0(\cO_D(-E^*_4))\not=0$. This and $E_4^*D=0$ imply that $\cO_{D}(-E_4^*)\cong \cO_D$. On the other hand, since $2Z-3E_4^*=E_3-E_4$, we obtain that
\[
\cO_{D}(-2Z) \cong \cO_{D}(-3E_4^*)\cong \cO_D.
\]
Since $\pic(D)$ has no torsion, we obtain $\cO_{D}(-Z)\cong \cO_D$. Therefore,
\begin{equation}\label{eq:22b}
h^1(\cO_{\tX}(-Z))\ge h^1(\cO_{D}(-Z))=1.
\end{equation}
Finally, from \eqref{eq:11b}, \eqref{eq:22b} and the exact sequence
\[
0\to \cO_{\tX}(-Z-E_6)\to \cO_{\tX}(-Z)\to \cO_{E_6}\to 0,
\]
we obtain that $E_6$ cannot be a fixed component.
\end{proof}
Therefore, the end curve condition is satisfied at all ends, and this finishes the proof of \thmref{t:splice}.\\
\noindent {\bf II. \ The non--Gorenstein case.} \
\begin{thm}\label{t:gen} Assume that $\zma=2\zmi$ on the minimal good resolution and $\X$ is not Gorenstein. Then $p_g\X=2$ and $\zco=E+E_5+E_6$ on the minimal good resolution. Furthermore, $\mult\X=6$ and $\emb\X=7$.
\end{thm}
Assume that $\tX$ is the minimal resolution. Then $\zma=2E$. By \thmref{t:chc}, we have $h^1(\cO_{2E})=p_g\X$. Clearly $h^1(\cO_{E})=h^1(\cO_{2E})$ if and only if $p_g\X=2$; in this case, $\zco=E$, and the cohomological cycle on the minimal good resolution can be computed by \cite[2.6]{OWYcore}. We assume that $h^1(\cO_{E})<h^1(\cO_{2E})$, namely, $p_g\X=3$; we shall again deduce a contradiction. From the exact sequence
\[
0\to \cO_{\tX}(-2E)\to \cO_{\tX}\to \cO_{2E}\to 0,
\]
and from $2E=\zma$ and $\chi(2E)=-1$, we have $h^1(\cO_{\tX}(-2E))=1$. By \eqref{eq:2E}, we have $h^1(\cO_E(-2E))=1$ too. By duality, $h^0(\cO_E(K+3E))=1$ holds. Hence
\begin{equation}\label{eq:ISO}\cO_E(K+3E)\cong \cO_E.\end{equation}
Note that the groups of isomorphism classes of numerically trivial line bundles on $\tX$ and on $2E$ coincide, namely $H^1(\cO_{\tX})=H^1(\cO_{2E})$. Hence the triviality of $\cO_{2E}(K+3E)$ would contradict the fact that $\X$ is not Gorenstein. We have the exact sequence
\begin{equation}\label{eq:ext}
0\to \cO_E(K+2E) \xrightarrow{\alpha} \cO_{2E}(K+3E) \xrightarrow{\beta} \cO_E(K+3E) \to 0
\end{equation}
obtained by tensoring the exact sequence
\begin{equation}\label{eq:ext2}
0\to \cO_E(-E) \to \cO_{2E}\to \cO_E \to 0
\end{equation}
with $\cO_{\tX}(K+3E)$. Note that from (\ref{eq:ext2}) we obtain $h^1(\cO_E(-E))=1$, because $h^1(\cO_{2E})=3$ by the assumption. Set $A:=\cO_E(K+2E)$, $B:=\cO_E(K+3E)$ and $N:=\cO_{2E}(K+3E)$. Then, by (\ref{eq:ISO}), $A\cong \cO_E(-E)$ and $B\cong \cO_E$. Hence both exact sequences (\ref{eq:ext}) and (\ref{eq:ext2}) are extensions of $B$ by $A$. It is sufficient to show the following.
\begin{clm}\label{cl:M} For any nontrivial extension
\[
0\to A \to M \to B \to 0
\]
of $\cO_{\tX}$-modules $B$ by $A$, we necessarily have an isomorphism $M\cong \cO_{2E}$.
\end{clm}
Let $\Theta$ denote the bijection from the set of equivalence classes of extensions of $B$ by $A$ to $\Ext^1(B,A)$. This map is given by $\Theta( 0\to A \to M \to B \to 0)=\delta(\id_B)$, where $\delta\: \Hom(B,B)\to \Ext^1(B,A)$ is the connecting map of the long exact sequence obtained by applying the functor $\Hom(B,-)$. We denote the extension \eqref{eq:ext} by $\xi$. For any $a\in \C^*$, we define an extension $a\cdot\xi$ by
\[
a\cdot\xi\: \quad 0\to A \xrightarrow{\alpha} N \xrightarrow{a^{-1}\beta} B \to 0.
\]
Then $a\cdot\xi$ and $b\cdot\xi$ are equivalent if and only if $a=b$. We show that $a\Theta(\xi)=\Theta(a\cdot\xi)$; here the first multiplication is in the $\C$--vector space $\Ext^1(B,A)$.
Let us consider the injective resolution of $\xi$:
\[ \begin{array}{ccccccccc} & & 0 & & 0 & & 0 & & \\ & & \downarrow & & \downarrow & & \downarrow & & \\ 0 & \longrightarrow & A & \xrightarrow{\ \ \alpha\ \ } & N & \xrightarrow{\ \ \beta\ \ } & B & \longrightarrow & 0 \\ & & \downarrow & & \downarrow & & \downarrow & & \\ 0 & \longrightarrow & I_0 & \xrightarrow{\ \ \alpha_0\ \ } & I_0' & \xrightarrow{\ \ \beta_0\ \ } & I_0'' & \longrightarrow & 0 \\ & & \downarrow & & \downarrow & & \downarrow & & \\ 0 & \longrightarrow & I_1 & \xrightarrow{\ \ \alpha_1\ \ } & I_1' & \xrightarrow{\ \ \beta_1\ \ } & I_1'' & \longrightarrow & 0 \\ & & \downarrow & & \downarrow & & \downarrow & & \\ & & \vdots & & \vdots & & \vdots & & \end{array} \]
Then the injective resolution of $a\cdot\xi$ is obtained by replacing $\beta$ (resp. $\beta_i$) by $a^{-1}\beta$ (resp. $a^{-1}\beta_i$) in the diagram above. We denote by $\delta_{\beta}$ the connecting map associated with $\xi$. Applying the functor $\Hom(B,-)$ to the diagram corresponding to $a\cdot \xi$, we see that $\delta_{a^{-1}\beta}(\id_B)=a\delta_{\beta}(\id_B)$. Hence we obtain $\Theta(a\cdot\xi)=a\Theta(\xi)$. Since $\Ext^1(B,A)\cong H^1(\cO_E(-E))\cong \C$, the above $\C^*$ action on $\Ext^1(B,A)\setminus\{0\}$ is transitive; that is, the map $a\mapsto a\Theta(\xi)$ is a bijection of $\C^*$ onto $\Ext^1(B,A)\setminus\{0\}$, so $\C^*\Theta(\xi)=\Ext^1(B,A)\setminus\{0\}$. Hence the extensions (\ref{eq:ext}) and (\ref{eq:ext2}) differ only by a non--zero constant multiplication (as above), and $\cO_{2E}(K+3E)\cong \cO_{2E}$. This implies that the singularity is Gorenstein, a contradiction. In particular, we have proved \clmref{cl:M} and that $p_g\X=2$.
Next we compute the multiplicity and the embedding dimension. Since $p_g\X=2$, we have $h^1(\cO_E)=h^1(\cO_{2E})=2$. By \eqref{eq:chiE} and \eqref{eq:E2E}, we have $h^0(\cO_{2E})=1$ and $h^0(\cO_E(-E))=0$. By \eqref{eq:1-2}, we have $h^1(\cO_{\tX}(-2E))=h^1(\cO_{\tX}(-E))=p_g\X-2=0$. By \eqref{eq:chiE} and \eqref{eq:2E}, the map $H^0(\cO_{\tX}(-2E))\to H^0(\cO_E(-2E))$ is surjective and $h^0(\cO_E(-2E))=1$. Therefore $\cO_{\tX}(-2E)$ has a base point. Let $g\in H^0(\cO_{\tX}(-2E))$ be a general element and write $\di(g)=2E+H$. Consider the exact sequence
\[
0\to \cO_{\tX}(-E)\xrightarrow{\times g} \cO_{\tX}(-3E) \to \cO_H(-3E) \to 0.
\]
Since $H^0(\cO_{\tX}(-3E)) \to H^0(\cO_H(-3E))$ is surjective, $\cO_{\tX}(-3E)$ has no base point. Therefore there exists a function $h\in H^0(\cO_{\tX}(-3E))$ such that $(h)_E=3E$ and the image in $H^0(\cO_E(-3E))$ is nonzero at the base points of $\cO_{\tX}(-2E)$, namely, at $E\cap H$. We resolve the base points and compute the multiplicity. We have the following three cases. Note that $HE=2$.
\begin{enumerate}
\item Assume that $H\cap E$ consists of two distinct points $p_1$ and $p_2$; clearly these are smooth points of $E$. Let $\phi\: Y\to \tX$ be the blowing up at $H\cap E$ and $F_i=\phi^{-1}(p_i)$. If $Z$ denotes the maximal ideal cycle on $Y$, then $Z=\phi^*(2E)+F_1+F_2$ and $\cO_Y(-Z)$ has no base points. Therefore $\mult\X=-Z^2=6$. Clearly the strict transform $F_0$ of $E$ is the cohomological cycle and $\cO_{F_0}(-Z)\cong \cO_{F_0}$. Therefore $Z$ is a $p_g$-cycle by \cite[3.10]{OWYgood}. Hence $\emb\X=-Z^2+1=7$ by \cite[6.2]{OWYgood}.
\item Assume that $H$ intersects $E$ at a smooth point $p\in E$. We have local coordinates $x, y$ at $p$ such that $E=\{x=0\}$.
Then, at $p$, we may assume that $h=x^3$ and $g=x^2(y^2-xg_1)$ for some $g_1\in \C\{x,y\}$ with $g_1(0,0)\ne 0$; therefore, $\m_{X,o}\cO_{\tX}=(x^3,x^2y^2)\cO_{\tX}=(x,y^2)\cO_{\tX}(-2E)$. This base point can be resolved by two successive blowing ups; the graph of $\di(g)$ is the following, where $F_0$ denotes the strict transform of $E$.
\begin{picture}(100,60)(0,20) \put(100,60){\circle*{4}} \put(150,60){\circle*{4}} \put(200,60){\circle*{4}} \put(150,68){\makebox(0,0){$(6)$}} \put(85,60){\makebox(0,0){$F_0$}} \put(100,68){\makebox(0,0){$(2)$}} \put(160,30){\makebox(0,0){$(1)$}} \put(200,68){\makebox(0,0){$(3)$}} \put(100,52){\makebox(0,0){$-3$}} \put(160,52){\makebox(0,0){$-1$}} \put(200,52){\makebox(0,0){$-2$}} \put(150,60){\vector(0,-1){30}} \put(100,60){\line(1,0){100}} \end{picture} \\
By the same argument as in (1), we obtain that $\mult\X=6$ and $\emb\X=7$.
\item If $H$ intersects $E$ at a singular point of $E$, then $H$ is nonsingular and the strict transform of $H$ intersects one of the $(-3)$-curves on the minimal good resolution transversally. We may reset our situation as follows. Let $\tX$ be the minimal good resolution with exceptional set as in \sref{s:2}, and suppose that $\zma=(g)_E=E_4^*$ and $(h)_E=3\zmi$. By \lemref{l:E4bs}, $\cO_{\tX}(-\zma)$ has a base point, say $P$. Since ${\rm coeff}_{E_4}(E_4^*)=5$ and ${\rm coeff}_{E_4}(3\zmi)=6$, we see that $\m_{X,o}\cO_{\tX}=\m_P\cO_{\tX}(-\zma)$, and the base point is resolved by the blowing up at $P$. Then $\mult\X=-\zma^2+1=6$ and $\emb\X=7$ by the same argument as in (1).
\end{enumerate}
\section{The case $\zma\ne \zmi$, $2\zmi$}\label{s:nez2z}
We assume that $\tX$ is the minimal good resolution with exceptional set as in \sref{s:2} and that $\zma\ne \zmi$, $2\zmi$ on $\tX$. If the maximal ideal cycle on the minimal resolution were $E$, then the base point of $\cO(-E)$ would be a smooth point of $E$ and thus $\zma=\zmi$, contradicting our assumption. Hence $\coeff_{E_0}(\zma)=2$ by \proref{p:zle2e}. On the other hand, any anti-nef cycle on $\tX$ with $\coeff_{E_0}=2$ is one of the following three cycles:
\[
2\zmi=2E_0^*, \quad E_1^*, \quad E_4^*.
\]
Hence we have to analyse the new cases when $\zma$ equals either $E_1^*$ or $E_4^*$. Since the two cases are symmetric, in the sequel we assume that $\zma=E_4^*$. We start with the following lemma.
\begin{lem}\label{lem:kzmi} For any $\ell\geq 1$ and for any analytic structure supported by $\Gamma$:

(a) \ the line bundle $\cO_{\tX}(-(\ell+2)\zmi)$ has no fixed component.

(b) \ $h^1(\cO_{\tX}(-(\ell+2)\zmi))=0$.
\end{lem}
\begin{proof}
(a) There exists a computation sequence starting from $E^*_4+\ell\zmi+E_6$ and ending with $Z_K+\ell\zmi$ by adding (in this order) $E_1,\ E_2, \ E_6, E_0$, such that at the first three steps $Z_kE_{i(k)}=1$ and at the last step $Z_kE_{i(k)}\leq 1$. Hence $h^1(\cO_{\tX}(-E^*_4-\ell\zmi-E_6))\leq h^1(\cO_{\tX}(-Z_K-\ell\zmi))=0$. In particular, from the exact sequence $0\to \cO_{\tX}(-E^*_4-\ell\zmi-E_6)\to \cO_{\tX}(-E^*_4-\ell\zmi)\to \cO_{E_6}\to 0$,
$$\frac{H^0(\cO_{\tX}(-E^*_4-\ell\zmi))}{H^0(\cO_{\tX}(-E^*_4-\ell\zmi-E_6))}\cong \C.$$
Hence, there exists a function $f$ with ${\rm coeff}_{E_6}(f)=12+6\ell$, ${\rm coeff}_{E_0}(f)= 2+\ell$ and ${\rm coeff}_{E_5}(f)\geq 14+6\ell$. Symmetrically, there exists another function $f'$ with ${\rm coeff}_{E_6}(f')\geq 14+6\ell$, ${\rm coeff}_{E_0}(f')= 2+\ell$ and ${\rm coeff}_{E_5}(f')= 12+6\ell$. Hence the divisor of $f+f'$ is $(\ell+2)\zmi$.
(b) There is a Laufer computation sequence starting from $Z_K+(\ell-1)\zmi$ and ending with $(\ell+2)\zmi$ such that at every step $Z_kE_{i(k)}=1$. Hence $h^1(\cO_{\tX}(-(\ell+2)\zmi))=h^1(\cO_{\tX}(-Z_K-(\ell-1)\zmi))=0$.
\end{proof}
\begin{lem}\label{l:E4bs} If $\zma=E_4^*$, then $p_g\X=2$ (hence $(X,o)$ is not Gorenstein), and $\cO_{\tX}(-E_4^*)$ has a base point.
\end{lem}
\begin{proof}
Let $C=E_4^*-2\zmi=E_3+E_4+2E_5$. In the exact sequence
\[
0\to \cO_{\tX}(-E_4^*)\to \cO_{\tX}(-2\zmi)\to \cO_{C}\to 0,
\]
the assumption implies $H^0(\cO_{\tX}(-E_4^*))=H^0(\cO_{\tX}(-2\zmi))$, hence
\begin{equation}\label{eq:66}
h^1(\cO_{\tX}(-E_4^*))=1+h^1(\cO_{\tX}(-2\zmi)).
\end{equation}
Let $D=Z_K-E_4^*=E_0+E_1+E_2+2E_6$. Then we have $h^1(\cO_{D}(-E_4^*))=1$, as in the proof of \lemref{l:e3*}. From the exact sequence
\[
0\to \cO_{\tX}(-Z_K) \to \cO_{\tX}(-E_4^*)\to \cO_{D}(-E_4^*)\to 0,
\]
we obtain $h^1(\cO_{\tX}(-E_4^*))=1$. By \eqref{eq:66}, we have $h^1(\cO_{\tX}(-2\zmi))=0$. It follows from \eqref{eq:2-1} and \eqref{eq:pg-2} that $p_g\X=2$. Furthermore, $\X$ is not Gorenstein by \thmref{t:pg3}. There exists a computation sequence $\{Z_k\}$ starting from $E^*_4+E_4$ and ending with $3\zmi$ such that $Z_kE_{i(k)}=2$ at two steps and otherwise $=1$. Since $h^1(\cO_{\tX}(-3\zmi))=0$ (cf. Lemma \ref{lem:kzmi}(b)), we obtain $h^1(\cO_{\tX}(-E^*_4-E_4))=2$. In particular, from the exact sequence
\[
0\to \cO_{\tX}(-E^*_4-E_4)\to \cO_{\tX}(-E^*_4)\to \cO_{E_4}(-E^*_4)\to 0,
\]
the image of the map $H^0(\cO_{\tX}(-E^*_4))\to H^0(\cO_{E_4}(-E^*_4))$ is 1--dimensional. Hence $\cO_{\tX}(-E^*_4)$ has a base point.
\end{proof}
Let $f$ be a generic element of $\m_{X,o}$. Its divisor on $\tX$ has the form $\zma+H$, where $H$ is a cut of $E_4$, intersecting it transversally in a unique point $P$. Then, in local coordinates around $P$ (with $\{x=0\}=E$), $f$ has the form $x^5y$. By Lemma \ref{lem:kzmi}(a), there exists a function $g$ with $(g)_E=3\zmi$, hence with local equation $x^6$ at $P$. Therefore, $\m_{X,o}\cO_{\tX}=\m_P\cO_{\tX}(-\zma)$ and $\mult\X=-\zma^2+1=6$. Next, $\emb\X=7$ by the same argument as in (1) of the previous section.
\begin{rem}
Assume that $(X,o)$ is a singularity supported by $\Gamma$ with $p_g=2$. Then $h^1(\cO_{\tX}(-2\zmi))=h^1(\cO_{\tX}(-3\zmi))=0$. Hence, from the exact sequence $0\to \cO_{\tX}(-3\zmi)\to \cO_{\tX}(-2\zmi)\to \cO_{\zmi}(-2\zmi)\to 0$, we obtain that
$$\frac{H^0(\cO_{\tX}(-2\zmi))}{H^0(\cO_{\tX}(-3\zmi))}\cong \C.$$
Since the divisors of analytic functions are anti-nef cycles, and the only anti-nef cycles $C$ with $C\geq 2\zmi$ and $C\not\geq 3\zmi$ are $2\zmi$, $E_1^*$ and $E_4^*$, {\it exactly one} of these three cycles appears as the divisor of an analytic function, depending on the analytic type. That divisor equals $\zma$.
\end{rem}
{ "redpajama_set_name": "RedPajamaArXiv" }
\begin{abstract}
In this short note, we discuss the complexity of the search space for the problem of finding a CSG expression (or CSG tree) corresponding to an input point-cloud and a list of fitted solid primitives.
\end{abstract}
\section{Introduction}
We are interested in the problem of reconstructing a CSG expression from an unstructured point-cloud. Following \cite{fayolle2016evolutionary}, the input point-cloud is first segmented, and solid geometric primitives are fitted to each segment as, for example, in \cite{friedrich2020hybrid}. Given the input point-cloud and a set of solid primitives, we need to form a CSG tree expression, involving primitives from the set of fitted primitives, that corresponds to the input point-cloud. Generating a CSG expression from various types of input has been the topic of multiple works, such as, for example, generating a CSG expression from a B-Rep model \cite{shapiro1991construction,shapiro1993separation,buchele2004three}, a triangle mesh \cite{du_tog2018} or a point-cloud \cite{fayolle2016evolutionary, wu2018constructing,friedrich2018,friedrich_gecco2019}. Recently, the problem of generating CSG expressions from polygons or point-clouds has also attracted interest from the programming language community \cite{nandi2018,nandi2020} and from the machine learning community, with approaches relying on deep artificial neural networks \cite{sharma2018csgnet,tian2019learning,ellis2019write,sharma2020neural,walke2020learning,kania2020ucsg}. In this short note, we are interested in analyzing the asymptotic time complexity of the CSG tree search given a list of fitted primitives and a sampled point-cloud. In the following, we denote by $\Phi(P)$ a CSG tree for a primitive set $P=\{p_1, p_2, \ldots, p_{|P|}\}$. A CSG tree is a binary tree: its inner nodes are (Boolean) operations taken from the set of operations $O$, and its leaves are geometric primitives taken from the set of fitted primitives $P$. In the rest of the text, we use the terms CSG expression and CSG tree interchangeably.
\section{Enumerating CSG Trees}
To keep things simple, we consider in this section only the binary operations $O=\{\cup^{*},\cap^{*},-^{*}\}$ and omit the unary complement operation $\setminus^{*}$. In practice, it is always possible to simulate the complement operation ($\setminus^{*}$) with the difference operation ($-^{*}$) by adding to the list of primitives a primitive corresponding to the universe set. We start by considering the case of binary trees with $n$ internal nodes and $n+1$ leaves. The number of such binary trees is $C(n)$, the $n$-th Catalan number, given by
\begin{equation}
C(n) = \frac{1}{n+1} \binom{2n}{n}.
\end{equation}
See, for example, \cite{Knuthfasc4}. Figure \ref{fig:catalan} shows the $C(0), C(1), C(2)$ and $C(3)$ trees corresponding to $n=0, 1, 2$ and $3$ internal nodes.
\input{bin_trees.tex}
The $n+1$ leaf labels are selected from $P$. Each primitive in $P$ can be selected more than once. Since there are $n+1$ leaves, there are $|P|^{n+1}$ possible leaf label configurations.\footnote{$|S|$ is the cardinality of the set $S$.}
\\
The labels for the $n$ inner nodes (operations) are selected from $O$. Each operation in $O$ can be selected more than once. So, there are $|O|^{n}$ possible operation node label configurations.
\\
Thus, in total there are
\begin{equation}\label{eq:tree_enum}
|P|^{n+1} \cdot |O|^{n} \cdot C(n)
\end{equation}
possible CSG trees with $2n+1$ nodes ($n$ internal nodes and $n+1$ leaf nodes), corresponding to the set of primitives $P$ and the set of Boolean operations $O$. For a given number of inner nodes $n$, the number of CSG trees is given by (\ref{eq:tree_enum}). However, in general, we do not know which value to use for $n$. The only available information is the number of fitted primitives in the set $P$.
\\
Instead, we use heuristics to find lower and upper bounds for $n$. Let $n_{\min}$ and $n_{\max}$ be the minimum and maximum numbers of inner nodes. In order to get all possible trees for a given set of primitives $P$, we need to count all possible trees for every possible number of inner nodes between $n_{\min}$ and $n_{\max}$:
\begin{equation}\label{eq:sum_tree}
\sum_{i=n_{\min}}^{n_{\max}} |P|^{i+1} \cdot |O|^{i} \cdot C(i).
\end{equation}
For a given primitive set $P$, we use the following heuristics to estimate $n_{\min}$ and $n_{\max}$. We have $n_{\min} = |P|-1$, since the tree should contain all primitives at least once (we do not consider the case where $P$ contains redundant or spurious primitives). Thus there should be at least $|P|$ leaves, resulting in at least $|P|-1$ inner nodes.
\\
Strictly speaking, it is not possible to derive a value for $n_{\max}$, since the size of a CSG tree is unbounded. Indeed, it is always possible to add inner nodes and leaves (redundancies) without modifying the geometric set corresponding to the CSG expression (for example, by taking the union of the expression with itself).
\\
Instead, we need to look for possible empirical values for $n_{\max}$. In \cite{friedrich2018}, the estimate $h_{\max}\approx \sqrt{\pi/2 \cdot |P|\cdot(|P|-1)}$ for the maximum tree height is used ($n_{\max}$ can then be estimated from $h_{\max}$). Experiments revealed that it often produces values that are too high. A tighter choice of $n_{\max}$ depends on the size of the primitive set $P$, the spatial configuration of the primitives as expressed in the intersection graph (see Fig.\,\ref{fig:example1} for an example) and the overall complexity of the model surface that should be represented by the CSG tree. The latter is difficult to quantify in practice. In the following, we look at possible techniques for reducing the size of the search space and thus simplifying the CSG tree extraction problem (a small Python script evaluating these counts is given below).
\section{Fundamental Products and Disjunctive Normal Form}
\label{sec:dnf}
\subsection{The Full Disjunctive Normal Form}
Similar to the full Disjunctive Normal Form (DNF) for Boolean functions, one can restrict the tree topology to the set of all intersections of primitives (or their complements), the so-called fundamental products \cite{shapiro1991construction}, combined via the set union operation:
\begin{equation}\label{eq:dnf}
\Phi(P) = \bigcup_{k=1}^{2^{|P|}-1} \epsilon_k \left( g_1 \cap^* g_2 \dots \cap^* g_{|P|} \right), \qquad g_i \in \{p_i, \setminus^{*}p_i\},
\end{equation}
where $\epsilon_k$ is equal to one if the corresponding $k$-th fundamental product is included in the CSG expression, and zero otherwise. The result is commonly referred to as a two-level CSG representation \cite{shapiro1991construction,Shapiro1991}.
\\
This formulation reduces the search space complexity to $\mathcal{O}(2^{|P|})$, since there are $2^{|P|}$ fundamental products and we only need to check, for each of them, whether it is inside the target solid $S$.
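As a rough numerical illustration of these counts, the following Python sketch evaluates the naive enumeration bound (\ref{eq:sum_tree}) together with the DNF bound $2^{|P|}$ (the function names are ours, and $n_{\max}$ is passed in explicitly instead of being estimated):
\begin{verbatim}
from math import comb

def catalan(n):
    # C(n) = binom(2n, n) / (n + 1); the division is exact
    return comb(2 * n, n) // (n + 1)

def trees_with_n_inner_nodes(p, o, n):
    # |P|^(n+1) * |O|^n * C(n) labeled CSG trees
    return p ** (n + 1) * o ** n * catalan(n)

def total_trees(p, o, n_max):
    # sum over all numbers of inner nodes, with n_min = |P| - 1
    return sum(trees_with_n_inner_nodes(p, o, i)
               for i in range(p - 1, n_max + 1))

# Example: |P| = 6 primitives and |O| = 3 Boolean operations
print(total_trees(6, 3, n_max=10))  # naive enumeration bound
print(2 ** 6)                       # number of fundamental products
\end{verbatim}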
A downside of this approach is the excessive size of the resulting CSG expression, since each clause involves the intersection of all the primitives (or their complements).
\\
When working with Boolean functions, the analogue of a fundamental product that is fully inside the target solid $S$ is called an implicant. An implicant that cannot be further factored by removing literals is called a prime implicant \cite{Knuthfasc1}. Computing the prime implicants for the CSG expression (\ref{eq:dnf}) can result in a more optimized (more compact) CSG expression.
\subsection{Non-empty Fundamental Products}
The complexity of the search space can be further reduced by noticing that we do not need to consider all the fundamental products, but only those corresponding to non-empty point-sets. See Fig.\,\ref{fig:example0} for an example with a set of primitives and the corresponding non-empty fundamental products.
\begin{figure}[!htbp] \centering \begin{tabular}[c]{cc} \begin{subfigure}[c]{0.35\linewidth} \includegraphics[width=\textwidth]{figures/example_0.pdf} \caption{ } \label{fig:example0} \end{subfigure}& \begin{subfigure}[c]{0.5\linewidth} \includegraphics[width=\textwidth]{figures/example_1b.pdf} \caption{ } \label{fig:example1} \end{subfigure} \end{tabular} \caption{(a) Example with the primitives $P=\{A,B,C,D,E,F\}$ and $S$, the solid to represent, in grey. The numbers $1$-$15$ identify the non-empty fundamental products. (b) The corresponding intersection graph $G$. The example is adapted from \cite{feld2018optimizing}.} \label{fig:example} \end{figure}
The non-empty fundamental products can be determined from the intersection graph $G=(P,E)$ of the primitives in $P$. The set of vertices in $G$ corresponds to the set of primitives $P$. There is an edge $(p_i, p_j)$, for $i,j \in \{1, \ldots, |P|\}$, between two vertices $p_i$ and $p_j$ if the corresponding geometric primitives intersect. Figure \ref{fig:example1} shows an example of the intersection graph corresponding to the set of primitives shown in Fig.\,\ref{fig:example0}. Computing the intersection graph $G$ has a complexity of $\mathcal{O}(|P|^2)$ in the worst case, but this can be improved in practice with the use of spatial acceleration structures \cite{zomorodian12fast}.
When only the non-empty fundamental products are considered, the complexity of the search space becomes proportional to the number of non-empty fundamental products $n_{f}$. If the geometric primitives in $P$ are all spatially disjoint, then $E$ is empty and $n_f$ reaches its minimum value, $n_f=|P|$. If $G$ is fully connected, then $n_{f}$ reaches its maximum value, $n_f=2^{|P|}-1$. Note that, in general, it is not possible to decide whether a fundamental product is empty by considering only the intersection graph, since this depends on the particular shape of the primitives involved.
This approach still results in possibly large CSG expressions. A better method to further reduce the search space while keeping the tree size limited is described in the next section.
\section{Dominant Halfspaces and Solid Decomposition}
\label{sec:decomp}
Dominant halfspaces $\{d_1,...,d_n\} \subseteq P$ are primitives that are located either fully inside or fully outside of the target solid $S$. For example, primitives $A$ and $F$ in Fig.\,\ref{fig:example0} are dominant primitives of the solid in grey. A solid can be decomposed using dominant halfspaces as \cite{shapiro1991construction}
\begin{equation}
\label{eq:dec}
S = ((\dots(S_{rem} \circ d_1) \circ d_2) \circ \dots) \circ d_n,
\end{equation}
where $S_{rem}$ is the remaining solid after decomposition and $\circ$ is either the difference operator, if the following primitive in the expression dominates $\setminus^{*}S$, or the union operator, if it dominates $S$. The remaining solid $S_{rem}$ can be described as an expression containing all the remaining non-dominant primitives.
The time complexity of the decomposition algorithm is $\mathcal{O}(|P|^2)$. In the worst case, each iteration of the decomposition results in a single primitive being removed from $S$. Thus, the first iteration visits each primitive once ($|P|$ visits) to check if it is dominant. After removing a single dominant primitive, the second iteration needs $|P|-1$ visits, and so on, resulting in $\sum_{k=1}^{|P|}k = \frac{|P|^2 + |P|}{2}$ necessary visits in total (a schematic implementation of this loop is sketched at the end of this section).
The decomposition can be applied recursively, making it a powerful tool for search space reduction. Furthermore, the expression is optimal, since each dominant halfspace is used exactly once in the output expression \cite{shapiro1991construction}. Early factoring of dominant halfspaces is used in the following approaches for BRep to CSG conversion \cite{Buchele1999,Buchele2003,buchele2004three}.
If $S_{rem}$ is not empty after the decomposition, one needs to compute a CSG expression for the remaining solid from the remaining non-dominant primitives. For example, one can use the approach described in Section~\ref{sec:dnf} and build the intersection graph of the non-dominant primitives. For sufficiently large models, this graph is not connected, and a connected component analysis results in a set of sub-intersection graphs. The corresponding expressions for each sub-graph can be extracted independently and the results are then merged. This can be used to further reduce the search space (see, for example, \cite{friedrich_gecco2019,friedrich2020csg-optim}).
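To make the decomposition loop concrete, the following Python sketch implements it; \texttt{is\_dominant} and \texttt{remove} are placeholder predicates whose actual implementation depends on how the target solid and the primitives are represented (e.g.\ via sampled point-sets):
\begin{verbatim}
def decompose(solid, primitives, is_dominant, remove):
    # is_dominant(d, solid): '+' if d lies fully inside solid,
    # '-' if fully outside, None otherwise.
    # remove(solid, d, sign): remaining solid after factoring d out.
    ops, remaining = [], list(primitives)
    progress = True
    while progress:              # at most |P| passes: O(|P|^2) visits
        progress = False
        for d in list(remaining):
            sign = is_dominant(d, solid)
            if sign is not None:
                ops.append((sign, d))  # union for '+', diff. for '-'
                solid = remove(solid, d, sign)
                remaining.remove(d)
                progress = True
    # 'solid' is now S_rem; applying the recorded operations to it,
    # innermost first, rebuilds S as in the decomposition formula.
    return solid, remaining, ops
\end{verbatim}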
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sect1}
B. Mazur and J. Tate, in \emph{Refined Conjectures of the Birch and Swinnerton-Dyer Type}, postulated a series of conjectures of BSD type in terms of finite layers. The goal was to find ``functions with adelic type domains of definition and ranges of values'' for which the $p$-adic $L$-functions were only a component, as expressed by Yuri Manin \cite{man2}. The Mazur-Tate conjecture (MT conjecture) is similar in spirit to the Birch and Swinnerton-Dyer conjecture (BSD conjecture). The conjecture has two assertions:
\begin{enumerate}
\item[1.] One relates the rank of the elliptic curve to the order of vanishing of modular elements.
\item[2.] The other gives an explicit formula relating arithmetic invariants of the curve to the modular element modulo the $r$-th power of an augmentation ideal. In this formula, we have:
\begin{enumerate}
\item On the arithmetic side: invariants like the Tamagawa numbers, the order of the torsion group and the order of the Tate-Shafarevich group, appearing as exponents of a bi-multiplicative function called the {\it corrected regulator}.
\item On the analytic side: the modular element, defined in terms of modular symbols, which is an analogue of a Stickelberger element.
\end{enumerate}
\end{enumerate}
In the present work, we show computational evidence related only to the second assertion of the conjecture. Our goal is to expand the evidence in favor of Conjecture 4 given by B. Mazur and J. Tate in \cite{bmjt}. In particular, they tested the conjecture for the elliptic curves $37$A and $43$A of rank 1 over sets $S=\{q\}$, where $q$ is a single prime of non-split multiplicative reduction. They gave a very specific formula for those examples, which have prime conductor and trivial Tate-Shafarevich group. We modify their equation so that any elliptic curve of rank 1 can be tested with no restrictions. The change consists in introducing adequate exponents on each side of the equation; the exponents depend on BSD-type invariants such as those mentioned above, and we also introduce a value $\mu$, which is explained below. Hence, our contribution is to present a very concrete and easy-to-test conjecture and some computational evidence for it.
\section{Mazur-Tate Conjecture (General Setting)}\label{section-mt-conjecture}
Assume $E$ is an elliptic curve over $\field{Q}$ with conductor $N$. Consider a N\'eron differential $\omega$ for $E$; such an $\omega$ is unique up to sign. Let $\Lambda_E$ be the N\'eron lattice (i.e.\ the lattice generated by the ``periods'' $\int_\gamma\omega\in\field{C}$, where $\gamma$ runs through loops in $E(\field{C})$). There is a unique pair of positive real numbers $\Omega_E^+$ and $\Omega_E^-$ such that one of the two following conditions holds:
\begin{enumerate}
\item $\Lambda_E=\Omega_E^+\field{Z}+\Omega_E^-i\field{Z}$
\item $\Lambda_E\subset\Omega_E^+\field{Z}+\Omega_E^-i\field{Z}$ is the sub-lattice generated by the complex numbers $a\Omega_E^+ +b\Omega_E^-i$ such that $a-b\equiv 0$ (mod $2$).
\end{enumerate}
In the first case, we say that $\Lambda_E$ is rectangular; otherwise $\Lambda_E$ is non-rectangular. Let $f$ be the modular form associated to $E$, and let $a/b$ be a rational number. We define the modular elements $[a/b]^+_E$ and $[a/b]^-_E$ by:
\begin{equation}
\label{eq1}
2\pi\int_{0}^{\infty}f(a/b+it)dt=\Omega_E^+[a/b]^+_E+\Omega_E^-[a/b]^-_E i.
\end{equation}
We will write $[a/b]$ instead of $[a/b]^+_E$, since we will be concerned only with the plus symbols on $E$.
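In practice, these symbols can be evaluated with standard software. As a minimal illustration (a sketch only, assuming Sage's default normalization of the plus modular symbol; see the note on normalization at the end of this article), the symbols $[a/M]$ for the rank-one curve 37A and $M=5$ can be computed as follows:
\begin{verbatim}
sage: E = EllipticCurve('37a1')   # rank 1, conductor 37
sage: m = E.modular_symbol()      # the plus modular symbol [ . ]^+
sage: M = 5                       # a prime of good reduction
sage: [m(a/M) for a in range(1, M) if gcd(a, M) == 1]
\end{verbatim}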
The number $[a/b]$ is rational, and if $b$ is prime to the conductor of the curve, the value is an integer \cite{man1}. Let $S$ be a finite set of primes, and let $S'$ be the subset of $S$ of primes of multiplicative reduction for a fixed elliptic curve $E$. Set
\begin{equation}
M=\prod_{p\in S-S'}p\prod_{p\in S'}p^{e_p}
\end{equation}
with integers $e_p\geq 0$, and set
\begin{equation}
G_M=(\field{Z}/M\field{Z})^*/(\pm 1).
\end{equation}
If $a$ is an integer coprime to $M$, let $\sigma_a$ denote its associated element in $G_M$. Let $R$ be a subring of $\field{Q}$ containing $1/2$ and the inverse of the order of the torsion subgroup of $E(\field{Q})$. Define the modular element as
\begin{equation}
\Theta_{E,M}:=\frac{1}{2}\sum_{a \bmod M}\left[\frac{a}{M}\right]\cdot\sigma_a\in R[G_M].
\end{equation}
Let $\epsilon:R[G_M]\rightarrow R$ be the augmentation map, defined by
\begin{equation}
\sum r_a \sigma_a \mapsto \sum r_a,
\end{equation}
and let $I=\ker(\epsilon)$ be its augmentation ideal. Let $X$ be the N\'eron model of $E$, let $X(\field{F}_p)$ be the fiber of the N\'eron model of $E$ at $p$, let $X^0(\field{F}_p)=E_{ns}(\field{F}_p)$ be the non-singular points of $E$ modulo $p$, and let $N_p=X(\field{F}_p)/X^0(\field{F}_p)$ be the group of connected components in the fiber. Define $\phi_{S}$ as the order of the cokernel of the natural projection
\begin{equation}
\pi_{S} : E\rightarrow\prod_{p\notin S'}N_p,
\end{equation}
where $p$ ranges through the set of all primes not in $S'$. Conjecture $4$ in \cite{bmjt} is the following:
\begin{conjecture} \label{mtconj}(``Birch-Swinnerton-Dyer type'' conjecture.)
Let $r=\mathrm{rank}(E(\field{Q}))+\#(S')$. The modular element $\Theta_{E,M}\in R[G_M]$ lies in the $r$-th power of the augmentation ideal, $I^r\subset R[G_M]$, and if $\tilde{\Theta}_{E,M}$ denotes its image in $I^r/I^{r+1}$, then:
\begin{equation}
\tilde{\Theta}_{E,M}=\#(\mbox{\cyrr Sh})\cdot\phi_{S}\cdot\nu_r(\mathrm{Disc}_S(E))\in I^r/I^{r+1}
\end{equation}
\end{conjecture}
In the following pages, we explain the term $\nu_r(\mathrm{Disc}_S(E))$.
\subsection{Definition of $\mathrm{Disc}_S(E)$.}
\subsubsection{Local construction of the regulator}\label{local_regulator}
Using the theory of biextensions and splittings, Mazur and Tate introduce local canonical heights and corrected discriminants. We give a brief summary of their work in order to introduce regulators. For more details, see \cite{bmjt0} and \cite{bmjt}.
\begin{definition}
Let $A$, $B$ and $C$ be abelian groups. A biextension of $(A,B)$ by $C$ is an object $\califas{E}$ such that to each triple $(a,b,c)\in A\times B\times C$ we can assign a unique element $[a,b,c]\in \califas{E}$ such that $a\califas{E}:=[a,B,C]\subseteq \califas{E}$ has a group structure isomorphic to $B\times C$; analogously, $b\califas{E}:=[A,b,C]$ has a group structure isomorphic to $A\times C$. Also, $C$ acts freely on $\califas{E}$.
\end{definition}
Now, let $\tilde{A}$, $\tilde{B}$ and $\tilde{C}$ be other abelian groups. If $\alpha:\tilde{A}\rightarrow A$, $\beta:\tilde{B}\rightarrow B$ are injective homomorphisms, and $\rho: C\rightarrow \tilde{C}$ is a surjective homomorphism, we can obtain a biextension $\tilde{\califas{E}}$ given by the pullback of $\califas{E}$ by $\alpha$ and $\beta$, and the pushout of $\califas{E}$ by $\rho$.
\begin{definition}
Let $\califas{E}$ be a biextension of $(A,B)$ by $C$, and $\rho:C\rightarrow \tilde{C}$ a group homomorphism.
A $\rho$-splitting of $\califas{E}$ is a map
$$\psi:\califas{E}\rightarrow \tilde{C}$$
such that
\begin{enumerate}
\item $\psi(\omega\cdot x)=\rho(\omega)\cdot \psi(x)$ for $x\in\califas{E}$ and $\omega\in C$.
\item $\psi|_{a\califas{E}}$ and $\psi|_{b\califas{E}}$ are group homomorphisms.
\end{enumerate}
\end{definition}
If $A$ and $B$ are dual varieties over a field $K$, we know that there exists a biextension $\califas{E}$ of $(A,B)$ by $K^*$ that expresses the duality \cite{gr7}. Denote this biextension by $\califas{E}(K)$.
\begin{definition}
A modification $(\tilde{\califas{E}},\alpha,\beta,\rho)$ of $\califas{E}(K)$ is a biextension $\tilde{\califas{E}}$ obtained from injective homomorphisms $\alpha:\tilde{A}\rightarrow A(K)$, $\beta:\tilde{B}\rightarrow B(K)$, and a surjective homomorphism $\rho: K^*\rightarrow \tilde{C}$.
\end{definition}
\begin{definition}
A trivialization $(\alpha,\beta,\rho,\psi)$ of $\califas{E}(K)$ is a modification $(\tilde{\califas{E}},\alpha,\beta,\rho)$ of $\califas{E}(K)$ together with a $\rho$-splitting $\psi$ of $\tilde{\califas{E}}$.
\end{definition}
Notice that if $\alpha:\tilde{A}\rightarrow A(K)$, $\beta:\tilde{B}\rightarrow B(K)$ and $\rho:K^*\rightarrow C$ are group homomorphisms as above, and $(\tilde{\califas{E}},\alpha,\beta,\rho)$ is the associated modification, we have a bi-multiplicative function
$$\langle\mbox{ },\mbox{ }\rangle_{\tilde{\califas{E}}}:\tilde{A}\times\tilde{B}\rightarrow \tilde{C}$$
defined by
$$\langle\tilde{a},\tilde{b}\rangle_{\tilde{\califas{E}}}:=\rho\left(\langle\alpha(\tilde{a}),\beta(\tilde{b})\rangle_\califas{E}\right) \mbox{ ($\forall\tilde{a}\in\tilde{A}$ and $\forall\tilde{b}\in\tilde{B}$) },$$
where $\langle\mbox{ },\mbox{ }\rangle_\califas{E}:A\times B\rightarrow K^*$ is the bilinear pairing that expresses the duality. If we define $\tilde{\psi}: \tilde{\califas{E}}\rightarrow \tilde{C}$ by
$$\tilde{\psi}\left(\left[\tilde{a},\tilde{b},\tilde{c}\right]\right):=\tilde{c}\cdot\langle\tilde{a},\tilde{b}\rangle_{\tilde{\califas{E}}},$$
then $\tilde{\psi}$ is a $\rho$-splitting of $\tilde{\califas{E}}$, and therefore $(\alpha,\beta,\rho,\tilde{\psi})$ is a trivialization of $\califas{E}(K)$. Working over local fields, Mazur and Tate \cite{bmjt} described what they called {\it ``the canonical trivializations''}. From now on, we will assume that our local fields are the fields $\field{Q}_p$ for $p$ a prime number, that our global field is $K=\field{Q}$, that $A=E$ is an elliptic curve and $B=E^\vee$ is its dual variety. Also, for each prime $p$, we will consider a system of group homomorphisms
$$\alpha_p:A_p\rightarrow E(\field{Q}_p),$$
$$\beta_p:B_p\rightarrow E(\field{Q}_p),$$
$$\rho_p:\field{Q}_p^*\rightarrow C_p,$$
where $\alpha_p$ and $\beta_p$ are injective and $\rho_p$ is surjective. Hence, we will have modifications $(\alpha_p,\beta_p,\rho_p)$ with their corresponding $\rho_p$-splittings. For the purpose of this article, we are interested in the following three trivializations:
\begin{enumerate}
\item[a)] {\it N\'eron unramified trivialization.} Let $A_p=E(\field{Q}_p)$, $B_p=E^0(\field{Q}_p)$ and $C_p=\field{Q}_p^*/\field{Z}_p^*\simeq \field{Z}$. Here, $E^0(\field{Q}_p)$ denotes the group of points in $E(\field{Z}_p)$ whose reduction modulo $p$ is in the component of zero in the fiber $E(\field{F}_p)$. The homomorphisms $\alpha_p$ and $\beta_p$ are the natural inclusions; $\rho_p:\field{Q}_p^*\rightarrow \field{Q}_p^*/\field{Z}_p^*$ is the natural projection.
Now, $\psi_p:\califas{E}_p\rightarrow \field{Q}_p^*/\field{Z}_p^*$ is the unique canonical splitting such that $\psi_p(\califas{E}_p(\field{Z}_p))=0$.
\item[b)] {\it Tamely ramified trivialization.} Let $A_p=E(\field{Q}_p)$, $B_p=E^1(\field{Q}_p)$, $C_p=\field{Q}_p^*/p\field{Z}_p^*\simeq \field{F}_p^*$. The maps $\alpha_p$ and $\beta_p$ are again the inclusions, and $\rho_p$ is the projection. Here, $E^1(\field{Q}_p)$ is the group of points in $E(\field{Q}_p)$ whose reduction modulo $p$ is the zero element of the fiber $E(\field{F}_p)$.
\item[c)] {\it Split multiplicative trivialization.} If $p$ is a prime of split multiplicative reduction, then $E(\field{Q}_p)$ is isomorphic to the Tate curve $E_{q_p}=\field{Q}_p^*/q_p^\field{Z}$, where $q_p$ is the multiplicative local period. Hence, in this trivialization, we take $A_p=B_p=\field{Q}_p^*$ and $C_p=\field{Q}_p^*$. Moreover, $\beta_p=\alpha_p:\field{Q}_p^*\rightarrow\field{Q}_p^*/q_p^\field{Z}$ is the natural parametrization of $E_{q_p}$, and $\rho_p:\field{Q}_p^*\rightarrow C_p=\field{Q}_p^*$ is the identity.
\end{enumerate}
\subsubsection{Global Construction of the Regulator}
For the finite set $S$ (see Section \ref{section-mt-conjecture}), we construct extended Mordell groups $A_S$, $B_S$ and $C_S$ as follows. According to subsection \ref{local_regulator}, for each subset of primes $S\subseteq\wp$, there is a system of homomorphisms
$$\alpha_p:A_p\rightarrow E(\field{Q}_p),$$
$$\beta_p:B_p\rightarrow E(\field{Q}_p),$$
$$\rho_p:\field{Q}_p^*\rightarrow C_p,$$
with their corresponding trivializations
$$\psi_p:\tilde{\califas{E}}_p\rightarrow C_p.$$
The trivialization $\psi_p$ is determined by the rule:
\begin{enumerate}
\item[a)] $\psi_p$ is the {\it N\'eron unramified trivialization} if $p\notin S$.
\item[b)] $\psi_p$ is the {\it tamely ramified trivialization} if $p\in S-S'$.
\item[c)] $\psi_p$ is the {\it split multiplicative trivialization} if $p\in S'$.
\end{enumerate}
We define $A_S$ to be the set of pairs $(P,(a_p))$ such that $P\in E(\field{Q})$, $(a_p)\in\prod_{p\in\wp}A_p$ and $\alpha_p(a_p)=i_p(P)$ for every prime $p$, where $i_p:E(\field{Q})\rightarrow E(\field{Q}_p)$ is the canonical inclusion. We define $B_S$ similarly. Now, from the $3$ possibilities for the local trivializations, we can write $C_p=\field{Q}_p^*/U_p$, where $U_p$ is either $\field{Z}_p^*$, $p\field{Z}_p^*$ or $\{1\}$. Hence, we have a morphism
$$\rho:=(\rho_p):\coprod_{p\in\wp}\field{Q}_p^*\rightarrow\bigoplus_{p\in\wp}(\field{Q}^*_p/U_p).$$
Now, modding out by $\field{Q}^*$ using the natural inclusions $\field{Q}^*\hookrightarrow\field{Q}_p^*$, define:
\begin{equation}
C_S:=\coprod_{p\in\wp}\field{Q}_p^*/\field{Q}^*(\prod_{p\in\wp}U_p).
\end{equation}
Set
$$\phi:\bigoplus_{p\in\wp} C_p\rightarrow C_{S}$$
to be the natural map given by coordinates. For $a=(P,(a_p))\in A_S$ and $b=(Q,(b_p))\in B_S$, define the bimultiplicative pairing by
\begin{equation}
\langle a,b\rangle_S:=\phi(\prod_{p}\psi_p(x_p))=\prod_{p}\phi\circ\psi_p(x_p),
\end{equation}
where $x_p=[a_p,b_p,k]\in\tilde{\califas{E}}_p$. Notice that $A_p= E(\field{Q}_p)$ and $B_p= E^0(\field{Q}_p)$ for almost all $p$, and since $P\in A(\field{Z}_p)$ and $Q\in B(\field{Z}_p)$ for almost all $p$, we have that $\psi_p(x_p)=1$ for almost all $p$. Hence, the global bi-multiplicative function is computed as a finite product. In fact, in our example, we have
\begin{equation}\label{C_S_1}
C_S:=\left(\prod_{p\in S-S'}\field{F}_p^*\times \prod_{p\in S'}\field{Z}_p^*\right)/(\pm 1).
\end{equation}
Now, $A_S$ and $B_S$ are finitely generated groups of the same rank
$$r=\mathrm{rank}(A(K))+\#(S')\cdot \dim(A)$$
(see \cite{bmjt}). Hence, if $\{P_1,P_2,\ldots,P_r\}$ generates the free part of $A_S$ and $\{Q_1,Q_2,\ldots,Q_r\}$ generates the free part of $B_S$, set
$$\mathrm{disc}_S=\det_{1\leq i, j\leq r} \langle P_i, Q_j \rangle.$$
The value $\mathrm{disc}_S$ is well defined up to sign, and we can choose an adequate orientation for our purposes. Now, for our computations, it is useful to work over a subring $R\subset\field{Q}$ in which the orders of the torsion subgroups of $A_S$ and $B_S$ are invertible. Hence, we will consider the element
$$d_S:=1\otimes \mathrm{disc}_S \in R\otimes \mathrm{Sym}_r(C_S).$$
This discriminant does not behave well as a regulator; see the heuristic discussion about it in \cite{bmjt}. Instead, the {\it corrected discriminant} is defined as a sum of discriminants $d_T$ over subsets $T\subset S$ containing $S'$. For any subset $T\subset S$, we have natural mappings $x_{S,T}:A_S\rightarrow A_T$, $y_{S,T}:B_S\rightarrow B_T$ and $z_{T,S}:C_T\rightarrow C_S$. There is also a unique map $\mu_{S,T}:C_S\rightarrow C_T$ such that
$$\mu_{S,T}\circ z_{T,S}(c)=\prod_{p\in S-T} (p-1)\cdot c\mbox{ }\mbox{ for all $c\in C_T$.}$$
Thus, the {\it corrected discriminant} of $S$ is defined as:
\begin{equation}\label{corrected_disc}
\mathrm{Disc}_S(A)=\sum_{S'\subset T\subset S} (-1)^{\#(T-S')} \mu_{S,T}(j_T\cdot d_T)\in R\otimes \mathrm{Sym}_r(C_S),
\end{equation}
where $j_T=\left(\prod_{p\in S-S'} n_p \right)/ (B_{S'}:B_S)$, $n_p=\# B^0(\field{Q}_p)$, and $(B_{S'}:B_S)$ is the index of $B_S$ in $B_{S'}$. Now, from equation (\ref{C_S_1}) there is a natural surjective homomorphism $C_S \twoheadrightarrow G_M$, and also a natural identification of $G_M$ with $I/I^2$ (as described in the next section). Thus, we have a natural map $C_S\rightarrow I/I^2$, which induces a natural homomorphism
\begin{equation}
\nu_r: R\otimes \mathrm{Sym}_r(C_S)\rightarrow I^{r}/I^{r+1}.
\end{equation}
Note that the formula in Conjecture \ref{mtconj} takes place in $I^{r}/I^{r+1}$, and thus the analogue of the regulator is $\nu_r\left(\mathrm{Disc}_S(A)\right)$.
\section{MT Conjecture (Rank 1, Ordinary and Good Reduction Setting)}
\subsection{The Analytic Side}
In this section, we assume that $E$ has rank $1$ and that $S$ contains only primes of ordinary reduction. In this context, Conjecture \ref{mtconj} in section \ref{section-mt-conjecture} states that
\begin{enumerate}
\item[a)] $$\Theta_{E,M}\in I$$
\item[b)] $$\tilde{\Theta}_{E,M}=\#(\mbox{\cyrr Sh})\cdot\phi_{S}\cdot\nu_r(\mathrm{Disc}_S(E))\in I/I^{2}$$
\end{enumerate}
Now, assertion a) is equivalent to having $\epsilon(\Theta_{E,M})=0$, or equivalently
\begin{equation}
\sum_{a \bmod M}\left[\frac{a}{M}\right]=0.
\end{equation}
Hence, we have
\begin{equation}
\Theta_{E,M}=\frac{1}{2}\sum_{a \bmod M}\left[\frac{a}{M}\right]\cdot(\sigma_a-e)\in R[G_M],
\end{equation}
where $e$ is the identity of $G_M$. The Hurewicz Theorem for augmentation ideals gives an isomorphism of abelian groups $G_M\simeq I/I^2$ given by the map $r(g-e)\mapsto g^r$ for $g\in G$ and $r\in \field{Z}$. Hence, we will test assertion b) of the conjecture directly in the group
$$G_M=\left(\prod_{p\in S}\field{F}_p^*\right) /(\pm 1).$$
Since we cannot always compute square roots in $\field{F}_p^*$, we will test the conjecture for the square of $\tilde{\Theta}_{E,M}$, which is equivalent to eliminating the $\frac{1}{2}$ in $\Theta_{E,M}$.
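Concretely (this is a direct unwinding of the identification above), writing $\Theta_{E,M}=\frac{1}{2}\sum_{a \bmod M}[a/M](\sigma_a-e)$, the Hurewicz identification sends the square of $\tilde{\Theta}_{E,M}$ to the multiplicative element
$$\prod_{a \bmod M}\sigma_a^{[a/M]}\in G_M,$$
that is, to the class of $\prod_{a}a^{[a/M]}$ modulo $M$ (and modulo $\pm 1$). This is precisely the multiplicative modular element defined next.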
Conjecture \ref{mtconj} is additive, but our testing will be multiplicative.
\begin{definition}\label{def_mod_symb}
For $S$ having only primes of good reduction and an elliptic curve $E$ with $\mathrm{rank}(E)\geq 1$, we define the following multiplicative modular element:
\begin{equation}
\label{eq4}
l(S)=\prod_{a\in (\field{Z}/M\field{Z})^*} a^{[a/M]} \mbox{ (mod $M$)},
\end{equation}
where $M=\prod_{p\in S} p$.
\end{definition}
The values $[a/M]$ are integers if $\gcd(M, N)=1$ by 5.4 in \cite{man1}, so the multiplicative modular element is well defined.
\subsection{The Arithmetic Side}
In this section, we also assume that $E$ is an elliptic curve with positive rank. First, assume $p$ is a prime of good reduction and $S=\{p\}$. In this case, we describe how to compute $\mathrm{Disc}_S(E)$. An element $x\in\califas{E}_p$ can also be described by a triplet $x=[\mathfrak{a},D,c]$, where $\mathfrak{a}=\sum_i n_i(P_i)$ is a zero-cycle with $P_i\in E(\field{Q}_p)$, $D=\sum_j m_j(Q_j)$ is a divisor on $E^0(\field{Q}_p)$ algebraically equivalent to zero whose support is disjoint from $\mathfrak{a}$, and $c\in\field{Q}_p^*$ (see \cite{bmjt} and \cite{neron}). This symbol satisfies the following properties:
\begin{enumerate}
\item[a)] $[\mathfrak{a},\mathrm{div}(f),1]=[\mathfrak{a},0,f(\mathfrak{a})]$ for a rational function $f$ defined on $E(\field{Q}_p)$, with $f(\mathfrak{a})=\prod_i f(P_i)^{n_i}$.
\item[b)] $[\mathfrak{a}_R,D_R,c]=[\mathfrak{a},D,c]$, where $\mathfrak{a}_R$ (resp. $D_R$) is obtained from $\mathfrak{a}$ (resp. $D$) by translating each point by $R$.
\end{enumerate}
Now, since $E$ is an elliptic curve, we identify a point $P\in E$ with the zero-cycle $(P)-(O)$. Hence, the discriminant is
\begin{equation}
\mathrm{Disc}_{\{p\}}(E)=\psi_p([(P)-(O),(O)-(Q_p),1]),
\end{equation}
where $P$ is a generator of $E(\field{Q})$, $Q$ is a generator of $E^0(\field{Q})$ and $Q_p=n_p Q$. Notice that the element $[(P)-(O),(O)-(Q_p),1]$ lies above $E(\field{Q}_p)\times E^1(\field{Q}_p)$ in the biextension $\califas{E}_p$. Now, the value $\psi_p([\mathfrak{a},D,1])$ coincides with N\'eron's symbol $(D,\mathfrak{a})_{v_p}$ for the $p$-adic valuation of $\field{Q}_p$. In particular, Theorem $3$ in \cite{neron} explains how to compute $(D,\mathfrak{a})_{v_p}$ if $D$ is equivalent to zero. To compute $\mathrm{Disc}_{\{p\}}(E)$, it is helpful to use property b) above, translating by a point $P'$. Hence,
\begin{equation}
\mathrm{Disc}_{\{p\}}(E)=\psi_p([(P+P')-(P'),(P')-(Q_p+P'),1]).
\end{equation}
This value is the $g$ function defined by Mazur and Tate on page $747$ of \cite{bmjt}: Let $P$, $Q$ and $P'$ be as above. For $p\nmid N$ prime, consider the quantity
\begin{equation}
\label{eq6}
g(P,Q,P',p)=\frac{d(P'+P)d(P'+Q_p)}{d(P')d(P'+P+Q_p)}\mbox{ (mod $p$)},
\end{equation}
where $d(T)$ is the square root of the denominator of the $x$-coordinate of a point $T$. We will consider the square of this $g$ function, which amounts to taking $d(T)$ to be the denominator itself of the $x$-coordinate of $T$. This balances the cancellation of the $\frac{1}{2}$ in $\Theta_{E,M}$, and it is in concordance with Definition \ref{def_mod_symb}. We summarize the properties of the $g$ function in the following proposition:
\begin{proposition} \label{mrlema}\mbox{ }
\begin{enumerate}
\item If $P\in E(\field{Q})$, $Q\in E^0(\field{Q})$, then $g(P,Q,P',p)$ does not depend on $P'$. Moreover, if $P$ is a generator of the free part of $E(\field{Q})$ and $Q$ is a generator of the free part of $E^0(\field{Q})$, then this value depends only on $E$ and $p$.
\item The function
$$ \hat{g}:E\times E_0\rightarrow \prod_{p\nmid N} \field{F}_p^*, $$
defined coordinatewise by
$$\hat{g}(P,Q)_p:=g(P,Q,P',p)\mbox{ at the $p$-th coordinate,}$$
is bi-multiplicative.
\end{enumerate}
\end{proposition}
Now, let $S$ be a finite set of primes of good reduction for $E$. Set $M=\prod_{p\in S}p$, $n_S=\prod_{p\in S}n_p$ and $Q_S=n_S Q$. If $P$ and $Q$ are generators of the free parts of $E$ and $E_0$, respectively, define
\begin{equation}
\label{gfunction}
g(S)=\frac{d(P'+P)d(P'+Q_S)}{d(P')d(P'+P+Q_S)}\mbox{ (mod $M$)},
\end{equation}
where $P'$ is a point on $E$ such that none of the $d$'s is zero. Now, if $M'\mid M$, let
$$Y_{M',M}:(\field{Z}/M'\field{Z})^*\rightarrow(\field{Z}/M\field{Z})^*$$
be the map defined by $a\mapsto b^{\phi(M/M')}$, where $a\in(\field{Z}/M'\field{Z})^*$, $b\in(\field{Z}/M\field{Z})^*$ is such that $a\equiv b$ (mod $M'$), and $\phi$ is the Euler phi function.
\begin{definition}
The $G$ function on $S$ is
\begin{equation}
G(S):=\prod_{T\subseteq S} Y_{M_T,M}\big(g(P,Q_T,M_T)\big)^{(-1)^{(1+\#(T))}},
\end{equation}
where $M_T=\prod_{q\in T}q$, $n_T=\prod_{p\in T}n_p$ and $Q_T= n_T Q$.
\end{definition}
\subsection{Multiplicative Equations of the Mazur-Tate Conjecture}
Assume $E$ is an elliptic curve of rank $1$. Let $E_0$ be the group of everywhere good reduction points of $E$. First, assume $S$ contains only primes of good reduction (i.e.\ $S'=\emptyset$). Therefore, $\phi_S$ is the order of the cokernel of the natural projection
\begin{equation}
\pi_{S}: E\rightarrow\prod_{p\in\wp}N_p,
\end{equation}
where $p$ ranges through the set of all primes $\wp$. The kernel of $\pi_S$ is $E_0$. Hence, the induced map
$$E/E_0\hookrightarrow \prod_{p\in\wp}N_p$$
is an injection of finite groups, and its cokernel is the cokernel of $\pi_S$. Hence,
\begin{equation}
\phi_S=\frac{C}{\#(E/E_0)},
\end{equation}
where $C=\#\left(\prod_{p\in\wp}N_p\right)=\prod_{p\in\wp}c_p$ and the $c_p=|N_p|$ are the Tamagawa numbers. If $S'\neq \emptyset$, then we divide by $C'=\prod_{p\in S'} c_p$ to obtain
\begin{equation}
\phi_S=\frac{C}{C' \#(E/E_0)}.
\end{equation}
Let $E_{tors}$ be the group of torsion points of $E$. If $u$ is the order of the torsion in $E$ and $v$ is the order of the torsion in $E_0$, then we can explicitly compute the order $\#(E/E_0)$ as $\frac{\mu u}{v}$, where
\begin{equation}
\mu=\min\{j>0 : jP+R\in E_0 \mbox{ for some } R\in E_{tors}\}
\end{equation}
and $P$ is any generator of the free part of $E$. Thus, Conjecture \ref{mtconj} in its multiplicative form, running over all primes of good reduction, gives:
\begin{conjecture} \label{mazur2} (Rank $1$ at all Good Reduction Primes.)
Let $E$ be a curve of rank $1$, let $P$ be a generator of $E$ (modulo torsion), and let $Q$ be a generator of $E_0$ (modulo torsion). Then:
\begin{equation}
\hat{l}^{uv}=\hat{g}(P,Q)^{|\mbox{\cyrr Sh}|\cdot \phi_S}\in\prod_{p\nmid N} \field{F}_p^*,
\end{equation}
where $|\mbox{\cyrr Sh}|$ is the order of the Tate-Shafarevich group and $\hat{l}=\prod_{p\nmid N} l(\{p\})$.
\end{conjecture}
Notice that if we exponentiate the above equation by $u/v$, we obtain the equation
\begin{equation}
\hat{l}^{u^2}=\hat{g}(P,Q)^{\frac{C |\mbox{\cyrr Sh} |}{\mu}},
\end{equation}
which looks more like the classical BSD formula. For a more general $S$, containing only primes of good reduction, Conjecture \ref{mtconj} in its multiplicative form becomes:
\begin{conjecture} \label{mazur3} (Rank $1$ for $S$ having only Good Reduction Primes.)
Let $E$ be a curve of rank $1$ and let $S$ contain only primes of good reduction. Then:
\begin{equation}
l(S)^{uv}=G(S)^{|\mbox{\cyrr Sh}|\cdot \phi_S}\in G_M,
\end{equation}
where $M=\prod_{p\in S}p$ and $|\mbox{\cyrr Sh}|$ is the order of the Tate-Shafarevich group.
\end{conjecture}
In Chapter 4 of \cite{port}, we explained how to test Conjecture \ref{mazur3} using the individual computations at each prime $p\in S$.
\section{Testing Conjectures \ref{mazur2} and \ref{mazur3}}
In \cite{port}, we tested the above conjecture for the first $300$ elliptic curves in the Cremona database \cite{cre1}. All these cases have trivial Tate-Shafarevich group. But we also tested in \cite{port} an elliptic curve having a non-trivial Tate-Shafarevich group, namely
\begin{equation}
y^2+xy+y=x^3-x^2-8587x-304111
\end{equation}
with conductor $N=1610$ and $|\mbox{\cyrr Sh}|=4$. Those computations were done using the Pari calculator \cite{pari} with the help of the script \cite{msym}; we tested each curve for $p<300$ and $p\nmid N$. Now, we enlarge our experimental evidence using SAGE \cite{sage}. We test Conjecture \ref{mazur2} on the first $3000$ elliptic curves in the Cremona database (already included in SAGE). We also check Conjecture \ref{mazur2} for more elliptic curves with non-trivial Tate-Shafarevich group: we check the first $20$ elliptic curves with $|\mbox{\cyrr Sh}|=4$ and the first $7$ elliptic curves with $|\mbox{\cyrr Sh}|=9$. We use {\it The L-functions and Modular Forms Database} \cite{mfdb} to search for the required elliptic curves to test. The files with the computational evidence and the scripts are available at
\mbox{ }
\framebox[3.3in]{ https://github.com/portillofco/MazurTateProject }
\mbox{ }
\begin{note}
{\bf Last comment regarding normalization of modular symbols.}
We use the usual methods for computing modular symbols and take advantage of the computing power of Pari-gp and Sage. There has been continuous advancement in the methods for computing modular symbols, and also in the computing power used in computations, but correct normalization is still a practical issue to be considered when testing the conjecture. The computation of the modular symbols $[a/b]^+$ using only linear algebra is correct up to multiplication by a constant. In our first computations \cite{port} using Pari, we determined the constant by a series approximation of the value $[a/b]^+$. Now, Sage computes $[a/b]^+$ correctly in most cases, but there are still a few curves for which Sage prints a warning message. For example, for the curve 158 in the Cremona database, we received the following warning: {\bf Warning : Could not normalize the modular symbols, maybe all further results will be multiplied by -1, 2 or -2.} In such cases, we simply verified which of the proposed values works for the conjecture. We must point out that for all the curves tested, one of the suggested values works. We believe that numerical modular symbols can be used to compute the constant in a direct way \cite{wuth}.
\end{note}
Finally, we mention that we carried out the computations on an HP workstation with an Intel Xeon E5-2640v2 processor with 8 cores and 48\,GB of RAM.
\subsubsection*{Acknowledgements}
I would like to thank Felipe Voloch for his guidance and advice during the development of this research. I am also very grateful to John Tate for his valuable help and his many explanations regarding Conjecture $4$ in \cite{bmjt}.
The updating of this article and the new experimental evidence were supported by the project {\it PI2013-38} of the agreement {\it UACM/SECITI/060/2013}. I also thank my colleagues Isa\'{\i}as L\'opez and Felipe Alfaro for their support during the development of the aforementioned project.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:1www} The expression \begin{equation*} {\mathcal M}_{+}(\alpha,\beta)=\liminf_{q\to\infty}q||q\alpha-\beta|| \end{equation*} measures how well multiples of a fixed irrational $\alpha>0$ approximate a real number $\beta$. A similar concept is defined by Rockett and Sz\"usz (\cite{rockett92:_contin_fract}, Ch.~4, \S9), where they consider the slight variant ${\mathcal M}(\alpha,\beta)$ (the \emph{two-sided} case) with the initial $q$ replaced by $|q|$. It is evident that (see, for example, \cite{pinnner01:Moron,Komatsu1997192}) \begin{equation*} {\mathcal M}(\alpha,\beta)=\min\bigl({\mathcal M}_{+}(\alpha,\beta),{\mathcal M}_{+}(\alpha,-\beta)\bigr). \end{equation*} We define \begin{equation} \label{eq:4} \begin{aligned}[t] {\mathcal S}_{+}(\alpha)&=\{{\mathcal M}_{+}(\alpha,\beta): \beta\in {\mathbf R}^{+}\}\\ {\mathcal S}(\alpha)&=\{{\mathcal M}(\alpha,\beta): \beta\in {\mathbf R}^{+}\}. \end{aligned} \end{equation} We refer to the first set as the \emph{(one-sided) inhomogeneous approximation spectrum} of $\alpha$. ${\mathcal M}_{+}(\alpha,\beta)$ and the corresponding spectrum have been considered in precisely this form by various authors \cite{komatsu99:_inhom_dioph_approx_some_quasi_period_expres,komatsu99:_ii,cusick96:_halls_ray_in_inhom_dioph_approx,blanksby67:_various_probl_inhom_dioph_approx}, and the ideas relate to inhomogeneous minima of binary quadratic forms \cite{barnes56:_linear_inhom_dioph_approx,blanksby67:_various_probl_inhom_dioph_approx,MR0053162,MR0054654,MR0067939,barnes54,barnes2}. In the celebrated paper \cite{m.47:_sum_and_produc_of_contin_fract}, Hall showed that the \emph{Lagrange spectrum}, ${\mathcal L}=\{{\mathcal M}_{+}(\alpha,0):\alpha\in {\mathbf R}\}$, contains an interval $[0,\mu_{H}]$ ($\mu_H>0$), subsequently called \emph{Hall's Ray}. The precise value of $\mu_{H}$ was determined by Freiman (\cite{freiman73:_hall}) in a heroic calculation; we refer the reader to \cite{cusick89:_markof_lagran}, where this result is discussed in detail. Our aim here is to prove the existence of an interval $[0,\mu_{\alpha}]$ in the inhomogeneous spectrum for all irrationals $\alpha$, though without a precise value for the maximum endpoint of the interval. It is clear that the result fails for rational $\alpha$. Since ${\mathcal M}_{+}(\alpha,\beta)={\mathcal M}_{+}(\alpha,\beta+1)$, the values of $\beta$ may be restricted to the unit interval $[0,1)$. Similarly, we may assume without loss of generality that $0\le\alpha<1$. The key theorem of this paper is the following: \begin{thm} \label{thm:1} For $\alpha$ irrational, the set ${\mathcal S}_{+}(\alpha)$ contains an interval of the form $[0,\mu_{\alpha}]$ for some $\mu_{\alpha}>0$. \end{thm} Once this is established, it is straightforward to extend to the two-sided case, and to binary quadratic forms. \subsection{History} As far as we are aware, the first work on inhomogeneous minima dates back to Minkowski \cite{minkowski01:_ueber_annaeh_groes_zahlen}, who expressed his results in terms of binary quadratic forms. He showed that if $a,b,c,d$ are real numbers with $\Delta=ad-bc\neq 0$ then, for any real numbers $\lambda$ and $\mu$, there are integers $m,n$ such that \begin{equation*} |(am-bn-\lambda)(cm-dn-\mu)|\leq \frac{1}{4}|\Delta|. \end{equation*} This implies that $\inf_q|q|\,||q\alpha-\beta||\leq \frac{1}{4}$ for all $\alpha,\beta$. The same conclusion is true for $\mathcal M(\alpha,\beta)$, but this requires more work.
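Although nothing in this paper depends on computation, the basic quantity is easy to explore numerically. The following Python sketch (our illustration; the sample pair $\alpha,\beta$ is an arbitrary assumption) tabulates $q||q\alpha-\beta||$ over a range of $q$; of course, no finite search can determine the $\liminf$ itself.
\begin{verbatim}
# Python sketch: explore q*||q*alpha - beta|| numerically (illustration only;
# the liminf defining M_+(alpha, beta) cannot be computed by a finite search).
import math

def record(alpha, beta, qmax):
    vals = []
    for q in range(1, qmax + 1):
        t = q * alpha - beta
        vals.append(q * abs(t - round(t)))   # q * ||q*alpha - beta||
    return vals

alpha = (math.sqrt(5) - 1) / 2               # assumed sample values
beta = 1 / math.sqrt(3)
print(min(record(alpha, beta, 10**5)))       # smallest value seen so far
\end{verbatim}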
In fact Khintchine \cite{khintchine35:_neuer_beweis_veral_hurwit_satzes} proved that $\mathcal M_+(\alpha,\beta)\leq \frac{1}{3}$, and the result with $\frac{1}{4}$ replacing $\frac{1}{3}$ is claimed by Cassels as derivable from his methods in \cite{cassels54:ueber}. Khintchine \cite{khintchine26:_ueber_klass_approx} showed that there exists $\delta>0$ such that, for any $\alpha$, there exists $\beta$ for which \begin{equation*} \mathcal M(\alpha,\beta)\geq \delta. \end{equation*} In fact, like Minkowski, he deals with the infimum rather than the $\liminf$. Fukasawa gave an explicit value for $\delta$ of $1/457$, and this was subsequently improved by Davenport ($\delta=1/73.9$) \cite{davenport50:_theor_khint} and by Prasad ($\delta=3/32$)~\cite{prasad51}. These papers are of special significance because they develop a methodology for handling calculations of values of $\mathcal M_+(\alpha,\beta)$ that has been the cornerstone of much subsequent work, and underlies the techniques used in this paper. Far too many authors have contributed to the understanding of ${\mathcal M}(\alpha,\beta)$ and $\mathcal M_+(\alpha,\beta)$ for us to reference all of the papers here. As far as we are aware, the first ray results occur in \cite{fukasawa26:_ueber_groes_betrag_form_i}, Satz XIII, where it is shown that if, in a semi-regular continued fraction expansion of $\alpha$, the partial quotients tend to $\infty$, then ${\mathcal S}(\alpha)$ contains the interval $[0,\frac{1}{4}]$. Barnes obtains essentially the same result in \cite{barnes56:_linear_inhom_dioph_approx}, though he states a weaker one: that, for each $t\in [0,\frac{1}{4}]$, there are uncountably many $\alpha$'s and $\beta$'s with ${\mathcal M}(\alpha,\beta)=t$. The predominant methodology for handling problems of this kind, originating with Davenport \cite{davenport50:_theor_khint}, invokes some form of continued fraction expansion of $\alpha$ and a corresponding digit expansion of $\beta$. We will use this methodology but choose to use the negative continued fraction because of the simple and ``decimal''-like geometrical interpretation of the expansion of $\beta$ associated with it (which we call the \emph{Davenport Expansion}). Use of the regular continued fraction is possible, and was first carried out by Prasad~\cite{prasad51}, but makes the construction less intuitive and more complicated from our perspective, because divisions of subintervals alternate in direction. The general machinery for the regular continued fraction is clearly set out in Rockett and Sz\"usz \cite{rockett92:_contin_fract}. Cassels also uses the Davenport expansion ideas in his paper \cite{cassels54:ueber}, without attribution, where he shows that, except for special cases, $\mathcal M_+(\alpha,\beta)\leq \frac{4}{11}$. Several authors have contributed to refinement of the technique, including S\'os \cite{sos58:_dioph_approx_ii}, and Cusick, Rockett and Sz\"usz \cite{cusick94:_inhom_dioph_approx}. These authors ascribe the origin of the technique to Cassels in \cite{cassels54:ueber}. Almost all of the work for this paper, including a more complicated proof of the main theorem, was done in the early 1990s, and versions of it have been circulating privately since then. Its ideas and results have been used and cited in various places, in particular in \cite{pinnner01:Moron,pinner01:_inhom_halls_ray_period}.
\section{The negative continued fraction expansion for $\alpha$} \label{sec:23456} Here we briefly describe the features needed from the theory of the negative continued fraction. For a more complete discussion of the corresponding concepts for the regular continued fraction, see \cite{rockett92:_contin_fract}, or for the more general semi-regular continued fraction see Perron~\cite{perron54:_lehre_ketten}. For $0<\alpha<1$, let $\alpha_1=\alpha$, $a_1=\lceil\frac1{\alpha_1}\rceil$, and define, recursively, \begin{equation*} a_i=\left\lceil\frac1{\alpha_i}\right\rceil\qquad \text{and}\qquad\alpha_{i+1}=a_i-\frac1{\alpha_i}, \end{equation*} so that $a_i\ge 2$ and $0<\alpha_{i+1}<1$, for all $i$. Evidently, $\alpha$ has the continued fraction expansion \begin{equation*} \label{eq:2} \alpha= \cfrac{1}{a_1-\cfrac{1}{a_2-\cfrac1{a_3-\cfrac{1}{\ddots}}}}, \end{equation*} abbreviated as $\alpha=\langle a_1,a_2,a_3,\ldots\rangle$. The numbers $\alpha_i$ are called the $i$th \emph{complete quotients} of $\alpha$ and satisfy \begin{equation*} \label{eq:7} \alpha_i=\langle a_i,a_{i+1},a_{i+2},\ldots\rangle. \end{equation*} Since $\alpha$ is irrational, the \emph{partial quotients} $a_i$ are greater than $2$ for infinitely many indices $i$, and so there is a unique sequence $a'_1,a'_2,a'_3,\ldots$ of positive integers such that \begin{equation} \label{eq:8} a_1,a_2,a_3,\ldots=a'_1+1,\underbrace{2,\ldots,2}_{a'_2-1}, a'_3+2,\underbrace{2,\ldots,2}_{a'_4-1},a'_5+2,\underbrace{2,\ldots,2}_{a'_6-1}, a'_7+2,\ldots. \end{equation} It will be necessary occasionally to discuss the usual continued fraction expansion of $\alpha$, now expressible as \begin{equation} \label{eq:9} \alpha=\cfrac{1}{a'_1+\cfrac{1}{a'_2+\cfrac{1}{a'_3+\cfrac{1}{\ddots}}}}. \end{equation} Eventually, we will split the proof of Theorem~\ref{thm:1} into two cases, corresponding to whether or not the sequence $(a_{n}')$ is bounded. We make use of the (negative continued fraction) \emph{convergents} $p_i/q_i$ to $\alpha$: \begin{equation} \label{eq:18} \frac{p_i}{q_i}=\langle a_1,a_2,\ldots,a_i\rangle, \end{equation} satisfying the recurrence relations \begin{equation} \label{eq:19} p_{i+1}=a_{i+1}p_i-p_{i-1}\qquad\text{and}\qquad q_{i+1}=a_{i+1}q_i-q_{i-1} \end{equation} where $i\ge1$, $p_0=0$ and $q_0=1$. Easily established are the following simple properties: \begin{align} \label{eq:21} 1&=p_iq_{i-1}-q_ip_{i-1}\\ \label{eq:22} \alpha&=\frac{(a_i-\alpha_{i+1})p_{i-1}-p_{i-2}}{(a_i-\alpha_{i+1})q_{i-1}-q_{i-2}} =\frac{p_i-\alpha_{i+1}p_{i-1}}{q_i-\alpha_{i+1}q_{i-1}}. \end{align} Moreover, $q_{i-1}/q_i=\overline\alpha_i$ where \begin{equation} \label{eq:24} \overline\alpha_i=\langle a_i,a_{i-1},\ldots,a_1\rangle. \end{equation} Since $q_0=1$, the identity \begin{equation} \label{eq:25} q_i=\frac1{\overline\alpha_1\overline\alpha_2\ldots\overline\alpha_i} \end{equation} follows. This section concludes with a brief description of the \emph{Ostrowski expansion} (see \cite{rockett92:_contin_fract}) for positive integers. Any given integer $q\ge1$ can be written as a sum of the form \begin{equation} \label{eq:27} q=\sum^n_{k=1}c_kq_{k-1} \end{equation} where \begin{equation} \label{eq:28} c_n\ge1\qquad\text{and}\qquad 0\le c_k\le a_k-1\qquad \text{for}\qquad1\le k\le n. \end{equation} A greedy algorithm is used to determine the coefficients $c_k$. It is not hard to verify that \begin{equation} \label{eq:31} q_k-1=(a_1-2)q_0+(a_2-2)q_1+\cdots +(a_{k-1}-2)q_{k-2}+(a_k-1)q_{k-1}.
\end{equation} This last identity shows that, for no pair of indices $i$ and $j$, can there be a consecutive subsequence of coefficients of the form \begin{equation} \label{eq:32} (c_i,c_{i+1},\dots,c_j) =( a_i-1,a_{i+1}-2,a_{i+2}-2,\ldots,a_{j-1}-2,a_j-1). \end{equation} The basic facts about the Ostrowski expansion are described in the following lemma. \begin{lem} \label{thm:98983} Each integer $q\ge1$ has a unique expansion of the form \eqref{eq:27} such that the constraint \eqref{eq:28} holds and no consecutive sub-sequence of coefficients is of the form \eqref{eq:32}. \end{lem} \section{The Davenport expansion of $\beta$} \label{sec:3gfd} We now describe the \emph{Davenport Expansion} for the elements $\beta$ of the interval $[0,1)$. While the expansion is analogous to that used in \cite{cusick96:_halls_ray_in_inhom_dioph_approx}, we remind the reader that it is based on a different continued fraction algorithm. This approach results in a ``decimal''-like geometry of the Davenport expansion in the negative continued fraction case, which makes more intuitive the invocation of Hall's theorem on sums of Cantor sets~\cite{m.47:_sum_and_produc_of_contin_fract} later. This is a key component of the proof in the bounded case. For $0\le\beta<1$, let $\beta_1=\beta$ and define, inductively, \begin{equation*} \label{eq:42} b_i=\left\lfloor\frac{\beta_i}{\alpha_i}\right\rfloor\qquad\text{and}\qquad \beta_{i+1}=\frac{\beta_i}{\alpha_i}-b_i, \end{equation*} so that $0\le b_i\le a_i-1$ and $0\le\beta_{i+1}<1$. The convergent sum $\beta=\sum_{k=1}^\infty b_kD_k$, where the products $D_k$ are defined below, is called the \emph{Davenport expansion} of $\beta$ or the \emph{Davenport sum} of the sequence $(b_k)$ relative to $\alpha$. The integers $b_{i}$ are the \emph{Davenport coefficients}. In the same way as the decimal expansion $0.999\ldots$ is identified with $1.000\ldots$, we identify \begin{equation} \label{eq:decimal_ambig} b_1, b_2, \ldots, b_i, a_{i+1}-1, a_{i+2}-2, a_{i+3}-2, \ldots \text{ with } b_1, b_2, \ldots, b_i+1, 0,0, \ldots \end{equation} for $b_i<a_i-1$, since their Davenport sums are the same. Figure~\ref{fig:1} gives an illustration of the geometry of the situation for the case when $\alpha=\langle 5,3,5,3,\ldots\rangle$. The interval $[0,1)$ is subdivided by the numbers $n\alpha \pmod 1$, $(n=1,2,3,4)$ into $5$ intervals, the first four of which are ``long'' and the last ``short'' since $5\alpha>1$. When we allow $n$ to range up to $13$, each long interval is then subdivided into $3$ intervals with the same pattern in each: $2$ ``long'' intervals and $1$ ``short'' interval, whereas the ``short'' interval is divided into just $1$ ``long'' interval and $1$ ``short'' interval. This pattern of ``long'' and ``short'' intervals is repeated at finer and finer resolutions as $n$ increases, reflecting, in this example, the periodic structure of the continued fraction. This structure corresponds to a ``decimal'' expansion with restrictions on digits, involving dependencies on the preceding digits. The general case is described below. \begin{figure}[ht!]
\centering \includegraphics[width=\textwidth]{fig1.pdf} \caption{The ``Long-Short'' Picture for $\alpha=\langle 5,3,5,3,\dots\rangle$} \label{fig:1} \end{figure} From the inductive step in the Davenport expansion, \begin{equation*} \label{eq:43} \beta_i=b_i\alpha_i+\beta_{i+1}\alpha_i \end{equation*} and, as a result, \begin{equation} \label{eq:44} \beta_i=b_i\alpha_i+b_{i+1}\alpha_i\alpha_{i+1}+\ldots +b_j(\alpha_i\alpha_{i+1}\ldots\alpha_j) +\beta_{j+1}(\alpha_i\alpha_{i+1}\ldots\alpha_j) \end{equation} for all $j\ge i$. Note that $\beta_{i}$ is the location of $\beta$ in the rescaled copy of the (long) interval in which it is contained. We define \begin{equation*} \label{eq:45} D_{0}=1,\qquad D_i=\alpha_1\alpha_2\ldots\alpha_i \end{equation*} and write \begin{equation*} \label{eq:46} \beta_iD_{i-1}=b_iD_i+b_{i+1}D_{i+1}+\cdots +b_jD_j+\beta_{j+1}D_j. \end{equation*} $D_{i}$ is the length of the long intervals at the $i$th level, and $D_{i}-D_{i+1}$ is the length of the short intervals at that level. The following result is straightforward. \begin{thm} \label{thm:hdjd} Let $\beta=\sum^\infty_{k=1}b_kD_k$ where $(b_i)$ is a sequence of non-negative integers. Then $0\le\beta<1$ and $(b_i)$ are the Davenport coefficients of $\beta$ if and only if $b_i<a_i$ for all $i\ge1$ and no block of the form \begin{equation} \label{eq:69} a_i-1,a_{i+1}-2,a_{i+2}-2,\dots,a_{j-1}-2,a_j-1 \end{equation} or of the form \begin{equation} \label{eq:70} a_i-1,a_{i+1}-2,a_{i+2}-2,a_{i+3}-2,\ldots \end{equation} occurs in $(b_i)$. \end{thm} The exceptional cases in this result, when $b_i,b_{i+1},\ldots,b_j$ is of the form $a_i-1,a_{i+1}-2,a_{i+2}-2,\dots,a_{j-1}-2,a_j-1$, correspond to the missing long intervals in the short intervals one level higher. As in the example in Figure~\ref{fig:1}, each short interval has one fewer long interval at the next level. In the general geometric picture, $a_{1}-1$ multiples of $\alpha$ subdivide the unit interval into $a_{1}$ intervals, the first $a_{1}-1$ of which have length $\alpha$ and the last of which has length $1-(a_{1}-1)\alpha$. The next multiple (modulo 1) is $\alpha_{1}\alpha_{2}=a_{1}\alpha-1$. This subdivides each of the long intervals at the previous level into $a_{2}-1$ intervals of the same length followed by a short interval. The final short interval of the initial subdivision is subdivided into $a_{2}-2$ long intervals followed by a short interval. This pattern is repeated at all finer resolutions with the appropriate partial quotients. By means of the Davenport expansion, we can describe the integer pairs $(p,q)$ for which $0<q\alpha-p<1$. It is straightforward to see that if $q=\sum^n_{k=1}c_kq_{k-1}$ is the Ostrowski expansion of $q$ then \begin{equation} \label{eq:72} p=\sum^n_{k=1}c_kp_{k-1}. \end{equation} \begin{lem} \label{thm:klksjd} \begin{enumerate} \item Let $q\ge1$ be an integer with Ostrowski expansion as in \eqref{eq:27} and let $p$ be defined by \eqref{eq:72}. Then $0<q\alpha-p<1$ and \begin{equation*} \label{eq:75} q\alpha-p=\sum^\infty_{k=1}b_kD_k \end{equation*} is the Davenport expansion of $q\alpha-p$, where $(b_i)$ is the sequence $c_1,c_2,\ldots,c_n,0,0,0,\ldots$. \item Let $0<\beta<1$ and let $(b_i)$ be the Davenport coefficients of $\beta$. Then there are integers $q\ge1$ and $p$ such that $\beta=q\alpha-p$ if and only if there is $n\ge1$ such that $b_i=0$ for all $i>n$. Further, if that is so, then $q=\sum^n_{k=1}b_kq_{k-1}$ and $p=\sum^n_{k=1}b_kp_{k-1}$.
\end{enumerate} \end{lem} \section{Calculation of ${\mathcal M}_{+}(\alpha,\beta)$ via the Davenport Expansion} \label{sec:4gfala} The Davenport expansion will be used to calculate ${\mathcal M}_{+}(\alpha,\beta)$. Again we stress that the underlying ideas are not really new, being essentially contained in the work of Davenport, Cassels, S\'os, and others. Accordingly, we omit much of the justification and instead aim to provide geometrical insights. To begin, let $0\le\beta<1$ and let $(b_i)$ be the Davenport coefficients of $\beta$. We define \begin{equation*} \label{eq:95} \begin{aligned}[t] Q_n&=\sum^n_{k=1}b_kq_{k-1}\\ Q'_n&= \begin{cases}Q_n+q_{n-1}&\text{ if $Q_n<q_n-q_{n-1}$}\\ Q_n+q_{n-1}-q_n&\text{ if $Q_n\ge q_n-q_{n-1}$} \end{cases} \end{aligned} \end{equation*} for all $n\ge1$. The two cases here correspond to whether $\beta$ lies in a long or a short interval, respectively, at the appropriate level of the decomposition of the interval. If $\beta$ is in a short interval, then the right endpoint of that interval occurred earlier in the decomposition; hence the $q_{n}-q_{n-1}$ term. The next two lemmas are relatively straightforward consequences of these definitions and ideas. \begin{lem}\label{thm:5543} \begin{enumerate} \item $0\le Q_n<q_n$ for all $n\ge1$ and $Q_n\ge q_{n-1}$ if and only if $b_n\ne0$. \item $Q_n\ge Q_{n-1}$ for all $n\ge2$ and $Q_{n-1}=Q_n$ if and only if $b_n=0$. \item $0\le Q'_n<q_n$ for all $n\ge1$ and $Q'_n\ge q_{n-1}$ if and only if $Q_n<q_n-q_{n-1}$. \item $Q'_n\ge Q'_{n-1}$ for all $n\ge2$ and $Q'_{n-1}=Q'_n$ if and only if $Q_n\ge q_n-q_{n-1}$. \item The inequality $Q_n\ge q_n-q_{n-1}$ holds if and only if there is some index $m$ with $1\le m\le n$ such that the sequence $b_m,b_{m+1},\ldots,b_n$ is equal to \begin{equation*} \label{eq:102} a_m-1,a_{m+1}-2,a_{m+2}-2,\ldots,a_n-2. \end{equation*} \end{enumerate} \end{lem} The last condition, $Q_n\ge q_n-q_{n-1}$, occurs precisely when the point $\beta$ is inside a short interval. The integers $Q_n$ and $Q'_n$ are used to define quantities $\lambda_n(\beta)$ and $\rho_n(\beta)$, the significance of which will be evident from the following lemma. \begin{defn} Let $0\le\beta<1$ and let $\beta_1,\beta_2,\beta_3,\ldots$ be the sequence of numbers generated by applying the Davenport expansion algorithm to $\beta$. We define \begin{equation} \label{eq:103} \lambda_n(\beta)=Q_nD_n\beta_{n+1} \end{equation} and \begin{equation} \label{eq:104} \rho_n(\beta)= \begin{cases} Q'_nD_n(1-\beta_{n+1})&\text{ if $Q_n<q_n-q_{n-1}$}\\ Q'_nD_n(1-\alpha_{n+1}-\beta_{n+1})&\text{ if $Q_n\ge q_n-q_{n-1}$} \end{cases} \end{equation} for all $n\ge1$. \end{defn} Recall that $Q_{n}$ is the ``count'' of $q\alpha$ that corresponds to the left endpoint of the interval at level $n$ that contains $\beta$, and that $D_{n}$ is the length of a long interval at that level. It follows that $D_n\beta_{n+1}$ is the distance to $\beta$ from the left endpoint of the interval at level $n$ containing $\beta$. In a similar vein, $\rho_{n}(\beta)$ is the count for the right endpoint of that interval multiplied by the distance from $\beta$ to that endpoint. The next lemma is straightforward from the geometrical picture of the interval decompositions. \begin{lem} \label{thm:6795} Let $n<m$, $0<\beta<1$, and $(b_i)$ be the Davenport coefficients of $\beta$, with $b_i\ne 0$ for infinitely many $i$. Then \begin{enumerate} \item \begin{equation*} \label{eq:105} \lambda_n(\beta)=Q_n||Q_n\alpha-\beta||\qquad\text{and}\qquad \rho_n(\beta)=Q'_n||Q'_n\alpha-\beta||.
\end{equation*} \item If $b_n=0$ then $\lambda_n(\beta)=\lambda_{n-1}(\beta)$. In other words, if $\beta$ is in the first interval of the decomposition at level $n$, then $Q_{n}\alpha$ coincides with $Q_{n-1}\alpha$ modulo 1. \item If $b_n\ne0$ and $b_m\ne0$ and $b_i=0$ for all $i$ which satisfy $n<i<m$ then $q_{n-1}D_m\le\lambda_n(\beta)<q_nD_{m-1}$. \item If $Q_n\ge q_n-q_{n-1}$ then $\rho_n(\beta)=\rho_{n-1}(\beta)$. In other words, if $\beta$ is in a short interval (namely a rightmost one) at level $n$, then $Q'_{n}\alpha$ is equal to $Q'_{n-1}\alpha$ modulo 1. \item If $Q_n<q_n-q_{n-1}$ and $Q_m<q_m-q_{m-1}$ and $Q_i\ge q_i-q_{i-1}$ for all $i$ which satisfy $n<i<m$ then $q_{n-1}D_m(1-\alpha_{m+1})\le\rho_n(\beta)<q_nD_{m-1}(1-\alpha_m)$ unless $m=n+1$, in which case $q_{n-1}D_{n+1}(1-\alpha_{n+2})\le\rho_n(\beta)<q_nD_n$. \end{enumerate} \end{lem} The next lemma is a key step in calculating ${\mathcal M}_{+}(\alpha,\beta)$ in terms of $\lambda_n(\beta)$ and $\rho_n(\beta)$. \begin{lem} \label{thm:gda} For $n\geq 1$, \begin{equation} \label{eq:112} \min\{\lambda_n(\beta),\rho_n(\beta),\lambda_{n+1}(\beta),\rho_{n+1}(\beta)\} \end{equation} is a lower bound for the infimum of the set $\{q||q\alpha-\beta||:\;q_n\le q<q_{n+1}\}$. \end{lem} \begin{proof} We sketch the proof of the result. The diagram showing the key ideas is given in Figure~\ref{fig:2}. \begin{figure}[ht!] \includegraphics[width=0.9\textwidth]{fig2.pdf} \caption{The Approximations of $\beta$ } \label{fig:2} \end{figure} Write $I_{n}$ and $I_{n+1}$ for the intervals prescribed by the Davenport expansion at levels $n$ and $n+1$ that contain $\beta$: $I_{n}=[Q_{n}\alpha,Q_{n}'\alpha]$, $I_{n+1}=[Q_{n+1}\alpha,Q_{n+1}'\alpha]$. The obvious candidates for the smallest values of $q||q\alpha-\beta||$ for $q_{n}\leq q\leq q_{n+1}$ are the cases $q=Q_{n+1}$ or $q=Q_{n+1}'$, the left and right endpoints of the interval $I_{n+1}$ at level $n+1$ containing $\beta$. It is clear from fairly straightforward size considerations that they do better than any $q\alpha \in I_{n}\ (q_{n} \leq q\leq q_{n+1})$. It is also clear that the candidates $q=Q_{n}$ and $q=Q'_{n}$ are better than any $q\alpha \not\in I_{n}\ (q_{n} \leq q\leq q_{n+1})$ since $Q_{n}<q_{n}$. \end{proof} The key equation for the calculation of ${\mathcal M}_{+}(\alpha,\beta)$ is in the following theorem, which captures the important ingredient of the preceding lemma. \begin{thm} \label{thm:333} If $0<\beta<1$ and no integers $q\ge1$ and $p$ satisfy $\beta=q\alpha-p$ then \begin{equation} \label{eq:143} {\mathcal M}_{+}(\alpha,\beta)=\min\left\{\liminf_{n\to\infty}\lambda_n(\beta),\; \liminf_{n\to\infty}\rho_n(\beta)\right\}. \end{equation} \end{thm} For completeness, we note that, in Theorem~\ref{thm:333}, we have not dealt with the possibility that $\beta$ is of the form $q\alpha-p$ where $q$ and $p$ are positive integers. In this case, we have \begin{equation} \label{eq:151} {\mathcal M}_{+}(\alpha,\beta)=\liminf_{q'\to\infty} q'||q'\alpha-q\alpha-p|| =\liminf_{q'\to\infty} q'||(q'-q)\alpha|| \end{equation} and consequently \begin{equation} \label{eq:152} {\mathcal M}_{+}(\alpha,\beta)=\liminf_{q'\to\infty} (q'-q)||(q'-q)\alpha||={\mathcal M}_{+}(\alpha,0). \end{equation} The quantity ${\mathcal M}_{+}(\alpha,0)$ is, of course, the homogeneous approximation constant of $\alpha$.
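The recursive definitions of Sections 2--4 translate directly into a short computation. The Python sketch below (our illustration; the sample values $\alpha=\langle 3,3,3,\ldots\rangle=(3-\sqrt5)/2$ and $\beta=\sqrt2-1$ are assumptions, and plain floating point limits the usable depth) generates the digits $a_i$, the Davenport coefficients $b_i$, and the quantities $\lambda_n(\beta)$ and $\rho_n(\beta)$ via the numerically stable tail formulas \eqref{eq:103} and \eqref{eq:104}.
\begin{verbatim}
# Python sketch of Sections 2-4 (illustration only): negative-CF digits a_i,
# Davenport coefficients b_i, and lambda_n, rho_n from eqs. (103)-(104).
# Floating-point error grows with n, so only modest depths are reliable.
import math

def lambda_rho(alpha, beta, nmax):
    x, y = alpha, beta        # alpha_i, beta_i
    D = 1.0                   # D_{i-1}, with D_0 = 1
    qm1, q0 = 0, 1            # q_{i-2}, q_{i-1}, with q_0 = 1
    Q = 0                     # Q_{i-1}
    rows = []
    for i in range(1, nmax + 1):
        a = math.ceil(1 / x)                # a_i = ceil(1/alpha_i)
        b = min(int(y / x), a - 1)          # b_i = floor(beta_i/alpha_i)
        y = y / x - b                       # beta_{i+1}
        D *= x                              # D_i = alpha_1 ... alpha_i
        x = a - 1 / x                       # alpha_{i+1}
        qm1, q0 = q0, a * q0 - qm1          # q_{i-1}, q_i
        Q += b * qm1                        # Q_i = sum b_k q_{k-1}
        lam = Q * D * y                     # lambda_i = Q_i D_i beta_{i+1}
        if Q < q0 - qm1:                    # beta in a long interval
            rho = (Q + qm1) * D * (1 - y)
        else:                               # beta in a short interval
            rho = (Q + qm1 - q0) * D * (1 - x - y)
        rows.append((i, a, b, lam, rho))
    return rows

for row in lambda_rho((3 - math.sqrt(5)) / 2, math.sqrt(2) - 1, 12):
    print(row)
\end{verbatim}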
\section{The Unbounded Case} \label{sec:5adas} In this section we dispense quickly and relatively straightforwardly with the case where $\alpha$ has unbounded partial quotients $(a^\sharp_{n})$ in its ordinary continued fraction, before turning to the much more difficult case of bounded partial quotients. We write \begin{equation*} \label{eq:6} \mathcal{M}_+ (\alpha)=\sup_{\beta} {\mathcal M}_{+}(\alpha,\beta). \end{equation*} The following theorem is the key result of this section. \begin{thm} \label{thm:unbounded} If $\alpha$ has unbounded partial quotients in its ordinary continued fraction then \begin{equation*} \label{eq:10} \{\mathcal{M}_+(\alpha,\beta):\beta\in {\mathbf R} \}=[0,\mathcal{M}_+(\alpha)]. \end{equation*} \end{thm} In this case, we avoid the problems of long sequences of $2$s in the negative continued fraction by making use of the Davenport expansion of $\beta$ with respect to $\alpha$ using the ordinary continued fraction. This theory is described in Rockett and Sz\"usz~\cite{rockett92:_contin_fract} with a different notation. The notation we use is largely that of Cassels~\cite{cassels54:ueber} with $\sharp$ appended to indicate use of the ordinary continued fraction, but with $D^{\sharp}_n$ denoting the quantity he refers to as $\epsilon_n$. Note that, when $\beta=n\alpha+m$ for $n$ and $m$ integers, $\mathcal M(\alpha,\beta)=0$. Accordingly, we restrict attention to $\beta$ not of this form. Set $\alpha=[0;a^{\sharp}_1,a^{\sharp}_2,...]$ and let $(n_k)$ be a sequence of indices along which the partial quotients are strictly monotonically increasing; such a sequence exists because the partial quotients are unbounded. Now let $0<c<\mathcal{M}_+ (\alpha)$ and choose $\beta$ for which $c<\mathcal{M}_+(\alpha,\beta)\le \mathcal{M}_+(\alpha)$. Let its Davenport coefficients be $(b^{\sharp}_j)$ in the ordinary continued fraction. We will construct a sequence $(c^{\sharp}_j)$ so that $c^{\sharp}_j=b^{\sharp}_j$ except on a subsequence of the $n_k$ which will be chosen sufficiently sparse for our purposes. Since $\beta=\sum_{k=1}^\infty b^{\sharp}_k D^{\sharp}_k$, where $D^{\sharp}_k=q^{\sharp}_{k-1}\alpha-p^{\sharp}_{k-1}$, we put $$\lambda^{\sharp}_n(\beta)=Q^{\sharp}_n\|Q^{\sharp}_n\alpha-\beta\|.$$ The ordinary case of (\ref{eq:143}) (see \cite{cassels54:ueber} or \cite{rockett92:_contin_fract}) gives \begin{multline}\label{eq:words} \lambda^{\sharp}_n(\beta)=(\sum_{k=1}^n b^{\sharp}_k q^{\sharp}_{k-1})|\sum_{k={n+1}}^\infty b^{\sharp}_k D^{\sharp}_k|\\ =q^{\sharp}_{n}|D^{\sharp}_n|(b^{\sharp}_n\frac{q^{\sharp}_{n-1}}{q^{\sharp}_n}+b^{\sharp}_{n-1}\frac{q^{\sharp}_{n-2}}{q^{\sharp}_{n-1}}\frac{q^{\sharp}_{n-1}}{q^{\sharp}_n}+...) |b^{\sharp}_{n+1}\frac{D^{\sharp}_{n+1}}{D^{\sharp}_n}+b^{\sharp}_{n+2}\frac{D^{\sharp}_{n+2}}{D^{\sharp}_n}+...| \end{multline} Note that $q^{\sharp}_n|D^{\sharp}_n|=[a^{\sharp}_n,a^{\sharp}_{n+1},...]/[a^{\sharp}_n,a^{\sharp}_{n-1},a^{\sharp}_{n-2},...,a^{\sharp}_2,a^{\sharp}_1]$, and so is absolutely bounded above and away from zero. For this choice of $\beta$, this product is always at least $1/30$ and so the second two terms in the product are each at least $1/60$.
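The boundedness of $q^{\sharp}_n|D^{\sharp}_n|$ noted above is easy to observe numerically. A minimal Python sketch (ours; the sample $\alpha=\sqrt2-1=[0;2,2,2,\ldots]$ is an assumption, and floating-point digit extraction is reliable only for modest $n$):
\begin{verbatim}
# Python sketch: ordinary-CF convergents and the products q_n * |D_n|,
# where D_n = q_{n-1}*alpha - p_{n-1}; these stay in a fixed band.
import math

def qD_products(alpha, n):
    a, x = [], alpha
    for _ in range(n):                 # ordinary CF digits of alpha in (0,1)
        ai = math.floor(1 / x)
        a.append(ai)
        x = 1 / x - ai
    pm, qm, p, q = 1, 0, 0, 1          # (p_{-1}, q_{-1}) and (p_0, q_0)
    out = []
    for ak in a:
        D = q * alpha - p              # D_k = q_{k-1}*alpha - p_{k-1}
        pm, p = p, ak * p + pm         # p_k = a_k p_{k-1} + p_{k-2}
        qm, q = q, ak * q + qm         # q_k = a_k q_{k-1} + q_{k-2}
        out.append(q * abs(D))         # q_k * |D_k|
    return out

print(qD_products(math.sqrt(2) - 1, 15))
\end{verbatim}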
Changing the value of $b^{\sharp}_n$ by 1 will change the value of $\lambda^{\sharp}_{n}$ by at most $1/(a^{\sharp}_{n}-1)$, so by choosing $n=n_k$ and adjusting the value of $b^{\sharp}_{n_{k}}$ to $c_{n_{k}}$, we replace $\beta$ by a number $\widetilde{\beta}$ for which \begin{equation*} \label{eq:11} c<\min(\lambda_{n}(\widetilde{\beta}),\lambda_{n-1}(\widetilde{\beta}))<c+2/a^{\sharp}_n.\end{equation*} By making this change at the indices $n_{k}$ (so that $a^{\sharp}_{n_{k}}\to\infty$), and putting $c^{\sharp}_n=b^{\sharp}_n$ elsewhere, we obtain a number $\gamma=\sum_k c^{\sharp}_k D^{\sharp}_k$ for which \begin{equation*} \label{eq:13} \mathcal{M}_+(\alpha,\gamma)=c, \end{equation*} since the effect of these changes on the other $\lambda_n$ is smaller than that at $n=n_k$ or $n=n_{k-1}$. In fact we have: \begin{lem}\label{lem1} Let $\beta$ have Davenport coefficients $(b^{\sharp}_i)$ in the ordinary continued fraction. Given any $\epsilon>0$ and $k$ sufficiently large, there is an $M=M(k)<k/2$ and $N=N(k)$ such that if $m\not\in (k-M,k+N)$ then any change in $b^{\sharp}_k$ will not change $\lambda^{\sharp}_m(\beta)$ or $\rho^{\sharp}_{m}(\beta)$ by more than $\epsilon$. \end{lem} \begin{proof} This follows quickly from (\ref{eq:words}), since $\alpha$ must have infinitely many partial quotients in its continued fraction expansion which are larger than $2$. If $k$ is sufficiently large then there are at least $-\log {\epsilon}/\log 2$ such terms $a^{\sharp}_n$ with $n\in [k/2,k)$ and at least $-\log {\epsilon}/\log 2$ such terms $a^{\sharp}_n$ with $n\in(k,k+N]$. Consequently any change in $b^{\sharp}_k$ will make a variation in the values of $\lambda^{\sharp}_m(\beta)$ and $\rho^{\sharp}_{m}(\beta)$ of less than $\epsilon$. We now choose a subsequence of the $n_k$ which is sufficiently sparse that these intervals do not overlap. Choose $c_{n_k}$ so that \begin{equation*} \label{eq:14} c<\min_{j\in[n_k-M(n_k),\,n_k+N(n_k)]}\min\bigl(\lambda^{\sharp}_{j}(\gamma),\rho^{\sharp}_j(\gamma)\bigr)< c+2/a^{\sharp}_{n_k}. \end{equation*} This is clearly possible using the fact that changing $b^{\sharp}_k$ by 1 increases or decreases the expression in (\ref{eq:words}) by no more than $1/(a^{\sharp}_n-1)$. This completes the proof of the fact that, for such well approximable $\alpha$, the spectrum consists of a single ray. \end{proof} \section{The Bounded Case} \label{sec:6tdas} In the light of the results of the previous section, we restrict attention from this point on to the case where the ordinary continued fraction has bounded partial quotients $(a^\sharp_{n})$. In the case of the negative continued fraction, this translates to the sequence $a_1,a_2,a_3,\ldots$ being bounded, with least upper bound $M$, and the lengths of the blocks of consecutive $2$'s also being bounded, with least upper bound $N-1\geq 0$. It then follows from equations \eqref{eq:18} and \eqref{eq:19} that \begin{equation} \label{eq:26} \frac1{M}<\overline\alpha_i<\frac{N}{N+1} \end{equation} holds for all $i\ge1$. We choose $L$ to be the smallest integer such that \begin{equation} \label{eq:175} \Bigl(\frac{N}{N+1}\Bigr)^L\le\frac{(1-\frac{N}{N+1})(1-\frac{N^{2}}{(N+1)^{2}})}{M^N(M^2-1)}. \end{equation} The numbers $N$ and $L$ will figure significantly in the proof in the bounded case.
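Since $L$ is specified by a closed inequality, it can be computed mechanically. A small Python sketch (ours; the sample values of $M$ and $N$ are assumptions):
\begin{verbatim}
# Python sketch: the smallest L satisfying eq. (175), given M and N.
def smallest_L(M, N):
    R = N / (N + 1)
    bound = (1 - R) * (1 - R**2) / (M**N * (M**2 - 1))
    L = 1
    while R**L > bound:
        L += 1
    return L

print(smallest_L(3, 1))   # -> 6 for these sample parameters
print(smallest_L(5, 2))
\end{verbatim}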
\subsection{Computation of $\mathcal M_+(\alpha,\beta)$} \label{sec:scomanbbd} We will define a collection of $\beta$'s, in terms of their Davenport coefficients, each of which has the property that, for some subsequence $(k(i))$ of the positive integers, \begin{equation} \label{eq:154} {\mathcal M}_{+}(\alpha,\beta)=\liminf_{i\to\infty}\lambda_{k(i)}(\beta). \end{equation} This enables us to work with just the $\lambda_{k(i)}$ rather than the $\rho_{n}$ and simplifies the rest of the proof of our main theorem. We assume throughout the remainder of the proof of the bounded partial quotient case that $\beta\neq n\alpha+m$ for any integers $n$ and $m$. We record some simple results in the following lemma. \begin{lem} \label{thm:lkadlnla} \begin{enumerate} \item For $i<j$, \begin{equation} \label{eq:15} q_iD_j =\frac{\alpha_{i+1}\alpha_{i+2}\ldots\alpha_j}{1-\overline\alpha_i\alpha_{i+1}}. \end{equation} \item Let $r$ and $s$ be positive integers satisfying $r\ge sL$. Then \begin{equation*} \label{eq:176} q_uD_{v-1}<q_{n-1}D_m(1-\alpha_{m+1})<q_{n-1}D_m \end{equation*} whenever $u$, $v$, $n$ and $m$ are positive integers with $u+r<v$ and $n<m\le n+s+N$. \end{enumerate} \end{lem} \begin{proof} The first part is a simple calculation. For the second part, note that the right-hand inequality is obviously true since $0<\alpha_{m+1}<1$. To prove the left-hand inequality we observe that \eqref{eq:15} implies \begin{equation*} \label{eq:177} q_uD_{v-1}=\frac{\alpha_{u+1}\alpha_{u+2}\dots\alpha_{v-1}}{1-\overline\alpha_u\alpha_{u+1}}. \end{equation*} Using \eqref{eq:26}, \eqref{eq:24} and $u+r<v$, we have \begin{equation*} \label{eq:178} q_uD_{v-1}<\frac{R^{v-u-1}}{1-R^2}\le\frac{R^r}{1-R^2}, \end{equation*} where $R=N/(N+1)$. Similarly, \begin{equation*} \label{eq:179} \begin{aligned}[t] q_{n-1}D_m(1-\alpha_{m+1}) & =\frac{\alpha_{n}\alpha_{n+1}\dots\alpha_m(1-\alpha_{m+1})} {1-\overline\alpha_{n-1}\alpha_n}\\ &>\frac{M^{-(s+N+1)}(1-R)}{1-M^{-2}}. \end{aligned} \end{equation*} The lemma is, therefore, true if \begin{equation*} \label{eq:181} \frac{R^r}{1-R^2}\le\frac{M^{-(s+N+1)}(1-R)}{1-M^{-2}}. \end{equation*} Since $r\ge sL$, $R<1$, $s\ge1$ and $R^LM<1$, we have $R^rM^{s-1}<R^{sL}M^{s-1}<R^L$, and the result follows immediately from the definition of $L$. \end{proof} \begin{thm} \label{thm:98967} Choose positive integers $r$ and $s$ with $r\ge sL$, and an increasing sequence of indices $(k(i))$ with $k(i+1)>k(i)+r$. Let $0<\beta<1$ have Davenport coefficients $(b_i)$ satisfying: \begin{enumerate} \item for each $i\ge1$ the sequence $b_{k(i)+1},b_{k(i)+2},\ldots,b_{k(i)+r}$ is a block of $r$ zeros; \item there is no block of $N+s$ consecutive zeros in $(b_{n})$ between $k(i)+r$ and $k(i+1)$; \item $\beta$ is not in short intervals at level $n$ for $N+s$ consecutive values of $n$; in other words, the Davenport coefficients of $\beta$ contain no sequence of the form $a_{j}-1,a_{j+1}-2,\ldots,a_{j+N+s-1}-2$. \end{enumerate} Then \begin{equation*} \label{eq:184} {\mathcal M}_{+}(\alpha,\beta)=\liminf_{i\to\infty}\lambda_{k(i)}(\beta). \end{equation*} \end{thm} \begin{proof} By Theorem~\ref{thm:333}, it is enough to show that \begin{equation*} \label{eq:186} \lambda_n(\beta)\ge\lambda_{k(i)}(\beta)\qquad\text{or}\qquad \lambda_n(\beta)\ge\lambda_{k(i+1)}(\beta) \end{equation*} and \begin{equation} \label{eq:187} \rho_n(\beta)\ge\lambda_{k(i)}(\beta) \end{equation} for all integers $n$ with $k(i)\le n<k(i+1)$, for $i$ sufficiently large.
We choose $i_0$ so that $b_{j}\neq 0$ for some $j<i_{0}$ and so that $\beta$ has appeared in a long interval before that stage; if this were not possible, $\beta$ would be a multiple of $\alpha$ modulo 1. Now take $i>i_0$ and fix $n$ between $k(i)$ and $k(i+1)$. We will liberally use the fact stated in Lemma~\ref{thm:6795} that we can move back and forth between $\lambda_{n}(\beta)$ and $\lambda_{m}(\beta)$ provided the intervening $b_{k}$ are all zero. Similarly, at the other extreme, we can move back and forth between $\rho_{n}(\beta)$ and $\rho_{m}(\beta)$ provided that at the intervening levels $\beta$ is in short intervals. Choose $u\leq k(i)< v$ to be the extreme integers such that $b_{j}=0$ if $u<j<v$. We observe that $v-u>r$. It follows from Lemma~\ref{thm:6795} that \begin{equation*} \label{eq:12} \lambda_{k(i)}<q_{u}D_{v-1}. \end{equation*} If $n<k(i)+r$ then $\lambda_{k(i)}=\lambda_{n}$. If not, then $b_{n}$ is followed by a block of at most $N+s$ zeros unless $b_{m}=0$ for all $m$ with $n<m<k(i+1)$, in which case $\lambda_{k(i+1)}=\lambda_{n}$. If $\lambda_{k(i)}\neq\lambda_{n}\neq \lambda_{k(i+1)}$ then \begin{equation*} \label{eq:190} q_{n-1}D_m\le\lambda_n(\beta), \end{equation*} for some $m\le n+N+s$. That $\lambda_{k(i)}(\beta)\le\lambda_n(\beta)$ follows from \begin{equation*} \label{eq:191} q_uD_{v-1}\le q_{n-1}D_m, \end{equation*} which follows immediately from Lemma~\ref{thm:lkadlnla}, using $m\le n+s+N$ and $u+r<v$. The argument to show that \eqref{eq:187} holds when $k(i)\le n<k(i+1)$ is similar but uses the fact that $\beta$ is not in a long sequence of consecutive short intervals. \end{proof} \subsection{Elements of ${\mathcal S}_{+}(\alpha)$} Now we give a construction for certain elements of ${\mathcal S}_{+}(\alpha)$ using Theorem~\ref{thm:98967}. First we impose additional constraints on the sequence $(k(i))$ so that the limits of the sequences $\overline\alpha_{k(i)}$ \eqref{eq:24} and $\alpha_{k(i)+1}$ both exist. Moreover, the limits lie strictly between $0$ and $1$, since \eqref{eq:26} and \eqref{eq:24} hold for all $i\ge1$ and $0<1/M<N/(N+1)<1$. The collection of $\beta$ to be described in terms of their Davenport expansions will be the ones for which $\mathcal M(\alpha,\beta)$ lies in the Hall ray. \begin{defn} We choose $(K(i))$ to be an increasing sequence of indices with gaps $K(i+1)-K(i)$ tending to infinity such that the limits \begin{equation} \label{eq:198} \begin{aligned} a_1^-,a_2^-,a_3^-,\ldots& =\lim_{i\to\infty}a_{K(i)},a_{K(i)-1},\ldots,a_2,a_1\\ a_1^+,a_2^+,a_3^+,\ldots &=\lim_{i\to\infty}a_{K(i)+1},a_{K(i)+2},a_{K(i)+3},\ldots \end{aligned} \end{equation} exist; that is, in each case the entry in any fixed position of the sequence eventually becomes constant. The existence of such a sequence follows quickly by a diagonal argument from the finiteness of the alphabet from which the $a_i$'s are chosen. \end{defn} We write \begin{equation} \label{eq:197} \alpha^-=\langle a_1^-,a_2^-,a_3^-,\ldots\rangle \qquad\text{and}\qquad \alpha^+=\langle a_1^+,a_2^+,a_3^+,\ldots\rangle. \end{equation} The following lemma is a straightforward consequence of the properties of the sequence $a_1,a_2,a_3,\ldots$ \begin{lem} \label{thm:616161} Each of the digit sequences of $\alpha^-$ and $\alpha^+$ has all of its terms less than or equal to $M$ and contains no block of $N$ consecutive $2$'s. \end{lem} Evidently, the numbers $\alpha^-$ and $\alpha^+$ are irrational with $0<\alpha^{\pm}<1$, and the corresponding quantities $\overline\alpha^{\pm}_i$ satisfy \eqref{eq:26}.
All of the theory in the preceding sections is applicable with $\alpha^-$ or $\alpha^+$ in place of $\alpha$. We introduce the following notation. For $i\ge1$, define \begin{equation} \label{eq:200} \alpha^-_i=\langle a^-_i,a^-_{i+1},a^-_{i+2},\ldots\rangle\qquad\text{and}\qquad \alpha^+_i=\langle a^+_i,a^+_{i+1},a^+_{i+2},\ldots\rangle \end{equation} and set \begin{equation} \label{eq:201} D^-_i=\alpha^-_1\alpha^-_2\ldots\alpha^-_i\qquad\text{and}\qquad D^+_i=\alpha^+_1\alpha^+_2\ldots\alpha^+_i. \end{equation} It follows from \eqref{eq:197}, \eqref{eq:200}, and the discussion above that \begin{equation*} \label{eq:203} \alpha^-_k=\lim_{i\to\infty}\overline\alpha_{K(i)-k+1}\qquad\text{and}\qquad \alpha^+_k=\lim_{i\to\infty}\alpha_{K(i)+k}. \end{equation*} Hence \begin{equation*} \label{eq:204} \begin{aligned} D^-_k&=\lim_{i\to\infty}\overline\alpha_{K(i)}\overline\alpha_{K(i)-1}\ldots\overline\alpha_{K(i)-k+1} =\lim_{i\to\infty}\frac{q_{K(i)-k}}{q_{K(i)}}\\ D^+_k&=\lim_{i\to\infty}\alpha_{K(i)+1}\alpha_{K(i)+2}\ldots\alpha_{K(i)+k} =\lim_{i\to\infty}\frac{D_{K(i)+k}}{D_{K(i)}}. \end{aligned} \end{equation*} The next lemma, a crucial one in the proof, makes use of these identities. \begin{lem} \label{thm:67023} Let $(b_i)$ be the Davenport coefficients of a number $\beta\in[0,1]$ for which both of the limits \begin{equation*} \begin{aligned}[t] b^-_1,b^-_2,b^-_3,\ldots &=\lim_{i\to\infty}b_{K(i)},b_{K(i)-1},\ldots,b_1,0,0,0,\ldots\\ b^+_1,b^+_2,b^+_3,\ldots &=\lim_{i\to\infty}b_{K(i)+1},b_{K(i)+2},b_{K(i)+3},\ldots \end{aligned} \label{eq:206} \end{equation*} exist and let \begin{equation*} \label{eq:208} \beta^-=\sum^\infty_{k=1}b^-_kD^-_k\qquad\text{and}\qquad \beta^+=\sum^\infty_{k=1}b^+_kD^+_k. \end{equation*} Then \begin{equation*} \label{eq:209} \lim_{i\to\infty}\lambda_{K(i)}(\beta)=\frac{\beta^-\beta^+}{1-\alpha^-\alpha^+}. \end{equation*} \end{lem} \begin{proof} By definition \begin{equation*} \label{eq:210} \lambda_{K(i)}(\beta)=Q_{K(i)}D_{K(i)}\beta_{K(i)+1} =\frac{Q_{K(i)}}{q_{K(i)}}q_{K(i)}D_{K(i)}\beta_{K(i)+1} \end{equation*} and, by \eqref{eq:15} and the limits above, \begin{equation*} \label{eq:212} \lim_{i\to\infty}q_{K(i)}D_{K(i)}=\frac1{1-\alpha^-\alpha^+}. \end{equation*} In consequence, it is sufficient to observe that \begin{equation*} \label{eq:213} \lim_{i\to\infty}\frac{Q_{K(i)}}{q_{K(i)}}=\beta^-\qquad\text{and}\qquad \lim_{i\to\infty}\beta_{K(i)+1}=\beta^+. \end{equation*} This is a straightforward consequence of the fact that \begin{equation*} \label{eq:214} D_{K(i)}\beta_{K(i)+1}=\sum^\infty_{k=1}b_{K(i)+k}D_{K(i)+k} \end{equation*} and a corresponding expression for the first limit. \end{proof} Now we define two Cantor-like subsets of $[0,1)$ in terms of their Davenport expansions. \begin{defn} \begin{enumerate} \item $\beta\in E(\alpha,s)$ if and only if in its Davenport coefficients $(b_i)$ no block $b_i,b_{i+1},\dots,b_{i+s}$ consists solely of zeros or is of the form \begin{equation} \label{eq:225} a_i-2,a_{i+1}-2,\dots,a_{i+s-1}-2,a_{i+s}-1. \end{equation} Note that this does not preclude tail sequences of the form $a_{i}-1, a_{i+1}-2, a_{i+2}-2, \ldots$. \item $\beta\in F(\alpha,s)$ if and only if in the sequence $b_1,b_2,b_3,\dots$ no block $b_i,b_{i+1},\dots,b_{i+s}$ consists solely of zeros or is of the form \begin{equation*} \label{eq:226} a_i-1,a_{i+1}-2,a_{i+2}-2,\dots,a_{i+s}-2. \end{equation*} \end{enumerate} We note that both $F(\alpha,s)$ and $E(\alpha,s)$ are closed subsets of $[0,1]$. \end{defn} We now state and prove the main result of this section.
\begin{thm} \label{thm:main_r_s} Let $r$ and $s$ be positive integers which satisfy $s\ge N$ and $r\ge sL$, let $\alpha^-$ and $\alpha^+$ be defined by \eqref{eq:197}, $\alpha^+_{r+1}$ by \eqref{eq:200}, and $D^+_r$ by \eqref{eq:201}. For all $e\in E(\alpha^-,s)$ and $f\in F(\alpha^+_{r+1},s)$ there is some $\beta$ with $0<\beta<1$ such that \begin{equation} \label{eq:241} {\mathcal M}_{+}(\alpha,\beta)=\frac{efD^+_r}{1-\alpha^-\alpha^+}. \end{equation} \end{thm} \begin{proof} Let $e\in E(\alpha^-,s)$ and $f\in F(\alpha^+_{r+1},s)$. We shall prove there is a $\beta$ with $0<\beta<1$ which satisfies \eqref{eq:241} by constructing its Davenport coefficients $(b_i)$. Specifically, we shall construct $b_1,b_2,b_3,\ldots$ so that the limits \begin{equation*} \label{eq:242} \begin{aligned}[t] b^-_1,b^-_2,b^-_3,\ldots &=\lim_{i\to\infty}b_{K(i)},b_{K(i)-1},\ldots,b_1,0,0,0,\ldots\\ b^+_1,b^+_2,b^+_3,\ldots &=\lim_{i\to\infty}b_{K(i)+1},b_{K(i)+2},b_{K(i)+3},\ldots \end{aligned} \end{equation*} exist and \begin{equation} \label{eq:244} e=\sum^\infty_{k=1}b^-_kD^-_k\qquad\text{and}\qquad fD^+_r=\sum^\infty_{k=1}b^+_kD^+_k. \end{equation} Lemma~\ref{thm:67023} then yields \begin{equation} \label{eq:245} \lim_{i\to\infty}\lambda_{K(i)}(\beta)=\frac{efD^+_r}{1-\alpha^-\alpha^+}. \end{equation} We first describe sequences $b^+_1,b^+_2,b^+_3,\ldots$ and $b^-_1,b^-_2,b^-_3,\ldots$ for which \eqref{eq:244} holds. Let $f_1,f_2,f_3,\ldots$ be the Davenport coefficients of $f$ with respect to $\alpha^+_{r+1}$ and observe that \begin{equation*} \label{eq:247} f=\sum^\infty_{k=1}f_k\alpha^+_{r+1}\alpha^+_{r+2}\ldots\alpha^+_{r+k}. \end{equation*} Multiplication by $D^+_r$ gives \begin{equation*} \label{eq:248} fD^+_r=\sum^\infty_{k=1}f_kD^+_{r+k}, \end{equation*} and therefore the right-hand formula in \eqref{eq:244} holds if we define \begin{equation} \label{eq:249} b^+_1,b^+_2,b^+_3,\ldots=\underbrace{0,\ldots,0}_{r},f_1,f_2,f_3,\ldots. \end{equation} It is easily seen that these satisfy the appropriate conditions for a Davenport expansion. For a number $e\in E(\alpha^-,s)$, we let $e_1,e_2,e_3,\ldots$ be the Davenport coefficients of $e$ with respect to $\alpha^-$ and, as above, we observe that \begin{equation*} \label{eq:252} e=\sum^\infty_{k=1}e_kD^-_k. \end{equation*} The left-hand formula in \eqref{eq:244} then holds if we set \begin{equation*} \label{eq:253} b^-_1,b^-_2,b^-_3,\ldots=e_1,e_2,e_3,\ldots. \end{equation*} Next, we specify enough of the sequence $b_1,b_2,b_3,\ldots$ to ensure that the limits \eqref{eq:242} exist with the values just described, so that \eqref{eq:244} holds. At this point, Figure~\ref{fig:ls} illustrates the definition of the various pieces of the sequence. \begin{figure}[ht!] \centering \includegraphics[width=0.9\textwidth]{fig3.pdf} \caption{The definition of the sequence $b_{i}$.} \label{fig:ls} \end{figure} For this purpose, we choose a positive integer $i_0$ and sequences of integers $(u(i))^\infty_{i=i_0}$ and $(v(i))^\infty_{i=i_0}$ such that $K(i)\le u(i)<u(i)+N<v(i)\le K(i+1)$ for all $i\ge i_0$ and \begin{equation*} \label{eq:257} \lim_{i\to\infty}u(i)-K(i)=\infty\qquad\text{and}\qquad \lim_{i\to\infty}K(i+1)-v(i)=\infty. \end{equation*} Such sequences exist since the differences $K(i+1)-K(i)$ tend to infinity as $i$ increases.
Furthermore we can also assume that, for all $i\ge i_0$, \begin{equation*} \label{eq:258} \begin{aligned}[t] a_{K(i)+1},a_{K(i)+2},\ldots,a_{u(i)} &=a^+_1,a^+_2,\ldots,a^+_{u(i)-K(i)}\\ a_{K(i+1)},a_{K(i+1)-1},\ldots,a_{v(i)} &=a^-_1,a^-_2,\ldots,a^-_{K(i+1)-v(i)+1}. \end{aligned} \end{equation*} We ensure that the limits \eqref{eq:242} have the required values by defining \begin{equation*} \label{eq:260} \begin{aligned}[t] b_{K(i)+1},b_{K(i)+2},\ldots,b_{u(i)} &=b^+_1,b^+_2,\ldots,b^+_{u(i)-K(i)}\\ b_{K(i+1)},b_{K(i+1)-1},\ldots,b_{v(i)} &=b^-_1,b^-_2,\ldots,b^-_{K(i+1)-v(i)+1} \end{aligned} \end{equation*} for all $i\ge i_0$. Before completing the specification of $(b_j)$ we further restrict $i_0$ and the sequences $(u(i))^\infty_{i=i_0}$ and $(v(i))^\infty_{i=i_0}$ so that \begin{equation*} \label{eq:262} b_{u(i)}\ne0\qquad\text{and}\qquad b_{v(i)}\ne0 \end{equation*} for all $i\ge i_0$. This is relatively easy to arrange from the properties of the $K(i)$ in relation to the Davenport expansion, and of the sequences $(u(i))$ and $(v(i))$. To complete the specification of $(b_j)$ we introduce one more sequence. We choose the sequence $(w(i))^\infty_{i=i_0}$ so that \begin{equation*} \label{eq:265} u(i)<w(i)\le u(i)+N \text{ and } a_{w(i)}\ge3 \quad (i\ge i_0). \end{equation*} Such a choice is possible by the definition of $N$. We can now unambiguously define \begin{equation*} \label{eq:267} b_j= \begin{cases} 0&\text{ if $1\le j\le K(i_0)$}\\ a_j-2&\text{ if $u(i)<j<w(i)$ for some $i\ge i_0$}\\ a_j-3&\text{ if $j=w(i)$ for some $i\ge i_0$}\\ a_j-2&\text{if $w(i)<j<v(i)$ for some $i\ge i_0$.} \end{cases} \end{equation*} It is not hard to verify that $0\le b_i<a_i$ for all $i\ge1$, and since $b_{w(i)}=a_{w(i)}-3$ for all $i\ge i_0$ it is also clear that no subsequence $b_i,b_{i+1},b_{i+2},\ldots$ is of the form \eqref{eq:70}. It is easy to check that the $(b_i)$ are Davenport coefficients by showing that no block $b_i,b_{i+1},\ldots,b_j$ is of the form \eqref{eq:69}. Now we observe that the hypotheses of Theorem~\ref{thm:98967} hold with $k(i)=K(i)$ for all $i$, which completes the proof. \end{proof} \subsection{Cantor dissections for $E(\alpha,s)$ and $F(\alpha,s)$} Our eventual aim is to show that if the integer $s$ is large enough then the product of the two sets $E(\alpha^-,s)$ and $F(\alpha^+_{r+1},s)$, where $r\ge1$, contains an interval. Towards that aim we describe each of these two sets in terms of Cantor dissections. We do this for a generic $\alpha$ rather than $\alpha^{-}$ and $\alpha^{+}$ at this stage. We collect together a few definitions. \begin{defn} \label{defn:sss} \begin{enumerate} \item $H(\alpha,s)$ and $G(\alpha,s)$ are the smallest closed intervals containing $F(\alpha,s)$ and $E(\alpha,s)$ respectively. \item For each sequence $\mathbf{c}_n=c_1,c_2,\ldots,c_n$ of non-negative integers, define: \begin{equation*} \label{eq:288} \begin{aligned}[t] S(\mathbf{c}_n)&=\sum^n_{k=1}c_kD_k,\\ F(\mathbf{c}_n)&=\{\gamma=\sum_{k=1}^{\infty}b_{k}D_{k}\in F(\alpha,s):b_{k}=c_{k},\ (k=1,2, \ldots,n)\} \end{aligned} \end{equation*} where $\sum_{k=1}^{\infty}b_{k}D_{k}$ is the Davenport expansion of $\gamma$. Denote by $C(\mathbf{c}_n)$ the smallest closed interval which contains $F(\mathbf{c}_n)$. Observe that $C(\mathbf{c}_n)$ may be the empty set.
\item When $C(\mathbf{c}_n)\neq \emptyset$, $$C(\mathbf{c}_n)= [\underline C(\mathbf{c}_n), \overline C(\mathbf{c}_n)]$$ where $$\underline C(\mathbf{c}_n)=\inf C(\mathbf{c}_n),\ \overline C(\mathbf{c}_n)=\sup C(\mathbf{c}_n)\text{ and } |C(\mathbf{c}_n)|=\overline C(\mathbf{c}_n)-\underline C(\mathbf{c}_n).$$ \end{enumerate} We allow the possibility that $n=0$, in which case $C(\;)=H(\alpha,s)$. \end{defn} The dissection of $H(\alpha,s)$ to obtain $F(\alpha,s)$ begins by replacing $C(\;)=H(\alpha,s)$ with the collection of intervals \begin{equation*} \label{eq:291} \{C(0),C(1),\ldots,C(a_1-1)\}. \end{equation*} The $n$-th stage of the dissection replaces each non-empty interval $C(\mathbf{c}_n)$ by the collection of intervals \begin{equation} \label{eq:292} \{C(\mathbf{c}_{n+1}):\;0\le c_{n+1}<a_{n+1}\}. \end{equation} From the definition of $C(\mathbf{c}_n)$ it is clear that it is the smallest closed interval containing the collection of intervals \eqref{eq:292}. Moreover, the restrictions on the digits result in gaps between all of these. As an illustration, note that if the last $s$ of the $c_{k}$ in $C(\mathbf{c}_{n})$ are all equal to $0$, then at the next level $C(\mathbf{c}_{n},0)=\emptyset$. The same kind of phenomenon occurs at the opposite end because of the restriction on the number of terms of the form $a_{i}-2$. It is clear that this is a Cantor dissection that produces $F(\alpha,s)$, and we have \begin{equation*} \label{eq:294} \begin{gathered}[t] S(\mathbf{c}_n)+D_{n+s+1}\le\underline C(\mathbf{c}_n)\leq \overline C(\mathbf{c}_n)\le S(\mathbf{c}_n)+D_n\\ C(\mathbf{c}_{n+1})\subset [S(\mathbf{c}_n)+c_{n+1}D_{n+1}+D_{n+s+2},\;S(\mathbf{c}_n)+(c_{n+1}+1)D_{n+1}]. \end{gathered} \end{equation*} Clearly $|C(\mathbf{c}_n)|\le D_n$, and it is evident that \begin{equation*} \label{eq:296} F(\alpha,s)=\bigcap^\infty_{n=1}\bigcup \{C(\mathbf{c}_n)\ne\emptyset:\;0\le c_i<a_i\}. \end{equation*} Now we obtain more precise estimates of the values of the endpoints $\underline C(\mathbf{c}_n)$ and $\overline C(\mathbf{c}_n)$. \begin{lem} \label{thm:ousds} Let $s\ge N$, let $C(\mathbf{c}_n)\ne\emptyset$, let $t$ be the largest integer with $0\le t\le n$ such that all of $c_{n-t+1},c_{n-t+2},\ldots,c_n$ are zero, and let $u$ be the unique integer with $0\le u\le n$ such that $c_{n-u+1},c_{n-u+2},\ldots,c_n$ is equal to \begin{equation*} \label{eq:331} a_{n-u+1}-1,a_{n-u+2}-2,a_{n-u+3}-2,\ldots,a_n-2. \end{equation*} Then \begin{equation*} \label{eq:329} \underline C(\mathbf{c}_n)<\left\{\alignedat2 &S(\mathbf{c}_n)+D_{n+s}\quad&&\text{if $t=0$}\\ &S(\mathbf{c}_n)+D_{n+1}+D_{n+s}\quad&&\text{if $t>0$} \endalignedat\right. \end{equation*} and \begin{equation*} \label{eq:330} \overline C(\mathbf{c}_n)>\left\{\alignedat2 &S(\mathbf{c}_n)+D_n-D_{n+s-N}\quad&&\text{if $u=0$}\\ &S(\mathbf{c}_n)+D_{n+N+1}\quad&&\text{if $u>0$} \endalignedat\right. \end{equation*} \end{lem} \begin{proof} Write $C=C(\mathbf{c}_n)$ and note that $\underline C=\underline C(\mathbf{c}_n)$ is the number $\beta$ whose Davenport coefficients $(b_i)$ are of the form \begin{equation} \label{eq:332} c_1,c_2,\ldots,c_n,\underbrace{0,\ldots,0}_{s-t},1,\underbrace{0,\ldots,0}_{s},1, \underbrace{0,\ldots,0}_s,1,\ldots. \end{equation} Note that $t\le s$, else $c_1,c_2,\ldots,c_n$ ends with more than $s$ consecutive zeros and $C=\emptyset$.
In other words, \begin{equation*} \label{eq:333} \underline C=\sum^n_{k=1}c_kD_k+D_{n+s-t+1}+D_{n+2s-t+2}+D_{n+3s-t+3}+\ldots, \end{equation*} and \begin{equation*} \label{eq:334} \underline C\le\left\{\alignedat2 &S(\mathbf{c}_n)+D_{n+s+1}+D_{n+2s+2}+D_{n+3s+3}+\ldots \quad&&\text{if $t=0$}\\ &S(\mathbf{c}_n)+D_{n+1}+D_{n+s+2}+D_{n+2s+3}+\ldots \quad&&\text{if $t>0$}\endalignedat\right. \end{equation*} Further, since $s\ge N$ we know \begin{equation*} \label{eq:335} D_{n+s}>D_{n+s+1}+D_{n+2s+2}+D_{n+3s+3}+\ldots, \end{equation*} and \begin{equation*} \label{eq:336} D_{n+s}>D_{n+s+2}+D_{n+2s+3}+D_{n+3s+4}+\ldots, \end{equation*} and the truth of the first statement of the lemma is evident. We describe the Davenport expansion of $\overline C=\overline C(\mathbf{c}_n)$ next. Let $k(0)=n-u$ and inductively define the sequence $k(1),k(2),k(3),\ldots$ by choosing $k(i)$ to be the largest integer such that \begin{equation*} \label{eq:337} k(i-1)+2\le k(i)\le k(i-1)+s+1\qquad\text{and}\qquad a_{k(i)}\ge3. \end{equation*} This is possible by the properties of $a_n$ as enunciated in Lemma~\ref{thm:616161} for $\alpha=\alpha^-$ and $\alpha=\alpha^+$. Further, \begin{equation} \label{eq:338} k(i)\ge k(i-1)+s-N+2, \end{equation} because if not then $a_k=2$ for all $k$ between and including $k(i-1)+s-N+2$ and $k(i-1)+s+1$, contrary to the definition of $N$. Now $\overline C$ is the number $\beta$ whose Davenport coefficients $(b_i)$ are defined by \begin{equation*} \label{eq:339} b_i= \begin{cases} c_i&\text{ if $i\le k(0)$}\\ a_i-1&\text{ if $i=k(j)+1$ for some $j\ge0$}\\ a_i-2&\text{ if $k(j)+1<i<k(j+1)$ for some $j\ge0$}\\ a_i-3&\text{ if $i=k(j)$ for some $j\ge1$.} \end{cases} \end{equation*} These are clearly Davenport coefficients, and the sequence contains no block $b_i,b_{i+1},\ldots,b_{i+s}$ of the form \eqref{eq:225} nor does it contain a block $b_i,b_{i+1},\ldots,b_{i+s}$ consisting solely of zeros. We conclude that $\beta\in F(\alpha,s)$. It is also fairly clear that the sequence $b_1,b_2,b_3,\ldots$ begins with $c_1,c_2,\ldots,c_n$. It remains to show that no other element of $F(\mathbf{c}_n)$ is larger than $\beta$. Suppose there were some $\beta'\in F(\mathbf{c}_n)$ with Davenport coefficients $(b'_i)$ such that $\beta'>\beta$. The form of the definition of $\beta$ prohibits any possible increase in the values of the Davenport coefficients while staying a member of $F(\alpha,s)$ and starting with $c_{1}, c_{2},\ldots, c_{n}$, a contradiction. Evidently $\overline C=\sum^\infty_{k=1}b_kD_k$. By truncating this series at the term with index $k(1)+1$ and making some minor rearrangements we find that \begin{equation*} \label{eq:344} \overline C>\sum^{k(0)}_{l=1}c_lD_l+\sum^{k(1)+1}_{l=k(0)+1}(a_l-2)D_l +D_{k(0)+1}-D_{k(1)}+D_{k(1)+1}. \end{equation*} We consider two cases. First we suppose $u=0$ and hence $k(0)=n$. In this case, since \begin{equation*} \label{eq:345} D_n<\sum^{k(1)+1}_{l=n+1}(a_l-2)D_l+D_{n+1}+D_{k(1)+1} \end{equation*} we obtain $\overline C>S(\mathbf{c}_n)+D_n-D_{k(1)}$. It is easy to deduce from \eqref{eq:338} with $i=1$ that $D_{k(1)}<D_{n+s-N}$, and the second statement of the lemma is proved. Now we suppose $u>0$ and hence $k(0)<n$. Since $k(1)\ge n+1$, \begin{equation*} \label{eq:346} \overline C>\sum^n_{l=1}c_lD_l +\sum^{k(1)+1}_{l=n+1}(a_l-2)D_l-D_{k(1)}+D_{k(1)+1}. \end{equation*} As $a_{k(1)}\ge3$ and $a_{k(1)+1}\ge2$ we have \begin{equation*} \label{eq:347} \overline C>\sum^n_{l=1}c_lD_l+\sum^{k(1)-1}_{l=n+1}(a_l-2)D_l+D_{k(1)+1}.
\end{equation*} The definition of $N$ implies there is some $i$ with $n+1\le i\le n+N+1$ such that $a_i\ge3$ and so \begin{equation*} \label{eq:348} \sum^{n+N+1}_{l=n+1}(a_l-2)D_l\ge D_{n+N+1}. \end{equation*} It follows that if $k(1)-1\ge n+N+1$ then the second statement of the lemma is true. If on the other hand $k(1)<n+N+1$ then $D_{k(1)+1}\ge D_{n+N+1}$ and again the truth of the second statement is clear. This completes the proof of the lemma. \end{proof} A key remark about the $F(\alpha,s)$ construction, which will not be true for $E(\alpha,s)$, is that, at least generically, the deleted intervals resulting from the ``zeros'' condition and the ``$a_n-2$'' condition in this Cantor construction have the property that their left and right endpoints, respectively, are of the form $S(c_1,...,c_n+1)$. Next we deal with the set $E(\alpha,s)$. This is a little more complicated; the two restrictions on the Davenport coefficients of the elements of $E(\alpha,s)$ no longer correspond to a single gap in the dissection of $G(\alpha,s)$. We make the following definition. \begin{defn} $A(\;)=[0, 1-D_{1}]$ and $B(\;)=[1-D_{1},1]$. For each sequence $\mathbf{c}_n=c_1,c_2,\ldots,c_n$ of non-negative integers, we define $A(\mathbf{c}_n)$ to be the smallest closed interval containing $E(\alpha,s)\cap [S(\mathbf{c}_n),S(\mathbf{c}_n)+D_n-D_{n+1}]$, and $B(\mathbf{c}_n)$ to be the smallest closed interval containing $E(\alpha,s)\cap [S(\mathbf{c}_n)+D_{n}-D_{n+1},S(\mathbf{c}_n)+D_n]$. \end{defn} The dissection of $G(\alpha,s)$ begins by replacing $G(\alpha,s)$ with the pair of intervals $A(\;)$ and $B(\;)$. The next step is the substitution \begin{equation*} \label{eq:308} \begin{aligned}[t] A(\;)&\to \{A(0),B(0),A(1),B(1),\ldots,A(a_1-3),B(a_1-3),A(a_1-2)\} \\ B(\;)&\to \{B(a_1-2),\;A(a_1-1)\}. \end{aligned} \end{equation*} The $n$-th step of the dissection is \begin{equation} \label{eq:310} \begin{aligned} \emptyset\neq A(\mathbf{c}_{n})&\to \{A(\mathbf{c}_{n+1}):\;0\le c_{n+1}\le a_{n+1}-2\}\\ &\qquad \cup\{B(\mathbf{c}_{n+1}):\;0\le c_{n+1}\le a_{n+1}-3\}\\ \emptyset\neq B(\mathbf{c}_{n})&\to \{B(c_1,c_2,\ldots,c_n,a_{n+1}-2),\; A(c_1,c_2,\ldots,c_n,a_{n+1}-1)\}, \end{aligned} \end{equation} where again we use the notation $\mathbf{c}_n=c_1,c_2,\ldots,c_n$. We note that $A(\mathbf{c}_n)$ and $B(\mathbf{c}_n)$ are the smallest closed intervals containing the collections \eqref{eq:310} at the previous level. This follows because \begin{equation*} \label{eq:312} S(\mathbf{c}_n)+D_n-D_{n+1} =S(c_1,c_2,\ldots,c_n,a_{n+1}-2)+D_{n+1}-D_{n+2}. \end{equation*} For the moment, we write \begin{equation*} \label{eq:315} A=A(\mathbf{c}_n)\qquad B=B(\mathbf{c}_n)\qquad S=S(\mathbf{c}_n). \end{equation*} Now let $\beta\in E(\alpha,s)$ and suppose its sequence of Davenport coefficients $(b_i)$ begins with $c_1,c_2,\ldots,c_n$. Then the block $b_{n+1},b_{n+2},\ldots,b_{n+s+1}$ does not consist entirely of zeros, and so $b_i\ge1$ and $\beta\ge S+D_i$ for some $i$ with $n+1\le i\le n+s+1$. Hence $\beta\ge S+D_{n+s+1}$. Since $A$ is the smallest closed interval containing all such numbers $\beta$ which also satisfy $\beta\le S+D_n-D_{n+1}$, it follows that \begin{equation} \label{eq:316} S+D_{n+s+1}\le\underline A\leq \overline A\le S+D_n-D_{n+1}. \end{equation} In particular $|A|\le D_n$. If, on the other hand, $\beta>S+D_n-D_{n+1}$, there is $i\ge n+1$ such that the block $b_{n+1},b_{n+2},\ldots,b_i$ is of the form \begin{equation*} \label{eq:317} a_{n+1}-2,a_{n+2}-2,\ldots,a_{i-1}-2,a_i-1.
\end{equation*} We conclude that \begin{equation} \label{eq:320} S+D_n-D_{n+1}+D_{n+s+1}\le\underline B\leq \overline B\le S+D_n. \end{equation} In fact, \begin{align*} \label{eq:321} A(\mathbf{c}_{n+1})&\subset [S+c_{n+1}D_{n+1}+D_{n+s+2},\;S+(c_{n+1}+1)D_{n+1}-D_{n+2}]\\ B(\mathbf{c}_{n+1})&\subset [S+(c_{n+1}+1)D_{n+1}-D_{n+2}+D_{n+s+2},\;S+(c_{n+1}+1)D_{n+1}]. \end{align*} We note that all such intervals where $0\le c_{n+1}<a_{n+1}$ are disjoint, and since \begin{equation} \label{eq:323} E(\alpha,s)=\bigcap^\infty_{n=1}\bigcup\{ A(\mathbf{c}_n)\ne\emptyset,\;B(\mathbf{c}_n)\ne\emptyset :\;0\le c_i<a_i\}, \end{equation} it is totally disconnected. Again the gaps arise because of the constraints on digits in the definition of $E(\alpha,s)$. Now we need to find estimates for the endpoints of the intervals $A(\mathbf{c}_n)$ and $B(\mathbf{c}_n)$, just as we have for $C(\mathbf{c}_n)$ in Lemma~\ref{thm:ousds}. \begin{lem} \label{thm:yahdd} Let $s\ge N$ and $A(\mathbf{c}_n)\ne\emptyset$. Then \begin{equation*} \label{eq:369} \underline A(\mathbf{c}_n)<S(\mathbf{c}_n)+D_n-D_{n+1}-D_{n+3N} \end{equation*} and \begin{equation*} \label{eq:370} \overline A(\mathbf{c}_n)=S(\mathbf{c}_n)+D_n-D_{n+1}. \end{equation*} Further, \begin{equation*} \label{eq:371} \underline A(\mathbf{c}_n)<S(\mathbf{c}_n)+D_{n+s} \end{equation*} whenever $n=0$ or $B(c_1,c_2,\ldots,c_{n-1},c_n-1)\neq \emptyset$. \end{lem} \begin{proof} Write $A=A(\mathbf{c}_n)$. We note first that $\underline A$ is the number $\beta$ whose Davenport coefficients $(b_i)$ are of the form \begin{equation*} \label{eq:372} c_1,c_2,\ldots,c_n,\underbrace{0,\ldots,0}_{s-t},1,\underbrace{0,\ldots,0}_{s},1, \underbrace{0,\ldots,0}_{s},1,\ldots. \end{equation*} where $t$ is the largest integer with $0\le t\le n$ for which $c_{n-t+1},c_{n-t+2},\ldots,c_n$ are all zero, and observe that there is some $j$ satisfying \begin{equation*} \label{eq:373} n+1\le j\le n+s-t+N+1 \end{equation*} such that $b_j\le a_j-3$ and $b_i=a_i-2$ for all $i$ with $n+1\le i\le j-1$. It follows that \begin{equation} \label{eq:376} \underline A\le \sum^n_{k=1}c_kD_k+\sum^{j-1}_{k=n+1}(a_k-2)D_k+(a_j-3)D_j+D_j. \end{equation} Since $j\le n+2N$, \begin{equation*} \label{eq:377} \underline A\le\sum^n_{k=1}c_kD_k+\sum^{n+2N}_{k=n+1}(a_k-2)D_k. \end{equation*} We know \begin{equation*} \label{eq:378} \sum^{n+2N}_{k=n+1}(a_k-2)D_k =D_n-D_{n+1}-\sum^\infty_{k=n+2N+1}(a_k-2)D_k \end{equation*} and since the definition of $N$ implies there is some $k$ with $n+2N<k\le n+3N$ such that $a_k\ge3$ we conclude that the right-hand side of \eqref{eq:376} does not exceed $S(\mathbf{c}_n)+D_n-D_{n+1}-D_{n+3N}$. The truth of the first statement of the lemma is now evident. Now we redefine $\beta$ as \begin{equation*} \label{eq:379} \beta=S(\mathbf{c}_n)+D_n-D_{n+1} \end{equation*} and observe that it has Davenport coefficients \begin{equation*} \label{eq:380} c_1,c_2,\ldots,c_n,a_{n+1}-2,a_{n+2}-2,a_{n+3}-2,\ldots. \end{equation*} This contains no block $b_i,b_{i+1},\ldots,b_j$ of the form \eqref{eq:69} nor a block $b_i,b_{i+1},\ldots,b_{i+s}$ of the form \eqref{eq:225} and so $\beta\in E(\alpha,s)$. Now suppose $n=0$ or $B(c_1,c_2,\ldots,c_{n-1},c_n-1)$ is non-empty. If $B(c_1,c_2,\ldots,c_{n-1},c_n-1)\ne\emptyset$ then $c_n\ge1$ and so $t=0$. Obviously $t$ is also zero if $n=0$. As a result, \begin{equation*} \label{eq:382} \underline A=\sum^n_{k=1}c_kD_k+D_{n+s+1}+D_{n+2s+2}+D_{n+3s+3}+\ldots.
\end{equation*} Because $s\ge N$ we know that \begin{equation*} \label{eq:383} D_{n+s}\ge D_{n+s+1}+D_{n+2s+2}+D_{n+3s+3}+\ldots, \end{equation*} and this is enough to complete the proof. \end{proof} \begin{lem} \label{thm:oaqpa} Let $s\ge N$ and $B(\mathbf{c}_n)\ne\emptyset$. Then \begin{align*} \label{eq:384} \underline B(\mathbf{c}_n) &<S(\mathbf{c}_n)+D_n-D_{n+1}+D_{n+2}+D_{n+s},\\ \overline B(\mathbf{c}_n)&=S(\mathbf{c}_n)+D_n. \end{align*} Further, \begin{equation*} \label{eq:386} \underline B(\mathbf{c}_n) <S(\mathbf{c}_n)+D_n-D_{n+1}+D_{n+s} \end{equation*} whenever $n=0$ or $c_n\ne a_n-2$ and $A(\mathbf{c}_n)$ is non-empty. \end{lem} \begin{proof} As usual, we write $B=B(\mathbf{c}_n)$ and observe that $B$ contains the number $\beta$ whose Davenport coefficients $b_1,b_2,b_3,\ldots$ are equal to \begin{equation*} \label{eq:387} c_1,c_2,\ldots,c_n,a_{n+1}-1,\underbrace{0,\ldots,0}_s,1 ,\underbrace{0,\ldots,0}_s,1,\underbrace{0,\ldots,0}_s,1,\ldots. \end{equation*} Therefore \begin{equation*} \label{eq:394} \underline B\le\sum^n_{k=1}c_kD_k+(a_{n+1}-1)D_{n+1} +D_{n+s+2}+D_{n+2s+3}+D_{n+3s+4}+\ldots. \end{equation*} We note that \begin{equation*} \label{eq:395} D_{n+s}\ge D_{n+s+2}+D_{n+2s+3}+D_{n+3s+4}+\ldots. \end{equation*} The first inequality of the lemma then follows since $a_{n+1}D_{n+1}=D_n+D_{n+2}$. For the second statement of the lemma we consider $\overline B=\beta$ where $\beta$ is \begin{equation} \label{eq:396} \beta=S(\mathbf{c}_n)+D_n, \end{equation} so that $\beta=\sum^\infty_{k=1}b_kD_k$ where $b_1,b_2,b_3,\ldots$ is the sequence \begin{equation*} \label{eq:397} c_1,c_2,\ldots,c_n,a_{n+1}-1,a_{n+2}-2,a_{n+3}-2,a_{n+4}-2,\ldots, \end{equation*} and the rest is clear. Now suppose either $n=0$ or $A(\mathbf{c}_n)\ne\emptyset$ and $c_n\ne a_n-2$. In this case $\underline B$ is the number $\beta$ whose Davenport coefficient sequence $(b_i)$ begins with \begin{equation} \label{eq:398} c_1,c_2,\ldots,c_n,a_{n+1}-2,a_{n+2}-2,\ldots,a_{n+s-1}-2,a_{n+s}-1 \end{equation} and continues with \begin{equation} \label{eq:399} \underbrace{0,\ldots,0}_s,1,\underbrace{0,\ldots,0}_s,1, \underbrace{0,\ldots,0}_s,1,\ldots. \end{equation} Clearly $(b_i)$ is a sequence of Davenport coefficients and $\beta\in E(\alpha,s)$. Further, since $(b_i)$ begins with \eqref{eq:398}, \begin{equation*} \label{eq:401} \beta\ge\sum^n_{k=1}c_kD_k+\sum^{n+s-1}_{k=n+1}(a_k-2)D_k+(a_{n+s}-1)D_{n+s}. \end{equation*} As the sequence of Davenport coefficients of $\underline B$ begins with \eqref{eq:398} and continues with \eqref{eq:399}, \begin{equation*} \label{eq:404} \underline B=\sum^n_{k=1}c_kD_k+\sum^{n+s}_{k=n+1}(a_k-2)D_k+D_{n+s} +D_{n+2s+1}+D_{n+3s+2}+\ldots. \end{equation*} By using the appropriate identities of Section~\ref{sec:3gfd} (in particular $a_kD_k=D_{k-1}+D_{k+1}$, so that the sum $\sum^{n+s}_{k=n+1}(a_k-2)D_k$ telescopes to $(D_n-D_{n+1})-(D_{n+s}-D_{n+s+1})$) we obtain \begin{equation*} \label{eq:405} \underline B=\sum^n_{k=1}c_kD_k+D_n-D_{n+1}+D_{n+s+1} +D_{n+2s+1}+D_{n+3s+2}+D_{n+4s+3}+\ldots. \end{equation*} The usual arguments yield \begin{equation*} \label{eq:406} D_{n+s}\ge D_{n+s+1}+D_{n+2s+1}+D_{n+3s+2}+D_{n+4s+3}+\ldots \end{equation*} and the truth of the final statement of the lemma is clear. \end{proof} \subsection{Application of Hall's Theorem} As mentioned in the introduction to the last section, we shall now use a theorem of Hall, namely Theorem~2.2 in \cite{m.47:_sum_and_produc_of_contin_fract}, to show that if $s$ is large enough then the product of the sets $E(\alpha,s)$ and $F(\alpha,s)$ contains an interval.
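Before turning to the details, it may help to recall the prototypical instance of the phenomenon we are after: for the classical middle-thirds Cantor set $C$ one has $C+C=[0,2]$. The following small Python sketch (ours, purely illustrative and independent of the sets $E(\alpha,s)$ and $F(\alpha,s)$) checks this numerically at a finite level of the construction.

\begin{verbatim}
# Middle-thirds illustration: at level n, the 2^n left endpoints of the
# Cantor construction have pairwise sums forming a (2/3^n)-dense subset
# of [0, 2]; letting n grow gives C + C = [0, 2].
from itertools import product

n = 7
pts = [sum(d / 3.0**k for k, d in enumerate(digits, start=1))
       for digits in product((0, 2), repeat=n)]
sums = sorted(a + b for a in pts for b in pts)
gap = max(b - a for a, b in zip(sums, sums[1:]))
print(gap <= 2.0 / 3**n + 1e-12)   # True (tolerance for rounding)
\end{verbatim}

Hall's theorem gives conditions under which this kind of covering persists for much thinner Cantor sets, such as the dissections constructed above.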
The idea of applying Hall's theorem in this way was used in the context of inhomogeneous Diophantine approximation by Cusick, Moran and Pollington; see \cite{cusick96:_halls_ray_in_inhom_dioph_approx}. The actual statement of Hall's theorem~\cite{m.47:_sum_and_produc_of_contin_fract} concerns the sum of Cantor sets but, as Hall points out, his result can be applied to products by taking logarithms. Specifically, we have \begin{equation*} \label{eq:328} \log\bigl(E(\alpha,s)\cdot F(\alpha,s)\bigr)=\log E(\alpha,s)\;+\;\log F(\alpha,s) \end{equation*} and since the logarithm function is continuous and strictly increasing, it maps the Cantor dissections of $G(\alpha,s)$ and $H(\alpha,s)$ to Cantor dissections of $\log G(\alpha,s)$ and $\log H(\alpha,s)$, respectively. Before applying Hall's theorem we need to check that his Condition~1 holds. This condition states that if, in going from level $n$ to $n+1$, an interval $C$ is replaced by two disjoint intervals $C_{1}$ and $C_{2}$ with an open interval $C_{12}$ between them, so that $C_{1}\cup C_{12}\cup C_{2}=C$, then the length of $C_{12}$ should not exceed the minimum of $|C_{1}|$ and $|C_{2}|$. We note, as Hall does in his discussion of bounded continued fractions, that the transition from the $n$th to the $(n+1)$th stage of the Cantor dissections leading to the sets $F(\alpha,s)$ and $E(\alpha,s)$ can be done by iteratively removing just one ``middle'' interval at a time. To verify Condition~1 of Hall, it is enough to show that for any pair of adjacent intervals formed at the $n$th stage of the Cantor dissection to produce either $F(\alpha,s)$ or $E(\alpha,s)$, the minimum of their lengths exceeds the length of the removed interval. We can now verify this for the Cantor dissection for $\log F(\alpha,s)$. \begin{lem} \label{thm:abdoas} There is an integer $s_0\ge N$ such that if $s\ge s_0$ and if $C_1$ and $C_2$ are non-empty neighbouring intervals arising at the same stage of the Cantor dissection for $F(\alpha,s)$ then \begin{equation} \label{eq:349} |\log C_{12}|\le\min\{|\log C_1|,|\log C_2|\} \end{equation} where $C_{12}$ is the open interval lying between $C_1$ and $C_2$. \end{lem} \begin{proof} Let $s\ge N$ and let $C_1$, $C_2$ and $C_{12}$ be as described. We assume without loss of generality that $C_1$ lies to the left of $C_2$. Our aim is to show that if $s$ is large enough then the number \begin{equation*} \label{eq:350} |\log C_{12}|=\log\underline{C_2}-\log\overline{C_1} \end{equation*} is less than or equal to both \begin{equation*} \label{eq:351} |\log C_1|=\log\overline{C_1}-\log\underline{C_1}\text{ and } |\log C_2|=\log\overline{C_2}-\log\underline{C_2}. \end{equation*} By rearranging and using the properties of logarithms we reduce this statement to \begin{equation} \label{eq:352} \underline{C_1}\;\underline{C_2}\le\overline{C_1}\;\overline{C_1}\qquad\text{and}\qquad \underline{C_2}\;\underline{C_2}\le\overline{C_1}\;\overline{C_2}. \end{equation} Note that, since \begin{equation*} \label{eq:353} 4\underline{C_1}\underline{C_2}=(\underline{C_1}+\underline{C_2})^2-(\underline{C_1}-\underline{C_2})^2, \end{equation*} to prove the first of the inequalities in \eqref{eq:352} it is enough to show \begin{equation*} \label{eq:354} \underline{C_1}+\underline{C_2}<2\;\overline{C_1}, \end{equation*} and we concentrate on this. Since $C_1$ and $C_2$ arise at the same stage of the dissection and $C_1$ lies to the left of $C_2$ we write \begin{equation*} \label{eq:355} C_1=C(\mathbf{c}_n)\qquad\text{and}\qquad C_2=C(c_1,c_2,\ldots,c_{n-1},c'_n) \end{equation*} where $c'_n>c_n$.
The key fact here is that $C(c_{1},c_{2},\ldots,c_{n-1},c)$ is empty only for the extreme values of $c$, because of the conditions that describe $F(\alpha,s)$. Hence $c'_n=c_n+1$. We write \begin{equation*} \label{eq:356} S_1=S(\mathbf{c}_n)\qquad\text{and}\qquad S_2=S(c_1,c_2,\ldots,c_{n-1},c'_n). \end{equation*} Note that $S_2=S_1+D_n$. Let $t$ be the largest integer with $0\le t\le n$ such that all of $c_{n-t+1},c_{n-t+2},\ldots,c_n$ are zero, and let $u$ be the unique integer with $0\le u\le n$ such that $c_{n-u+1},c_{n-u+2},\ldots,c_n$ is equal to \eqref{eq:323}. We denote the corresponding integers for $C_2$ by $t'$ and $u'$, respectively. We know $u'=0$, for otherwise $c_1,c_2,\ldots,c_{n-1},c'_n$ would end with \begin{equation*} \label{eq:357} a_{n-u+1}-1,a_{n-u+2}-2,a_{n-u+3}-2,\ldots,a_{n-1}-2,a_n-1 \end{equation*} implying that $C_2=\emptyset$. Similarly, $t'=0$ since $c'_n\ge1$. Hence \begin{equation*} \label{eq:358} \overline {C_1}>S_1+D_n-D_{n+s-N}\qquad\text{and}\qquad\underline {C_2}<S_2+D_{n+s}, \end{equation*} and \begin{equation*} \label{eq:359} \underline {C_1}<S_1+D_{n+1}+D_{n+s}\qquad\text{and}\qquad \overline {C_2}>S_2+D_{n+N+1}-D_{n+s-N}. \end{equation*} We are now ready to consider the inequalities in \eqref{eq:352}. The inequalities above imply that \begin{equation*} \label{eq:360} 2\;\overline{C_1}-(\underline{C_1}+\underline{C_2})>S_1+2D_n-2D_{n+s-N}-S_2-D_{n+1}-2D_{n+s}. \end{equation*} Further, $S_2=S_1+D_n$ and $D_{n+s-N}\ge D_{n+s}$ and thus \begin{equation*} \label{eq:361} 2\;\overline{C_1}-(\underline{C_1}+\underline{C_2})>D_n-D_{n+1}-4D_{n+s-N}. \end{equation*} Since \eqref{eq:26} holds for all $i\ge1$ we know there is some $s_0\ge N$ such that \begin{equation*} \label{eq:362} 1-\alpha_{n+1}-4\;\alpha_{n+1}\alpha_{n+2}\ldots\alpha_{n+s-N}>0 \end{equation*} and hence \begin{equation*} \label{eq:363} D_n-D_{n+1}-4D_{n+s-N}>0 \end{equation*} if $s\ge s_0$. Note that the size of $s_0$ is independent of $n$. It follows that if $s\ge s_0$ then $\underline{C_1}+\underline{C_2}<2\;\overline{C_1}$ and we have the desired result. For the second inequality in \eqref{eq:352} we observe that \begin{equation*} \label{eq:364} \overline{C_1}\;\overline{C_2}-\underline{C_2}\;\underline{C_2} >(S_1+D_n-D_{n+s-N})(S_2+D_{n+N+1}-D_{n+s-N})-(S_2+D_{n+s})^2. \end{equation*} Since $S_1\ge0$ and $S_2\ge D_n$ and $D_{n+s-N}\ge D_{n+s}$ we have \begin{equation*} \label{eq:365} \overline{C_1}\;\overline{C_2}-\underline{C_2}\;\underline{C_2} >(D_n-D_{n+s-N})(D_n+D_{n+N+1}-D_{n+s-N})-(D_n+D_{n+s-N})^2 \end{equation*} and hence \begin{equation*} \label{eq:366} \overline{C_1}\;\overline{C_2}-\underline{C_2}\;\underline{C_2} >D_n(D_{n+N+1}-4D_{n+s-N})-D_{n+N+1}D_{n+s-N}. \end{equation*} Clearly $D_n>D_{n+N+1}$ and therefore it suffices to show that if $s$ is large enough then \begin{equation*} \label{eq:367} D_{n+N+1}-4D_{n+s-N}>D_{n+s-N} \end{equation*} or equivalently \begin{equation*} \label{eq:368} 1>5\;\alpha_{n+N+2}\alpha_{n+N+3}\ldots\alpha_{n+s-N}. \end{equation*} As above, this is an easy consequence of \eqref{eq:26}. \end{proof} We can now verify that Hall's Condition~1 holds for the dissection for $\log E(\alpha,s)$.
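Before doing so, we note that the reduction used in the previous proof is elementary: for intervals $C_1=[u_1,v_1]$ and $C_2=[u_2,v_2]$ with $0<u_1\le v_1\le u_2\le v_2$, the condition $|\log C_{12}|\le\min\{|\log C_1|,|\log C_2|\}$ is equivalent to the product inequalities \eqref{eq:352}. The following sketch (ours, purely a sanity check) tests this equivalence on random data.

\begin{verbatim}
import math, random

# |log C12| <= min(|log C1|, |log C2|)  <=>  u1*u2 <= v1^2 and u2^2 <= v1*v2
random.seed(0)
for _ in range(1000):
    u1, v1, u2, v2 = sorted(random.uniform(0.1, 10.0) for _ in range(4))
    log_form = (math.log(u2) - math.log(v1)
                <= min(math.log(v1) - math.log(u1),
                       math.log(v2) - math.log(u2)))
    product_form = (u1 * u2 <= v1**2) and (u2**2 <= v1 * v2)
    assert log_form == product_form
print("log form and product form agree on all samples")
\end{verbatim}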
\begin{lem} \label{thm:dddasd} There is an integer $s_0\ge N$ such that if $s\ge s_0$ and if $C_1$ and $C_2$ are non-empty neighbouring intervals arising at the same stage of the Cantor dissection which produces $E(\alpha,s)$ then \begin{equation*} \label{eq:407} |\log C_{12}|\le\min\{|\log C_1|,|\log C_2|\} \end{equation*} where $C_{12}$ is the open interval lying between $C_1$ and $C_2$. \end{lem} \begin{proof} Let $s\ge N$ and let $C_1$, $C_2$ and $C_{12}$ be as described. We may assume without loss of generality that $C_1$ lies to the left of $C_2$. We know from the proof of Lemma~\ref{thm:abdoas} that it is sufficient to prove the inequalities \begin{equation} \label{eq:408} \underline{C_1}\;\underline{C_2}\le\overline{C_1}\;\overline{C_1}\text{ and } \underline{C_2}\;\underline{C_2}\le\overline{C_1}\;\overline{C_2} \end{equation} hold when $s$ is large enough. We can also make use of statement \eqref{eq:352}. We consider two possibilities for $C_1$. We suppose first that \begin{equation} \label{eq:409} C_1=A(\mathbf{c}_n). \end{equation} In this case $B(c_1,c_2,\ldots,c_n)\ne\emptyset$ and therefore \begin{equation*} \label{eq:410} C_2=B(\mathbf{c}_n). \end{equation*} To see this, we produce a number $\beta$ that belongs to $B(\mathbf{c}_n)$. To this end we note that in the Cantor dissection of $G(\alpha,s)$ the intervals \begin{equation*} \label{eq:411} A(c_1,c_2,\ldots,c_{n-1},a_n-2)\text{ and } A(c_1,c_2,\ldots,c_{n-1},a_n-1) \end{equation*} have no right neighbours since they result from the dissection of $A(c_1,c_2,\ldots,c_{n-1})$ and $B(c_1,c_2,\ldots,c_{n-1})$, respectively. Therefore, either $n=0$ or $c_n\le a_n-3$. It follows from the proof of Lemma~\ref{thm:yahdd} that $\overline C_1$ lies in $E(\alpha,s)$ and has Davenport coefficients \begin{equation*} \label{eq:412} c_1,c_2,\ldots,c_n,a_{n+1}-2,a_{n+2}-2,a_{n+3}-2,\ldots. \end{equation*} Now let $\beta=\sum^\infty_{k=1}b_kD_k$ where $b_1,b_2,b_3,\ldots$ is the sequence \begin{equation*} \label{eq:413} c_1,c_2,\ldots,c_n,a_{n+1}-1,a_{n+2}-2,a_{n+3}-2,a_{n+4}-2,\ldots. \end{equation*} It is straightforward again to check that $\beta\in E(\alpha,s)$. It now follows from \eqref{eq:323} that $\beta$ belongs to an interval of the form $A(c'_1,c'_2,\ldots,c'_n)$ or $B(c'_1,c'_2,\ldots,c'_n)$. By observing that \begin{equation*} \label{eq:414} \beta=S(\mathbf{c}_n)+D_n \end{equation*} and applying the inequalities in \eqref{eq:316}, it can be seen that the only possibility is $\beta\in B(\mathbf{c}_n)\ne\emptyset$. We can now apply Lemmas~\ref{thm:yahdd} and~\ref{thm:oaqpa} to $C_1$ and $C_2$. As usual, it is convenient to write $S=S(\mathbf{c}_n)$. Lemma~\ref{thm:yahdd} implies \begin{equation*} \label{eq:415} \underline{C_1}<S+D_n-D_{n+1}-D_{n+3N}\qquad\text{and}\qquad \overline{C_1}=S+D_n-D_{n+1} \end{equation*} and Lemma~\ref{thm:oaqpa} (whose final statement applies, since $n=0$ or $c_n\le a_n-3$) implies \begin{equation*} \label{eq:416} \underline{C_2}<S+D_n-D_{n+1}+D_{n+s}\qquad\text{and}\qquad \overline{C_2}=S+D_n. \end{equation*} It follows that \begin{equation*} \label{eq:417} 2\;\overline{C_1}-(\underline{C_1}+\underline{C_2})>D_{n+3N}-D_{n+s}. \end{equation*} Since \eqref{eq:26} holds for all $i\ge1$ we know there is some $s_0\ge 3N+1$ such that if $s\ge s_0$ then \begin{equation*} \label{eq:418} 1>\alpha_{n+3N+1}\alpha_{n+3N+2}\ldots\alpha_{n+s}. \end{equation*} We emphasize that the size of $s_0$ does not depend on $n$. For such a choice of $s_0$ we have $D_{n+3N}>D_{n+s}$ and hence $\underline{C_1}+\underline{C_2}<2\;\overline{C_1}$ for all $s\ge s_0$.
An application of \eqref{eq:352} gives the first inequality in \eqref{eq:408} for $s\ge s_0$. For the second inequality in \eqref{eq:408} we observe that \begin{equation*} \label{eq:419} \overline{C_1}\;\overline{C_2}-\underline{C_2}\;\underline{C_2} >(S+D_n-D_{n+1})(S+D_n)-(S+D_n-D_{n+1}+D_{n+s})^2. \end{equation*} Since $S\ge0$ it follows that \begin{equation*} \label{eq:420} \overline{C_1}\;\overline{C_2}-\underline{C_2}\;\underline{C_2} >(D_n-D_{n+1})(D_{n+1}-2D_{n+s})-D_{n+s}^2. \end{equation*} Therefore it suffices to show there is some $s_0$ (which does not depend on $n$) such that \begin{equation*} \label{eq:421} D_n-D_{n+1}>D_{n+s}\qquad\text{and}\qquad D_{n+1}-2D_{n+s}>D_{n+s} \end{equation*} or equivalently \begin{equation*} \label{eq:422} 1>\alpha_{n+1}+\alpha_{n+1}\alpha_{n+2}\ldots\alpha_{n+s}\qquad\text{and}\qquad 1>3\;\alpha_{n+2}\alpha_{n+3}\ldots\alpha_{n+s} \end{equation*} for all $s\ge s_0$. This is easily done with the help of \eqref{eq:26}. The other possibility for $C_1$ is that \begin{equation*} \label{eq:423} C_1=B(\mathbf{c}_n). \end{equation*} It is easy to see that $\beta=\sum^\infty_{k=1}b_kD_k\in A(c_1,c_2,\ldots,c_{n-1},c_n+1)$ where $b_1,b_2,b_3,\ldots$ is the sequence \begin{equation*} \label{eq:425} c_1,c_2,\ldots,c_{n-1},c_n+1,a_{n+1}-2,a_{n+2}-2,a_{n+3}-2,\ldots, \end{equation*} and so $A(c_1,c_2,\ldots,c_{n-1},c_n+1)\ne\emptyset$. Therefore \begin{equation*} \label{eq:424} C_2=A(c_1,c_2,\ldots,c_{n-1},c_n+1). \end{equation*} Again we apply Lemmas~\ref{thm:yahdd} and~\ref{thm:oaqpa} to $C_1$ and $C_2$. This time we write \begin{equation*} \label{eq:428} S_1=S(\mathbf{c}_n)\qquad\text{and}\qquad S_2=S(c_1,c_2,\ldots,c_{n-1},c_n+1). \end{equation*} Note that $S_2=S_1+D_n$. Lemma~\ref{thm:oaqpa} implies \begin{equation*} \label{eq:429} \underline{C_1}<S_1+D_n-D_{n+1}+D_{n+2}+D_{n+s}\qquad\text{and}\qquad \overline{C_1}=S_1+D_n \end{equation*} and Lemma~\ref{thm:yahdd} implies \begin{equation*} \label{eq:430} \underline{C_2}<S_2+D_{n+s}\qquad\text{and}\qquad \overline{C_2}=S_2+D_n-D_{n+1}. \end{equation*} These combine to yield \begin{equation*} \label{eq:431} 2\;\overline{C_1}-(\underline{C_1}+\underline{C_2})>D_{n+1}-D_{n+2}-2D_{n+s}. \end{equation*} Using \eqref{eq:26} we know there is some $s_0\ge1$ (which does not depend on $n$) such that \begin{equation*} \label{eq:432} 1>\alpha_{n+2}+2\;\alpha_{n+2}\alpha_{n+3}\ldots\alpha_{n+s} \end{equation*} and hence $D_{n+1}>D_{n+2}+2D_{n+s}$ for all $s\ge s_0$. As a result $\underline{C_1}+\underline{C_2}<2\;\overline{C_1}$ if $s\ge s_0$ and using \eqref{eq:352} we conclude that the first inequality in \eqref{eq:408} holds if $s$ is large enough. To see that the second inequality in \eqref{eq:408} is true we note that \begin{equation*} \label{eq:433} \overline{C_1}\;\overline{C_2}-\underline{C_2}\;\underline{C_2} >(S_1+D_n)(S_2+D_n-D_{n+1})-(S_2+D_{n+s})^2. \end{equation*} Since $S_2=S_1+D_n$ and $S_1\ge0$ it follows that \begin{equation*} \label{eq:434} \overline{C_1}\;\overline{C_2}-\underline{C_2}\;\underline{C_2} >D_n(D_n-D_{n+1}-2D_{n+s})-D_{n+s}^2. \end{equation*} Therefore it suffices to show there is some $s_0$ (which does not depend on $n$) such that \begin{equation*} \label{eq:435} D_n-D_{n+1}-2D_{n+s}>D_{n+s} \end{equation*} or equivalently \begin{equation*} \label{eq:436} 1>\alpha_{n+1}+3\;\alpha_{n+1}\alpha_{n+2}\ldots\alpha_{n+s} \end{equation*} for all $s\ge s_0$. Again this is easily done with the help of \eqref{eq:26}.
\end{proof} This sequence of lemmas leads to the following key precursor to the main result. \begin{thm} \label{thm:7gadaa} There is an integer $s_0\ge N$ such that if $s\ge s_0$ and $R=N/(N+1)$ and \begin{equation*} \label{eq:437} P_1=\frac{R^{2s}}{(1-R^{(s-N)})^2}\qquad\text{and}\qquad P_2=1-R^{(s-N)} \end{equation*} then $P_2\ge P_1$ and the interval $[P_1,P_2]$ lies in the product of the sets $E(\alpha,s)$ and $F(\alpha,s)$. \end{thm} \begin{proof} The proof applies Theorem~2.2 in Hall's paper \cite{m.47:_sum_and_produc_of_contin_fract} to the sum \begin{equation} \label{eq:438} \log E(\alpha,s)\;+\;\log F(\alpha,s). \end{equation} It is appropriate to outline why this is possible. In the last section we showed that the sets $E(\alpha,s)$ and $F(\alpha,s)$ are the result of Cantor dissections of the intervals $G(\alpha,s)$ and $H(\alpha,s)$. By applying the logarithm function it follows that the sets $\log E(\alpha,s)$ and $\log F(\alpha,s)$ are the result of Cantor dissections of the intervals $\log G(\alpha,s)$ and $\log H(\alpha,s)$. We know from Lemmas~\ref{thm:abdoas} and~\ref{thm:dddasd} that these dissections satisfy Condition~1 in Hall's paper, if $s$ is large enough. In other words, there is some $s_0\ge N$ such that for all $s\ge s_0$ Hall's theorem applies to the sum \eqref{eq:438}. Note that since $R<1$ we can choose $s_0$ so that we also have $P_2>P_1$. Hall's theorem implies that the sum \eqref{eq:438} contains the interval \begin{equation*} \label{eq:439} [\log x_2+\log y_2-2\min\{\log x_2-\log x_1,\;\log y_2-\log y_1\} ,\;\log x_2+\log y_2] \end{equation*} where \begin{equation*} \label{eq:440} x_1=\underline G(\alpha,s)\qquad x_2=\overline G(\alpha,s)\qquad y_1=\underline H(\alpha,s)\qquad y_2=\overline H(\alpha,s). \end{equation*} It follows immediately that the product of $E(\alpha,s)$ and $F(\alpha,s)$ contains the interval \begin{equation*} \label{eq:441} [x_2y_2(\max\{x_1/x_2,y_1/y_2\})^2,\;x_2y_2]. \end{equation*} To prove the theorem it suffices to show \begin{equation} \label{eq:442} x_2y_2(\max\{x_1/x_2,y_1/y_2\})^2\le P_1\qquad\text{and}\qquad x_2y_2\ge P_2. \end{equation} To this end we observe that $\overline G(\alpha,s)=\overline B(\;)$ and $\overline H(\alpha,s)=\overline C(\;)$ and hence Lemmas~\ref{thm:ousds} and~\ref{thm:oaqpa} imply $x_2=1$ and $y_2>1-D_{s-N}$. Therefore $x_2y_2>1-D_{s-N}$. We know $D_{s-N}=\alpha_1\alpha_2\ldots\alpha_{s-N}$ and since \eqref{eq:26} holds for all $i\ge1$ it is easy to see that the second inequality in \eqref{eq:442} is true. For the first inequality in \eqref{eq:442} we observe that $\underline G(\alpha,s)=\underline A(\;)$ and $\underline H(\alpha,s)=\underline C(\;)$ and hence Lemmas~\ref{thm:ousds} and~\ref{thm:yahdd} imply $x_1<D_s$ and $y_1<D_s$. Thus \begin{equation*} \label{eq:443} x_1/x_2<D_s\qquad\text{and}\qquad y_1/y_2<D_s/(1-D_{s-N}). \end{equation*} Clearly $x_2y_2<1$ and it follows that \begin{equation*} \label{eq:444} x_2y_2(\max\{x_1/x_2,y_1/y_2\})^2<\frac{D_s^2}{(1-D_{s-N})^2}. \end{equation*} The truth of the first inequality in \eqref{eq:442} can now be seen by expressing $D_{s-N}$ and $D_s$ in terms of the numbers $\alpha_i$ and applying \eqref{eq:26}. \end{proof} Finally, we return to the sets $E(\alpha^-,s)$ and $F(\alpha^+_{r+1},s)$, where $r\ge1$. Recall that $\alpha^-$, $\alpha^+$ and $\alpha^+_{r+1}$ are defined by \eqref{eq:200}. We know from Lemma~\ref{thm:616161} that $\alpha^-$ and $\alpha^+$ satisfy all the constraints we have placed on $\alpha$.
Clearly the same is true of $\alpha^+_{r+1}$. We can, therefore, replace $E(\alpha,s)$ and $F(\alpha,s)$ in Theorem~\ref{thm:7gadaa} with $E(\alpha^-,s)$ and $F(\alpha^+_{r+1},s)$, respectively. In this manner we obtain the following corollary. \begin{corollary} There is an integer $s_0\ge N$ such that if $s\ge s_0$ and $R=N/(N+1)$ and \begin{equation*} \label{eq:445} P_1=\frac{R^{2s}}{(1-R^{(s-N)})^2}\qquad\text{and}\qquad P_2=1-R^{(s-N)} \end{equation*} then $P_2\ge P_1$ and the product of the sets $E(\alpha^-,s)$ and $F(\alpha^+_{r+1},s)$, where $r\ge1$, contains the interval $[P_1,P_2]$. \end{corollary} \subsection{The existence of Hall's ray} In this section, we prove the existence of a Hall ray in the set ${\mathcal S}^{+}(\alpha)$ in~\eqref{eq:4}; that is, we prove Theorem~\ref{thm:1}. \begin{proof}[Proof of Theorem~\ref{thm:1}] The proof of this theorem consists of showing that the set ${\mathcal S}^{+}(\alpha)$ contains a chain of intersecting intervals whose endpoints converge to zero. We shall construct the chain with the help of Theorem~\ref{thm:main_r_s} and the Corollary to Theorem~\ref{thm:7gadaa}. Let $s_0\ge N$ be the integer mentioned in the Corollary to Theorem~\ref{thm:7gadaa} and define $r_0$ to be the smallest integer which is greater than or equal to $s_0L$. Note that since $L\ge1$ we have $s_0=\lfloor r_0/L\rfloor$ where as usual $\lfloor x\rfloor$ denotes the largest integer which is less than or equal to $x$. Now let $r$ be an integer with $r\ge r_0$ and put $s=\lfloor r/L\rfloor$. Since $r/s\ge L$ we can apply Theorem~\ref{thm:main_r_s}. Thus for every number $x$ in the product of the sets $E(\alpha^-,s)$ and $F(\alpha^+_{r+1},s)$ there is some $\beta$ with $0<\beta<1$ such that \begin{equation*} \label{eq:450} {\mathcal M}^{+}(\alpha,\beta)=\frac{xD^+_r}{1-\alpha^-\alpha^+}. \end{equation*} Because $r\ge r_0$ we know that $s\ge s_0$. Therefore the Corollary to Theorem~\ref{thm:7gadaa} implies $P_1\le P_2$ and the product of the sets $E(\alpha^-,s)$ and $F(\alpha^+_{r+1},s)$ contains the interval $[P_1,P_2]$ where \begin{equation*} \label{eq:451} P_1=\frac{R^{2s}}{(1-R^{(s-N)})^2}\qquad\text{and}\qquad P_2=1-R^{(s-N)} \end{equation*} and $R=N/(N+1)$. It follows that for every number $\mu$ in the interval \begin{equation} \label{eq:452} \left[\frac{P_1D^+_r}{1-\alpha^-\alpha^+}\; ,\;\frac{P_2D^+_r}{1-\alpha^-\alpha^+}\right] \end{equation} there is some $\beta$ with $0<\beta<1$ such that ${\mathcal M}^{+}(\alpha,\beta)=\mu$. In other words the interval \eqref{eq:452} lies in the set ${\mathcal S}^{+}(\alpha)$. Since $r$ was any integer with $r\ge r_0$ we conclude that ${\mathcal S}^{+}(\alpha)$ contains a chain of intervals. By choosing $s_0$ large enough we can ensure that the intervals just mentioned intersect. To this end let $s'=\lfloor (r+1)/L\rfloor$ and set \begin{equation*} \label{eq:453} P'_1=\frac{R^{2s'}}{(1-R^{(s'-N)})^2}\qquad\text{and}\qquad P'_2=1-R^{(s'-N)}. \end{equation*} Note that $s'\ge s$. According to the argument above, the interval for the integer $r+1$ is \begin{equation*} \label{eq:454} \left[\frac{P'_1D^+_{r+1}}{1-\alpha^-\alpha^+}\; ,\;\frac{P'_2D^+_{r+1}}{1-\alpha^-\alpha^+}\right]. \end{equation*} It will overlap the interval \eqref{eq:452} if both the inequalities \begin{equation} \label{eq:455} \frac{P'_1D^+_{r+1}}{1-\alpha^-\alpha^+}\; \le\;\frac{P_2D^+_r}{1-\alpha^-\alpha^+}\qquad\text{and}\qquad \frac{P_1D^+_r}{1-\alpha^-\alpha^+}\; \le\;\frac{P'_2D^+_{r+1}}{1-\alpha^-\alpha^+} \end{equation} hold.
These inequalities become $P'_1\alpha^+_{r+1}\le P_2$ and $P_1\le P'_2\alpha^+_{r+1}$ and, substituting for $P_1$, $P_2$, $P'_1$ and $P'_2$ and rearranging, we have \begin{equation} \label{eq:456} R^{2s'}\alpha^+_{r+1}\le(1-R^{(s'-N)})^2(1-R^{(s-N)}) \end{equation} and \begin{equation} \label{eq:457} R^{2s}\le(1-R^{(s-N)})^2(1-R^{(s'-N)})\alpha^+_{r+1}. \end{equation} Now we observe that $R<1$ and hence the quantities $R^{2s}$ and $R^{(s-N)}$ and $R^{2s'}$ and $R^{(s'-N)}$ all converge to zero as $s$ and $s'$ increase to infinity. Since $s'\ge s\ge s_0$ and the term $\alpha^+_{r+1}$ satisfies $1/M<\alpha^+_{r+1}<N/(N+1)$, it is clear that by choosing $s_0$ sufficiently large we can ensure that \eqref{eq:455} always holds. We conclude as indicated that $s_0$ can be chosen so that successive members in the chain of intervals in ${\mathcal S}^{+}(\alpha)$ intersect one another. Evidently the endpoints of the interval \eqref{eq:452} converge to zero as $r$ increases to infinity. \end{proof} \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:Introduction} \input{intro} \section{Preliminaries and Notation} \label{sec:Preliminaries} We use the lowercase letters $x$, $y$, $z$, $g$ and $\gamma$ to refer to vectors throughout the paper. Normally parenthesized superscripts, like $x^{(k)}$, refer to vectors as well, whereas subscripts refer to the components of the corresponding vectors. For any positive integer $k$, $[k] := \{1,\ldots,k\}$. $\mathop{\mathrm{sign}}(a)$ is the interval-valued sign function, i.e. $\mathop{\mathrm{sign}}(a) = \{1\}$ or $\{-1\}$ corresponding to $a>0$ or $a<0$. For $a = 0$, $\mathop{\mathrm{sign}}(a) = [-1,1]$. Unless otherwise specified, $\| \cdot \|$ refers to the Euclidean norm $\nbr{x} := \left(\sum_{i}x_{i}^{2}\right)^ {\frac{1}{2}}$, $\|\cdot\|_1$ denotes the $l_1$ norm $\nbr{x}_1 = \sum_i|x_i|$, and $\inner{\cdot}{\cdot}$ denotes the Euclidean dot product $\inner{x}{y} = \sum_{i} x_{i}y_{i}$. Throughout the paper, inequalities between vectors are to be interpreted componentwise, \emph{i.e.}\ $x \ge y$ means that $x_i \ge y_i$ for all $i \in [d]$. The following definition will be used extensively in the paper: \begin{definition} \label{def:lip-cont-grad} Suppose a function $f: \RR^d \to \RR$ is differentiable on $\RR^d$. Then $f$ is said to have Lipschitz continuous gradient ({\textit{l.c.g}}) with respect to a norm $\|\cdot\|$ if there exists a constant $L$ such that \begin{align} \label{eq:lip-cont-grad} \| \nabla f(x) - \nabla f(x')\| \leq L \| x - x'\| \qquad \forall\ x, x'\in \RR^d. \end{align} \end{definition} An important fact (see, e.g., \cite[Thm. 2.1.5]{Nesterov03a}) we will use is that if a function $f$ has Lipschitz continuous gradient with respect to a norm $\|\cdot\|$, then it satisfies the following generalized bounded Hessian property \begin{align} \label{eq:generalized_hessian} f(x) \leq f(x') + \inner{{\nabla} f(x')}{x-x'} + \frac{L}{2}\|x-x'\|^2. \end{align} An operator $T:\mathbb{R}^d \to \mathbb{R}^d$ is said to be {\em isotone} iff \begin{equation} \label{eq:isotone} x \ge y \quad \Rightarrow \quad T(x) \ge T(y). \end{equation} An important isotone operator that we will frequently deal with is the {\em shrinkage} operator $\mathbf{S}_\tau:\mathbb{R}^d \to \mathbb{R}^d$ defined, for $\tau > 0$, as \begin{align} \label{eq:shrinkage} [\mathbf{S}_\tau(x)]_i := S_\tau(x_i) \end{align} where $S_\tau(a)$ is the scalar shrinkage operator: \begin{equation} \label{eq:scshrinkage} S_\tau(a) := \begin{cases} a - \tau & a > \tau \\ 0 & a \in [-\tau,\tau] \\ a + \tau & a < -\tau. \end{cases} \end{equation} \section{Algorithms} \label{sec:Algorithms} We will consider three iterative algorithms for solving the minimization problem~\eqref{eq:Reg_l_1_loss}. All of them enjoy the descent property: $F(x^{(k+1)}) \le F(x^{(k)})$ for successive iterates $x^{(k)}$ and $x^{(k+1)}$. \begin{algorithm} \begin{algorithmic} \STATE Initialize: Choose an appropriate initial point $x^{(0)}$. \FOR{$k=0,1,\ldots$} \STATE $x^{(k+1)} \leftarrow \mathbf{S}_{\lambda/L}(x^{(k)} - \frac{\nabla f(x^{(k)})}{L})$ \ENDFOR \end{algorithmic} \caption{Gradient Descent ({\sc GD})} \label{alg:gd} \end{algorithm} Algorithm~\ref{alg:gd}, known as Gradient Descent ({\sc GD}), is one of the most common iterative algorithms used for convex optimization (see \cite{BeckTeb09}, \cite{DucSing09} and references therein).
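For concreteness, the following minimal Python sketch (ours, purely illustrative) transcribes the shrinkage operator \eqref{eq:scshrinkage} and one iteration of Algorithm~\ref{alg:gd}; the arguments \texttt{grad\_f}, \texttt{L} and \texttt{lam} (a gradient oracle, the Lipschitz constant, and $\lambda$) are assumed to be supplied by the user.

\begin{verbatim}
import numpy as np

def shrink(x, tau):
    # Componentwise shrinkage S_tau: sign(a) * max(|a| - tau, 0).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def gd_step(x, grad_f, L, lam):
    # One GD iteration: x_next = S_{lam/L}(x - grad f(x) / L).
    return shrink(x - grad_f(x) / L, lam / L)
\end{verbatim}

For the quadratic example \eqref{eq:quadratic}, whose gradient is $Ax+b$, one may take \texttt{grad\_f = lambda x: A @ x + b}.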
Algorithm~\ref{alg:gd} is based on the following idea: using the upper bound \eqref{eq:generalized_hessian} at the current iterate $x^{(k)}$, we can come up with the following global upper approximation of $F$: \[ F(x) \leq f(x^{(k)}) + \inner{ \nabla f(x^{(k)}) }{ x - x^{(k)} } + \frac{L}{2}\| x - x^{(k)} \|^2 + \lambda \| x \|_1 \ . \] It is easy to show that the above approximation is minimized at $x = \mathbf{S}_{\lambda/L}(x^{(k)} - \nabla f(x^{(k)})/L)$ (\cite{BeckTeb09}). This is the next iterate for the {\sc GD}\ algorithm. We call it ``Gradient Descent'' as it reduces to the following algorithm \[ x^{(k+1)} = x^{(k)} - \frac{\nabla f(x^{(k)})}{L} \] when there is no regularization (i.e. $\lambda = 0$). Finite time convergence rates for the {\sc GD}\ algorithm are well known. \begin{theorem} \label{thm:gd} Let $\cbr{ x^{(k)} }$ be a sequence generated by the {\sc GD}\ algorithm. Then, for any minimizer $x^\star$ of~\eqref{eq:Reg_l_1_loss}, and $\forall k \ge 1$, \[ F(x^{(k)}) - F(x^\star) \le \frac{ L \| x^\star - x^{(0)} \|^2}{2\,k} \] \end{theorem} The above theorem can be found in, e.g., \cite[Thm. 3.1]{BeckTeb09}. \begin{algorithm} \begin{algorithmic} \STATE Initialize: Choose an appropriate initial point $y^{(0)}$. \FOR{$k=0,1,\ldots$} \STATE $y^{(k,0)} \leftarrow y^{(k)}$ \FOR{$j=1$ to $d$} \STATE $y^{(k,j)}_j \leftarrow S_{\lambda/L}(y^{(k,j-1)}_j - [\nabla f(y^{(k,j-1)})]_j \,/\, L)$ \STATE $\forall i\neq j$, $y^{(k,j)}_i \leftarrow y^{(k,j-1)}_i$ \ENDFOR \STATE $y^{(k+1)} \leftarrow y^{(k,d)}$ \ENDFOR \end{algorithmic} \caption{Cyclic Coordinate Descent ({\sc CCD})} \label{alg:ccd} \end{algorithm} The second algorithm, Cyclic Coordinate Descent ({\sc CCD}), instead of using the current gradient to update all components simultaneously, goes through them in a cyclic fashion. The next ``outer'' iterate $y^{(k+1)}$ is obtained from $y^{(k)}$ by creating a series of $d$ intermediate or ``inner'' iterates $y^{(k,j)}$, $j \in [d]$, where $y^{(k,j)}$ differs from $y^{(k,j-1)}$ only in the $j$th coordinate, whose value can be found by minimizing the following one-dimensional over-approximation of $F$ over the scalar $\alpha$: \begin{equation} \label{eq:1d_over_approx} f(y^{(k,j-1)}) + \lambda \sum_{i \neq j} |y^{(k,j-1)}_i| + [ \nabla f(y^{(k,j-1)}) ]_j \cdot (\alpha - y^{(k,j-1)}_j) + \frac{L}{2} (\alpha - y^{(k,j-1)}_j)^2 + \lambda |\alpha|\ . \end{equation} It can again be verified that the above minimization has the closed form solution \begin{align*} \alpha = S_{\lambda/L}\rbr{y^{(k,j-1)}_j - \frac{[{\nabla} f(y^{(k,j-1)})]_j}{L}} \end{align*} which is what {\sc CCD}\ chooses $y^{(k,j)}_j$ to be. Once all coordinates have been cycled through, $y^{(k+1)}$ is simply set to be $y^{(k,d)}$. Let us point out that in an actual implementation, the inner iterates $y^{(k,j)}$ would not be computed separately but $y^{(k)}$ would be updated ``in place''. For analysis purposes, it is convenient to give names to the intermediate iterates. Note that for all $j \in \cbr{0,1,\hdots,d}$, the inner iterate looks like \[ y^{(k,j)} = \sbr{y^{(k+1)}_1, \hdots,y^{(k+1)}_j,y^{(k)}_{j+1},\hdots, y^{(k)}_d}\ . \] In the {\sc CCD}\ algorithm, updating the $j$th coordinate uses the newer gradient value ${\nabla} f(y^{(k,j-1)})$ rather than ${\nabla} f(y^{(k)})$ which is used in {\sc GD}. This makes {\sc CCD}\ inherently sequential. In contrast, different coordinate updates in {\sc GD}\ can easily be done by different processors in parallel.
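In the same illustrative vein (reusing the \texttt{shrink} helper from the sketch above), one outer {\sc CCD}\ iteration can be transcribed as follows; the full gradient is recomputed before each coordinate update purely for clarity of correspondence with Algorithm~\ref{alg:ccd}.

\begin{verbatim}
def ccd_epoch(y, grad_f, L, lam):
    # One outer CCD iteration: the gradient is evaluated at the
    # freshest inner iterate before updating coordinate j, which is
    # exactly what distinguishes CCD from GD.
    y = y.copy()
    for j in range(len(y)):
        g_j = grad_f(y)[j]
        y[j] = shrink(y[j] - g_j / L, lam / L)
    return y
\end{verbatim}

For quadratic $f$ the coordinate $[Ay+b]_j$ can be maintained incrementally across inner steps, which is how one would implement this in practice; the naive form above costs a full gradient evaluation per coordinate.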
However, on a single processor, we might hope {\sc CCD}\ converges faster than {\sc GD}\ due to the use of ``fresh'' information. Therefore, it is natural to expect that {\sc CCD}\ should enjoy the finite time convergence rate given in Theorem~\ref{thm:gd} (or better). We show this is indeed the case under an {\em isotonicity assumption} stated in Section~\ref{sec:Analysis} below. Under the assumption, we are actually able to show the correctness of the intuition that {\sc CCD}\ should converge faster than {\sc GD}. \begin{algorithm} \begin{algorithmic} \STATE Initialize: Choose an appropriate initial point $z^{(0)}$. \FOR{$k=0,1,\ldots$} \STATE $z^{(k,0)} \leftarrow z^{(k)}$ \FOR{$j=1$ to $d$} \STATE $z^{(k,j)}_j \leftarrow \mathop{\mathrm{argmin}}_{\alpha}\ F(z^{(k,j-1)}_1,\ldots, z^{(k,j-1)}_{j-1},\alpha,z^{(k,j-1)}_{j+1},\ldots,z^{(k,j-1)}_d)$ \STATE $\forall i\neq j$, $z^{(k,j)}_i \leftarrow z^{(k,j-1)}_i$ \ENDFOR \STATE $z^{(k+1)} \leftarrow z^{(k,d)}$ \ENDFOR \end{algorithmic} \caption{Cyclic Coordinate Minimization} \label{alg:ccm} \end{algorithm} The third and final algorithm that we consider is Cyclic Coordinate Minimization ({\sc CCM}). The only way it differs from {\sc CCD}\ is that instead of minimizing the one-dimensional over-approximation~\eqref{eq:1d_over_approx}, it chooses $z^{(k,j)}_j$ to minimize \[ F(z^{(k,j-1)}_1,\ldots,z^{(k,j-1)}_{j-1},\alpha,z^{(k,j-1)} _{j+1},\ldots,z^{(k,j-1)}_d) \] over $\alpha$. In a sense, {\sc CCM}\ is not actually an algorithm as it does not specify how to minimize $F$ for any arbitrary smooth function $f$. An important case when the minimum can be computed exactly is when $f$ is quadratic as in \eqref{eq:quadratic}. In that case, we have \[ z^{(k,j)}_j = S_{\lambda/A_{j,j}} \left( z^{(k,j-1)}_j - \frac{[Az^{(k,j-1)} + b]_j}{A_{j,j}} \right)\ . \] If there is no closed form solution, then we might have to resort to numerical minimization in order to implement {\sc CCM}. This is usually not a problem since one-dimensional convex functions can be minimized numerically to an extremely high degree of accuracy in a few steps. For the purpose of analysis, we will assume that an exact minimum is found. Again, intuition suggests that the accuracy of {\sc CCM}\ after any fixed number of iterations should be better than that of {\sc CCD}\ since {\sc CCD}\ only minimizes an over-approximation. Under the same isotonicity assumption that we mentioned above, we can show that this intuition is indeed correct. We end this section with a cautionary remark regarding terminology. In the literature, {\sc CCM}\ appears much more frequently than {\sc CCD}\ and it is actually the former that is often referred to as ``Cyclic Coordinate Descent'' (see \cite{HastTib07} and references therein). Our reasons for considering {\sc CCD}\ are: (i) it is a nice, efficient alternative to {\sc CCM}, and (ii) a stochastic version of {\sc CCD} (where the coordinate to update is chosen randomly and not cyclically) is already known to enjoy a finite time $O(1/k)$ expected convergence rate (\cite{ShaiAmbuj09}). \section{Analysis} \label{sec:Analysis} We already mentioned the known convergence rate for {\sc GD}\ (Theorem~\ref{thm:gd}) above. Before delving into the analysis, it is necessary to state an assumption on $f$ which, accompanied by appropriate starting conditions, results in particularly interesting properties of the convergence behavior of {\sc GD}, as described in Lemma~\ref{lem:GD_compare}.
The {\sc GD}\ algorithm generates iterates by applying the operator \begin{align} \label{eq:GD_operator} T_{{\sc GD}}(x) := \mathbf{S}_{\lambda/L}\left( x - \frac{\nabla f(x) }{ L} \right) \end{align} repeatedly. It turns out that if $T_{{\sc GD}}$ is an isotone operator then the {\sc GD}\ iterates satisfy Lemma~\ref{lem:GD_compare}, which is essential for our convergence analysis. The above operator is a composition of $\mathbf{S}_{\lambda/L}$, an isotone operator, and $\mathbf{I} - \nabla f/L$ (where $\mathbf{I}$ denotes the identity operator). To ensure overall isotonicity, it suffices to assume that $\mathbf{I} - \nabla f/L$ is isotone. This is formally stated as: \begin{assumption} The operator $ x \mapsto x - \frac{\nabla f(x)}{L} $ is isotone. \end{assumption} Similar assumptions appear in the literature comparing Jacobi and Gauss-Seidel methods for solving linear equations~\cite[Chap. 2]{BertTsit89}. When the function $f$ is quadratic as in~\eqref{eq:quadratic}, our assumption is equivalent to assuming that the off-diagonal entries in $A$ are non-positive, i.e. $A_{i,j} \le 0$ for all $i\neq j$. For a general smooth $f$, the following condition is sufficient to make the assumption true: $f$ is twice-differentiable and the Hessian $\nabla^2f(x)$ at any point $x$ has non-positive off-diagonal entries. In the next few subsections, we will see how the isotonicity assumption leads to an isotonically decreasing (or increasing) behavior of {\sc GD}, {\sc CCD}\ and {\sc CCM}\ iterates under appropriate starting conditions. To specify what these starting conditions are, we need the notions of super- and subsolutions. \begin{definition} A vector $x$ is a supersolution iff $ x \ge \mathbf{S}_{\lambda}\left( x - \nabla f(x) \right) $. Analogously, $x$ is a subsolution iff $ x \le \mathbf{S}_{\lambda}\left( x - \nabla f(x) \right) $. \end{definition} Since the inequalities above are vector inequalities, an arbitrary $x$ may neither be a supersolution nor a subsolution. The names ``supersolution'' and ``subsolution'' are justified because equality holds in the definitions above, \emph{i.e.}\ $ x = \mathbf{S}_{\lambda}\left( x - \nabla f(x) \right) $ iff $x$ is a minimizer of $F$. To see this, note that subgradient optimality conditions say that $x$ is a minimizer of $F = f + \lambda \| \cdot \|_1$ iff for all $j \in [d]$ \begin{align} \label{eq:optimality_cond} 0 \in [\nabla f(x)]_j + \lambda \mathop{\mathrm{sign}}(x_j)\ . \end{align} Further, it is easy to see that, \begin{align} \label{eq:equivalence} \forall a,b \in \mathbb{R},\ \tau >0, \qquad 0 \in b + \lambda \mathop{\mathrm{sign}}(a) \qquad \Leftrightarrow \qquad a = S_{\lambda/\tau}(a - b/\tau) \end{align} We prove a couple of properties of super- and subsolutions that will prove useful later. The first property refers to the scale invariance of the definition of super- and subsolutions and the second property is the monotonicity of a single variable function. \begin{lemma} \label{lem:scale} If for any $\tau > 0$, \begin{align} \label{eq:scale} x \ge \mathbf{S}_{\lambda/\tau}\left( x - \frac{\nabla f(x)}{\tau} \right) \end{align} then $x$ is a supersolution. If $x$ is a supersolution then the above inequality holds for all $\tau > 0$. Similarly, if for any $\tau > 0$, \[ x \le \mathbf{S}_{\lambda/\tau}\left( x - \frac{\nabla f(x)}{\tau} \right) \] then $x$ is a subsolution. If $x$ is a subsolution then the above inequality holds for all $\tau > 0$.
\end{lemma} \begin{proof} See Appendix \ref{sec:scale_proof} \end{proof} \begin{lemma} \label{lem:monotonic} If $x$ is a supersolution (resp. subsolution) then for any $j$, the function \[ \tau \mapsto S_{\lambda/\tau}\left( x_j - \frac{[\nabla f(x)]_j}{\tau} \right) \] is monotonically nondecreasing (resp. nonincreasing). \end{lemma} \begin{proof} See Appendix \ref{sec:monotonic_proof} \end{proof} \subsection{Gradient Descent} \label{subsec:GD} \begin{lemma} \label{lem:GD_compare} If $x^{(0)}$ is a supersolution and $\cbr{x^{(k)}}$ is the sequence of iterates generated by the {\sc GD}\ algorithm then $\forall k \geq 0$, \begin{align*} 1)\quad x^{(k+1)} &\leq x^{(k)} & 2)\quad x^{(k)} \text{ is a supersolution} \end{align*} If $x^{(0)}$ is a subsolution and $\cbr{x^{(k)}}$ is the sequence of iterates generated by the {\sc GD}\ algorithm then $\forall k \geq 0$, \begin{align*} 1)\quad x^{(k+1)} &\geq x^{(k)} & 2) \quad x^{(k)} \text{ is a subsolution} \end{align*} \end{lemma} \begin{proof} We only prove the supersolution case. The proof for the subsolution case is analogous. We start with a supersolution $x^{(0)}$. Consider the operator \begin{align*} T_{{\sc GD}}(x) := \mathbf{S}_{\lambda/L}\left(x - \frac{{\nabla} f(x)}{L}\right) \end{align*} given by \eqref{eq:GD_operator}. By the isotonicity assumption, $T_{{\sc GD}}$ is an isotone operator. We will prove by induction that $T_{{\sc GD}}(x^{(k)}) \le x^{(k)}$. This proves that $x^{(k+1)} \le x^{(k)}$ since $x^{(k+1)} = T_{{\sc GD}}(x^{(k)})$. Using Lemma~\ref{lem:scale}, the second claim follows by the definition of the $T_{{\sc GD}}$ operator. The base case $T_{{\sc GD}}(x^{(0)}) \le x^{(0)}$ is true by Lemma~\ref{lem:scale} since $x^{(0)}$ is given to be a supersolution. Now assume $T_{{\sc GD}}(x^{(k)}) \le x^{(k)}$. Applying the isotone operator $T_{{\sc GD}}$ on both sides we get $T_{{\sc GD}}(T_{{\sc GD}}(x^{(k)})) \le T_{{\sc GD}}(x^{(k)})$. This is the same as $T_{{\sc GD}}(x^{(k+1)}) \le x^{(k+1)}$ by definition of $x^{(k+1)}$ which completes our inductive claim. \end{proof} \subsection{Cyclic Coordinate Descent ({\sc CCD})} \label{subsec:CCD} \begin{lemma} \label{lem:CCD_Compare} If $y^{(0)}$ is a supersolution and $\cbr{y^{(k)}}$ is the sequence of iterates generated by the {\sc CCD}\ algorithm then $\forall k \geq 0$, \begin{align*} 1) \quad y^{(k+1)} &\leq y^{(k)} & 2) \quad y^{(k)} \text{ is a supersolution} \end{align*} If $y^{(0)}$ is a subsolution and $\cbr{y^{(k)}}$ is the sequence of iterates generated by the {\sc CCD}\ algorithm then $\forall k \geq 0$, \begin{align*} 1)\quad y^{(k+1)} &\geq y^{(k)} & 2)\quad y^{(k)} \text{ is a subsolution} \end{align*} \end{lemma} \begin{proof} We will only prove the supersolution case as the subsolution proof is analogous. We start with a supersolution $y^{(0)}$. We will prove the following: If $y^{(k)}$ is a supersolution then, \begin{equation} \label{eq:claim1} y^{(k+1)} \le y^{(k)} \ , \end{equation} \begin{equation} \label{eq:claim2} y^{(k+1)} \text{ is a supersolution} \end{equation} Then the lemma follows by induction on $k$. Let us make the induction assumption that $y^{(k)}$ is a supersolution and try to prove~\eqref{eq:claim1} and~\eqref{eq:claim2}. To prove these, we will show that $y^{(k,j)} \le y^{(k)}$ and $y^{(k,j)}$ is a supersolution by induction on $j \in \cbr{0,1,\hdots,d}$. This proves \eqref{eq:claim1} and \eqref{eq:claim2} for $y^{(k+1)}$ since $y^{(k+1)} = y^{(k,d)}$.
For the base case ($j=0$) of the induction, note that $y^{(k,0)} \le y^{(k)}$ is trivial since the two vectors are equal. For the same reason, $y^{(k,0)}$ is a supersolution since we have assumed $y^{(k)}$ to be a supersolution. Now assume $y^{(k,j-1)} \le y^{(k)}$ and $y^{(k,j-1)}$ is a supersolution for some $j > 0$. We want to show that $y^{(k,j)} \le y^{(k)}$ and $y^{(k,j)}$ is a supersolution. Since $y^{(k,j-1)}$ and $y^{(k,j)}$ differ only in the $j$th coordinate, to show that $y^{(k,j)} \le y^{(k)}$ given $y^{(k,j-1)} \le y^{(k)}$, it suffices to show that $y^{(k,j)} \le y^{(k,j-1)}$, i.e. \begin{equation} \label{eq:jthentry} y^{(k,j)}_j \le y^{(k,j-1)}_j = y^{(k)}_j \ . \end{equation} Since $y^{(k,j-1)} \le y^{(k)}$, applying the isotone operator $\mathbf{I} - \nabla f/L$ on both sides and taking the $j$th coordinate gives, \[ y^{(k,j-1)}_j - \frac{ [\nabla f(y^{(k,j-1)})]_j }{L} \le y^{(k)}_j - \frac{ [\nabla f(y^{(k)})]_j }{L} \] Applying the scalar shrinkage operator on both sides gives, \begin{align*} S_{\lambda/L}\left(y^{(k,j-1)}_j - \frac{ [\nabla f(y^{(k,j-1)})]_j }{L}\right) &\le S_{\lambda/L}\left(y^{(k)}_j - \frac{ [\nabla f(y^{(k)})]_j }{L}\right) \le y^{(k)}_j \end{align*} The left hand side is $y^{(k,j)}_j$ by definition while the second inequality follows because $y^{(k)}$ is a supersolution. Thus, we have proved~\eqref{eq:jthentry}. Now we prove that $y^{(k,j)}$ is a supersolution. Note that we have already shown $y^{(k,j)} \le y^{(k,j-1)}$. Applying the isotone operator $\mathbf{I} - \frac{\nabla f}{L}$ on both sides gives, \begin{gather} \label{eq:forj} y^{(k,j)}_j - \frac{ [ \nabla f(y^{(k,j)}) ]_j}{L} \le y^{(k,j-1)}_j - \frac{ [ \nabla f(y^{(k,j-1)}) ]_j}{L} \ , \\ \label{eq:forothers} \forall i\neq j,\ y^{(k,j)}_i - \frac{ [ \nabla f(y^{(k,j)}) ]_i}{L} \le y^{(k,j-1)}_i - \frac{ [ \nabla f(y^{(k,j-1)}) ]_i}{L} \ . \end{gather} Applying a scalar shrinkage on both sides of~\eqref{eq:forj} gives, \[ S_{\lambda/L}\left(y^{(k,j)}_j - \frac{ [ \nabla f(y^{(k,j)}) ]_j}{L}\right) \le S_{\lambda/L}\left(y^{(k,j-1)}_j - \frac{ [ \nabla f(y^{(k,j-1)}) ]_j}{L} \right)\ . \] Since the right hand side is $y^{(k,j)}_j$ by definition, we have, \begin{equation} \label{eq:superforj} S_{\lambda/L}\left(y^{(k,j)}_j - \frac{ [ \nabla f(y^{(k,j)}) ]_j}{L}\right) \le y^{(k,j)}_j \ . \end{equation} For $i \neq j$, we have \begin{align} \notag y^{(k,j)}_i = y^{(k,j-1)}_i &\ge S_{\lambda/L}\left( y^{(k,j-1)}_i - \frac{[ {\nabla} f(y^{(k,j-1)})]_i }{L} \right)\\ \label{eq:superforothers} &\ge S_{\lambda/L}\left( y^{(k,j)}_i - \frac{[ {\nabla} f(y^{(k,j)})]_i }{L} \right) \ . \end{align} The first inequality above is true because $y^{(k,j-1)}$ is a supersolution (by the induction assumption and Lemma~\ref{lem:scale}). The second follows from~\eqref{eq:forothers} by applying a scalar shrinkage on both sides. Combining~\eqref{eq:superforj} and~\eqref{eq:superforothers}, we get \begin{align*} y^{(k,j)} \geq \mathbf{S}_{\lambda/L}\left(y^{(k,j)} - \frac{{\nabla} f(y^{(k,j)})}{L} \right) \end{align*} which proves, using Lemma~\ref{lem:scale}, that $y^{(k,j)}$ is a supersolution. \end{proof} \subsection{Comparison: {\sc GD}\ vs. {\sc CCD}} \label{subsec:Compare_GD_CCD} \begin{theorem} \label{thm:GD_CCD_Compare} Suppose $\cbr{ x^{(k)} }$ and $\cbr{ y^{(k)} }$ are the sequences of iterates generated by the {\sc GD}\ and {\sc CCD}\ algorithms respectively when started from the same supersolution $x^{(0)} = y^{(0)}$. Then, $\forall k\ge 0$, \[ y^{(k)} \le x^{(k)}\ .
\] On the other hand, if they are started from the same subsolution $x^{(0)} = y^{(0)}$ then the sequences satisfy, $\forall k \ge 0$, \[ y^{(k)} \ge x^{(k)}\ . \] \end{theorem} \begin{proof} We will prove Theorem~\ref{thm:GD_CCD_Compare} only for the supersolution case, by induction on $k$. The base case is trivial since $y^{(0)} = x^{(0)}$. Now assume $y^{(k)} \le x^{(k)}$ and we will prove $y^{(k+1)} \le x^{(k+1)}$. Fix a $j \in [d]$. Note that we have, \[ y^{(k+1)}_j = y^{(k,j)}_j = S_{\lambda/L}\left(y^{(k,j-1)}_j - \frac{[\nabla f(y^{(k,j-1)})]_j }{ L}\right)\ . \] By Lemma~\ref{lem:CCD_Compare}, $y^{(k,j-1)} \le y^{(k)}$. Applying the isotone operator $S_{\lambda/L} \circ (\mathbf{I} - \nabla f/L)$ on both sides and taking the $j$th coordinate gives, \[ S_{\lambda/L}\rbr{ y^{(k,j-1)}_j - \frac{ [{\nabla} f(y^{(k,j-1)})]_j }{L} } \leq S_{\lambda/L}\rbr{ y^{(k)}_j - \frac{ [{\nabla} f(y^{(k)}) ]_j }{L} } \ . \] Combining this with the previous equation gives, \begin{equation} \label{eq:ybound} y^{(k+1)}_j \le S_{\lambda/L}\rbr{ y^{(k)}_j - \frac{ [{\nabla} f(y^{(k)}) ]_j }{L} } \ . \end{equation} Since $y^{(k)} \le x^{(k)}$ by induction hypothesis, applying the isotone operator $S_{\lambda/L} \circ (\mathbf{I} - \nabla f/L)$ on both sides and taking the $j$th coordinate gives, \[ S_{\lambda/L}\rbr{ y^{(k)}_j - \frac{ [{\nabla} f(y^{(k)}) ]_j }{L} } \le S_{\lambda/L}\rbr{ x^{(k)}_j - \frac{ [{\nabla} f(x^{(k)}) ]_j }{L} }\ . \] By definition, \begin{equation} \label{eq:vdef} x^{(k+1)}_j = S_{\lambda/L}\rbr{ x^{(k)}_j - \frac{ [{\nabla} f(x^{(k)}) ]_j }{L} } \ . \end{equation} Combining this with the previous inequality and ~\eqref{eq:ybound} gives, \[ y^{(k+1)}_j \le x^{(k+1)}_j\ . \] Since $j$ was arbitrary this means $y^{(k+1)} \le x^{(k+1)}$ and the proof is complete. \end{proof} \subsection{Cyclic Coordinate Minimization ({\sc CCM})} \label{subsec:CCM} Since {\sc CCM}\ minimizes a one-dimensional restriction of the function $F$, let us define some notation for this subsection. Let \begin{align*} \fres{j}(\alpha;x) &:= f(x_1,\ldots,x_{j-1},\alpha,x_{j+1},\ldots,x_d) \\ \Fres{j}(\alpha;x) &:= F(x_1,\ldots,x_{j-1},\alpha,x_{j+1},\ldots,x_d) \ . \end{align*} With this notation, the {\sc CCM}\ update can be written as: \begin{align} \label{eq:ccmup1} z^{(k,j)}_j &= \mathop{\mathrm{argmin}}_{\alpha}\ \Fres{j}(\alpha; z^{(k,j-1)}) \\ \notag \forall i\neq j,\ z^{(k,j)}_i &= z^{(k,j-1)}_i \ . \end{align} In order to avoid dealing with infinities in our analysis, we want to ensure that the minimum in~\eqref{eq:ccmup1} above is attained at a finite real number. This leads to the following assumption. \begin{assumption} \label{asmp:strict} For any $x \in \mathbb{R}^d$ and any $j \in [d]$, the one-variable function $\fres{j}(\alpha;x)$ (and hence $\Fres{j}(\alpha;x)$) is strictly convex. \end{assumption} This is a pretty mild assumption: considerably weaker than assuming, for instance, that the function $f$ itself is strictly convex. For example, when $f$ is quadratic as in \eqref{eq:quadratic}, then the above assumption is equivalent to saying that the diagonal entries $A_{j,j}$ of the positive semidefinite matrix $A$ are all strictly positive. This is much weaker than saying that $f$ is strictly convex (which would mean $A$ is invertible). The next lemma shows that the {\sc CCM}\ update can be represented in a way that makes it quite similar to the {\sc CCD}\ update. \begin{lemma} \label{lem:CCM_update} Fix $k \ge 0$ and $j \in [d]$, and consider the {\sc CCM}\ update~\eqref{eq:ccmup1}.
Let $g(\alpha) = \fres{j}(\alpha; z^{(k,j-1)})$. If the update is non-trivial, i.e. $z^{(k,j)}_j \neq z^{(k,j-1)}_j$, it can be written as \[ z^{(k,j)}_j = S_{\lambda/\tau}\left( z^{(k,j-1)}_j - \frac{ \sbr{{\nabla} f(z^{(k,j-1)})}_j }{\tau} \right) \] for \begin{equation} \label{eq:diffratio} \tau = \frac{g'(z^{(k,j)}_j) - g'(z^{(k,j-1)}_j)}{ z^{(k,j)}_j - z^{(k,j-1)}_j }\ . \end{equation} Furthermore, we have $0 < \tau \le L$. \end{lemma} \begin{proof} See Appendix \ref{sec:CCM_Update} \end{proof} We point out that this lemma is useful only for the analysis of {\sc CCM}\ and not for its implementation (as $\tau$ depends recursively on $z^{(k,j)}_j$) except in an important special case. In the quadratic example \eqref{eq:quadratic}, $g(\alpha)$ is a one-dimensional quadratic function. In this case $\tau$ does not depend on $z^{(k,j)}_j$ and is simply $A_{j,j}$. This leads to an efficient implementation of {\sc CCM}\ for quadratic $f$. We are now equipped with everything to prove the following behavior of the {\sc CCM}\ iterates. \begin{lemma} \label{lem:CCM_Compare} If $z^{(0)}$ is a supersolution and $\cbr{z^{(k)}}$ is the sequence of iterates generated by the {\sc CCM}\ algorithm then $\forall k \geq 0$, \begin{align*} 1)\quad z^{(k+1)} &\leq z^{(k)} & 2)\quad z^{(k)} \text{ is a supersolution} \end{align*} If $z^{(0)}$ is a subsolution and $\cbr{z^{(k)}}$ is the sequence of iterates generated by the {\sc CCM}\ algorithm then $\forall k \geq 0$, \begin{align*} 1)\quad z^{(k+1)} &\geq z^{(k)} & 2)\quad z^{(k)} \text{ is a subsolution} \end{align*} \end{lemma} \begin{proof} Again, we will only prove the supersolution case as the subsolution case is analogous. We are given that $z^{(0)}$ is a supersolution. We will prove the following: if $z^{(k)}$ is a supersolution then, \begin{gather} \label{eq:ccmclaim1} z^{(k+1)} \le z^{(k)}\ ,\\ \label{eq:ccmclaim2} z^{(k+1)} \text{ is a supersolution}\ . \end{gather} Then the lemma follows by induction on $k$. Let us assume that $z^{(k)}$ is a supersolution and try to prove~\eqref{eq:ccmclaim1} and~\eqref{eq:ccmclaim2}. To prove these we will show that $z^{(k,j)} \le z^{(k)}$ and $z^{(k,j)}$ is a supersolution by induction on $j \in \cbr{0,1,\hdots,d}$. This proves \eqref{eq:ccmclaim1} and \eqref{eq:ccmclaim2} for $z^{(k+1)}$ since $z^{(k+1)} = z^{(k,d)}$. The base case ($j=0$) of the induction is trivial: $z^{(k,0)} \le z^{(k)}$ holds since the two vectors are equal. For the same reason, $z^{(k,0)}$ is a supersolution since we have assumed $z^{(k)}$ to be a supersolution. Now assume $z^{(k,j-1)} \le z^{(k)}$ and $z^{(k,j-1)}$ is a supersolution for some $j > 0$. We want to show that $z^{(k,j)} \le z^{(k)}$ and $z^{(k,j)}$ is a supersolution. If the update to $z^{(k,j)}$ was trivial, i.e. $z^{(k,j-1)} = z^{(k,j)}$ then there is nothing to prove. Therefore, for the remainder of the proof assume that the update is non-trivial (and hence Lemma~\ref{lem:CCM_update} applies). Since $z^{(k,j-1)}$ and $z^{(k,j)}$ differ only in the $j$th coordinate, to show that $z^{(k,j)} \le z^{(k)}$ given that $z^{(k,j-1)} \le z^{(k)}$, it suffices to show that $z^{(k,j)} \le z^{(k,j-1)}$, i.e. \begin{equation} \label{eq:ccmjthentry} z^{(k,j)}_j \le z^{(k,j-1)}_j = z^{(k)}_j \ . \end{equation} As in Lemma~\ref{lem:CCM_update}, let us denote $\fres{j}(\alpha; z^{(k,j-1)})$ by $g(\alpha)$.
The lemma gives us a $\tau \in (0,L]$ such that, \begin{equation} \label{eq:fromrep} z^{(k,j)}_j = S_{\lambda/\tau} \left( z^{(k,j-1)}_j - \frac{[ {\nabla} f(z^{(k,j-1)}) ]_j}{\tau} \right)\ . \end{equation} Since $z^{(k,j-1)}$ is a supersolution by induction hypothesis and $\tau \leq L$, using Lemma~\ref{lem:monotonic} we get \begin{align*} z^{(k,j)}_j &\le S_{\lambda/L} \left( z^{(k,j-1)}_j - \frac{[ {\nabla} f(z^{(k,j-1)}) ]_j}{L} \right) \le S_{\lambda/L} \left( z^{(k)}_j - \frac{[ {\nabla} f(z^{(k)}) ]_j}{L} \right) \le z^{(k)}_j \ . \end{align*} where the second inequality above holds because $z^{(k,j-1)} \le z^{(k)}$ by induction hypothesis and since $\mathbf{S}_{\lambda/L} \circ (\mathbf{I} - {\nabla} f/L)$ is an isotone operator. The third holds since $z^{(k)}$ is a supersolution (coupled with Lemma~\ref{lem:scale}). Thus, we have proved~\eqref{eq:ccmjthentry}. We now need to prove that $z^{(k,j)}$ is a supersolution. To this end, we first claim that \begin{equation} \label{eq:equaldiff} z^{(k,j-1)}_j - \frac{[{\nabla} f(z^{(k,j-1)})]_j}{\tau} = z^{(k,j)}_j - \frac{[{\nabla} f(z^{(k,j)})]_j}{\tau} \ . \end{equation} This is true since \begin{align*} &\quad z^{(k,j-1)}_j - \frac{[{\nabla} f(z^{(k,j-1)})]_j}{\tau} - z^{(k,j)}_j + \frac{[{\nabla} f(z^{(k,j)})]_j}{\tau} \\ &= z^{(k,j-1)}_j - z^{(k,j)}_j - \frac{1}{\tau}( g'(z^{(k,j-1)}_j) - g'(z^{(k,j)}_j) ) \\ &= z^{(k,j-1)}_j - z^{(k,j)}_j - ( z^{(k,j-1)}_j - z^{(k,j)}_j ) = 0\ . \end{align*} The first equality is true by definition of $g$ and the second by~\eqref{eq:diffratio}. Now, applying $S_{\lambda/\tau}$ to both sides of~\eqref{eq:equaldiff} and using~\eqref{eq:fromrep}, we get \begin{align} \notag z^{(k,j)}_j &= S_{\lambda/\tau} \left( z^{(k,j-1)}_j - \frac{[ {\nabla} f(z^{(k,j-1)}) ]_j}{\tau} \right) \\ \label{eq:ccmsuperforj} &= S_{\lambda/\tau} \left( z^{(k,j)}_j - \frac{[ {\nabla} f(z^{(k,j)}) ]_j}{\tau} \right) \ . \end{align} For $i \neq j$, $z^{(k,j)}_i = z^{(k,j-1)}_i$ and thus we have \begin{align*} &\quad z^{(k,j-1)}_i - \frac{ [{\nabla} f(z^{(k,j-1)})]_i }{\tau} - z^{(k,j)}_i + \frac{ [{\nabla} f(z^{(k,j)})]_i }{\tau} \\ &= -\frac{1}{\tau} \sbr{ [{\nabla} f(z^{(k,j-1)})]_i - [{\nabla} f(z^{(k,j)})]_i } \geq 0 \end{align*} The last inequality holds because we have already shown that $z^{(k,j-1)} \ge z^{(k,j)}$ and thus by isotonicity of $\mathbf{I} - {\nabla} f/L$, we have \[ [{\nabla} f(z^{(k,j-1)})]_i - [{\nabla} f(z^{(k,j)})]_i \le L(z^{(k,j-1)}_i - z^{(k,j)}_i) = 0\ . \] Using the monotonic scalar shrinkage operator we have \begin{align*} S_{\lambda/\tau}\rbr{z^{(k,j-1)}_i - \frac{ [{\nabla} f(z^{(k,j-1)})]_i }{\tau} } \geq S_{\lambda/\tau}\rbr{z^{(k,j)}_i - \frac{ [{\nabla} f(z^{(k,j)})]_i }{\tau} } \end{align*} which, using the inductive hypothesis that $z^{(k,j-1)}$ is a supersolution, further yields \begin{align} \label{eq:ccmsuperforothers} z^{(k,j)}_i = z^{(k,j-1)}_i \geq S_{\lambda/\tau}\rbr{z^{(k,j-1)}_i - \frac{ [{\nabla} f(z^{(k,j-1)})]_i }{\tau} } &\geq S_{\lambda/\tau}\rbr{z^{(k,j)}_i - \frac{ [{\nabla} f(z^{(k,j)})]_i }{\tau} } \ . \end{align} Combining~\eqref{eq:ccmsuperforj} and~\eqref{eq:ccmsuperforothers}, we get \begin{align*} z^{(k,j)} \geq \mathbf{S}_{\lambda/\tau}\left(z^{(k,j)} - \frac{{\nabla} f(z^{(k,j)})}{\tau} \right) \end{align*} which proves, using Lemma~\ref{lem:scale}, that $z^{(k,j)}$ is a supersolution. \end{proof} \subsection{Comparison: {\sc CCD}\ vs.
\subsection{Comparison: {\sc CCD}\ vs.\ {\sc CCM}} \label{subsec:Compare_CCD_CCM} \begin{theorem} \label{thm:CCD_CCM_Compare} Suppose $\cbr{ y^{(k)} }$ and $\cbr{ z^{(k)} }$ are the sequences of iterates generated by the {\sc CCD}\ and {\sc CCM}\ algorithms respectively when started from the same supersolution $y^{(0)} = z^{(0)}$. Then, $\forall k\ge 0$, \[ z^{(k)} \le y^{(k)}\ . \] On the other hand, if they are started from the same subsolution $y^{(0)} = z^{(0)}$ then the sequences satisfy, $\forall k \ge 0$, \[ z^{(k)} \ge y^{(k)}\ . \] \end{theorem} \begin{proof} We will only prove the supersolution case as the subsolution case is analogous. Given that $y^{(0)} = z^{(0)}$ is a supersolution, we will prove the following: if $z^{(k)} \le y^{(k)}$ then, \begin{equation} \label{eq:ccdccmclaim} z^{(k+1)} \le y^{(k+1)} \ . \end{equation} Then the theorem follows by induction on $k$. Let us assume $z^{(k)} \le y^{(k)}$ and try to prove~\eqref{eq:ccdccmclaim}. To this end we will show that $z^{(k,j)} \le y^{(k,j)}$ by induction on $j \in \cbr{0,1,\hdots,d}$. This implies \eqref{eq:ccdccmclaim} since $z^{(k+1)} = z^{(k,d)}$ and $y^{(k+1)} = y^{(k,d)}$. The base case ($j=0$) is true by the given condition in the theorem, since $z^{(k,0)} = z^{(k)}$ as well as $y^{(k,0)} = y^{(k)}$. Now, assume $z^{(k,j-1)} \le y^{(k,j-1)}$ for some $j > 0$. We want to show that $z^{(k,j)} \le y^{(k,j)}$. Since $z^{(k,j-1)}, z^{(k,j)}$ and $y^{(k,j-1)}, y^{(k,j)}$ differ only in the $j$th coordinate, to show that $z^{(k,j)} \le y^{(k,j)}$ given that $z^{(k,j-1)} \le y^{(k,j-1)}$, it suffices to show that \begin{equation} \label{eq:ccdccmjthentry} z^{(k,j)}_j \le y^{(k,j)}_j\ . \end{equation} If the update to $z^{(k,j)}$ is non-trivial then, using Lemma~\ref{lem:CCM_update}, there is a $\tau \in (0,L]$ such that \begin{align} \notag z^{(k,j)}_j &= S_{\lambda/\tau}\rbr{z^{(k,j-1)}_j - \frac{ [{\nabla} f(z^{(k,j-1)})]_j }{\tau} } \\ \label{eq:zkjupper} &\leq S_{\lambda/L}\rbr{z^{(k,j-1)}_j - \frac{ [{\nabla} f(z^{(k,j-1)})]_j }{L} } \ , \end{align} where the last inequality holds because of Lemma~\ref{lem:monotonic} and the fact that $z^{(k,j-1)}$ is a supersolution (Lemma~\ref{lem:CCM_Compare}). If the update is trivial, i.e. $z^{(k,j)}_j = z^{(k,j-1)}_j$, then using \eqref{eq:ccmup1} and \eqref{eq:optimality_cond} we have \[ 0 \in [{\nabla} f(z^{(k,j)})]_j + \lambda\mathop{\mathrm{sign}}(z^{(k,j)}_j)\ , \] which, coupled with \eqref{eq:equivalence}, gives \[ z^{(k,j)}_j = S_{\lambda/L}\rbr{z^{(k,j)}_j - \frac{[{\nabla} f(z^{(k,j)})]_j}{L}} \leq S_{\lambda/L}\rbr{z^{(k,j-1)}_j - \frac{[{\nabla} f(z^{(k,j-1)})]_j}{L}} \] where the last inequality is obtained by applying the isotone operator $\mathbf{S}_{\lambda/L} \circ (\mathbf{I} - {\nabla} f/L)$ to the inequality $z^{(k,j)}\leq z^{(k,j-1)}$, which holds by Lemma~\ref{lem:CCM_Compare}. Thus, \eqref{eq:zkjupper} holds whether or not the update is trivial. Now applying the same isotone operator to the inequality $z^{(k,j-1)} \le y^{(k,j-1)}$ and taking the $j$th coordinate gives \[ S_{\lambda/L}\rbr{z^{(k,j-1)}_j - \frac{ [{\nabla} f(z^{(k,j-1)})]_j }{L} } \le S_{\lambda/L}\rbr{y^{(k,j-1)}_j - \frac{ [{\nabla} f(y^{(k,j-1)})]_j}{L} }\ . \] The right hand side above is, by definition, $y^{(k,j)}_j$. So, combining the above with~\eqref{eq:zkjupper} gives~\eqref{eq:ccdccmjthentry} and proves our inductive claim. \end{proof} \section{Convergence Rates} \label{sec:Rates} Our results so far have given inequalities comparing the iterates generated by the three algorithms.
We finally want to compare the function values attained by these iterates. To do so, the following lemma is useful. \begin{lemma} \label{lem:func_compare} If $y$ is a supersolution and $y \le x$ then $F(y) \le F(x)$. \end{lemma} \begin{proof} Since $F$ is convex and ${\nabla} f(y) + \lambda \rho \in \partial F(y)$ for any $\rho \in \partial\|y\|_1$, we have \begin{align} \label{eq:func_value_inequality} F(y) - F(x) &\leq \inner{{\nabla} f(y) + \lambda \rho}{y - x} \end{align} for any such $\rho$. We have assumed that $y \leq x$, so every coordinate of $y - x$ is nonpositive. Thus, in order to prove $F(y) - F(x) \le 0$, it suffices to show that \begin{align} \label{eq:func_values_condition} \forall i\in[d], \qquad \exists \rho_i \in \mathop{\mathrm{sign}}(y_i) \qquad \text{s.t.} \qquad \gamma_i + \lambda \rho_i \ge 0 \end{align} where, for convenience, we denote the gradient ${\nabla} f(y)$ by $\gamma$. Since $y$ is a supersolution, Lemma~\ref{lem:scale} gives \begin{align} \label{eq:Super_sol_relation} \forall i\in[d], \qquad y_i \geq S_{\lambda/L}\left(y_i - \frac{\gamma_i}{L}\right)\ . \end{align} For any $i \in [d]$, there are three mutually exclusive and exhaustive cases. \begin{description} \item[Case (1)]: $y_i > \frac{\gamma_i + \lambda}{L}$. Plugging this into \eqref{eq:Super_sol_relation} and using the definition of scalar shrinkage~\eqref{eq:scshrinkage}, we get \begin{align*} y_i \geq y_i - \frac{\gamma_i + \lambda}{L} \end{align*} which gives $\gamma_i + \lambda \geq 0$ and hence $y_i>0$. Thus, we can choose $\rho_i = 1 \in \mathop{\mathrm{sign}}(y_i)$ and we indeed have $\gamma_i + \lambda \rho_i \ge 0$. \item[Case (2)]: $y_i \in [\frac{\gamma_i-\lambda}{L},\frac{\gamma_i + \lambda}{L}]$. In this case, we have $y_i \geq S_{\lambda/L}(y_i - \frac{\gamma_i}{L}) = 0$. Thus, \begin{align*} \frac{\gamma_i+\lambda}{L} \geq y_i \geq 0\ . \end{align*} Hence we can choose $\rho_i = 1 \in \mathop{\mathrm{sign}}(y_i)$ and we have $\gamma_i + \lambda \rho_i \ge 0$. \item[Case (3)]: $y_i < \frac{\gamma_i - \lambda}{L}$. Plugging this into \eqref{eq:Super_sol_relation} and using the definition of scalar shrinkage~\eqref{eq:scshrinkage}, we get \begin{align*} y_i \geq y_i - \frac{\gamma_i - \lambda}{L} \end{align*} which gives $\gamma_i - \lambda \geq 0$. Now if $y_i \leq 0$, we can set $\rho_i = -1 \in \mathop{\mathrm{sign}}(y_i)$ and will have $\gamma_i + \lambda \rho_i \geq 0$. On the other hand, if $y_i > 0$, we need to choose $\rho_i = 1$ and thus $\gamma_i + \lambda \geq 0$ should hold if \eqref{eq:func_values_condition} is to be true. However, we know $\gamma_i - \lambda \geq 0$ and $\lambda \geq 0$, so $\gamma_i + \lambda \geq 0$ is indeed true. \end{description} Thus in all three cases we have that there is a $\rho_i \in \mathop{\mathrm{sign}}(y_i)$ such that~\eqref{eq:func_values_condition} is true. \end{proof} There is an analogous lemma for subsolutions, whose proof is similar to the one above and is therefore omitted. \begin{lemma} If $y$ is a subsolution and $y \ge x$ then $F(y) \le F(x)$. \end{lemma} If we start from a supersolution, the iterates for {\sc CCD}\ and {\sc CCM}\ always maintain the supersolution property. Thus Lemma~\ref{lem:func_compare} ensures that, starting from the same initial iterate, the function values of the {\sc CCD}\ and {\sc CCM}\ iterates always remain at most those of the corresponding {\sc GD}\ iterates. Since the {\sc GD}\ algorithm has $O(1/k)$ accuracy guarantees according to Theorem \ref{thm:gd}, the same rates must hold for {\sc CCD}\ and {\sc CCM}. This is formalized in Theorem~\ref{thm:main_theorem}, which we state after the following illustrative check.
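As a small numerical sanity check (again illustrative only), the following sketch runs {\sc GD}, {\sc CCD}\ and {\sc CCM}\ on a random quadratic instance constructed so that $\mathbf{I} - {\nabla} f/L$ is isotone ($A$ has nonpositive off-diagonal entries), starts all three methods from a common supersolution, and asserts the coordinatewise orderings between the iterates that our comparison results predict; the instance and all identifiers are assumptions made for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, lam = 8, 0.1

# Symmetric A with nonpositive off-diagonal entries, made strictly
# diagonally dominant (hence positive definite).
N = -np.abs(rng.standard_normal((d, d)))
N = (N + N.T) / 2.0
np.fill_diagonal(N, 0.0)
A = N + np.diag(1.0 - N.sum(axis=1))
b = rng.standard_normal(d)
L = np.linalg.eigvalsh(A).max()      # Lipschitz constant of grad f

S = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
grad = lambda z: A @ z - b

# A large positive vector is a supersolution for this instance.
x, y, z = (10.0 * np.ones(d) for _ in range(3))
for k in range(50):
    x = S(x - grad(x) / L, lam / L)                       # GD
    for j in range(d):
        y[j] = S(y[j] - grad(y)[j] / L, lam / L)          # CCD
        z[j] = S(z[j] - grad(z)[j] / A[j, j], lam / A[j, j])  # CCM
    assert np.all(z <= y + 1e-12) and np.all(y <= x + 1e-12)
\end{verbatim}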
\begin{theorem} \label{thm:main_theorem} Starting from the same super- or subsolution $x^{(0)} = y^{(0)} = z^{(0)}$, let $\cbr{x^{(k)}}$, $\cbr{y^{(k)}}$ and $\cbr{z^{(k)}}$ denote the {\sc GD}, {\sc CCD}\ and {\sc CCM}\ iterates respectively. Then, for any minimizer $x^\star$ of \eqref{eq:Reg_l_1_loss} and $\forall k\geq 1$, \[ F(z^{(k)}) \le F(y^{(k)}) \le F(x^{(k)}) \le F(x^\star) + \frac{ L \| x^\star - x^{(0)} \|^2}{2\,k}\ . \] \end{theorem} \section{Conclusion} \label{sec:Conclusion} Coordinate descent based methods have seen a resurgence of popularity in recent times in both the machine learning and the statistics communities, due to the simplicity of their updates and of the implementation of the overall algorithms. The absence of finite time convergence rates is thus one of the most important theoretical issues to address. In this paper, we provided a comparative analysis of the {\sc GD}, {\sc CCD}\ and {\sc CCM}\ algorithms to give the first known finite time guarantees on the convergence rates of cyclic coordinate descent methods. However, a significant number of questions remain unresolved. Our comparative results require that the algorithms start from a supersolution so that the property is maintained for all the subsequent iterates. We also require an isotonicity assumption on the $\mathbf{I} - {\nabla} f/L$ operator. Although this is a fairly common assumption in numerical optimization \citep{BertTsit89}, it is desirable to have a more general analysis without such restrictions. Since stochastic coordinate descent \citep{ShaiAmbuj09} converges at the same $O(1/k)$ rate as {\sc GD}\ without additional assumptions, intuition suggests that the same should be true for {\sc CCD}\ and {\sc CCM}. A theoretical proof of this remains an open question. Some greedy versions of the coordinate descent algorithm (e.g., \citep{WuLange08}) still lack a theoretical analysis of their finite time convergence guarantees. Although \cite{Clarkson08} proves an $O(1/k)$ rate for a greedy version, the analysis is restricted to a simplex domain and does not generalize to arbitrary domains. The phenomenal performance of greedy coordinate descent algorithms on real-life datasets makes it all the more essential to validate these experimental results theoretically. \bibliographystyle{alpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro} The study of threshold functions for the appearance of spanning structures plays an important role in the theory of random graphs. Unlike in the case of small subgraphs, which was resolved by \citet{ErdRen60} (for balanced graphs) and by \citet{Bol81} (for general graphs), in the case of general spanning structures only sufficient conditions are known. These lead to upper bounds for the threshold of a general spanning graph, although the expectation threshold conjecture of \citet{KK07}, if true, predicts the threshold for \emph{any} graph up to a logarithmic factor. Apart from particular structures where the thresholds are known, such as perfect matchings~\cite{ErdRen66}, $F$-factors~\cite{JohKahVu08}, Hamilton cycles~\cite{Kor77,Pos76} or spanning trees~\cite{Montgomery19} (to name a few), the most general result providing upper bounds was, until recently, due to \citet{Ri00}, giving in some cases asymptotically optimal upper bounds (lattices, hypercubes, $k$-th powers of Hamilton cycles for $k\ge 3$~\cite{KO12}). An excellent survey by \citet{Boe17} provides references to many other results, in particular algorithmic ones. The recent breakthrough work by \citet{FKNP19} established the fractional expectation threshold conjecture of \citet{Tal10}, providing in many cases optimal thresholds or being off by at most a logarithmic factor. The subsequent work by \citet{KNP20} exploited the proof approach in~\cite{FKNP19} in a more efficient way, removing the logarithmic factor in the case of the square of a Hamilton cycle and thus proving that the threshold for its appearance is $n^{-1/2}$. In a recent paper, \citet{Frieze20} studied thresholds for the containment of spanning $K_r$-cycles, i.e., cyclically ordered edge-disjoint copies of $K_r$ with two consecutive copies sharing a vertex. He proved the optimal threshold, of the form $n^{-2/r}\log^{1/\binom{r}{2}}n$, by reducing this problem to another result of Riordan about coupling the random graph with the random $r$-uniform hypergraph~\cite{Ri18} (see also the work of \citet{Hec18} for the triangle case). Frieze also raised the question of the threshold for the containment of a spanning $C_4$-cycle, where the copies of $C_4$ are ordered cyclically and two consecutive copies overlap in exactly one edge, in such a way that each copy of $C_4$ overlaps its two neighbouring copies in opposite edges (there are some possible variations, but this is a canonically defined structure). Such a $C_4$-cycle is referred to in~\cite{Frieze20} as a $C_4$-cycle with overlap $2$; it is also observed there that the threshold for its appearance is at most $n^{-2/3}\log n$, which follows from~\cite{FKNP19}. The purpose of this paper is to contribute to the large body of work on thresholds for spanning structures by establishing thresholds for spanning $2$-overlapping $C_4$-cycles (which we denote by $C^{e}_{4,n}$), thus answering the question of Frieze~\cite{Frieze20}, and also for $2$-overlapping $K_r$-cycles (defined below) for $r\geq4$. Neither structure can be handled directly by the results in~\cite{FKNP19, Ri00}. In order to obtain these results, we generalise the approach of \citet{KNP20}, and establish the following thresholds. The first theorem answers the question of Frieze~\cite{Frieze20}. \begin{theorem}\label{thm:Ctwo-threshold} The threshold for the appearance of $C^{e}_{4,n}$ in $G(2n,p)$ is $\Theta(n^{-2/3})$.
\end{theorem} Our second result generalises the recent work of \citet{KNP20} on the threshold for the square of a Hamilton cycle. The square of a Hamilton cycle can be seen as the particular case $r=3$ of a structure which we call $2$-overlapping (or edge-overlapping) spanning $K_r$-cycle and denote by $K_{r,2,n}$, for $r\geq3$. This consists of a set of cyclically ordered copies of $K_r$, where consecutive cliques share exactly one edge and, if $r\geq4$, all non-consecutive cliques are pairwise vertex-disjoint\COMMENT{In the case $r=3$, this is impossible, and we enforce that each clique shares exactly one vertex with the successor of its successor; this precisely defines the square of a Hamilton cycle.}. \begin{theorem}\label{thm:Kr-cycle} Let $r\ge 3$ and $n\in \mathbb{N}$ with $(r-2)\mid n$. Then, the threshold for the appearance of $K_{r,2,n}$ in $G(n,p)$ is $\Theta(n^{-2/(r+1)})$. \end{theorem} To prove \cref{thm:Ctwo-threshold,thm:Kr-cycle}, we state and prove a general lemma (the fragmentation lemma, \cref{lem:fragmentation}), which has the potential to handle further spanning structures. This lemma is a generalisation of the work of Kahn, Narayanan and Park on the square of a Hamilton cycle~\cite[Lemma~3.1]{KNP20} to handle structures for which constantly many rounds of exposure may be necessary, in contrast to~\cite{KNP20}, where only two rounds are used, and to~\cite{FKNP19}, where logarithmically many rounds are necessary. The organisation of the paper is as follows. In the next section, \cref{sec:fragmentation}, we provide the main definitions, state a general lemma (the fragmentation lemma, \cref{lem:fragmentation}), and use it to establish a general theorem (\cref{thm:main}) about thresholds for certain spanning graphs. \Cref{thm:main} is actually the main general result of the paper, and \cref{thm:Ctwo-threshold,thm:Kr-cycle} are two of its applications. We prove these two applications in \cref{sec:special_cases}. Finally, in \cref{sec:conclude} we collect a few remarks, and in the Appendix we provide the proof of \cref{lem:fragmentation}. \section{A general theorem for thresholds}\label{sec:fragmentation} Given any real numbers $a$ and $b$, we write $[a,b]$ to refer to the set $\{n\in\mathbb{Z}:a\leq n\leq b\}$. For an integer $n$, we often abbreviate $[n]\coloneqq[1,n]$. We use standard $O$ notation for asymptotic statements. A hypergraph $\mathcal{H}$ on the vertex set $V\coloneqq V(\mathcal{H})$ is a subset of the power set $2^V$. The elements of $\mathcal{H}$ are referred to as edges. The hypergraph $\mathcal{H}$ is said to be $r$-bounded if all its edges have cardinality at most $r$, and $r$-uniform if all the edges have exactly $r$ vertices. Oftentimes, we will consider multihypergraphs $\mathcal{H}$ on $V$, where we view $\mathcal{H}$ as a multiset with elements from $2^V$. To ease readability, we will often refer to multihypergraphs as hypergraphs. We also omit floor and ceiling signs whenever they do not affect our asymptotic computations. Following \cite{KNP20}, we say that a (multi-)hypergraph $\mathcal{H}$ is \emph{$q$-spread} if, for every $I\subseteq V(\mathcal{H})$, we have \begin{equation*}\label{eq:spread-def} |\mathcal{H}\cap \langle I\rangle|\le q^{|I|}|\mathcal{H}|, \end{equation*} where $\langle I\rangle\coloneqq \{J\subseteq V(\mathcal{H}):I\subseteq J\}$ and $\mathcal{H}\cap \langle I\rangle$ is the set of edges of $\mathcal{H}$ in $\langle I\rangle$ (with multiplicities if $\mathcal{H}$ is a multihypergraph).
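As a toy illustration of this definition (not needed in what follows), consider the $1$-uniform hypergraph $\mathcal{H}=\{\{v\}:v\in V\}$ consisting of all singletons of a vertex set $V$: we have $|\mathcal{H}\cap\langle I\rangle|=1$ if $|I|=1$ and $|\mathcal{H}\cap\langle I\rangle|=0$ if $|I|\ge2$, so $\mathcal{H}$ is $q$-spread precisely when $q\ge1/|V|$. Intuitively, the smaller the spreadness, the more uniformly the edges of $\mathcal{H}$ are distributed over $V$.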
The \emph{spreadness} of $\mathcal{H}$ is the minimum $q$ such that $\mathcal{H}$ is $q$-spread. Let $S\in \mathcal{H}$ and $X\subseteq V(\mathcal{H})$. For any $J\in \mathcal{H}$ such that $J\subseteq S\cup X$, we call the set $J\setminus X$ an \emph{$(S, X)$-fragment}. Given some $k\in \mathbb{N}$, we say that the pair $(S,X)$ is \emph{$k$-good} if some $(S,X)$-fragment has size at most $k$, and we say it is \emph{$k$-bad} otherwise. More generally, let $\mathcal{H}_0$ be some $k_0$-bounded (multi-)hypergraph. Let $k_0\geq k_1\geq \ldots\geq k_t$ be a sequence of integers and $X_1,\ldots,X_t$ be a sequence of subsets of $V(\mathcal{H}_0)$. Then, we define a sequence of $k_i$-bounded multihypergraphs $\mathcal{H}_1,\ldots,\mathcal{H}_{t}$ inductively as follows. Let $i\in[t]$, and assume the hypergraph $\mathcal{H}_{i-1}$ is already defined. Then, consider each $S\in \mathcal{H}_{i-1}$ such that $(S,X_i)$ is a $k_i$-good pair, and let $\mathcal{H}_i$ be the multihypergraph which consists of one (arbitrary) $(S,X_i)$-fragment of size at most $k_i$ for each such $k_i$-good pair $(S,X_i)$. That is, we define $\mathcal{G}_i\coloneqq\{S\in\mathcal{H}_{i-1}:(S,X_i)\text{ is }k_i\text{-good}\}$ and, for each $S\in\mathcal{G}_i$, $\mathcal{J}_i(S)\coloneqq\{J\setminus X_i:J\in\mathcal{H}_{i-1},J\subseteq S\cup X_i, |J\setminus X_i|\leq k_i\}$. We then fix an arbitrary function $f_i\colon\mathcal{G}_i\to\bigcup_{S\in\mathcal{G}_i}\mathcal{J}_i(S)$ such that $f_i(S)\in\mathcal{J}_i(S)$ for every $S\in\mathcal{G}_i$ (for instance, we may simply pick the lexicographically smallest element in the set) and define \[ \mathcal{H}_{i}\coloneqq \{f_i(S):S\in\mathcal{G}_i\}. \] We will refer to the sequence $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_t)$ as a \emph{fragmentation process} with respect to $(k_1,\ldots, k_t)$ and $(X_1,\ldots,X_t)$. In our applications, we will let $X_1,\ldots,X_t$ be random subsets of $V(\mathcal{H}_0)$ and choose a suitable sequence $k_0,\ldots,k_t$ which will guarantee that the hypergraphs in the sequence do not become very small (with high probability). Observe that the fragments at the $i$-th step of this process (that is, the edges of $\mathcal{H}_i$) correspond to subsets of the edges of $\mathcal{H}_0$ which have not been covered by the sets $X_1,\ldots,X_i$. In particular, for all $i\in[t]$ and all $I\subseteq V(\mathcal{H}_0)$ we have that\COMMENT{Let $S\in\mathcal{H}_i\cap\langle I\rangle$, so in particular $S\in\mathcal{H}_i$. But then, as $\mathcal{H}_i$ was obtained by a fragmentation process, $S$ is a subset of some uniquely defined $S'\in\mathcal{H}_0$ . Since $\langle I\rangle$ contains all supersets of $I$, we have that $S$ is a superset of $I$ and, thus, so is $S'$. Hence $S'\in\mathcal{H}_0\cap\langle I\rangle$.\\ Note about $S'$: say that $i=1$ (for larger $i$, the same holds in an iterative way). Then, $S'$ is \emph{not} chosen so that $S=S'\setminus X_1$, but rather as the set $S'$ from which we have chosen $S$ as an $(S',X_1)$-fragment. This guarantees that, for each $S\in\mathcal{H}_i$, the chosen $S'$ is distinct (possibly as an element of the multiset), and thus the above really yields the desired bound. The reason why we are still guaranteed that $S\subseteq S'$ is that $S=J\setminus X_1$ for some $J\subseteq S'\cup X_1$, so $S=J\setminus X_1\subseteq S'\setminus X_1\subseteq S'$.} \begin{equation}\label{equa:fragmentationProperty} |\mathcal{H}_i\cap \langle I\rangle|\leq|\mathcal{H}_0\cap \langle I\rangle|. 
\end{equation} While the general framework developed in \cite{FKNP19,KNP20} works for arbitrary hypergraph thresholds, here we focus on graphs. Let $F$ be some (possibly spanning) subgraph of the complete graph $K_n$, and let $\mathcal{F}$ denote the set of all copies of $F$ in $K_n$\COMMENT{In more generality, we could define $\mathcal{F}$ as the subgraph of all minimal elements of an increasing property $\mathcal{P}$, in the same way as in \cite{FKNP19}; I believe our methods would transfer as long as each element of $\mathcal{F}$ has the same size, so $\mathcal{F}$ is uniform; they should also work if it is not uniform, but we might need to be a bit more careful.}. We will identify copies of $F$ from $\mathcal{F}$ with their edge sets, and we thus view $\mathcal{F}$ as a $k$-uniform hypergraph, where $k=|E(F)|$, on the vertex set $M\coloneqq \binom{[n]}{2}$. We now define a strengthening of the notion of spreadness of hypergraphs which is key for our results. For $q,\alpha,\delta\in(0,1)$, we say that a $k$-bounded hypergraph $\mathcal{F}$ on vertex set $M$ is \emph{$(q,\alpha,\delta)$-superspread} if it is $q$-spread and, for any $I\subseteq M$ with $|I|\le \delta k$, we have \begin{equation*}\label{eq:superspread-def} |\mathcal{F}\cap \langle I\rangle|\le q^{|I|} k^{-\alpha c_I} |\mathcal{F}|, \end{equation*} where $c_I$ is the number of components of $I$ (when $I$ is viewed as a subgraph of $K_n$). The role of the term $k^{-\alpha c_I}$ will become clear later, but, roughly speaking, it will be responsible for bounding the threshold by $O(q/\alpha)$. The value of the constant $\delta$ actually plays no role in the result, but we do need it to be bounded away from $0$ for our approach to work. The following result is the main lemma of the paper. It will be used to iteratively build a spanning copy of $F$ in $G(n,p)$ through a fragmentation process. \begin{lemma}\label{lem:fragmentation} Let $d,\alpha,\delta>0$ with $\alpha,\delta<1$. Then, there is a fixed constant $C_0$ such that, for all $C\ge C_0$ and $n\in\mathbb{N}$, the following holds. Let $F$ be some subgraph of $K_n$ with $\Delta(F)\le d$\COMMENT{Note: this condition cannot be relaxed because we use \cref{lem:num_subgraphs}.} and $k_0\coloneqq|E(F)|=\omega(1)$, and let $\mathcal{F}$ be the set of all copies of $F$ in $K_n$. Assume that $\mathcal{F}$ is $(q,\alpha,\delta)$-superspread with $q\geq4k_0/(Cn^2)$ and that $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_i)$ is some fragmentation process with $\mathcal{H}_0\coloneqq \mathcal{F}$ such that, for each $j\in[i]$, $\mathcal{H}_j$ is $k_j$-bounded and $|\mathcal{H}_j|\ge |\mathcal{H}_{j-1}|/2$, and $k_i=\omega(k_0^{\alpha})$. Then, for $w\coloneqq Cq \binom{n}{2}$, $k\coloneqq k_ik_0^{-\alpha}$ and $X$ chosen uniformly at random from $\binom{M}{w}$, we have \begin{equation}\label{eq:fragmentation} \mathbb{E}\left[\left\lvert\left\{ (S,X) : S\in \mathcal{H}_i, (S,X)\text{ is }k\text{-bad}\right\}\right\rvert\right]\le 2C^{-k/3} |\mathcal{H}_i|. \end{equation} \end{lemma} The proof of \cref{lem:fragmentation} closely follows the proofs of Lemma~3.1 from~\cite{KNP20} and Lemma~3.1 from~\cite{FKNP19}. For the sake of completeness, we provide it in \cref{app:Fragmentation}, for the convenience of the interested reader. Equipped with \cref{lem:fragmentation} we can now establish the following. \begin{theorem}\label{thm:main} Let $d,\alpha,\delta,\varepsilon>0$ with $\alpha,\delta<1$.
Then, there is a fixed constant $C_0$ such that, for all $C\ge C_0$ and $n\in\mathbb{N}$, the following holds. If $F$ is a subgraph of $K_n$ with $\Delta(F)\le d$ and $k_0\coloneqq|E(F)|=\omega(1)$ and the hypergraph $\mathcal{F}$ of all copies of $F$ is $(q,\alpha,\delta)$-superspread with $q\geq4k_0/(Cn^2)$, then, for $p\ge C q$, \[ \mathbb{P}\left[F\subseteq G(n,p)\right]\geq1-\varepsilon. \] \end{theorem} This result immediately provides an upper bound of $Cq$ for the threshold for the appearance of $F$ as a subgraph of $G(n,p)$\COMMENT{We do not need to use the results of Friedgut: the original result of Bollobás and Thomason (see the proof in Frieze-Karonski) already shows that $p$ as above is an upper bound on the threshold. Perhaps we should cite some of these.}. If a matching lower bound can be found (say, by the standard first moment method\COMMENT{Can we always show that the spreadness of a hypergraph is a lower bound for the threshold?}), then this establishes the threshold for the appearance of any graph $F$ which satisfies the conditions in the statement. The proof of \cref{thm:main} follows along similar lines as the proofs in~\cite{FKNP19,KNP20}: one proceeds in rounds of sprinkling random edges by showing that, after each round of exposure (which corresponds to a step of the fragmentation process), the random graph contains larger pieces of the desired structure (or, equivalently, the missing fragments become smaller). In the general proof in~\cite{FKNP19}, the authors show that each round of exposure shrinks the fraction of the edges of the desired structure which are still missing by a factor of $0.9$, which results in logarithmically many steps and, thus, a $\log n$ factor with respect to the fractional expectation threshold of the structure (this result is quite general, though, and oftentimes a logarithmic factor is indeed needed, as in the case of spanning trees, Hamilton cycles and $K_r$-factors in random graphs, or of perfect matchings and loose Hamilton cycles in random hypergraphs). The threshold for the square of a Hamilton cycle $K_{3,2,n}$ happens to be $n^{-1/2}$, and in this case, as shown in~\cite{KNP20}, two rounds suffice: the shrinkage factor there is $n^{-1/2}$, so that after the first round a second moment computation suffices for the second round of exposure/sprinkling. We show that the threshold for the appearance of $F$ is at most $O(q)$ and for this we will need $1/\alpha$ rounds (exposing each time edges with probability $Cq$, for some constant $C$): the shrinkage factor in all but the last round will be $n^{-\alpha}$, so that we can apply the second moment method in the last round. In the proof of \cref{thm:main} we make use of the following auxiliary lemma. Again, its proof is similar to that of Proposition~2.2 in~\cite{KNP20}, and we thus defer it to \cref{app:Fragmentation}. \begin{lemma}\label{lem:num_subgraphs} Let $F$ be a graph with $f$ edges and maximum degree $d$. Then, the number of subgraphs $I$ of $F$ with $\ell$ edges and $c$ components is at most \[ (4ed)^{\ell}\binom{f}{c}. \] \end{lemma} \begin{proof}[Proof of \cref{thm:main}] We first note that, by adjusting the value of $C_0$, we may assume that $n$ is sufficiently large, and therefore $k_0$ is sufficiently large too. We may also assume that $q<C_0^{-1}$. To begin with, we switch and work with the $G(n,m)$ model instead of $G(n,p)$. This can be done easily since these models are essentially equivalent for $m=p\binom{n}{2}$ (see, e.g.,~\cite[Proposition~1.12]{JLR00}).
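To give a concrete sense of these parameters (anticipating the applications in \cref{sec:special_cases}), we will have $\alpha=1/3$ for $C^{e}_{4,n}$ and $\alpha=1/(r+1)$ for $K_{r,2,n}$, so the argument below will use $t=\lceil1/\alpha\rceil-1$ rounds of sprinkling (that is, $t=2$ and $t=r$ rounds, respectively) followed by a final second moment round.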
We proceed as follows. We consider $G(n,m_1)\cup G(n,m_2)\cup\ldots\cup G(n,m_t)$ with $t=\lceil1/\alpha\rceil-1$ and $m_i=Kq\binom{n}{2}$ for each $i\in[t]$, where $K$ is assumed to be sufficiently large throughout (and $C_0$ will be defined as $2(t+1)K$). We then define a fragmentation process on $\mathcal{F}$ with respect to $(k_1,\ldots,k_t)$ and $(G(n,m_1),\ldots,G(n,m_t))$, where the integers $k_1,\ldots,k_t$ will be defined shortly. We prove that a.a.s.~each step of this fragmentation process satisfies the conditions of \cref{lem:fragmentation}, so that we may iteratively apply it and conclude that each of the subsequent hypergraphs is not `too small'. At the end of this process, we will be sufficiently `close' to a copy of $F$ that a second moment argument will yield the result. To be precise, we first consider the hypergraph $\mathcal{H}_0\coloneqq \mathcal{F}$, take $X_1\coloneqq G(n,m_1)$ and $k_1\coloneqq k_0^{1-\alpha}$, and run the first step of the fragmentation process. We obtain a multihypergraph $\mathcal{H}_1$ of $(S,X_1)$-fragments which is $k_1$-bounded, where each $S$ is an edge of $\mathcal{H}_0$. In particular, by the assertion~\eqref{eq:fragmentation} of \cref{lem:fragmentation} and Markov's inequality, we have that \begin{equation}\label{eq:successful} \mathbb{P}\left[|\mathcal{H}_1|\ge |\mathcal{H}_0|/2\right]\ge 1-4K^{-k_1/3}. \end{equation} Suppose now that we have already run the fragmentation process $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_i)$, for some $i\in[t-1]$, and that $|\mathcal{H}_j|\ge |\mathcal{H}_{j-1}|/2$ for all $j\in[i]$. We run one further step of the fragmentation process with $X_{i+1}\coloneqq G(n,m_{i+1})$ and $k_{i+1}\coloneqq k_ik_0^{-\alpha}$ to obtain a $k_{i+1}$-bounded hypergraph $\mathcal{H}_{i+1}$ of $(S,X_1\cup\ldots\cup X_{i+1})$-fragments (where, again, each $S$ is an edge of $\mathcal{H}_0$). By another application of \cref{lem:fragmentation} and Markov's inequality, we obtain that \begin{equation}\label{eq:successful2} \mathbb{P}\left[|\mathcal{H}_{i+1}|\ge |\mathcal{H}_i|/2\right]\ge 1-4K^{-k_{i+1}/3}. \end{equation} We say that the fragmentation process $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_t)$ is \emph{successful} if $|\mathcal{H}_j|\ge |\mathcal{H}_{j-1}|/2$ for all $j\in[t]$. Let $\beta\coloneqq1-t\alpha$, and note that, by the definition of $t$, we have $0<\beta\leq\alpha$. By \eqref{eq:successful} and \eqref{eq:successful2}, we conclude that the probability that the fragmentation process $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_{t})$ which we run is successful is at least\COMMENT{Note we are assuming $k_0\to\infty$ and, since $\beta$ is a positive constant (but $\beta-\alpha\leq0$), we also have $k_0^\beta\to\infty$. This is clearly the leading term, as in all other cases the exponent is a smaller constant.} \[ 1-4\sum_{i=1}^{t}K^{-k_i/3}\ge1-4\sum_{i=1}^{\lceil{1}/{\alpha}\rceil-1}K^{-k_0^{1-i\alpha}/3}=1-O\left(K^{-k_0^{\beta}/3}\right). \] To summarise, a.a.s.~the fragmentation process is successful and, thus, yields a $k_{t}$-bounded multihypergraph $\mathcal{H}_t$ of $(S,X_1\cup\ldots\cup X_t)$-fragments, where $k_{t}=k_0^{\beta}$, $|\mathcal{H}_t|\geq2^{-t}|\mathcal{H}_0|$ and each $S$ is an element of $\mathcal{H}_0$. We now apply one more round of sprinkling. In this final round we switch and work with the random set $X\coloneqq G(n,p)$ with $p=Kq$.
We may also assume that $\mathcal{H}_t$ is $k_t$-uniform, since every set $S\in\mathcal{H}_t$ is contained in some $S'\in\mathcal{F}$ and thus we can add some arbitrary $k_t-|S|$ vertices from $S'\setminus S$ to~$S$. The proof now proceeds along the same lines as that in~\cite[Theorem~1.2]{KNP20}. Define the random variable $Y\coloneqq |\{S\in\mathcal{H}_t:S\subseteq G(n,p)\}|$. Our aim is to estimate the variance of~$Y$ and to show that $\mathbb{P}[Y=0]\le \varepsilon$. This would mean that the random graph $G(n,p)\cup\bigcup_{i=1}^{t} G(n,m_i)$ contains a copy of $F$ with probability at least $1-\varepsilon$ (by~\cite[Proposition~1.12]{JLR00}, this also applies to $G\left(n,C_0q\right)$). We estimate the variance of $Y$ as follows (recall that we work in $G(n,p)$ now). Let $R\in \mathcal{H}_t$, so $|R|=k_t=k_0^{\beta}$. Then, using the fact that $\mathcal{F}$ is $(q,\alpha,\delta)$-superspread and \eqref{equa:fragmentationProperty}, for each $\ell\in[k_t]$ we have that \begin{align*} |\{S\in \mathcal{H}_t: |S\cap R|=\ell\}|&\le \sum_{L\subseteq R, |L|=\ell} |\mathcal{H}_t\cap\langle L\rangle|\le \sum_{L\subseteq R, |L|=\ell} |\mathcal{F}\cap\langle L\rangle| \le \sum_{L\subseteq R, |L|=\ell} q^{|L|} k_0^{-\alpha c_L} |\mathcal{F}|\\ &=\sum_{c=1}^\ell \sum_{L\subseteq R, |L|=\ell, c_L=c} q^{\ell} k_0^{-\alpha c} |\mathcal{F}| \overset{\text{\cref{lem:num_subgraphs}}}{\le} \sum_{c=1}^\ell (4ed)^\ell \binom{k_t}{c} q^{\ell} k_0^{-\alpha c} |\mathcal{F}|\\ &=q^{\ell} |\mathcal{F}| (4ed)^\ell \sum_{c=1}^\ell \binom{k_t}{c} k_0^{-\alpha c} \le q^{\ell} |\mathcal{F}| (4ed)^\ell \sum_{c=1}^\ell \left(\frac{ ek_t k_0^{-\alpha}}{c}\right)^c =q^{\ell} |\mathcal{F}| e^{O(\ell)}, \end{align*} where the implicit constants in the $O$ notation are independent of $K$. We therefore get the following bound on the variance: \[ \mathrm{Var}[Y]\le p^{2k_t}\sum_{R,S\in \mathcal{H}_t,\, R\cap S\neq\varnothing} p^{-|R\cap S|}\overset{|\mathcal{H}_t|\le |\mathcal{F}|}{\le} |\mathcal{F}|^2p^{2k_t}\sum_{\ell=1}^{k_t} e^{O(\ell)}p^{-\ell}q^{\ell}=O\left(\mathbb{E}[Y]^2/K\right), \] where we use the facts that $p=Kq$, $\mathbb{E}[Y]=p^{k_t}|\mathcal{H}_t|$ and $|\mathcal{H}_t|\ge 2^{-t}|\mathcal{F}|=\Theta(|\mathcal{F}|)$. By letting $K$ be sufficiently large, the result follows by Chebyshev's inequality. \end{proof} \section{Applications of Theorem~\ref{thm:main}}\label{sec:special_cases} We use this section to prove \cref{thm:Ctwo-threshold,thm:Kr-cycle} as applications of \cref{thm:main}. \subsection{Spanning \texorpdfstring{$C_4$}{C4}-cycles}\label{sect31} Throughout this section, we assume that $n$ is even. Observe that the graph $C^{e}_{4,n}$ has $3n/2$ edges. Let $\mathcal{C}$ be the $(3n/2)$-uniform hypergraph on the vertex set $M=\binom{[n]}{2}$ where we see (the set of edges of) each copy of $C^{e}_{4,n}$ as an edge of $\mathcal{C}$. We write $|\mathcal{C}|$ for the number of its edges and notice that $|\mathcal{C}|=(n-1)!/2$. Indeed, consider an arbitrary labelling $v_1,\ldots,v_n$ of the vertices. We define a copy of $C^{e}_{4,n}$ uniquely based on this ordering: we consider a perfect matching between the odd and the even vertices (where we add the edge $v_{2i-1}v_{2i}$ for each $i\in[n/2]$), and then we define a cycle of length $n/2$ on the set of even vertices and another cycle on the set of odd vertices (where the edges of these cycles join the vertices which are closest in the labelling, seen cyclically).
In this way, each of the $C_4$'s which form the copy of $C^{e}_{4,n}$ is given by four consecutive vertices in the labelling, starting with an odd vertex $v_{2i-1}$, so that its edges are $\{v_{2i-1}v_{2i},v_{2i+1}v_{2i+2},v_{2i-1}v_{2i+1},v_{2i}v_{2i+2}\}$. Now one can easily verify that there are $2n$ different labellings which yield the same copy of $C^{e}_{4,n}$ (there are $n/2$ possible starting points while maintaining the same cyclic ordering; if the ordering is reversed, the resulting graph is the same; and if all pairs of vertices $\{v_{2i-1},v_{2i}\}$ are swapped, the resulting graph is also the same). Recall that a hypergraph $\mathcal{C}$ is $q$-spread, for some $q\in(0,1)$, if $|\mathcal{C}\cap \langle I\rangle|\le q^{|I|}|\mathcal{C}|$ for all $I\subseteq M$, where $\langle I\rangle$ denotes the set of all supersets of $I$. Moreover, for $\alpha,\delta\in(0,1)$, we say $\mathcal{C}$ is $(q,\alpha,\delta)$-superspread if it is $q$-spread and, for every $I\subseteq M$ with $|I|\le 3\delta n/2$, we have \begin{equation*} |\mathcal{C}\cap \langle I\rangle|\le q^{|I|} (3n/2)^{-\alpha c_I} |\mathcal{C}|, \end{equation*} where $c_I$ is the number of components of $I$. Our main goal now is to establish that the hypergraph $\mathcal{C}$ is $(250n^{-2/3},1/3,1/15)$-superspread. \begin{lemma}\label{obs:small-edge-spread} Let $I\subseteq C^{e}_{4,n}$ be a graph with $\ell\le n/10$ edges and $c$ components. Then, we have \[ |V(I)|-c\geq\frac{2}{3}\ell+\frac{c}{3}. \] \end{lemma} \begin{proof} Let $I_1,\ldots,I_c$ be the components of $I$ with at least one edge, and let $v_1,\ldots,v_c$ be the number of vertices spanned by $I_1,\ldots,I_c$, respectively. Since for all $j\in[c]$ we have $|I_j|\le|I|\le n/10$, we conclude the following easy bound on any component: \begin{equation}\label{eq:edge-bound} |I_j|\le \frac{1}{2}\left(4\cdot 2+(v_j-4)\cdot 3\right)=\frac{3}{2}v_j-2. \end{equation} Indeed, this holds since the maximum degree of $I$ is at most $3$ and in every component $I_j$ with $4\leq v_j\leq n/10$ there are four vertices whose sum of degrees is at most $8$. For $v_j\in[2,3]$ we have $|I_j|= v_j-1$, hence the bound given in~\eqref{eq:edge-bound} holds in these cases as well. Summing over all $j\in[c]$, we obtain that \[\ell=|I|\leq\frac32|V(I)|-2c,\] which yields the desired result by rearranging the terms. \end{proof} \begin{lemma}\label{obs:large-edge-spread} Let $I\subseteq C^{e}_{4,n}$ be a graph with $\ell$ edges and $c$ components. Then, we have \[ |V(I)|-c\geq\frac{2}{3}\ell-1. \] \end{lemma} \begin{proof} Since every vertex of $C^{e}_{4,n}$ has degree $3$, the bound in the statement holds trivially if $I$ has only one component (as in that case $\ell\le\frac{3}{2}|V(I)|$). We may thus assume that $I$ contains at least two components with at least one edge each. But then, one can directly check that \eqref{eq:edge-bound} must hold, and we can argue exactly as in \cref{obs:small-edge-spread}, which leads to a better bound than claimed in the statement. \end{proof} \begin{lemma}\label{obs:general-spread} The hypergraph $\mathcal{C}$ is $(250n^{-2/3},1/3,1/15)$-superspread. \end{lemma} \begin{proof} Let $I\subseteq M$. We need to obtain upper bounds for $|\mathcal{C}\cap \langle I\rangle|$. If $I$ is not contained in any copy of $C^{e}_{4,n}$, then $|\mathcal{C}\cap \langle I\rangle|=0$, so we may assume $I$ is a subgraph of some copy of $C^{e}_{4,n}$. Recall that a copy of $C^{e}_{4,n}$ can be defined by an ordering of $[n]$ and that exactly $2n$ such orderings define the same copy of $C^{e}_{4,n}$.
Thus, it suffices to bound the number of orderings of $[n]$ which define a copy of $C^{e}_{4,n}$ containing $I$. Let $I_1,\ldots,I_c$ be the components of $I$ which contain at least one edge. For each $j\in[c]$, choose a vertex $x_j\in V(I_j)$ (note that there are $v_j$ possible choices for this, which leads to a total of \begin{equation}\label{equa:bound1C4} \prod_{j=1}^cv_j\leq 2^{|I|} \end{equation} choices for $\{x_1,\ldots,x_c\}$, where we use that $v_j\le|I_j|+1\le2^{|I_j|}$). Now, each ordering $\sigma$ of $[n]$ (recall this defines a copy of $C^{e}_{4,n}$) induces an ordering on the set consisting of the vertices $x_1,\ldots,x_c$ as well as all vertices of $[n]$ not covered by $I$. Let us denote this induced ordering as $\tau=\tau(\sigma)$. We now want to bound the total number of possible orderings $\sigma$ by first bounding the number of orderings $\tau$ (which depend on the choice of $x_1,\ldots,x_c$) and then the number of orderings $\sigma$ with $\tau=\tau(\sigma)$. After the choice of $x_1,\ldots,x_c$, the number of possible orderings $\tau$ is \begin{equation}\label{equa:bound2C4} (n-|V(I)|+c)!. \end{equation} Now, in order to obtain some $\sigma$ such that $\tau=\tau(\sigma)$, it suffices to `insert' the vertices which are missing into the ordering, and this must be done in a way which is consistent with the structure of the components $I_j$. For each $j\in[c]$, consider a labelling of the vertices of $I_j$ starting with $x_j$ and such that each subsequent vertex has at least one neighbour with a smaller label. Then, we insert the vertices of $I_j$ into the ordering following this labelling, and note that, for each vertex, there are at most three choices, as $\Delta(I_j)\leq3$. This implies there are at most $3^{|V(I_j)|-1}\leq3^{|I_j|}$ possible ways to fix the ordering of the vertices of $I_j$. By considering all $j\in[c]$, we conclude that there are at most \begin{equation}\label{equa:bound3C4} \prod_{j=1}^c 3^{|I_j|}\le 3^{|I|} \end{equation} possible orderings $\sigma$ which result in the same $\tau$. Combining \eqref{equa:bound1C4}, \eqref{equa:bound2C4} and \eqref{equa:bound3C4} with the fact that there are $2n$ distinct orderings $\sigma$ which result in the same copy of $C^{e}_{4,n}$, we conclude that \begin{equation}\label{equa:bound4C4} |\mathcal{C}\cap \langle I\rangle|\le \frac{6^{|I|}}{2n}(n-|V(I)|+c)!\le 6^{|I|}(n-|V(I)|+c-1)!. \end{equation} We can now estimate the spreadness of $\mathcal{C}$. Consider first any $I\subseteq M$ with $|I|\leq n/10=|C^{e}_{4,n}|/15$, and let $c$ be its number of components. Then, by substituting the bound given by \cref{obs:small-edge-spread} into \eqref{equa:bound4C4}, we conclude that \[|\mathcal{C}\cap \langle I\rangle|\leq 6^{|I|}\left(n-\frac23|I|-\frac{c}{3}-1\right)!.\] By using the bound on $|I|$ and taking into account that $|\mathcal{C}|=(n-1)!/2$ and $|C^{e}_{4,n}|=3n/2$, we conclude that $|\mathcal{C}\cap \langle I\rangle|\leq q^{|I|}|C^{e}_{4,n}|^{-c/3}|\mathcal{C}|$ for $q\geq250n^{-2/3}$\COMMENT{Let $\ell\coloneqq |I|$.
It suffices to check that \[6^{\ell}\left(n-\frac{2}{3}\ell-\frac{c}{3}-1\right)!\leq q^\ell\left(\frac32n\right)^{-c/3}\frac{(n-1)!}{2}.\] By Stirling's approximation, it suffices to check that (for sufficiently large $n$) \[q^\ell\geq4\cdot6^{\ell}\left(\frac{n}{n-\frac{2}{3}\ell-\frac{c}{3}}\right)^{1/2}e^{2\ell/3}\left(e\frac32\frac{n}{n-\frac{2}{3}\ell-\frac{c}{3}}\right)^{c/3}\left(\frac{n-\frac{2}{3}\ell-\frac{c}{3}}{n}\right)^n\left(n-\frac{2}{3}\ell-\frac{c}{3}\right)^{-2\ell/3}.\] By taking roots, we want to have \[q\geq\left(4\left(\frac{n}{n-\frac{2}{3}\ell-\frac{c}{3}}\right)^{1/2}\right)^{1/\ell}6e^{2/3}\left(e\frac32\frac{n}{n-\frac{2}{3}\ell-\frac{c}{3}}\right)^{c/(3\ell)}\left(\frac{n-\frac{2}{3}\ell-\frac{c}{3}}{n}\right)^{n/\ell}\left(n-\frac{2}{3}\ell-\frac{c}{3}\right)^{-2/3}.\] Now consider each term in the expression above. The term inside the first big parenthesis tends to $1$ as $\ell$ goes to infinity, and it is always bounded by some constant (say, it is at most $8$, since we have an upper bound on $\ell$ and, thus, also on $c$ that leads to the thing inside the second parenthesis being a constant smaller than $2$). The term $6e^{2/3}$ remains as is. The next term can be bounded by $(3e/2)^{1/3}$ since $c\leq\ell$. The next term can be bounded simply by $1$. The final term, then, is at most $(n/2)^{-2/3}$. Putting all of these bounds together, it follows that it suffices to have \[q\geq8\cdot6e^{2/3}(3e/2)^{1/3}(n/2)^{-2/3}=2^{13/3}3^{4/3}e n^{-2/3}.\] In particular, the constant above is at most $250$.}. Similarly, assume $I\subseteq M$ has $|I|>n/10$ edges and $c$ components. By substituting the bound given by \cref{obs:large-edge-spread} into \eqref{equa:bound4C4}, we now have that \[|\mathcal{C}\cap \langle I\rangle|\leq 6^{|I|}\left(n-\frac23|I|\right)!.\] Now, as above, we conclude that $|\mathcal{C}\cap \langle I\rangle|\leq q^{|I|}|\mathcal{C}|$ for $q\geq12n^{-2/3}$.\COMMENT{Let $\ell\coloneqq|I|$. It suffices to show that \[6^\ell\left(n-\frac{2}{3}\ell\right)!\leq q^\ell\frac{(n-1)!}{2}.\] By making use of Stirling's approximation, we have that $(n-2\ell/3)!\geq\sqrt{2\pi(n-2\ell/3)}((n-2\ell/3)/e)^{n-2\ell/3}$, and using the same approximation for $n!$, we conclude that it suffices to have \[q^\ell\geq4n\frac{\sqrt{n-2\ell/3}}{\sqrt{n}}\cdot6^\ell e^{2\ell/3}\frac{\left(n-\frac{2}{3}\ell\right)^{n-2\ell/3}}{n^n}.\] Now note that \[4n\frac{\sqrt{n-2\ell/3}}{\sqrt{n}}\cdot6^\ell e^{2\ell/3}\frac{\left(n-\frac{2}{3}\ell\right)^{n-2\ell/3}}{n^n}\leq4n6^\ell e^{2\ell/3}\frac{\left(\left(1-\frac{2\ell}{3n}\right)n\right)^{n-2\ell/3}}{n^n}\leq4n6^\ell e^{2\ell/3}n^{-2\ell/3}.\] Therefore, it suffices to have \[q\geq(4n)^{1/\ell}6 e^{2/3}n^{-2/3}.\] But now, by the bound $\ell\geq n/10$, we have that $(4n)^{1/\ell}\to1$, so in particular the inequality holds (for sufficiently large $n$) if $q=6.1e^{2/3}n^{-2/3}\leq12n^{-2/3}$.} Combining the two statements above, it follows by definition that $\mathcal{C}$ is $(250n^{-2/3},1/3,1/15)$-superspread, as claimed. \end{proof} \begin{proof}[Proof of \cref{thm:Ctwo-threshold}] \Cref{obs:general-spread,obs:small-edge-spread} establish that $\mathcal{C}$ is $(250n^{-2/3},1/3,1/15)$-superspread. By \cref{thm:main} we have that, if $p\ge Cn^{-2/3}$, where $C$ is a sufficiently large constant, then \[ \mathbb{P}\left[C^{e}_{4,n}\subseteq G(n,p)\right]\ge 1/2.
\] To finish the argument, one can employ a general result of Friedgut~\cite{Fri05} (see, e.g., a recent paper of \citet{NS20}) which allows one to establish that \[\mathbb{P}\left[C^{e}_{4,n}\subseteq G(n,(1+o(1))p)\right]=1-o(1).\qedhere\] \end{proof} \begin{comment} \begin{observation}\label{obs:number-subgraphs} Let $F$ be a subgraph of $C^{e}_{4,n}$ with $f$ edges, then the number of subgraphs $U$ of $F$ with $\ell$ edges and $c$ components is at most \[ (12e)^{\ell}\binom{f}{c}. \] \end{observation} \begin{proof} The proof follows similarly to the proof of Proposition~2.2 from~\cite{KNP20}: the number of connected $h$-edge subgraphs of $G$ containing a given vertex is less than $(e\Delta(G))^h$. Now we first specify the roots of the $c$ components of the $\ell$-edge subgraph of $F$ in at most $2^{c}\binom{f}{c}$ ways, then choose the sizes of the components in at most $\binom{\ell-1}{c-1}$ ways and then choose the subgraphs along $F$ in at most $\prod_{j=1}^c(3e)^{\ell_j}=(3e)^\ell$ ways. In total we get at most $(3e)^\ell\binom{\ell-1}{c-1} 2^{c}\binom{f}{c}\le (12e)^{\ell}\binom{f}{c}$ subgraphs. \end{proof} \end{comment} \subsection{Spanning \texorpdfstring{$K_r$}{Kr}-cycles} In the following we will study copies of $K_r$ arranged in a cyclic way. Since there are several ways in which two consecutive copies of $K_r$ can overlap, we provide a precise definition of what will be called an $s$-overlapping $K_r$-cycle. \begin{definition}\label{def:Krsn} Let $r>s\ge0$ be integers and let $n\in\mathbb{N}$ with $(r-s)\mid n$. A $K_{r,s,n}$-cycle is a graph on vertex set $\mathbb{Z}_n=[0,n-1]$ whose edge set is the union of the edge sets of $n/(r-s)$ copies of $K_r$, where for each $i\in[0,n/(r-s)-1]$ there is a copy of $K_r$ on the vertices $[i(r-s),i(r-s)+r-1]$ (modulo $n$). \end{definition} In other words, the $n/(r-s)$ copies of $K_r$ are arranged cyclically on the vertex set $\mathbb{Z}_n$, so that two consecutive copies of $K_r$ intersect in exactly $s$ vertices and two non-consecutive cliques intersect in as few vertices as possible. The case $s=0$ corresponds to a $K_r$-factor. The threshold for the property of containing a $K_r$-factor was famously determined by \citet{JohKahVu08}. For the case $s=1$, the copies of $K_r$ in $K_{r,1,n}$ are edge-disjoint. As mentioned in the introduction, the threshold for the appearance of $K_{r,1,n}$ in $G(n,p)$ was recently determined by \citet{Frieze20}. When $s=r-1$, the $s$-overlapping $K_r$-cycles are usually referred to as the $(r-1)$-th power of a Hamilton cycle $C_n$, where the $k$-th power of some arbitrary graph $G$ is obtained by connecting any two vertices of $G$ which are at distance at most $k$ with an edge. The threshold for the appearance of $K_{r,r-1,n}$ is known to be $n^{-1/r}$. This was observed by \citet{KO12} for $r\ge 4$, while the case $r=3$ was solved recently by \citet{KNP20}. We determine the threshold for the appearance of $K_{r,s,n}$ for all the remaining values of $r$ and~$s$. Whenever $s\geq3$, the result follows from a general result of \citet{Ri00}; see \cref{sec:conclude}. Our main focus here is on the case $s=2$. The overall strategy follows the same structure as in \cref{sect31}. We denote the set of all unlabelled copies of $K_{r,s,n}$ on $[n]$ by $\mathcal{C}_{r,s,n}$. When talking about subgraphs of $K_{r,s,n}$, we refer to sets of consecutive vertices as \emph{segments}. The \emph{length} of a segment is the number of vertices it contains. We first illustrate \cref{def:Krsn} with a small example, and then collect several simple facts about $K_{r,2,n}$.
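For instance (a worked example with $r=4$, $s=2$ and $n=8$, not used elsewhere), the graph $K_{4,2,8}$ consists of $8/(4-2)=4$ copies of $K_4$ on the vertex set $\mathbb{Z}_8$, placed on $\{0,1,2,3\}$, $\{2,3,4,5\}$, $\{4,5,6,7\}$ and $\{6,7,0,1\}$; consecutive copies share the edges $\{2,3\}$, $\{4,5\}$, $\{6,7\}$ and $\{0,1\}$, respectively, and the total number of edges is $4\binom{4}{2}-4=20=\frac12(4+2-1)\cdot8$, in agreement with \cref{fact:prop_Krsn} below.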
\begin{fact}\label{fact:prop_Krsn} Let $r> s\ge 0$ and $n\in \mathbb{N}$ with $(r-s)\mid n$. Then, the number of edges in $K_{r,s,n}$ is exactly \[\left(\binom{r}{2}-\binom{s}{2}\right)\frac{n}{r-s}=\frac{1}{2}(r+s-1)n.\] In particular, for $s\le r/2$, the number of vertices of degree $2r-s-1$ is exactly ${sn}/({r-s})$ (such vertices belong to two copies of $K_r$), whereas the remaining ${(r-2s)n}/{(r-s)}$ vertices have degree $r-1$ (these vertices belong to exactly one copy of $K_r$).\qed \end{fact} We will call vertices of $K_{r,2,n}$ with degree $2r-3$ \emph{heavy} and those with degree $r-1$ \emph{light}. \begin{fact}\label{fact:number-cCrsn} Let $r> s\ge 1$ and $n\in \mathbb{N}$ with $(r-s)\mid n$ and $s\le r/2$. We have \[|\mathcal{C}_{r,s,n}|=\frac{(n-1)! (r-s)}{2((r-2s)!)^{n/(r-s)}(s!)^{n/(r-s)}}=\frac{r-s}{2}d_{r,s}^n(n-1)!,\] where $d_{r,s}\in (0,1]$ is some absolute constant that depends on $s$ and $r$ only.\COMMENT{There are $n!$ permutations of the vertices. Given this, there are $2n/(r-s)$ equal cyclic orders (the starting point matters up to $r-s$ choices, and after that the order would repeat itself, and we can go in either of the two directions). Furthermore, for each set of consecutive vertices of degree $r-1$, we can reorder them in any way and obtain the same graph; there are $n/(r-s)$ sets of $r-2s$ such consecutive vertices. Similarly, for each set of consecutive vertices of degree $2r-s-1$ contained in the same edges, we can reorder them in any way and obtain the same graph; there are $n/(r-s)$ sets of $s$ such consecutive vertices.} \qed \end{fact} \begin{fact}\label{fact:consec_Krsn} Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$. Let $V\subseteq V(K_{r,2,n})$ be a segment starting in the first vertex of some clique $K_r$ with $|V|\le n/(2r)+1$\COMMENT{This bound (or a weaker form) is necessary in the sense that, if the graph wraps around, then the number of edges can be slightly larger than described below.}. Then, \begin{equation}\label{eq:max-subgraph} e(K_{r,2,n}[V])=\left(\binom{r}{2}-1\right)a+\binom{b}{2}-\max\{2-b,0\}\cdot(r-2), \end{equation} where $|V|=(r-2)a+b$ with $a,b\in \mathbb{N}_0$ and $0\le b< r-2$.\COMMENT{ \begin{proof} Let $v=(r-2)a+b$ with $a,b\in \mathbb{N}_0$ and $0\le b< r-2$. We may assume that $V=[0,v-1]$. We distinguish the following cases: $b\in\{0, 1\}$ and $2\le b< r-2$. Assume first that $b\in\{0, 1\}$. Then, we have \[ e(I)=\left(\binom{r}{2}-1\right)a+\binom{b}{2}-(2-b)(r-2)=\left(\binom{r}{2}-1\right)(a-1)+\binom{r-2+b}{2} \] (we have $a$ contributions of $\binom{r}{2}-1$, where we count edges ($a$ times) on a set of size $r-2$ and between this set and the next two vertices of $V$ (note this guarantees that we do not count the edge induced by said pair of vertices twice); additionally we get $\binom{b}{2}$ edges, but we have to correct this for the last `full' clique in $V$, since it will miss exactly $2-b$ vertices and, therefore, $(2-b)(r-2)$ edges).\\ Assume next that $2\le b< r-2$. By the same argument as above, we have $e(I)=\left(\binom{r}{2}-1\right)a+\binom{b}{2}$. \end{proof} } \qed \end{fact} Instead of using \eqref{eq:max-subgraph}, we will make use of the following estimate to streamline our calculations. \begin{proposition}\label{prop:easy-bound} Let $r\ge 4$ and $v\in \mathbb{N}$ with $v=(r-2)a+b$, where $a,b\in \mathbb{N}_0$ and $0\le b< r-2$. Then, \begin{equation}\label{eq:easy-bound} \left(\binom{r}{2}-1\right)a+\binom{b}{2}-\max\{2-b,0\}\cdot(r-2)\le \frac{r+1}{2}v -\frac{r+2}{2}. 
\end{equation} \end{proposition} \begin{proof} We can rewrite the LHS of~\eqref{eq:easy-bound} as \[ \frac{(r+1)(r-2)}{2}a+\binom{b}{2}-\max\{2-b,0\}\cdot(r-2). \] By substituting $v=(r-2)a+b$ in the RHS, we see that~\eqref{eq:easy-bound} is equivalent to\COMMENT{We want to prove \[\frac{(r+1)(r-2)}{2}a+\binom{b}{2}-\max\{2-b,0\}\cdot(r-2)\le\frac{r+1}{2}((r-2)a+b) -\frac{r+2}{2}=\frac{(r+1)(r-2)}{2}a+\frac{r+1}{2}b-\frac{r+2}{2},\] and the leftmost term cancels out.} \[\binom{b}{2}-\max\{2-b,0\}\cdot(r-2)\le\frac{r+1}{2}b-\frac{r+2}{2}.\] To verify this, we consider \begin{align*} f(b)\coloneqq&\, 2\left(\frac{r+1}{2}b-\frac{r+2}{2}-\left(\binom{b}{2}-\max\{2-b,0\}\cdot(r-2)\right)\right)\\ =&\,(r+2-b)b-(r+2)+2\max\{2-b,0\}\cdot(r-2). \end{align*} For all $b\neq2$ we have $f'(b)=(r+2)-2b-2(r-2)\cdot \mathds{1}_{\{b<2\}}$ and $f''(b)=-2$. Since $f$ is concave in $(-\infty,2)$ and $(2,\infty)$, in order to verify that $f(b)\ge 0$ for all $b\in[0,r-3]$ (which is then equivalent to~\eqref{eq:easy-bound}) it suffices to check the value of $f(b)$ at $0$, $1$, $2$ and $r-3$ (assuming this is larger than $2$)\COMMENT{$1$ is necessary for the case $r=4$.}. Indeed, \begin{align*} f(0)&=-(r+2)+4(r-2)=3r-10> 0,\\ f(1)&=(r+1)-(r+2)+2(r-2)=2r-5>0,\\ f(2)&=2r-(r+2)=r-2> 0,\\ \intertext{and, if $r-3> 2$,} f(r-3)&=5(r-3)-(r+2)=4r-17>0.\qedhere \end{align*} \end{proof} Our goal now is to establish that the densest subgraphs of $K_{r,2,n}$ are precisely those described in \cref{fact:consec_Krsn}. The next lemma establishes that, among all subgraphs of $K_{r,2,n}$ induced by segments of length at most $n/(2r)+1$ (i.e., there is no `wrapping around the cycle'), the densest ones are those where the segment starts in a `new' $K_r$ or ends in a `full' $K_r$. \begin{lemma}\label{lem:densest-case-segment} Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$. Let $V\subseteq V(K_{r,2,n})$ be a segment with $r\le |V|\le n/(2r)+1$. Then, the number of edges induced by $V$ is maximised (among all segments of the same length) when $V$ starts in the first vertex of some clique $K_r$ or ends in the last vertex of some clique $K_r$. \end{lemma} \begin{proof} Let $V$ be a segment of the given length which induces the maximum possible number of edges of $K_{r,2,n}$. By the symmetries of $K_{r,2,n}$, we may assume that $V\cap([0,r-1]\cup[n-r,n-1])=\varnothing$\COMMENT{This assumption is in place so there are no issues when talking about last and first cliques.}. Assume that $V$ is not of the form described in the claim (i.e., it neither begins in the first vertex nor ends in the last vertex of some clique $K_r$). Let $i_1$ and $i_2$ be the first and last vertices of $V$, and let $j_1$ and $j_2$ be the number of vertices which $V$ contains in the (last) clique $K_r$ which contains $i_1$ and in the (first) clique which contains $i_2$, respectively. We may assume, without loss of generality, that $j_1\le j_2$ (and recall that $j_1,j_2<r$). However, then the set $(V\setminus\{i_1\})\cup\{i_2+1\}=[i_1+1,i_2+1]$ induces more edges than $V$\COMMENT{We remove $j_1-1$ edges from one side and add $j_2$ edges to the other.}. But this contradicts our choice of $V$ as a set of consecutive vertices which induces the maximum possible number of edges among segments of its length. \end{proof} Now we prove that no subgraph of $K_{r,2,n}$ is denser than the subgraphs induced by segments. \begin{lemma}\label{lem:densest-case-general} Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$. Let $V\subseteq V(K_{r,2,n})$ with $r\le v\coloneqq|V|\le n/(2r)+1$.
Then, the number of edges induced by $V$ is at most the number induced by a segment of length $v$. \end{lemma} \begin{proof} Let $V\subseteq V(K_{r,2,n})$ be a set of cardinality $v$ inducing the maximum possible number of edges. Let $I$ be the graph induced by $V$. Of course, we may assume $I$ has no isolated vertices. Let $S$ be a smallest segment containing $V$. By the symmetries of $K_{r,2,n}$, we may assume that $S\cap([0,r-1]\cup[n-r,n-1])=\varnothing$. We use a compression-type argument to show that we can modify $V$ into a segment which induces at least as many edges as $V$. We achieve this by consecutively creating new sets $V'$ which are contained in shorter segments but induce at least as many edges as the previous set. Let $i_1$ and $i_2$ be the first and last vertices of $V$ (i.e., $S=[i_1,i_2]$) and notice that $i_1$ and $i_2$ do not form an edge (since $v\le n/(2r)+1$). Observe, then, that $\deg_I(i_1),\deg_I(i_2)\in[r-1]$. By an argument as in \cref{lem:densest-case-segment}, we may assume, without loss of generality, that $\deg_I(i_1)\ge \deg_I(i_2)$ and that $i_1$ is the first vertex of some clique $K_r$ completely contained in $I$: otherwise, we could replace the vertex $i_2$ with some missing vertex from such a clique and increase the number of edges\COMMENT{Assuming that $\deg_I(i_1)\ge \deg_I(i_2)$ can be done by symmetry. Now, assume $i_1$ is not the first vertex of a `full' copy of $K_r$. By deleting $i_2$, we loose $\deg_I(i_2)$ edges. By then adding a new vertex in the clique containing $i_1$, since this clique must already contain $\deg_I(i_1)+1$ vertices, we gain $\deg_I(i_1)+1>\deg_I(i_2)$ new edges, so the total number of edges goes up. But this contradicts the assumption that $I$ has the maximum possible number of edges.}. Let $K^{(1)}$ be the copy of $K_r$ contained in $I$ with the smallest indices (in particular, it contains $i_1$). Assume that $V$ is not a segment. Consider the vertex $i'\in S\setminus V$ with the smallest index. Observe that $i'\notin V(K^{(1)})$. We distinguish two cases, depending on whether $i'$ is heavy or light. If $i'$ is heavy, let $K'$ and $K''$ be the two copies of $K_r$ from $K_{r,2,n}$ with $i'\in V(K')\cap V(K'')$ and $K'$ containing smaller indices than $K''$. If $E(K'')\cap E(I)=\varnothing$, then we can shift all edges of $I$ induced by $V\cap[i'+1,i_2]$ to the left $r$ positions, yielding a graph contained in a segment of length $i_2-i_1+1-r$ with the same number of edges as $I$. Hence, we assume that $E(K'')\cap E(I)\neq\varnothing$, which implies $V$ contains at least two of the vertices of $K''$. Observe that $K^{(1)}\neq K'$. Then, replace $i_1$ by $i'$. In this way, since $i_1$ is the first vertex from $K^{(1)}$, we remove $r-1$ edges, but at the same time we add at least $r$ edges\COMMENT{All vertices in $K'$ before $i'$ must be in the graph, so at least $r-2$ in $V(K')\setminus V(K'')$. But also, since $|V\cap V(K'')|\geq2$, we are adding at least two more edges.\\ We could ignore the fact that $i'\notin K^{(1)}$ and still obtain the desired result (we would replace $e-1$ edges by $r-1$ new edges and obtain a shorter segment).}. But this contradicts our choice of $V$. Assume now that $i'$ is light and let $K$ be the unique clique $K_r$ from $K_{r,2,n}$ with $i'\in V(K)$. Since $i'$ is light, by its definition we must have $|V(K)\setminus V|\le r-2$\COMMENT{There have to be at least two heavy vertices in $K$ which lie in $V$ (the last two with indices below $i'$).}.
We remove the first $t\coloneqq |V(K)\setminus V|\le r-2$ vertices of $K^{(1)}$ from $V$ and add the vertices of $V(K)\setminus V$ instead. In this way, we remove $\sum_{i=1}^{t} (r-i)$ edges, but at the same time we add at least $\sum_{i=1}^{t} (r-i)$ edges. Since $i_2>i'$, this means the new set lies in a shorter segment but induces at least as many edges. In all described situations we managed to make the segment containing $V$ shorter while not decreasing the number of edges. Hence, we eventually find a segment $V'$ inducing at least as many edges as $V$. \end{proof} The following now follows directly by combining \cref{fact:consec_Krsn,prop:easy-bound,lem:densest-case-segment,lem:densest-case-general} (and noting that the bound holds trivially if $|V|\leq r$). \begin{corollary}\label{coro:Krdensity_bound} Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$. Let $V\subseteq V(K_{r,2,n})$ with $|V|\le n/(2r)+1$. Then, \[e(K_{r,2,n}[V])\leq \frac{r+1}{2}|V|-\frac{r+2}{2}.\] \end{corollary} Using what we have proved so far, we can obtain estimates which will be crucial for studying the spreadness of $\mathcal{C}_{r,2,n}$. \begin{lemma}\label{lemma:small-spread-new} Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$. Let $I\subseteq K_{r,2,n}$ be a subgraph with $\ell\le n/(2r)$ edges and $c$ components. Then, \[|V(I)|-c\ge \frac{2}{r+1}\ell+\frac{c}{r+1}.\] \end{lemma} \begin{proof} Let $I_1,\ldots,I_c$ be the components of $I$ with at least one edge, and let $v_1,\ldots,v_c$ be the number of vertices spanned by $I_1,\ldots,I_c$, respectively. For each $j\in[c]$, since $|I_j|\le|I|\le n/(2r)$, by \cref{coro:Krdensity_bound} we have the following easy bound: \[|I_j|\le \frac{r+1}{2}v_j-\frac{r+2}{2}.\] Summing over all $j\in[c]$, we obtain that \[\ell=|I|\le \frac{r+1}{2}|V(I)| -\frac{r+2}{2}c=\frac{r+1}{2}\left(|V(I)|-c\right)-\frac{c}{2},\] and the claim follows by rearranging. \end{proof} \begin{lemma}\label{lemma:large-spread-new} Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$. Let $I\subseteq K_{r,2,n}$ be a subgraph with $\ell$ edges and $c$ components. Then, \[|V(I)|\ge \frac{2}{r+1}\ell.\] \end{lemma} \begin{proof} The vertex set of $K_{r,2,n}$ consists of $t\coloneqq n/(r-2)$ segments of heavy vertices and $t$ segments of light vertices, which alternate as we traverse the vertex set. The segments of heavy vertices have length $2$, and the segments of light vertices have length $r-4$ (the case $r=4$ is special: here the segments of light vertices are empty). For each $i\in[t]$, let $h_i$ and $\ell_i$ denote the number of heavy vertices and light vertices of $I$ in the $i$-th segment of heavy or light vertices, respectively. For notational purposes, let $h_{t+1}\coloneqq h_1$. Then, we can bound the number of edges of $I$ as follows: \begin{align*} |I|&\le \sum_{i=1}^t \left(\binom{h_i}{2}+\binom{\ell_i}{2}+h_i\ell_i+h_{i+1}\ell_i+h_{i+1}h_i\right)\\ &\le \sum_{i=1}^t \left(\binom{h_i}{2}+\binom{\ell_i}{2}+h_i\ell_i+2\ell_i+2h_i\right)=\sum_{i=1}^t \left(\binom{h_i+\ell_i}{2}+2(\ell_i+h_i)\right). \end{align*} Next, observe that, for each $i\in[t]$, \[\binom{h_i+\ell_i}{2}+2(\ell_i+h_i)=(h_i+\ell_i)\left(\frac{h_i+\ell_i-1}{2}+2\right)\le \frac{r+1}{2}(h_i+\ell_i),\] where the inequality holds since $h_i+\ell_i\le r-2$. The conclusion follows by adding over all $i\in[t]$. \end{proof} Combining the previous two lemmas, we show that $\mathcal{C}_{r,2,n}$ is a $(O(n^{-2/(r+1)}),1/(r+1),1/(r(r+1)))$-superspread hypergraph.
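Before proceeding, we remark that the bound in \cref{coro:Krdensity_bound} is easy to sanity-check computationally for small parameters (this is, of course, not part of the proof). The following minimal Python sketch builds $K_{r,2,n}$ exactly as described above, as $t=n/(r-2)$ copies of $K_r$ placed along a cycle with consecutive copies overlapping in two vertices, and verifies the bound exhaustively; we restrict to $|V|\ge2$, where the right-hand side of the bound is positive.

\begin{verbatim}
# Exhaustive check of e(K_{r,2,n}[V]) <= ((r+1)|V| - (r+2))/2
# for all V with 2 <= |V| <= n/(2r)+1, for small (r, n) only.
from itertools import combinations

def edges_Kr2n(r, n):
    # t = n/(r-2) copies of K_r on the cycle Z_n; consecutive
    # copies overlap in s = 2 (heavy) vertices.
    assert r >= 4 and n % (r - 2) == 0
    E = set()
    for j in range(n // (r - 2)):
        clique = [(j * (r - 2) + i) % n for i in range(r)]
        E |= {frozenset(p) for p in combinations(clique, 2)}
    return E

def check_density(r, n):
    E = edges_Kr2n(r, n)
    assert 2 * len(E) == (r + 1) * n         # |K_{r,2,n}| = (r+1)n/2
    for v in range(2, n // (2 * r) + 2):     # 2 <= |V| <= n/(2r)+1
        for V in combinations(range(n), v):
            Vs = set(V)
            e = sum(1 for f in E if f <= Vs)  # edges induced by V
            assert 2 * e <= (r + 1) * v - (r + 2)
    return True

for r, n in [(4, 16), (4, 24), (5, 30)]:
    print(r, n, check_density(r, n))
\end{verbatim}

The edge-count assertion in \texttt{check\_density} also confirms that the construction matches the count $|K_{r,2,n}|=\frac{r+1}{2}n$ used below.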
\begin{lemma}\label{lem:spread} Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$. Then, the hypergraph $\mathcal{C}_{r,2,n}$ of all copies of $K_{r,2,n}$ in $M=\binom{[n]}{2}$ is $(O(n^{-2/(r+1)}),1/(r+1),1/(r(r+1)))$-superspread. \end{lemma} \begin{proof} Let $I\subseteq M$. Our first aim is to obtain a general upper bound on $|\mathcal{C}_{r,2,n}\cap \langle I\rangle|$. If $I$ is not contained in any copy of $K_{r,2,n}$, we automatically have $|\mathcal{C}_{r,2,n}\cap \langle I\rangle|=0$, so we may assume $I$ is a subgraph of some copy of $K_{r,2,n}$. Recall that each copy of $K_{r,2,n}$ can be defined by an ordering of the $n$ vertices, so it suffices to bound the number of orderings which yield a copy of $K_{r,2,n}$ containing $I$. Let $I_1,\ldots,I_c$ be the components of $I$ with at least one edge. For each $j\in[c]$, let $x_j\in V(I_j)$ (there are $v_j$ such possible choices, which leads to a total of \begin{equation}\label{equa:Krtnboundspread1} \prod_{j=1}^cv_j\leq 2^{|I|} \end{equation} choices for $\{x_1,\ldots,x_c\}$\COMMENT{We have $v_j\leq|I_j|+1\leq 2^{|I_j|}$.}). Then, each ordering $\sigma$ of $[n]$ which defines a copy of $K_{r,2,n}$ containing $I$ induces a unique ordering $\tau=\tau(\sigma)$ on the set consisting of $x_1,\ldots,x_c$ and all isolated vertices. The total number of such orderings $\tau$ is \begin{equation}\label{equa:Krtnboundspread2} (n-|V(I)|+c)! \end{equation} so now it suffices to bound, for each such $\tau$, the number of orderings $\sigma$ with $\tau=\tau(\sigma)$. Given an ordering $\tau$, in order to obtain an ordering $\sigma$ with $\tau=\tau(\sigma)$, it suffices to `insert' the missing vertices into the ordering. That is, for each $j\in[c]$, we need to `insert' the other vertices of $V(I_j)$ into the ordering. By considering a labelling of the vertices of $I_j$ in such a way that each subsequent vertex is a neighbour of at least one previously included vertex (and taking into account that $x_j$ is already included), we note that there are at most $2r$ choices for each vertex (recall that $\Delta(K_{r,2,n})<2r$). This leads to a total of at most $(2r)^{|V(I_j)|-1}\leq(2r)^{|I_j|}$ possible ways to include the component $I_j$. By considering all $j\in[c]$, we conclude that there are at most \begin{equation}\label{equa:Krtnboundspread3} \prod_{j=1}^c(2r)^{|I_j|}=(2r)^{|I|} \end{equation} orderings $\sigma$ with $\tau=\tau(\sigma)$. Combining \eqref{equa:Krtnboundspread1}, \eqref{equa:Krtnboundspread2} and \eqref{equa:Krtnboundspread3} with the fact that each copy of $K_{r,2,n}$ is given by $d_{r,2}^{-n}2n/(r-2)$ distinct orderings (see \cref{fact:number-cCrsn}), we conclude that \begin{equation}\label{equa:spreadboundKr2n} |\mathcal{C}_{r,2,n}\cap \langle I\rangle|\le \frac{r-2}{2}d_{r,2}^n(4r)^{|I|}(n-|V(I)|+c-1)!. \end{equation} We can now estimate the spreadness of $\mathcal{C}_{r,2,n}$. Consider first any $I\subseteq M$ with $|I|>n/(2r)$. Note that $I$ has at most $2r$ components $I_j$ of size larger than $n/(2r)$. For each of these components, we use \cref{lemma:large-spread-new} to bound $|V(I_j)|$. For the remaining components $I_j$, we simply use the bound $|V(I_j)|-1\geq2|I_j|/(r+1)$, which follows by \cref{lemma:small-spread-new}.
By substituting these bounds into \eqref{equa:spreadboundKr2n}, we conclude that \[|\mathcal{C}_{r,2,n}\cap\langle I\rangle|\leq\frac{r-2}{2}d_{r,2}^n(4r)^{|I|}\left(n-\frac{2}{r+1}|I|+2r\right)!.\] By comparing this with the expression given in \cref{fact:number-cCrsn} (and taking into account the bound on $|I|$), we conclude that $|\mathcal{C}_{r,2,n}\cap\langle I\rangle|\leq q^{|I|}|\mathcal{C}_{r,2,n}|$ whenever $q\geq c_1 n^{-2/(r+1)}$\COMMENT{By \cref{fact:number-cCrsn}, we have that \[|\mathcal{C}_{r,2,n}|=\frac{r-2}{2}d_{r,2}^n(n-1)!.\] Now it suffices to satisfy \[\frac{r-2}{2}d_{r,2}^n(4r)^{|I|}\left(n-\frac{2}{r+1}|I|+2r\right)!\leq q^{|I|}\frac{r-2}{2}d_{r,2}^n(n-1)!.\] By Stirling's approximation, for sufficiently large $n$, it suffices to satisfy \[2(4r)^{|I|}\left(n-\frac{2}{r+1}|I|+2r\right)_{2r}\sqrt{2\pi\left(n-\frac{2}{r+1}|I|\right)}\left(\frac{n-\frac{2}{r+1}|I|}{e}\right)^{n-\frac{2}{r+1}|I|}\leq q^{|I|}\frac{1}{n}\sqrt{2\pi n}\left(\frac{n}{e}\right)^n.\] Rearranging, \[q\geq\left(2\sqrt{n\left(n-\frac{2}{r+1}|I|\right)}\left(n-\frac{2}{r+1}|I|+2r\right)_{2r}\right)^{1/|I|}4re^{\frac{2}{r+1}}\left(\frac{n-\frac{2}{r+1}|I|}{n}\right)^{n/|I|}\left(n-\frac{2}{r+1}|I|\right)^{-2/(r+1)}.\] Now, using the bound $|I|\geq n/(2r)$, as $n$ goes to infinity, the first term tends to $1$, so we have that, if $n$ is sufficiently large, it suffices to have \[q\geq8re^{\frac{2}{r+1}}\left(1-\frac{1}{r(r+1)}\right)^{2r}\left(1-\frac{1}{r(r+1)}\right)^{-2/(r+1)}n^{-2/(r+1)}=c_1n^{-2/(r+1)}.\]}, where $c_1$ is a constant that depends only on $r$. Consider now some $I\subseteq M$ with $|I|\leq n/(2r)=|K_{r,2,n}|/(r(r+1))$ (see \cref{fact:prop_Krsn}), and let $c$ be its number of components. By making use of \cref{lemma:small-spread-new} and \eqref{equa:spreadboundKr2n}, we have that \[|\mathcal{C}_{r,2,n}\cap\langle I\rangle|\leq\frac{r-2}{2}d_{r,2}^n(4r)^{|I|}\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}-1\right)!.\] As above, by comparing this with the expression given in \cref{fact:number-cCrsn}, and taking into account also \cref{fact:prop_Krsn}, for $q\geq c_2 n^{-2/(r+1)}$, where $c_2$ depends only on $r$, we have that $|\mathcal{C}_{r,2,n}\cap\langle I\rangle|\leq q^{|I|}|K_{r,2,n}|^{-c/(r+1)}|\mathcal{C}_{r,2,n}|$\COMMENT{As before, by \cref{fact:number-cCrsn}, \[|\mathcal{C}_{r,2,n}|=\frac{r-2}{2}d_{r,2}^n(n-1)!,\] and recall from \cref{fact:prop_Krsn} that \[|K_{r,2,n}|=\frac{r+1}{2}n.\] Now it suffices to satisfy \[\frac{r-2}{2}d_{r,2}^n(4r)^{|I|}\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}-1\right)!\leq q^{|I|}\left(\frac{r+1}{2}n\right)^{-c/(r+1)}\frac{r-2}{2}d_{r,2}^n(n-1)!.\] By Stirling's approximation, for sufficiently large $n$, it suffices to satisfy \begin{align*} 2(4r)^{|I|}\frac{1}{n-\frac{2}{r+1}|I|-\frac{c}{r+1}}\sqrt{2\pi\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}\right)}\left(\frac{n-\frac{2}{r+1}|I|-\frac{c}{r+1}}{e}\right)^{n-\frac{2}{r+1}|I|-\frac{c}{r+1}}\\ \leq q^{|I|}\left(\frac{r+1}{2}n\right)^{-c/(r+1)}\frac{1}{n}\sqrt{2\pi n}\left(\frac{n}{e}\right)^n. \end{align*} Rearranging, \begin{align*} q\geq\left(2\sqrt{n/\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}\right)}\right)^{1/|I|}4re^{\frac{2}{r+1}}\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}\right)^{-2/(r+1)}\\ \cdot\left(\frac{e(r+1)n}{2\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}\right)}\right)^{\frac{c}{(r+1)|I|}}\left(\frac{n-\frac{2}{r+1}|I|-\frac{c}{r+1}}{n}\right)^{n/|I|}. \end{align*} Now, consider each term here.
The first term goes to $1$ if $|I|$ goes to infinity, and is bounded by a constant otherwise ($\leq2$), so in general we simply bound it by $2$. The term $4re^{\frac{2}{r+1}}$ is also just a constant. By the bound on $|I|$, the next term is always bounded from above by $(n/2)^{-2/(r+1)}$. By the bound on $|I|$, the next term (the first in the second line) can be bounded from above by $(e(r+1))^{c/((r+1)|I|)}\leq(e(r+1))^{1/(r+1)}$, where the large parenthesis in the denominator is always at least $n/2$ and $c\leq|I|$. The last term can be bounded from above by $1$. Combining all of these, it suffices to have \[q\geq c_2n^{-2/(r+1)},\] where $c_2$ depends only on $r$.}. \end{proof} \begin{proof}[Proof of \cref{thm:Kr-cycle}] By \cref{lem:spread}, we have that $\mathcal{C}_{r,2,n}$ is $(O(n^{-2/(r+1)}),1/(r+1),1/(r(r+1)))$-superspread. By \cref{thm:main} it follows that, if $C$ is sufficiently large, for $p\ge Cn^{-2/(r+1)}$ we have \[ \mathbb{P}\left[K_{r,2,n}\subseteq G(n,p)\right]\ge 1/2. \] To finish the argument, one can employ a general result of Friedgut~\cite{Fri05} (see also~\cite{NS20}) which allows one to establish that $\mathbb{P}\left[K_{r,2,n}\subseteq G(n,(1+o(1))p)\right]=1-o(1)$. \end{proof} \section{Concluding remarks}\label{sec:conclude} \subsection{Dense overlapping \texorpdfstring{$K_r$}{Kr}-cycles} As mentioned in the introduction, a general result of \citet{Ri00} provides a sufficient condition for a spanning graph to be contained in $G(n,p)$. For a graph $H=(V,E)$, let $v(H)\coloneqq|V|$ and $e(H)\coloneqq|E|$. For each integer $v$, let $e_H(v)\coloneqq\max \{ e(F) : F \subseteq H, v(F)=v \}$. Then, the following parameter will be responsible for the upper bound on the threshold for the property that $H\subseteq G(n,p)$: \begin{align*} \gamma(H) \coloneqq \max_{3 \le v \le n} \left\{ \frac{e_H(v)}{v-2} \right\}. \end{align*} Riordan proved the following (see also~\cite{PP15} for its generalization to hypergraphs). \begin{theorem}\label{thm:Riordan} Let $H=H^{(i)}$ be a sequence of graphs with $n=n(i)$ vertices (where $n$ tends to infinity with $i$), $e(H)=\alpha\binom{n}{2}$ edges, where $\alpha=\alpha(n)$, and maximum degree $\Delta=\Delta(H)$. Let $p = p(n)\colon \mathbb{N}\to [0,1)$. If $H$ has a vertex of degree at least $2$ and $n p^{\gamma(H)} \Delta^{-4} \rightarrow \infty$, then a.a.s.~the random graph $G(n,p)$ contains a copy of $H$. \end{theorem} From \cref{fact:prop_Krsn} it follows that $\gamma(K_{r,s,n})\ge \frac{r+s-1}{2}$ and, since $\gamma(K_r)>\frac{r+s-1}{2}$ for $s\in[2]$, Riordan's theorem does not provide optimal bounds on the threshold in the case of our \cref{thm:Kr-cycle} (nor for $K_{r,1,n}$, for which the threshold was determined by \citet{Frieze20}). However, for $s\ge 3$, Riordan's theorem suffices and yields the correct threshold $n^{-2/(r+s-1)}$ for the property that $G(n,p)$ contains a copy of $K_{r,s,n}$. \subsection{Extensions: hypergraphs and rainbow thresholds} Throughout this paper, for simplicity, we have focused on properties of random graphs. However, we believe that \cref{thm:main} extends to random hypergraphs without much issue. Very recently, Frieze and Marbach~\cite{FM21} extended the results from~\cite{FKNP19,KNP20} to rainbow versions, where the vertices of some $r$-uniform hypergraph are colored randomly with $r$ colors.
It is then shown in~\cite{FM21} that the upper bounds on the thresholds proved in~\cite{FKNP19,KNP20} remain asymptotically the same when one asks for a rainbow hyperedge or a rainbow copy of a spanning structure (e.g., a bounded-degree spanning tree or the square of a Hamilton cycle), and the result is also extended to a rainbow version of the containment of the $k$-th power of a Hamilton cycle. We believe that the fragmentation lemma, \cref{lem:fragmentation}, and \cref{thm:main} also admit rainbow versions. \bibliographystyle{mystyle}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The geometry of complex polynomials has been an area of ongoing interest since the complex numbers were first conceived of geometrically. Foremost in the historical study of the geometry of complex polynomials has been the problem of finding the zeros (and critical points) of a given polynomial, or failing that, regions guaranteed to contain all (or some or none) of the zeros (or critical points) of the polynomial. The foundational result in this area is the Gauss--Lucas theorem, which states that the critical points of a complex polynomial lie in the convex hull of the zeros of that polynomial. In Section~\ref{sect: GL-related.}, we will survey results which are related to the Gauss--Lucas Theorem. A natural generalization of the notion of a zero of a complex polynomial $p(z)$ is a lemniscate of $p(z)$. The lemniscates of $p(z)$ are the components of the level sets $$\Lambda_{\epsilon}(p)=\{z:|p(z)|=\epsilon\}$$ for any $\epsilon\in(0,\infty)$ (of course, if $\epsilon=0$ we have reproduced the zero set of $p$). The study of the geometry specifically of these lemniscates also has a long history, dating to the investigations of Cassini and Bernoulli (see~\cite{Yates} for example). In Section~\ref{sect: Geometry of Lemniscates.}, we will survey recent results regarding the geometry of lemniscates, both individually and viewed as a complex of nested curves (nested in the sense of one lying in a bounded component of the complement of another). Hilbert's theorem, to the effect that the lemniscates of complex polynomials may be used to approximate simple closed curves arbitrarily well, has made these lemniscates a valuable tool in the emerging field of shape analysis. A shape $\Gamma$ is a simple closed path which i) is smooth and ii) divides $\hat{\mathbb{C}}$ into two simply connected domains, one bounded (called $\Omega_-$) and one unbounded (called $\Omega_+$). The fingerprint $\tau:\mathbb{T}\to\mathbb{T}$ of $\Gamma$ is the orientation-preserving diffeomorphism of the unit circle onto itself obtained by composing the appropriate (ie. subject to certain normalizations) Riemann maps for $\Omega_+$ and $\Omega_-$ in the appropriate way. In Section~\ref{sect: Fingerprints of Shapes and Conformal Equivalence.}, we will introduce these notions of shape and fingerprint properly, and survey recent results relating to the fingerprints of polynomial lemniscates. In the special case that the shape $\Gamma$ is a proper lemniscate of a complex polynomial $p$ (that is, a lemniscate containing all of the zeros of the polynomial in its bounded face), the fingerprint of $\Gamma$ has a particularly nice form, since the Riemann map of $\Omega_+$ onto the exterior of the unit disk may be taken to be $p(z)^{1/n}$ (where $n$ is the degree of $p$). In~\cite{EbenfeltKhavinsonShapiro}, it was shown that the fingerprint of any such $\Gamma$ is the $n^{\text{th}}$ root of a degree-$n$ Blaschke product $B(z)$, and conversely that the $n^{\text{th}}$ root of any degree $n$ Blaschke product is the fingerprint for some proper polynomial lemniscate. From this it follows that for any finite Blaschke product $B$, there is some injective analytic map $\varphi:\mathbb{D}\to\mathbb{C}$, and some complex polynomial $p$ (with $\deg(p)=\deg(B)$) such that $B=p\circ\varphi$ on $\mathbb{D}$. In other words, $B$ is conformally equivalent to a complex polynomial (of the same degree as $B$) on $\mathbb{D}$. This fact has been re-proven by various methods by several authors, and generalized beyond the realm of finite Blaschke products.
Also in Section~\ref{sect: Fingerprints of Shapes and Conformal Equivalence.}, we will survey recent results regarding conformal equivalence of arbitrary analytic functions to polynomials (and meromorphic functions to rational functions). The results contained herein are largely restricted to those appearing in the last ten or so years. Many theorems mentioned below appear in articles with other interesting results not mentioned here. The subjects described above are chosen largely for their appeal to the author's interest, and many results have appeared in other areas related to the geometry of complex polynomials. \section{Gauss--Lucas Related Theorems} \label{sect: GL-related.} There continue to be very many contributions to the classical study of the geometry of complex polynomials, which is chiefly concerned with the relations between the zeros, critical points, and coefficients of a complex polynomial. In this section, we will focus on generalizations of the Gauss--Lucas theorem. \subsection{The Shrinking Hulls of the Zeros of the Derivatives} If we let $H(p)$ denote the convex hull of the roots of a degree $n$ polynomial $p$, then the sequence $H(p)\supset H(p')\supset\cdots\supset H(p^{(n-1)})$ shrinks to a single point. In 2018, M.~Ravichandran~\cite{Ravichandran} quantified the rate at which this nested sequence shrinks with the following theorem. \begin{theorem} For any complex polynomial $p$ and any $r\in(1/2,1)$, $$m\left(H(p^{(\lceil r\deg(p)\rceil)})\right)\leq4(r-r^2)m(H(p)).$$ \end{theorem} \subsection{Convex Combinations of Incomplete Polynomials} For $n$ not necessarily distinct points $z_1,z_2,\ldots,z_n\in\mathbb{C}$, and any $1\leq k\leq n$, let $g_k$ denote the $k^{\text{th}}$ incomplete polynomial $$g_k(z)=\displaystyle\prod_{\stackrel{1\leq j\leq n}{j\neq k}}(z-z_j)$$ (that is, the monic degree $n-1$ polynomial whose zeros are exactly $z_1,z_2,\ldots,z_n$, except for $z_k$). In 2008, J.~L.~Diaz-Barrero and J.~J.~Egozcue~\cite{DiazBarreroEgozcue} provided the following generalization of the Gauss--Lucas theorem for convex combinations of incomplete polynomials. \begin{theorem} Let $z_1,z_2,\ldots,z_n\in\mathbb{C}$ be not necessarily distinct complex numbers. Let $r_1,r_2,\ldots,r_n\in[0,1]$ satisfy $\sum r_k=1$. Then the roots of the degree $n-1$ polynomial $A(z)=\displaystyle\sum r_kg_k(z)$ all lie in the convex hull of the points $z_1,z_2,\ldots,z_n$. \end{theorem} Note that in the previous theorem, setting each $r_k=1/n$ reproduces the classical Gauss--Lucas theorem. \subsection{Approximate and Asymptotic Gauss--Lucas Theorems} For a set $K\subset\mathbb{C}$, and an $\epsilon>0$, define $K_\epsilon$ to be the $\epsilon$-neighborhood of $K$. For a polynomial $p$, let $Z(p,K)$ denote the number of zeros of $p$ which lie in $K$. In 2016, V.~Totik~\cite{Totik} established the following asymptotic version of the Gauss--Lucas theorem. \begin{theorem}\label{thm: Asymptotic GL theorem.} For any bounded convex set $K\subset\mathbb{C}$, any $\epsilon>0$, and any sequence of polynomials $\{p_n\}$ with $\deg(p_n)=n$, if $\dfrac{Z(p_n,K)}{n}\to1$ then $\dfrac{Z({p_n}',K_\epsilon)}{n-1}\to1$. \end{theorem} While not quite fitting in this section, results which are somewhat similar to Theorem~\ref{thm: Asymptotic GL theorem.} in flavor, to the effect that the critical points of a random polynomial converge in distribution to the zeros of the polynomial, may be found in~\cite{ORourkeWilliams,PemantleRivin}.
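The classical containment underlying all of these results is also easy to test numerically. The following Python sketch (using \texttt{numpy} and \texttt{scipy}; the example polynomial and tolerance are arbitrary illustrative choices, not taken from any of the results above) verifies that every critical point lies in the convex hull of the zeros.

\begin{verbatim}
# Numerical illustration of the Gauss--Lucas theorem.
import numpy as np
from scipy.spatial import ConvexHull

zeros = np.array([1 + 1j, -2 + 0.5j, 0.5 - 2j, 3 - 1j, -1 - 1j])
p = np.poly(zeros)                  # coefficients of prod_k (z - z_k)
crit = np.roots(np.polyder(p))      # critical points = zeros of p'

pts = np.column_stack([zeros.real, zeros.imag])
hull = ConvexHull(pts)              # facets satisfy A x + b <= 0 inside
A, b = hull.equations[:, :2], hull.equations[:, 2]

for w in crit:
    inside = np.all(A @ np.array([w.real, w.imag]) + b <= 1e-9)
    print(np.round(w, 4), inside)   # prints True for each critical point
\end{verbatim}

Experiments of this kind (with many random polynomials in place of one fixed example) are a convenient way to build intuition for the quantitative refinements discussed in this section.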
In 2017, T.~J.~Richards conjectured in a document posted on \textit{arxiv.org}~\cite{Richards1} that underlying the asymptotic Theorem~\ref{thm: Asymptotic GL theorem.} is the following static principle. \begin{conjecture}\label{conj: Approx. GL theorem.} For any bounded convex set $K\subset\mathbb{C}$, and any $\epsilon>0$, there is a constant $C_{K,\epsilon}\in(0,1)$ such that, for any polynomial $p$ with sufficiently large degree, if $\dfrac{Z(p,K)}{\deg(p)}>C_{K,\epsilon}$, then $Z(p',K_\epsilon)\geq Z(p,K)-1$. \end{conjecture} In 2019, T.~J.~Richards and S.~Steinerberger~\cite{RichardsSteinerberger} proved a weaker version of Conjecture~\ref{conj: Approx. GL theorem.}. \begin{theorem} For any convex bounded set $K\subset\mathbb{C}$, and any $\epsilon>0$, there is a constant $D_{K,\epsilon}>0$ such that, for any polynomial $p$, if $\dfrac{Z(p,K)}{\deg(p)}>\dfrac{\log(\deg(p))-D_{K,\epsilon}}{\log(\deg(p))}$, then $Z(p',K_\epsilon)\geq Z(p,K)-1$. \end{theorem} We also note here that in private correspondence, V.~Totik has communicated a proof of Conjecture~\ref{conj: Approx. GL theorem.} to the author, along with bounds on the constant $C_{K,\epsilon}$, and we look forward to seeing these results in print soon. \subsection{A non-convex Gauss--Lucas Theorem for Polynomials with Non-negative Coefficients} In 2017, Bl.~Sendov and H.S.~Sendov~\cite{SendovSendov} proved an analogue to the Gauss--Lucas theorem for non-convex sectors of $\mathbb{C}$, provided that the coefficients of the polynomial in question are real and non-negative. In order to state the theorem, for $\alpha\in[0,\pi]$, define $\operatorname{Sect}(\alpha)=\{z\in\mathbb{C}:|\arg(z)|\geq\alpha\}$. \begin{theorem} Let $\alpha\in[0,\pi]$. If $p(z)$ has all real and non-negative coefficients, and all of the zeros of $p$ lie in the sector $\operatorname{Sect}(\alpha)$, then all of the critical points of $p$ lie in $\operatorname{Sect}(\alpha)$. \end{theorem} \subsection{Converses to the Gauss--Lucas Theorem} In 2014, N.~Nikolov and B.~Sendov~\cite{NikolovSendov} proved the following converse to the Gauss--Lucas theorem, showing that differentiation is the only non-trivial linear operator which contracts the convex hulls of zero sets. \begin{theorem} Let $S:\mathbb{C}[z]\to\mathbb{C}[z]$ be a linear operator for which $H(S(p))\subset H(p)$ for all $p\in\mathbb{C}[z]$. Then either $S$ is complex-valued (ie. $S(\mathbb{C}[z])\subset\mathbb{C}$), or there is some $c\in\mathbb{C}\setminus\{0\}$ and some integer $n\geq0$ for which $S(p)=cp^{(n)}$. \end{theorem} Another direction in which one might look for a converse to the Gauss--Lucas theorem is an identification of those collections of $n-1$ points in a given convex set $K\subset\mathbb{C}$ which might appear as the critical points of some degree-$n$ complex polynomial having all of its zeros lying in $K$. Along these lines, in 2017 C.~Frayer~\cite{Frayer1} established the following theorem for polynomials with three distinct roots. Any such polynomial may be normalized to have a zero at $1$, and its other two roots on the unit circle. Let $p(z)=(z-1)^k(z-d_1)^m(z-d_2)^n$, for $d_1,d_2\in\mathbb{T}$. Let $P(k,m,n)$ denote the collection of all such polynomials $p$. For $r\in(0,1)$, define $$T_r=\left\{z\in\mathbb{C}:\left|z-\left(1-\dfrac{r}{2}\right)\right|=\dfrac{r}{2}\right\},$$ the circle centered at $1-\dfrac{r}{2}$, which is tangent to the unit circle at $1$. \begin{theorem}\label{thm: Three distinct zeros.} Fix positive integers $k$, $m$, and $n$.
\begin{itemize} \item No polynomial $p\in P(k,m,n)$ has a critical point in the region interior to $T_{\frac{2k}{k+m+n}}$. \item If $m\neq n$, then additionally, no $p\in P(k,m,n)$ has a critical point in the region $D$ defined directly after this theorem. \item If $c\in\mathbb{D}$ is in neither of the regions mentioned above, then there is a $p\in P(k,m,n)$ with a critical point at $c$. If $c$ is on the boundary of these regions, this polynomial is unique. If $c$ is not on the boundary of these regions, there are exactly two such polynomials. \end{itemize} \end{theorem} If $m\neq n$ (with $m<n$), as in the second part of the above theorem, the region $D$ is bounded by the degree-$2$ algebraic curve which is parameterized by $\gamma(t)=(x,y)$, where $$x=\dfrac{(m+n+k)^2t^2-[2(m+n+k)(m+n+2k)-4mn]t+4k(m+n+k)}{(m+n+k)((m+n+k)t-2k)(t-2)},$$ and $$y^2=(1-x)(t-1+x),$$ for $\dfrac{2(m+k)}{m+n+k}\leq t\leq\dfrac{2(n+k)}{m+n+k}$. In~\cite{Frayer2}, Frayer provided a geometric construction of the polynomials whose existence is guaranteed by Theorem~\ref{thm: Three distinct zeros.}. A monograph could easily be devoted to the many refinements, generalizations, and other work which the Gauss--Lucas theorem has inspired over the years. Those contained in this section are only the most recent ones which refer to general polynomials of one complex variable. For other refinements and generalizations, see~\cite{ORourkeWilliams} and the many references contained therein. \section{The Geometry of the Lemniscates} \label{sect: Geometry of Lemniscates.} The lemniscates of complex polynomials (that is, the components of the level sets $\Lambda_\epsilon(p)=\{z:|p(z)|=\epsilon\}$ for a complex polynomial $p$ and an $\epsilon>0$) have inspired considerable interest since 1680, when they were studied by G.~D.~Cassini (see~\cite{Yates} for example). Their length, circumscribed area, convexity, and other geometric properties provide a common locus of study. In their 1958 paper \textit{Metric Properties of Polynomials}~\cite{ErdosHerzogPiranian}, P.~Erd\H{o}s~et~al. posed a number of problems surrounding these geometric properties, one of which we will begin with. \subsection{The Erd\H{o}s--Herzog--Piranian Lemniscate Problem} Let $L_n$ denote the maximum length of the level set $\Lambda_1(p)$, for any degree $n$ polynomial $p$. Erd\H{o}s~et~al. conjectured that $p_n(z)=z^n+1$ is the polynomial which maximizes this length, that is, that the length of $\Lambda_1(p_n)$ equals $L_n$. Note that the maximal length $L_n$ is known to be achieved by some polynomial, and that the length of $\Lambda_1(p_n)$ is known to equal $2n+O(1)$ (for these and other results on the so-called Erd\H{o}s--Herzog--Piranian Lemniscate Problem, see references in~\cite{WangPeng,KuznetsovaTkachev}). In 2003, O.~S.~Kuznetsova and V.~G.~Tkachev~\cite{KuznetsovaTkachev} studied the growth of the length functions for the level sets of analytic functions. They established a number of results, of which we mention one here in the context of complex polynomials. For a rectifiable plane curve $\mathcal{C}$, let $\ell(\mathcal{C})$ denote the length of $\mathcal{C}$. For a monic polynomial $p$, define the auxiliary function $$F_p(t)=\ln\left|\ell(\Lambda_{e^t}(p))\right|-\dfrac{t}{\deg(p)}.$$ Kuznetsova and Tkachev showed the following. \begin{theorem}\label{thm: Lemniscate growth.} For a monic polynomial $p\in\mathbb{C}[z]$, the maps $t\mapsto\ell(\Lambda_{e^t}(p))$ and $t\mapsto F_p(t)$ are continuous for $t\in(-\infty,\infty)$. Moreover if $p$ is non-trivial (ie.
not of the form $c(z-a)^n$ for constants $c,a\in\mathbb{C}$ and non-negative integer $n$), then the following hold. \begin{itemize} \item Both maps mentioned above are strictly convex on any interval $(s_1,s_2)\subset\mathbb{R}$ not containing the logarithm of the modulus of a critical value of $p$. \item $\displaystyle\lim_{t\to+\infty} F_p(t)=\ln(2\pi)$. \end{itemize} \end{theorem} If, for $s\in\mathbb{R}$, we define the dilation function $p_s(z)=e^{-s}p\left(ze^{s/\deg(p)}\right)$, the level set and growth function discussed in Theorem~\ref{thm: Lemniscate growth.} satisfy the following invariance properties: $$\Lambda_{e^t}(p_s)=e^{-s/n}\Lambda_{e^{s+t}}(p)\text{ and }F_{p_s}(t)=F_p(s+t).$$ These properties allow one, for instance, to derive from Theorem~\ref{thm: Lemniscate growth.} also some length properties of the family of level sets $\Lambda_1(z^n+r)$, for $r\in\mathbb{R}$. These length properties were also discovered (and augmented) independently in 2006 by C.~Wang and L.~Peng~\cite{WangPeng}. They showed the following. \begin{theorem} For any integer $n\geq 1$ and $r\in\mathbb{R}$, define $\gamma_n(r)=\ell\left(\Lambda_1(z^n+r)\right)$. \begin{itemize} \item ${\gamma_n}'\geq 0$ on $(0,1)$ and ${\gamma_n}'\leq 0$ on $(1,\infty)$. \item ${\gamma_n}''\geq 0$ on $(0,1)\cup(1,\infty)$. \item For any integer $n$, $4\log(2)\leq\gamma_n(1)-2n\leq 2(\pi-1)$. \end{itemize} \end{theorem} In 2012, O.~N.~Kosukhin~\cite{Kosukhin} gave the following upper bound for $L_n$, which improved on the then best results. \begin{theorem} For all $n\geq 2$, $L_n\leq\pi\left(n+\dfrac{25}{23}\right)+\pi\sqrt{\dfrac{n-1}{2}\ln(\pi n(n-1))}$. \end{theorem} In 2008, A.~Fryntov and F.~Nazarov~\cite{FryntovNazarov} showed that $p(z)=z^n+1$ locally maximizes the length of the level set $\Lambda_1(p)$, and provided another asymptotic upper bound for the maximal length $L_n=\displaystyle\max_{\deg(p)=n}\ell(\Lambda_1(p))$. \begin{theorem} Let $n$ be a positive integer. There is some $\epsilon>0$ such that for any degree $n$ polynomial $p$, if the coefficients of $q(z)=p(z)-(z^n+1)$ are all smaller than $\epsilon$ in modulus, then $$\ell(\Lambda_1(p))\leq\ell(\Lambda_1(z^n+1)).$$ \end{theorem} \begin{theorem} $L_n\leq 2n+o(n)$. \end{theorem} \subsection{Regions Bounded by Lemniscates} We now turn to recent area results for lemniscates. In this section, we use the notation $\Lambda_\epsilon(p)$ to denote also the region circumscribed by the lemniscate $\Lambda_\epsilon(p)$. In 2007, H.~H.~Cuenya and F.~E.~Levis~\cite{CuenyaLevis} proved the following result, where, for $r>0$, $M_r$ denotes the collection of polynomials for which the minimum distance between any two distinct zeros of $p$ is at least $r$ times the diameter of the zero set of $p$. \begin{theorem}\label{conj: Disk in lemniscate.} For any $r>0$, there is a constant $C>0$ such that for any $s>0$ and any polynomial $p\in M_r$, the region $\Lambda_s(p)$ contains a disk $D$ with $m(D)\geq \dfrac{m\left(\Lambda_s(p)\right)}{C}$. \end{theorem} Cuenya and Levis conjectured that the suitability condition $p\in M_r$ can be removed from the statement of Theorem~\ref{conj: Disk in lemniscate.}, and proved this in the special case that $p$ has at most three distinct zeros. In 2009, A.~Y.~Solynin and A.~S.~Williams~\cite{SolyninWilliams} established Cuenya and Levis' conjecture, but with the constant $C$ depending on the degree of the polynomial $p$.
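Before moving on, we note that the length estimates of the previous subsection are easy to explore numerically. The following Python sketch is a rough experiment, not a rigorous computation: the grid resolution, the window $R$, and the use of \texttt{matplotlib}'s contour extraction are all ad hoc choices. It approximates $\ell(\Lambda_1(z^n+1))$ by the total length of the extracted contour polylines, for comparison with the bounds $4\log(2)\leq\gamma_n(1)-2n\leq2(\pi-1)$ quoted above.

\begin{verbatim}
# Crude numerical estimate of ell(Lambda_1(z^n + 1)).
import numpy as np
import matplotlib.pyplot as plt

def lemniscate_length(n, m=2000, R=1.5):
    # Lambda_1(z^n + 1) lies in |z| <= 2**(1/n) < R for n >= 2.
    x = np.linspace(-R, R, m)
    X, Y = np.meshgrid(x, x)
    Z = X + 1j * Y
    cs = plt.contour(X, Y, np.abs(Z**n + 1), levels=[1.0])
    total = 0.0
    for seg in cs.allsegs[0]:       # polylines approximating the level set
        d = np.diff(seg, axis=0)
        total += np.sum(np.hypot(d[:, 0], d[:, 1]))
    return total

for n in (5, 10, 20):
    # theorem: gamma_n(1) - 2n lies in [4 log 2, 2(pi - 1)]
    print(n, lemniscate_length(n) - 2 * n)
\end{verbatim}

Note that $\Lambda_1(z^n+1)$ is a singular level set (it passes through the critical point at the origin), so the extracted polylines are least accurate near $0$; a finer grid improves the estimate there.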
The following theorem, published by P.~Ding in 2018~\cite{Ding}, relates the area of the region between two lemniscates, the lengths of the two lemniscates, and the curvature of the interceding lemniscates. Let $p$ be a complex polynomial. For $0<r<s$, let $\lambda_r$ and $\lambda_s$ be components of the lemniscates $\Lambda_r(p)$ and $\Lambda_s(p)$, such that $\lambda_r$ lies in a bounded component of ${\lambda_s}^c$. For any $t\in(r,s)$, let $\lambda_t$ denote the components of $\Lambda_t(p)$ which lie in the region $D$ between $\lambda_r$ and $\lambda_s$. Finally, let $\kappa(z)$ denote the curvature at $z$ of the lemniscate of $p$ containing $z$. \begin{theorem} Given the notation in the preceding paragraph, the following holds. \begin{itemize} \item The area of $D$ is $\displaystyle\int_r^s\left(\int_{z\in\lambda_t}\dfrac{1}{|p'(z)|}|dz|\right)dt$. \item $\displaystyle\int_r^s\ell(\lambda_t)dt=\iint_D|p'(z)|dA$. \item $\ell(\lambda_s)=\ell(\lambda_r)+\displaystyle\iint_D\kappa(z)dA$. \end{itemize} \end{theorem} \subsection{Area of and Roundness of the Preimage Under a Polynomial} In 2004, E.~Crane~\cite{Crane} proved the following results regarding the preimages of measurable sets in the plane under a complex polynomial. \begin{theorem} Let $K\subset\mathbb{C}$ be measurable, and let $p$ be a complex polynomial with degree $n$. Then $$m\left(p^{-1}(K)\right)\leq\pi\left(\dfrac{m(K)}{\pi}\right)^{1/n}.$$ \end{theorem} If the logarithmic capacity of $K\subset\mathbb{C}$ is denoted $cap(K)$, and the roundness of $K$ is defined to be $\rho(K)=\dfrac{m(K)}{\pi\, cap(K)^2}$, Crane also proved the following. \begin{theorem} Let $K\subset\mathbb{C}$ be measurable, and let $p$ be a complex polynomial with degree $n$. Then $$\rho\left(p^{-1}(K)\right)\leq\rho(K)^{1/n}.$$ \end{theorem} \subsection{Lengths of Lemniscates of Random Polynomials} In 2017, E.~Lundberg and K.~Ramachandran~\cite{LundbergRamachandran} studied the lengths of the lemniscates $\Lambda_1(p_n)$, for a random sequence of polynomials $\{p_n\}$, proving the following. \begin{theorem} Let $\{p_n\}$ be a sequence of complex polynomials, where the coefficients of $p_n(z)=\displaystyle\sum_{j=0}^nc_jz^j$ are chosen i.i.d. with the standard complex Gaussian density $\dfrac{1}{\pi}\exp\left(-|z|^2\right)$. Then $$\displaystyle\lim_{n\to\infty}\mathbb{E}\,\ell(\Lambda_1(p_n))=C,$$ where $C$ is a constant defined by an integral, numerically determined to be $C\approx8.3882$. \end{theorem} \subsection{The Lemniscate Tree of a Polynomial} In 1991, F.~Catanese and M.~Paluszny~\cite{CatanesePaluszny} published an article exploring the topological configuration of all the lemniscates of a complex polynomial. It follows directly from the maximum modulus principle that the non-critical (also called non-singular) lemniscates (ie. those not containing a critical point) of a polynomial interpolate smoothly between any two critical (or singular) lemniscates, and conversely that if any two critical lemniscates are incomparable (in the sense that neither lies in a bounded component of the complement of the other), then these two critical lemniscates lie in different bounded components of the complement of some third critical lemniscate. It follows that the topology of the graph $y=|p(z)|$ may be entirely determined by knowing the configuration of only the critical lemniscates.
To each such configuration, Catanese and Paluszny associated a tree, whose nodes represent the distinct critical points and zeros of the polynomial, with an edge between two nodes $a$ and $b$ if $a$ represents a non-trivial critical point, and the zero or critical point of $p$ which is represented by $b$ lies in a bounded face of the critical lemniscate containing the critical point represented by $a$, or vice versa. Catanese and Paluszny showed that there is a bijection between the collection of simple central balanced binary trees and the connected components of the space of lemniscate-generic complex polynomials (ie. those complex polynomials all of whose critical values have different moduli). In forthcoming work, M.~Epstein~et~al.~\cite{EpsteinHaninLundberg} analyzed the lemniscate tree of random polynomials. They established the following theorem, where $LT_n$ denotes the collection of generic lemniscate trees with $n$ leaves (ie. those trees corresponding to lemniscate-generic degree-$n$ complex polynomials). They additionally identified the out-degree of a node in a lemniscate tree as $0$ if the node represents a zero of the polynomial, and $2$ if the node represents a non-trivial critical point of the polynomial. \begin{theorem} Let $\{T_n\}\subset LT_n$ be a sequence of lemniscate trees sampled uniformly at random, and let $X_n$ denote the number of vertices in $T_n$ of out-degree two. Let $\mu_n$ and $\sigma_n$ denote the mean and standard deviation of $X_n$. Then $$\mu_n=\left(1-\dfrac{2}{\pi}\right)n+O(1)\text{ and }\sigma_n^2=\left(\dfrac{4}{\pi^2}+\dfrac{2}{\pi}-1\right)n+O(1).$$ Moreover, $\sigma_n^{-1}(X_n-\mu_n)$ converges in distribution to a standard Gaussian random variable as $n\to\infty$. \end{theorem} In 2018, A.~Frolova~et~al.~\cite{FrolovaKhavinsonVasil'ev} explored the construction of lemniscate trees by a means they called polynomial fireworks. In this process, a single zero $z_0$ of a complex polynomial $p(z)$ is replaced by the zeros of a second polynomial $q(z)$. That is, $p(z)\mapsto (z-z_0)^{-1}p(z)q(z)$. They prove the following result regarding the effect of this process on the lemniscate tree. \begin{theorem} Let $p(z)$ be a lemniscate generic complex polynomial, and let $z_0\in\mathbb{C}$ be one of the zeros of $p$. If the zeros of $q$ are all sufficiently close to $z_0$, then the lemniscate tree of the polynomial $(z-z_0)^{-1}p(z)q(z)$ is obtained by appending the lemniscate tree of $q$ to the leaf of the lemniscate tree of $p$ corresponding to $z_0$, and merely extending the other leaves by the appropriate length. \end{theorem} In 2015, T.~J.~Richards~\cite{Richards1} expanded the definition of the lemniscate tree by taking into account not just the inclusion relation of one critical lemniscate or zero lying in the bounded component of the complement of the other, but also i) the critical value associated with each critical point of the underlying polynomial, and ii) the rotational orientation of each interior critical lemniscate. This notion of the configuration of critical lemniscates also accommodated lemniscate non-generic polynomials. Let $U$ denote the collection of equivalence classes (modulo precomposition with an affine map) of complex polynomials with a prescribed list of critical values, and let $V$ denote the collection of critical lemniscate configurations (roughly lemniscate trees with critical value and rotation data as described above).
\begin{theorem} The map $\Pi:U\to V$ which takes a polynomial to its critical lemniscate configuration is a bijection. \end{theorem} \section{Fingerprints of Shapes and Conformal Equivalence} \label{sect: Fingerprints of Shapes and Conformal Equivalence.} One of the reasons for the recent burgeoning interest in the lemniscates of complex polynomials is their potential role in the field of shape analysis. Define a shape $\Gamma$ to be a simple, smooth, closed curve in the plane, with bounded interior region $\Omega_-$ and unbounded exterior region $\Omega_+$. Let $\mathbb{D}$ denote the unit disk, and let $\mathbb{D}_+$ denote the region $\hat{\mathbb{C}}\setminus cl(\mathbb{D})$. Let $\Phi_-:\mathbb{D}\to\Omega_-$ and $\Phi_+:\mathbb{D}_+\to\Omega_+$ be analytic bijections (whose existence is guaranteed by the Riemann mapping theorem). Adopt also the normalization $\Phi_+(\infty)=\infty$ and ${\Phi_+}'(\infty)>0$. Since $\Gamma$ is smooth, $\Phi_+$ and $\Phi_-$ may be extended smoothly to the boundary of their domains. The fingerprint of $\Gamma$ is defined to be the self-map of the unit circle $\tau:\mathbb{T}\to\mathbb{T}$ given by $\tau={\Phi_+}^{-1}\circ\Phi_-$. The map from shapes (modulo precomposition with affine transformations) to orientation-preserving diffeomorphisms of $\mathbb{T}$ (modulo precomposition with an automorphism of the disk) is known to be a bijection. The problem of recovering a shape from its fingerprint has been explored numerically, and several algorithms have been developed (see~\cite{EbenfeltKhavinsonShapiro} and the discussion contained therein for these results). In the special case that the shape is a proper non-singular polynomial lemniscate, the corresponding fingerprint has a particularly nice form. In 2011, P.~Ebenfelt~et~al.~\cite{EbenfeltKhavinsonShapiro} showed the following. \begin{theorem} Let $p(z)$ be a degree $n$ complex polynomial, and suppose that the level set $\Lambda_1(p)$ has a single, non-singular component. Then the fingerprint of $\Lambda_1(p)$ is an $n^{\text{th}}$ root of a degree-$n$ finite Blaschke product. Conversely, every $n^{\text{th}}$ root of a degree-$n$ finite Blaschke product is the fingerprint for some such lemniscate. \end{theorem} In 2018, A.~Frolova~et~al.~\cite{FrolovaKhavinsonVasil'ev} studied the fingerprints of smooth shapes, viewing them as smooth increasing bijections $\tau:[0,2\pi]\to[0,2\pi]$ (modulo the identification $0\sim2\pi$), rather than self-maps of the unit circle. They proved the following. \begin{theorem} Let $p(z)$ be a degree $n$ complex polynomial, and suppose that the lemniscate $\Lambda_1(p)$ has a single, non-singular component. Then the fingerprint of $\Lambda_1(p)$ has an even number of inflection points, at least $2$ and at most $4n-2$. \end{theorem} Suppose again that $\Gamma=\Lambda_1(p)$ is a lemniscate of a degree $n$ complex polynomial $p$, with a single, non-critical component (that is, all of the critical values of $p$ have magnitude less than $1$, see~\cite{EbenfeltKhavinsonShapiro} or~\cite{Younsi} for details). As before, let $\Omega_+$ denote the region exterior to $\Gamma$. Then the exterior Riemann map $\Phi_+:\mathbb{D}_+\to\Omega_+$ may be chosen so that ${\Phi_+}^{-1}(z)=p(z)^{1/n}$. Let $B(z)$ be a degree-$n$ Blaschke product whose $n^{\text{th}}$ root is a fingerprint for $\Gamma$ (whose existence is shown in~\cite{EbenfeltKhavinsonShapiro}, as mentioned above). Then taking $n^{\text{th}}$ powers, we have the equation $B=p\circ\Phi_-$ on $\mathbb{D}$.
The interesting direction is the converse (also following from~\cite{EbenfeltKhavinsonShapiro}). \begin{theorem}\label{thm: Conformal equivalence first.} For any finite Blaschke product $B$, there is a complex polynomial $p$ with the same degree as $B$, and an injective analytic map $\varphi:\mathbb{D}\to\mathbb{C}$ for which $B=p\circ\varphi$ on $\mathbb{D}$. \end{theorem} In general, if $f$ is an analytic (or later, meromorphic) function on a domain $E\subset\mathbb{C}$, and there is an injective analytic map $\varphi:E\to\mathbb{C}$ and an analytic (or meromorphic) map $g$ with domain $\varphi(E)$ such that $f=g\circ\varphi$ on $E$, then $g$ is said to be a conformal model of $f$ on $E$. With this notation, Theorem~\ref{thm: Conformal equivalence first.} states that a finite Blaschke product $B$ has a polynomial conformal model $p$ on $\mathbb{D}$, with $\deg(p)=\deg(B)$. Theorem~\ref{thm: Conformal equivalence first.} was also proved by different means by T.~J.~Richards~\cite{Richards1} in 2015. In 2016, Richards~\cite{Richards2} extended this result to general analytic functions which are analytic across the boundary of the unit disk, though this time with no control on the degree of the polynomial. \begin{theorem}\label{thm: Conformal analysis for disk functions.} Let $f$ be a function which is analytic on an open set containing the closed unit disk. Then $f$ has a polynomial conformal model on $\mathbb{D}$. \end{theorem} In 2017, T.~J.~Richards and M.~Younsi~\cite{RichardsYounsi1} gave a version of Theorem~\ref{thm: Conformal analysis for disk functions.} for meromorphic functions, in which they were also able to recover control over the degree of the polynomial $p$ (now a rational function $q$), subject to a condition on the behavior of the function $f$ on the boundary of the disk. \begin{theorem} Let $f$ be a meromorphic function on an open set containing the closed unit disk, such that i) $f$ has no critical points on $\mathbb{T}$, and ii) $f(\mathbb{T})$ is a Jordan curve, whose bounded face contains $0$. Suppose without loss of generality that the number of zeros $m$ of $f$ lying in $\mathbb{D}$ is greater than or equal to the number of poles $n$ of $f$ lying in $\mathbb{D}$. Then there is a rational function $q$ and an injective analytic map $\varphi:\mathbb{D}\to\mathbb{C}$ such that the following hold. \begin{itemize} \item $f=q\circ\varphi$ on $\mathbb{D}$. \item $q$ has $m$ zeros, all of which lie in $\varphi(\mathbb{D})$. $q$ has $n$ poles lying in $\varphi(\mathbb{D})$, and the only pole of $q$ not lying in $\varphi(\mathbb{D})$ is at $\infty$, with multiplicity $m-n$ (if that quantity is non-zero). \end{itemize} \end{theorem} Richards and Younsi also established a negative result regarding the degree of the polynomial conformal model for an analytic disk function $f$, to the effect that the minimal degree of a polynomial conformal model for $f$ on $\mathbb{D}$ cannot be bounded in terms of the degree of non-injectivity of $f$ on $\mathbb{D}$ (that is, how many-to-one $f$ is on $\mathbb{D}$). \begin{theorem} For any $n\geq2$, there is a function $f_n$ which is analytic on an open set containing the closed unit disk for which the following hold. \begin{itemize} \item $f_n$ is at most $2$-to-$1$ on $\mathbb{D}$. \item $f_n$ has no polynomial conformal model with degree $\leq n$.
\end{itemize} \end{theorem} In 2016, M.~Younsi~\cite{Younsi} showed that a rational function may be found which is simultaneously conformally equivalent to any two prescribed finite Blaschke products $A$ and $B$, on $\mathbb{D}$ and $\mathbb{D}_+$ respectively. \begin{theorem} Let $A$ and $B$ be finite Blaschke products. There is a rational function $q(z)$ for which the lemniscate $\Gamma=\Lambda_1(q)$ has a single, non-critical component, and for which $A=q\circ\Phi_-$ on $\mathbb{D}$ and $B=q\circ\Phi_+$ on $\mathbb{D}_+$. \end{theorem} In 2019, T.~J.~Richards and M.~Younsi~\cite{RichardsYounsi2} gave a first constructive result, describing an explicit construction for the polynomial conformal model for finite Blaschke products of degree at most $3$. They also gave the following formula for the polynomial conformal model $p$ and associated injective analytic map $\varphi$ for a finite Blaschke product of arbitrarily high degree, whose zeros are evenly distributed on a circle centered at the origin. \begin{theorem} Let $c\in\mathbb{D}$, $\lambda\in\mathbb{T}$, and $n\geq1$ be chosen. Define $B(z)=\lambda\dfrac{z^n-c^n}{1-\bar{c}^nz^n}$. Then $B$ has polynomial conformal model $p(z)=\lambda\left(|c|^{2n}-1\right)z^n-\lambda c^n$. Setting $\psi(z)=\dfrac{e^{i\pi/n}z}{\sqrt[n]{1-\bar{c}^nz^n}}$, $\psi^{-1}$ is an injective analytic map on $\mathbb{D}$, and $B=p\circ\psi^{-1}$ on $\mathbb{D}$. \end{theorem} In 2013, T.~J.~Richards~\cite{RichardsLowtherSpeyer} posted Theorem~\ref{thm: Conformal analysis for disk functions.} as a conjecture on the website \textit{math.stackexchange.com}. As noted above, Richards published a proof for this result in 2016. Before that, in 2013, users G.~Lowther and D.~Speyer had already provided a proof of a more general result (also on~\cite{RichardsLowtherSpeyer}), in which the disk $\mathbb{D}$ is replaced with an arbitrary compact set. As we wish to include this more general result, and the proof has not appeared in a peer-reviewed source in the intervening years, we will present the proof here, with an extension also to meromorphic functions. It should be emphasized that the proof presented here is essentially that of Lowther and Speyer, the only non-trivial changes being those necessary to accommodate meromorphic rather than analytic functions. \begin{theorem}\label{thm: To prove.} Let $K\subset\mathbb{C}$ be compact, and let $f$ be a function which is meromorphic on $K$. Then there is an injective analytic function $\varphi:K\to\mathbb{C}$, and a rational function $q$ such that $f=q\circ\varphi$ on $K$. \end{theorem} Mirroring the work of Lowther and Speyer, we will make use of the following lemmas. \begin{lemma}\label{lem: Lemma 1.} Let $f$, $\{g_n\}_{n=1}^\infty$ be non-constant analytic functions on an open set $U\subset\mathbb{C}$, and assume that $g_n\to f$ uniformly on $U$. Let $K\subset U$ be compact, and suppose the following holds. \begin{itemize} \item If $z\in K$ is a critical point of $f$ with multiplicity $m\geq 1$, then for each $n\geq 1$, and each $j\in\{0,1,\ldots,m\}$, ${g_n}^{(j)}(z)=f^{(j)}(z)$. \end{itemize} Then for all sufficiently large $n$, there is an injective analytic map $\varphi_n:K\to U$ such that $f=g_n\circ\varphi_n$ on $K$. \end{lemma} \begin{lemma}\label{lem: Lemma 2.} Let $f$ be analytic and non-constant on a compact set $K\subset\mathbb{C}$.
There is an open neighborhood $U$ of $K$ and a sequence of rational functions $q_n$ having no poles in $U$ and having only a single pole in each component of $U^c$ for which $q_n\to f$ uniformly on $U$, and such that if $z\in K$ is a critical point of $f$ with multiplicity $m\geq 1$, then for each $n\geq 1$, and each $j\in\{0,1,\ldots,m\}$, ${q_n}^{(j)}(z)=f^{(j)}(z)$. \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm: To prove.}] Let $K\subset\mathbb{C}$ be compact, and let $\mathcal{O}$ be an open set containing $K$. Let $f:\mathcal{O}\to\hat{\mathbb{C}}$ be meromorphic. By replacing $\mathcal{O}$ with a slightly smaller open set, still containing $K$, we may assume that $f$ is meromorphic on the closure of $\mathcal{O}$ (and thus has only finitely many poles on $\mathcal{O}$), with no critical points or poles on $\partial\mathcal{O}$. Let $w_1,w_2,\ldots,w_M\in\mathcal{O}$ be the poles of $f$ in $\mathcal{O}$, with multiplicities $m_1,m_2,\ldots,m_M\in\mathbb{N}$. Around each pole $w_k$, there is an open neighborhood $E_k$ such that for some analytic bijection $\psi_k:E_k\to\mathbb{D}$, with $\psi_k(w_k)=0$, $f(z)=\dfrac{c_k}{\psi_k(z)^{m_k}}$ on $E_k$. By i) dividing $f$ by a large enough constant, ii) reducing the neighborhoods $E_k$ as necessary, and iii) making the appropriate choice of the maps $\psi_k$, we may assume without loss of generality that each $c_k=1$. Define $\mathcal{O}_2=\mathcal{O}\setminus\displaystyle\bigcup E_k$. By Lemma~\ref{lem: Lemma 2.}, we may choose a neighborhood $U$ of $cl(\mathcal{O}_2)$, and a sequence of rational functions $\{q_n\}$ which interpolates the values and derivative data at each critical point of $f$ in $\mathcal{O}_2$. By Lemma~\ref{lem: Lemma 1.}, for sufficiently large $n$, there is an injective analytic map $\varphi_n:\mathcal{O}_2\to U$ such that $f=q_n\circ\varphi_n$ on $\mathcal{O}_2$. Let $n_0$ denote the smallest such (or any such) value of $n$. Set $q=q_{n_0}$ and $\varphi=\varphi_{n_0}$. For each $1\leq k\leq M$, let $\widetilde{E_k}$ denote the bounded region bounded by $\varphi\left(\partial E_k\right)$. Since each $E_k$ contained a single distinct pole of $f$ of multiplicity $m_k$, each $\widetilde{E_k}$ contains a single distinct pole of $q$ of multiplicity $m_k$. Thus since $|q|=1$ on $\partial\widetilde{E_k}$ (since $|f|=1$ on $\partial E_k$), we may choose $\widetilde{\psi_k}:\widetilde{E_k}\to\mathbb{D}$ to be a Riemann map for $\widetilde{E_k}$ for which $$q(z)=\dfrac{1}{\widetilde{\psi_k}(z)^{m_k}}\text{ on }\widetilde{E_k}.$$ Thus if we extend $\varphi$ from $\mathcal{O}_2$ to $\mathcal{O}$ by $\varphi=\widetilde{\psi_k}^{-1}\circ\psi_k$ on $E_k$, then $\varphi$ is continuous, thus analytic across the boundary of $E_k$, and $f=q\circ\varphi$ on all of $\mathcal{O}$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem: Lemma 1.}] By restricting $U$ to a small enough open set containing $K$, we may assume without loss of generality $f$ has no critical points in $U\setminus K$. Fix some $z_0\in U$. Our first goal is to show that there is a small neighborhood $V_0$ of $z_0$, and a sequence of injective analytic functions $\psi_n:V_0\to U$ with $\psi_n(z)\to z$ uniformly on $V_0$, and $f=g_n\circ\psi_n$ on $V_0$ for all sufficiently large $n$. Suppose first that $f'(z_0)\neq0$. By rescaling $f$ and $g_n$ if necessary, we can assume that $f'(z_0)=1$. Choose some $r>0$ such that the closed ball $cl(B(z_0;r))$ is contained in $U$, and $\Re(f')>1/2$ on $cl(B(z_0;r))$.
Then by uniform convergence, $\Re({g_n}')>1/2$ on $cl(B(z_0;r))$ for sufficiently large values of $n$. This implies that for $z,z'\in cl(B(z_0;r))$, $$\Re\left(\dfrac{g_n(z)-g_n(z')}{z-z'}\right)>\dfrac{1}{2},$$ so $g_n$ is injective with $|{g_n}'|\geq 1/2$ on $cl(B(z_0;r))$. It follows therefore that $g_n\left(B(z_0;r)\right)$ contains $B(g_n(z_0);r/2)$, so $g_n\left(B(z_0;r)\right)\supset B\left(f(z_0);r/3\right)$ for sufficiently large values of $n$ (again by the uniform convergence of $g_n\to f$), and there is a unique analytic inverse ${g_n}^{-1}:B\left(f(z_0);r/3\right)\to B(z_0;r)$ with $g_n\circ{g_n}^{-1}(z)=z$ (by the inverse function theorem). Choosing the open neighborhood $V_0$ of $z_0$ small enough that $f(V_0)\subset B(f(z_0);r/3)$, then defining $\psi_n:V_0\to U$ by $\psi_n={g_n}^{-1}\circ f$ satisfies the requirements. Suppose now that $f'(z_0)=0$. Subtract a constant if necessary from $f$, and the same constant from each $g_n$, to ensure that $f(z)=(z-z_0)^mh(z)$ for some $m\geq2$ and for an analytic function $h:U\to\mathbb{C}$ with $h(z_0)\neq0$. By assumption, each $g_n=(z-z_0)^mh_n(z)$ for analytic functions $h_n:U\to\mathbb{C}$ with $h_n\to h$ uniformly on $U$. Then, on a neighborhood of $z_0$, $h$ is nonzero, and so is each $h_n$ for sufficiently large $n$. Hence, we can take $m^{\text{th}}$ roots to obtain analytic functions $\widetilde{f}(z)=(z-z_0)h(z)^{1/m}$ and $\widetilde{g_n}(z)=(z-z_0)h_n(z)^{1/m}$. Moreover, provided that we take consistent $m^{\text{th}}$ roots, then $\widetilde{g_n}\to \widetilde{f}$ uniformly on the neighborhood of $z_0$. Therefore by the first case above, there exists an open neighborhood $V_0$ of $z_0$ and analytic functions $\psi_n:V_0\to U$ with $\psi_n(z)\to z$ uniformly, with $\widetilde{f}=\widetilde{g_n}\circ\psi_n$ on $V_0$. Taking $m^{\text{th}}$ powers, we have $f=g_n\circ\psi_n$ on $V_0$. By compactness of $K$ and the fact that the analytic functions $\psi_n$ exist locally as shown above, there is a finite open cover $\{B_1,\ldots,B_N\}$ of $K$ for which the $B_k$ are open balls in $U$, and sequences of analytic functions $\{\psi_{k,n}\}_{n=1}^\infty$ satisfying $g_n\circ\psi_{k,n}=f$ on $B_k$, and $\psi_{k,n}(z)\to z$ uniformly on $B_k$. However, whenever $B_k$ and $B_l$ have non-empty intersection, since $f$ is non-constant, its derivative will be non-zero at some point $z_0\in B_k\cap B_l$, and without loss of generality, suppose that $f'(z_0)=1$. Then by the uniform convergence, there is an open neighborhood $\hat{B}$ of $z_0$ on which $\Re({g_n}')\geq 1/2$ for sufficiently large $n$, so that $g_n$ is injective on $\hat{B}$. Since $g_n\circ\psi_{k,n}=f=g_n\circ\psi_{l,n}$ on $\hat{B}$, and $g_n$ is injective on $\hat{B}$, it follows that $\psi_{k,n}=\psi_{l,n}$ on $\hat{B}$ (and thus on all of $B_k\cap B_l$). Thus setting $V=\displaystyle\bigcup B_k$, we have analytic functions $\psi_n:V\to U$ (setting $\psi_n=\psi_{k,n}$ on $B_k$), with $f=g_n\circ\psi_n$ on $V$, and $\psi_n(z)\to z$ uniformly. It only remains to show that $\psi_n$ is injective on a neighborhood of $K$. Let $\widehat{B_1},\ldots,\widehat{B_t}$ be open balls covering $K$, whose closures are contained in $V$. Let $n$ be chosen large enough so that $\psi_n$ is injective on each $\widehat{B_k}$, and set $\widehat{K}=\displaystyle\bigcup\widehat{B_k}$. By compactness, there is an $\epsilon>0$ such that for each $z,w\in\widehat{K}$, if $0<|z-w|<\epsilon$, $z$ and $w$ lie in some common $\widehat{B_k}$, so that $\psi_n(z)\neq\psi_n(w)$.
Additionally, since $\psi_n(z)\to z$ uniformly on $V$, we may also require that $|\psi_n(z)-z|<\epsilon/2$ on $V$. Therefore, for any distinct $z,w\in\widehat{K}$, if $|z-w|<\epsilon$, then $\psi_n(z)\neq \psi_n(w)$, and if $|z-w|\geq\epsilon$, then $|\psi_n(z)-\psi_n(w)|>|z-w|-\epsilon\geq0$ (by the triangle inequality). Thus $\psi_n$ is injective on $\widehat{K}$, and in particular on $K$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem: Lemma 2.}] To begin, let an open, bounded set $U$ be chosen which contains $K$, and such that $f$ is analytic on the closure of $U$. By Runge's theorem, we may find a sequence of rational functions $\left\{\widehat{q_n}\right\}$ which converge uniformly to $f$ on $U$, and such that each $\widehat{q_n}$ i) is analytic on $U$ and ii) has at most one pole in each component of $U^c$. Let $z_1,z_2,\ldots,z_M\in K$ be the critical points of $f$ in $K$, with multiplicities $m_1,m_2,\ldots,m_M\geq1$. Define $N=\displaystyle\sum(m_k+1)$. By Hermite interpolation, for each $n\in\mathbb{N}$, there is a unique polynomial $r_n$ of degree at most $N-1$ such that for each $k\in\{1,2,\ldots,M\}$ and each $j\in\{0,1,\ldots,m_k\}$, $\widehat{q_n}^{(j)}(z_k)-{r_n}^{(j)}(z_k)=f^{(j)}(z_k)$. We wish to show that $q_n=\widehat{q_n}-r_n\to f$ uniformly on $U$. Since $\widehat{q_n}\to f$ uniformly on $U$, it suffices to show that $r_n\to 0$ uniformly on $U$. The coefficients of $r_n$ depend linearly on the $N$ interpolation values ${r_n}^{(j)}(z_k)=\widehat{q_n}^{(j)}(z_k)-f^{(j)}(z_k)$, and the linear map in question does not depend on $n$. Thus it suffices to show that each $\widehat{q_n}^{(j)}(z_k)-f^{(j)}(z_k)$ approaches zero as $n\to\infty$. Fix some $k\in\{1,2,\ldots,M\}$. For $j=0$, observe that since $\widehat{q_n}\to f$ uniformly on $U$, ${r_n}^{(0)}(z_k)=\widehat{q_n}(z_k)-f(z_k)\to0$. Let $\gamma_k$ be a small circle around $z_k$, on which $f$ is analytic, and which does not enclose or contain any other $z_l$. For $j\in\{1,2,\ldots,m_k\}$, we have $f^{(j)}(z_k)=0$ (as $z_k$ is a critical point of $f$ of multiplicity $m_k$), so $$\widehat{q_n}^{(j)}(z_k)=\widehat{q_n}^{(j)}(z_k)-f^{(j)}(z_k)=\dfrac{j!}{2\pi i}\displaystyle\oint_{\gamma_k}\dfrac{\widehat{q_n}(z)-f(z)}{(z-z_k)^{j+1}}dz.$$ Since $\widehat{q_n}\to f$ uniformly on $U$, this integral approaches $0$ as $n\to\infty$. \end{proof} \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Reinforcement Learning (RL) \cite{rlbook} is a powerful optimisation method used for complex problems. In RL, an agent learns to perform a task (or a set of tasks) on the basis of how it has performed in previous steps. The agent typically receives a reward for moving closer to the goal or the optimised value, and in some cases a punishment for deviating from its intended learning task. Reinforcement learning is, in many ways, inspired by the biological learning of mammals; for example, children learn a language by observing their environment, and if they are able to mimic it well, they are rewarded with some form of appreciation. Similar behaviour is also observed in animals, like dogs, which are given treats on successful completion of a task, say fetching a stick. The mammalian brain then tries to rewire itself so that it can perform the actions that lead to successful completion of tasks and yield some short- or long-term reward. A few RL algorithms are also directly inspired by neuroscience and behavioural psychology. Temporal Difference (TD) learning, for example, has been rigorously studied in the mammalian brain: the TD loss function is known to mirror the way dopamine neurons spike when a reward is given \cite{b1} \cite{b2} \cite{b3} \cite{b4} \cite{b5}. Schultz et al. \cite{b6} reported that in a monkey rewarded with juice, the dopamine level shot up when the reward was not expected, reflecting the difference between expected and actual rewards that appears in the TD loss function. Over time, this firing activity propagated back to the earliest reliable stimulus, and once the monkey was fully trained the firing activity disappeared, turning into a decline whenever the expected reward was not produced. The benefit has not been unidirectional: results from TD have also been used in the study of schizophrenia and of the effects of dopamine manipulation on learning \cite{b7}. The community has always been interested in developing more sophisticated algorithms and applying them to real-life tasks, and RL is no exception. With the practical viability of deep learning, there has been significant progress in training RL algorithms with deep networks and then applying them to solve problems with human-level accuracy. This has lately been demonstrated by the use of RL algorithms to train agents to play Atari games, where they have surpassed human accuracy on a wide array of games without significant changes in strategy \cite{atari}. Board games are not left behind in this feat: shortly after the Atari results, RL was used to solve one of the most complex board games and beat the world champion \cite{go}. Traditional approaches to board games have failed on larger boards, owing to large state spaces, complex optimal value functions that make learning through self-play infeasible, and underdeveloped algorithms with slow or skewed learning curves. The problem becomes even more complex as soon as multi-agent strategies come into play. Cooperative and independent multi-agent strategies have their own advantages and disadvantages \cite{b8}: cooperative agents have shown improvements in scores at the cost of speed, while independent agents have shown the reverse to be true.
While giving both agents access to each other's state and action spaces is an acceptable workaround, such a setting is not possible in some cases, like card games and certain kinds of board games. In this paper we demonstrate the use of popular RL algorithms in playing the Royal Game of Ur. The use of RL algorithms to solve popular board games is not new \cite{go} \cite{b17} \cite{b18} \cite{b19} \cite{b20}, but the use of RL for solving Ur has not been attempted yet. The Game of Ur is a fairly complex two-player board game with a variable number of states and pawns. A complete background on Ur is given in section \ref{sec:gour}. We have compared the performance of on-policy Monte Carlo, Q-learning and Expected Sarsa at playing the Game of Ur in an independent multi-agent setting. We compare the application of these RL algorithms to Ur in a simulator similar to OpenAI's Gym \cite{b9}. For the implementation, we create our own simulator from scratch (see section \ref{subsec:goursim}), with functions similar to those implemented in \cite{b9}. Our goal is to test the performance of these algorithms on the simulator, and for the agents to be able to achieve human-level strategies. The algorithms are not provided with any game-specific information or hand-designed features, and are not privy to the internal state of the simulator. Through the simulator, the algorithms are given only the state space and the possible actions to take, as well as the information they already have from their previous actions. \section{Background} \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{board} \caption{Board for The Royal Game of Ur, showing coordinates and safe positions.} \label{fig:board} \end{figure} \subsection{Game of Ur} \label{sec:gour} The Royal Game of Ur is a two-player strategic racing board game, believed to have first been played in Mesopotamia in the third millennium BC, although the exact origin of the game is still a matter of debate among historians. The game was long forgotten, except for its rediscovery among the Jewish population of the Indian city of Kochi, who continued playing a variant of it until the 1950s. The Game of Ur is also believed to have inspired, or transformed into, an early form of Backgammon \cite{b10}. The board consists of 20 squares: a large block of $3\times4$ squares, a small block of $3\times2$ squares, and a bridge of $1\times2$ squares connecting the two. Each player starts with seven pieces (pawns) in hand and an empty board. There are four pyramidal dice, each with two marked and two unmarked corners. The number of marked corners that show up decides the number of positions to be moved by the active player. On a score of four, the player may roll the dice again. A pawn must bear off by an exact throw. On landing on an opponent's pawn, that pawn is removed from the board and must begin its journey again. A piece sitting on a special star-marked square is safe; the opponent can neither land on it nor remove it from that position. The first four squares at the beginning of a pawn's journey are safe by default, since the opponent cannot land there. The first player to bear all seven pieces off the board wins the game \cite{b11}. \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{board_movements} \caption{Board movements during normal gameplay.
The green and red circles represent the two players, and the arrows represent the movement of their pieces.} \label{fig:board_movements} \end{figure} \subsubsection{Gameplay} The general move of the game is to advance a piece by the number of positions shown on the dice. The two players start on their respective start positions, on opposite sides of the board, generally marked as start. The rules for starting vary with the version of the game and the place where it is played. In some variants any number that shows up on the dice allows a player to start the game, while in others a particular number is required to start with a particular piece. In yet another variant, the particular number on the dice merely brings the piece onto the board, and a second dice roll decides the number of positions to be moved, whereas some variants do not allow the second roll. The movement of pieces is divided into war zones and safe zones. In a safe zone, the player runs no risk of a piece being eliminated by the opponent, since the opponent cannot enter those positions (marked as rows a and c in figure \ref{fig:board}); in the war zone (marked as row b in figure \ref{fig:board}), a player's piece may be eliminated by the opponent, depending on the position of the piece and the number that shows up on the dice during the opponent's turn. The positions on the board marked by a star are called safe positions, where the opponent cannot eliminate the other player's piece. If a player's piece sits on a safe position and the opponent's piece lands on it, the opponent has to move to the next position. Only a single piece can occupy a position at any given time: if the colliding piece belongs to the opponent, it is eliminated, while if it belongs to the current player, the move is not allowed. \subsection{Algorithms} Since we use derivatives of Temporal Difference (TD) algorithms, it is important to first give some background on TD learning \cite{b14}. \subsubsection{The Temporal Difference (TD) Algorithm} TD learning addresses the prediction problem of reinforcement learning by defining an error signal $\delta_t$ for the value $V(s)$ of the current state $s$: $$\delta_t = r(s')+\gamma V(s') - V(s),$$ where $s'$ denotes the successor state. The error signal vanishes if $V(s)$ reaches the target $r(s')+\gamma V(s')$. The algorithm waits for the next state $s'$ and looks whether, for that state, the reward or the value function is known; if so, it changes $V(s)$ in that direction. The parameter $\gamma$ (typically $0.9$) is called the discount factor. It takes into account that a state further away in the future, say one with $V(s') = 1$, might be a possible successor of state $s$ in the course of the game, but it is not guaranteed that the game path actually leads from $s$ to $s'$. Thus, the game states more distant in the future are \textit{discounted} in their effect on $V(s)$. This clarifies the name TD: for most states the reward is zero, and the error signal in most cases is the temporal difference of the value function.
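As a concrete illustration of this update (a minimal sketch of our own, with assumed names and an assumed learning rate; it is not code from our simulator), a tabular TD(0) step can be written as follows:
\begin{verbatim}
from collections import defaultdict

GAMMA = 0.9            # discount factor, as above
ALPHA = 0.1            # learning rate (assumed for the example)

V = defaultdict(float) # tabular value function, V[s] = 0 initially

def td0_update(s, r, s_next):
    """Move V[s] towards the target r + GAMMA * V[s_next]."""
    delta = r + GAMMA * V[s_next] - V[s]   # the error signal delta_t
    V[s] += ALPHA * delta
    return delta
\end{verbatim}
The update is applied online, immediately after every transition, which is exactly what distinguishes TD methods from the episodic Monte Carlo updates discussed later.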
\subsubsection{Q-learning} Q-learning \cite{b13} is a popular learning algorithm that can be applied to most sequential tasks to learn the state-action value. It is an off-policy learning algorithm: the policy being learnt is not the one executed during training; instead, an exploratory behaviour policy is used, while the target action is the one predicted by the learnt policy. This gives the following update rule, applied iteratively at each time step $t$: $$Q(s,a) \leftarrow Q(s,a) + \alpha \bigl(r+\gamma Q(s',a') - Q(s,a)\bigr),$$ where $a$ is the action that the behaviour policy takes in the state $s$, and $a'=\arg\max_{b} Q(s',b)$ is the action that our current learnt (greedy) policy predicts in the successor state $s'$. The above update rule can be derived from the Bellman optimality equation for MDPs. \subsubsection{Expected Sarsa} Expected Sarsa improves on the on-policy nature of Sarsa. Since the update rule of Sarsa depends on the next action $a'$, which always carries some degree of randomness, Sarsa cannot converge unless the learning rate is reduced $(\alpha \rightarrow 0)$ or exploration is annealed $(\epsilon \rightarrow 0)$. Expected Sarsa changes this with an update rule that takes the expected action-value instead of the action-value of $s', a'$: $$Q_{t+1}(s,a) = Q_t(s,a)+\alpha \Bigl(r+\gamma \sum_{a'} \pi (a' | s') Q_t(s', a') - Q_t(s,a)\Bigr).$$ Since the update rule is now independent of the next action actually taken, depending instead on the expected action-value, Expected Sarsa can indeed converge \cite{b16}. In the case of a greedy policy $\pi$, Expected Sarsa is the same as Q-learning.
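Both updates, together with the $\epsilon$-greedy behaviour policy used in our experiments, fit in a few lines of tabular code. The sketch below is our own illustration: the table \texttt{Q}, the constants and the function signatures are assumptions for the example, not our simulator's interface:
\begin{verbatim}
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # epsilon = 0.1 as in our experiments

Q = defaultdict(float)                 # Q[(state, action)] = 0 initially

def epsilon_greedy(s, actions):
    """Behaviour policy: explore with probability EPSILON, else act greedily."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def q_learning_update(s, a, r, s2, actions2):
    """Off-policy target: value of the greedy action in the successor state."""
    best = max((Q[(s2, b)] for b in actions2), default=0.0)
    Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])

def expected_sarsa_update(s, a, r, s2, actions2):
    """On-policy target: expectation of Q under the epsilon-greedy policy."""
    exp_q = 0.0
    if actions2:
        greedy = max(actions2, key=lambda b: Q[(s2, b)])
        for b in actions2:
            pi = EPSILON / len(actions2) + ((1 - EPSILON) if b == greedy else 0.0)
            exp_q += pi * Q[(s2, b)]
    Q[(s, a)] += ALPHA * (r + GAMMA * exp_q - Q[(s, a)])
\end{verbatim}
Setting \texttt{EPSILON = 0} collapses the expectation onto the greedy action, making the two updates coincide, as noted above.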
\section{Experimental Setup} \subsection{The Game of Ur Simulator} \label{subsec:goursim} For the purpose of this paper, we implemented our own simulator for the Game of Ur, which serves as a testbed for our algorithms and their comparison. The simulator is open source and available at \url{https://github.com/sidharth0094/game-of-ur-reinforcement-learning}. The theoretical details of the simulator are given below. \subsection{Game representation as MDP} \subsubsection{State representation} Each square of the board is given a coordinate, e.g.\ $(a,1)$, $(a, 2)$, etc.\ (as shown in Figure \ref{fig:board}). Each state is represented by a tuple $(((num\_p_1),(p_1\cdots n)), ((num\_p_2),(p_2\cdots n)), dice): position\_piece\_to\_move$, where $num\_p_1$ represents the number of pieces of player $p_1$ that are yet to start, $p_1\cdots n$ represents the coordinates of the pieces of player $p_1$ that are on the board ($n$ being the total number of pieces of a player), $dice$ represents the number that shows up on the dice, and $position\_piece\_to\_move$ represents the position of the piece of the current player to be moved by the number of steps that shows up on the dice; likewise for player $p_2$. An example state for a game with $4$ pieces and player $1$'s turn would be $((2, ((a, 3), (a, 4))), (3, ((c, 3))), 1): 4$. Here $2$ indicates that player $1$ has two pieces yet to start, followed by two tuples with the positions of its pieces already on the board; the following $3$ is the number of pieces of player $2$ yet to start, followed by the position of its piece on the board; $1$ is the number that shows up on the dice; and $4$ is the position of the piece of player $1$ to be moved, which in this case is the one currently at $(a, 4)$. \subsubsection{Rewards} \label{subsec:rewards} The reward function for our MDP representation works in the following way. The agent is given a reward of $+10$ when it successfully eliminates an opponent's piece, since the opponent has to start over with that piece and the path clears for the current player. The safe positions in the safe zones have no reward associated with them, since the agent is encouraged to be in the war zone, while landing on the single safe position inside the war zone has a reward of $+20$ associated with it: the piece there is safe, and it is a strategic position from which to capture opponent pieces that cross it, so the agent might want to keep its piece there as long as possible. We also associate a reward of $-1$ with any step inside the war zone other than the one to the safe position described above, as the piece is vulnerable to attack by the opponent and should move out of the war zone as soon as possible. Winning has a reward of $+100$ associated with it, while losing has a reward of $0$. All other positions on the board, namely those in the safe zones, have a reward of $0$ as well. Initially we had set the reward to $+1$ for winning and $0$ for losing, but we had to change to this finer-grained function because of problems during learning. In the absence of such a function, the agent needs a very large number of episodes to learn efficiently, which results in more compute time; moreover, the agent is unable to learn the strategic moves well enough, given that strategy applies only to a few parts of the game, while the rest is independent of how the agent plays. We found that this approach of breaking the reward function into multiple smaller sub-tasks helps the agent learn faster and saves compute time. \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{board_actions} \caption{Encoding of board coordinates, used to represent board positions when deciding which piece to move forward.} \label{fig:board_actions} \end{figure} \subsubsection{Actions} \label{subsec:actions} All board positions are assigned an integer from 1 through 24, as shown in Figure \ref{fig:board_actions}, in order to identify the piece that is to move forward. Using board positions to decide which piece to move forward is better than using individual piece IDs, as it allows the agent to train faster by merging states with the same coordinates but different pieces into a single state, instead of treating them as different states. The agent has the option either to move forward or to make a null move, which happens when there is no legal move. There is no legal move when all target positions are already occupied by the current player's pieces, or when the number of steps required for a particular piece to finish the game is less than the number that shows up on the dice.
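The shaped rewards and the null-move rule translate directly into code. The following sketch is again our own illustration: the event tags and constants are assumptions about how a simulator could report the outcome of a move, not the actual interface of our implementation:
\begin{verbatim}
# assumed reward constants, following the Rewards subsection
R_WIN, R_LOSE = 100, 0
R_CAPTURE     = 10    # eliminated an opponent piece
R_WAR_SAFE    = 20    # landed on the starred square inside the war zone
R_WAR_STEP    = -1    # any other step inside the war zone
R_DEFAULT     = 0     # safe-zone squares and the null move

def reward(event):
    """Reward for one transition; `event` is a hypothetical tag for the move
    just played: 'win', 'lose', 'capture', 'war_safe' or 'war_step'."""
    table = {'win': R_WIN, 'lose': R_LOSE, 'capture': R_CAPTURE,
             'war_safe': R_WAR_SAFE, 'war_step': R_WAR_STEP}
    return table.get(event, R_DEFAULT)
\end{verbatim}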
\begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{strategy_move} \caption{Strategic move when each player has $4$ pieces. Here, the player represented by the green piece has to decide between eliminating the red piece at $(b, 1)$ and moving to the safe position in the war zone. The decision making gets more complex in a full $7$-piece game, where green also has to avoid being eliminated by red.} \label{fig:strategy_move} \end{figure} \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{war_zone_safe_move} \caption{Safe move inside the war zone. It is green's turn to move; with a dice roll of $4$ it moves to $(b, 5)$. This move takes the piece closer to the goal state and towards the end of the war zone, as opposed to starting a new piece. Other pieces on the board are not shown in the figure.} \label{fig:war_zone_safe_move} \end{figure} \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{war_zone_unsafe_move} \caption{Unsafe move inside the war zone. It is green's turn to move; with a dice roll of $3$ it moves to $(b, 3)$. Here, red is vulnerable to elimination by green. Other pieces on the board are not shown in the figure.} \label{fig:war_zone_unsafe_move} \end{figure} \section{Experiments} We formulate the problem as a multi-agent MDP, in which both agents compete against each other to learn \cite{b12}. The reward function works as defined in (\ref{subsec:rewards}). For our experiments, we consider only two dice, since every board position is reachable with this configuration of dice. We consider 4 pieces per player in order to train the agents quickly with the limited computational resources at hand; another reason is that this is the minimum number of pieces required for testing a popular strategic move, shown in figure \ref{fig:strategy_move}. Our action space consists of forward and null moves, as described in (\ref{subsec:actions}). We used an \textit{$\epsilon$-greedy} approach with an epsilon value of $0.1$, meaning that the agent explores with probability $0.1$ and takes greedy actions with probability $0.9$. We trained our agents using Q-learning, Expected Sarsa, and on-policy Monte Carlo, since we considered it interesting to compare the performance of an agent trained with an episodic learning algorithm, on-policy Monte Carlo, against the popular TD-learning-based algorithms, tabular Q-learning and Expected Sarsa. Our agents are trained on $100$K episodes for each algorithm, and then tested on 100 gameplays against an agent following a stochastic policy with equiprobable actions. Since the state space of our environment is very large, we keep track of how an agent learns by recording the change in the value function of the state $((3, ((a, 3),)), (3, ((c, 3),)), 1)$, given that this state occurs frequently during the training of all agents. We also keep track of the number of time steps each agent needs to win during training, to observe whether the agent is learning the strategic moves that let it finish early. In addition, we tested a special board position with 4 pieces (displayed in figure \ref{fig:strategy_move}), in which we test which piece the agent decides to move, based on the strategy it should have learnt. We show both safe and unsafe moves for the green piece entering the war zone in figures \ref{fig:war_zone_safe_move} and \ref{fig:war_zone_unsafe_move}. \section{Results} \begin{table}[h] \begin{tabular}{|l|l|l|} \hline \hline \textbf{Algorithms} & \textbf{Games won} & \textbf{Games lost} \\ \hline \hline \textbf{Q-learning} & 60 & 40 \\ \hline \textbf{Monte Carlo} & 55 & 45 \\ \hline \textbf{Expected Sarsa} & 54 & 46 \\ \hline \end{tabular} \caption{Results of the agents trained with all three methods, each tested on $100$ gameplays.} \label{tab:results} \end{table} We show the results of testing the 3 separate agents trained using Monte Carlo, Q-learning and Expected Sarsa on $100$ gameplays, summarized in Table \ref{tab:results}. Q-learning wins $60$ out of $100$ games, while Monte Carlo and Expected Sarsa win $55$ and $54$ games respectively.
This result is not completely random: it demonstrates an agent learning to play using popular strategic moves, as described below and shown in the plots. \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{q_learning_vs_mc_finish_times_vs_ex_sarsa} \caption{Time to finish for agents trained using the three methods. Q-learning, Monte Carlo and Expected Sarsa are represented by the green, red and blue curves respectively. Note that Expected Sarsa demonstrates faster learning.} \label{fig:finish_times} \end{figure} We demonstrate the learning of our agents using the time-to-finish metric, as shown in figure \ref{fig:finish_times}. We observe that for all $3$ agents the time to finish decreases over the $100$K episodes. The sharpest decrease is shown by Expected Sarsa, while Q-learning and Monte Carlo show similar, competing curves. The curves do fluctuate, but the trend moves towards stabilization. \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{value_function_mc} \caption{Value function of the agent trained using Monte Carlo on the state $((3, ((a, 3),)), (3, ((c, 3),)), 1)$. Note the sharp increase and the fluctuating behaviour.} \label{fig:value_function_mc} \end{figure} \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{value_function_q_learning} \caption{Value function of the agent trained using Q-learning on the state $((3, ((a, 3),)), (3, ((c, 3),)), 1)$.} \label{fig:value_function_q_learning} \end{figure} \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{value_function_ex_sarsa} \caption{Value function of the agent trained using Expected Sarsa on the state $((3, ((a, 3),)), (3, ((c, 3),)), 1)$.} \label{fig:value_function_ex_sarsa} \end{figure} \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{strategy_move2} \caption{Strategic move learnt by the agent to save a piece from elimination, taking it from $(b, 8)$ to $(a, 8)$ and out of reach of the opponent's piece at $(b, 5)$.} \label{fig:strategy_move2} \end{figure} The value functions for Monte Carlo, Q-learning and Expected Sarsa on the state $((3, ((a, 3),)), (3, ((c, 3),)), 1)$ are shown in Figures \ref{fig:value_function_mc}, \ref{fig:value_function_q_learning} and \ref{fig:value_function_ex_sarsa}. The value function of this state increases for all $3$ agents. In the case of Monte Carlo it shows a sharp increase followed by a trend towards stabilization, while the plots for Q-learning and Expected Sarsa are much smoother. One should not be misled into concluding that one agent performs better than another; the plots merely show that there is a difference in the way they learn. We also demonstrate the strategic move that our agents learn, shown in figure \ref{fig:strategy_move2}: the piece on coordinate $(b, 8)$ is moved to $(a, 8)$. This is an important move for gameplay at the intersection of the war and safe zones. \section{Discussions} Testing our agents trained with the different methods shows promising results. The outcomes were not always smooth, and in many cases we cannot conclude with certainty why an agent behaves in a given way. We attribute this to the limited computational resources available for training agents on such a complex and large state space. We believe that our agents could perform much better when trained with more episodes and better computational resources.
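The contrast drawn below between bootstrapped TD updates and full-episode Monte Carlo returns can be made concrete with one more sketch (ours, under the same assumed names as before; a first-visit, constant-$\alpha$ variant is shown):
\begin{verbatim}
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9
Q = defaultdict(float)

def monte_carlo_update(episode):
    """Update Q only after the episode ends, from true sampled returns.
    episode = [(state, action, reward), ...] in the order played."""
    g = 0.0
    first_visit_return = {}
    for s, a, r in reversed(episode):
        g = r + GAMMA * g                 # return that followed (s, a)
        first_visit_return[(s, a)] = g    # overwriting keeps the first visit
    for sa, g in first_visit_return.items():
        Q[sa] += ALPHA * (g - Q[sa])      # no bootstrapped successor estimate
\end{verbatim}
Because no current estimate of a successor value enters the target, this update is free of the initialisation bias discussed below, at the price of waiting for the episode to end and of higher variance.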
\begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{q_learning_vs_mc_vs_ex_sarsa_values} \caption{Combined value functions of the agents trained using all three methods on the state $((3, ((a, 3),)), (3, ((c, 3),)), 1)$. Q-learning, Monte Carlo and Expected Sarsa are shown by the green, red and blue curves respectively.} \label{fig:q_learning_vs_mc_vs_ex_sarsa_values} \end{figure} We chose to show the value function of the state $((3, ((a, 3),)), (3, ((c, 3),)), 1)$ because it is a key state and occurs very frequently (a comparison of all three methods together is shown in figure \ref{fig:q_learning_vs_mc_vs_ex_sarsa_values}). The disparity in smoothness, and the difference in the values of the value function for this state, can be attributed to the fact that Monte Carlo needs a full episode to learn, while TD methods do not. The step updates of TD methods are biased by the initial values of the learnt parameters: the bootstrapping process updates a function or lookup table $Q(s,a)$ towards a successor value $Q(s',a')$ using whatever the current estimates of the latter are, and at the very start of learning these estimates contain no information from any real rewards or state transitions. If the agent is learning as it should, this bias reduces asymptotically over many iterations, but it is known to cause problems, especially for off-policy methods. Monte Carlo methods, on the other hand, do not suffer from this bias, as each update is made after the entire episode, using a true sample of $Q(s, a)$. However, Monte Carlo methods can suffer from high variance, which means more samples are required to achieve the same degree of learning compared to TD methods. A middle ground between these two problems could be achieved by using TD($\lambda$). Our agent learns to move the piece at $(b, 8)$, inside the war zone, to coordinate $(a, 8)$ inside the safe zone. We believe this is an important strategic move to learn, since the piece at $(b, 8)$ will reach the end position in two steps, while if the opponent's piece at position $(b, 5)$ eliminated it, it would have to restart. By moving the piece to $(a, 8)$, the agent not only moved it closer to the winning state but also saved it from elimination. \section{Conclusion} In this report, we compared the performance of $3$ agents, trained using entirely different methods, namely Monte Carlo, Q-learning and Expected Sarsa, at playing the ancient strategic board game, the Royal Game of Ur. The state space of the game is complex and large, but our agents show promising results at playing the game and learning important strategic moves. Although it is hard to conclude, when training with limited resources, which algorithm performs better overall, Expected Sarsa showed promising results with the fastest learning. In the future, we plan to run the given algorithms and their variants for more than $1$ million episodes, as is common in the community when testing on board games, since we speculate that this would allow our agents to experience more states and therefore learn better policies. We also plan to train our agents using sophisticated deep RL methods like DQN and Double DQN, to see whether they show significant differences in performance. \bibliographystyle{apalike}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Suppose $(N,J,g_N)$ is a Calabi-Yau manifold of real dimension $n=2m$. A smooth immersion $F:M\to N$ is called \textit{Lagrangian} if $m=\dim M$ and $F^*\omega_N=0$, where $\omega_N$ denotes the K\"ahler form on $N$. Let $dz$ denote the complex volume form on $N$. A Lagrangian immersion is called \textit{special Lagrangian} if the Lagrangian is calibrated with respect to the real part of the complex volume form, i.e.\ if $F^*dz$ coincides with the induced volume form $d\mu$ on $M$. It is well known that for general Lagrangian immersions $F:M\to N$ one has $$F^*dz=e^{i\phi}d\mu,$$ with a multi-valued phase function $\phi$. Thus the phase $\phi$ vanishes for special Lagrangians. Since special Lagrangians are calibrated, they are volume minimizing in their homology class; in particular, the mean curvature vector field $H$ vanishes. In fact, the phase $\phi$ is related to the mean curvature vector field $H$ by $d\phi=\Theta_H$, where $\Theta_H(\cdot)=\langle H,J\cdot\rangle$ denotes the mean curvature $1$-form on $M$. It is well known that $d\Theta_H=0$ and that the cohomology class of $\Theta_H$ coincides up to a multiple of $\pi$ with the first Maslov class $m_1$ of $M$. Any locally defined potential $\alpha$ for $\Theta_H$ is called a Lagrangian angle and must coincide up to a constant with the phase function $\phi$. Conversely, if $M$ is minimal Lagrangian, then $\phi=\phi_0$ for some constant $\phi_0$. Thus minimal Lagrangians are also calibrated (special) with respect to the complex volume form $e^{-i\phi_0}dz$. A Lagrangian immersion is called \textit{almost calibrated} if $\cos(\phi-\phi_0)>0$ for some choice of constant phase shift $\phi_0$. Therefore a Lagrangian is almost calibrated if and only if $\Theta_H=d\alpha$ with some globally defined Lagrangian angle $\alpha$ with $\cos\alpha>0$. Since the Lagrangian angle satisfies the heat equation, it follows immediately from the maximum principle that the condition of being almost calibrated is preserved during the flow. One of the most important open questions in the (Lagrangian) mean curvature flow is the classification of possible singularities. In general these fall into two categories, type-I and type-II singularities. The first category describes those finite time singularities for which $$\limsup_{t\to T}\bigl(|A|^2(T-t)\bigr)$$ is bounded, where $T$ denotes the singular time and $|A|$ denotes the norm of the second fundamental form. This category is relatively well understood, and it follows from the monotonicity formula of Huisken \cite{huisken} that a rescaled subsequence converges to a self-similarly shrinking solution. In the compact case, one easily observes from the elliptic equation induced for the Lagrangian angle (see \cite{sm habil}) that there do not exist any compact self-shrinkers with trivial Maslov class (so in particular the first Betti number of a compact self-shrinker must be positive). This was later extended by Neves in \cite{neves} to the case of forming type-I singularities (even non-compact) with zero Maslov class, and he proved that they never occur. Therefore in the Lagrangian mean curvature flow with zero Maslov class, only type-II singularities need to be studied. This applies in particular to the case of almost calibrated Lagrangians, which was first observed by Wang \cite{wang}. Nevertheless, it is a hard and in general unsolved question what these singularities are.
From the general theory for type-II singularities it follows that the tangent flow of such singularities will be an eternal Lagrangian mean curvature flow in $\complex{m}$ with uniformly bounded second fundamental form. Possible candidates are translating solitons, special Lagrangians and products of these types. Besides these there might exist various other types of eternal solutions. A very interesting class of translating solitons was found by Joyce, Lee and Tsui in \cite{joyce}; in particular, there exist non-trivial translating solitons with arbitrarily small oscillation of their Lagrangian angle. Recently Kunikawa \cite{kunikawa} proved that there do not exist non-flat complete Lagrangian eternal solutions with nonnegative Ricci curvature to the almost calibrated Lagrangian mean curvature flow in $\complex{m}$ with $\cos\phi\ge\epsilon>0$ for some constant $\epsilon$. To understand how type-II singularities form, one needs to take a closer look at the formation itself. Hopefully, one can then exclude certain types of eternal solutions in the corresponding tangent flow. E.g.\ in \cite{nevestian}, Neves and Tian classified two-dimensional translating solitons under various conditions, one of which is a control on the volume growth. We believe that the following theorem about the \textit{local non-collapsing of volume} under the Lagrangian mean curvature flow will be useful in understanding many aspects related to the volume in more detail, since it gives an optimal control on the time-dependent measure of a measurable set under the Lagrangian mean curvature flow of almost calibrated submanifolds in a Calabi-Yau manifold. \begin{thma}\label{thm a} Let $(N,J,g_N)$ be a Calabi-Yau manifold, suppose $F_0:M\to N$ is a compact almost calibrated Lagrangian immersion, and let $\alpha$ denote a choice for the Lagrangian angle with $\cos\alpha>0$. If $F:M\times[0,T)\to N$ is a smooth solution of the reparametrized Lagrangian mean curvature flow \begin{equation}\label{eq rlmcf} \frac{d}{dt}\, F=H-\tan\alpha\cdot JH,\quad F(\cdot,0)=F_0,\tag{\text{$\ast$}} \end{equation} then for each measurable set $\Omega\subset M$ and any time $t\in[0,T)$ we have \begin{equation} \frac{1}{\epsilon}\cdot \operatorname{vol}_0(\Omega)\ge \operatorname{vol}_t(\Omega)\ge \epsilon\cdot\operatorname{vol}_0(\Omega), \end{equation} where $\epsilon$ is the uniform constant given by $$\epsilon:=\min_{M\times\{0\}}\cos\alpha>0$$ and $\operatorname{vol}_t$ denotes the induced measure on $M$ at time $t$. \end{thma} \begin{remark}~\\[-30pt] \begin{enumerate}[(i)] \item The flow described in \eqref{eq rlmcf} differs from the standard Lagrangian mean curvature flow only by a tangential variation, i.e.\ there exists a time-dependent smooth family of diffeomorphisms $\phi_t:M\to M$ such that $\tilde F(x,t):=F(\phi_t(x),t)$ evolves by the usual mean curvature flow. In particular the flow describes the same evolving submanifolds in $N$. \medskip \item Since the first Maslov class is trivial on an almost calibrated Lagrangian, almost calibrated Lagrangian submanifolds cannot develop singularities of type-I. Hence all possible finite time singularities are of type-II and the tangent flow of such singularities gives eternal solutions of the Lagrangian mean curvature flow in $\complex{m}$ with bounded second fundamental form (the tangent flow of the reparametrized flow in Theorem A will also be the same reparametrized Lagrangian mean curvature flow in $\complex{m}$).
Moreover these eternal solutions must be almost calibrated as well (that $\cos\alpha>0$ follows from $\cos\alpha\ge 0$, the real analyticity of the submanifolds and the strong elliptic maximum principle). \item As long as $\cos\alpha\ge\epsilon>0$ during the flow, one can drop the compactness assumption in Theorem A; in such situations it holds equally well in the complete case. \end{enumerate} \end{remark} \begin{example} The \textit{grim reaper} $\Gamma\subset\complex{}$ given by the graph of the function $\textsl{u}:(-\pi/2,\pi/2)\to\real{}$, $$\textsl{u}(\textsl{x})=\log\frac{1}{\cos \textsl{x}},$$ is a translating Lagrangian soliton, translating with constant speed $1$ in direction of $V:=\partial/\partial\textsl{u}$. A short computation shows $d\textsl{x}=\Theta_H$ and $V=H+\nabla \textsl{u}$. So in particular $\alpha:=\textsl{x}$ is a Lagrangian angle and $$\textsl{u}=\log\frac{1}{\cos\textsl{x}},$$ or equivalently $$e^\textsl{u}\cos\textsl{x}=1.$$ Since $e^\textsl{u}\cos\textsl{x}$ is constant, we get $d\textsl{u}=\tan\textsl{x}\cdot \Theta_H$, so that with $\alpha=\textsl{x}$ $$V=H+\nabla \textsl{u}=H-\tan\alpha\cdot JH.$$ The same holds for the product $\Gamma\times \Sigma$ of $\Gamma$ with a minimal Lagrangian submanifold $\Sigma\subset\complex{m-1}$. Thus these translating solitons evolve by the reparametrized Lagrangian mean curvature flow given in Theorem A. \end{example} A special case of the above example is the product of the grim reaper $\Gamma$ with a flat Lagrangian subspace $\Sigma\subset\complex{m-1}$. This translator usually appears as the blow-up model of the type-II singularities forming in the evolution of immersed Lagrangian spheres; e.g.\ this is the case for a large class of equivariant spheres containing the Whitney spheres (see \cite{savassmoczyk}). Since the reparametrized Lagrangian mean curvature flow naturally appears for some translating solitons and seems to favor them, we want to understand this in more detail. Without loss of generality we may assume that the origin is contained in $M$. The next theorem gives a classification of these translators. \begin{thmb} Let $M\subset\complex{m}$ be a complete translating soliton, $0\in M$, translating in direction of a unit vector $V\in\complex{m}$ and let $\textsl{x}, \textsl{u}$ be the two coordinate functions on $M$ induced by the plane $-JV\wedge V$, i.e.\ $\textsl{u}(p):=\langle V,p\rangle $ and $\textsl{x}(p):=\langle -JV,p\rangle$ for any $p\in M$. Then $\alpha:=\textsl{x}$ is a Lagrangian angle. Moreover, the following statements are equivalent. \begin{enumerate}[\rm (a)] \item The translator evolves according to \eqref{eq rlmcf}. \item The function $e^\textsl{u}\cos\alpha$ admits a local extremum. \item $e^\textsl{u}\cos\alpha$ is constant. \item $M=\Gamma\times\Sigma$, where $\Sigma\subset\complex{m-1}$ is a minimal Lagrangian submanifold and $\Gamma$ is the grim reaper given by the function $\textsl{u}(\textsl{x})=-\log\cos\textsl{x}$. \end{enumerate} \end{thmb} All products $M=\Gamma\times\Sigma$ of the grim reaper with a minimal Lagrangian submanifold $\Sigma$ have in common that $\cos\alpha>0$ and $\inf_M\cos\alpha=0$; indeed, $e^{\textsl{u}}\cos\alpha=1$ on these products, so $\cos\alpha=e^{-\textsl{u}}\to0$ along the two ends of the grim reaper, where $\textsl{u}\to\infty$. This implies in particular that these translating solitons \textit{do not occur} as a blow-up of a type-II singularity on a compact almost calibrated Lagrangian, since for such compact Lagrangians we have a uniform lower bound $\cos\alpha\ge\epsilon>0$ for all $t\in[0,T)$.
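As an aside, the grim reaper computations above are easy to check symbolically. For a graph $y=\textsl{u}(\textsl{x})$ in the plane, the translator equation $H=V^\perp$ with $V=\partial/\partial\textsl{u}$ reduces to $\textsl{u}''=1+(\textsl{u}')^2$; the following small script (an illustration only, not part of the argument) verifies this equation and the identity $e^{\textsl{u}}\cos\textsl{x}=1$ for the profile $\textsl{u}=-\log\cos\textsl{x}$:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
u = -sp.log(sp.cos(x))        # grim reaper profile on (-pi/2, pi/2)

# one-dimensional translator equation: u'' = 1 + (u')^2
print(sp.simplify(sp.diff(u, x, 2) - (1 + sp.diff(u, x)**2)))  # prints 0

# the identity exp(u) * cos(x) = 1 used above
print(sp.simplify(sp.exp(u) * sp.cos(x)))                      # prints 1
\end{verbatim}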
On the other hand this argument does not exclude translating solitons of the form $M=\ell\times\Sigma$, where $\ell$ is a straight line in direction of $V$. For those solitons, however, we observe that the coordinate function $\textsl{u}$ is unbounded from below, in contrast to those given by $\Gamma\times\Sigma$. That this is not a coincidence will be shown in the next theorem. \begin{thmc} Let $M\subset\complex{m}$ be a complete translating soliton, $0\in M$, translating in direction of a unit vector $V\in\complex{m}$ and with bounded second fundamental form. Let the functions $\alpha:=\textsl{x}$ and $\textsl{u}$ be defined as in Theorem B and suppose that $\cos\alpha\ge\epsilon>0$ for some constant $\epsilon$. Then the coordinate function $\textsl{u}$ is unbounded from above and below. \end{thmc} Note that it was shown by Joyce, Lee and Tsui in \cite{joyce} that for any $\epsilon\in(0,1)$ there exist non-trivial translating Lagrangian solitons which satisfy $\cos\alpha\ge \epsilon$. In Theorem C we impose the boundedness of the second fundamental form to guarantee that the Omori-Yau maximum principle is applicable; hence this condition can be relaxed as long as the Omori-Yau maximum principle still holds. On the other hand, the assumption on the boundedness of the second fundamental form is quite natural, because it will be valid for any parabolic blow-up of a type-II singularity of the mean curvature flow. \section{Basic notations}% Let $(N,J, g_N)$ be a Calabi-Yau manifold and suppose $F:M\to N$ is a Lagrangian immersion. The differential $dF$ will be considered as a $1$-form with values in the pull-back bundle $F^*TN$, i.e. $$dF\in\Omega^1(M,F^*TN).$$ Composing $J$ with $dF$ we obtain another $1$-form $$\nu=JdF\in\Omega^1(M,F^*TN).$$ From the Lagrangian condition we deduce $$\nu\in\Omega^1(M,T^\perp M),$$ where $T^\perp M$ is the normal bundle of $M$ with respect to the immersion. The first fundamental form $g$ on $M$ is the metric induced by $F$, i.e. $$g=F^*g_N.$$ By definition, the second fundamental tensor $A$ of $F$ is $$A=\nabla dF,$$ where we will use $\nabla$ to denote any canonical connection induced by the Levi-Civita connections on $TM$ resp.\ $TN$. Since it is well known that $A$ is normal, i.e. $A\in\Gamma(T^\perp M\otimes T^*M\otimes T^*M)$, the Lagrangian condition implies that the tri-linear form $$h(u,v,w)=\langle A(u,v),Jw\rangle$$ is fully symmetric. In the sequel, let $e_1,\dots, e_m$ be a local orthonormal frame for the tangent bundle $TM$. From the Lagrangian condition we get that $\nu_1,\dots,\nu_m$ with $\nu_k:=\nu(e_k)$ forms a local orthonormal frame for the normal bundle $T^\perp M$. Taking covariant derivatives of $dF$ resp.\ $\nu$ one obtains for any $u,w\in TM$ the equations \begin{eqnarray} (\nabla_{u}dF)(w)&=&\sum_{k=1}^mh(u,w,e_k)\nu_k,\label{struc 1}\\ (\nabla_u\nu)(w)&=&-\sum_{k=1}^mh(u,w,e_k)dF(e_k).\label{struc 2} \end{eqnarray} \section{Variations of Lagrangian immersions} Suppose now that for some $T>0$ we have a time dependent smooth map $$F:M\times[0,T)\to N$$ such that each map $$F_t:M\to N,\qquad F_t(p):=F(p,t)$$ is a smooth Lagrangian immersion into $N$. Let $\frac{dF}{dt}$ be the velocity vector field along $M$ considered as a section in the pull-back bundle $F^*TN$ over $M$. We can thus define two one-forms $\eta,\tau\in\Omega^1(M)$ by $$\eta(w):=\left\langle \frac{dF}{dt},\nu(w)\right\rangle,\qquad\tau(w):=\left\langle \frac{dF}{dt},dF(w)\right\rangle,$$ where $w\in TM$ is arbitrary.
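Note that, since $\{dF(e_k)\}_{k=1}^m$ and $\{\nu_k\}_{k=1}^m$ frame the tangential and normal parts of $F^*TN$ along $M$, the velocity field decomposes as $$\frac{dF}{dt}=\sum_{k=1}^m\eta(e_k)\,\nu_k+\sum_{k=1}^m\tau(e_k)\,dF(e_k),$$ i.e.\ $\eta$ and $\tau$ encode the normal and tangential components of the variation. This decomposition will be used repeatedly in the computations below.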
Let us first define the bilinear forms $K, L$ by $$K(u,w):=(\nabla_u\eta)(w)+\sum_{k=1}^m\tau(e_k)h(u,w,e_k),$$ $$L(u,w):=(\nabla_u\tau)(w)-\sum_{k=1}^m\eta(e_k)h(u,w,e_k).$$ Next we compute the evolution of the 1-form $dF$ under the flow. We get \begin{eqnarray} \bigl(\nabla_\frac{d}{dt}\, dF\bigr)(w) &=&\nabla_w\left(\frac{dF}{dt}\right)\nonumber\\ &=&\nabla_w\left(\sum_{k=1}^m\eta(e_k)\nu(e_k)+\sum_{k=1}^m\tau(e_k)dF(e_k)\right)\nonumber\\ &=&\operatorname{trace}\bigl(\nabla_w(\eta\otimes\nu+\tau\otimes dF)\bigr)\nonumber\\ &=&\sum_{k=1}^m\Bigl((\nabla_w\eta)(e_k)+\sum_{l=1}^mh(w,e_k,e_l)\tau(e_l)\Bigr)\nu_k\nonumber\\ &&+\sum_{k=1}^m\Bigl((\nabla_w\tau)(e_k)-\sum_{l=1}^mh(w,e_k,e_l)\eta(e_l)\Bigr)dF(e_k)\nonumber\\ &=&\sum_{k=1}^mK(w,e_k)\nu_k+\sum_{k=1}^mL(w,e_k)dF(e_k).\label{evol dF} \end{eqnarray} From this we can derive the evolution equation for the metric: \begin{eqnarray} \Bigl(\frac{d}{dt}\, g\Bigr)(u,w) &=&\Bigl\langle \bigl(\nabla_\frac{d}{dt}\, dF\bigr)(u),dF(w)\Bigr\rangle+\Bigl\langle \bigl(\nabla_\frac{d}{dt}\, dF\bigr)(w),dF(u)\Bigr\rangle\nonumber\\ &=&\bigl(\nabla_u\tau\bigr)(w)+\bigl(\nabla_w\tau\bigr)(u)-2\sum_{k=1}^m\eta(e_k)h(u,w,e_k)\nonumber\\ &=&L(u,w)+L(w,u).\label{evol g} \end{eqnarray} Thus the evolution of the volume form $d\mu$ is given by \begin{equation}\label{evol dmu} \frac{d}{dt}\, d\mu=\frac{1}{2}\operatorname{trace}_g\Bigl(\frac{d}{dt}\, g\Bigr)d\mu=(d^\dagger\tau-\langle \Theta_H,\eta\rangle)d\mu, \end{equation} where $d^\dagger\tau$ is defined by $d^\dagger \tau=\operatorname{trace}(\nabla\tau)=\sum_{k=1}^m\bigl(\nabla_{e_k}\tau\bigr)(e_k)$, $H$ denotes the mean curvature vector field $H=\operatorname{trace}A$ and $\Theta_H$ is the mean curvature $1$-form on $M$ given by $\Theta_H(\cdot)=\langle H,J\cdot\rangle$. In the next step we compute the evolution equation of the second fundamental form. \begin{eqnarray*} &&\bigl(\nabla_\frac{d}{dt}\, h\bigr) (u,v,w)\\ &=&\underbrace{\left\langle\nabla_\frac{d}{dt}\,(\nabla dF)(u,v),\nu(w)\right\rangle}_{=:S} +\underbrace{\left\langle(\nabla_u dF)(v),J(\nabla_\frac{d}{dt}\, dF)(w)\right\rangle}_{=:T}, \end{eqnarray*} where we have used that $J$ is parallel. For $T$ we compute with \eqref{struc 1} and \eqref{evol dF} \begin{eqnarray*} T&=&\left\langle(\nabla_u dF)(v),J(\nabla_\frac{d}{dt}\, dF)(w)\right\rangle\\ &=&\sum_{k=1}^mh(u,v,e_k)L(w,e_k). \end{eqnarray*} To compute $S$ we first need to interchange the covariant derivatives. By definition of the curvature tensor of the pull-back bundle we have \begin{eqnarray*} S&=&\left\langle\nabla_\frac{d}{dt}\,(\nabla dF)(u,v),\nu(w)\right\rangle\\ &=&\left\langle\nabla_u\bigl(\nabla_\frac{d}{dt}\, dF\bigr)(v)+R_N\hspace{-2pt}\left(\frac{dF}{dt},dF(u)\right)\hspace{-2pt}dF(v),\nu(w)\right\rangle, \end{eqnarray*} where $R_N$ denotes the curvature tensor on $N$. Taking into account \eqref{evol dF} and $J\nu(e_k)=-dF(e_k)$, the first term on the RHS simplifies to \begin{eqnarray*} \left\langle\nabla_u\bigl(\nabla_\frac{d}{dt}\, dF\bigr)(v),\nu(w)\right\rangle=(\nabla_uK)(v,w)+\sum_{k=1}^mh(u,w,e_k)L(v,e_k). \end{eqnarray*} Combining everything gives \begin{eqnarray*} \bigl(\nabla_\frac{d}{dt}\, h\bigr) (u,v,w)&=&(\nabla_uK)(v,w)\nonumber\\ &&+\sum_{k=1}^mh(u,w,e_k)L(v,e_k)+\sum_{k=1}^mh(u,v,e_k)L(w,e_k)\\ &&+\left\langle R_N\hspace{-2pt}\left(\frac{dF}{dt},dF(u)\right)\hspace{-2pt}dF(v),\nu(w)\right\rangle. \end{eqnarray*} The mean curvature form $\Theta_H$ is given by $\Theta_H(u)=\sum_{k=1}^mh(u,e_k,e_k)$.
Taking into account \eqref{evol g}, we take the trace in the last evolution equation over $v, w$ and obtain \begin{eqnarray*} \nabla_\frac{d}{dt}\, \Theta_H=d(\operatorname{trace}(K))=d\bigl(d^\dagger \eta+\langle\tau,\Theta_H\rangle\bigr), \end{eqnarray*} where we have used that the Lagrangian condition and the K\"ahler identity on $N$ imply that the trace $$\sum_{k=1}^m\left\langle R_N\hspace{-2pt}\left(\frac{dF}{dt},dF(u)\right)\hspace{-2pt}dF(e_k),\nu(e_k)\right\rangle$$ gives a Ricci curvature and thus vanishes since Calabi-Yau manifolds are Ricci flat. From this evolution equation we deduce that the Lagrangian angle $\alpha$, i.e.\ the potential with $d\alpha=\Theta_H$, evolves according to \begin{equation}\label{evol alpha} \frac{d}{dt}\,\alpha=d^\dagger\eta+\langle\tau,\Theta_H\rangle. \end{equation} As above let $dz$ denote the complex volume form on the Calabi-Yau manifold. Since $F^*dz=e^{i\phi}d\mu$ with phase function $\phi$ we must have $\alpha=\phi-\phi_0$ for some constant $\phi_0$. Thus $$F^*(e^{-i\phi_0}dz)=(\cos\alpha +i\sin\alpha)d\mu.$$ Therefore from \eqref{evol dmu} we get the evolution equation \begin{eqnarray*} \frac{d}{dt}\, F^*(e^{-i\phi_0}dz) &=&\left(i\frac{d}{dt}\, \alpha+d^\dagger\tau-\langle \Theta_H,\eta\rangle\right)F^*(e^{-i\phi_0}dz). \end{eqnarray*} This and \eqref{evol alpha} imply \begin{eqnarray*} \frac{d}{dt}\, (\cos\alpha\, d\mu)&=&\Bigl(-\sin\alpha(d^\dagger\eta+\langle\tau,\Theta_H\rangle)+\cos\alpha(d^\dagger\tau-\langle \Theta_H,\eta\rangle)\Bigr)d\mu,\\ \frac{d}{dt}\, (\sin\alpha\, d\mu)&=&\Bigl(\cos\alpha(d^\dagger\eta+\langle\tau,\Theta_H\rangle)+\sin\alpha(d^\dagger\tau-\langle \Theta_H,\eta\rangle)\Bigr)d\mu. \end{eqnarray*} If we now choose $$\eta=\Theta_H,\qquad\tau=\tan\alpha\cdot \Theta_H=-d(\log\cos\alpha),$$ we get \begin{eqnarray*} &&-\sin\alpha(d^\dagger\eta+\langle\tau,\Theta_H\rangle)+\cos\alpha(d^\dagger\tau-\langle \Theta_H,\eta\rangle)\\ &=&-\sin\alpha (d^\dagger \Theta_H+\tan\alpha|\Theta_H|^2)\\ &&+\cos\alpha\left(\tan\alpha\cdot d^\dagger \Theta_H+\frac{|\Theta_H|^2}{\cos^2\alpha}-|\Theta_H|^2\right)\\ &=&0. \end{eqnarray*} Moreover \begin{eqnarray*} &&\cos\alpha(d^\dagger\eta+\langle\tau,\Theta_H\rangle)+\sin\alpha(d^\dagger\tau-\langle \Theta_H,\eta\rangle)\\ &=&\cos\alpha (d^\dagger \Theta_H+\tan\alpha|\Theta_H|^2)\\ &&+\sin\alpha\left(\tan\alpha\cdot d^\dagger \Theta_H+\frac{|\Theta_H|^2}{\cos^2\alpha}-|\Theta_H|^2\right)\\ &=&\frac{d^\dagger \Theta_H}{\cos\alpha} +\frac{\sin\alpha}{\cos^2\alpha} |\Theta_H|^2=-\Delta(\log{\cos\alpha}). \end{eqnarray*} We summarize this in the following lemma. \begin{lemma} Under the reparametrized Lagrangian mean curvature flow $$\frac{d}{dt}\, F=H-\tan\alpha\cdot JH$$ we have \begin{eqnarray} \frac{d}{dt}\, (\cos\alpha\, d\mu)&=&0,\label{eq evol c}\\ \frac{d}{dt}\, (\sin\alpha\, d\mu)&=&\Delta\left(\log{\frac{1}{\cos\alpha}}\right)\, d\mu.\label{eq evol s} \end{eqnarray} \end{lemma} \begin{remark} If one first chooses $\eta=\Theta_H$, then $\frac{d}{dt}\,(\cos\alpha\, d\mu)=0$ if and only if $$d^\dagger(\cos\alpha\cdot\tau-\sin\alpha\cdot \Theta_H)=0.$$ Therefore one observes that $\frac{d}{dt}\,(\cos\alpha\, d\mu)=0$ if and only if $$\cos\alpha\cdot\tau=\sin\alpha\cdot\Theta_H+\sigma,$$ where $d^\dagger\sigma=0$, i.e.\ where $\sigma$ is a smooth time-dependent co-closed $1$-form.
\end{remark} From the computations above one observes that the Lagrangian angle evolves under the reparametrized Lagrangian mean curvature flow \eqref{eq rlmcf} according to $$\frac{d}{dt}\,\alpha=\Delta\alpha+\tan\alpha\,|\nabla \alpha|^2.$$ Equation (\ref{eq evol c}) is the key ingredient to prove Theorem A. \textbf{Proof of Theorem A.} The evolution equation for $\cos\alpha$ is $$\frac{d}{dt}\,\cos\alpha=\Delta\cos\alpha+\cos\alpha\,|\nabla\alpha|^2-\frac{1}{\cos\alpha}|\nabla(\cos\alpha)|^2.$$ Thus the parabolic maximum principle implies \begin{equation} \label{eq evol e} \min_{M\times\{t\}}\cos\alpha\ge \min_{M\times\{0\}}\cos\alpha=\epsilon,\quad\text{for all }t\in[0,T). \end{equation} Then for each measurable set $\Omega\subset M$ we get $$\operatorname{vol}_t(\Omega)=\int\limits_{\Omega\times\{t\}}d\mu\ge\int\limits_{\Omega\times\{t\}}\cos\alpha\, d\mu\overset{\eqref{eq evol c}}{=}\int\limits_{\Omega\times\{0\}}\cos\alpha\, d\mu\ge\epsilon\operatorname{vol}_0(\Omega).$$ Similarly $$\operatorname{vol}_t(\Omega) =\int\limits_{\Omega\times\{t\}}d\mu \overset{\eqref{eq evol e}}{\le}\frac{1}{\epsilon}\int\limits_{\Omega\times\{t\}}\cos\alpha \, d\mu\overset{\eqref{eq evol c}}{=}\frac{1}{\epsilon}\int\limits_{\Omega\times\{0\}}\cos\alpha \, d\mu\le\frac{1}{\epsilon}\operatorname{vol}_0(\Omega).$$ This completes the proof of Theorem A.\hfill{$\square$} \section{Translating solitons} In general, the equation for a translating soliton $F:M\to\complex{m}$ for the mean curvature flow is \begin{equation}\label{eq trans} H=V^\perp, \end{equation} where $H$ denotes the mean curvature vector of $M$, $V$ is a constant vector of unit length in $\complex{m}$ and $V^\perp$ denotes the normal part of $V$ along the submanifold. There exist a number of results for translating solitons, e.g. in \cite{mss}, \cite{nevestian}, \cite{sun1} and \cite{sun2}. In case of a Lagrangian translating soliton $M\subset\complex{m}$ equation \eqref{eq trans} can be expressed in terms of the mean curvature $1$-form $\Theta_H$, \begin{equation}\label{eq trans3} \Theta_H=d\textsl{x}, \end{equation} where $\textsl{x}$ is the coordinate function $$\textsl{x}:=-\langle JV,F\rangle.$$ In particular, translating Lagrangian solitons have trivial first Maslov class and are of gradient type. Moreover, \begin{equation}\label{eq transv} \alpha:=\textsl{x} \end{equation} is a choice for the Lagrangian angle $\alpha$. From this one immediately gets \begin{equation}\label{eq trans1} V=H+\nabla \textsl{u}, \end{equation} where $\textsl{u}$ is the coordinate function $$\textsl{u}:=\langle V,F\rangle.$$ $|V|=1$ implies \begin{equation}\label{eq trans2} 1=|\nabla\alpha|^2+|\nabla \textsl{u}|^2. \end{equation} Next we will compute the Laplacians of various functions. From $\alpha=\textsl{x}$ we get \begin{eqnarray} \nabla^2\alpha&=&-h(\nabla \textsl{u},\cdot,\cdot),\label{eq trans4}\\ \nabla^2\textsl{u}&=&h(\nabla\alpha,\cdot,\cdot).\label{eq trans5} \end{eqnarray} Taking traces gives \begin{eqnarray} \Delta\alpha+\langle\nabla\alpha,\nabla \textsl{u}\rangle&=&0,\label{eq trans6}\\ \Delta \textsl{u}-|\nabla\alpha|^2&=&0.\label{eq trans7} \end{eqnarray} With \eqref{eq trans2} we derive from \eqref{eq trans7} that \begin{eqnarray} \Delta e^\textsl{u}=e^\textsl{u}(\Delta \textsl{u}+|\nabla \textsl{u}|^2)=e^\textsl{u}. 
\end{eqnarray} Let us define the function $$f:=e^\textsl{u}\cos\alpha.$$ Since \begin{eqnarray} \nabla f=e^\textsl{u}(\cos\alpha\nabla \textsl{u}-\sin\alpha\nabla\alpha)=f\nabla\textsl{u}-e^\textsl{u}\sin\alpha\nabla\alpha, \end{eqnarray} we obtain \begin{eqnarray} \nabla^2f&=&\nabla f\otimes\nabla \textsl{u} +f(\nabla^2\textsl{u}-\nabla\alpha\otimes\nabla\alpha)-e^\textsl{u}\sin\alpha(\nabla^2 \alpha+\nabla\textsl{u}\otimes\nabla\alpha).\nonumber\\ &&\label{eq trans9} \end{eqnarray} Hence taking a trace and using \eqref{eq trans6}, \eqref{eq trans7} we get \begin{eqnarray} \Delta f-\langle\nabla f,\nabla \textsl{u}\rangle=0.\label{eq trans8} \end{eqnarray} We want to exploit equation \eqref{eq trans9} even further in case $\cos\alpha>0$. To this end observe that from $1=\frac{1}{\cos^2\alpha}-\tan^2\alpha$ we get \begin{eqnarray*} &&\nabla^2\textsl{u}-\nabla\alpha\otimes\nabla\alpha-\tan\alpha (\nabla^2\alpha+\nabla\textsl{u}\otimes\nabla\alpha)\\ &=&\frac{1}{\cos^2\alpha}(\nabla^2 \textsl{u}-\nabla\alpha\otimes\nabla\alpha)\\ &&+\frac{\tan\alpha}{\cos\alpha}\Bigl(\sin\alpha(\nabla\alpha\otimes\nabla\alpha-\nabla^2\textsl{u})- \cos\alpha (\nabla^2\alpha+\nabla\textsl{u}\otimes\nabla\alpha)\Bigr). \end{eqnarray*} The last line can be substituted using equations \eqref{eq trans4}, \eqref{eq trans5} and this gives \begin{eqnarray*} &&\nabla^2\textsl{u}-\nabla\alpha\otimes\nabla\alpha-\tan\alpha (\nabla^2\alpha+\nabla\textsl{u}\otimes\nabla\alpha)\\ &=&\frac{1}{\cos^2\alpha}(\nabla^2 \textsl{u}-\nabla\alpha\otimes\nabla\alpha)\\ &&+\frac{\tan\alpha}{\cos\alpha}\Bigl(h(\cos\alpha\nabla\textsl{u}-\sin\alpha\nabla\alpha,\cdot,\cdot)+(\sin\alpha\nabla\alpha-\cos\alpha\nabla\textsl{u})\otimes\nabla\alpha\Bigr). \end{eqnarray*} Therefore we have \begin{eqnarray*} &&\nabla^2\textsl{u}-\nabla\alpha\otimes\nabla\alpha-\tan\alpha (\nabla^2\alpha+\nabla\textsl{u}\otimes\nabla\alpha)\\ &=&\frac{1}{\cos^2\alpha}(\nabla^2 \textsl{u}-\nabla\alpha\otimes\nabla\alpha)+\frac{\tan\alpha}{f}\Bigl(h(\nabla f,\cdot,\cdot)-\nabla f\otimes\nabla\alpha\Bigr). \end{eqnarray*} Combining this with \eqref{eq trans9} implies \begin{eqnarray} \nabla^2f &=&\nabla f\otimes\nabla\textsl{u}+\frac{f}{\cos^2 \alpha}(\nabla^2\textsl{u}-\nabla\alpha\otimes\nabla\alpha)\nonumber\\ &&+\tan\alpha\Bigl(h(\nabla f,\cdot,\cdot)-\nabla f\otimes\nabla\alpha\Bigr)\nonumber\\ &=&\frac{1}{f}\nabla f\otimes\nabla f+\tan\alpha\cdot h(\nabla f,\cdot,\cdot)\nonumber\\ &&+\frac{f}{\cos^2 \alpha}\Bigl(h(\nabla\alpha,\cdot,\cdot)-\nabla\alpha\otimes\nabla\alpha\Bigr).\label{eq trans10} \end{eqnarray} \textbf{Proof of Theorem B.} Let us first mention that since $0\in M$ and $\alpha=\textsl{x}=-\langle p,JV\rangle$, there exists at least one point $p\in M$ with $\textsl{u}(p)=0$ and $\alpha(p)=0$. Therefore, if $f$ is constant, this constant clearly is $1$. \begin{enumerate}[1.] \item We prove (a)$\Leftrightarrow$(c): \item[] From $V=H+\nabla \textsl{u}$ we see that $V$ takes the form in (\ref{eq rlmcf}), if and only if $$\nabla \textsl{u}=-\tan\alpha \cdot JH=\tan\alpha\,\nabla\alpha,$$ i.e. if and only if $$\nabla(e^\textsl{u}\cos\alpha)=e^\textsl{u}(\cos\alpha\nabla \textsl{u}-\sin\alpha\nabla \alpha)=0.$$ \item The equivalence (b)$\Leftrightarrow$(c) follows from the strong elliptic maximum principle applied to the equation \eqref{eq trans8}. 
\item (d)$\Leftrightarrow$(c): \item[] Since $e^\textsl{u}\cos\alpha\equiv 1$ on the grim reaper $\Gamma$, this implies that $e^\textsl{u}\cos\alpha$ must be constant and equal to $1$ as well on the product of $\Gamma$ with a minimal Lagrangian submanifold $\Sigma\subset\complex{m-1}$. So clearly (d) implies (c). It remains to show that (c) implies (d). \noindent Let us assume that $e^\textsl{u}\cos\alpha$ is constant. Since the origin is contained in $M$, this constant is $1$. Thus in particular $\cos\alpha>0$ on $M$. From $\nabla f=0$ with $f=e^\textsl{u}\cos\alpha$ we first observe \begin{equation}\label{eq proof1} \nabla \textsl{u}=\tan\alpha\,\nabla\alpha \end{equation} and then with \eqref{eq trans2} $$\sin^2\alpha|\nabla\alpha|^2=\cos^2\alpha|\nabla \textsl{u}|^2=\cos^2\alpha(1-|\nabla\alpha|^2),$$ which implies $$|\nabla\alpha|^2=\cos^2 \alpha>0.$$ In particular the kernel of $\Theta_H=d\alpha$ at each point $p\in M$ is $(m-1)$-dimensional. Let $\mathcal D$ denote the corresponding $(m-1)$-dimensional distribution on $M$ defined by $\mathcal D_p:=\operatorname{ker}\Theta_H|_p$. \smallskip\noindent\textbf{Claim:} $\mathcal D$ is parallel. \\ \textit{Proof.} \noindent From \eqref{eq trans10} and $\nabla f=0$ we derive $$h(\nabla\alpha,\cdot,\cdot)=\nabla\alpha\otimes\nabla\alpha.$$ With \eqref{eq trans4} and \eqref{eq proof1} we then conclude \begin{eqnarray*} \nabla^2\alpha &=&-h(\nabla\textsl{u},\cdot,\cdot)\\ &=&-\tan\alpha\cdot h(\nabla\alpha,\cdot,\cdot)\\ &=&-\tan\alpha\cdot\nabla\alpha\otimes\nabla\alpha. \end{eqnarray*} Now let $\gamma:[0,1]\to M$ be a smooth curve with $\gamma(0)=p$ and let $W$ be the parallel transport of $W_0\in\mathcal D_p$ along $\gamma$. Then we compute along $\gamma$: $$\frac{\partial}{\partial s}\bigl(\Theta_H(W)\bigr)=\bigl(\nabla_{\gamma'}\Theta_H\bigr)(W)=-\tan\alpha\cdot\Theta_H(\gamma')\Theta_H(W).$$ Since the solution of this ODE with $\Theta_H(W)(0)=0$ is unique, it follows that $\Theta_H(W)(s)=0$ for all $s\in[0,1]$, which implies that $W(\gamma(s))\in \mathcal D_{\gamma(s)}$ for all $s$ and that $\mathcal D$ is invariant under parallel transport. This proves the claim.\hfill{$\ast$} \smallskip\noindent Therefore the manifold $M$ splits into a Riemannian product of a curve $\beta$ with an $(m-1)$-dimensional submanifold $\Sigma$. Let $\pi :M\to \Sigma$ denote the natural projection. The tangent space $T_{\pi(p)}\Sigma$ is given by $\mathcal D_p$. Since $J\nabla\alpha=H=V^\perp$, the curve $\beta$ lies in the $(\textsl{x},\textsl{u})$-plane spanned by $V,JV$. Therefore $\Sigma\subset\complex{m-1}$ is also Lagrangian. If at a point $p\in M$ we choose an ONB $\nu_1,\dots,\nu_m$ of the normal space $T_p^\perp M$ with $\nu_1=H/|H|$, then the trace of $A^{\nu_k}:=\langle A(\cdot,\cdot),\nu_k\rangle$ is given by $-\Theta_H(J\nu_k)$ and hence vanishes for all $k\ge 2$. Therefore from the Lagrangian condition and $\mathcal D_p=\operatorname{ker}\Theta_H|_p$ we see that the mean curvature vector of $\Sigma$ vanishes identically and $\Sigma$ is a minimal Lagrangian submanifold. Since $M=\beta\times\Sigma$, with a minimal Lagrangian submanifold $\Sigma$, we finally conclude that $\beta\subset\complex{}$ must itself be a translating soliton in the plane, which implies that $\beta$ must be the grim reaper $\Gamma$.
\end{enumerate} \hfill{$\square$} The Omori-Yau maximum principle \cite{omori} (later extended by Yau \cite{yau}, see also \cite{prs}) states that if $(M,g)$ is a complete Riemannian manifold with sectional curvatures bounded below, then for every $f\in C^2(M)$ that is bounded above there exists a sequence $(x_k)_{k\in\natural{}}\subset M$ such that $$f(x_k)\ge \sup_M f-\frac{1}{k},\qquad |\nabla f(x_k)|<\frac{1}{k},\qquad \nabla^2 f(x_k)\le\frac{1}{k} g.$$ If a translator $M\subset\complex{m}$ has bounded second fundamental form, then the Gau\ss\ equations imply that all sectional curvatures are bounded. Hence we may apply the Omori-Yau maximum principle to complete translators with bounded second fundamental form. Since the function $e^\textsl{u}$ satisfies the equation $$\Delta e^\textsl{u}=e^\textsl{u},$$ the Omori-Yau maximum principle immediately implies that $\textsl{u}$ cannot be bounded above on any complete translator with bounded second fundamental form. Indeed, if $\textsl{u}$ were bounded above, applying the principle to the bounded function $f=e^\textsl{u}$ and taking the trace of $\nabla^2f(x_k)\le\frac{1}{k}g$ would give $$\sup_M e^\textsl{u}-\frac{1}{k}\le e^{\textsl{u}(x_k)}=\Delta e^{\textsl{u}}(x_k)\le\frac{m}{k}$$ for all $k$, forcing $\sup_M e^\textsl{u}\le 0$, which is impossible. Theorem C claims that the same holds from below, provided the translator is strictly calibrated in the sense $\cos\alpha\ge\epsilon$ for some constant $\epsilon>0$. \textbf{Proof of Theorem C.} Suppose that $\textsl{u}$ is bounded below by some constant $c_0$. We will derive a contradiction. Since by assumption $\cos\alpha\ge\epsilon>0$, we can choose a constant $\sigma>1$ such that \begin{equation}\label{eq cos} \cos(\sigma\alpha)\ge\frac{\epsilon}{2}. \end{equation} We set $f_\sigma:=e^\textsl{u}\cos(\sigma\alpha)$ and compute \begin{eqnarray} \Delta f_\sigma &=&\cos(\sigma\alpha)\Delta e^\textsl{u}+e^\textsl{u}\Delta\cos(\sigma\alpha)+2\langle\nabla e^\textsl{u},\nabla\cos(\sigma\alpha)\rangle\nonumber\\ &=&f_\sigma+e^\textsl{u}(-\sigma\sin(\sigma\alpha)\Delta\alpha-\sigma^2\cos(\sigma\alpha)|\nabla\alpha|^2)\nonumber\\ &&+2\langle\nabla \textsl{u},e^\textsl{u}\,\nabla \cos(\sigma\alpha)\rangle\nonumber\\ &\overset{\eqref{eq trans6}}{=}&f_\sigma+\langle\nabla \textsl{u},e^\textsl{u}\,\nabla \cos(\sigma\alpha)\rangle-\sigma^2 f_\sigma|\nabla\alpha|^2\nonumber\\ &=&\langle\nabla \textsl{u},\nabla f_\sigma\rangle-f_\sigma|\nabla \textsl{u}|^2+f_\sigma-\sigma^2 f_\sigma|\nabla\alpha|^2\nonumber\\ &=&\langle\nabla \textsl{u},\nabla f_\sigma\rangle+(1-\sigma^2)f_\sigma|\nabla\alpha|^2.\label{eq cos2} \end{eqnarray} Now $$\nabla f_\sigma=f_\sigma\bigl(\nabla \textsl{u}-\sigma\tan(\sigma\alpha)\nabla\alpha\bigr),$$ so that $$\sigma^2\tan^2(\sigma\alpha)|\nabla\alpha|^2=\left|\frac{\nabla f_\sigma}{f_\sigma}-\nabla \textsl{u}\right|^2$$ which in view of $|\nabla \textsl{u}|^2=1-|\nabla\alpha|^2$ implies $$\frac{(\sigma^2-1)\sin^2(\sigma\alpha)+1}{\cos^2(\sigma\alpha)}|\nabla\alpha|^2=1+\frac{|\nabla f_\sigma|^2}{f_\sigma^2}-2\left\langle\frac{\nabla f_\sigma}{f_\sigma}, \nabla \textsl{u}\right\rangle.$$ Because $|\nabla \textsl{u}|^2\le 1$ and $\sigma^2-1\ge 0$ we can apply the Peter-Paul inequality on the right hand side to obtain \begin{equation}\label{eq cos3} \frac{\sigma^2}{\cos^2(\sigma\alpha)} |\nabla\alpha|^2\ge \frac{1}{2}-\frac{|\nabla f_\sigma|^2}{f_\sigma^2}.
\end{equation} Since by assumption $\textsl{u}\ge c_0$, we may combine $$\delta:=\inf_Mf_\sigma\ge \frac{\epsilon }{2}\,e^{c_0}>0$$ with \eqref{eq cos2}, \eqref{eq cos3} to get the estimate \begin{eqnarray} \Delta f_\sigma &\le&|\nabla \textsl{u}|\cdot|\nabla f_\sigma|+(1-\sigma^2)f_\sigma|\nabla\alpha|^2\nonumber\\ &\le&|\nabla f_\sigma|+\frac{1-\sigma^2}{\sigma^2}\cos^2(\sigma\alpha)\left(\frac{f_\sigma}{2}-\frac{|\nabla f_\sigma|^2}{f_\sigma} \right)\nonumber\\ &\le&|\nabla f_\sigma|+\frac{1-\sigma^2}{\sigma^2}\cos^2(\sigma\alpha)\left(\frac{\delta}{2}-\frac{|\nabla f_\sigma|^2}{\delta} \right).\label{eq cos4} \end{eqnarray} Thus there exists a positive constant $C=C(\delta,\sigma,\epsilon)$ such that $|\nabla f_\sigma|\le C$ implies \begin{equation}\label{eq cos5} \Delta f_\sigma\le\frac{\delta(1-\sigma^2)}{4\sigma^2}\cos^2(\sigma\alpha)\le\frac{\delta\epsilon^2(1-\sigma^2)}{16\sigma^2}. \end{equation} But since $\inf_Mf_\sigma=\delta$, the Omori-Yau maximum principle implies that there exists a sequence $(x_k)_{k\in\mathbb{N}}\subset M$ such that $$f_\sigma(x_k)\le\delta+\frac{1}{k},\qquad|\nabla f_\sigma(x_k)|<\frac{1}{k},\qquad\Delta f_\sigma(x_k)\ge -\frac{1}{k}.$$ In view of \eqref{eq cos5} this yields a contradiction for large enough $k$, and hence the function $\textsl{u}$ cannot be bounded below. \hfill{$\square$} \bibliographystyle{alpha} \begin{bibdiv} \begin{biblist} \bib{angenent}{article}{ author={Angenent, S.}, title={On the formation of singularities in the curve shortening flow}, journal={J. Differential Geom.}, volume={33}, date={1991}, number={3}, pages={601--633}, } \bib{chen}{article}{ author={Chen, J.}, author={Li, J.}, title={Singularity of mean curvature flow of Lagrangian submanifolds}, journal={Invent. Math.}, volume={156}, date={2004}, number={1}, pages={25--51}, } \bib{huisken}{article}{ author={Huisken, G.}, title={Local and global behaviour of hypersurfaces moving by mean curvature}, conference={ title={Differential geometry: partial differential equations on manifolds}, address={Los Angeles, CA}, date={1990}, }, book={ series={Proc. Sympos. Pure Math.}, volume={54}, publisher={Amer. Math. Soc., Providence, RI}, }, date={1993}, pages={175--191}, } \bib{joyce}{article}{ author={Joyce, D.}, author={Lee, Y.-I.}, author={Tsui, M.-P.}, title={Self-similar solutions and translating solitons for Lagrangian mean curvature flow}, journal={J. Differential Geom.}, volume={84}, date={2010}, number={1}, pages={127--161}, } \bib{kunikawa}{article}{ author={Kunikawa, K.}, title={Non existence of eternal solutions to Lagrangian mean curvature flow.}, journal={arXiv:1611.03594v1}, date={2016}, } \bib{mss}{article}{ author={Mart\'\i n, F.}, author={Savas-Halilaj, A.}, author={Smoczyk, K.}, title={On the topology of translating solitons of the mean curvature flow}, journal={Calc. Var. Partial Differential Equations}, volume={54}, date={2015}, number={3}, pages={2853--2882}, } \bib{neves}{article}{ author={Neves, A.}, title={Singularities of Lagrangian mean curvature flow: zero-Maslov class case}, journal={Invent. Math.}, volume={168}, date={2007}, number={3}, pages={449--484}, } \bib{nevestian}{article}{ author={Neves, A.}, author={Tian, G.}, title={Translating solutions to Lagrangian mean curvature flow}, journal={Trans. Amer. Math. Soc.}, volume={365}, date={2013}, number={11}, pages={5655--5680}, } \bib{omori}{article}{ author={Omori, H.}, title={Isometric immersions of Riemannian manifolds}, journal={J. Math. Soc.
Japan}, volume={19}, date={1967}, pages={205--214}, } \bib{prs}{article}{ author={Pigola, S.}, author={Rigoli, M.}, author={Setti, A. G.}, title={Maximum principles on Riemannian manifolds and applications}, journal={Mem. Amer. Math. Soc.}, volume={174}, date={2005}, number={822}, pages={x+99}, issn={0065-9266}, review={\MR{2116555}}, } \bib{savassmoczyk}{article}{ author={Savas-Halilaj, A.}, author={Smoczyk, K.}, title={Lagrangian mean curvature flow of Whitney spheres}, journal={Preprint}, date={2018}, } \bib{sm habil}{book}{ author={Smoczyk, K.}, title = {Der Lagrangesche mittlere Kr\"ummungsfluss.}, pages = {102~p.}, year = {2000}, publisher = {Leipzig, Univ. Leipzig (Habil.-Schr.)}, language = {German}, } \bib{sun1}{article}{ author={Sun, J.}, title={Rigidity results on Lagrangian and symplectic translating solitons}, journal={Commun. Math. Stat.}, volume={3}, date={2015}, number={1}, pages={63--68}, } \bib{sun2}{article}{ author={Sun, J.}, title={Mean curvature decay in symplectic and Lagrangian translating solitons}, journal={Geom. Dedicata}, volume={172}, date={2014}, pages={207--215}, } \bib{wang}{article}{ author={Wang, M.-T.}, title={Mean curvature flow of surfaces in Einstein four-manifolds}, journal={J. Differential Geom.}, volume={57}, date={2001}, number={2}, pages={301--338}, } \bib{yau}{article}{ author={Yau, S.-T.}, title={Harmonic functions on complete Riemannian manifolds}, journal={Comm. Pure Appl. Math.}, volume={28}, date={1975}, pages={201--228}, issn={0010-3640}, review={\MR{0431040}}, } \end{biblist} \end{bibdiv} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{INTRODUCTION} The precise numerical value of the strange quark mass is a controversial issue, with important implications for low--energy phenomenology. The Particle Data Group \cite{PDG98} quotes a rather wide range of $m_s$ values, reflecting the large uncertainties in the present determinations of this parameter from QCD Sum Rules and Lattice calculations. The high precision data on tau decays \cite{TAU98} collected at LEP and CESR provide a very powerful tool to analyse strange quark mass effects in a cleaner environment. The QCD analysis of the inclusive tau decay width, \be \label{defrtau} R_\tau \equiv \frac{\dis \Gamma \left[ \tau^-\to\nu_\tau + {\rm hadrons} \; (\gamma) \right]} {\dis \Gamma \left[ \tau^- \to e^- \, \overline{\nu}_e \, \nu_\tau (\gamma) \right]} \, , \ee has already made possible \cite{PIC97} an accurate measurement of the strong coupling constant at the $\tau$ mass scale, $\alpha_s(M_\tau^2)$, which complements and competes in accuracy with the high precision measurements of $\alpha_s(M_Z^2)$ performed at LEP. More recently, detailed experimental studies of the Cabibbo--suppressed width of the $\tau$ have started to become available \cite{ALEPH99,CDH99}, allowing one to perform a systematic investigation of the corrections induced by the strange quark mass in the $\tau$ decay width \cite{PP98,PP99,CKP98}. What makes an $m_s$ determination from $\tau$ data very interesting is that the hadronic input does not depend on any extra hypothesis; it is a purely experimental issue, whose accuracy can be systematically improved. The major part of the uncertainty will eventually come from the theoretical side. However, owing to its inclusive character, the total Cabibbo--suppressed tau decay width can be rigorously analyzed within QCD, using the Operator Product Expansion (OPE). Therefore, the theoretical input is in principle under control and the associated uncertainties can be quantified. \section{THEORETICAL FRAMEWORK} The theoretical analysis of the inclusive hadronic tau decay width \cite{BNP92,NP88,BRA89,DP92} involves the two--point correlation functions $$ \Pi^{\mu\nu}_{ij,{\cal J}}(q) \equiv i {\dis \int }{\rm d}^4 x\, e^{iqx} \, \langle 0 | T \left( {\cal J}_{ij}^\mu (x)\, {\cal J}_{ij}^\nu (0)^\dagger \right) |0 \rangle $$ for the vector, ${\cal J}_{ij}^\mu = V_{ij}^\mu (x) \equiv \overline q_j \gamma^\mu q_i $, and axial--vector, ${\cal J}_{ij}^\mu = A_{ij}^\mu (x) \equiv \overline q_j \gamma^\mu\gamma_5 q_i $, colour--singlet quark currents ($i,j = u, d, s$). These correlators have the Lorentz decompositions \beqn \Pi^{\mu\nu}_{ij,V/A}(q) &\!\! = &\!\! \left( - g^{\mu\nu}\, q^2 + q^\mu q^\nu \right) \, \Pi^{T}_{ij,V/A}(q^2) \no\\ &&\!\! \mbox{} + q^\mu q^\nu \, \Pi^{L}_{ij,V/A}(q^2) \, , \eeqn where the superscript in the transverse and longitudinal components denotes the corresponding angular momentum $J=1$ (T) and $J=0$ (L) in the hadronic rest frame.
\begin{table*} {\centering \begin{tabular}{lrrc|lrrc} \hline $(k,l)$ & $\cF^{kl}_{L+T}(x)$ & $\cF^{kl}_{L}(x)$ \\ \hline (0,0) & $(1-x)^3\, (1+x)$ & $(1-x)^3$ \\ (1,0) & $\frac{1}{10}\, (1-x)^4\, (7+8x)$ & $\frac{3}{4}\, (1-x)^4$ \\ (2,0) & $\frac{2}{15}\, (1-x)^5\, (4+5x)$ & $\frac{3}{5}\, (1-x)^5$ \\ (1,1) & $\frac{1}{6}\, (1-x)^4\, (1+2x)^2$ & $\frac{3}{20}\, (1-x)^4\, (1+4x)$ \\ (1,2) & $\frac{1}{210}\, (1-x)^4\, (13+52 x + 130 x^2 + 120 x^3)$ & $\frac{1}{20}\, (1-x)^4\, (1+4 x + 10 x^2)$ \\ \hline \end{tabular} \caption{Explicit values of the relevant kinematical kernels.} \label{tab:kernels}} \end{table*} The semi-hadronic decay rate of the $\tau$ lepton can be expressed as an integral of the spectral functions ${\rm Im} \, \Pi^T(s)$ and ${\rm Im} \, \Pi^L(s)$ over the invariant mass $s$ of the final--state hadrons as follows: \beqn \label{rtau} \lefteqn{R_\tau = 12 \pi {\dis \int^{M_\tau^2}_0} \frac{{\rm d} s}{M_\tau^2} \, \left(1-{s\over M_\tau^2}\right)^2}&& \no \\ && \mbox{}\times \left[ \left( 1+2{s\over M_\tau^2}\right) {\rm Im}\, \Pi^T(s) + {\rm Im} \, \Pi^L(s) \right] . \eeqn Moreover, according to the quantum numbers content of the two--point function correlators \beqn \label{correlators} \Pi^J(s) &\!\!\equiv&\!\! |V_{ud}|^2 \left[ \Pi_{V, ud}^J(s) + \Pi_{A,ud}^J(s)\right] \nonumber \\ &\!\! +&\!\! |V_{us}|^2 \left[ \Pi_{V, us}^J(s) + \Pi_{A,us}^J(s)\right] , \eeqn we can decompose $R_\tau$ into \beqn R_\tau \equiv R_{\tau, V} + R_{\tau, A} + R_{\tau, S} \, , \eeqn where $R_{\tau, V}$ and $R_{\tau, A}$ correspond to the first two terms in Eq.~(\ref{correlators}), while $R_{\tau, S}$ contains the remaining Cabibbo--suppressed contributions. The measurement of the invariant mass distribution of the final hadrons provides additional information on the QCD dynamics, through the moments \cite{DP92} \be\label{eq:momdef} R_\tau^{kl} \equiv \int_0^{M_\tau^2} \, ds \, \left( 1 -\frac{s}{M_\tau^2} \right)^k\, \left(\frac{s}{M_\tau^2}\right)^l \, {d R_\tau\over d s} \, , \ee which include $R_\tau\equiv R_\tau^{00}$ as a particular case. Exploiting the analytic properties of $\Pi^J(s)$, we can express these moments as contour integrals in the complex $s$-plane running counter-clockwise around the circle $|s|=M_\tau^2$: \beqn \label{contourkl} R_\tau^{kl} &\!\!\!\! =&\!\!\!\! -\pi i \oint_{|x|=1} \frac{{\rm d}x}{x} \,\Bigl\{ 3 \, \cF^{kl}_{L+T}(x) \, D^{L+T}(M_{\tau}^2 x) \Bigr.\no\\ &&\qquad\qquad\quad\; \Bigl.\mbox{} + 4 \, \cF^{kl}_L(x) \, D^L(M_{\tau}^2 x) \Bigr\} . \; \eeqn We have used integration by parts to rewrite $R_\tau^{kl}$ in terms of the logarithmic derivatives \be D^{L+T}(s)\,\equiv\, -s \frac{{\rm d}}{{\rm d}s} \left[\Pi^{L+T}(s)\right] \, , \ee\be D^{L}(s) \,\equiv\, \frac{\dis s}{\dis M_\tau^2} \, \frac{\dis {\rm d}}{\dis {\rm d}s} \left[s\, \Pi^{L}(s)\right] \, , \ee which satisfy homogeneous renormalization group equations. All kinematical factors have been absorbed into the kernels $\cF^{kl}_{L+T}(x)$ and $\cF^{kl}_L(x)$. Table~\ref{tab:kernels} shows the explicit form of these kernels for the moments which we are going to analyze in the following sections. For large enough $-s$, the contributions to $D^J(s)$ can be organized with the OPE in a series of local gauge--invariant scalar operators of increasing dimension $D=2n$, times the appropriate inverse powers of $-s$. This expansion is expected to be well behaved along the complex contour $|s|=M_\tau^2$, except at the crossing point with the positive real axis \cite{PQS76}.
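As a quick consistency check of Table~\ref{tab:kernels}, the suppression of the kernels near the time--like point $x=1$ (a zero of order $3+k$, as discussed next) can be verified symbolically. The following short sympy snippet is our illustration only, not part of the original analysis; the $J=L$ column behaves in the same way.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
# J = L+T kernels, keyed by (k, l), copied from the table above
FLT = {
    (0, 0): (1 - x)**3 * (1 + x),
    (1, 0): sp.Rational(1, 10) * (1 - x)**4 * (7 + 8*x),
    (2, 0): sp.Rational(2, 15) * (1 - x)**5 * (4 + 5*x),
    (1, 1): sp.Rational(1, 6) * (1 - x)**4 * (1 + 2*x)**2,
    (1, 2): sp.Rational(1, 210) * (1 - x)**4
            * (13 + 52*x + 130*x**2 + 120*x**3),
}
for (k, l), f in FLT.items():
    # Smallest n with a nonvanishing n-th derivative at x = 1
    order = next(n for n in range(1, 9)
                 if sp.diff(f, x, n).subs(x, 1) != 0)
    print((k, l), "zero of order", order)   # expect 3 + k
\end{verbatim}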
As shown in Table~\ref{tab:kernels}, the region near the physical cut is strongly suppressed by a zero of order $3+k$ at $s=M_\tau^2$. Therefore, the uncertainties associated with the use of the OPE near the time--like axis are very small. Inserting this series in (\ref{contourkl}) and evaluating the contour integral, one can rewrite $R_\tau^{kl}$ as an expansion in inverse powers of $M_\tau^2$ \cite{BNP92}, \beqn \label{LAB:deltas} \lefteqn{R_\tau^{kl} \equiv 3 \left[ |V_{ud}|^2 + |V_{us}|^2 \right] S_{\rm EW} \,\biggl\{ 1 + \delta'_{\rm EW}\, + \delta^{kl\, (0)} \biggr. }\no\\ &&\!\!\!\!\!\! \biggl.\mbox{} + {\dis \sum_{D=2,4,\cdots}} \left( \cos^2{\theta_C} \, \delta_{ud}^{kl\, (D)}+ \sin^2{\theta_C} \, \delta_{us}^{kl\, (D)} \right) \biggr\} , \no \eeqn where $\sin^2{\theta_C}\equiv |V_{us}|^2/[|V_{ud}|^2+|V_{us}|^2]$ and we have pulled out the electroweak corrections $S_{\rm EW}=1.0194$ \cite{MS88} and $\delta'_{\rm EW}\simeq 0.0010$ \cite{BL90}. The dimension--zero contribution $\delta^{kl\, (0)}$ is the purely perturbative correction, neglecting quark masses, which, owing to chiral symmetry, is identical for the vector and axial--vector parts. The symbols $\delta_{ij}^{kl\, (D)} \equiv [ \delta^{kl\, (D)}_{ij,V} + \delta^{kl\, (D)}_{ij,A}]/2$ stand for the average of the vector and axial--vector contributions from dimension $D\ge 2$ operators; they contain an implicit suppression factor $1/M_\tau^D$. \section{SU(3) BREAKING} The separate measurement of the Cabibbo--allowed and Cabibbo--suppressed decay widths of the $\tau$ \cite{ALEPH99} allows one to pin down the SU(3) breaking effect induced by the strange quark mass, through the differences \beqn \delta R_\tau^{kl} &\!\!\equiv &\!\! {R_{\tau,V+A}^{kl}\over |V_{ud}|^2} - {R_{\tau,S}^{kl}\over |V_{us}|^2} \no\\ &\!\! =&\!\! 3 \, S_{EW}\,\sum_{D\geq 2} \left[ \delta^{kl\, (D)}_{ud} - \delta^{kl\, (D)}_{us} \right] \, . \eeqn The leading contributions to $\delta R_\tau^{kl}$ are quark--mass corrections of dimension two \cite{PP98,PP99}; they are the dominant SU(3) breaking effect, generating the desired sensitivity to the strange quark mass. The corrections of $O(m^4)$ are very tiny \cite{PP99}. The main $D=4$ contribution comes from the SU(3)--breaking quark condensate \be\label{eq:O4def} \delta O_4 \,\equiv\, \langle 0| m_s\, \bar s s - m_d \,\bar d d | 0 \rangle \, . \ee Neglecting the small $O(m^4)$ terms and $D\ge 6$ contributions, $\delta R_\tau^{kl}$ can be written as \cite{PP99}: \beqn \label{Delta2a} \delta R_\tau^{kl} &\!\!\approx &\!\! 24\, S_{EW}\,\biggl\{ {m_s^2(M_\tau^2)\over M_\tau^2} \, \left(1-\epsilon_d^2\right)\, \Delta^{(2)}_{kl}(a_\tau) \biggr.\no\\ &&\biggl.\qquad\qquad - 2 \pi^2\, {\delta O_4\over M_\tau^4} \, Q_{kl}(a_\tau) \biggr\} \, , \eeqn where $\epsilon_d\equiv m_d/ m_s = 0.053 \pm 0.002 $ \cite{LEU96} and $a_\tau\equiv \alpha_s(M_\tau^2)/\pi$. \begin{table}[thb] \centering \begin{tabular}{|c|c|c|} \hline $(k,l)$ & $\Delta^{(2)}_{kl}(a_\tau)$ & $Q_{kl}(a_\tau)$ \\ \hline (0,0) & $2.0 \pm 0.5$ & $1.08\pm 0.03$ \\ (1,0) & $2.4 \pm 0.7$ & $1.52\pm 0.03$ \\ (2,0) & $2.7 \pm 1.0$ & $1.93\pm 0.02$ \\ (1,1) & $-0.39\pm 0.26$ & $-0.41\pm 0.02$ \\ (1,2) & $0.07\pm 0.06$ & $-0.02\pm 0.01$ \\ \hline \end{tabular} \caption{Numerical values \protect\cite{PP99} of the relevant perturbative expansions for $\alpha_s(M_\tau^2) = 0.35\pm0.02$. } \label{tab:num} \end{table} The perturbative QCD expansions $\Delta^{(2)}_{kl}(a_\tau)$ and $Q_{kl}(a_\tau)$ are known to $O(a_\tau^2)$.
Moreover, the $O(a_\tau^3)$ contributions to $\Delta^{(2)}_{kl}(a_\tau)$ coming from the longitudinal correlator $D^L(s)$ have also been computed. Using the value of the ($\overline{\mbox{\rm MS}}$) strong coupling determined by the total hadronic $\tau$ decay width \cite{PIC97}, $\alpha_s(M_\tau^2) = 0.35 \pm0.02$, one gets the numerical results shown in Table~\ref{tab:num} \cite{PP99}. The rather large theoretical uncertainties of \be \Delta^{(2)}_{kl}(a_\tau)\equiv {1\over 4} \,\left\{ 3 \,\Delta^{L+T}_{kl}(a_\tau) + \Delta^{L}_{kl}(a_\tau) \right\} \, , \ee have their origin in the bad perturbative behaviour of the longitudinal contribution. The most important higher--order corrections can be resummed \cite{PP98}, using the renormalization group, but the resulting ``improved'' series is still rather badly behaved. For instance, $$ \Delta^{L}_{00}(0.1) = 1.5891 + 1.1733 + 1.1214 + 1.2489 + \cdots $$ which has $O(a^2)$ and $O(a^3)$ contributions of the same size. On the contrary, the $J=L+T$ series converges very well: $$ \Delta^{L+T}_{00}(0.1) = 0.7824 + 0.2239 + 0.0831 + \cdots $$ Fortunately, the longitudinal contribution to $\Delta^{(2)}_{kl}(a_\tau)$ is parametrically suppressed by a factor $1/3$. Thus, the combined final expansion still looks acceptable for the first few terms: \beqn \lefteqn{\Delta^{(2)}_{00}(0.1) = 0.9840 + 0.4613 + 0.3427}&& \no\\ &&\mbox{} + \left( 0.3122 - 0.000045\, c_3^{L+T}\right) + \cdots \eeqn Nevertheless, after the third term the series appears to be dominated by the longitudinal contribution, and the bad perturbative behaviour becomes manifest again. Taking the unknown $O(a^3)$ coefficient of the $D^{L+T}(s)$ perturbative series as $c_3^{L+T} \sim c_2^{L+T}\,\left(c_2^{L+T}/c_1^{L+T}\right)\approx 323$, the fourth term becomes $0.298$; i.e., only a 5\% reduction. Since the longitudinal series seems to reach an asymptotic behaviour at $O(a^3)$, the central values of $\Delta^{(2)}_{kl}(a_\tau)$ have been evaluated adding to the fully known $O(a^2)$ result one half of the longitudinal $O(a^3)$ contribution. To estimate the associated theoretical uncertainties, we have taken one half of the size of the last known perturbative contribution plus the variation induced by a change of the renormalization scale in the range $\xi\in [0.75,2]$ (added in quadrature). The SU(3)--breaking condensate $\delta O_4$ could be extracted from the $\tau$ decay data, together with $m_s$, through a combined fit of different $\delta R_\tau^{kl}$ moments. However, this is not possible with the present experimental accuracy. We can estimate the value of $\delta O_4$ using the constraints provided by chiral symmetry. To lowest order in Chiral Perturbation Theory, $\delta O_4$ is fully predicted in terms of the pion decay constant and the pion and kaon masses: $\delta O_4 \simeq -f_\pi^2\, \left(m_K^2 - m_\pi^2\right) \simeq -1.9\times 10^{-3} \:\mbox{\rm GeV}^4 $. Taking into account the leading $O(p^4)$ corrections through the ratio of quark vacuum condensates \cite{NAR89,DJN89} \be v_s \,\equiv\, \frac{\langle 0 | \overline s s | 0 \rangle} {\langle 0 | \overline d d | 0\rangle}\, =\, 0.8 \pm 0.2 \, , \ee one gets the improved estimate, \beqn\label{eq:O4value} \delta O_4 &\!\! \simeq &\!\! - \frac{m_s}{2 \hat m}\, (v_s -\epsilon_d)\, \, f_\pi^2 \, m_\pi^2 \nonumber \\ &\!\!\simeq & \!\! -(1.5\pm 0.4)\times 10^{-3} \:\mbox{\rm GeV}^4 \, , \eeqn where we have used the known quark mass ratio \cite{LEU96} $m_s/\hat m = 24.4\pm 1.5$.
Strictly speaking, $\delta O_4$ and $v_s$ are scale dependent. This dependence cancels with the $O(m^4)$ contributions \cite{PP99} and is then of $O(p^8)$ in the chiral expansion. The numerical effect is smaller than the accuracy of (\ref{eq:O4value}) and has been neglected together with the tiny $O(m^4)$ corrections. \section{NUMERICAL ANALYSIS} \label{sec:numerics} \begin{table}[t] \centering \begin{tabular}{|c|c|c|} \hline $(k,l)$ & $\delta R_\tau^{kl}$ & $m_s(M_\tau^2)$ (MeV) \\ \hline (0,0) & $0.394\pm 0.137$ & $143\pm31\pm18$\\ (1,0) & $0.383\pm 0.078$ & $121\pm17\pm18$\\ (2,0) & $0.373\pm 0.054$ & $106\pm12\pm21$\\ (1,1) & $0.010\pm 0.029$ & --\\ (1,2) & $0.006\pm 0.015$ & --\\ \hline \end{tabular} \caption{Measured \protect{\cite{ALEPH99}} moments $\delta R_\tau^{kl}$ and corresponding $m_s(M_\tau^2)$ values \protect{\cite{PP99}}. The first error is experimental and the second theoretical.} \label{tab:res} \end{table} The ALEPH collaboration has measured \cite{ALEPH99} the weighted differences $\delta R_\tau^{kl}$ for five different values of $(k,l)$. The experimental results are shown in Table~\ref{tab:res}, together with the corresponding $m_s(M_\tau^2)$ values. Since the QCD counterparts to the moments $(k,l)=$ (1,1) and (1,2) have theoretical uncertainties larger than 100\%, we only use the moments $(k,l)=$ (0,0), (1,0), and (2,0). The experimental errors quoted in Table~\ref{tab:res} do not include the present uncertainty in $|V_{us}|$. To estimate the corresponding error in $m_s$, we take the following numbers from ALEPH: $R^{00}_{\tau, V+A}=3.486\pm0.015$, $R^{00}_{\tau, S}=0.1610\pm0.0066$, $|V_{ud}|=0.9751\pm0.0004$ and $|V_{us}|=0.2218\pm0.0016$. This gives $\delta R_\tau^{00}=0.394\pm0.135\pm0.047$, where the second error comes from the uncertainty in $|V_{us}|$ and translates into an additional uncertainty of 10 MeV in the strange quark mass. We assign the same $|V_{us}|$ uncertainty to the other two moments, for which the ALEPH collaboration does not quote the separate values of $R^{kl}_{\tau, V+A}$ and $R^{kl}_{\tau, S}$. Taking the information from the three moments into account, we get our final result \cite{PP99}: \beqn\label{eq:result} m_s(M_\tau^2) &\!\! = &\!\! (119 \pm 12 \pm 18 \pm 10) \: {\rm MeV} \no\\ &\!\! = &\!\! (119 \pm 24 ) \: {\rm MeV} \, . \eeqn The first error is experimental, the second reflects the QCD uncertainty and the third one is from the present uncertainty in $|V_{us}|$. Since the three moments are highly correlated, we have taken the smaller individual errors as errors of the final average. Our determination (\ref{eq:result}) corresponds to \be m_s(1\, {\rm GeV}^2) = (164 \pm 33 ) \:{\rm MeV} \ee and \be m_s(4\, {\rm GeV}^2) = (114 \pm 23 ) \: {\rm MeV} \, . \ee \section{COMPARISON WITH ALEPH} The ALEPH collaboration has performed a phenomenological analysis of the $\delta R_\tau^{kl}$ moments in Table~\ref{tab:res}, which results in larger $m_s$ values \cite{ALEPH99}: $$ m_s(M_\tau^2) = \left\{ \begin{array}{l} 149 {}^{+24_{\rm exp}}_{-30_{\rm exp}} {}^{+21_{\rm th}}_{-25_{\rm th}} \pm 6_{\rm fit} \: {\rm MeV} , \\ 176 {}^{+37_{\rm exp}}_{-48_{\rm exp}} {}^{+24_{\rm th}}_{-28_{\rm th}} \pm 8_{\rm fit} \pm 11_{J=0} \: {\rm MeV} . \end{array} \right. $$ To derive these numbers, ALEPH has used our published results in refs.~\cite{PP98}, \cite{BNP92} and \cite{DP92}. Since we have analyzed the same data with improved theoretical input \cite{PP99}, it is worthwhile to understand the origin of the numerical difference.
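To see concretely how the value of $\Delta^{(2)}_{kl}$ drives the extracted mass, the central value of the $(0,0)$ entry of Table~\ref{tab:res} can be reproduced by inverting Eq.~(\ref{Delta2a}). The following short Python sketch is our illustration only, using central values and no error propagation:
\begin{verbatim}
import numpy as np

# Central values from the text and tables above
M_tau, S_EW = 1.777, 1.0194    # GeV; electroweak factor
dR00 = 0.394                   # measured (0,0) moment
Delta2_00, Q_00 = 2.0, 1.08    # perturbative expansions, (k,l) = (0,0)
dO4 = -1.5e-3                  # SU(3)-breaking condensate, GeV^4
eps_d = 0.053                  # m_d / m_s

# Invert: dR = 24 S_EW [ (m_s^2/M^2)(1 - eps_d^2) Delta - 2 pi^2 (dO4/M^4) Q ]
ms2 = (dR00 / (24 * S_EW) + 2 * np.pi**2 * dO4 * Q_00 / M_tau**4) \
      * M_tau**2 / ((1 - eps_d**2) * Delta2_00)
print(f"m_s(M_tau^2) = {1e3 * np.sqrt(ms2):.0f} MeV")   # -> ~143 MeV
\end{verbatim}
Lowering $\Delta^{(2)}_{00}$ in this relation directly raises the extracted $m_s$, which is precisely the effect of the truncation discussed next.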
ALEPH makes a global fit to the five measured moments, including the last two, which are unreliable (100\% theoretical errors). In view of the asymptotic behaviour of $\Delta_{kl}^L(a_\tau)$, they truncate this perturbative series at $O(a_\tau)$, neglecting the known and positive $O(a_\tau^2)$ and $O(a_\tau^3)$ contributions. Thus, they use a smaller value of $\Delta_{kl}^{(2)}(a_\tau)$ and, therefore, get a larger result for $m_s$ (the first value above) because the sensitivity to this parameter is through the product $m_s^2(M_\tau^2) \Delta_{kl}^{(2)}(a_\tau)$. Since they put rather conservative errors, their result is nevertheless consistent with ours. ALEPH has made a second analysis subtracting the $J=L$ contribution. Unfortunately, only the pion and kaon contributions are known. Using the positivity of the longitudinal spectral functions, these pole contributions provide lower bounds on $\mbox{\rm Im}\Pi^L_{ud}(s)$ and $\mbox{\rm Im}\Pi^L_{us}(s)$, which translate into lower limits on the corresponding $J=L$ contribution to $\delta R_\tau^{kl}$. Subtracting this contribution, one gets upper bounds on $\delta R_{\tau,L+T}^{kl}$ \cite{PP99} which imply $m_s(M_\tau^2) < 202$ MeV \cite{PP99}. However, besides subtracting the pion and kaon poles, ALEPH makes a tiny ad-hoc correction to account for the remaining unknown $J=L$ contribution, and quotes the resulting number as an $m_s(M_\tau^2)$ determination [the second value above]. Since they add a generous uncertainty, their number does not disagree with ours. However, it is actually an upper bound on $m_s(M_\tau^2)$ and not a determination of this parameter. \section*{ACKNOWLEDGEMENTS} We have benefited from useful discussions with Shaomin Chen, Michel Davier, Andreas H\"ocker and Edwige Tournefier. This work has been supported in part by the European Union TMR Network EURODAPHNE (Contract No. ERBFMX-CT98-0169), by CICYT, Spain, under grants No. AEN-96/1672 and PB97-1261, and by Junta de Andaluc\'{\i}a, Grant No. FQM-101. \vfill
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Noisy intermediate scale quantum (NISQ) computers are near-term systems that have up to a few hundred non-error-corrected qubits that suffer from decoherence and noise, limiting the overall quantum circuit depth \cite{Preskill2018}. Some of the main limitations of NISQ systems include limited qubit number, limited qubit connectivity, and hardware-specific quantum gate alphabets \cite{Khatri2019}. Algorithms must be carefully optimized to small quantum circuit depths in order to run on NISQ systems \cite{Khatri2019, Zou2020}. Hybrid classical-quantum variational algorithms are promising implementations for NISQ systems and include the quantum approximate optimization algorithm (QAOA) \cite{Farhi2014}, and Variational Quantum Eigensolver (VQE) \cite{Peruzzo2014, Moll2018}. These variational algorithms use classical compute resources to enable the execution of more complicated algorithms on the small quantum compute resources available today. Implementation of a variational algorithm typically involves the minimization of a cost function in order to obtain the optimal variational parameters to generate the desired quantum state. The purpose of the variational algorithm described in this work is to construct a quantum state called a thermofield double (TFD) state in the transverse field Ising model (TFIM), which is important in understanding thermal phase transitions in condensed matter systems. TFD states are entangled pure states between two systems which yield a thermal state when one of the systems is traced out \cite{Nielsen2010, Maziero2017}. We develop an engineering approach to construct an optimal cost function for implementation on a small qubit system, and show that cost functions engineered using our approach can result in more accurate outcomes on real hardware systems. This higher accuracy will yield better experimental results in state-of-the-art NISQ systems, as evidenced by the implementation of a simpler variant of a similarly engineered cost function to generate TFD states in an actual superconducting quantum processor \cite{Sagastizabal2020, Sagastizabal2020b}. \section{Thermofield Double States Generation \label{sec:tfd_generation}} \subsection{System Definition} The Hamiltonian of the TFIM for a one-dimensional ring of $N$ qubits is given by \begin{align} \mathcal{H}_\textrm{TFIM} &= \sum_{i=1}^N \textsf{Z}_i \textsf{Z}_{i+1} + g \sum_{i=1}^N \textsf{X}_i = \mathcal{H}_\textsf{ZZ} + g \mathcal{H}_\textsf{X} \label{eq:H_TFIM} \end{align} where the transverse field direction was chosen for convenience of implementation in superconducting systems (compare to \cite{Zhu2019} with $\textsf{X} \leftrightarrow \textsf{Z}$). We use natural units (\textit{i.e.} $\hbar = k_b = 1$) throughout the manuscript. Consider the special case of generating Thermofield Double (TFD) states in a four-qubit system, which was recently demonstrated experimentally using superconducting qubits \cite{Sagastizabal2020}. In this case, the intra-system Hamiltonian reduces to \begin{align} \mathcal{H}_\textrm{intra} &= \textsf{Z}_1 \textsf{Z}_2 + g (\textsf{X}_1 + \textsf{X}_2) \label{eq:H_intra} \end{align} which describes interactions within each of the two subsystems ($A$ and $B$) with $g$ being the transverse field strength. The ultimate objective is to have the full system undergo unitary evolution such that it will yield a thermal state (or Gibbs state) on subsystem $A$, if it is considered in isolation.
In practice, this can be studied by performing a partial trace over subsystem $B$ \cite{Nielsen2010}. Conversely, this technique can be viewed as a purification of the Gibbs state, resulting in a TFD in the full system \cite{Wu2019}. The TFD state $\ket{\xi}$ at an inverse temperature $\beta=T^{-1}$ is thus defined as \begin{align} \ket{\xi (\beta)} \equiv \frac{1}{\sqrt{\mathcal{N}}} \exp \left(- \frac{\beta}{2} \mathcal{H}_A \right) \ket{\xi(0)} \label{eq:tfd_definition} \end{align} where $\mathcal{N}$ is a normalization factor, and \begin{align} \mathcal{H}_A = \textsf{ZZ}_{A} + g \textsf{X}_{A} = \textsf{Z}_{A1} \textsf{Z}_{A2} + g (\textsf{X}_{A1} + \textsf{X}_{A2}). \end{align} $\ket{\xi(0)}$ is the TFD state at $\beta=0$ or $T \rightarrow \infty$, which should be a pairwise maximally entangled Bell state (since tracing subsystem $B$ out of this full state yields a maximally-mixed state for subsystem $A$). For simulation purposes we thus set the initial state to be \begin{align} \ket{\xi{(0)}} &= \textsf{CNOT}_{24} \cdot \textsf{CNOT}_{13} \cdot \textsf{H}_2 \cdot \textsf{H}_1 \ket{0000} \nonumber \\ &= \frac{1}{2} \left( \ket{0000} + \ket{0101} + \ket{1010} + \ket{1111} \right) \label{eq:init_state} \end{align} where the qubits are labeled as $\ket{ A_1, A_2, B_1, B_2}$ and qubits $\left\{ A_i, B_i \right\}$ are pairwise maximally entangled through applied unitaries. \subsection{System Evolution} The protocol for generation of TFD states is described in \cite{Zhu2019}, and we follow a similar path to invoke the variational ansatz motivated by the quantum alternating operator ansatz \cite{Hadfield2019}. This involves alternating between the subsystem Hamiltonian $\mathcal{H}_A + \mathcal{H}_B$ and the entangling Hamiltonian $\mathcal{H}_{AB}$. Here the subsystem $B$ Hamiltonian is defined as \begin{align} \mathcal{H}_B = \textsf{ZZ}_{B} + g \textsf{X}_{B} = \textsf{Z}_{B1} \textsf{Z}_{B2} + g (\textsf{X}_{B1} + \textsf{X}_{B2}) \end{align} \noindent and the entangling Hamiltonian is defined as \begin{align} \mathcal{H}_\mathit{AB} &= \textsf{XX}_{AB} + \textsf{ZZ}_{AB} \nonumber \\ &= \textsf{XX}_{AB1} + \textsf{XX}_{AB2} + \textsf{ZZ}_{AB1} + \textsf{ZZ}_{AB2} \nonumber \\ &= \textsf{X}_{A1} \textsf{X}_{B1} + \textsf{X}_{A2} \textsf{X}_{B2} + \textsf{Z}_{A1} \textsf{Z}_{B1} + \textsf{Z}_{A2} \textsf{Z}_{B2}. \label{eq:H_inter} \end{align} Near-term quantum computing systems have stringent limits on coherent operations, so it is desirable to minimize the number of steps required for a given workload \cite{Zou2020}. Hence, here we focus on increasing the efficiency of single-step TFD generation under the given Hamiltonians (\cref{eq:H_intra,eq:H_inter}). With our restricted model, the quantum circuit for TFD generation is described by \cref{eq:evolution} with $\left\{ \alpha_1, \alpha_2, \gamma_1, \gamma_2 \right\}$ as variational parameters.
\small \begin{align} \begin{split} \ket{\psi_{\vec{\alpha}, \vec{\gamma}}} &= \ket{\psi \left(\alpha_1, \alpha_2, \gamma_1, \gamma_2 \right)} \\ &= e^{i \alpha_2 \textsf{ZZ}_{AB}} e^{i \alpha_1 \textsf{XX}_{AB}} e^{i \gamma_2 \left(\textsf{ZZ}_A + \textsf{ZZ}_B \right) } e^{i \gamma_1 \left(\textsf{X}_A + \textsf{X}_B \right)} \ket{\xi{(0)}} \label{eq:evolution} \end{split} \end{align} \normalsize \section{Comparison of Cost Functions for TFD States' Generation} To obtain the optimal variational parameters $\left\{ \alpha_1, \alpha_2, \gamma_1, \gamma_2 \right\}$ experimentally, a quantum computer is used for system evolution while a classical computer is used for cost function evaluation based on the measurements returned by the quantum computer. The ultimate success of the protocol depends on the effectiveness of the cost function evaluation as well as the capability of the quantum circuit to accurately generate the desired TFD state. \subsection{Error in fidelity between generated and target states} For numerical convenience it is typical to utilize the error in fidelity $\mathcal{E}$ with respect to the target as the cost function during optimization \cite{Wu2019}, \begin{align} \mathcal{E} &= 1- \left| \bra{\xi(\beta)} \ket{\psi_{\vec{\alpha}, \vec{\gamma}}} \right|^2. \label{eq:overlap_fidelity} \end{align} Evaluation of the latter expression requires access to the actual wavefunction of the generated state. Experimental determination of the wavefunction is typically limited to either estimation based on extensive tomographic reconstruction \cite{Altepeter2004}, or an array of sophisticated weak measurements \cite{Lundeen2011}. Thus it is undesirable in its present form for evaluation in an actual hybrid quantum-classical experimental system. In principle, it is possible to modify \cref{eq:overlap_fidelity} to use the density matrix of the generated state, but this is also cumbersome, for reasons discussed further in the next section. \subsection{Free energy of the system \label{ssec:f_a}} An alternative to \cref{eq:overlap_fidelity} is using the free energy of the system as the cost function during optimization \cite{Wu2019}. In this case, the free energy $F_A$ is calculated on the reduced density matrix for subsystem $A$ as \begin{align} F_A (T; \vec{\alpha}, \vec{\gamma}) &= E_A - T S_A \nonumber \\ &= \Tr \left[ \varsigma_A H_A \right] + T \Tr \left[ \varsigma_A \log \varsigma_A \right], \label{eq:free_energy} \end{align} where $E_A$, $S_A$, $\varsigma_A = \Tr_B \ketbra{\psi_{\vec{\alpha}, \vec{\gamma}}}$ are the energy, von Neumann entropy, and the reduced density operator for subsystem $A$, respectively, and $T$ is the system temperature. In this case quantum state tomography (QST) \cite{Altepeter2004} of subsystem $A$ is necessary to calculate $F_A$. Recently, methods to approximate the von Neumann entropy have also been proposed \cite{Chowdhury2020}. Typically, QST of a system requires a complete set of measurements whose number is set by the number of unknowns for the given system size \cite{James2001}. Given a system of $n_\textrm{q}$ qubits, the number of unique density matrix elements is given by $\left(2^{2n_\textrm{q}} - 1 \right)$. Thus, this is the total number of unique measurements required to fully characterize the qubit system (\textit{e.g.} for a two-qubit subsystem, this gives 15 measurements). As system size increases, QST becomes prohibitively expensive and renders free energy $F_A$ a poor choice for large-scale optimization problems.
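Although QST makes $F_A$ expensive experimentally, in a classical simulation of this four-qubit system everything above can be evaluated exactly. The following numpy/scipy sketch is our illustration; the helper names and the qubit ordering $\ket{A_1 A_2 B_1 B_2}$ are our own conventions, not from an established package:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def op(P, pos, n=4):
    """Embed single-qubit operator P at qubit `pos` of an n-qubit register."""
    mats = [I2] * n
    mats[pos] = P
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

g = 1.0
H_A = op(Z, 0) @ op(Z, 1) + g * (op(X, 0) + op(X, 1))    # intra-system, A
H_B = op(Z, 2) @ op(Z, 3) + g * (op(X, 2) + op(X, 3))    # intra-system, B
XX_AB = op(X, 0) @ op(X, 2) + op(X, 1) @ op(X, 3)
ZZ_AB = op(Z, 0) @ op(Z, 2) + op(Z, 1) @ op(Z, 3)
H_AB = XX_AB + ZZ_AB                                     # entangling Hamiltonian

# |xi(0)>: pairwise Bell pairs between {A_i, B_i}
xi0 = np.zeros(16, dtype=complex)
for ket in ("0000", "0101", "1010", "1111"):
    xi0[int(ket, 2)] = 0.5

def tfd(beta):
    """Ideal TFD state at inverse temperature beta."""
    v = expm(-0.5 * beta * H_A) @ xi0
    return v / np.linalg.norm(v)

def ansatz(a1, a2, g1, g2):
    """Single-step variational state of the evolution equation above."""
    return (expm(1j * a2 * ZZ_AB) @ expm(1j * a1 * XX_AB)
            @ expm(1j * g2 * (op(Z, 0) @ op(Z, 1) + op(Z, 2) @ op(Z, 3)))
            @ expm(1j * g1 * (op(X, 0) + op(X, 1) + op(X, 2) + op(X, 3)))
            @ xi0)

def rho_A(psi):
    """Reduced density matrix of subsystem A (trace over B)."""
    m = psi.reshape(4, 4)      # row index = A basis, column index = B basis
    return m @ m.conj().T

def free_energy_A(psi, T):
    """F_A = E_A - T S_A, evaluated exactly from the reduced state."""
    r = rho_A(psi)
    HA_2q = np.kron(Z, Z) + g * (np.kron(X, I2) + np.kron(I2, X))
    E = np.trace(r @ HA_2q).real
    w = np.linalg.eigvalsh(r)
    S = -sum(p * np.log(p) for p in w if p > 1e-12)
    return E - T * S
\end{verbatim}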
\subsection{Hypothesized cost function based on correlator expectation values} To formulate a more experiment-friendly expression, it is possible to hypothesize a cost function based on the form of $F_A$ as described in \cref{ssec:f_a}. This would involve substituting alternative expressions for $E_A$ and $S_A$ that require fewer measurements than QST. It is straightforward to infer that $F_A$ will be dominated by $S_A$ at higher temperatures, and by $E_A$ at lower temperatures. Given the simplicity of the TFIM Hamiltonian, the expression for the low temperature regime is easily obtained as follows, \begin{align} E_A &= \expval{\mathcal{H}_A} = \expval{\textsf{Z}_{A1} \textsf{Z}_{A2} + g (\textsf{X}_{A1} + \textsf{X}_{A2})} \nonumber \\ &= \expval{\textsf{Z}_{A1} \textsf{Z}_{A2}} + g \expval{\textsf{X}_{A1}} + g \expval{\textsf{X}_{A2}} \label{eq:E_A} \end{align} where the $\expval{\cdot}$ notation indicates the expectation value of the relevant correlator with respect to the evolved wavefunction $\ket{\psi_{\vec{\alpha}, \vec{\gamma}}}$ in \cref{eq:evolution}. The reduction in the number of terms is not strictly due to the use of correlators, but it is rather the underlying symmetry of the problem that allows us to determine $E_A$ with only three system measurements for subsystem $A$. However, this method of constructing the cost function allows an explicit representation using tangible measurements for the hybrid quantum-classical optimization algorithm. There is no straightforward method to derive an expression for $S_A$ based on correlator expectation values as before. However, it is trivial to verify that $\ket{\xi(0)}$ is the ground state of the negative of the inter-system Hamiltonian, $-\mathcal{H}_{AB}$. Given that we expect $S_A$ to dominate at $T\rightarrow \infty$ and $\ket{\xi(0)}$ is the infinite temperature TFD state, it is natural to hypothesize an approximate form of $S_A$ to be generalized to $\mathcal{H}_{AB}$ when considering the full system (\textit{i.e.} both $A$ and $B$) \cite{Premaratne2020}. This results in the following expression for a cost function $\mathcal{C}_0 (T)$, which is more amenable to practical implementation in a quantum-classical optimization algorithm. \begin{align} \mathcal{C}_0 (T; \vec{\alpha}, \vec{\gamma}) &= \mel**{\psi_{\vec{\alpha}, \vec{\gamma}}}{\mathcal{H}_A + \mathcal{H}_B - T \mathcal{H}_{AB}}{\psi_{\vec{\alpha}, \vec{\gamma}}}, \label{eq:C_0} \end{align} where the system temperature $T$ is a parameter, and $\left\{ \alpha_1, \alpha_2, \gamma_1, \gamma_2 \right\}$ are optimization variables. Here, we have generalized the cost function to the full system and included $\mathcal{H}_B$ in the energy of the system at low temperatures. \subsection{Free Energy vs. Hypothesized Cost Function \label{ssec:FA_vs_C0}} When the strength of the transverse field is set to $g=1$, a critical point between antiferromagnetic and paramagnetic quantum phases is expected \cite{Bonfim2019, Pfeuty1970}. Hence, for cost function performance comparison purposes, we will primarily explore the case of $g=1$. The case of $g \neq 1$ will be considered for completeness in \cref{app:g_neq_1}. We simulate TFD state generation using Differential Evolution, which is a global optimization algorithm \cite{Storn1997} supported by Wolfram Mathematica for non-linear optimization. The optimization is performed over a wide inverse temperature range of six orders of magnitude to ensure complete coverage.
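A minimal version of this optimization loop can be scripted with the helpers above, using scipy's differential evolution in place of the Mathematica implementation; the bounds, seed, and tolerance below are our own choices, shown for illustration:
\begin{verbatim}
from scipy.optimize import differential_evolution

def cost_C0(params, T):
    """Hypothesized cost <H_A + H_B - T*H_AB> on the ansatz state."""
    psi = ansatz(*params)
    return (psi.conj() @ ((H_A + H_B - T * H_AB) @ psi)).real

# Global optimization over the four variational angles at one temperature
T = 1.0
res = differential_evolution(cost_C0, bounds=[(-np.pi, np.pi)] * 4,
                             args=(T,), seed=0, tol=1e-8)
psi_opt = ansatz(*res.x)   # circuit-generated state at the optimum
\end{verbatim}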
During optimization we minimize the cost functions corresponding to $F_A$ and $\mathcal{C}_0$ separately, and see that there is excellent agreement between them for extreme low and high temperatures (see \cref{fig:C_0-vs-F_A}). We have chosen trace distance $\mathcal{T}$ and fidelity $\mathcal{F}$ as defined below \cite{Nielsen2010}, as proximity measures comparing the ideal TFD state and the optimally generated state with the different cost functions. \begin{align} \mathcal{T} &= \frac{1}{2} \Tr \left[ \sqrt{(\rho_A - \sigma_A)^{\dagger} \cdot (\rho_A - \sigma_A)} \right] \\ \mathcal{F} &= \left( \Tr \sqrt{\sqrt{\rho_A} \sigma_A \sqrt{\rho_A}} \right)^2 \end{align} Here, $\rho = \ketbra{\xi(\beta)}$ is the density matrix corresponding to the ideal TFD state, $\sigma = \ketbra{\Psi(\beta)}$ represents the density matrix for the circuit-generated state $\ket{\Psi(\beta)}$ utilizing the relevant cost function, and $\rho_A / \sigma_A$ are corresponding subsystem states after tracing out $B$, respectively. \begin{figure}[tb!] \includegraphics[width=\columnwidth]{C_0-vs-F_A.pdf} \caption{Cost function performance comparison between $F_A$ and $\mathcal{C}_0$, using (a) fidelity $\mathcal{F}$ and (b) trace distance $\mathcal{T}$ as proximity measures.} \label{fig:C_0-vs-F_A} \end{figure} For perfect TFD state preparation, we would expect $\mathcal{T} = 0$ and $\mathcal{F} = 1$. However, in the range $10^{-2} < \beta < 10^{2}$, the performance of $\mathcal{C}_0$ as a cost function is poor and demonstrates difficulty in reaching the target TFD state. This is somewhat expected, given that we hypothesized the form of the simple cost function based on extreme high/low temperature behavior. Also note that for $\beta \rightarrow \infty$, $\mathcal{F} \neq 1$ and $\mathcal{T} \neq 0$, indicating that even while using free energy as the cost function we do not construct the ideal TFD state. This is also expected since $T \rightarrow 0$ is the most difficult regime to generate the TFD state based on the given protocol \cite{Wu2019}. This indicates that the depth of the circuit is most likely insufficient to construct a better state approaching the TFD state. However, depending on the measure used, we find that better approximations to the TFD states are possible using different engineered cost functions. \section{Engineering Improved Cost Functions} \subsection{Enhancing the hypothesized cost function \label{ssec:mod_corr}} Given the shortcomings of $\mathcal{C}_0$ at intermediate $\beta$ values, we began our new approach to constructing a better cost function by generalizing \cref{eq:C_0} for $g=1$, adding coefficients to the expression containing correlators as follows, \begin{align} \mathcal{C}_1(T; \vec{\alpha}, \vec{\gamma}) = \mel**{\psi_{\vec{\alpha}, \vec{\gamma}}}{\textsf{c}_1}{\psi_{\vec{\alpha}, \vec{\gamma}}} , \end{align} where \begin{align} \textsf{c}_1 (\zeta, \tau) &= \textsf{X}_{A} + \textsf{X}_{B} + \zeta \left( \textsf{ZZ}_{A} + \textsf{ZZ}_{B} \right) \nonumber \\ &\hspace{7em} - T^\tau \left(\textsf{ZZ}_{AB} + \textsf{XX}_{AB}\right). \end{align} Here $\zeta$ and $\tau$ are parameters which we optimize to find a cost function $\mathcal{C}_1$ that can yield better approximations to TFD states across the full inverse temperature range.
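Both proximity measures and the enhanced cost are straightforward to script with the same helpers; the sketch below is again our illustration (scipy.linalg.sqrtm for the matrix square roots; the default $(\zeta,\tau)$ anticipate the sweep result reported below):
\begin{verbatim}
from scipy.linalg import sqrtm

def trace_distance(r, s):
    d = r - s
    return 0.5 * np.trace(sqrtm(d.conj().T @ d)).real

def fidelity(r, s):
    sr = sqrtm(r)
    return np.trace(sqrtm(sr @ s @ sr)).real ** 2

def cost_C1(params, T, zeta=1.6, tau=1.48):
    """Enhanced cost <c_1(zeta, tau)> on the ansatz state."""
    psi = ansatz(*params)
    c1 = (op(X, 0) + op(X, 1) + op(X, 2) + op(X, 3)
          + zeta * (op(Z, 0) @ op(Z, 1) + op(Z, 2) @ op(Z, 3))
          - T**tau * H_AB)
    return (psi.conj() @ (c1 @ psi)).real

# Example comparison of subsystem states at some beta = 1/T
beta = 1.0
r, s = rho_A(tfd(beta)), rho_A(psi_opt)
print(trace_distance(r, s), fidelity(r, s))
\end{verbatim}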
For simplicity and clarity, instead of performing nested optimizations over $\left\{ \zeta, \tau \right\}$ and $\left\{\vec{\alpha}, \vec{\gamma} \right \}$, we execute TFD generation for a range of $\zeta$ and $\tau$ values (see \cref{app:minimization_quantity} for details). By varying $1.4 < \zeta < 1.9$, and $1.2 < \tau < 1.7$, and evaluating the minimization quantity of interest $\Xi$, we see that the best agreement between $\ket{\xi(\beta)}$ and $\ket{\Psi(\beta)}$ is obtained for $\zeta=1.6$ and $\tau = 1.48$ (see \cref{fig:mod_corr}). The improvements from using $\mathcal{C}_1$ are discussed in \cref{ssec:cf_comparison}. \begin{figure}[tb!] \includegraphics[width=\columnwidth]{mod_corr.pdf} \caption{Two-dimensional sweep plot of $\left| \Xi \right|$ vs. $\zeta$ and $\tau$ to enhance the hypothesized cost function. The minimum value for $\left|\Xi \right|$ is observed for $\zeta=1.6$ and $\tau = 1.48$.} \label{fig:mod_corr} \end{figure} \subsection{Analytically obtaining a better cost function \label{ssec:pruned_elements}} To further the work towards engineering a better cost function, we used an alternative approach based on the closeness of density matrix elements to generate a cost function that shows significant improvement over both the hypothesized cost function $\mathcal{C}_0$, and its enhanced version $\mathcal{C}_1$. In this case, we studied the characteristics of the density matrices of the target TFD state, $\rho$, and the single-step circuit-generated state, $\sigma$. Note that we are not tracing out subsystem $B$ of these matrices to obtain the thermal state. Instead we directly compare the density matrices of the target and generated states. We also relax the assumption of $g=1$ and keep $g$ as a parameter throughout the analysis. First, we observe the redundancies present in the ideal TFD state and obtain 15 unique real elements for $\rho$: \begin{align} \begin{split} \mathcal{R} &= \{ \rho_{00}, \rho_{01}, \rho_{03}, \rho_{05}, \rho_{06}, \rho_{11}, \rho_{13}, \rho_{15}, \rho_{16}, \\ &\hspace{2em} \rho_{33}, \rho_{35}, \rho_{36}, \rho_{55}, \rho_{56}, \rho_{66} \} \end{split} \end{align} where the density matrix elements are labeled consistently with the notation in \cref{eq:init_state} (see \cref{app:dm} for details). Similarly, we observe the redundancies and Hermiticity in $\sigma$ to obtain 10 unique complex elements for off-diagonal elements, and 5 unique real elements for the diagonal elements: \begin{align} \begin{split} \mathcal{S} &= \{ \sigma_{00}, \sigma_{01}, \sigma_{03}, \sigma_{05}, \sigma_{06}, \sigma_{11}, \sigma_{13}, \sigma_{15}, \sigma_{16}, \\ &\hspace{2em} \sigma_{33}, \sigma_{35}, \sigma_{36}, \sigma_{55}, \sigma_{56}, \sigma_{66} \} \end{split} \end{align} Inspired by trial optimizations, we find that $\sigma$ is explicitly symmetrized by choosing particular values for the inter-system variational parameters $\vec{\alpha}$, \begin{align} \begin{split} \alpha_1 &= \pi/8 \\ \alpha_2 &= \pi/4 \end{split} \end{align} resulting in 14 unique real elements in $\sigma$. With this choice for $\vec{\alpha}$, it is found that $\sigma_{35} = \sigma_{06}$, indicating that this constrained $\sigma$ will be limited in fully matching $\rho$.
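This element bookkeeping is also easy to prototype numerically. The sketch below (our labeling, following the binary convention of Appendix B) fixes $\alpha_1 = \pi/8$ and $\alpha_2 = \pi/4$ and anticipates the pruned cost constructed next; it is an illustration only, not the measurement-based form used in an experiment:
\begin{verbatim}
# (row, col) index pairs for the 15 unique labels, in the order of the list R
LABELS = [(0,0), (0,1), (0,3), (0,5), (0,6), (1,1), (1,3), (1,5), (1,6),
          (3,3), (3,5), (3,6), (5,5), (5,6), (6,6)]

def dm(psi):
    """Full-system density matrix of a pure state."""
    return np.outer(psi, psi.conj())

rho = dm(tfd(0.7))                 # ideal TFD at an arbitrary beta
assert np.allclose(rho, rho.T)     # symmetric, as noted in Appendix B

def cost_C2_prototype(g1, g2, beta):
    """Pruned sum of squared element differences, keeping i in {4, 8, 13},
    i.e. the (0,5), (1,5) and (5,5) elements (our prototype of C_2)."""
    rho_t = dm(tfd(beta))
    sig = dm(ansatz(np.pi / 8, np.pi / 4, g1, g2))
    keep = [LABELS[i - 1] for i in (4, 8, 13)]
    return sum(abs(rho_t[lab] - sig[lab])**2 for lab in keep)
\end{verbatim}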
Following symmetrization, a cost function is constructed explicitly by calculating the sum of the square of differences between the density matrix elements for $\rho$ and $\sigma$: \begin{align} \mathcal{C}_2 = \sum_{i=1}^{15} a_i (r_i - s_i)^2 \label{eq:pruned_cf} \end{align} where $r_i \in \mathcal{R}$, $s_i \in \mathcal{S}$, and $a_i \in \{0,1 \}$ is a weight used to prune elements. We find that choosing \begin{align} a_i = \left\{ \begin{array}{cl} 1 , & i \in \{4, 8, 13\} \\ 0 , & i \in \{1, 2, 3, 5, 6, 7, 9, 10, 11, 12, 14, 15\} \end{array} \right. \end{align} generates a simple cost function which yields good results across the full temperature range and for different $g$ values. For this system of four qubits, we find that it is beneficial to substitute the variational angles in the engineered cost function $\mathcal{C}_2$ with operator expectation values, as was the case in $\mathcal{C}_0$ and $\mathcal{C}_1$. This will enable the experiments to be performed using identical quantum processor measurements, while modifying the classical processor evaluation to improve efficiency. Thus, the non-zero elements for the engineered cost function $\mathcal{C}_2$ are explicitly given by \cref{eq:pruned_elements}. Note that given the symmetric nature of the evolution of the subsystems, it is sufficient to include only the intra-system expectation values for one subsystem. \subsection{Improvements from Engineered Cost Functions\label{ssec:cf_comparison}} We now evaluate the performance of the various cost functions when generating the TFD states for a wide temperature range. Fidelity and trace distances are calculated for the traced out subsystem as described in \cref{ssec:FA_vs_C0}. In \cref{fig:All_CF}, we observe that $\mathcal{C}_1$ yields vastly superior results compared to the original cost function $\mathcal{C}_0$, especially at intermediate temperatures. We also find that $\mathcal{C}_2$ performs significantly better than $\mathcal{C}_1$ at high temperatures. \begin{figure}[tb!] \includegraphics[width=\columnwidth]{All_CF.pdf} \caption{Cost function performance comparison between $F_A$, $\mathcal{C}_0$, $\mathcal{C}_1$, and $\mathcal{C}_2$ using (a) fidelity $\mathcal{F}$ and (b) trace distance $\mathcal{T}$ as proximity measures.} \label{fig:All_CF} \end{figure} For $\beta > 1$, we note that the two measures ($\mathcal{F}$ and $\mathcal{T}$) offer somewhat inconsistent results. When considering trace distance $\mathcal{T}$ as the proximity measure, $\mathcal{C}_1$ and $\mathcal{C}_2$ seem to give better results compared to both $\mathcal{C}_0$ and $F_A$. In \cref{app:g_neq_1} we study a few cases of $g \neq 1$ and find that $\mathcal{C}_2$ outperforms all other cost functions (including $F_A$) irrespective of the proximity measure used. We attribute this anomaly to the definition of the measures themselves, and further study of this aspect is beyond the scope of this work. \section{Conclusion} In this article, we explored different cost functions that can be used during hybrid classical-quantum variational algorithm execution for TFD generation on real qubit systems. NISQ systems are constrained due to imperfect quantum operations with relatively low fidelities, low qubit lifetimes, and small qubit numbers. Hence, it is necessary to implement quantum algorithms in the most effective manner to utilize the available resources to their fullest extent. Our aim was to engineer a cost function which will generate the TFD states in a NISQ four-qubit system, in the most efficient manner.
The constructed cost functions yielded a substantial improvement over the original cost function results (\textit{e.g.} $>80\%$ reduction in relative error for intermediate temperature TFD states). The originally hypothesized cost function $\mathcal{C}_0$ was found to be inadequate in approximating the TFD state at intermediate temperatures. The enhanced cost function $\mathcal{C}_1$ obtained via modifications to the original cost function yielded better results at intermediate temperatures for the quantum critical case of $g=1$. Subsequently, the cost function $\mathcal{C}_2$ constructed based on the closeness of density matrix elements yielded good results for the full temperature range and for the transverse field range $g \in \{-0.1, -0.2, -0.5, 1, 2, 5\}$. The method of improving the cost function to formulate $\mathcal{C}_1$ as discussed in \cref{ssec:mod_corr} is amenable to experimental exploration of cost function generation, and could result in novel methods for obtaining better cost functions for special state generation. This method can be further extended by adding other correlator expectation values in the cost function with coefficients to be found experimentally. As quantum state evolution typically incurs the highest resource cost during the experiment, this method of constructing cost functions could lead to more efficient scaling of variational algorithms to higher qubit numbers. Although \cref{eq:pruned_elements} only included the intra-system expectation values for one subsystem, it is possible to include the other subsystem's measurements when performing an experiment. Judicious choice of the measurements between the different subsystems should allow higher throughput of measurements for faster variational algorithm execution. Although the construction of $\mathcal{C}_2$ is not a scalable technique for higher qubit numbers, excellent results were obtained for all temperature regimes. In the construction of $\mathcal{C}_2$, only 20\% of the density matrix elements were evaluated for closeness, indicating that the encoding of the thermal state is primarily in a few populations and coherences of the TFD state. Studying how this encoding space will scale with qubit number should shed light on methods to improve practical TFD generation. \appendices \crefalias{section}{appsec} \section{Definition of Minimization Quantity of Interest for Improving the hypothesized cost function \label{app:minimization_quantity}} The list of 15 operators considered in \cref{ssec:mod_corr} to compute the minimization quantity of interest is given by, \begin{align} \begin{split} \mathcal{O} &= \left\{ \mathsf{X}_A, \mathsf{X}_B, \mathsf{Y}_A, \mathsf{Y}_B, \mathsf{Z}_A, \mathsf{Z}_B, \right. \\ &\hspace{2em} \mathsf{XX}_A, \mathsf{XX}_B, \mathsf{YY}_A, \mathsf{YY}_B, \mathsf{ZZ}_A, \mathsf{ZZ}_B, \\ &\hspace{2em} \left. \mathsf{XX}_{AB}, \mathsf{YY}_{AB}, \mathsf{ZZ}_{AB}\right\} \end{split} \label{eq:correlator_operators} \end{align} and the range of 55 temperatures used for the optimization is, \begin{align} \begin{split} \mathcal{B} &= \{ 10^{-3} \times \{1, 2, 3, 4, 5, 6, 7, 8, 9\} , \\ &\hspace{2em} 10^{-2} \times \{1, 2, 3, 4, 5, 6, 7, 8, 9\} , \\ &\hspace{2em} 10^{-1} \times \{1, 2, 3, 4, 5, 6, 7, 8, 9\} , \\ &\hspace{2em} 10^{0} \times \{1, 2, 3, 4, 5, 6, 7, 8, 9\} , \\ &\hspace{2em} 10^{1} \times \{1, 2, 3, 4, 5, 6, 7, 8, 9\} , \\ &\hspace{2em} 10^{2} \times \{1, 2, 3, 4, 5, 6, 7, 8, 9\} , \\ &\hspace{2em} 10^3 \}.
\end{split} \end{align} In \cref{ssec:mod_corr}, we calculate the differences in operator expectation values between the ideal TFD state and the circuit-generated state. The total of the absolute differences (for all temperatures) between each ideal and generated state is chosen as the minimization quantity of interest, $\Xi$, for finding optimal $\zeta$ and $\tau$ values: \begin{align} \Xi = \sum_{\beta \in \mathcal{B}} \sum_{o_i \in \mathcal{O}} \left| \ev{o_i}{\xi(\beta)} - \ev{o_i}{\Psi (\beta)} \right| \end{align} where $\ket{\xi(\beta)}$ and $\ket{\Psi(\beta)}$ are the ideal and circuit-generated states, respectively, for each $\beta$ value. This is an expression that is more conducive to experimental implementation as well, depending on the ease of obtaining the various operator expectation values. \section{Density Matrix Analysis for TFD Generation \label{app:dm}} We label the density matrices corresponding to the full system in the binary ordering of the ket occupation states as defined in \cref{sec:tfd_generation}. For example, the relevant elements for $\rho_{00}, \rho_{55}$, and $\rho_{16}$ can be found as follows, \begin{align*} \rho_{00} &= \ketbra{0000} \\ \rho_{55} &= \ketbra{0101} \\ \rho_{16} &= \ketbra{0001}{0110}. \end{align*} In \cref{eq:rho}, each unique label in $\rho$ is assigned when first encountered during enumeration of the matrix elements. Note that this is a symmetric matrix as expected from the definition of the purified thermal state in \cref{eq:tfd_definition}. Conversely, the circuit-generated state $\sigma$ is given by a Hermitian matrix as seen in \cref{eq:sigma}. The non-zero elements in the evaluation of \cref{eq:pruned_cf} are given by the elements in \cref{eq:pruned_elements}. \begin{figure*}[!t] \small \begin{align} \rho = \left( \begin{array}{cccccccccccccccc} \rho_{00} & \rho_{01} & \rho_{01} & \rho_{03} & \rho_{01} & \rho_{05} & \rho_{06} & \rho_{01} & \rho_{01} & \rho_{06} & \rho_{05} & \rho_{01} & \rho_{03} & \rho_{01} & \rho_{01} & \rho_{00} \\ \rho_{01} & \rho_{11} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{15} & \rho_{16} & \rho_{11} & \rho_{11} & \rho_{16} & \rho_{15} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{11} & \rho_{01} \\ \rho_{01} & \rho_{11} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{15} & \rho_{16} & \rho_{11} & \rho_{11} & \rho_{16} & \rho_{15} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{11} & \rho_{01} \\ \rho_{03} & \rho_{13} & \rho_{13} & \rho_{33} & \rho_{13} & \rho_{35} & \rho_{36} & \rho_{13} & \rho_{13} & \rho_{36} & \rho_{35} & \rho_{13} & \rho_{33} & \rho_{13} & \rho_{13} & \rho_{03} \\ \rho_{01} & \rho_{11} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{15} & \rho_{16} & \rho_{11} & \rho_{11} & \rho_{16} & \rho_{15} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{11} & \rho_{01} \\ \rho_{05} & \rho_{15} & \rho_{15} & \rho_{35} & \rho_{15} & \rho_{55} & \rho_{56} & \rho_{15} & \rho_{15} & \rho_{56} & \rho_{55} & \rho_{15} & \rho_{35} & \rho_{15} & \rho_{15} & \rho_{05} \\ \rho_{06} & \rho_{16} & \rho_{16} & \rho_{36} & \rho_{16} & \rho_{56} & \rho_{66} & \rho_{16} & \rho_{16} & \rho_{66} & \rho_{56} & \rho_{16} & \rho_{36} & \rho_{16} & \rho_{16} & \rho_{06} \\ \rho_{01} & \rho_{11} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{15} & \rho_{16} & \rho_{11} & \rho_{11} & \rho_{16} & \rho_{15} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{11} & \rho_{01} \\ \rho_{01} & \rho_{11} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{15} & \rho_{16} & \rho_{11} & \rho_{11} & \rho_{16} & \rho_{15} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{11} &
\rho_{01} \\ \rho_{06} & \rho_{16} & \rho_{16} & \rho_{36} & \rho_{16} & \rho_{56} & \rho_{66} & \rho_{16} & \rho_{16} & \rho_{66} & \rho_{56} & \rho_{16} & \rho_{36} & \rho_{16} & \rho_{16} & \rho_{06} \\ \rho_{05} & \rho_{15} & \rho_{15} & \rho_{35} & \rho_{15} & \rho_{55} & \rho_{56} & \rho_{15} & \rho_{15} & \rho_{56} & \rho_{55} & \rho_{15} & \rho_{35} & \rho_{15} & \rho_{15} & \rho_{05} \\ \rho_{01} & \rho_{11} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{15} & \rho_{16} & \rho_{11} & \rho_{11} & \rho_{16} & \rho_{15} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{11} & \rho_{01} \\ \rho_{03} & \rho_{13} & \rho_{13} & \rho_{33} & \rho_{13} & \rho_{35} & \rho_{36} & \rho_{13} & \rho_{13} & \rho_{36} & \rho_{35} & \rho_{13} & \rho_{33} & \rho_{13} & \rho_{13} & \rho_{03} \\ \rho_{01} & \rho_{11} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{15} & \rho_{16} & \rho_{11} & \rho_{11} & \rho_{16} & \rho_{15} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{11} & \rho_{01} \\ \rho_{01} & \rho_{11} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{15} & \rho_{16} & \rho_{11} & \rho_{11} & \rho_{16} & \rho_{15} & \rho_{11} & \rho_{13} & \rho_{11} & \rho_{11} & \rho_{01} \\ \rho_{00} & \rho_{01} & \rho_{01} & \rho_{03} & \rho_{01} & \rho_{05} & \rho_{06} & \rho_{01} & \rho_{01} & \rho_{06} & \rho_{05} & \rho_{01} & \rho_{03} & \rho_{01} & \rho_{01} & \rho_{00} \\ \end{array} \right) \label{eq:rho} \end{align} \vspace*{2pt} \normalsize \end{figure*} \begin{figure*} \small \begin{align} \sigma = \left( \begin{array}{cccccccccccccccc} \sigma_{00} & \sigma_{01} & \sigma_{01} & \sigma_{03} & \sigma_{01} & \sigma_{05} & \sigma_{06} & \sigma_{01} & \sigma_{01} & \sigma_{06} & \sigma_{05} & \sigma_{01} & \sigma_{03} & \sigma_{01} & \sigma_{01} & \sigma_{00} \\ \sigma_{01}^* & \sigma_{11} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{15} & \sigma_{16} & \sigma_{11} & \sigma_{11} & \sigma_{16} & \sigma_{15} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{11} & \sigma_{01}^* \\ \sigma_{01}^* & \sigma_{11} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{15} & \sigma_{16} & \sigma_{11} & \sigma_{11} & \sigma_{16} & \sigma_{15} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{11} & \sigma_{01}^* \\ \sigma_{03}^* & \sigma_{13}^* & \sigma_{13}^* & \sigma_{33} & \sigma_{13}^* & \sigma_{35} & \sigma_{36} & \sigma_{13}^* & \sigma_{13}^* & \sigma_{36} & \sigma_{35} & \sigma_{13}^* & \sigma_{33} & \sigma_{13}^* & \sigma_{13}^* & \sigma_{03}^* \\ \sigma_{01}^* & \sigma_{11} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{15} & \sigma_{16} & \sigma_{11} & \sigma_{11} & \sigma_{16} & \sigma_{15} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{11} & \sigma_{01}^* \\ \sigma_{05}^* & \sigma_{15}^* & \sigma_{15}^* & \sigma_{35}^* & \sigma_{15}^* & \sigma_{55} & \sigma_{56} & \sigma_{15}^* & \sigma_{15}^* & \sigma_{56} & \sigma_{55} & \sigma_{15}^* & \sigma_{35}^* & \sigma_{15}^* & \sigma_{15}^* & \sigma_{05}^* \\ \sigma_{06}^* & \sigma_{16}^* & \sigma_{16}^* & \sigma_{36}^* & \sigma_{16}^* & \sigma_{56}^* & \sigma_{66} & \sigma_{16}^* & \sigma_{16}^* & \sigma_{66} & \sigma_{56}^* & \sigma_{16}^* & \sigma_{36}^* & \sigma_{16}^* & \sigma_{16}^* & \sigma_{06}^* \\ \sigma_{01}^* & \sigma_{11} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{15} & \sigma_{16} & \sigma_{11} & \sigma_{11} & \sigma_{16} & \sigma_{15} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{11} & \sigma_{01}^* \\ \sigma_{01}^* & \sigma_{11} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{15} & \sigma_{16} & \sigma_{11} & \sigma_{11} & 
\sigma_{16} & \sigma_{15} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{11} & \sigma_{01}^* \\ \sigma_{06}^* & \sigma_{16}^* & \sigma_{16}^* & \sigma_{36}^* & \sigma_{16}^* & \sigma_{56}^* & \sigma_{66} & \sigma_{16}^* & \sigma_{16}^* & \sigma_{66} & \sigma_{56}^* & \sigma_{16}^* & \sigma_{36}^* & \sigma_{16}^* & \sigma_{16}^* & \sigma_{06}^* \\ \sigma_{05}^* & \sigma_{15}^* & \sigma_{15}^* & \sigma_{35}^* & \sigma_{15}^* & \sigma_{55} & \sigma_{56} & \sigma_{15}^* & \sigma_{15}^* & \sigma_{56} & \sigma_{55} & \sigma_{15}^* & \sigma_{35}^* & \sigma_{15}^* & \sigma_{15}^* & \sigma_{05}^* \\ \sigma_{01}^* & \sigma_{11} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{15} & \sigma_{16} & \sigma_{11} & \sigma_{11} & \sigma_{16} & \sigma_{15} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{11} & \sigma_{01}^* \\ \sigma_{03}^* & \sigma_{13}^* & \sigma_{13}^* & \sigma_{33} & \sigma_{13}^* & \sigma_{35} & \sigma_{36} & \sigma_{13}^* & \sigma_{13}^* & \sigma_{36} & \sigma_{35} & \sigma_{13}^* & \sigma_{33} & \sigma_{13}^* & \sigma_{13}^* & \sigma_{03}^* \\ \sigma_{01}^* & \sigma_{11} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{15} & \sigma_{16} & \sigma_{11} & \sigma_{11} & \sigma_{16} & \sigma_{15} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{11} & \sigma_{01}^* \\ \sigma_{01}^* & \sigma_{11} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{15} & \sigma_{16} & \sigma_{11} & \sigma_{11} & \sigma_{16} & \sigma_{15} & \sigma_{11} & \sigma_{13} & \sigma_{11} & \sigma_{11} & \sigma_{01}^* \\ \sigma_{00} & \sigma_{01} & \sigma_{01} & \sigma_{03} & \sigma_{01} & \sigma_{05} & \sigma_{06} & \sigma_{01} & \sigma_{01} & \sigma_{06} & \sigma_{05} & \sigma_{01} & \sigma_{03} & \sigma_{01} & \sigma_{01} & \sigma_{00} \\ \end{array} \right) \label{eq:sigma} \end{align} \vspace*{2pt} \normalsize \end{figure*} \begin{figure*}[!t] \small \begin{align} \begin{split} r_4 = \rho_{05} &= \frac{3 \Gamma ^2-\sqrt{4 \Gamma ^2+1} \sinh \left(\frac{1}{2 T}\right) \sinh \left(\frac{\sqrt{4 \Gamma ^2+1}}{2 T}\right)+\Gamma ^2 \cosh \left(\frac{\sqrt{4 \Gamma ^2+1}}{T}\right)+\left(4 \Gamma ^2+1\right) \cosh \left(\frac{1}{2 T}\right) \cosh \left(\frac{\sqrt{4 \Gamma ^2+1}}{2 T}\right)+1}{4 \left(4 \Gamma ^2+1\right) \left(\cosh \left(\frac{\sqrt{4 \Gamma ^2+1}}{T}\right)+\cosh \left(\frac{1}{T}\right)\right)}\\ r_8 = \rho_{15} &= -\frac{\Gamma \sinh \left(\frac{\sqrt{4 \Gamma ^2+1}}{2 T}\right) \left(\sinh \left(\frac{\sqrt{4 \Gamma ^2+1}}{2 T}\right)+\sqrt{4 \Gamma ^2+1} \left(\cosh \left(\frac{\sqrt{4 \Gamma ^2+1}}{2 T}\right)+\sinh \left(\frac{1}{2 T}\right)+\cosh \left(\frac{1}{2 T}\right)\right)\right)}{4 \left(4 \Gamma ^2+1\right) \left(\cosh \left(\frac{\sqrt{4 \Gamma ^2+1}}{T}\right)+\cosh \left(\frac{1}{T}\right)\right)} \\ r_{13} = \rho_{55} &= \frac{\left(\frac{\sinh \left(\frac{\sqrt{4 \Gamma ^2+1}}{2 T}\right)}{\sqrt{4 \Gamma ^2+1}}+\cosh \left(\frac{\sqrt{4 \Gamma ^2+1}}{2 T}\right)+\sinh \left(\frac{1}{2 T}\right)+\cosh \left(\frac{1}{2 T}\right)\right)^2}{8 \left(\cosh \left(\frac{\sqrt{4 \Gamma ^2+1}}{T}\right)+\cosh \left(\frac{1}{T}\right)\right)}\\ s_4 = \sigma_{05} &= \frac{(\expval{\mathsf{ZZ}_{AB}}+2)^2 \left(4 \expval{\mathsf{XX}_{AB}}+\expval{\mathsf{ZZ}_{AB}}^2-4\right)}{64 \left(\expval{\mathsf{ZZ}_{AB}}^2+4\right)} \\ s_8 = \sigma_{15} &= \frac{(\expval{\mathsf{ZZ}_{AB}}+2) \left(\expval{\mathsf{X}_{A}}^2 \left(\expval{\mathsf{ZZ}_{AB}}^2+4\right)+4 \expval{\mathsf{ZZ}_{A}} \left(\expval{\mathsf{ZZ}_{AB}}^2-4\right)\right)}{64 \expval{\mathsf{X}_{A}} 
\left(\expval{\mathsf{ZZ}_{AB}}^2+4\right)} \\ s_{13} = \sigma_{55} &= \frac{(\expval{\mathsf{ZZ}_{AB}}+2)^2 \left(-8 \expval{\mathsf{ZZ}_{A}}+\expval{\mathsf{ZZ}_{AB}}^2+4\right)}{64 \left(\expval{\mathsf{ZZ}_{AB}}^2+4\right)} \end{split} \label{eq:pruned_elements} \end{align} \hrulefill \vspace*{2pt} \normalsize \end{figure*} \section{Performance of Cost Functions for Varying $g$ \label{app:g_neq_1}} In \cref{fig:All_CF_vary_g} we compare the performance of the different cost functions at generating TFD states for different $g$ values. We find that $\mathcal{C}_2$ outperforms $F_A$ in most cases. Note that the low temperature performance of $\mathcal{C}_1$ is poor for $g \neq 1$, indicating that the optimal coefficients should be re-optimized for each $g$ value. Finding a general expression for the coefficients is desirable, since the intermediate temperature performance remains superior to that of $\mathcal{C}_0$. We note that $F_A$ as a cost function occasionally had difficulty converging to a minimum, especially for $g=-0.1$ and $g=-0.2$, as can be seen from the sudden jumps in the traces. \begin{figure*}[htbp!] \includegraphics[width=\textwidth]{All_CF_vary_g.pdf} \caption{Cost function performance comparison between $F_A$, $\mathcal{C}_0$, $\mathcal{C}_1$, and $\mathcal{C}_2$ using (a) fidelity $\mathcal{F}$ and (b) trace distance $\mathcal{T}$ as proximity measures for various transverse field strengths $g$.} \label{fig:All_CF_vary_g} \end{figure*} \section*{Acknowledgments} \addcontentsline{toc}{section}{Acknowledgments} The authors thank Sonika Johri and Xiang Chris Zou for insightful discussions. \bibliographystyle{IEEEtrans}
\section{Introduction and summary} \setlength{\baselineskip}{7mm} Supersymmetric large $N$ gauge theories with 16 supercharges, which describe the dynamics of $N$ D-branes, are fundamental theories in the study of string theory. However, the thermodynamical properties of these theories with $R$-symmetry chemical potentials have not been studied sufficiently in terms of the gauge theory. In the strong coupling regime, the gravity analysis is valid through the gauge/gravity correspondence, where the $R$-symmetry charges in the gauge theories correspond to electric charges or angular momenta of the dual gravity systems \cite{Itzhaki:1998dd, Gubser:1998jb, Harmark:1999xt}. Importantly, it has been shown that gravity systems with these charges have rich phase structures. For example, in asymptotically AdS space, RN black hole/hairy black hole type transitions were found \cite{Gubser:2008px, Nishioka:2009zj, Basu:2010uz, Bhattacharyya:2010yg} and their applications to condensed matter physics have been investigated \cite{Hartnoll:2008vx, Hartnoll:2009sz, Herzog:2009xv}. In addition, in higher dimensional black holes with angular momenta, Myers-Perry black hole/black ring type transitions happen \cite{Emparan:2008eg}. Therefore we can expect that similar phase structures also appear in the supersymmetric gauge theories with the corresponding chemical potentials. However, the gauge theories have not been studied sufficiently, since the presence of the chemical potentials makes the analysis difficult even at weak coupling. The reason is as follows. Fields in the gauge theories are typically massless except in some special situations, e.g. ${\mathcal N}=4$ SYM on $R\times S^3$. (Several groups studied the chemical potentials in this case \cite{Basu:2005pj, Yamada:2006rx, Harmark:2006di, Harmark:2006ta, Harmark:2006ie, Dey:2007vt, Harmark:2007px, Hollowood:2008gp, Murata:2008bg, Elander:2008vw}.) Then the massless scalars coupled to the chemical potentials will be unstable around the trivial vacuum and perturbative techniques do not work. If these scalars are free, this instability is inevitable and the theory is destabilized by the chemical potentials\footnote{A naive regularization in free super Yang-Mills theory through an analytic continuation of the chemical potential to complex values was proposed in \cite{Gubser:1998jb}. In this article, we use a different analysis and do not consider this regularization. The imaginary chemical potential is also considered in lattice gauge theory to avoid the sign problem in QCD \cite{deForcrand:2002ci}. }. On the other hand, interactions involving the scalars may remove this instability. For example, in a massless $g|\phi|^4$ theory, if we turn on a finite chemical potential $\mu$ for a $U(1)$ rotation, the potential becomes $-\mu^2 |\phi|^2+g|\phi|^4$. Although this causes a shift of the vacuum, the theory is still stable if $g>0$. As in this example, an understanding of the interactions in the gauge theories is essential to investigate finite chemical potentials. In this article, in order to consider this problem, we study the following one dimensional large $N$ gauge theory (a bosonic BFSS \cite{Banks:1996vh} type model), \begin{align} S = \int_0^{\beta} dt \, \Tr \left( \sum_{I=1}^{D} \frac12 \left(D_0 Y^{I}\right)^2 - \sum_{I,J} \frac {g^2}{4} [Y^I,Y^J][Y^I, Y^J] \right). \label{matrixqm-action} \end{align} Here $Y^I$ are $SU(N)$ adjoint scalars.
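Note that the classical potential in (\ref{matrixqm-action}) is bounded below: for Hermitian $Y^I$, the commutator $C=[Y^I,Y^J]$ is anti-Hermitian, so $-\Tr\, C C = \Tr\, C C^\dagger \ge 0$. This elementary fact underlies the stabilization mechanism studied below; a minimal numerical sketch in Python is given here (the sizes $N=8$, $D=9$ and the normalization $g=1$ are arbitrary sample values, and tracelessness is not enforced since it does not affect the sign):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, D, g = 8, 9, 1.0
A = rng.normal(size=(D, N, N)) + 1j*rng.normal(size=(D, N, N))
Y = 0.5*(A + np.conj(np.swapaxes(A, -1, -2)))   # Hermitian matrices

V = 0.0
for I in range(D):
    for J in range(D):
        C = Y[I] @ Y[J] - Y[J] @ Y[I]           # anti-Hermitian commutator
        V -= (g**2/4.0)*np.trace(C @ C).real    # -g^2/4 Tr [Y,Y]^2 term
print(V >= 0.0)                                  # True: bounded below
\end{verbatim}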
The covariant derivative is defined by $D_0 Y^I =\partial_t Y^I - i [A_0, Y^I]$ and $A_0$ is an $SU(N)$ gauge field. This model can be regarded as a dimensional reduction of a $1+D$ dimensional pure Yang-Mills theory. The model is invariant under the $SO(D)$ rotation of $Y^I$, which is related to the $R$-symmetry in the supersymmetric gauge theory. We will consider chemical potentials associated with $U(1)^{\lfloor D/2 \rfloor} \subset SO(D)$.\footnote{ Here $\lfloor \cdots \rfloor$ denotes the floor function, which maps a number to its integer part.} Note that the charges of these $U(1)^{\lfloor D/2 \rfloor}$ correspond to the transverse angular momenta on $\lfloor D/2 \rfloor$ planes in the context of rotating D-branes \cite{Gubser:1998jb, Harmark:1999xt, Cvetic:1996ek, Csaki:1998cb, Hawking:1998kw, Kraus:1998hv, Cvetic:1999ne, Cvetic:1999rb, Hawking:1999dp}. We can obtain the model (\ref{matrixqm-action}) with $D=9$ from $N$ D1-branes on a spatial circle as follows \cite{Aharony:2005ew}. We can choose anti-periodic boundary conditions for the fermions on this circle. Then all the fermions obtain a mass, which is proportional to the inverse radius of the circle. Thus, if the radius is sufficiently small, we can ignore all the fermions, KK-modes and long string states, and the theory is reduced to one dimension. Then the $SO(D-1)$ symmetry is enhanced to $SO(D)$ and we obtain (\ref{matrixqm-action}). Note that it is known that the gauge/gravity correspondence is not valid at such a small radius, since the effective coupling becomes small \cite{Aharony:2005ew}. Thus we cannot compare our results with gravity directly. Our results should be regarded as a weak coupling continuation of the gravity results. (However, as far as I know, there is no gravity result for the thermodynamics of this system with the chemical potentials.) The thermodynamics of the model (\ref{matrixqm-action}) without the chemical potential has been investigated by using a $1/D$ expansion \cite{Mandal:2009vz}. In particular, it has been revealed that a non-trivial stable vacuum exists in which the adjoint scalars condense and obtain a mass. We generalize this analysis to finite chemical potentials. We will show that, in the $D \to \infty$ limit, the (meta-)stable condensed vacuum always exists for arbitrary values of the chemical potentials. This result is similar to the $g|\phi|^4$ theory and we can conclude that the commutator-squared interaction stabilizes our model in the large $D$ limit. However, at large but finite $D$, the $1/D$ expansion does not converge for large chemical potentials and it is unclear whether the system is stable there. We also investigate the phase structure and show the existence of three phases in the $\mu-T$ plane, which is summarized in Figure \ref{fig phase diagram}. In the case of a single chemical potential, we find several saddle points in the high temperature regime. In this regime, since only the zero-modes of the Matsubara frequencies are dominant, the model (\ref{matrixqm-action}) reduces to an effective zero-dimensional matrix model (a bosonic IKKT type model \cite{Ishibashi:1996xs}). In this model, the chemical potential induces a CS-like interaction. (However, its coefficient is real and different from the ordinary CS term in \cite{Iso:2001mg}.) Therefore, (imaginary) fuzzy sphere like solutions exist as complex saddle points.
However, the physical interpretation of these solutions is yet to be explored.\\ We also explore the existence of the condensed vacuum in $d$ dimensional large $N$ gauge theories to understand the stability issue.\footnote{ The $1/D$ expansion analysis of the $d$ dimensional gauge theory without the chemical potentials is considered for $d=0$ (the bosonic IKKT model) in \cite{Hotta:1998en} and for $d=2$ (a two-dimensional gauge theory on $T^2$) in an ongoing work \cite{Dynamical}. } Since the analysis is difficult in general, we restrict our study to the high temperature regime and consider the $D \to \infty$ limit. Then we will see that the existence of the condensation depends on the dimension $d$. In the $d=2,3$ cases, the condensation always happens for arbitrary values of the chemical potentials, as in the $d=1$ case. On the other hand, in the $d \ge 4$ case, a critical chemical potential $\mu_{c}$ exists and the condensation happens only if all the chemical potentials are below $\mu_{c}$. These results are summarized in Figure \ref{fig large d phase}. The same condensations will happen in the supersymmetric gauge theories also, since we can ignore the fermions in the high temperature limit. Thus our results may be related to the analysis in the strong coupling regime through the gravitational study of the rotating D-brane geometries \cite{Harmark:1999xt}. However, the application of our large $D$ results to the supersymmetric gauge theories is not robust, particularly in the high chemical potential regime, and further analyses are necessary.\\ The organization of this article is as follows. In section \ref{sec 1dMM}, we introduce the chemical potentials to our model (\ref{matrixqm-action}) and, by using the large $D$ expansion, we derive an effective action. We will analyze this effective action in two regimes. One is the low temperature and low chemical potential regime. The other is the high temperature and/or high chemical potential regime. We will study the phase structure of the first regime in section \ref{Section low T}. In section \ref{Section high T}, we investigate the nature of the second regime. We will also discuss the fuzzy sphere like saddle points in this section. In section \ref{sec high d}, we explore the existence of the condensation in higher dimensional gauge theories. In section \ref{Conclusion}, we conclude with a discussion. \section{One-dimensional gauge theory with chemical potential} \label{sec 1dMM} \subsection{Chemical potential} \label{sec chemical} We consider the large $N$ gauge theory (\ref{matrixqm-action}) with chemical potentials associated with the global $U(1)^{\lfloor D/2 \rfloor}$ transformations. It is convenient to rename the $D$ adjoint scalars $Y^I$ as \begin{align} X^I& \equiv Y^I~(I=1,\dots,\lfloor D/2 \rfloor), \nonumber \\ W^I &\equiv Y^{I+\lfloor D/2 \rfloor}~(I=1,\dots,\lfloor D/2 \rfloor), \nonumber \\ Y^D &= Y^D~(\text{If $D$ is even, this one does not exist.}), \end{align} and define complex scalars as $\Phi^I \equiv (X^I+iW^I)/\sqrt{2}$, ($I=1,\dots,\lfloor D/2 \rfloor$). Then the action (\ref{matrixqm-action}) is invariant under the global $U(1)^{\lfloor D/2 \rfloor}$ transformation: $\Phi^I \to e^{i\Lambda_I} \Phi^I$. In the context of rotating D-branes, this charge corresponds to the angular momentum on the $(X^I, W^I)$ plane. Now we introduce the chemical potential $\mu_I$ for these $U(1)$ transformations.
It has been shown in \cite{Yamada:2006rx, Haber:1981ts} that, in the Euclidean path integral, the chemical potential causes the modification of the action as \begin{align} D_0 \to D_0 - \mu_I \quad (I=1,\ldots, \lfloor D/2 \rfloor), \label{chemical potential} \end{align} in the kinetic term of $\Phi^I$. Without loss of generality, we can take $\mu_I$ positive. This modification gives rise to a negative mass term $-\mu_I^2 |\Phi^I|^2$ and the system might be destabilized. In addition, the usual perturbative analysis does not work. In the following sections, we will investigate the thermodynamics of the gauge theory by using a $1/D$ expansion \cite{Mandal:2009vz, Hotta:1998en}, in which the number of the adjoint scalars is regarded as large. In order to keep the contribution of the chemical potentials finite in the large $D$ limit, we first consider a simple situation, \begin{align} \mu_I&=\mu \qquad ( I=1,\cdots, \tilde{D} ), \nonumber \\ \mu_i&=0 \qquad (i=\tilde{D}+1, \cdots, \lfloor D/2 \rfloor), \label{simple mu} \end{align} where $\tilde{D}$ scales linearly with $D$. If we consider general chemical potentials $\mu_I$, each contribution will appear at sub-leading order in the $1/D$ expansion. Thus, in order to evaluate them consistently, we have to derive the other sub-leading terms also and compare with them. We will study this in sections \ref{sec general mu} and \ref{sec D=1}. The validity of the application of the $1/D$ expansion to the D1-brane case ($D=9$) is not a priori obvious, since $D$ is large but finite. However, in the case of zero chemical potential, the $1/D$ expansion \cite{Mandal:2009vz} reproduces the numerical results in \cite{Aharony:2005ew, Aharony:2004ig, Kawahara:2007fn, Azuma:2007fj, Azeyanagi:2009zf} quantitatively. In fact, the leading large $D$ results reproduce the essential qualitative properties even for $D=2,3$. Thus we expect that the $1/D$ expansion is also meaningful in the model involving the chemical potentials. \subsection{$1/D$ expansion and computation of effective action} \label{sec Eff} In this subsection, we will integrate out the adjoint scalars in the action (\ref{matrixqm-action}) with the chemical potential (\ref{chemical potential}) and derive an effective action by using the $1/D$ expansion. We evaluate only the leading order of this expansion in this section. We will discuss the contribution from the next order in sections \ref{sec 1/D} and \ref{sec high mu 1/D }. In section \ref{sec 1/D}, it will turn out that the inclusion of these corrections does not change the nature of the phase structure in the low chemical potential regime. On the other hand, in section \ref{sec high mu 1/D }, we will show that the $1/D$ expansion does not converge in the very high chemical potential regime. According to \cite{Mandal:2009vz}, we rewrite the path integral by employing an auxiliary field $B_{ab}$ as\footnote{The definition of $B_{ab}$ differs from \cite{Mandal:2009vz} by an imaginary factor ``$i$".} \begin{align} Z= & {\cal N} \int {\cal D} B {\cal D} A_0 {\cal D} Y^i {\cal D} \Phi^I e^{-S(B,A_0,Y,\Phi)}, \nonumber \\ S(B,A_0,Y,\Phi) =& \int_0^\beta dt\, \Biggl[ - \frac{1}{4g^2} B_{ab} M^{-1}_{ab,cd}B_{cd} \nonumber \\ & +\sum_{I=1}^{\tilde{D}} \Phi^{\dagger I}_a \left(-(D_0-\mu)^2_{ab}+ B_{ab} \right) \Phi^{I}_b +\sum_{i=2\tilde{D}+1}^D \frac12 Y^{i}_a \left( - D_{0ab}^2+B_{ab} \right) Y^{i}_b \Biggr]. \label{gauss-trick} \end{align} Again, we have used the notation $Y^i$ for the adjoint scalars which do not couple to the non-zero chemical potentials.
Here $1/{\cal N} \equiv \int {\cal D} B \exp\left( \int dt B_{ab} M^{-1}_{ab,cd}B_{cd}/(4g^2) \right) $ is a numerical factor. $M_{ab,cd}^{-1}$ is the inverse of $M_{ab,cd}$, which is defined by \begin{align} M_{ab,cd} = -\frac{1}{4} \Bigl\{ \Tr[\lambda_a, \lambda_c][\lambda_b, \lambda_d] +(a\leftrightarrow b)+(c\leftrightarrow d)+(a\leftrightarrow b,c\leftrightarrow d) \Bigr\}. \label{def Mabcd} \end{align} Here $\lambda_a$ $(a=1\dots N^2-1)$ is a generator of $SU(N)$. They satisfy $M_{ab,cd} M_{cd,ef}^{-1}=(\delta_{ae}\delta_{bf}+\delta_{af}\delta_{be})/2$. The properties of the matrix $M_{ab,cd}$ are summarized in appendix A of \cite{Mandal:2009vz}. We can reproduce the original action (\ref{matrixqm-action}) from (\ref{gauss-trick}) by substituting the solution of the classical equation of motion for $B_{ab}$: \begin{align} M^{-1}_{ab,cd}B_{cd}=g^2\left(\sum_{I=1}^{\tilde{D}} \left( \Phi^{I\dagger}_a\Phi^{I}_b + \Phi^{I\dagger}_b\Phi^{I}_a \right) + \sum_{i=2\tilde{D}+1}^D Y_a^iY_b^i \right) . \label{classical sol B} \end{align} Now we can formally integrate out $\Phi^I$ and $Y^i$, since the action (\ref{gauss-trick}) is quadratic in them. Then we obtain an effective action for $B_{ab}$ and $A_0$, \begin{align} S_{eff}(B,A_0) =& \int_0^\beta dt\, \Biggl[ - \frac{1}{4g^2} B_{ab} M^{-1}_{ab,cd}B_{cd} \Biggr] \nonumber \\ &+\frac{k_1 D}{2} \log \det \left( - D_{0ab}^2+B_{ab} \right) +\frac{k_2 D}{2} \log \det \left(-(D_0-\mu)^2_{ab}+ B_{ab} \right) , \label{gauss-trick-2} \end{align} where we have defined the normalized ratios $k_1=(D-2\tilde{D})/D$ and $k_2=2\tilde{D}/D$, which satisfy $k_1+k_2=1$. As in \cite{Mandal:2009vz}, we investigate this model by taking the large $D$ and large $N$ limit such that $D \to \infty$, $N \to \infty$ and $g \to 0$ with fixed $\tilde{\lambda}\equiv g^2 N D =\lambda D$. (In our case, we also take $\tilde{D} \to \infty$ with fixed $k_2=2\tilde{D}/D$.) Then the first term and the $\log \det$ terms in (\ref{gauss-trick-2}) will be comparable in this limit. As a result, this model will have a non-trivial saddle point $\bar{B}_{ab}=\triangle_0^2 \delta_{ab}$, which we will confirm later\footnote{The large $N$ limit is not necessary to derive this saddle point. Actually, even at finite $N$, we can calculate some physical quantities by taking $D \to \infty$ and $g^2N \to 0$ with fixed $g^2N D$ \cite{Mandal:2009vz, Hotta:1998en}. However, our interest is in the large $N$ limit of (\ref{matrixqm-action}) and we do not consider finite $N$ effects in this article.}. Here $\triangle_0$ is a time independent constant. Indeed, from equation (\ref{classical sol B}), this saddle point gives a condensation of the adjoint scalars \begin{align} 2 \sum_{I=1}^{\tilde{D}} \langle \Tr \Phi^{I\dagger}\Phi^{I} \rangle + \sum_{i=2\tilde{D}+1}^D \langle \Tr Y^iY^i \rangle = \frac{N}{2g^2}\triangle_0^2 , \end{align} where we have used the relation $M_{ab,cd}^{-1} \delta_{cd}=\frac{1}{2N}\delta_{ab}$ \cite{Mandal:2009vz}. To proceed, we write $B_{ab}$ as the sum of a constant trace piece and the rest, \begin{align} B_{ab}(t) = \triangle^2 \delta_{ab} +g b_{ab}(t), \label{fluct} \end{align} where $b_{ab}(t)$ satisfies $\int dt\, b_{aa}(t)=0$. Note that we can ignore the interactions between $b_{ab}$ and $\Phi^I$, $Y^i$ at the leading order of the $1/D$ expansion \cite{Mandal:2009vz}. The contributions of these interactions will appear at the next order, as we will see in sections \ref{sec 1/D} and \ref{sec high mu 1/D }. Here we consider a gauge fixing of $A_0$.
It is convenient to take the constant diagonal gauge: $A_{0ij}(t)=\alpha_{i}\delta_{ij}$. As a result, the effective action will be described by the gauge invariant Wilson loop operators, \begin{align} u_n = \frac{1}{N} \Tr e^{in \int_0^\beta A_0 dt}= \frac{1}{N} \sum_{i=1}^N e^{in\beta \alpha_i}. \end{align} This gauge fixing gives rise to a Faddeev-Popov determinant \cite{Aharony:2003sx} \begin{align} {\cal D}A_0=\prod_i d\alpha_i e^{-S_{FP}},\quad S_{FP}=N^2\sum_n \frac1{n} |u_n|^2. \label{FP} \end{align} Now we integrate out $b_{ab}$ and obtain the effective action for the condensation $\triangle$ and the gauge field $\{u_n \}$: \begin{align} \frac{S_{eff}(\triangle,\{u_n\})}{DN^2}=& -\frac{\beta \triangle^4}{8 \tilde{\lambda}}+ \frac{1}{D} \sum_{n=1}^\infty \frac{|u_n|^2}{n} \nonumber \\ &+ \frac{1}{2N^2} \left[ k_1 \Tr \log \left(-D_0^2 + \triangle^2 \right) + k_2 \Tr \log \left(-\left( D_0-\mu \right)^2 + \triangle^2 \right) \right]. \label{Seff process} \end{align} Here the first term is a classical term from the first term in (\ref{gauss-trick-2}). The second term comes from (\ref{FP}) and is indeed of order $1/D$. We keep it here, since this term will be dominant in the low temperature regime and more significant than the other $O(1/D)$ terms. The integral of $b_{ab}$ gives a numerical factor which cancels out ${\mathcal N}$ in (\ref{gauss-trick}), except for the contribution of the integral of the constant trace piece $\triangle$. Let us evaluate the terms in the second line of (\ref{Seff process}). In momentum space, the quadratic term of $\Phi^I$ in (\ref{gauss-trick}) can be written as \begin{align} \frac{1}{2} \begin{pmatrix} X_{nij}^I &W_{nij}^I \end{pmatrix} \begin{pmatrix} \left(\frac{2\pi n}{\beta} -(\alpha_j-\alpha_i) \right)^2 -\mu^2 + \triangle^2& -2\mu \left(\frac{2\pi n}{\beta} -(\alpha_j-\alpha_i) \right) \\ 2\mu \left(\frac{2\pi n}{\beta} -(\alpha_j-\alpha_i) \right)&\left(\frac{2\pi n}{\beta} -(\alpha_j-\alpha_i) \right)^2 -\mu^2 + \triangle^2\end{pmatrix} \begin{pmatrix} X^I_{-nji} \\ W_{-nji}^I \end{pmatrix} . \label{phi kinetic} \end{align} Then the eigenvalues of this matrix are calculated as \begin{align} \left(\frac{2\pi n}{\beta} -(\alpha_j-\alpha_i) \pm i \mu \right)^2 + \triangle^2. \end{align} By using this result, we calculate \cite{Yamada:2006rx, Aharony:2003sx} \begin{align} &\Tr \log \left(-\left( D_0-\mu \right)^2 + \triangle^2 \right) \nonumber \\ =&\frac{1}{2} \sum_{n,i,j} \left[ \log \left( \left(\frac{2\pi n}{\beta} -(\alpha_j-\alpha_i) + i \mu \right)^2 + \triangle^2 \right)+\log \left( \left(\frac{2\pi n}{\beta} -(\alpha_j-\alpha_i) - i \mu \right)^2 + \triangle^2 \right) \right] \nonumber \\ =&\sum_{i,j} \log \left[ \tilde{{\mathcal N}}e^{\beta \triangle} \left( 1- e^{-\beta \triangle+i\beta \left( \alpha_i-\alpha_j \right)-\beta \mu } \right) \left( 1- e^{-\beta \triangle-i\beta \left( \alpha_i-\alpha_j \right)+\beta \mu } \right) \right] \nonumber \\ =& N^2 \log \tilde{{\mathcal N}}+ N^2 \beta \triangle -N^2 \sum_{n=1}^\infty \frac{1}{n} e^{-n\beta \triangle } \left( e^{n\beta \mu }+e^{-n\beta \mu }\right) |u_n|^2, \label{log det} \end{align} where we have ignored $O\left( 1/N^2\right) $ corrections. $\tilde{{\mathcal N}}$ is an irrelevant constant factor and we will ignore it from now on. Similarly, we evaluate the term from the $Y^i$ integral.
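As a quick numerical check of the resummation in (\ref{log det}), one can compare the $\triangle$-derivative of the truncated Matsubara sum with that of the closed form for a single $(i,j)$ mode with $\alpha_i-\alpha_j=0$; the derivative is taken so that the divergent, $\triangle$-independent constant $\log\tilde{{\mathcal N}}$ drops out. The following minimal Python sketch uses arbitrary sample values with $\triangle>\mu$:
\begin{verbatim}
import numpy as np

beta, Delta, mu = 2.0, 1.3, 0.7       # sample values with Delta > mu
n = np.arange(-200000, 200001)
w = 2.0*np.pi*n/beta                  # bosonic Matsubara frequencies

# truncated d/dDelta of (1/2) sum_n [ log((w+i mu)^2+Delta^2) + (mu -> -mu) ]
lhs = np.sum((2.0*Delta/((w + 1j*mu)**2 + Delta**2)).real)

# same derivative from the resummed form  beta*Delta
#   + log(1-e^{-beta(Delta-mu)}) + log(1-e^{-beta(Delta+mu)})
nB = lambda x: 1.0/np.expm1(beta*x)   # Bose factor 1/(e^{beta x}-1)
rhs = beta*(1.0 + nB(Delta - mu) + nB(Delta + mu))

print(lhs, rhs)                       # agree to about 1e-5 at this truncation
\end{verbatim}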
Then the effective action (\ref{Seff process}) becomes \begin{align} \frac{S_{eff}(\triangle,\{u_n\})}{DN^2}=& -\frac{\beta \triangle^4}{8 \tilde{\lambda}}+ \frac{1}{D} \sum_{n=1}^\infty \frac{|u_n|^2}{n} \nonumber \\ &+\frac{\beta \triangle}{2} -\sum_{n=1}^\infty \left[ e^{-n\beta \triangle} \left(k_1+k_2 \frac{e^{-n\beta \mu}+e^{n\beta \mu}}{2}\right) \right] \frac{|u_n|^2}{n} . \label{potential triangle} \end{align} Note that, during this derivation, we have assumed the relation $\triangle \ge \mu$. If this inequality is not satisfied, tachyonic modes will appear and the path integral is not well defined\footnote{The $\triangle=\mu$ case is also subtle, since massless modes will appear and the effective action will be non-local. Indeed, this will happen at $T=0$. We will come back to this problem later.}. Later we will show that this assumption is justified. From now on, we evaluate the effective action (\ref{potential triangle}) and investigate the phase structure and the condensation of the scalars. It is convenient to integrate out $\triangle$ first and derive the effective action for the Wilson loops. In order to do so, we analyze the saddle point equation for $\triangle^2$, \footnote{ When we derive the saddle point equation (\ref{saddle point triangle}), we differentiate the effective action with respect to $\triangle^2$ (not $\triangle$), since $\triangle^2$ is the correct variable, as in (\ref{fluct}).} \begin{align} -\frac{\triangle^3}{2\tilde{\lambda} } +\frac{1}{2} +\sum_{n=1}^\infty e^{-n\beta \triangle} \left[ k_1+k_2 \left( \frac{e^{-n\beta \mu}+e^{n\beta \mu}}{2} \right) \right] |u_n|^2=0. \label{saddle point triangle} \end{align} We will solve this equation in two different regimes characterized by the values of the Wilson loop operators. One is the low temperature and low chemical potential regime ($|u_n| \sim 0$, $n\ge 2$), and the other is the high temperature and/or high chemical potential regime ($|u_n|\sim 1$). We will study the first regime in section \ref{Section low T} and the second one in section \ref{Section high T}. Before proceeding with the analysis of our model, we comment on the validity of the $1/D$ expansion. In the strict large $D$ limit, our analysis will be valid for any temperature and chemical potentials. However, at large but finite $D$, some problems with the $1/D$ expansion will arise in the very low temperature regime and the very high chemical potential regime. In the very low temperature regime, the dimensionless effective coupling $\tilde{\lambda}/T^3$ will be large. Thus the contributions of higher loops will be large and the expansion will not work. In the very high chemical potential regime, as we will see later, light mass modes will appear and, similarly, the contribution of the higher loops will become large. We will discuss the details of these issues in sections \ref{sec 1/D} and \ref{sec high mu 1/D }. We summarize the regimes which we will consider in this article in Table \ref{table regime}.
\TABLE{ \begin{tabular}{lc} \hline very low temperature regime & $T/\tilde{\lambda}^{1/3}< D^{-\gamma}$ \\ low temperature and low chemical potential regime & $|u_n| \sim 0$, $n\ge 2$ \\ intermediate temperature and chemical potential regime & $|u_n| \ne 0$, $n\ge 2$ \\ high temperature regime & $|u_n| \sim 1$ \\ high chemical potential regime & $|u_n| \sim 1$ \\ very high chemical potential regime & $\mu/\tilde{\lambda}^{1/3} \gg (k_2 T/\tilde{\lambda}^{1/3})^{1/4} $ \\ \hline \end{tabular} \caption{Various regimes in the analysis of the one dimensional gauge theory. $T/\tilde{\lambda}^{1/3}$ and $\mu/\tilde{\lambda}^{1/3}$ are the dimensionless temperature and chemical potential. We will discuss our model in the first three regimes in section \ref{Section low T} and in the remaining regimes in section \ref{Section high T}. } \label{table regime} } \section{The phase structure of the low temperature and low chemical potential regime} \label{Section low T} \subsection{The phase structure in the leading $1/D$ expansion} \label{sec low T} In this subsection, we evaluate the phase structure in the low temperature and low chemical potential regime by analyzing the effective action (\ref{potential triangle}). Note that this effective action is the leading order of the $1/D$ expansion. We will consider the next order corrections in subsection \ref{sec 1/D} but, as we have mentioned, they do not change the nature of the phase structure. The obtained phase structure is summarized in Figure \ref{fig phase diagram}. \FIGURE{ \includegraphics[scale=0.75]{chemical.eps} \caption{Phase diagram of the one dimensional gauge theory in the $\mu-T$ plane from the $1/D$ expansion. Three (uniform, non-uniform and gapped) phases exist and the orders of the phase transitions between them are second and third. In the shaded regions, it is difficult to analyze the model through the $1/D$ expansion. In the horizontal shaded region (the very high chemical potential region $\mu/\tilde{\lambda}^{1/3} \gg (k_2 T/\tilde{\lambda}^{1/3})^{1/4} $), the expansion does not converge because of the existence of the light mass modes. In the inclined shaded region (the very low temperature region $T/\tilde{\lambda}^{1/3} < D^{-\gamma}$), the expansion is not valid, since the effective coupling $\tilde{\lambda}/T^3$ becomes too strong. The analysis in the vertically shaded region (the intermediate region) is also difficult, since the Wilson loop operators are highly interacting. However, we expect that the vertically shaded region belongs to the gapped phase. See also Table \ref{table regime}.} \label{fig phase diagram}} If both the temperature and the chemical potential are sufficiently low such that all the coefficients of $|u_n|^2$ in (\ref{potential triangle}) are positive, the stable solution is given by $|u_n|=0$ for all $n$. Thus the contributions of $|u_n|$ are small in this regime and it is indeed enough to keep only $|u_1|$ in the saddle point equation (\ref{saddle point triangle}) to analyze the thermodynamics in this regime, \begin{align} \frac{\triangle^3}{\tilde{\lambda} } =1 + 2e^{-\beta \triangle} \left( k_1+ k_2\left( \frac{ e^{-\beta \mu}+e^{\beta \mu} }{2}\right)\right) |u_1|^2.
\label{saddle point low T} \end{align} Since $e^{-\beta \triangle}$, $e^{-\beta (\triangle-\mu)}$ and $e^{-\beta (\triangle+\mu)}$ will be small in this regime, we can solve this equation approximately and obtain the condensation as \begin{align} \frac{\triangle}{\tilde{\lambda}^{1/3}}=1+ \frac{2}{3} e^{-\beta \tilde{\lambda}^{1/3} } \left( k_1+k_2 \left( \frac{ e^{-\beta \mu}+e^{\beta \mu}}{2} \right)\right) |u_1|^2+\cdots. \label{triangle low T} \end{align} Then, by putting this solution into (\ref{potential triangle}), we obtain an effective action for the Wilson loops \begin{align} \frac{S_{eff}(\{u_n\})}{DN^2}=& \frac{3\beta \tilde{\lambda}^{1/3} }{8} + a(\beta,\mu) |u_1|^2+ b(\beta,\mu) |u_1|^4+ \frac{1}{D} \sum_{n=2}^\infty \frac{|u_n|^2}{n}+\cdots, \label{action wilson loop} \end{align} where the coefficients $a(\beta,\mu)$ and $b(\beta,\mu)$ are given by \begin{align} a(\beta,\mu)&= \frac{1}{D} -e^{-\beta \tilde{\lambda}^{1/3}} \left( k_1+ k_2\left( \frac{e^{-\beta \mu}+e^{\beta \mu}}{2} \right) \right), \label{solution a} \\ b(\beta,\mu)&=\frac{\beta\tilde{\lambda}^{1/3}}{3}e^{-2\beta \tilde{\lambda}^{1/3}} \left( k_1+k_2\left( \frac{e^{-\beta \mu}+e^{\beta \mu}}{2} \right) \right)^2. \end{align} Note that $b(\beta,\mu)$ is always positive. In this case, three phases will appear, as we will show through a Landau-Ginzburg type analysis \cite{Aharony:2003sx, AlvarezGaume:2005fv}. (If $b<0$, two phases would appear instead.) The order parameters of these phases are the values of the Wilson loop operators $u_n$ or, equivalently, the eigenvalue density of the gauge field $A_0$, which is defined as \begin{align} \rho(\alpha)\equiv& \frac{1}{N}\sum_{i=1}^N \delta(\alpha-\alpha_i) = \frac{\beta}{2\pi}\left(1+\sum_{n\ne 0} u_n e^{-i n\beta \alpha} \right) . \end{align} \FIGURE{ \includegraphics[scale=0.75]{rho.eps} \caption{Plots of the eigenvalue density function $\rho(\alpha)$. Three configurations of $\rho(\alpha)$ characterize the three phases in the $\mu-T$ phase diagram. } \label{fig rho} } Now we investigate the three phases. If the temperature and the chemical potential are both low, $a(\beta,\mu)>0$ is satisfied. Then, from the effective action (\ref{action wilson loop}), the stable configuration is $|u_n|=0$ for all $n$, as we have mentioned. This phase is called the uniform phase, since the eigenvalue density $\rho(\alpha)$ is constant and thus uniform with respect to $\alpha$. (See Figure \ref{fig rho}.) This phase is an analogue of the confinement phase in higher dimensional gauge theories, since the expectation values of the temporal Wilson loops vanish. As the temperature or the chemical potential increases, $a(\beta,\mu)$ reaches 0. On the curve $a(\beta,\mu)=0$, $|u_n|=0$ is still stable, since $b(\beta,\mu)$ is positive. However, if $a(\beta,\mu)<0$, the solution $|u_1|=0$ becomes unstable and a stable solution, given by $|u_1|=\sqrt{-a/2b}$ and $|u_n|=0$ for $n \ge 2$, appears. The configuration of the eigenvalue density $\rho(\alpha)$ becomes non-uniform, as shown in Figure \ref{fig rho}, and this phase is called the non-uniform phase. Thus a phase transition happens on $a(\beta,\mu)=0$. This line is the first phase transition line in the $\mu-T$ phase space. (See Figure \ref{fig phase diagram}.) It is easy to show that this transition is second order by evaluating the free energy \cite{Mandal:2009vz, Aharony:2003sx}.
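The first transition line can also be located numerically. The following minimal Python sketch solves $a(\beta,\mu)=0$ by bisection and compares the result with the small-$\mu$ expansion (\ref{Tc1}) derived below; the values $D=100$, $k_2=1/2$ and the units $\tilde{\lambda}^{1/3}=1$ are arbitrary sample choices:
\begin{verbatim}
import numpy as np

D, k2, lam13 = 100.0, 0.5, 1.0     # samples; lam13 = lambda_tilde^{1/3}
k1 = 1.0 - k2

def a(beta, mu):
    # coefficient of |u_1|^2 in (action wilson loop), overflow-safe form
    return (1.0/D - k1*np.exp(-beta*lam13)
            - 0.5*k2*(np.exp(-beta*(lam13 - mu)) + np.exp(-beta*(lam13 + mu))))

def Tc1(mu):
    # valid for mu < lam13: a < 0 at small beta and a > 0 at large beta
    lo, hi = 1e-6, 1e6
    for _ in range(200):           # bisection for the root of a(beta, mu) = 0
        mid = 0.5*(lo + hi)
        if a(mid, mu) < 0.0: lo = mid
        else: hi = mid
    return 1.0/(0.5*(lo + hi))

for mu in (0.0, 0.1, 0.2):         # compare with the expansion (Tc1)
    print(mu, Tc1(mu), 1.0/np.log(D) - 0.5*k2*mu**2)
\end{verbatim}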
As the temperature or the chemical potential increases further, $|u_1|$ reaches $1/2$. Then a gap appears in the eigenvalue density $\rho$. (We can choose $u_1$ real by a gauge fixing, and then the gap arises at $\alpha=\pm \pi/\beta.$ ) This is a Gross-Witten-Wadia type third order phase transition \cite{Gross:1980he, Wadia:1980cp}. In this phase, all the Wilson loop operators become non-zero. This phase is called the gapped phase and is an analogue of the deconfinement phase in higher dimensional gauge theories. The curve $|u_1|=\sqrt{-a/2b}=1/2$ gives the second phase transition line in Figure \ref{fig phase diagram}. We have found the two phase transition lines between the three phases. Let us analyze the details of these two curves. First we evaluate the curve described by $a(\beta,\mu)=0$. For small $\mu$, we can solve this equation and obtain $T$ as a function of $\mu$,\footnote{Since the mass dimension of $\tilde{\lambda}$ is 3 in our model, $T/\tilde{\lambda}^{1/3}$ and $\mu/\tilde{\lambda}^{1/3}$ can be regarded as the dimensionless temperature and chemical potential.} \footnote{ The critical temperature (\ref{Tc1}) goes to zero if we take $D \to \infty$. Similarly, the second critical temperature (\ref{Tc2}) will also go to zero in this limit. (In this limit, even though the critical temperatures become very low, the $1/D$ corrections are suppressed further and these results are exact.) As a result, only the gapped phase appears at $D=\infty$. See Figure \ref{fig large d phase}.} \begin{align} \frac{T_{c1}(\mu)}{\tilde{\lambda}^{1/3}} =\frac{1}{\log D} - \frac{k_2}{2}\left(\frac{\mu}{\tilde{\lambda}^{1/3} } \right)^2+\cdots. \label{Tc1} \end{align} Here we can see that the presence of the chemical potential reduces the critical temperature. This means that the chemical potential favors the non-uniform phase. Note that, at $\mu=0$, this curve coincides with the result in \cite{Mandal:2009vz}. For finite $\mu$, the $e^{\beta \mu}$ term in (\ref{solution a}) is dominant and the curve is described as \begin{align} \frac{T_{c1}(\mu)}{\tilde{\lambda}^{1/3}}= \frac{1}{ \log \tilde{D}}\left(1-\mu/\tilde{\lambda}^{1/3} \right) +\cdots. \label{eq mu-c1} \end{align} Thus, as $\mu/\tilde{\lambda}^{1/3}$ approaches 1, the critical temperature goes to 0, and $\mu_{c1}/\tilde{\lambda}^{1/3}=1$ appears to be a critical chemical potential at $T=0$. However, the $1/D$ expansion will not be valid in such a very low temperature regime and this result is not reliable. Indeed, peculiar behavior appears around this point when we consider the $1/D$ corrections in subsection \ref{sec 1/D}. Next we evaluate the second phase transition line $|u_1|=\sqrt{-a/2b}=1/2$. For small $\mu$, the curve is given by \begin{align} \frac{T_{c2}(\mu)}{\tilde{\lambda}^{1/3}} = \frac{T_{c1}(\mu)}{\tilde{\lambda}^{1/3}} \left( 1+\frac{2}{3D} \left(1+k_2(\log D)^2 \left(\frac{\mu}{\tilde{\lambda}^{1/3} } \right)^2 \right)\right) +\cdots. \label{Tc2} \end{align} For finite $\mu/\tilde{\lambda}^{1/3}$ $(< 1 )$, the curve behaves as \begin{align} \frac{T_{c2}(\mu)}{\tilde{\lambda}^{1/3}} =\frac{T_{c1}(\mu)}{\tilde{\lambda}^{1/3}}\left(1+\frac{2}{3D}\frac{1}{1-\mu/\tilde{\lambda}^{1/3} } \right) +\cdots. \end{align} However, this equation is not valid around $\mu/\tilde{\lambda}^{1/3} \sim 1 $. In order to investigate this region, it is convenient to evaluate the saddle point equation for $u_1$, which is derived from (\ref{potential triangle}), as \begin{align} \left[ \frac{1}{D}- e^{-\beta \triangle} \left(k_1+k_2 \frac{e^{-\beta \mu}+e^{\beta \mu}}{2}\right) \right] u_1=0.
\end{align} Thus, if $u_1 \ne 0$, $\triangle$ has to satisfy \begin{align} e^{-\beta \triangle} \left(k_1+k_2 \frac{e^{-\beta \mu}+e^{\beta \mu}}{2}\right) =\frac{1}{D}. \label{triangle non-uniform 1} \end{align} Since $\beta$ will be large on the curve near $\mu/\tilde{\lambda}^{1/3} \sim 1$, we can approximately solve this equation as \begin{align} \triangle=\mu+\frac{1}{\beta} \log \tilde{D}+\cdots \label{triangle non-uniform 2}. \end{align} By putting it into the saddle point equation (\ref{saddle point low T}), we obtain \begin{align} |u_1|^2=\frac{D}{2}\left(\frac{1}{\tilde{\lambda}} \left(\mu+\frac{1}{\beta} \log \tilde{D} \right)^3-1 \right) +\cdots. \end{align} The positivity of $|u_1|^2$ requires that this solution is valid only if $\triangle^3/\tilde{\lambda} \ge 1 $. Now we can derive the second phase transition line around $\mu/\tilde{\lambda}^{1/3} \sim 1 $ by putting $|u_1|=1/2$ in this equation, \begin{align} \frac{T_{c2}(\mu)}{\tilde{\lambda}^{1/3}} =1-\mu/\tilde{\lambda}^{1/3} +\frac{1}{6\tilde{D} } +\cdots. \label{mu-c2} \end{align} Although this equation predicts a critical value of the chemical potential $\mu_{c2}/\tilde{\lambda}^{1/3}=1+1/6\tilde{D}$ at $T=0$, the $1/D$ expansion will not work there. Finally, let us confirm that the relation $\triangle \ge \mu$, which we have assumed, is always satisfied in the uniform and non-uniform phases. In the uniform phase, $\triangle/\tilde{\lambda}^{1/3}=1$ from (\ref{triangle low T}). Since the uniform phase exists up to $\mu_{c1}/\tilde{\lambda}^{1/3} = 1$, the relation $\triangle \ge \mu$ is satisfied. In the non-uniform phase, equations (\ref{triangle non-uniform 1}) and (\ref{triangle non-uniform 2}) show $\triangle \ge \mu$. A problem is the case $\triangle=\mu$, which arises on the line $T=0$, $\mu/\tilde{\lambda}^{1/3} \ge 1 $. It causes zero modes of the adjoint scalars $\Phi^I$ in (\ref{phi kinetic}). In addition, the $1/D$ expansion itself is not valid in this very low temperature regime. Therefore further analysis is necessary, but we do not consider it in this article. The analysis in the gapped phase is difficult, since all the Wilson loop operators are excited and interact with each other through the constraint $\rho(\alpha)\ge 0$. (The vertically shaded region in Figure \ref{fig phase diagram}.) We can perturbatively analyze it just above the curve $\sqrt{-a/2b}=1/2$ by assuming that the $|u_n|$ ($n\ge2$) are small \cite{Mandal:2009vz, Aharony:2003sx}. Thus it is complicated to show the relation $\triangle \ge \mu$ in general. On the other hand, if the temperature or the chemical potential is sufficiently high, $|u_n|\sim 1$ is satisfied and we can evaluate the contribution of $A_0$ perturbatively. There, the analysis is possible, as we will see in section \ref{Section high T}, and we can infer the stability of the gapped phase from these results. \subsection{$1/D$ corrections and problems in very low temperature regime} \label{sec 1/D} In this subsection, we evaluate the subleading $1/D$ corrections to the effective action (\ref{potential triangle}) in the low temperature and low chemical potential regime. We will then show that the $1/D$ expansion is not valid in the very low temperature regime. After that, we discuss how the $1/D$ corrections modify the phase structure derived in the previous section.
We show the calculation of the $1/D$ corrections in appendix \ref{app 1/d} and, by using it, we obtain the relevant terms of the effective action as \begin{align} {\cal S}(\triangle, \{u_n\})/(DN^2)=C_0+ C_2 |u_1|^2 +C_4 |u_1|^4 +\cdots +O(1/D^2) , \label{s-eff-all} \end{align} where \begin{align} C_{0}=&-\frac{\beta \triangle^4}{8 \tilde{\lambda} }+\frac{\beta \triangle}{2} \nonumber \\ &+\frac{\beta \triangle}{D} \left[ \left( 1+\frac{\tilde{\lambda} }{4\triangle^3} \right)^{\frac{1}{2} } -1-\left(\frac{\tilde{\lambda} }{4\triangle^3} \right)-\frac{1}{4}\left(\frac{\tilde{\lambda} }{4\triangle^3} \right)^2 \right], \label{C0} \end{align} \begin{align} C_{2}=& \frac{1}{D}-x\left(k_1+k_2 \frac{y+y^{-1}}{2} \right) +\frac{\beta \triangle}{D}x\left(k_1+k_2 \frac{y+y^{-1}}{2} \right) \nonumber \\ & \times \Biggl[ \left(\frac{\tilde{\lambda} }{4\triangle^3} \right) \left( 1+\frac{\tilde{\lambda} }{4\triangle^3} \right)^{-\frac{1}{2} } +\frac{\frac{\tilde{\lambda} }{4\triangle^3}}{1+\frac{\tilde{\lambda} }{4\triangle^3} } -4\left(\frac{\tilde{\lambda} }{4\triangle^3} \right)-3\left(\frac{\tilde{\lambda} }{4\triangle^3} \right)^2 \Biggr] +O(x^2), \end{align} \begin{align} C_{4}=& \frac{\beta \triangle}{2D} x^2 \left(k_1+k_2 \frac{y+y^{-1}}{2} \right)^2 \left(\frac{\tilde{\lambda} }{4\triangle^3} \right)^2 \nonumber \\ &\times \Biggl\{ \left[ -\frac12 \left( 1+\frac{\tilde{\lambda} }{4\triangle^3} \right)^{-\frac{3}{2} } -1 \right] +(2+\beta \triangle ) \left[ -\frac{1}{ \left( 1+\frac{\tilde{\lambda} }{4\triangle^3} \right)^2 } -2 \right] \Biggr\} \nonumber \\ &+\frac{\beta \triangle}{2D} x^2 \left(k_2 \frac{y-y^{-1}}{2} \right)^2 \left(\frac{\tilde{\lambda} }{4\triangle^3} \right)^2 \nonumber \\ &\times \left\{ \beta \triangle \left[ -\frac{1}{ \left( 1+\frac{\tilde{\lambda} }{4\triangle^3} \right)^2 } -2 \right] -2 - \left(1+ \frac{\tilde{\lambda} }{4\triangle^3}\right)^{-3/2}\left(1+\frac{1}{2} \frac{\tilde{\lambda} }{4\triangle^3} \right) \right\}+O(x^3) . \end{align} Here $x \equiv e^{-\beta \triangle }$ and $y \equiv e^{-\beta \mu }$. Note that the higher order terms in $x$ are irrelevant at low temperature, since the transitions happen around $x \sim 1/D$ in the $\mu=0$ case and the critical temperatures decrease as $\mu$ increases. Now we derive the effective action for the Wilson loop operators as in the previous section. By solving the saddle point equation for $\triangle$, we obtain the condensation \begin{align} \frac{ \triangle}{\tilde{\lambda}^{1/3} }=1+\frac{1}{D} \left(\frac{7\sqrt{5}}{30}-\frac{9}{32} \right) + \frac{2}{3}\bar{x}\left(k_1+k_2 \frac{y+y^{-1}}{2} \right) |u_1|^2 +\cdots , \label{condensation 1/D} \end{align} where $\bar{x}\equiv e^{-\beta \tilde{\lambda}^{1/3} }$.
By substituting this solution into (\ref{s-eff-all}), we obtain the effective action for the Wilson loops as \begin{align} {\cal S}/(DN^2)= \beta \tilde{\lambda}^{1/3} \epsilon_0 + a' |u_1|^2 + b'|u_1|^4+\cdots, \label{LG'} \end{align} with \begin{align} \epsilon_0=&\frac{3}{8}+\frac{1}{D} \left(-\frac{81}{64}+\frac{\sqrt{5}}{2} \right), \label{free energy uni} \\ a'=&\frac{1}{D}-\bar{x}\left(k_1+k_2 \frac{y+y^{-1}}{2} \right) \left( 1+ \frac{\tilde{\lambda}^{1/3}\beta }{D} \left( \frac{203}{160} -\frac{\sqrt{5}}{3} \right)\right) , \label{a in 1/D} \\ b'=& \bar{x}^2 \left(k_1+k_2 \frac{y+y^{-1}}{2} \right)^2 \left[ \frac{\tilde{\lambda}^{1/3} \beta }{3} +\frac{\tilde{\lambda}^{1/3} \beta }{D} \left( \tilde{\lambda}^{1/3} \beta \left( \frac{229}{300}-\frac{2\sqrt{5}}{9} \right) +\frac{3181}{2400}-\frac{391\sqrt{5}}{1800} \right) \right] \nonumber \\ & - \frac{\tilde{\lambda}^{1/3} \beta}{D}\bar{x}^2 \left(k_2 \frac{y-y^{-1}}{2} \right)^2 \left( \tilde{\lambda}^{1/3} \beta \frac{33}{400} +\frac{\sqrt{5}}{160} +\frac{1}{32} \right) . \label{b 1/D} \end{align} From these expressions, we immediately find that the $1/D$ expansion is problematic in the $T/\tilde{\lambda}^{1/3}<1/D$ regime, since several $1/D$ corrections involve $\beta \tilde{\lambda}^{1/3}$ factors. Thus, if $\beta \tilde{\lambda}^{1/3} \sim D$ ($T/\tilde{\lambda}^{1/3} \sim 1/D$), these corrections become of the same order as the leading terms. Similar terms may arise in the higher $1/D$ corrections also, and the $1/D$ expansion will not be reliable in such a very low temperature regime. Thus the $1/D$ expansion would be valid only down to $T/\tilde{\lambda}^{1/3} \sim D^{-\gamma}$, where $\gamma$ is a positive constant\footnote{ $\gamma$ will be less than 1 through the arguments in this section. In order to determine $\gamma$ precisely, we would have to evaluate the higher order corrections of the $1/D$ expansion, which has not been done.} \footnote{ In the $\mu=0$ case, the very low temperature regime is in the uniform phase and physical quantities do not depend on $T$. Thus this problem was not observed in \cite{Mandal:2009vz}. However, we cannot rule out a slight possibility that new phases appear in this regime. Fortunately, numerical analyses show that such new phases do not arise \cite{Aharony:2005ew, Aharony:2004ig, Kawahara:2007fn, Azuma:2007fj, Azeyanagi:2009zf}. }. This regime is depicted as the inclined shaded region in Figure \ref{fig phase diagram}. Now we evaluate the effective action (\ref{LG'}) and discuss the low temperature phase structure including the $1/D$ corrections. We can show that $b'$ is positive at $a'=0$. Thus the argument in the previous section is still valid and the phase structure does not change. The curves $a'=0$ and $\sqrt{-a'/2b'}=1/2$ give the two phase transition lines in the $\mu-T$ plane. However, we can see that the curve $a'=0$ again ends at $\mu/\tilde{\lambda}^{1/3}=1$ on $T=0$. This result is the same as the leading order result in (\ref{eq mu-c1}). This strange fact also indicates that the $1/D$ expansion is invalid at very low temperature. \subsection{General chemical potential} \label{sec general mu} Until now, we have studied the phase structure only for the simple chemical potential (\ref{simple mu}). In this subsection, we allow each $\mu_I$ to take an arbitrary value and consider general chemical potentials.
Before evaluating the general chemical potentials, we discuss the case of small $k_2 \sim 1/D$, in which the contribution of the chemical potential is comparable to the $1/D$ corrections derived in the previous subsection. Even in this case, the phase structure in the low temperature and low chemical potential regime is similar. On the curve $a'=0$, if $\mu$ is sufficiently close to $\tilde{\lambda}^{1/3} $, $k_2 e^{-\beta(\tilde{\lambda}^{1/3} -\mu)}$ will be dominant in (\ref{a in 1/D}) even if $k_2$ is small. Thus the $\mu$ dependence is still large there. The curve $\sqrt{-a'/2b'}=1/2$ also depends strongly on $\mu$ in a certain regime. Therefore the qualitative nature of the phase structure is not modified. Now we consider the general chemical potentials. In this case, the one-loop contributions from the complex adjoint scalars are modified, and the second line of the effective action (\ref{potential triangle}) becomes \begin{align} &\frac{\beta \triangle}{2} -\sum_{n=1}^\infty \left[ e^{-n\beta \triangle} \left(\frac{D-2 \lfloor D/2 \rfloor}{D} +\frac{2}{D}\sum_{I=1}^{\lfloor D/2 \rfloor} \frac{e^{-n\beta \mu_I}+e^{n\beta \mu_I}}{2}\right) \right] \frac{|u_n|^2}{n} . \label{potential general mu} \end{align} During the derivation of this potential, we have assumed $\triangle \ge \mu_I$ for all $\mu_I$. Here we consider the phase structure. As we have argued in the case of $k_2 \sim 1/D$, even if each contribution of the chemical potentials appears with a $1/D$ factor, the largest chemical potential will dominate in some regimes and fix the phase structure. Therefore we obtain a similar phase structure. Besides, we can confirm that the condensation satisfies $\triangle(\beta,\{\mu_I \})>\mu_I$ in the uniform and non-uniform phases. In principle, there is a possibility that a different phase structure appears, since we have deformed the potential through the chemical potentials. However, it does not happen. In particular, at the leading order of the $1/D$ expansion, we can prove that the phase structure is determined by the $|u_n|$ independent terms of the effective action (\ref{potential triangle}). Since the $|u_n|$ independent terms of the potential (\ref{potential general mu}) are the same as the previous ones in (\ref{potential triangle}), the phase structure does not change. We show this in appendix \ref{app LG}.\\ As we have mentioned in the introduction, it was expected that the study of the low temperature thermodynamics of the gauge theories with finite chemical potentials would be difficult because of the perturbative instability of the massless adjoint scalars. However, in our study, the condensation $\triangle(\beta, \mu)$ of the adjoint scalars protects our model from the instability and we can investigate the low temperature phase structure. In the next section, we will explore the properties of the condensation in different regimes. \section{Condensation in high temperature and high chemical potential regime} \label{Section high T} \subsection{Condensation in the large $D$ limit} \label{sec high T} We investigate the properties of our model in the high temperature regime and the high chemical potential regime. In these regimes, the phase will be the gapped (deconfinement) phase and we can use the approximation $|u_n| \sim 1$ \cite{Yamada:2006rx, Mandal:2009vz, Aharony:2003sx}. Then we can use a perturbative analysis in $A_0$ around $A_0=0$.
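Setting $|u_n|=1$ in the saddle point equation (\ref{saddle point triangle}) and resumming the $n$-sum gives a single equation for $\triangle$, which can be solved by bisection. The following minimal Python sketch does this; the units $\tilde{\lambda}=1$ and the values $k_1=k_2=1/2$, $T=10^3$, $\mu=0.1$ are arbitrary samples, and the printed comparison anticipates the high temperature asymptotics derived below:
\begin{verbatim}
import numpy as np

def nB(x, beta):
    # Bose factor 1/(e^{beta x} - 1)
    return 1.0/np.expm1(beta*x)

def f(Delta, beta, mu, k1, k2):
    # Delta^3 - RHS of the |u_n| = 1 saddle point equation (units lam = 1)
    rhs = (1.0 + 2.0*k1*nB(Delta, beta)
           + k2*nB(Delta + mu, beta) + k2*nB(Delta - mu, beta))
    return Delta**3 - rhs

def condensate(beta, mu, k1=0.5, k2=0.5):
    lo, hi = mu + 1e-12, mu + 1.0   # f(lo) < 0: n_B(Delta - mu) blows up
    while f(hi, beta, mu, k1, k2) < 0.0:
        hi *= 2.0
    for _ in range(200):            # bisection: LHS increases, RHS decreases
        mid = 0.5*(lo + hi)
        if f(mid, beta, mu, k1, k2) < 0.0: lo = mid
        else: hi = mid
    return 0.5*(lo + hi)

beta, mu = 1e-3, 0.1
print(condensate(beta, mu), (2.0/beta)**0.25)   # ~6.69 in both cases
\end{verbatim}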
In this subsection, we consider the simple chemical potential (\ref{simple mu}) and we will show that a unique condensation $\triangle(\beta,\mu)>\mu$ exists for an arbitrary value of $\mu$ in the large $D$ limit. This conclusion will be modified in the finite $D$ case, as we will see in the next subsection. In the large $D$ limit, we can ignore the fluctuations of $A_0$ and it is enough to evaluate the saddle point equation (\ref{saddle point triangle}) only. In the $|u_n|=1$ case, this equation becomes \begin{align} \frac{\triangle^3}{\tilde{\lambda} }=& 1 +2\sum_{n=1}^\infty e^{-n\beta \triangle} \left[ k_1+k_2 \left( \frac{e^{-n\beta \mu}+e^{n\beta \mu}}{2} \right) \right] \nonumber \\ =& 1 +\frac{2k_1}{e^{\beta \triangle}-1}+\frac{k_2}{e^{\beta (\triangle+\mu)}-1} +\frac{k_2}{e^{\beta (\triangle-\mu)}-1}. \label{saddle point high temperature} \end{align} The left hand side of this equation is monotonically increasing from 0 to infinity with respect to $\triangle$. On the other hand, the right hand side is monotonically decreasing from infinity to 0 in the $\triangle> \mu$ region. As a result, this equation has a unique solution $\triangle(\beta,\mu)$ in the $\triangle> \mu$ region. (See Figure \ref{fig condensation}. Similar behaviour can be seen in the $d=2$ and $3$ dimensional gauge theories also.) There is another solution in $0<\triangle<\mu$. However, this solution is unphysical, since we have assumed $\triangle> \mu$ when we derived the saddle point equation (\ref{saddle point high temperature}). Now we show solutions of (\ref{saddle point high temperature}) in several cases. In the $k_1=0$ case, if $\beta$ is sufficiently small, we obtain \begin{align} \triangle^2=&\frac{1}{2} \mu^2+\frac{1}{2} \sqrt{\mu^4+8\frac{\tilde{\lambda}}{\beta}} & ( \beta \tilde{\lambda}^{1/3}\ll 1 ) . \end{align} In the $k_1 \ne 0$ case, if $\beta$ and $\mu$ are small, we obtain \begin{align} \triangle=&\left( \frac{2\tilde{\lambda}}{\beta}\right)^{1/4} \left(1 +\frac{k_2 \mu^2}{4} \sqrt{\frac{\beta}{2\tilde{\lambda} } } \right)+\cdots & (\mu^4 \beta /\tilde{\lambda} \ll 1,~ \beta \tilde{\lambda}^{1/3}\ll 1 ). \label{cond-high-T} \end{align} These two solutions are valid at very high temperature. If $\triangle$ is close to $\mu$, we obtain \begin{align} \triangle=& \mu + \frac{ k_2 \tilde{\lambda} }{\beta \mu^3} +\cdots & (\mu/\tilde{\lambda}^{1/3} \gg (k_2 T/\tilde{\lambda}^{1/3})^{1/4} ). \label{cond-high-mu} \end{align} This solution is valid in the very high chemical potential regime. Note that this condensation induces a light effective mass $\triangle^2-\mu^2 \sim 2 k_2 \tilde{\lambda} /\beta \mu^2$ for $\Phi^I$ in (\ref{phi kinetic}). In the next subsection, we will see that such light mass modes cause a divergence in the $1/D$ corrections and the $1/D$ expansion does not work there. Thus the arguments in the very high chemical potential regime will be valid only in the $D\to \infty$ case. \subsection{$1/D$ correction in very high chemical potential regime} \label{sec high mu 1/D } In this subsection, we evaluate the $1/D$ corrections in the very high chemical potential regime ($\mu/\tilde{\lambda}^{1/3} \gg (k_2 T/\tilde{\lambda}^{1/3})^{1/4} $), in which equation (\ref{cond-high-mu}) is satisfied, and discuss the validity of the $1/D$ expansion. In the very high chemical potential regime, since $\beta(\triangle-\mu)$ will be small and $u_n$ will be close to $1$, the zero-modes of the Matsubara frequencies of $\Phi^I$ will be dominant.
\subsection{$1/D$ correction in the very high chemical potential regime} \label{sec high mu 1/D } In this subsection, we evaluate the $1/D$ corrections in the very high chemical potential regime ($\mu/\tilde{\lambda}^{1/3} \gg (k_2 T/\tilde{\lambda}^{1/3})^{1/4} $), in which equation (\ref{cond-high-mu}) is satisfied, and discuss the validity of the $1/D$ expansion. In the very high chemical potential regime, since $\beta(\triangle-\mu)$ is small and $u_n$ is close to $1$, the zero-mode of the Matsubara frequencies of $\Phi^I$ is dominant. Therefore the relevant $1/D$ corrections arise from the loops of $\Phi^I$ and $b_{ab}$ and from the path integral over $A_0$. First we evaluate the $1/D$ correction from $A_0$. In the very high chemical potential regime, the dominant $\triangle$ dependent terms in (\ref{log det}) can be evaluated as \begin{align} \frac{1}{2}\log\left\{ \left(\alpha_j-\alpha_i\right)^4+2\left(\triangle^2+\mu^2\right)\left(\alpha_j-\alpha_i\right)^2+\left(\triangle^2-\mu^2\right)^2\right\} . \end{align} Then the effective action for $A_0$ is given by \begin{align} \sum_{i,j}\frac{\tilde{D}}{2}\log\left\{ \left(\alpha_j-\alpha_i\right)^4+2\left(\triangle^2+\mu^2\right)\left(\alpha_j-\alpha_i\right)^2+\left(\triangle^2-\mu^2\right)^2\right\} -\frac{1}{2} \log(\alpha_j-\alpha_i)^2, \label{log det high mu} \end{align} where the last term is derived from (\ref{FP}) by assuming that the $\alpha_i$ are small. Now we assume $\triangle^2-\mu^2 \gg (\alpha_j-\alpha_i)^2$. Then we can expand the first $\log$ term and obtain\footnote{If we instead start from the assumption $\triangle^2-\mu^2 \ll (\alpha_j-\alpha_i)^2$ in (\ref{log det high mu}), the attractive force between the $\alpha_i$ will be quite strong and this assumption will not be satisfied.} \begin{align} N^2\tilde{D}\log (\triangle^2-\mu^2) + 2N\tilde{D} \frac{(\triangle^2+\mu^2)}{(\triangle^2-\mu^2)^2 }\sum_{i=1}^N\alpha_i^2 -\sum_{i,j}\frac{1}{2} \log(\alpha_j-\alpha_i)^2. \end{align} Note that this action is just Gaussian in $A_0$, \begin{align} N^2\tilde{D}\log (\triangle^2-\mu^2) + 2\tilde{\lambda}k_2 \frac{(\triangle^2+\mu^2)}{(\triangle^2-\mu^2)^2 }\Tr A_0^2/g^2. \end{align} Since the coefficient of $A_0^2$ is sufficiently large in the $\mu/\tilde{\lambda}^{1/3} \gg (k_2 T/\tilde{\lambda}^{1/3})^{1/4} $ regime, the $\alpha_i$ will be strongly trapped around $\alpha_i=0$. Thus the assumption $\triangle^2-\mu^2 \gg (\alpha_j-\alpha_i)^2$ will be satisfied. Then the Gaussian integral over $A_0$ gives a $1/D$ correction, and the effective action for $\triangle$ in the very high chemical potential regime becomes \begin{align} S_{eff}(\triangle)/DN^2=& -\frac{\triangle^4}{8\tilde{\lambda}T} +\frac{ \tilde{D}}{D}\log (\triangle^2-\mu^2) -\frac{1}{2D}\log\left( \frac{(\triangle^2-\mu^2)^2}{(\triangle^2+\mu^2)}\right) \nonumber \\ &+\left(O\left( \frac{1}{D} \right) ~\text{from matter loops} \right) + \cdots . \label{1/d A high mu} \end{align} Here the first and second terms come from (\ref{potential triangle}) with the approximation $\beta(\triangle-\mu) \ll 1$, while the third term comes from the $A_0$ integral. Thus the $1/D$ correction from $A_0$ is qualitatively unimportant in the large $\tilde{D}$ case. However, if $\tilde{D}=1$, this correction cancels the second term. As a result, the arguments in the previous section are not valid and (\ref{cond-high-mu}) is not satisfied, so the assumption $\triangle \sim \mu$ in the $\mu/\tilde{\lambda}^{1/3} \gg (k_2 T/\tilde{\lambda}^{1/3})^{1/4} $ regime is not ensured. Therefore a different analysis is necessary in the $\tilde{D}=1$ case, and we will present it in section \ref{sec D=1}.
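The $\tilde{D}=1$ cancellation just described can be verified in one line. The following symbolic Python sketch (illustrative only, not part of the derivation) confirms that no $\log(\triangle^2-\mu^2)$ term survives in (\ref{1/d A high mu}) when $\tilde{D}=1$:
\begin{verbatim}
import sympy as sp

# Sketch: for Dtilde = 1 the A0-integral term in (1/d A high mu) cancels
# the log(Delta^2 - mu^2) piece coming from the matter determinant.
D, Delta, mu = sp.symbols('D Delta mu', positive=True)
Dtilde = 1
matter = (Dtilde / D) * sp.log(Delta**2 - mu**2)
a0 = -sp.Rational(1, 2) / D * sp.log((Delta**2 - mu**2)**2
                                     / (Delta**2 + mu**2))
total = sp.expand_log(matter + a0, force=True)
print(sp.simplify(total))  # log(Delta**2+mu**2)/(2*D): no singular log
\end{verbatim}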
Now we evaluate the subleading $1/D$ corrections from the matter loops. In the large $\tilde{D}$ case, we have confirmed that $A_0$ is sufficiently small, so we can ignore it in the loop calculation at this order. Then the dominant contributions of the matter loops come from the zero modes of the Matsubara frequencies of $\Phi^I$ and $b_{ab}$. Therefore we can evaluate them by using the zero dimensional reduced model \begin{align} S_{0d}= -\frac{1}{4}\tilde{b}_{ab}M_{ab,cd}^{-1}\tilde{b}_{cd}+ \sum_{I=1}^{\tilde{D}} \left( (\triangle^2-\mu^2) \tilde{\Phi}_a^{\dagger I} \tilde{\Phi}_a^I +g\sqrt{T} \tilde{b}_{ab}\tilde{\Phi}_a^{\dagger I} \tilde{\Phi}_b^I \right) . \label{zero dim matrix model} \end{align} Here $\tilde{b}_{ab}$ and $\tilde{\Phi}^I_a$ are the zero-modes of the one dimensional fields. Then we can calculate the $1/D$ corrections to the effective action (\ref{1/d A high mu}) as in appendix B of \cite{Mandal:2009vz}, \begin{align} \frac{1}{D}\left( -\frac{k_2\tilde{\lambda}T}{(\triangle^2-\mu^2)^2} -\frac{1}{2} \left(\frac{ k_2 \tilde{\lambda}T}{(\triangle^2-\mu^2)^2} \right)^2 -\frac{1}{2} \sum_{m=1}^\infty \frac{1}{m} \left( -\frac{k_2\tilde{\lambda}T}{(\triangle^2-\mu^2)^2} \right)^m \right) . \label{1/d matter high mu} \end{align} From this expression, we notice that if $(\triangle^2-\mu^2)^2 < k_2 \tilde{\lambda}T$, the last sum does not converge. This means that the $1/D$ expansion does not work in this regime. From (\ref{cond-high-mu}), this happens when \begin{align} \mu^4 > 4 k_2 \tilde{\lambda}T. \label{bound high mu} \end{align} Note that this estimate is crude, since, if $\mu$ is not sufficiently larger than $( k_2 \tilde{\lambda}T)^{1/4}$, the reduced model analysis (\ref{zero dim matrix model}) is not valid. In particular, in the uniform and the non-uniform phases, the contribution of the gauge field is relevant and the $1/D$ expansion still works, as we discussed in section \ref{sec 1/D}, even if (\ref{bound high mu}) is satisfied. Taking this into account, we conclude that the $1/D$ expansion is not valid in the horizontally shaded region shown schematically in Figure \ref{fig phase diagram}. Although the $1/D$ expansion does not work in this regime, we cannot conclude that the system is unstable there. The divergence of the $1/D$ correction in (\ref{1/d matter high mu}) arises from the loops of the light modes of $\Phi^I$, and it is not clear whether it indicates an instability of the system or not. \paragraph{$1/D$ correction at finite chemical potential} Here we consider the $1/D$ corrections in the finite chemical potential regime $\mu/\tilde{\lambda}^{1/3} < (k_2 T/\tilde{\lambda}^{1/3})^{1/4} $. At finite temperature, the calculation of the corrections is complicated but, if the temperature is sufficiently high, a zero dimensional analysis similar to (\ref{zero dim matrix model}) is possible. Then it is not difficult to show that the $1/D$ corrections converge if $(\triangle^2-\mu^2)^2 > k_2 \tilde{\lambda} T$ is satisfied. Therefore we expect that the $1/D$ expansion is valid in the finite temperature case also. By using the $1/D$ expansion, we have shown that, if the chemical potential is not very high, our model is stable in both the high temperature and the low temperature regimes. The stability in both regimes supports the stability of the unexplored regime of the gapped phase (the vertically shaded region in Figure \ref{fig phase diagram}). However, we have investigated only one condensate vacuum, and there is a possibility that this vacuum is just a local minimum and more stable vacua exist\footnote{The appearance of other phases might be natural. If $\mu$ is large, $ \langle \Tr | \Phi^I |^2 \rangle \gg \langle \Tr Y^{i2} \rangle$ will be satisfied, since the chemical potential makes the effective masses of $\Phi^I$ light.
Then the eigenvalue distribution of the adjoint scalars will be pancake-like and, intuitively, a doughnut-like distribution may be favoured. If such a transition happens, it may correspond to the Myers-Perry black hole/black ring transition in higher dimensional gravity \cite{Emparan:2008eg}. Besides, several intermediate deformed Myers-Perry black hole solutions also exist in gravity \cite{Dias:2009iu}. However, we have not found such new phases in our model, and it would be interesting to investigate them further. }. Another possibility is that the model is unbounded from below at finite chemical potential. Thus we conclude that our model is at least meta-stable if the chemical potential is not very high. \subsection{Condensation in the $\tilde{D}=1$ case} \label{sec D=1} As we have seen in equation (\ref{1/d A high mu}), the $\tilde{D}=1$ case is special. In this subsection, we show that a consistent condensation occurs in the $\tilde{D}=1$ case also, by using a different analysis up to $1/D$ order. However, this condensation gives rise to a small effective mass $\triangle^2-\mu^2 $ at very high chemical potential, which causes a divergence at the next order of the expansion. This is similar to the behaviour of the $\tilde{D}>1$ case, and the $1/D$ expansion does not work in the very high chemical potential regime. In addition, we will show that our model has complex fuzzy-sphere-like saddle points in the $\tilde{D}=1$ case, although their physical interpretation is not yet understood. For simplicity, we consider only the sufficiently high temperature regime ($\beta \tilde{\lambda}^{1/3} \ll 1$). Then the zero-modes of the Matsubara frequencies are dominant and the model reduces to a zero dimensional one, \begin{align} S = & \Tr \left( -2\mu i g_0 \Phi^{\dagger } \left[ A_0, \Phi\right] -\mu^2 \Phi^{\dagger }\Phi \right) \nonumber \\ &+ \sum_{i,j=1}^{D-2}g^2_{0} \left(\Phi^{\dagger }_a \Phi_b +\frac{1}{2} Y^{i}_a Y^{i}_b +\frac{1}{2}A_{0a}A_{0b}\right) M_{ab,cd} \left(\Phi^{\dagger }_c \Phi_d +\frac{1}{2} Y^{j}_c Y^{j}_d +\frac{1}{2}A_{0c}A_{0d}\right), \label{action 0d} \end{align} where we have rescaled the matrices appropriately and the coupling is defined as $g_0^2=g^2T$. By employing an auxiliary matrix $B_{ab}=\triangle^2 \delta_{ab}+g_0b_{ab}$, where $b_{ab}$ satisfies $b_{aa}=0$, this action becomes \begin{align} S = & \Tr \left( -2\mu g_0 W \left[ A_0, X \right]+\frac{\triangle^2 -\mu^2}{2} \left(X^2+W^2 \right) +\frac{\triangle^2}{2}A_0^2 + \sum_{i=1}^{D-2} \frac{\triangle^2}{2}Y^{i2} \right) \nonumber \\ & -\frac{DN^2 \triangle^4}{8\tilde{\lambda}_0} -\frac{1}{4}b_{ab}M_{ab,cd}^{-1}b_{cd} +\frac{g_0}{2}b_{ab}\left(Y^{i}_a Y^{i}_b+X_a X_b+W_aW_b +A_{0a}A_{0b}\right). \end{align} Here we have used $\Phi=(X+iW)/\sqrt{2}$ and $\tilde{\lambda}_0=g_0^2 ND$. We evaluate this action up to the second order of the $1/D$ expansion. At this order, we can ignore the interactions between $A_0, X, W$ and $b_{ab}$ in the last term.
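As a quick consistency check of this rewriting, the $\mu$-dependent cross term can be tested numerically with random Hermitian matrices (a Python sketch for illustration only; the matrix size and the values of $\mu$ and $g_0$ are arbitrary):
\begin{verbatim}
import numpy as np

# Sketch with random Hermitian matrices (size and couplings arbitrary):
# check Tr(-2 mu i g0 Phi^dag [A0, Phi]) = Tr(-2 mu g0 W [A0, X])
# for Phi = (X + i W)/sqrt(2), i.e. the cross term quoted above.
rng = np.random.default_rng(0)

def hermitian(N):
    M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (M + M.conj().T) / 2

N, mu, g0 = 6, 0.7, 1.3
X, W, A0 = hermitian(N), hermitian(N), hermitian(N)
Phi = (X + 1j * W) / np.sqrt(2)
lhs = np.trace(-2j * mu * g0 * Phi.conj().T @ (A0 @ Phi - Phi @ A0))
rhs = np.trace(-2 * mu * g0 * W @ (A0 @ X - X @ A0))
print(np.allclose(lhs, rhs))   # True
\end{verbatim}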
By integrating out $Y^i$ and $b_{ab}$ with a technique of \cite{Mandal:2009vz}, we obtain \begin{align} S = & \Tr \left( -2\mu g_0 W \left[ A_0, X \right]+\frac{\triangle^2 -\mu^2}{2} \left(X^2+W^2 \right) +\frac{\triangle^2}{2}A_0^2 \right) \nonumber \\ &+DN^2\Biggl[ -\frac{ \triangle^4}{8\tilde{\lambda}_0} +\frac{k_1}{4} \log \triangle^4 \nonumber \\ &+\frac{1}{D} \left(-\frac{k_1 \tilde{\lambda}_0}{\triangle^4} -\frac{1}{2}\left(\frac{k_1 \tilde{\lambda}_0}{\triangle^4} \right)^2 +\frac{1}{2} \log\left(1+\frac{k_1 \tilde{\lambda}_0}{\triangle^4} \right) \right) +O\left(\frac{1}{D^2} \right) \Biggr]. \label{action 0d 2} \end{align} Here $k_1=(D-2)/D$. Now we evaluate the path integral over $X, W$ and $A_0$ and derive an effective action for $\triangle$. A formula for the following three matrix model is available \cite{Kazakov:1998ji}: \begin{align} &S = \Tr \left\{aM_1[M_2,M_3] + \frac{b}{2}\left(M_1^2+M_2^2 \right) +\frac{c}{2} M_3^2 \right\}, \nonumber \\ Z=&\int DM_1 DM_2 DM_3 \exp\left(-S \right) \nonumber \\ =& C a^{-N^2} \int d m_1 \cdots d m_N \prod_{i \ne j}\frac{m_i - m_j}{ m_i - m_j+1 } \prod_{i} e^{-\lambda_M m_i^2} \nonumber \\ = & C a^{-N^2} e^{-N^2 F_0(\lambda_M)}+\cdots, \end{align} where $a,b,c$ are constants and $\lambda_M \equiv N/g_M^2 = cb^2/2a^2$. $C$ is an irrelevant factor, and we have ignored $1/N$ corrections. Here the free energy is given by \begin{align} F_0(\lambda_M) & \to -\frac{1}{2}\log g_M^2 +\frac{1}{2}g_M^2 + \cdots &(g_M \rightarrow 0) , \label{small g_m} \\ &\to \frac{3(12\pi)^{2/3}}{40}g_M^{-2/3}+O(g_M^{-5/3}) &(g_M \rightarrow \infty) . \end{align} In our case, from (\ref{action 0d 2}), $g_M$ becomes \begin{align} g_M^2=\frac{8\mu^2 \tilde{\lambda}_0}{\triangle^2(\triangle^2-\mu^2)^2D} . \end{align} If $\mu$ is small, $g_M$ will also be small. Then (\ref{small g_m}) gives the standard $\log$ terms plus $O(1/D^2)$ corrections. Thus, in the small $\mu$ case, a consistent condensation $\triangle>\mu$ occurs as usual. On the other hand, if $\mu$ is large and $\triangle^2-\mu^2 $ is small, then $g_M^2$ will be large. In this case, we obtain the effective action for $\triangle$ as \begin{align} \frac{S(\triangle)}{DN^2}=&-\frac{\triangle^4}{8\tilde{\lambda}_0} +\frac{k_1}{4} \log \triangle^4 + \frac{3(12\pi)^{2/3}}{40D} \left( \frac{\triangle^2(\triangle^2-\mu^2)^2 D}{8\mu^2 \tilde{\lambda}_0} \right)^{1/3} \nonumber \\ & +\frac{1}{D}\left(-\frac{k_1 \tilde{\lambda}_0}{\triangle^4} -\frac{1}{2}\left(\frac{k_1 \tilde{\lambda}_0}{\triangle^4} \right)^2 +\frac{1}{2} \log\left(1+\frac{k_1 \tilde{\lambda}_0}{\triangle^4} \right) \right) +\cdots. \label{D=1 effective action} \end{align} From this action, we can derive a saddle point equation for $\triangle$ and obtain a unique saddle point $\triangle(\mu)$ which satisfies $\triangle(\mu)-\mu \gtrsim 0$ for any $\mu$. Therefore, up to this order of the $1/D$ expansion, we obtain a consistent condensation and the system is stable even in the $\tilde{D}=1$ case. We have used the high temperature approximation in this section, but a similar analysis should be possible at finite temperature as well. However, since $\triangle^2-\mu^2$ will be small in the very high chemical potential regime, the next order terms will diverge and the $1/D$ expansion will not be valid, as in the large $\tilde{D}$ case. (The appearance of the fractional power of $D$ in the third term of (\ref{D=1 effective action}) also indicates a problem with the $1/D$ expansion in this regime.)
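Which branch of $F_0(\lambda_M)$ applies can be read off directly from $g_M^2$; the following Python sketch (with purely illustrative values of $\tilde{\lambda}_0$, $D$, $\mu$, and $\triangle$) evaluates the effective coupling in the two regimes discussed above:
\begin{verbatim}
import numpy as np

# Sketch: the effective three matrix coupling g_M^2 of the Kazakov
# model, evaluated from (action 0d 2) with illustrative lambda_0 = 1
# and D = 100.
def gM2(delta, mu, lam0=1.0, D=100):
    return 8.0 * mu**2 * lam0 / (delta**2 * (delta**2 - mu**2)**2 * D)

print(gM2(1.2, 0.1))   # small mu, Delta well above mu:  g_M^2 << 1
print(gM2(3.01, 3.0))  # large mu, Delta close to mu:    g_M^2 >> 1
\end{verbatim}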
\paragraph{Fuzzy solutions?} Now we discuss possible saddle points of the zero dimensional action (\ref{action 0d 2}). Since the cubic interaction in (\ref{action 0d 2}) can be regarded as a CS-like term, the action has a complex fuzzy-sphere-like saddle point \cite{Ishiki:2010pe}, \begin{align} X = -i \frac{\sqrt{\triangle^2(\triangle^2-\mu^2)}}{2\mu g} J_1, ~W =-i \frac{\sqrt{\triangle^2(\triangle^2-\mu^2)}}{2\mu g} J_2, ~A_0= -i \frac{\triangle^2-\mu^2}{2\mu g} J_3, \end{align} where the $J_i$ are the generators of the $N$ dimensional irreducible representation of $SU(2)$, which satisfy $[J_i,J_j]=i \epsilon_{ijk}J_k$. By replacing the $J_i$ with reducible representations, we can obtain many saddle points. However, these fuzzy-sphere-like solutions are not Hermitian\footnote{ At these saddle points, $\triangle<\mu$ may not be forbidden. (However, it will be unstable.) Then $X$ and $W$ can be Hermitian, but $A_0$ is still not. } and their physical interpretation is unclear. In addition to these complex saddle points, it might be possible to find different fuzzy solutions in (\ref{action 0d}) or (\ref{action 0d 2}), as in the studies in \cite{Iso:2001mg, Ishiki:2010pe, Jatkar:2001uh, Kimura:2001uk, Azuma:2004zq}. \paragraph{General chemical potentials} Now we consider the general chemical potentials as in section \ref{sec general mu}. If several chemical potentials take the same value and are larger than the others, the analysis of the previous section is valid. If only one chemical potential is very large, we can approximately apply the analysis of this section by ignoring the other chemical potentials. Thus the consistent condensation $\triangle>\mu_I$ always occurs up to $1/D$ order. \section{High temperature condensation in higher dimensional gauge theory} \label{sec high d} In this section, we generalize our argument about the condensation to $d$ dimensional gauge theory, \begin{align} S = \int_0^{\beta} dt \int d^{d-1}x \, \Biggl[ & \Tr \left( \frac{1}{4g^2_{d}} F_{\mu\nu}^2 - \sum_{I=1}^{\tilde{D}}\Phi^{\dagger I} \left( (D_0-\mu)^2 +D_i^2 \right) \Phi^{I} - \sum_{i=2\tilde{D}+1}^D \frac12 Y^{i} D_{\mu}^2 Y^{i} \right) \nonumber \\ & + \sum_{I,J,i,j}g^2_{d} \left(\Phi^{\dagger I}_a \Phi^{I}_b +\frac{1}{2} Y^{i}_a Y^{i}_b \right) M_{ab,cd} \left(\Phi^{\dagger J}_c \Phi^{J}_d +\frac{1}{2} Y^{j}_c Y^{j}_d \right) \Biggr]. \label{action high d} \end{align} Here $g_{d}$ is the gauge coupling. We consider the simple chemical potential (\ref{simple mu}). We will show that the condensation of the adjoint scalars can occur, at least in the high temperature regime, at the leading order of the $1/D$ expansion. First we employ an auxiliary field $B_{ab}$ as in the $d=1$ case. Then we assume that this field condenses as $B_{ab}=\triangle^2\delta_{ab}$, where $\triangle$ does not depend on time or position. We also assume that this condensation satisfies $\triangle > \mu$. Under these assumptions, we can exactly derive a saddle point equation that determines the condensation $\triangle$ in the large $D$ limit. In the large $D$ limit, we can ignore the contribution of the spatial components of the gauge field $A_i$, since $D \gg d-1$. The interactions between $b_{ab}$ and $\Phi^I$ and $Y^i$ are also suppressed. (Here $b_{ab}$ is defined as in (\ref{fluct}) and satisfies $\int dt\, d^{d-1}x\, b_{aa}=0 $.) In the high temperature regime, we can also ignore $A_0$. Then we can integrate out $\Phi^I$ and $Y^i$ and obtain the effective action for $\triangle^2$ as in section \ref{sec Eff}.
From this effective action, the saddle point equation is obtained as \begin{align} \frac{\triangle^2}{\tilde{\lambda}_{d} T} =& \sum_{n} \int \frac{ d^{d-1}p}{(2\pi)^{d-1}} \Biggl[ \frac{2k_1}{\left(\frac{2\pi n}{\beta} \right) ^2 + \vec{p}^2+\triangle^2 } \nonumber \\ &+\frac{k_2}{\left(\frac{2\pi n}{\beta} +i \mu \right) ^2 + \vec{p}^2+\triangle^2 } + \frac{k_2}{\left(\frac{2\pi n}{\beta} -i \mu \right) ^2 + \vec{p}^2+\triangle^2 } \Biggr]. \end{align} Here $\tilde{\lambda}_{d} \equiv g_d^2 N D$ is the $d$ dimensional 't~Hooft-like coupling. Solving this equation in general is still complicated, so we evaluate it in the high temperature limit. There, the zero modes of the Matsubara frequencies are dominant and we can ignore the non-zero modes. Then the equation simplifies to \begin{align} \frac{(2\pi)^{d-1}}{2V_{d-2}} \left( \frac{\Lambda^{5-d}}{ \tilde{\lambda}_{d} T} \right) \left( \frac{\triangle^2}{\Lambda^2}\right) =& k_1 f_{d} (\triangle^2/\Lambda^2)+ k_2 f_{d} ((\triangle^2-\mu^2)/\Lambda^2), \label{condensation high d} \end{align} where \begin{align} f_{d}(x) \equiv \Lambda^{3-d} \int_0^\Lambda dp \frac{ p^{d-2} }{ p^2+x \Lambda^2 }. \end{align} Here $V_{d-2}$ is the volume of the $d-2$ dimensional unit sphere and $\Lambda$ is a momentum cutoff. $T$ appears in this equation only through the combination $\tilde{\lambda}_{d}T$; thus we can regard $\tilde{\lambda}_{d}T$ as an effective coupling. This equation determines $\triangle$ in terms of the effective coupling $\tilde{\lambda}_{d}T$, $\mu$, and the cutoff $\Lambda$. We tune the $\Lambda$ dependence of $\tilde{\lambda}_{d}T$ such that the condensation $\triangle$ is fixed to a particular physical value at a given chemical potential $\mu$. \FIGURE{ \includegraphics[scale=0.75]{condensation.eps} \caption{Solution of (\ref{condensation high d}). In the $d\le 3$ case (left plot), the curve represents the schematic behaviour of the right hand side of (\ref{condensation high d}) in the $\triangle^2>\mu^2$ region; it diverges at $\triangle^2=\mu^2$. The straight line is the left hand side of (\ref{condensation high d}), and the crossing point gives the solution of (\ref{condensation high d}). Note that a unique solution exists for any value of $\mu$. In the $d\ge 4$ case (right plot), the curve does not diverge at $\triangle^2=\mu^2$. As a result, the curve cannot cross the line for small $\tilde{\lambda}_{d} $ (or large $\mu$), and no solution exists in this case. } \label{fig condensation}} Now we evaluate equation (\ref{condensation high d}). We show the explicit expressions for $f_d(x)$ in appendix \ref{app fd}, from which the condensation can be derived. Instead of doing so, here we derive the condition for the existence of the condensation in the $\triangle> \mu$ region from the qualitative properties of $f_d(x)$. We can show that $f_d(x)$ is monotonically decreasing for $x>0$ and behaves as \begin{align} f_d(x) \left\{ \begin{array}{lll} \to \infty & x \to +0 &(d \le 3)\\ \to 1/(d-3) & x \to +0 &(d \ge 4)\\ \to 0 & x \to +\infty & \end{array} \right. . \label{fd} \end{align} Thus the right hand side of (\ref{condensation high d}) diverges at $\triangle=\mu$ in the $d\le 3$ case but does not in the $d\ge 4$ case. Here the consistent solution of (\ref{condensation high d}) is given by a crossing point in Figure \ref{fig condensation}. Therefore equation (\ref{condensation high d}) always has a unique consistent solution for $d\le 3$.
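Before turning to the $d \ge 4$ case, we note that the limiting behaviour (\ref{fd}) is easy to reproduce numerically. The following Python sketch (illustrative only; the cutoff is set to $\Lambda=1$) evaluates $f_d(x)$ for small $x$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Sketch: limiting behaviour of f_d(x) in (fd), with cutoff Lambda = 1.
def f_d(x, d):
    val, _ = quad(lambda p: p**(d - 2) / (p**2 + x), 0.0, 1.0,
                  points=[np.sqrt(x)])
    return val

for d in (2, 3, 4):
    print(d, [round(f_d(x, d), 3) for x in (1e-2, 1e-4, 1e-6)])
# d = 2, 3: f_d grows without bound as x -> 0+,
# d = 4:    f_d approaches 1/(d - 3) = 1.
\end{verbatim}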
On the other hand, in the $d \ge 4$ case, a solution exists only if the following condition is satisfied: \begin{align} \frac{ \tilde{\lambda}_{d} T}{\Lambda^{5-d}} \ge \frac{(2\pi)^{d-1}}{2V_{d-2}} \left( \frac{\mu}{\Lambda}\right)^2 \frac{1}{k_1 f_{d} (\mu^2/\Lambda^2)+ k_2/(d-3) }. \label{critical coupling} \end{align} We obtained this condition by evaluating (\ref{condensation high d}) at $\triangle=\mu$. Thus the consistent condensation does not occur if the effective coupling is weak. Equivalently, for a given coupling there exists a critical chemical potential $\mu_c$, which saturates (\ref{critical coupling}), and the condensation occurs only if $\mu<\mu_c$. We summarize this result in Figure \ref{fig large d phase}. \FIGURE{ \includegraphics[scale=0.75]{large-d-phase.eps} \caption{Schematic phase diagrams of the $d$ dimensional large $N$ gauge theories in the $\mu$--$T$ plane at $D=\infty$. In the $D=\infty$ case, the problems from the $1/D$ corrections do not arise. In the $d=1$ case, since the two critical temperatures vanish at $D=\infty$, only the gapped phase appears. (See (\ref{Tc1}) and (\ref{Tc2}).) In the $d \ge 2$ case, we evaluated only the high temperature regime. In the $d \ge 4$ case, there is a critical chemical potential, and the condensation does not occur beyond it. The nature of this regime is not yet understood. } \label{fig large d phase} } One important unsolved issue is the nature of the $\mu>\mu_c$ region, in which the condensation does not occur. One possibility is that the system is destabilized by the chemical potential. Another possibility is that the system is still stable but described by a strongly interacting theory. Although the above results are exact in the large $D$ limit, as we observed in section \ref{sec high mu 1/D }, the $1/D$ corrections are important in the very high chemical potential regime at finite $D$. Thus our results may be valid only in the $D \to \infty$ case in this regime. Even if we start from a supersymmetric gauge theory\footnote{How to take the large $D$ limit in a supersymmetric gauge theory is a difficult problem, since the numbers of bosons and fermions grow at different rates as $D$ becomes large. By taking the high temperature limit and ignoring the fermions, we can avoid this problem.}, the behaviour will be the same in the high temperature regime, since all the fermions decouple if the temperature is sufficiently high. Therefore the analysis in this section may be valid in supersymmetric gauge theories also. \section{Conclusions and discussions} \label{Conclusion} In this article, we investigated the thermodynamics of large $N$ gauge theories with chemical potentials. Because of the condensation of the adjoint scalars at large $D$, we can analyze the effect of a finite chemical potential even in perturbatively massless gauge theories. This is an important step towards understanding the phase structure of large $N$ gauge theories. However, the light modes of the adjoint scalars in the very high chemical potential regime cause the $1/D$ expansion to diverge. Understanding the nature of this divergence is important for figuring out whether the system is stable in this regime at finite $D$. In addition, if we can understand the $1/D$ corrections in the higher dimensional gauge theories, it may be possible to apply our analysis to D brane theories at high temperature.
It would then be interesting to compare our results in the weak coupling regime with the strong coupling results predicted by the dual gravity analysis \cite{Harmark:1999xt}. \\ We also studied the phase structure of the one-dimensional gauge theory, as in Figure \ref{fig phase diagram}. As we mentioned in the introduction, this theory is related to D1 branes on a small circle, and it would be interesting to evaluate this system using the dual gravity and compare the two. However, our understanding of the phase structure of our model is not complete. There is a possibility that other phases appear. If such phases are found, they may correspond to non-spherical black objects, like a black ring, or to hairy black holes in the dual gravity. In the sufficiently high temperature regime, the one-dimensional model reduces to the zero-dimensional model (\ref{action 0d}). This model is similar to the bosonic IKKT matrix model with a negative mass and imaginary CS-like terms. As we discussed in section \ref{sec D=1}, fuzzy solutions might exist in this model \cite{Iso:2001mg, Ishiki:2010pe, Jatkar:2001uh, Kimura:2001uk, Azuma:2004zq}. Therefore, exploring this matrix model may be the simplest way to find the new phases.\\ Our phase structure in the bosonic gauge theory is quite different from the results in the one-dimensional supersymmetric gauge theory predicted from the D0 brane black hole \cite{Itzhaki:1998dd, Harmark:1999xt}. There, the low temperature phase transition does not happen and the system is always in the gapped (deconfinement) phase. Besides, if the chemical potential is larger than a critical chemical potential $\mu_c=c T$, the system is destabilized. These differences indicate that the contribution of the fermions is relevant in the low temperature regime. One possibility is that, in the supersymmetric theory, the effective action (\ref{potential triangle}) is modified such that the condensation behaves as $\triangle(T,\mu) \sim T$ in the low temperature regime. Then the potentials for the Wilson loop operators may be unstable at $|u_n|=0$ even around $T=0$, and the gapped (deconfinement) phase will be preferred. In addition, the system will be destabilized if $\mu > \triangle \sim T$. However, how to take the large $D$ limit in supersymmetric theories is not yet understood, and we cannot demonstrate such a mechanism yet. \paragraph{Acknowledgements} I would especially like to thank Gautam Mandal for useful discussions and for several detailed comments on the manuscript. I would also like to thank Avinash Dhar, Oleg Evnin, Shoichi Kawamoto, Hiroaki Kohyama, Manavendra Mahato, Shiraz Minwalla, Shinji Shimasaki, Sandip Trivedi and Spenta Wadia for useful discussions.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Recent observations indicate that, to explain the motions of galaxies and the cosmic speed-up, dark matter and dark energy should be included in the framework of general relativity, and together they make up about 96\% of the total energy content. This inevitably poses a challenge to general relativity, which has actually passed all tests at solar-system scales only, not at all length scales. So considering alternative theories of gravity is a reasonable choice. In fact, Einstein considered a new approach to the variation after he found general relativity, known nowadays as the Palatini variation. Unlike the conventional metric variation, the Palatini variation assumes that both the metric and the connection are independent variables, and thus abandons the a priori relation between them. Under the Palatini variational principle, two field equations are obtained by varying with respect to the metric and the connection. In Palatini theories, the matter actions are assumed to be independent of the connection. For the Einstein-Hilbert action, the two variations are equivalent~\cite{Wald1984a}. However, for the action of a general $f(R)$ gravity, the two variations lead to two very different theories: the metric and Palatini $f(R)$ theories. It is well known that the metric $f(R)$ theory leads to fourth-order field equations, while the Palatini $f(\mathcal{R})$ theory leads to second-order ones. In recent years the Palatini $f(\mathcal{R})$ theories of gravity have attracted great interest, since they are expected to give good descriptions of the phenomena of our universe~\cite{Olmo2011b, Olmo2005b, Fay2007a, Amarzguioui2006, Barragan2010a, Tsujikawa2008, Koivisto2007a, Sotiriou2006, Flanagan2004, Meng2004a, Meng2004,Allemandi:2005qs}, and their results differ from those of the metric $f(R)$ gravity~\cite{Nojiri2003,Hu2007}. In Ref.~\cite{Vollick2003}, it was shown that a $1/\mathcal{R}$ correction to the Einstein-Hilbert action in the Palatini formalism offers an alternative explanation for the late time acceleration. Models with the addition of both positive and negative powers of the scalar curvature were considered in Ref.~\cite{Sotiriou2006a}, where it was shown that such models of modified gravity may account for both early time inflation and late time accelerated expansion. On the other hand, it has been shown that brane world theories may address some open problems in particle physics and phenomenology, such as the hierarchy problem and the cosmological constant problem~\cite{Akama1982,Arkani-Hamed1998a,Arkani-Hamed1999a,Antoniadis1998a, Randall1999,Randall1999a,Maartens2004a}. Inspired by these theories, thick brane world models~\cite{Csaki2000a,Gremm2000,Gremm2000a,Kobayashi:2001jd} have been considered in both general relativity and modified gravity theories, and many interesting structures have been found in these models~\cite{Dzhunushaliev2010a,Liu2011a,Fu2014a}. Another reason to consider thick brane models is that most of the thin brane models constructed in modified gravities with higher derivatives cannot be solved. As in the Randall-Sundrum model~\cite{Randall1999a}, one of the issues of a thick brane model is the localization of the graviton zero mode, which is related to the recovery of four-dimensional gravity on the brane. Besides, the stability of the system is also very important. Usually, the massive graviton KK modes are suppressed on the brane, but they do contribute to the Newtonian potential, and this provides an approach to detecting the extra dimensions.
The structure of a brane world model is determined by the gravity theory. As is well known, the metric $f(R)$ gravity theories modify the gravitational sector of the Einstein equations. For brane world models constructed in the metric $f(R)$ gravity, see Refs.~\cite{Liu2011a,Zhong2011,Afonso2007a,Bazeia:2007jj,Dzhunushaliev2010,HoffdaSilva:2011si,Liu:2011am,Bazeia2013b,Bazeia2014a}. In contrast to the metric $f(R)$ gravity, the Palatini $f(\mathcal{R})$ gravity is equivalent to general relativity with a modified source. In this paper, we expect the thick brane model in Palatini $f(\mathcal{R})$ gravity to have some interesting features, in particular in its solutions and tensor perturbations. In Ref.~\cite{Bazeia2014b}, thick brane solutions in Palatini $f(\mathcal{R})$ gravity were obtained within a first-order framework and a perturbative approach. In this paper, we try to obtain analytic solutions of the thick brane model for a general $f(\mathcal{R})$ with constant curvature, and exact solutions of the Palatini $f(\mathcal{R})=\mathcal{R} + \alpha\mathcal{R}^{2}$ gravity with nonconstant curvature. In section~\ref{secModel}, we first review the Palatini $f(\mathcal{R})$ gravity and set up our Palatini $f(\mathcal{R})$-brane model. Then we derive the second-order field equations in five dimensions for our model and show how to solve them analytically. In section~\ref{SecFluctuations}, we study the gravitational fluctuations and the stability problem. The localization of the graviton zero mode and the corrections to the Newtonian potential from the massive KK modes are also discussed. The discussion and conclusions are given in section~\ref{secConclusion}. \section{The model} \label{secModel} In this section, we consider a general $f(\mathcal{R})$ model in five-dimensional spacetime in the Palatini formalism. The action takes the form \be S_{\texttt{Pal}}=\frac{1}{2\kappa_{5}^{2}} \int d^5 x \sqrt{-g}f(\mathcal{R}(g,\Gamma))+S_{\texttt{M}}(g_{M N},\Psi),\label{action of Pal. f(R)} \ee where $\kappa_{5}^{2}=1/M^{3}_{*}$ with $M_{*}$ the fundamental scale, $g$ is the determinant of the metric, and $S_{\texttt{M}}(g_{M N},\Psi)= \int d^5 x \sqrt{-g} L_{\texttt{M}}(g_{M N},\Psi)$ is the action for ordinary matter, which couples only to the metric. In this paper, capital Latin letters $M,N,\cdots$ denote the five-dimensional coordinate indices $0,1,2,3,5$ and Greek letters $\mu,\nu,\cdots$ denote the four-dimensional coordinate indices $0,1,2,3$. $\mathcal{R}(g,\Gamma)=g^{M N}\mathcal{R}_{M N}$ is the Ricci scalar constructed from the independent connection $\Gamma$. In Palatini $f(\mathcal{R})$ gravity theories, the main feature is that both the metric and the connection are treated as independent variables. This is very different from general relativity and other metric theories, and we will see that this set-up leads to distinctive physics. Varying with respect to the metric and the connection, respectively, one gets the following two field equations: \ba f_{\mathcal{R}}\mathcal{R}_{M N}-\frac{1}{2}f g_{M N} &=& \kappa_{5}^{2}T_{M N}, \label{field equation of metric}\\ \tilde{\nabla}_{A}\left(\sqrt{-g}f_{\mathcal{R}}g^{M N}\right) &=&0, \label{field equation of connection} \ea where $f_{\mathcal{R}}\equiv{df}/{d\mathcal{R}}$, $T_{M N}$ is the energy-momentum tensor, and $\tilde\nabla$ is the covariant derivative defined with the connection $\Gamma$. Note that $\tilde\nabla$ is not compatible with the metric, i.e., $\tilde{\nabla}_{A}g_{MN}\neq0$ unless $f(\mathcal{R})=\mathcal{R}$.
Actually, Eq.~(\ref{field equation of connection}) defines the auxiliary metric of the Palatini $f(\mathcal{R})$ gravity. If we define \be \sqrt{-q}q^{M N}\equiv \sqrt{-g}f_{\mathcal{R}}g^{M N}, \label{definition of auxiliary metric} \ee then we have $\tilde\nabla_{A}(\sqrt{-q}q^{M N})=0$. This is similar to the equation $\nabla_{A}(\sqrt{-g}g^{M N})=0$ (which is equivalent to $\nabla_{A}g^{MN}=0$) in general relativity. At this point, we can say that $\tilde\nabla$ is compatible with the auxiliary metric $q^{M N}$. According to this definition, we obtain \be q^{M N}=f_{\mathcal{R}}^{-{2}/{3}}g^{M N},\quad q_{M N}=f_{\mathcal{R}}^{{2}/{3}}g_{M N}. \label{definition of metric q} \ee Obviously, $q_{M N}$ is just a conformally transformed metric. With this metric, one can express the independent connection as \ba \Gamma_{M N}^{A}&=&\frac{1}{2}q^{A B}(\partial_{M}q_{N B}+\partial_{N}q_{M B}-\partial_{B}q_{M N})\nn\\&=&\left\{^{A}_{M N}\right\}+C^{A}_{M N}, \label{definition of independent connection} \ea where $\left\{^{A}_{M N}\right\}$ is the Christoffel symbol and $C^{A}_{M N}$ is a well-defined tensor. When $f(\mathcal{R})=\mathcal{R}$, we have $C^{A}_{M N}=0$ and the theory reproduces general relativity. The expression (\ref{definition of independent connection}) indicates that we are able to eliminate the independent connection $\Gamma$ from the field equations. Once this is done, we get a single field equation that depends dynamically only on the metric. With the following relations \ba \mathcal{R}_{M N}&=&R_{M N}(g)-\frac{1}{3f_{\mathcal{R}}} \left(3\nabla_{M}\nabla_{N}f_{\mathcal{R}}+g_{M N}\nabla_{A}\nabla^{A}f_{\mathcal{R}}\right) +\frac{4}{3f_{\mathcal{R}}^{2}}\nabla_{M}f_{\mathcal{R}}\nabla_{N}f_{\mathcal{R}}, \label{expression of transformed Ricci tensor}\\ \mathcal{R}&=&R-\frac{8}{3f_{\mathcal{R}}}\nabla_{A}\nabla^{A}f_{\mathcal{R}} +\frac{4}{3f_{\mathcal{R}}^{2}}\nabla_{A}f_{\mathcal{R}}\nabla^{A}f_{\mathcal{R}}, \label{expression of transformed Ricci scalar} \ea where ${R}_{M N}(g)$ and ${R}$ are the Ricci tensor and Ricci scalar constructed from the spacetime metric $g_{MN}$, respectively, we can transform Eq.~(\ref{field equation of metric}) into \ba G_{M N}&=&\frac{\kappa_{5}^{2} T_{M N}}{f_{\mathcal{R}}}- \frac{1}{2}g_{M N}\left(\mathcal{R}-\frac{f}{f_{\mathcal{R}}}\right)+ \frac{1}{f_{\mathcal{R}}}\left(\nabla_{M}\nabla_{N}-g_{M N}\nabla_{A}\nabla^{A}\right) f_{\mathcal{R}}\nn\\ &&-\frac{4}{3f_{\mathcal{R}}^{2}} \left(\nabla_{M}f_{\mathcal{R}}\nabla_{N}f_{\mathcal{R}} -\frac{1}{2}g_{M N}\nabla_{A}f_{\mathcal{R}}\nabla^{A}f_{\mathcal{R}}\right), \label{modified Einstein equation} \ea where $G_{M N}=R_{MN}-\frac{1}{2}R g_{MN}$ is the Einstein tensor. Furthermore, from Eq.~(\ref{field equation of metric}), we have \be f_{\mathcal{R}}\mathcal{R}-\frac{5}{2}f=\kappa_{5}^{2} T\label{trace of Einstein eq.}, \ee which shows that $\mathcal{R}$ is related algebraically to the trace of the energy-momentum tensor. Thus, all quantities such as $\mathcal{R}$, $f(\mathcal{R})$, and $f_{\mathcal{R}}$ can be expressed in terms of $T$. At this point, we have successfully eliminated the auxiliary metric $q^{M N}$ (or the connection $\Gamma$) from the field equations, and the dynamical variable of the field equations (\ref{modified Einstein equation}) is the spacetime metric $g_{MN}$. The implication of Eq.~(\ref{modified Einstein equation}) is now clear: it is the Einstein equation with a modified source, with the effective energy-momentum tensor defined by its right hand side.
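As an illustration of this algebraic relation, the following symbolic Python sketch (not part of the derivation) solves the five-dimensional trace equation (\ref{trace of Einstein eq.}) for the quadratic model $f(\mathcal{R})=\mathcal{R}+\alpha\mathcal{R}^{2}$ studied below:
\begin{verbatim}
import sympy as sp

# Sketch: for f(R) = R + alpha R^2 in five dimensions, the trace
# equation (trace of Einstein eq.) is algebraic in the Palatini
# curvature R, so R is fixed directly by the trace T.
R, alpha, kappa2, T = sp.symbols('R alpha kappa2 T', real=True)
f = R + alpha * R**2
trace_eq = sp.Eq(sp.diff(f, R) * R - sp.Rational(5, 2) * f, kappa2 * T)
print(sp.solve(trace_eq, R))   # two algebraic branches R = R(T)
\end{verbatim}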
It can be seen that the Palatini $f(\mathcal{R})$ gravity is equivalent to a metric theory with a modified source. For more details about $f(R)$ gravity, see Refs.~\cite{Sotiriou2010,Capozziello2011a,DeFelice2010,Nojiri2011}. In this paper we consider an $f(\mathcal{R})$ brane model with a scalar field in the five-dimensional background spacetime. The background metric with four-dimensional Poincar\'{e} symmetry is assumed to be \be d s^2=a^2(y)\eta_{\mu\nu}d x^{\mu}d x^{\nu}+d y^{2},\label{background metric} \ee where $a(y)$ is the warp factor. The Lagrangian of the scalar field is taken as $\mathcal{L}_{\phi}=-\frac{1}{2}\partial_{M}\phi\partial^{M}\phi-V(\phi)$. The corresponding energy-momentum tensor and equation of motion of the scalar field are \ba T_{M N}&=&\partial_{M}\phi\partial_{N}\phi-g_{M N}\left(\frac{1}{2}\partial_{A}\phi\partial^{A}\phi+V(\phi)\right), \label{energy-momentum tensor}\\ \Box^{(5)}{\phi}&=& V_{\phi}.\label{EOM of scalar} \ea For a static brane solution, the scalar field is a function of $y$ only, namely $\phi=\phi(y)$. Then, with the metric (\ref{background metric}), the explicit forms of the above two equations are \ba T_{\mu\nu}&=&-a^2 \Big(\frac{1}{2}\phi'^2+V\Big) \eta_{\mu\nu}, \label{energy-momentum tensor1}\\ T_{55}&=&\frac{1}{2}\phi'^2-V, \label{energy-momentum tensor2}\\ V{'}&=&\phi{''}\phi{'}+4\frac{a{'}}{a}\phi'^{2},\label{EOM of scalar1} \ea where the prime denotes the derivative with respect to the extra dimension coordinate $y$. To construct a thick brane world model, we need to solve the system (\ref{modified Einstein equation}) and (\ref{EOM of scalar1}). Usually, one can obtain topologically nontrivial solutions by introducing a superpotential~\cite{Gremm2000, Fu2011a, Chen2013} or by specifying a scalar potential, as in the $\phi^{4}$ or other models. However, in our case this does not work, because of the complicated form of the right hand side of (\ref{modified Einstein equation}). To solve the system, we consider Eqs.~(\ref{field equation of metric}) and (\ref{definition of auxiliary metric}) instead of Eq.~(\ref{modified Einstein equation}). With the relation (\ref{definition of auxiliary metric}) between $q_{M N}$ and $g_{M N}$, it is convenient to assume the auxiliary metric to be \be d\tilde{s}^2=u^2(y)\eta_{\mu\nu}dX^{\mu}dX^{\nu}+\frac{u^{2}(y)}{a^{2}(y)}dY^{2}. \ee Then Eqs.~(\ref{field equation of metric}) and (\ref{definition of auxiliary metric}) reduce to \ba \left(6\frac{u'^{2}}{u^{2}}-3\frac{a'}{a}\frac{u'}{u}-3\frac{u''}{u}\right) f_{\mathcal{R}}&=&\kappa_{5}^{2}\phi'^{2}, \label{compent equation a}\\ 5f_{\mathcal{R}}\left(\frac{a'}{a}\frac{u'}{u}+\frac{u''}{u}\right) -2f_{\mathcal{R}}\frac{u'^{2}}{u^{2}}+f(\mathcal{R})&=&2\kappa_{5}^{2} V, \label{compent equation b} \ea and \be f_{\mathcal{R}}=\left(\frac{u}{a}\right)^3, \label{definition equation of q} \ee respectively. \subsection{Constant curvature solutions} Now we have four equations: (\ref{EOM of scalar}), (\ref{compent equation a}), (\ref{compent equation b}), and (\ref{definition equation of q}). However, Eqs.~(\ref{EOM of scalar}), (\ref{compent equation a}), and (\ref{compent equation b}) are not independent, because of the conservation of $T_{M N}$. To solve the system we need a constraint, and different constraints lead to different results. We first consider the case in which $\mathcal{R}(\Gamma)$ is a constant.
According to Eq.~(\ref{definition of metric q}), it is straightforward to conclude that $R(g)$ is then also constant. Thus, the solutions are the same as those in the metric $f(R)$ gravity with constant $R(g)$~\cite{Zhong2011}, and they are listed as follows. \begin{itemize} \item For $AdS_{5}$, $\mathcal{R}(\Gamma)=R(g)=-20{\gamma}^{2}$ ($\gamma>0$) and $f_{\mathcal{R}}<0$, we have \ba a(y)&=&\text{cosh}^{{2}/{5}}\left(\frac{5{\gamma}y}{2}\right),\nn\\ \phi(y)&=&\pm2\sqrt{\frac{6|f_{\mathcal{R}}|}{5\kappa_{5}^{2}}} \text{arctan}\left(\text{tanh}\left(\frac{5{\gamma}y}{4}\right)\right),\label{AdSsolution}\\ V(y)&=&V_{0}+\frac{9{\gamma}^{2}|f_{\mathcal{R}}|}{4\kappa_{5}^{2}} \text{sin}^{2}\left(\sqrt{\frac{5\kappa_{5}^{2}}{6|f_{\mathcal{R}}|}}\phi\right),\nn \ea where $V_{0}=({2f-25{\gamma}^{2}|f_{\mathcal{R}}|})/{4\kappa_{5}^{2}}$. \item For $dS_{5}$, $\mathcal{R}(\Gamma)=R(g)=20{\gamma}^{2}$ and $f_{\mathcal{R}}>0$, \ba a(y)&=&\text{cos}^{{2}/{5}}\left(\frac{5{\gamma}y}{2}\right),\nn\\ \phi(y)&=&\pm\sqrt{\frac{6f_{\mathcal{R}}}{5\kappa_{5}^{2}}} \text{arctanh}\left(\text{sin}\left(\frac{5{\gamma}y}{2}\right)\right), \label{dSsolution}\\ V(y)&=&V_{0}-\frac{9{\gamma}^{2}f_{\mathcal{R}}}{4\kappa_{5}^{2}} \text{sinh}^{2}\Bigg(\sqrt{\frac{5\kappa_{5}^{2}}{6f_{\mathcal{R}}}}\phi\Bigg),\nn \ea where $V_{0}=({2f-25{\gamma}^{2}f_{\mathcal{R}}})/{4\kappa_{5}^{2}}$. \end{itemize} The $AdS_{5}$ solution supports a warp factor which diverges at infinity. However, it can be checked that, for an observer located at $y=0$, photons coming from infinity take a finite time to reach $y=0$; therefore, there are no event horizons. For the $dS_{5}$ solution, the extra dimension should be restricted to the interval $-{\pi}/{5{\gamma}}< y < {\pi}/{5{\gamma}}$, and there are also no event horizons. More discussion is given in section~\ref{Stability}. \subsection{Nonconstant curvature solutions} For nonconstant $\mathcal{R}(\Gamma)$, the system becomes more complicated. In the metric $f(R)$ gravity, it is the metric that involves higher derivatives, so brane solutions can be obtained by assuming a form for the warp factor~\cite{Liu2011a}. However, this is not a good choice in our case. Here, we consider the $f(\mathcal{R})=\mathcal{R}+\alpha \mathcal{R}^2$ model, for which Eq.~(\ref{definition equation of q}) becomes \be 1-8\alpha\left(2\frac{a'}{a}\frac{u'}{u} +2\frac{u''}{u}+\frac{u'^{2}}{u^2}\right) =\left(\frac{u}{a}\right)^3. \label{definition equation of q_2} \ee Note that the system can be greatly simplified if we impose a suitable relation between $u(y)$ and $a(y)$. For this purpose, we assume $u(y)=c_{1}a^{n}(y)$ with $n\ne 0$. Then Eq.~(\ref{definition equation of q_2}) reduces to \be 1-24n^{2}\alpha\frac{a'^2}{a^2}-16n\alpha\frac{a''}{a} -c_{1}^{3}a^{3(n-1)}=0. \label{EqofModel} \ee When $n=1$, we recover the solutions (\ref{AdSsolution}) and (\ref{dSsolution}), which are constant curvature solutions. So we consider the case $n\neq1$, for which Eq.~(\ref{EqofModel}) supports the following solution for the warp factor: \be a(y)=\text{sech}^{\frac{2}{3(n-1)}}(ky) \label{ay} \ee with $k=\frac{3(n-1)}{\sqrt{32n(3n+2)\alpha}}$, and $c_1$ in (\ref{EqofModel}) fixed as $c_1=(\frac{6n-1}{3n+2})^{1/3}$. Note that this solution is valid for $\alpha\neq 0$ ($\alpha=0$ corresponds to general relativity, for which the system can be solved with the superpotential method).
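This solution can be checked directly against (\ref{EqofModel}). The following symbolic Python sketch (illustrative only; it uses the particular value $n=5/3$, which also appears below) verifies that the quoted $a(y)$, $k$, and $c_1$ satisfy the equation:
\begin{verbatim}
import sympy as sp

# Sketch: check that a(y) = sech(k*y)**(2/(3*(n-1))) with the quoted
# k and c1 solves (EqofModel); here n = 5/3 for concreteness.
y, alpha = sp.symbols('y alpha', positive=True)
n = sp.Rational(5, 3)
k = 3 * (n - 1) / sp.sqrt(32 * n * (3 * n + 2) * alpha)
c1 = ((6 * n - 1) / (3 * n + 2)) ** sp.Rational(1, 3)
a = sp.sech(k * y) ** (2 / (3 * (n - 1)))
eq = (1 - 24 * n**2 * alpha * sp.diff(a, y)**2 / a**2
        - 16 * n * alpha * sp.diff(a, y, 2) / a
        - c1**3 * a**(3 * (n - 1)))
print(sp.simplify(eq.rewrite(sp.exp)))   # -> 0
\end{verbatim}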
The corresponding potential $V(y)$, scalar field $\phi(y)$, and energy density $\rho(y)$ are given by \ba V(y)&=&\frac{(3n+1)(6n-1)}{32(3n+2)^2 \kappa_{5}^{2}\alpha} \text{sech}^4(ky) + \frac{(3n+5)(6n-1)}{16(3n+2)^2 \kappa_{5}^{2}\alpha} \text{sech}^2(ky) +\Lambda_5, \label{sol of potential}\\ \phi(y)&=&\sqrt{\frac{2n(6n-1)}{3(3n+2)(n-1)\kappa_{5}^2}} \Bigg[i \sqrt{3} ~\text{E}\left(ik y, \frac{2}{3}\right) -i \sqrt{3} ~\text{F}\left(ik y, \frac{2}{3}\right)\nonumber\\ &&+ \sqrt{2+\text{cosh}(2k y)}\text{tanh}(k y)\Bigg], \label{sol of scalar}\\ \rho(y)&=&\frac{(3n-1)(6n-1)}{16(3n+2)^2 \kappa_{5}^{2}\alpha} \text{sech}^4(ky) + \frac{(3n+1)(6n-1)}{8(3n+2)^2 \kappa_{5}^{2}\alpha} \text{sech}^2(ky), \label{sol of energy density} \ea where $\Lambda_5 =-{1}/{8\kappa_{5}^{2}\alpha}$, and $\text{F}(y,m)$ and $\text{E}(y,m)$ are the incomplete elliptic integrals of the first and second kinds, respectively. The scalar field involves elliptic integrals, so we cannot give a closed-form expression for the potential $V(\phi)$. Our solutions are determined by the two parameters $n$ and $\alpha$. In order to get an asymptotically $AdS_5$ solution, we require $\alpha>0$. The solution of the scalar field indicates that $n$ should be restricted to $n<-{2}/{3}$, $0<n<{1}/{6}$, or $n>1$. Besides, any solution with $0<n<{1}/{6}$ leads to a negative energy density, so we exclude these solutions. On the other hand, for an observer located at the origin of the extra dimension, photons coming from infinity take a finite time to reach the origin for the warp factor with $n<-2/3$. We will see in section~\ref{subsecMassiveKKmodes} that the solutions with $n<-2/3$ have some interesting features. \begin{figure*}[htb] \begin{center} \includegraphics[width=6.5cm,height=5cm]{warpfactor.eps} \includegraphics[width=7cm,height=5cm]{scalarfield.eps} \includegraphics[width=7cm,height=5cm]{scalarpotential.eps} \includegraphics[width=7cm,height=5cm]{energydensity.eps} \end{center} \caption{The shapes of $a(y)$, $\kappa_5\phi(y)$, $\kappa_{5}^2 V(y)/k^2$, and $\kappa_{5}^2\rho(y)/k^2$ with respect to $ky$ for $n={5}/{3}$.}\label{fig1} \end{figure*} Here, we note that the brane world solution for $n={5}/{3}$ becomes simpler; it reads \ba a(y)&=& \text{sech}(ky), \label{sol of warp factor}\\ \phi(y)&=&\sqrt{\frac{15}{7\kappa_{5}^2}} \Bigg[i \sqrt{3}~ \text{E}\left(ik y, \frac{2}{3}\right) -i \sqrt{3} ~\text{F}\left(ik y, \frac{2}{3}\right) \nonumber\\ &&+ \sqrt{2+\text{cosh}(2ky)}\,\text{tanh}(ky)\Bigg],\\ V(y)&=&\frac{5k^2}{\kappa_{5}^{2}} \left( \frac{9}{14} \text{sech}^4(ky) + \frac{15}{7}\text{sech}^2(ky) - \frac{7}{3}\right), \ea where $k=\sqrt{\frac{3}{280\alpha}}$. It is clear that $a(\pm\infty)\rightarrow0$, $|\phi(\pm\infty)|\rightarrow$ constant, and $V(\pm\infty)\rightarrow \Lambda_5$. We show the plots of $a(y)$, $\phi(y)$, and $V(y)$ in figure~\ref{fig1}. The thickness of the brane, $1/k$, is determined by the parameter $\alpha$. Another feature of our solution is that the cosmological constant $\Lambda_5=-{1}/{8\kappa_{5}^{2}\alpha}$ is independent of $n$. Equation (\ref{sol of energy density}) implies that the energy density peaks at $y=0$ and does not dissipate with time. Furthermore, the Ricci scalar $R(g)$ is given by \be R(g)= -\frac{8 k^2 [1-6 n+5 \cosh(2 k y)] \text{sech}^2(k y)} {9 (n-1)^2}. \ee As $y\rightarrow\pm\infty$, $R(g)\rightarrow-80 k^2/[9 (n-1)^2]<0$, which is consistent with the fact that the spacetime far away from the brane is asymptotically $AdS_{5}$.
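The asymptotic value of $R(g)$ is easily confirmed numerically; the following Python sketch (illustrative only, with $k$ set to $1$ and $n=5/3$) evaluates the Ricci scalar on the brane and far from it:
\begin{verbatim}
import numpy as np

# Sketch: the Ricci scalar R(g) of the n = 5/3 solution (k = 1),
# on the brane and far from it.
def ricci(ky, n=5.0/3.0, k=1.0):
    return (-8.0 * k**2 * (1.0 - 6.0 * n + 5.0 * np.cosh(2.0 * ky))
            / (9.0 * (n - 1.0)**2 * np.cosh(ky)**2))

print(ricci(0.0))                          # value on the brane
print(ricci(20.0))                         # approaches the AdS5 value:
print(-80.0 / (9.0 * (5.0/3.0 - 1.0)**2))  # -80 k^2/[9(n-1)^2] = -20
\end{verbatim}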
\section{Gravitational fluctuations} \label{SecFluctuations} The fluctuations $\delta g_{MN}$ of the background metric (\ref{background metric}) can be decomposed into the transverse-traceless (TT) tensor mode, transverse vector modes, and scalar modes. It can be shown that the transverse vector and scalar modes decouple from the TT tensor mode. In this paper we investigate the stability and localization of the TT tensor fluctuations of the background metric (\ref{background metric}), whose KK modes are related to the four-dimensional gravitons and the Newtonian potential. \subsection{Stability under tensor perturbations}\label{Stability} In our case, the perturbed metric is \be d s^2=a^2(y)(\eta_{\mu\nu}+h_{\mu\nu})d x^{\mu}d x^{\nu}+d y^{2}, \label{perturbed metric} \ee where the tensor fluctuations $h_{\mu\nu}$ satisfy the TT condition \be \partial_{\mu}h^{\mu}_{~\nu}=0,\quad h\equiv\eta^{\mu\nu} h_{\mu\nu}=0. \label{TT-Condition} \ee Thus we have \be \delta g_{\mu\nu}=a^2(y)h_{\mu\nu},\quad \delta g_{55}=\delta g_{\mu 5}=0. \ee With the perturbed metric (\ref{perturbed metric}), to linear order, we get the perturbations of the Ricci tensor and Ricci scalar: \ba \delta R_{\mu\nu}=&&\!\!\!\!\!\!\!\!\!\frac{1}{2} \left(\partial_{\sigma}\partial_{\nu}h^{\sigma}_{~\mu} +\partial_{\sigma}\partial_{\mu}h^{\sigma}_{~\nu}-\Box ^{(4)}h_{\mu\nu} -\partial_{\mu}\partial_{\nu}h\right)\nn \\&-&3a'^{ 2}h_{\mu\nu}-aa{''}h_{\mu\nu}-2aa{'}h_{\mu\nu}'- \frac{1}{2}a^{2}h_{\mu\nu}''-\frac{aa{'}h{'}}{2}\eta_{\mu\nu}, \label{perturbation of Ricci tensor}\\ \delta R=&&\!\!\!\!\!\!\!\!\!\frac{1}{a^{2}}\left(\partial_{\mu}\partial_{\nu}h^{\mu\nu} -\Box^{(4)}h\right)-h{''}-\frac{5a{'}}{a}h{'}, \label{perturbation of Ricci scalar} \ea where $\Box^{(4)}=\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}$ denotes the four-dimensional d'Alembert operator. Imposing the TT condition (\ref{TT-Condition}), the perturbation of the Ricci scalar vanishes and \be \delta R_{\mu\nu}=-\frac{1}{2}\Box^{(4)}h_{\mu\nu}-3a'^{ 2}h_{\mu\nu}-aa{''}h_{\mu\nu}-2aa{'}h_{\mu\nu}'-\frac{1}{2}a^{2}h_{\mu\nu}''. \ee We immediately obtain the perturbed $\mu\nu$-components of the Einstein tensor, \ba \delta G_{\mu\nu}=&&\!\!\!\delta\left(R_{\mu\nu}-\frac{1}{2}R g_{\mu\nu}\right)\nn\\=&&\!\!\!-\frac{1}{2}\Box^{(4)}h_{\mu\nu} +3a'^{2}h_{\mu\nu}+3aa{''}h_{\mu\nu} -2aa{'}h_{\mu\nu}'-\frac{1}{2}a^{2}h_{\mu\nu}''. \label{perturbation of Einstein tensor} \ea On the other hand, the perturbation of the $\mu\nu$-components of the right hand side of the field equation~(\ref{modified Einstein equation}) reads \ba \delta G_{\mu\nu}=\Bigg[&-&\frac{\kappa_{5}^{2}}{f_{\mathcal{R}}}\left(\frac{1}{2}\phi'^{ 2}+V(\phi)\right)-\frac{1}{2}\left(\frac{\kappa_{5}^{2} T}{f_{\mathcal{R}}}+\frac{3f}{2f_{\mathcal{R}}}\right)\nn\\ &-&4\frac{a{'}}{a }\frac{\partial_{y}f_{\mathcal{R}}}{f_{\mathcal{R}}} -\frac{\partial^{2}_{y}f_{\mathcal{R}}}{f_{\mathcal{R}}} +\frac{2}{3}\left(\frac{\partial_{y}f_{\mathcal{R}}}{f_{\mathcal{R}}}\right)^{2}\Bigg]h_{\mu\nu} +\frac{\partial_{y}f_{\mathcal{R}}}{2f_{\mathcal{R}}}h_{\mu\nu}'. \label{perturbation of effctive SE tensor} \ea
With Eqs.~(\ref{perturbation of Einstein tensor}) and (\ref{perturbation of effctive SE tensor}), we get the following perturbed equation: \ba &-&\frac{1}{2}\Box^{(4)}h_{\mu\nu}+3a'^{ 2}h_{\mu\nu}+3aa{''}h_{\mu\nu}-2aa{'}h_{\mu\nu}'-\frac{1}{2}a^{2}h_{\mu\nu}''\nn\\ =\Bigg[&-&\frac{\kappa_{5}^{2}}{f_{\mathcal{R}}}\left(\frac{1}{2}\phi'^{ 2}+V(\phi)\right)-\frac{1}{2}\left(\frac{\kappa_{5}^{2} T}{f_{\mathcal{R}}}+\frac{3f}{2f_{\mathcal{R}}}\right)\nonumber\\ &-&4\frac{a{'}}{a }\frac{\partial_{y}f_{\mathcal{R}}}{f_{\mathcal{R}}} -\frac{\partial^{2}_{y}f_{\mathcal{R}}}{f_{\mathcal{R}}} +\frac{2}{3}\left(\frac{\partial_{y}f_{\mathcal{R}}}{f_{\mathcal{R}}}\right)^{2}\Bigg]h_{\mu\nu} +\frac{\partial_{y}f_{\mathcal{R}}}{2f_{\mathcal{R}}}h_{\mu\nu}'. \label{perturbed modified field equation} \ea Next, we simplify the above equation. From Eq.~(\ref{modified Einstein equation}), we have \ba \frac{g^{\alpha\beta}G_{\alpha\beta}}{4}= &-&\frac{\kappa_{5}^2}{f_{\mathcal{R}}}\left(\frac{1}{2}\phi'^{2}+V(\phi)\right) -\frac{1}{2}\left(\frac{\kappa_{5}^2 T}{f_{\mathcal{R}}} +\frac{3f}{2f_{\mathcal{R}}}\right)\nn\\ &-&\frac{4a{'}\partial_{y}f_{\mathcal{R}}}{a f_{\mathcal{R}}} -\frac{\partial^{2}_{y}f_{\mathcal{R}}}{f_{\mathcal{R}}} +\frac{2}{3}\left(\frac{\partial_{y}f_{\mathcal{R}}}{f_{\mathcal{R}}}\right)^{2} +\frac{a{'}\partial_{y}f_{\mathcal{R}}}{a f_{\mathcal{R}}}. \label{counterterm} \ea Now, substituting Eq.~(\ref{counterterm}) into Eq.~(\ref{perturbed modified field equation}), it is straightforward to obtain the simplified perturbed equation \be \frac{1}{2}\Box^{(4)}h_{\mu\nu} +2aa{'}h_{\mu\nu}' -\frac{1}{2}a^{2}h_{\mu\nu}'' =-\frac{\partial_{y}f_{\mathcal{R}}}{2f_{\mathcal{R}}}h_{\mu\nu}' +\frac{a{'}\partial_{y}f_{\mathcal{R}}}{a f_{\mathcal{R}}}h_{\mu\nu}, \label{perturbed eq.1} \ee where we have used $g^{\alpha\beta}G_{\alpha\beta} =12(a'^{2}+aa{''})$. This perturbed equation is our main equation. For convenience, we will transform it into a Schr\"odinger-like equation. However, the third term on its left hand side contains a factor $a^2$, which spoils the Schr\"odinger-like form. To eliminate it, we introduce the coordinate transformation \be d y=a d z. \ee Under this transformation, the background metric~(\ref{background metric}) becomes conformally flat. As a consequence, \be \partial_{y}=\frac{\partial_{z}}{a},\quad a{'}=\partial_{y}a=\frac{\partial_{z}a}{a}. \ee Then Eq.~(\ref{perturbed eq.1}) becomes \be \left[\Box^{(4)}-\partial^{2}_{z}+\left(3\frac{\partial_{z}a}{a} +\frac{\partial_{z}f_{\mathcal{R}}}{f_{\mathcal{R}}}\right)\partial_{z}\right]h_{\mu\nu}=0. \label{perturbed eq.2} \ee By defining $\tilde{h}_{\mu\nu}=B(z)h_{\mu\nu}$ with $B(z)=a^{{3}/{2}}f_{\mathcal{R}}^{{1}/{2}}$, the equation for $\tilde{h}_{\mu\nu}$ reads \be \left(\Box^{(4)} - \partial^{2}_{z} + \frac{\partial^{2}_{z}B}{B}\right)\tilde{h}_{\mu\nu}=0\label{perturbed eq.3}. \ee Now we introduce the KK decomposition $\tilde{h}_{\mu\nu}(x^{\sigma},z)=\varepsilon_{\mu\nu}(x^\sigma)\Psi(z)$ and get two equations: \ba \left(\Box^{(4)}+m^2\right)\varepsilon_{\mu\nu}(x^\sigma)&=&0,\\ \left(- \partial^{2}_{z} + \frac{\partial^{2}_{z}B}{B}\right)\Psi(z)&=&m^2\Psi(z). \label{SchrodingerEq} \ea
The equation for the function $\Psi(z)$ is a Schr\"odinger-like equation with the effective potential \be \mathcal{W}(z)=\frac{\partial^{2}_{z}B}{B}=\frac{3\partial_{z}^{2}a}{2a} + \frac{\partial_{z}^{2}f_{\mathcal{R}}}{2f_{\mathcal{R}}} + \frac{3}{4}\left(\frac{\partial_{z}a}{a}\right)^{2} -\left(\frac{\partial_{z}f_{\mathcal{R}}}{2f_{\mathcal{R}}}\right)^{2} +\frac{3\partial_{z}a}{2a}\frac{\partial_{z}f_{\mathcal{R}}}{f_{\mathcal{R}}}. \label{Wz} \ee It is easy to show that Eq.~(\ref{SchrodingerEq}) can be factorized as \be \left(\partial_{z} + \frac{\mathcal{A}}{2}\right)\left(-\partial_{z} + \frac{\mathcal{A}}{2}\right)\Psi(z)=m^2 \Psi(z), \ee where $\mathcal{A}=3\partial_{z}a/{a}+\partial_{z}f_{\mathcal{R}}/{f_{\mathcal{R}}}$. This equation has the form $\mathcal{Q}^{\dag}\mathcal{Q}\Psi(z)=m^{2}\Psi(z)$, which ensures that the eigenvalues are nonnegative, i.e., $m^2 \ge 0$. Thus, there are no gravitational tachyon modes, and the system is stable under tensor perturbations. It should be pointed out that this holds for any $f(\mathcal{R})$ with $f_{\mathcal{R}}>0$. \subsection{Localization of the massless graviton} Now we analyze the localization of the graviton zero mode $\Psi_{0}(z)$, for which $m=0$ and the equation reduces to $\left(-\partial_{z} + \mathcal{A}/2\right)\Psi_{0}(z)=0$. The solution of the zero mode is \be \Psi_{0}(z) \propto a^{3/2}f_{\mathcal{R}}^{1/2}. \label{zeromode} \ee Clearly, for the first constant curvature solution (\ref{AdSsolution}), the zero mode cannot be localized on the brane. For the second constant curvature solution (\ref{dSsolution}), though the zero mode can be localized on the brane, it can be shown that the energy density diverges at the boundaries of the extra dimension. Now we focus on the nonconstant curvature solutions. For the $f(\mathcal{R})=\mathcal{R}+\alpha \mathcal{R}^2$ brane model with $u=c_1 a^n$, the warp factors are given by $a(y)=\text{sech}^{\frac{2}{3(n-1)}}(ky)$ and $u(y)=c_1\text{sech}^{\frac{2n}{3(n-1)}}(ky)$ with $n>1$ or $n<-{2}/{3}$. Recalling Eq.~(\ref{definition equation of q}), the zero mode (\ref{zeromode}) is given by \be \Psi_{0}(z(y)) \propto \text{sech}^{\frac{n}{n-1}}(ky). \ee The normalization condition for the zero mode, \be \int^{+\infty}_{-\infty} \Psi_0^2 dz =\int^{+\infty}_{-\infty} \Psi_0^2 a^{-1}dy \propto \int^{+\infty}_{-\infty} \text{sech}^{\frac{2(3n-1)}{3(n-1)}}(ky) dy < \infty, \ee is satisfied for both $n>1$ and $n<-{2}/{3}$. So the zero mode can always be localized on the brane, and four-dimensional gravity can be recovered on the brane. \subsection{Corrections to the Newtonian potential from the massive KK modes} \label{subsecMassiveKKmodes} For the massive KK modes, we need to analyze the properties of the effective potential $\mathcal{W}(z)$ given in Eq.~(\ref{Wz}) for different values of $n$. For the $f(\mathcal{R})=\mathcal{R}+\alpha \mathcal{R}^2$ brane model with $u=c_1 a^n$, the effective potential reduces to \be \mathcal{W}(z)= \frac{3 n}{2}\frac{ \partial_z^2 a}{a}+ \frac{3 n(3 n-2)}{4} \left(\frac{\partial_z a}{a}\right)^2, \ee or \ba \mathcal{W}(z(y)) &=& \frac{3n}{4} \left( 2 a \partial_y^2 a+3 n (\partial_y a)^2\right) \nonumber \\ &=& \frac{ n k^2}{3 (n-1)^2} \text{sech}^{\frac{4}{3 (n-1)}}(k y) \left[3 n+2 - (6 n-1) \text{sech}^2(k y)\right]. \label{Wzy} \ea
\subsection{Corrections to the Newtonian potential from massive KK modes} \label{subsecMassiveKKmodes} For massive KK modes, we need to analyze the properties of the effective potential $\mathcal{W}(z)$ given in Eq.~(\ref{Wz}) for different values of $n$. For the $f(\mathcal{R})=\mathcal{R}+\alpha \mathcal{R}^2$ brane model with $u=c_1 a^n$, the effective potential reduces to \be \mathcal{W}(z)= \frac{3 n}{2}\frac{ \partial_z^2 a}{a}+ \frac{3 n(3 n-2)}{4} \left(\frac{\partial_z a}{a}\right)^2, \label{Wzn} \ee or \ba \mathcal{W}(z(y)) &=& \frac{3n}{4} \left( 2 a \partial_y^2 a+3 n (\partial_y a)^2\right) \nonumber \\ &=& \frac{ n k^2}{3 (n-1)^2} \text{sech}^{\frac{4}{3 (n-1)}}(k y) \left[3 n+2 - (6 n-1) \text{sech}^2(k y)\right]. \label{Wzy} \ea From the above equation, we can see that \ba \mathcal{W}(0) &=& -\frac{ nk^2}{n-1},\label{W0} \\ \mathcal{W}(|y|\rightarrow \infty) &\rightarrow& \frac{ n (3 n+2)k^2 }{3 (n-1)^2} e^{-\frac{4 k |y|}{3 (n-1)}}. \label{Wzinfty} \ea For $n>1$, the effective potential has a trapping well around the brane, a potential barrier on each side of it, and tends to vanish from above at infinity. This asymptotic behavior implies that only the zero mode is a bound state; no massive mode can be localized on the brane. For $n={5}/{3}$, the explicit expression of the effective potential in the conformally flat coordinate $z$ can be obtained: \ba \mathcal{W}(z) &=& \frac{5 k^2 ( 7 k^2 z^2-2)}{4 \left[(kz)^2+1\right]^2}. \label{Wz2} \ea We show the plot of $\mathcal{W}(z)$ in figure~\ref{figWz}. \begin{figure*}[htb] \begin{center} \includegraphics[width=7.5cm,height=5cm]{effectivepotential.eps} \end{center} \caption{The shape of the effective potential $\mathcal{W}(z)/k^2$ for the graviton KK modes with respect to $kz$ for $n= {5}/{3}$.} \label{figWz} \end{figure*} As can be seen from Eq.~(\ref{Wz2}), the effective potential supports a series of continuous massive KK modes $\Psi_{m}$, none of which is localized on the brane. As claimed in Ref.~\cite{Csaki2000a}, if the effective potential behaves as $\mathcal{W}(z)\sim {\beta(\beta+1)}/{z^2}$ as $|z|\rightarrow\infty$, then the Newtonian potential is corrected by $\Delta U(r)\sim 1/r^{2\beta}$. For a general parameter $n$, the parameter $\beta$ is determined by $\beta(\beta+1)={n(3n+2)}/{3(n-1)^2}$, which gives \ba \beta=\frac{\sqrt{3(15n^2 + 2n +3)}}{6(n-1)}-\frac{1}{2} ~~\Big(>\frac{\sqrt{5}-1}{2} \Big). \label{beta} \ea So in the case of $n= {5}/{3}$, the Newtonian potential is corrected by $\Delta U(r)\sim1/r^5$. Note that our result is different from the one obtained for the metric $f(R)=R+\alpha R^2$ brane model in Ref. \cite{Liu2011a}, which gives $\Delta U(r)\sim1/r^3$.
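For the reader's convenience, here is the arithmetic behind this statement (a direct evaluation of Eq.~(\ref{beta})): for $n={5}/{3}$, \be \beta(\beta+1)=\frac{n(3n+2)}{3(n-1)^2} =\frac{\frac{5}{3}\times 7}{3\times\frac{4}{9}}=\frac{35}{4}, \ee whose positive root is $\beta={5}/{2}$, so that indeed $\Delta U(r)\sim 1/r^{2\beta}=1/r^{5}$.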
What is more interesting is the case of $n<-{2}/{3}$. According to Eqs. (\ref{Wzy}) and (\ref{Wzinfty}), we can see that $\mathcal{W}(z(y))$ diverges as $|y|\rightarrow \infty$ and so $\mathcal{W}(z)$ diverges at the boundaries. Thus, all of the massive KK modes are bound and discrete states. However, it can be seen that the conformally flat coordinate $z=\int_{0}^y \text{sech}^{\frac{2}{3(1-n)}}(k \bar{y}) d\bar{y}$ ranges from $-z_0/2$ to $z_{0}/2$, where $z_{0}$ is a finite parameter determined by $k$ and $n$. Hence $\mathcal{W}(z)$ acts as an infinitely deep potential well, which supports highly excited states with even solutions $\Psi_{\text{even}}\simeq \sqrt{2}\cos(mz)/{\sqrt{z_0}}$ and odd solutions $\Psi_{\text{odd}}\simeq \sqrt{2}\sin(mz)/{\sqrt{z_0}}$. Indeed, the corrections of the KK modes to the Newtonian potential can be roughly calculated with the infinitely deep square potential well model. The mass spectrum determined by the even KK modes $\Psi_{\text{even}}(z)= \sqrt{2}\cos(m_{N} z)/{\sqrt{z_0}}$ is given by \ba m_N = (2N+1)\pi/z_0 ,~~~ (N=1,2,3,\cdots). \ea With this mass spectrum, if we consider two test particles separated by a distance $r$ on the brane, then the Newtonian potential is~\cite{Csaki2000a,Guo:2010az} \be U(r)=-\frac{M_1 M_2}{M_{\text{pl}}^2}\frac{1}{r} - \frac{M_1 M_2}{M_{*}^{3}} \sum_{N=1}^{\infty}\frac{e^{-m_N r}}{r}|\Psi_{m_N}(0)|^2, \label{correction 1} \ee where $M_{\text{pl}}$ is the Planck scale and $M_1$, $M_2$ are the masses of the two test particles. From the action (\ref{action of Pal. f(R)}), to recover four-dimensional gravity, we have \be \frac{M_{*}^{3}}{2} \int d^5 x \sqrt{-g}f(\mathcal{R}(g,\Gamma)) \supset \frac{M_{\text{pl}}^2}{2}\int d^4 x \sqrt{-g^{(4)}(x^{\mu})} R^{(4)}(x^{\mu}) \ee with $g^{(4)}(x^{\mu})$ the determinant of the four-dimensional metric and $R^{(4)}(x^{\mu})$ the Ricci scalar in four dimensions. Therefore, the relation between the effective Planck scale $M_{\text{pl}}$ and the fundamental scale $M_{*}$ is given by \be M_{\text{pl}}^2=M_{*}^{3} \int_{-\infty}^{+\infty} d y a^{2}(y)f_{\mathcal{R}} \equiv \frac{M_{*}^{3}}{k}\sigma_{1}(n), \label{de sigma1} \ee where $\sigma_{1}(n) \simeq \frac{3(n-1)(6n-1)}{2(3n-1)(3n+2)}$. For convenience, we define another function $\sigma_{2}(n)$ by \be z_0= 2\int_{0}^{+\infty} a^{-1}(y)d y\equiv \frac{2}{k}\sigma_{2}(n). \label{de sigma2} \ee It turns out that $\sigma_{2}(n)\simeq \frac{3}{2}(1-n)$. In terms of $\sigma_{1}(n)$, $\sigma_{2}(n)$, and the relation (\ref{de sigma1}), the Newtonian potential (\ref{correction 1}) can be expressed as \ba U(r)&=&-\frac{M_1 M_2}{M_{\text{pl}}^2}\frac{1}{r}\left[1 + \frac{\sigma_{1}(n)}{\sigma_{2}(n)} \sum_{N=1}^{\infty}e^{-m_N r}\right]\nn\\ &=&-\frac{M_1 M_2}{M_{\text{pl}}^2}\frac{1}{r}\left[1 + \frac{\sigma_{1}(n)}{\sigma_{2}(n)} \frac{e^{-{3\pi r}/{z_0}}}{\left(1-e^{-{2\pi r}/{z_0}}\right)}\right]\nn\\ &\simeq&-\frac{M_1 M_2}{M_{\text{pl}}^2}\frac{1}{r}\left[1 + \frac{(6n-1)}{(1-3n)(3n+2)} \frac{e^{-{3\pi r}/{z_0}}}{\left(1-e^{-{2\pi r}/{z_0}}\right)}\right]. \label{correction 2} \ea Clearly, for $r\gg z_0$, the summation term tends to $e^{-{3\pi r}/{z_0}}$ and thus can be ignored. For the case of $r\ll z_0$, we can expand the correction term in terms of $r/z_0$, and the effective Newtonian potential is \be U(r) \simeq -\frac{M_1 M_2}{M_{\text{pl}}^2}\frac{1}{r}\left[ 1+\frac{\sigma_{1}(n)}{{2\pi}\sigma_{2}(n)} \frac{{z_0}}{r}\right], \ee which can also be expressed in a more compact form by recalling Eqs.~(\ref{de sigma1}) and (\ref{de sigma2}): \be U(r) \simeq -\frac{M_1 M_2}{M_{\text{pl}}^2}\frac{1}{r} -\frac{M_1 M_2}{M_{*}^3}\frac{1}{\pi r^2}. \ee
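To make the last step explicit (a one-line check of the coefficient, using only Eqs.~(\ref{de sigma1}) and (\ref{de sigma2})): since $z_0=2\sigma_{2}(n)/k$ and $M_{\text{pl}}^{2}=M_{*}^{3}\sigma_{1}(n)/k$, \be \frac{1}{M_{\text{pl}}^{2}}\,\frac{\sigma_{1}(n)}{2\pi\sigma_{2}(n)}\,\frac{z_{0}}{r} =\frac{k}{M_{*}^{3}\sigma_{1}(n)}\,\frac{\sigma_{1}(n)}{2\pi\sigma_{2}(n)}\,\frac{2\sigma_{2}(n)}{k}\,\frac{1}{r} =\frac{1}{\pi M_{*}^{3}}\,\frac{1}{r}, \ee which, multiplied by $-M_1 M_2/r$, reproduces the $1/r^{2}$ correction term above.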
This shows that the Newtonian potential is corrected by a $1/r^2$ term when $r\ll z_0$, in which case the correction term dominates, namely $U(r)\propto 1/r^2$. For $r\gg z_0$, the correction can be ignored, and we have $U(r)\propto 1/r$. This result is similar to the ADD model~\cite{Arkani-Hamed1998a}. In contrast to the ADD model, however, the physical extra dimension in our model is infinitely large. The length parameter $z_0$ can be regarded as a length scale beyond which four-dimensional gravity is recovered. We obtained this result by using the infinitely deep square potential well model. One can also consider the potential model $\bar{\mathcal{W}}(z)=c\tan^{2}({\pi}z/{z_0})$, which would lead to a similar result. \section{Discussions and conclusions} \label{secConclusion} In this work we investigated the thick brane configuration generated by a background scalar field in Palatini $f(\mathcal{R})$ gravity. In this theory, higher derivatives of the matter fields are involved in the field equations, whereas in metric $f(R)$ gravity it is the metric that carries the higher derivatives. This leads to different strategies for solving the field equations. For the case of constant curvature, the solutions are the same as in metric $f(R)$ gravity. The $dS_{5}$ solution supports an energy density that diverges at infinity, which implies that it is not a viable brane solution. For nonconstant curvature, it is convenient to introduce an auxiliary metric to reduce the order of the differential equations, which is a key step in Palatini theories~\cite{Banados2010, Liu2012a, Fu2014a}. By assuming a relation between the spacetime metric and the auxiliary metric, we obtained the thick brane solutions of this system. The scalar field solution is a kink, which connects two vacua of the scalar potential, and it describes a domain wall brane. Moreover, the thickness of the brane is determined by the coefficient of the $\mathcal{R}^2$ term. Furthermore, we analyzed the gravitational fluctuations of the brane system. For the TT tensor perturbations, a Schr\"odinger-like equation was obtained. It was shown that the Palatini $f(\mathcal{R})$-brane system is stable under tensor perturbations for any function $f(\mathcal{R})$ with $f_\mathcal{R}>0$. For the $AdS_{5}$ solution, the graviton zero mode cannot be localized on the brane. The $dS_5$ solution suffers from the pathology that the energy density diverges at the boundaries of the extra dimension. For the nonconstant curvature solutions, the asymptotic behavior of the effective potential implies that the graviton zero mode can always be localized on the brane. The behavior of the massive KK modes is determined by the parameter $n$ in the brane solutions. For the solution with $n>1$, the effective potential is a volcano-like one, so the massive KK modes are not bound states and are suppressed on the brane by the potential barrier near it. The more interesting case is the solution with $n<-2/3$, for which the effective potential is an infinitely deep potential well and the massive KK modes are all bound states. In this case, we obtained the correction to the Newtonian potential, $\Delta U(r)\sim 1/r^2$, and the length scale beyond which four-dimensional gravity is recovered. This result is similar to the ADD model. Finally, it should be pointed out that some of the massive KK modes in the case of $n>1$ may be quasilocalized on the brane due to the potential barrier on each side of it~\cite{Fu2014a, Liu2009}. It would be interesting to explore the properties of such quasilocalized states; unfortunately, no such states were found in this model. We expect such quasilocalized gravitons to appear in more general Palatini theories. \section{Acknowledgements} This work was supported by the National Natural Science Foundation of China (Grants No. 11075065 and No. 11375075) and the Fundamental Research Funds for the Central Universities (Grant No. lzujbky-2015-jl01).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Monotone operators are natural nonlinear generalizations of linear operators in divergence form. In this contribution we develop a quantitative approach to nonlinear stochastic homogenization. Although our arguments will be mainly self-contained, we expect the reader to be a bit familiar with the associated qualitative nonlinear theory (cf.~\cite{JKO94,Pankov}) and quantitative linear theory (for which we refer to the recent short and self-contained introduction \cite{Otto-Tlse} in form of the first part of \cite{josien2020annealed}). \medskip Homogenization of monotone operators started with the very definition of G-convergence \cite{DeGiorgi-Franzoni-75,Marcellini-78}, and is treated in detail in the reference textbooks \cite{DalMaso-93} by Dal Maso, \cite{Braides98} by Braides and Defranceschi, \cite{JKO94} by Jikov, Kozlov, and Ole{\u\i}nik, and \cite{Pankov} by Pankov -- see also \cite{Marcellini-78,DalMaso86,DalMaso-Modica-86b,DalMaso-Gconv,DalMaso-corr,FMT}. These textbooks first establish a general compactness result (for G-, $\Gamma$-, or H-convergence depending on the context). Let us give a restricted version of this result. For exponents $p\ge 2$\footnote{Our approach does not allow us to consider the range of exponents $1\le p<2$.}, $\beta \ge 2$ and $\alpha \le 1$, and a constant $C>0$, we define a set of monotone maps $\hat a :\mathbb{R}^d\to \mathbb{R}^d$ via \begin{multline} \mathcal{M}(p,\alpha,\beta,C)\,:=\,\Big\{\hat a:\mathbb{R}^d\to \mathbb{R}^d\,\Big|\, |\hat a(0)|\le C, \\ \forall \xi_1,\xi_2 \in\mathbb{R}^d, |\hat a(\xi_1)-\hat a(\xi_2)|\le C (1+|\xi_1|+|\xi_2|)^{p-1-\alpha}|\xi_1-\xi_2|^\alpha \\ \text{ and } (\hat a(\xi_1)-\hat a(\xi_2),\xi_1-\xi_2) \ge \frac1C (1+|\xi_1|+|\xi_2|)^{p-\beta}|\xi_1-\xi_2|^\beta\Big\}. \label{defcalMspace} \end{multline} Let $a_\varepsilon:\mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d, (x,\xi)\mapsto a_\varepsilon(x,\xi)$ be a family (parametrized by $\varepsilon>0$) of Carath\'eodory maps\footnote{that is, measurable with respect to $x$ and continuous with respect to $\xi$.} such that for all $\varepsilon>0$ and almost all $x\in \mathbb{R}^d$, $a_\varepsilon(x,\cdot) \in \mathcal{M}(p,\alpha,\beta,C)$. Then every cluster point $\bar a:\mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ (for G-, $\Gamma$-, or H-convergence) of this family is a Carath\'eodory map that belongs to $\mathcal{M}(p,\gamma,p,\tilde C)$ with $\gamma=\frac{\alpha}{\beta-\alpha}$ and some $\tilde C>0$ (depending on $C$, $d$, $p$, $\alpha$, and $\beta$). In PDE terms, this implies that, along a subsequence (not relabelled), for all $f \in L^p(\mathbb{R}^d)^d$, the unique weak solution $u_\varepsilon \in \dot W^{1,p}(\mathbb{R}^d)/\mathbb{R}:=\{v \in W^{1,p}_\mathrm{loc}(\mathbb{R}^d)\,|\,\nabla v \in L^p(\mathbb{R}^d)^d\}/\mathbb{R}$ of $ -\nabla \cdot a_\varepsilon(x,\nabla u_\varepsilon(x))\,=\, \nabla \cdot f $ weakly converges in $\dot W^{1,p}(\mathbb{R}^d)/\mathbb{R}$ to the unique weak solution $\bar u$ of $ -\nabla \cdot \bar a(x,\nabla \bar u(x))\,=\, \nabla \cdot f, $ in the sense that $\nabla u_\varepsilon \rightharpoonup \nabla \bar u$ in $w-L^p(\mathbb{R}^d)$. When some structural assumption is made on the spatial dependence of $a_\varepsilon$ (such as periodicity, quasi-periodicity, or stationarity in a random ergodic context), the homogenized monotone map $\bar a$ does not depend on the space variable and the cluster point is unique, which we shall assume in the rest of this paper.
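As a simple illustration of this class (a model example anticipating the operators studied below; we take for granted the standard monotonicity inequality for the $p$-Laplacian), the map $\hat a(\xi):=(1+|\xi|^{p-2})\xi$ belongs to $\mathcal{M}(p,1,2,C)$ for some $C$ depending only on $p$. Indeed, $|D\hat a(\xi)|\le (p-1)(1+|\xi|^{p-2})$ yields the Lipschitz-type upper bound with $\alpha=1$, whereas $$ (\hat a(\xi_1)-\hat a(\xi_2),\xi_1-\xi_2)\,\ge\, |\xi_1-\xi_2|^2+c_p(|\xi_1|+|\xi_2|)^{p-2}|\xi_1-\xi_2|^2, $$ which, up to enlarging $C$, yields the coercivity condition with $\beta=2$.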
\medskip The aim of the present contribution is to make this picture quantitative, and establish estimates between $\nabla u_\varepsilon$ and its two-scale expansion based on $\nabla \bar u$. The case $p=2$ is successfully addressed in \cite{fischer2019optimal,AFK-+,AFK-20}. We primarily focus on the case of random monotone operators as first considered by Dal Maso and Modica in \cite{DalMaso86,DalMaso-Modica-86b} -- our results will also apply to the periodic and quasi-periodic settings. Before we describe the precise class of operators and of random dependence that we consider here, let us emphasize the two main difficulties we face when going from (the by-now well-understood case of) $p=2$ to $p> 2$. \begin{itemize} \item[(I)] The more recent and powerful approaches to quantitative homogenization of linear elliptic equations distinguish the (more robust) large-scale regularity properties from the (finer) rates of convergence. Large-scale regularity started with the works \cite{Avellaneda-Lin-87,Avellaneda-Lin-91} by Avellaneda \& Lin and can be summarized by saying that on large scales (say, with respect to the heterogeneities), the heterogeneous monotone operator ``inherits'' (a suitable version of) the regularity theory for the homogenized monotone operator. In the linear setting, (large-scale) elliptic regularity thus follows for the heterogeneous linear operator. For nonlinear monotone operators, the story is less clear. Monotone operators also enjoy strong regularity theory provided they belong to $\mathcal{M}(p,1,2,C)$ -- see for instance \cite{kuusi2014guide,DarkSide}. In particular, the coercivity given by $\beta=2$ seems crucial. Suppose that $a_\varepsilon(x,\cdot) \in \mathcal{M}(p,1,2,C)$ for all $\varepsilon>0$ and almost all $x\in \mathbb{R}^d$. If the map is stationary ergodic, the homogenized monotone operator $\bar a$ (which does not depend on the space variable) is defined as a map of class $\mathcal{M}(p,1,p,\tilde C)$. Except in the case $p=2$ (studied in \cite{fischer2019optimal,AFK-+,AFK-20}), it is not clear to us what additional condition would ensure that $\bar a \in \mathcal{M}(p,1,2,C)$ for $p>2$\footnote{By scaling, the homogenization of the $p$-Laplacian essentially remains a $p$-Laplacian, but it does not satisfy the coercivity for $\beta=2$ in view of the additional constant $1$ we impose, which is not needed for regularity but which will be crucial for (II.1).}. This is even worse for nonlinear systems, for which the condition $\bar a\in \mathcal{M}(p,1,2,C)$ does not necessarily ensure regularity properties. Having this in mind, and as opposed to \cite{AS,AKM2,AKM-book,GNO-reg,GNO-quant,GO4}, we adopt an approach which does not rely on large-scale regularity. \item[(II)] In order to establish convergence rates, one needs to linearize the problem, one way or another. Doing so, one obtains a linear operator with coefficients that are heterogeneous and depend on the solution itself. Again, except in the particular case of $p=2$ treated in \cite{fischer2019optimal,AFK-+,AFK-20} (for which the coefficients of the linearized operator are bounded from above and below), this linear equation is hard to handle for two reasons: \begin{itemize} \item[(II.1)] The coefficients may be degenerate (e.g.\ for the $p$-Laplacian). Controlling this degeneracy would require precise information on the critical set of harmonic coordinates (that is, the set of $x \in \mathbb{R}^d$ such that $\nabla \phi_\xi(x)+\xi=0$, cf.~\eqref{e.cor-eq} below).
Such results are however currently unavailable for $d>2$ (even for the $p$-Laplacian, the unique continuation principle is not known to hold for $d>2$ and $p>2$, e.g.~\cite{Lindvist-06}); \item[(II.2)] The coefficients are unbounded, and control is a priori solely given by the growth exponent $\frac{p}{p-2}$ (the larger $p$, the weaker the integrability). \end{itemize} \end{itemize} In order to circumvent these difficulties, we proceed as follows: \begin{itemize} \item[(I)] Rather than assuming that $\bar a \in \mathcal{M}(p,1,2,C)$ (which we do not know a priori in general), we simply assume that the \emph{solution} $\bar u$ of our specific homogenized problem is \emph{smooth enough} (or we restrict ourselves to a subdomain where it is). In order to deal with the nonlinear dependence of the solution upon the randomness, we choose to appeal to Malliavin calculus (or sensitivity calculus), and assume \emph{Gaussianity of the randomness}. In order to by-pass the need of large-scale regularity, we assume \emph{weak correlations} of the coefficients (in form of the integrability of the covariance function), which allows us to use the \emph{central limit theorem scaling} to buckle in a nonlinear estimate based on the (nonlinear) hole-filling estimate. This approach is more in the vein of the first contributions to quantitative homogenization, by Otto and the second author \cite{GO1,GO2,Gloria-Otto-10b} and with Neukamm \cite{GNO1}. We also take inspiration from \cite{Otto-Tlse,josien2020annealed} by Otto and by Josien and Otto. For completeness, we further investigate whether a subclass of $\mathcal{M}(p,1,2,C)$ might be closed by homogenization under additional assumptions. Surprisingly this seems to be a very challenging question in general. In dimension $d=1$, this is always the case. For periodic homogenization, we give one such example in dimension $d=2$, for which the proof relies on specific properties of the critical set of $x \mapsto \phi_\xi(x)+\xi \cdot x$. In the random setting, we prove such a result for all $d\ge 2$ assuming the statistical isotropy of the ensemble (the argument unfortunately does not apply to systems). \item[(II)] Regarding the ellipticity conditions of the linearized operator, we use perturbative arguments when $p>2$: \begin{itemize} \item[(II.1)] To avoid the degeneracy of the linearized operator, we assume that for all $\varepsilon>0$ and almost all $x\in \mathbb{R}^d$ we have $a_\varepsilon(x,\cdot) \in \mathcal{M}(p,1,2,C)$ (that is, we replace a $p$-Laplacian by a $p$-Laplacian plus Laplacian e.g.). This only yields the \emph{non-degeneracy} of the linearized operator in a \emph{perturbative} way (it disappears in the regime when the solution has a large gradient). We do not know whether this is inherited at the limit for $\bar a$ in general (except in the specific scalar cases mentioned above). \item[(II.2)] We treat the unboundedness of the coefficients in two ways. \begin{itemize} \item First, one needs slightly more integrability than that given by energy estimates. For uniformly bounded coefficients, this follows from Meyers' estimates (as used in \cite{fischer2019optimal,AFK-+,AFK-20} when $p=2$). In the unbounded setting of $p>2$, standard approaches to Meyers' estimates fail. 
For that reason, we introduce \emph{weighted Meyers' estimates in the large} (that is, the Meyers exponent remains deterministic and perturbative, but the validity of the estimate depends on a random radius -- which we call the Meyers minimal radius -- that fixes the scale at which the estimate holds). We then combine this estimate with the sensitivity calculus to prove stretched exponential moment bounds on the Meyers minimal radius, which then allows us to upgrade quenched Meyers' estimates in the large into their annealed versions (as introduced by Duerinckx and Otto in \cite{DO-20}). This notion of perturbative regularity in the large is one of the main technical novelties of this work and is the origin of the condition on the exponent $p$ (only active for $d\ge 4$). \item Second, using these annealed Meyers estimates, we establish sharp bounds on the (nonlinear and linearized) correctors. When using sensitivity calculus on the linearized corrector, we obtain a term which is essentially quadratic with respect to the linearized corrector, and needs to be locally evaluated in $L^2$ (our whole approach is based on $L^2$-theory). Assuming some \emph{local regularity} of $x\mapsto a_\varepsilon(x,\xi)$ (at scale $\varepsilon$) allows us to replace a local $L^2$-norm by a local $L^1$-norm, and therefore deal with this quadratic term. \end{itemize} \end{itemize} \end{itemize} Using this strategy, we establish the first convergence rates in stochastic homogenization of monotone operators for $p>2$. More precisely, our approach allows us to treat uniformly-elliptic equations and systems (with Uhlenbeck structure \cite{Uhlenbeck}) for all $2\le p<\infty$ in dimensions $d\le 3$. The rates we obtain are sharp, and our strategy of proof is most probably the shortest path to such results. Possible extensions are listed after the statements of the main results. \section{Statement of the main results} \subsection{Qualitative assumptions and homogenization} To simplify the statements and the proofs of the main results, we consider below the simplest possible setting, and use scalar notation (our results hold for systems with Uhlenbeck structure). More or less straightforward extensions are discussed in Section~\ref{sec:extension}. \begin{hypo}\label{hypo0} Let $p\ge 2$, and consider the operator \begin{equation}\label{e.def-a} a(x,\xi) \,:=\, A(x)(1+|\xi|^{p-2})\xi, \end{equation} where $A$ is a uniformly elliptic stationary ergodic matrix field. More precisely, we assume that $A$ is smooth (uniformly wrt the randomness) and satisfies the ellipticity conditions for some $0<\lambda\le 1$ $$ \forall \xi \in \mathbb{R}^d: \quad (A\xi,\xi)\ge \lambda|\xi|^2 \text{ and }|A\xi|\le |\xi|. $$ \end{hypo} Under Hypothesis~\ref{hypo0}, the monotone map $a$ almost surely satisfies $a(x,\cdot) \in \mathcal{M}(p,1,2,c_\lambda)$ for almost all $x\in \mathbb{R}^d$, for some constant $c_\lambda$ depending only on $p$ and $\lambda$. In what follows we denote by $D_i$ the derivative with respect to the $i$-th entry of the vector $\xi \in \mathbb{R}^d$ (so that $D_i a(x,\xi):=\nabla_{\xi_i} a(x,\xi)$). It is convenient to define the probability space via $\Omega=\{A:\mathbb{R}^d \to \mathbb{M}_d(\lambda)\}$, endowed with some probability measure $\mathbb P$. In this setting, a random variable can be seen as a (measurable) function of $A$.
We say that a random field $X:\mathbb{R}^d \times \Omega\to \mathbb{R}^k$ (for $k\in \mathbb{N}$) is stationary if for all $z\in \mathbb{R}^d$ and almost all $x\in \mathbb{R}^d$ we have $X(x+z,A)=X(x,A(\cdot+z))$, where $A(\cdot+z):x \mapsto A(x+z)$. (Note that the expectation $\mathbb E[X(x)]$ of a stationary random field $X$ does not depend on $x\in \mathbb{R}^d$ and we simply write $\mathbb E[X]$.) We use the notation $L^q(d\mathbb P)$ for the space of $q$-integrable random variables. Throughout the paper, we use the short-hand notation $\lesssim$, $\gtrsim$ and $\sim$ for $\le C\times$, $\ge C\times$, and $\frac1C \times \le \cdot \le C\times$ for some universal constant $C$ depending on $\lambda,d,p$ (and possibly on further quantities displayed as subscripts). We use $\ll$ and $\gg$ for $\le \frac1C \times$ and $\ge C \times$ in the case when $C$ needs to be chosen large enough. \medskip We start with the well-known qualitative homogenization result. \begin{theorem}[Qualitative homogenization]\label{th:qual-hom} Under Hypothesis~\ref{hypo0}, there exists a monotone map $\bar a \in \mathcal{M}(p,1,p,c)$ for some $c=c(\lambda,d)$ such that for all $f\in L^p(\mathbb{R}^d)^d$, the unique weak solution $u_\varepsilon \in \dot W^{1,p}(\mathbb{R}^d)/\mathbb{R}:=\{v \in W^{1,p}_\mathrm{loc}(\mathbb{R}^d)\,|\, \nabla v \in L^p(\mathbb{R}^d)^d\}/\mathbb{R}$ of \begin{equation}\label{e.eps-eq} -\nabla \cdot a(\tfrac x\varepsilon,\nabla u_\varepsilon(x))=\nabla \cdot f(x) \end{equation} weakly converges in $\dot W^{1,p}(\mathbb{R}^d)$ almost surely to the unique weak solution $\bar u \in \dot W^{1,p}(\mathbb{R}^d)/\mathbb{R}$ of \begin{equation}\label{e.hom-eq} -\nabla \cdot \bar a( \nabla \bar u(x))=\nabla \cdot f(x), \end{equation} where the operator $\bar a$ is characterized in direction $\xi \in \mathbb{R}^d$ by $ \bar a(\xi)=\mathbb E[a(0,\nabla \phi_\xi(0)+\xi)], $ and $\phi_\xi$ is the corrector, defined as the unique almost sure distributional solution in $W^{1,p}_\mathrm{loc}(\mathbb{R}^d)$ of \begin{equation} -\nabla \cdot a(x,\nabla \phi_\xi(x)+\xi)=0, \label{e.cor-eq} \end{equation} in $\mathbb{R}^d$, anchored at the origin via $\int_B \phi_\xi=0$, and whose gradient $\nabla \phi_\xi$ is stationary, has vanishing expectation $\mathbb E[{\nabla \phi_\xi}]=0$, and satisfies \begin{equation} \expec{|\nabla \phi_\xi|^2+|\nabla \phi_\xi|^p} \lesssim |\xi|^2+|\xi|^p. \label{e.cormoment} \end{equation} \end{theorem} We continue by investigating to what extent the class $\mathcal{M}(p,1,2,C)$ is closed by homogenization. As already mentioned above, this constitutes a very challenging problem. In dimension $d=1$, one directly has $D \bar a (\xi)\gtrsim 1+|\xi|^{p-2}$ so that $\bar a \in \mathcal{M}(p,1,2,c)$, and the whole class $\mathcal{M}(p,1,2,C)$ is closed in $\mathcal{M}(p,1,2,c)$ by homogenization (for some $c$ depending on $C$). For $d>1$ this is different since from the a priori estimate $\xi \cdot D \bar a (\xi)\xi \gtrsim |\xi|^2(1+|\xi|^{p-2})$ we cannot deduce $e \cdot D \bar a (\xi) e \gtrsim |e|^2(1+|\xi|^{p-2})$ in general. Yet we give a positive answer in two settings: periodic coefficients under the validity of a quantitative version of unique continuation, and random coefficients that are statistically isotropic. \begin{theorem}\label{th:isotropic-per} Let $A$ be a $Q$-periodic Lipschitz matrix field.
For all $\xi \in \mathbb{R}^d$, denote by $\psi_\xi \in W^{1,p}_{\mathrm{per}}(Q)$ the unique weak solution of $$ -\nabla \cdot A(x)|\nabla \psi_\xi+\xi|^{p-2}(\nabla \psi_\xi+\xi)=0. $$ Assume that for all $\xi \in \mathbb{R}^d$, there exists $r>0$ such that the $r$-tubular neighborhood $\mathcal{T}_r(\xi)=\{x+B_r \,|\,x \in \mathcal{C}(\xi)\}$ of the critical set $\mathcal{C}(\xi)=\{x \in \mathbb{R}^d\,|\, \xi+\nabla \psi_\xi(x)=0\}$ is such that $\mathbb{R}^d \setminus \mathcal{T}_r(\xi)$ is a connected set. Then there exists $c>0$ such that $\bar a \in \mathcal{M}(p,1,2,c)$. \end{theorem} \begin{remark} The assumptions of Theorem~\ref{th:isotropic-per} are quite strong. They are satisfied in dimension $d=2$ by \cite{AS-01} (which shows that $\mathcal{C}(\xi)\cap Q$ is indeed a finite union of points). For $d>2$ this is a widely open problem (somewhat related to unique continuation). For linear equations, this follows from \cite{CNV-15}. \end{remark} \begin{theorem}\label{th:isotropic} On top of Hypothesis~\ref{hypo0}, assume that $A(x)=b(x) \mathrm{Id}$ for some scalar-valued function $b$ and that for all $R \in SO(d)$, $b(R \cdot)$ and $b$ have the same (joint) distribution (in which case $A$ is statistically isotropic). Then there exists $c>0$ such that $\bar a \in \mathcal{M}(p,1,2,c)$. \end{theorem} These results are proved in~Appendices~\ref{app:isotropic-per} and~\ref{app:closed}. We suspect that Theorems~\ref{th:isotropic-per} and~\ref{th:isotropic} hold under weaker assumptions but we are currently unable to establish this (even using the quantitative estimates proved in this paper). \subsection{Quantitative assumptions and two-scale expansion} As explained in the introduction, to prove quantitative estimates in the random setting, we assume Gaussianity of the law and integrable correlations. \begin{hypo}\label{hypo} On top of Hypothesis~\ref{hypo0}, assume that \begin{equation}\label{e.defAGauss} A(y)= \chi * B(G(y)), \end{equation} where $B:\mathbb{R} \to \mathbb{M}_d$ is a Lipschitz map and $G$ is a stationary random Gaussian field on $\mathbb{R}^d$ with integrable covariance function, and $\chi : \mathbb{R}^d \to [0,1]$ is a smooth compactly supported convolution kernel. In particular, $A$ is smooth (uniformly wrt the randomness). We further require $2\le p < \frac{2(d-1)}{d-3}$ in dimensions $d>3$. \end{hypo} \begin{remark} If $B(t)=b(t)\mathrm{Id}$ for some scalar-valued map $b$ and if $G$ satisfies $\mathrm{cov}[G(x),G(0)]=c(|x|)$ for some function $c$, then $A$ is statistically isotropic and Theorem~\ref{th:isotropic} applies. \end{remark}
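To fix ideas, here is one admissible example (a hypothetical concrete choice of the data in Hypothesis~\ref{hypo}, stated for illustration only): take $G$ a stationary Gaussian field with covariance $\mathrm{cov}[G(x),G(0)]=e^{-|x|^2}$ (which is integrable and radial), $B(t)=b(t)\mathrm{Id}$ with the Lipschitz function $b(t)=\lambda+(1-\lambda)\min(1,\max(0,t))$, and the convolution kernel $\chi$ radially symmetric and normalized so that $\int_{\mathbb{R}^d}\chi=1$. Then $A=\chi*B(G)$ takes values in $\{b\,\mathrm{Id}\,|\,\lambda\le b\le 1\}$, so that the ellipticity conditions of Hypothesis~\ref{hypo0} hold, and by the above remark $A$ is statistically isotropic, so that Theorem~\ref{th:isotropic} applies.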
Our main achievement is an optimal quantitative corrector result, which extends the results of \cite{fischer2019optimal} to $p>2$. Following Dal Maso and Defranceschi \cite{DalMaso-corr}, we start with a suitable definition of the two-scale expansion. To this aim, we introduce a scale $\delta>0$ (which we should think of as being $\varepsilon$ in the upcoming result), set $\mathcal{K}_\delta:=\delta\mathbb{Z}^d$ and for all $k\in \mathcal{K}_\delta$, we define the cube ${Q}_{\delta}(k)=k+[-\delta,\delta)^d$ centered at $k$ and of sidelength $2\delta$. We also consider a partition $(\eta_k)_{k\in \mathcal{K}_\delta}$ of unity on $\mathbb{R}^d$ with the following properties: $0\leq \eta_k\leq 1$, $\eta_k\equiv 1$ on $Q_{\frac{\delta}{2d}}(k)$, $\eta_k\geq c$ on $Q_{(1-\frac{1}{3d})\delta}(k)$, $\mathrm{supp}\, \eta_k \subset Q_\delta(k)$, and $\vert\nabla\eta_k\vert\leq C\delta^{-1}$ (for some suitable $c,C>0$ independent of $\delta$). Given the solution~$\bar u$ of \eqref{e.hom-eq}, we introduce local averages associated with the partition of unity: for all $k \in \mathcal{K}_\delta$, $$ (\nabla \bar u)_{k,\delta}\,:=\, \frac{\int_{\mathbb{R}^d}\eta_k\nabla\bar{u}}{\int_{\mathbb{R}^d}\eta_k}, $$ and we define the two-scale expansion $\bar u^{2s}_{\varepsilon,\delta}$ associated with $\bar u$ via \begin{equation}\label{e.2s} \bar u^{2s}_{\varepsilon,\delta}:= \bar{u}+\varepsilon \sum_{k\in \mathcal{K}_\delta}\eta_k\phi_{(\nabla \bar u)_{k,\delta}}(\tfrac \cdot\varepsilon), \end{equation} where $\phi_\xi$ denotes the corrector in direction $\xi \in \mathbb{R}^d$ (cf.~Theorem~\ref{th:qual-hom}). This constitutes a convenient variant (introduced in \cite{DalMaso-corr} to deal with monotone operators) of the classical two-scale expansion $x\mapsto \bar u(x) + \varepsilon \phi_{\nabla \bar u(x)}(\tfrac x\varepsilon)$, which may raise measurability issues.
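For later reference, let us also record the gradient of the two-scale expansion (a direct consequence of the Leibniz rule applied to \eqref{e.2s}): $$ \nabla \bar u^{2s}_{\varepsilon,\delta}\,=\,\nabla \bar{u}+ \sum_{k\in \mathcal{K}_\delta}\eta_k\nabla \phi_{(\nabla \bar u)_{k,\delta}}(\tfrac \cdot\varepsilon) +\varepsilon \sum_{k\in \mathcal{K}_\delta}\nabla \eta_k\,\phi_{(\nabla \bar u)_{k,\delta}}(\tfrac \cdot\varepsilon), $$ in which the second term carries the oscillations of the corrector gradient, while the last term is formally of order $\varepsilon\delta^{-1}$ times the growth of the corrector (recall $\vert\nabla\eta_k\vert\leq C\delta^{-1}$).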
Based on this two-scale expansion, we have the following optimal convergence result. \begin{theorem}\label{th:2s} Assume Hypothesis~\ref{hypo} and let $f \in L^p(\mathbb{R}^d)^d$. Let the weight $\mu_d: \mathbb{R}^k \to \mathbb{R}_+$ (for $k=1$ and $d$) be given by \begin{equation}\label{e.def-mud} \mu_d(z)\,=\,\left\{ \begin{array}{rcl} d=1&:& 1+\sqrt{|z|}, \\ d=2&:& \log (2+|z|)^\frac12, \\ d>2&:& 1. \end{array} \right. \end{equation} For all $\varepsilon>0$ we denote by $u_\varepsilon \in \dot W^{1,p}(\mathbb{R}^d)/\mathbb{R}$ the unique weak solution of \eqref{e.eps-eq}, by $\bar u \in \dot W^{1,p}(\mathbb{R}^d)/\mathbb{R}$ the unique weak solution of the homogenized equation~\eqref{e.hom-eq}, and by $\bar u^{2s}_{\varepsilon}$ the two-scale expansion \eqref{e.2s} for the choice $\delta=\varepsilon$. If the homogenized solution $\bar u$ satisfies $\nabla \bar u \in L^\infty(\mathbb{R}^d)$ and $\mu_d \nabla^2 \bar u \in L^2(\mathbb{R}^d)$, then we have \begin{equation} \|\nabla u_\varepsilon - \nabla \bar u^{2s}_{\varepsilon}\|_{L^2(\mathbb{R}^d)} \,\le \, C_{\varepsilon,\bar u} \, \varepsilon \mu_d(\tfrac1\varepsilon), \label{th:2sIneg} \end{equation} where $C_{\varepsilon,\bar u}$ denotes a random variable that satisfies \begin{equation} \expec{\exp(c C_{\varepsilon,\bar u}^\alpha)}\le 2, \label{th:2smoment} \end{equation} for some exponent $\alpha>0$ depending on $d$, $p$, $\lambda$, and $\|\nabla \bar u\|_{L^\infty(\mathbb{R}^d)}$, and some constant $c$ further depending on $\|\mu_d \nabla^2 \bar u\|_{L^2(\mathbb{R}^d)}$, but not on $\varepsilon$. \end{theorem} Some comments are in order: \begin{itemize} \item In the periodic setting, Theorem~\ref{th:2s} holds without restrictions on $p\ge 2$ and with $\mu_d \equiv 1$ in any dimension. This result is sharper than \cite{CS-04}, which indeed contains the first quantitative two-scale expansion estimate for monotone periodic operators with $p>2$ (there, one needs to know that $\bar a \in \mathcal{M}(p,1,2,C)$ a priori to construct a second-order two-scale expansion, which gives \eqref{th:2sIneg} after truncation and with a dependence of the constant on stronger norms of $\bar u$). \item The choice to work on the whole space with a right-hand side in divergence form allows one to avoid boundary layers (and therefore to truly focus on the homogenization error, in line with \cite{GNO-quant}) and to treat all dimensions at once. In particular one could state and prove a similar result on a bounded domain with Dirichlet boundary conditions, in which case the bound would be of the order of the square root of that in \eqref{th:2sIneg}. \item This result takes the same form (with the same optimal rates) as for the linear case \cite{GNO-quant,GO4,AKM2} and for the nonlinear case \cite{fischer2019optimal} with $p=2$. As opposed to the latter, the stretched exponential exponent in Theorem~\ref{th:2s} depends on $\|\nabla \bar u\|_{L^\infty(\mathbb{R}^d)}$ itself. This intricate dependence could be made explicit (provided we make the exponent and constants explicit in Gehring's lemma) and is reminiscent of the way we treat the non-degeneracy of the linearized equation (that is, perturbatively). \item One drawback of this result is the a priori assumption that $\nabla \bar u \in L^\infty(\mathbb{R}^d)$ and $\mu_d \nabla^2 \bar u \in L^2(\mathbb{R}^d)$, which we cannot guarantee in general by making direct assumptions on $f$ since the regularity theory for $\bar a$ is not well-understood. However: \begin{itemize} \item Since the above result is local in nature, this estimate holds on domains of $\mathbb{R}^d$ on which $\bar u$ has the required regularity. In any case, if $\nabla \bar u$ develops some singularity somewhere, one does not expect the two-scale expansion to be accurate in that region. \item In the examples of Theorems~\ref{th:isotropic-per} and~\ref{th:isotropic}, the operator $\bar a$ enjoys the required regularity theory to ensure a priori bounds on $\bar u$, and therefore guarantee the validity of the result (for scalar equations). \end{itemize} \item Although the natural norm in this problem is the $L^p$-norm of the gradient, our estimate is only sharp for the $L^2$-norm. We believe the same estimate should hold for the $L^p$-norm, although the proof of this result would most probably require large-scale nonlinear Calder\'on-Zygmund estimates (see discussion below). Suboptimal estimates can be obtained in any $L^q$ by interpolation between $L^2$ and $L^\infty$. \item This result yields an optimal control of the oscillations of $\nabla u_\varepsilon$, which are accurately captured by the oscillations of the corrector gradient via the two-scale expansion. In the random setting, $\nabla u_\varepsilon$ does not only oscillate but also displays random fluctuations. In order to characterize fluctuations as in \cite{DGO1}, one needs better estimates than Meyers at the nonlinear level. Since we prove good control of correctors here, a possible route could be to derive non-perturbative linear and nonlinear annealed Calder\'on-Zygmund estimates for the heterogeneous operators (assuming of course that they hold for the homogenized operator). With such non-perturbative results at hand, one should presumably be able to prove the convergence to white noise of the homogenization commutator like in \cite{DGO1,AKM2,GO4} as well as the pathwise structure of fluctuations \cite{DGO1} in this nonlinear setting. This is left for future investigation. \item The restriction on the exponent $p$ (which is only active in high dimensions $d\ge 4$) is related to the perturbative regularity theory in the large that we develop for the linearized operator.
Indeed, the coefficient $a_\xi:=Da(\cdot,\xi+\nabla \phi_\xi)$ of the operator $a$ linearized at $\xi+\nabla \phi_\xi$ scales like $1+|\xi+\nabla \phi_\xi|^{p-2}$ and therefore only satisfies $\mathbb E[{|a_\xi|^\frac{p}{p-2}}]<\infty$ a priori: as $p$ increases, the stochastic integrability of the coefficients decreases. At some threshold (depending on dimension), this poor stochastic integrability cannot be compensated any longer by the Sobolev embedding --- whence our restriction (even in dimension 3, the argument is subtle to get all the exponents $2\le p<\infty$). \end{itemize} As in \cite{fischer2019optimal}, the proof of Theorem~\ref{th:2s} relies on three ingredients: the control of the growth of the nonlinear correctors, the control of the growth of corrector differences (which requires analyzing correctors of the linearized operator), and a representation formula for the two-scale expansion error (which involves the so-called flux correctors). In the rest of this section, we display the results on the nonlinear and on the linearized correctors, which are of independent interest. We start by recalling the notion of flux corrector. \begin{definition}\label{defsigmaNL} For all $\xi\in \mathbb{R}^d$ there exists a skew-symmetric random matrix field $(\sigma_{\xi,ij})_{1\le i,j\le d}$, which solves almost surely in the distributional sense in $\mathbb{R}^d$ the flux corrector equation \begin{equation} -\triangle \sigma_{\xi,ij} \,=\, \partial_i (a(\cdot,\xi+\nabla\phi_{\xi})\cdot e_j)-\partial_j(a(\cdot,\xi+\nabla\phi_{\xi})\cdot e_i), \label{e.Laplace-sig} \end{equation} which is anchored at the origin via $\int_B\sigma_{\xi}=0$, and whose gradient $\nabla \sigma_\xi$ is stationary, has vanishing expectation $\mathbb E[{\nabla \sigma_\xi}]=0$, and is bounded in the sense $$ \expec{|\nabla \sigma_\xi|^2+|\nabla \sigma_\xi|^p}\lesssim |\xi|^2+|\xi|^p. $$ In addition, we have \begin{equation}\label{e.div-sig} \nabla\cdot \sigma_{\xi}=a(\cdot,\xi+\nabla\phi_{\xi})-\bar a(\xi), \end{equation} where the divergence of a matrix field $\sigma$ is understood as $(\nabla \cdot \sigma)_i=\sum_{j=1}^d \partial_j \sigma_{ij}$. \end{definition} The proof of existence and uniqueness of $\sigma_\xi$ is essentially the same as in the linear setting, and we refer the reader to \cite{GNO-reg} for the argument.
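Let us nevertheless recall the observation behind \eqref{e.div-sig} (a sketch of the standard argument, for the reader's convenience; see \cite{GNO-reg} for details). Set $q_\xi:=a(\cdot,\xi+\nabla\phi_\xi)-\bar a(\xi)$, which is stationary with $\mathbb E[q_\xi]=0$ by definition of $\bar a$, and note that $\nabla \cdot q_\xi=0$ by the corrector equation \eqref{e.cor-eq}. Taking the divergence of \eqref{e.Laplace-sig} then gives $$ -\triangle (\nabla\cdot \sigma_{\xi})_i\,=\,\sum_{j=1}^d\partial_j\big(\partial_i q_{\xi,j}-\partial_j q_{\xi,i}\big) \,=\,\partial_i (\nabla\cdot q_\xi)-\triangle q_{\xi,i}\,=\,-\triangle q_{\xi,i}, $$ so that $\nabla\cdot\sigma_\xi-q_\xi$ is harmonic; being stationary with vanishing expectation (and with suitable moment bounds), it vanishes identically almost surely, which is \eqref{e.div-sig}.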
The main result on the extended corrector $(\phi_\xi,\sigma_\xi)$ is as follows. \begin{theorem}\label{th:corrNL} Under Hypothesis~\ref{hypo}, for all $\xi \in \mathbb{R}^d$, the stationary extended corrector gradient $\nabla (\phi_\xi, \sigma_\xi)$ satisfies for some exponent $\alpha>0$ depending on $\lambda$, $p$, and $d$, and some constant $c_{\xi}>0$ depending further on $|\xi|$, \begin{equation}\label{e.bdd-grad-corrNL} \expec{\exp(c_{\xi}|\nabla (\phi_\xi,\sigma_\xi)|^\alpha)}\le 2. \end{equation} For all $g \in L^2(\mathbb{R}^d)$, averages of $(\nabla \phi_\xi,\nabla \sigma_{\xi})$ display the CLT scaling\footnote{Indeed, for $g=|B_R|^{-1}\mathds{1}_{B_R}$, the right-hand side of \eqref{e:corr-NL-CLT} scales like $R^{-\frac d2}$.} in the form \begin{equation}\label{e:corr-NL-CLT} \Big| \int g (\nabla \phi_\xi ,\nabla \sigma_{\xi})\Big|\,\le \, C_{\xi,g} \Big(\int |g|^2\Big)^\frac12, \end{equation} where $C_{\xi,g}$ is a random variable with finite stretched exponential moment $$ \expec{\exp(c_{\xi} C_{\xi,g}^\alpha)}\le 2, $$ for some exponent $\alpha>0$ depending on $p$, $\lambda$, and $d$, and some constant $c_{\xi}>0$ further depending on $|\xi|$ (but all independent of $g$). This directly implies the following bounds on the growth of $(\phi_\xi,\sigma_\xi)$: For all $x\in \mathbb{R}^d$, \begin{equation}\label{e.growth-nlc} |(\phi_\xi,\sigma_\xi)(x)| \,\le\, C_{x,\xi} \mu_d(x), \end{equation} where $\mu_d$ is defined in \eqref{e.def-mud} and $C_{x,\xi}$ is a random variable with the same stochastic integrability as $C_{\xi,g}$. \end{theorem} \begin{remark} Under Hypothesis~\ref{hypo0}, for $Q$-periodic matrix fields $A$, the nonlinear correctors are bounded in $C^{1,\alpha}(Q)$ (no restriction on $p\ge 2$). \end{remark} In order to estimate the difference of correctors in directions $\xi_1$ and $\xi_2$ for $|\xi_1-\xi_2|\ll 1$, we study the linearized correctors. The following lemma (which is only used in an approximation argument) defines these linearized correctors by combining \cite[Lemma~1]{bella2018liouville} (which is devoted to the existence and uniqueness for linear corrector equations with unbounded and degenerate coefficients prescribed in advance) with the moment bound \eqref{e.bdd-grad-corrNL} (see Remark~\ref{rem:whymoment} below). \begin{lemma}\label{lem:def-lincorr} Under Hypothesis~\ref{hypo}, for all $\xi \in \mathbb{R}^d$ set $a_\xi:=D a(\cdot,\xi+\nabla \phi_\xi)$. For all $e \in \mathbb{R}^d$ there exists a unique random field $\tilde \phi_{\xi,e}$ that solves almost surely in the distributional sense in $\mathbb{R}^d$ the linearized corrector equation \begin{equation} -\nabla \cdot a_\xi (e+\nabla \tilde \phi_{\xi,e}) \,=\, 0, \label{e.Lcorr} \end{equation} anchored at the origin via $\int_B \tilde \phi_{\xi,e}=0$, and whose gradient $\nabla \tilde \phi_{\xi,e}$ is stationary, has vanishing expectation $\mathbb E[{\nabla \tilde \phi_{\xi,e}}]=0$, and is bounded in the sense \begin{equation*} \expec{|\nabla \tilde \phi_{\xi,e}|^2(1+|\xi+\nabla \phi_\xi|)^{p-2}}\lesssim (1+|\xi|^{p-2}) |e|^2. \label{e.Lboundcor} \end{equation*} In addition, there exists a skew-symmetric random matrix field $(\tilde\sigma_{\xi,e,ij})_{1\le i,j\le d}$, which solves almost surely in the distributional sense in $\mathbb{R}^d$ the linearized flux corrector equation \begin{equation}\label{e:eq-sigmaL} -\triangle \tilde \sigma_{\xi,e,ij} \,=\, \partial_i (a_\xi(e+\nabla \tilde \phi_{\xi,e})\cdot e_j)-\partial_j(a_\xi(e+\nabla \tilde \phi_{\xi,e})\cdot e_i), \end{equation} which is anchored via $\int_B \tilde \sigma_{\xi,e}=0$ almost surely, whose gradient $\nabla \tilde \sigma_{\xi,e}$ is stationary and is bounded in the sense $$ \expec{|\nabla \tilde \sigma_{\xi,e}|^\frac{2d}{d+2}}\lesssim C_\xi |e|^\frac{2d}{d+2}, $$ (where $C_\xi =1+\expec{|\xi+\nabla \phi_{\xi}|^{\frac{d+\gamma}2(p-2)}}^\frac{2}{d+\gamma}$ for any $\gamma>0$) and which satisfies the property $$ \nabla\cdot \tilde\sigma_{\xi,e}= a_\xi(e+\nabla \tilde \phi_{\xi,e})-\bar a_\xi e, $$ where $\bar a_\xi e=\expec{a_\xi(e+\nabla \tilde \phi_{\xi,e})}$. \end{lemma} \begin{remark}\label{rem:whymoment} In dimension $d=3$ the existence of $\tilde \sigma_{\xi,e}$ is only ensured by the energy estimate for $\nabla \phi_\xi$ for $p<6$. In the regime $p\ge 6$, we need to appeal to \eqref{e.bdd-grad-corrNL}, and therefore use a quantitative ergodicity assumption. \end{remark} The upcoming theorem gives further information on the linearized correctors, in line with Theorem~\ref{th:corrNL} for the nonlinear correctors.
\begin{theorem}\label{th:corrL} Under Hypothesis~\ref{hypo}, for all $\xi,e \in \mathbb{R}^d$ with $|e|=1$, the stationary extended linearized corrector gradient $\nabla (\tilde \phi_{\xi,e}, \tilde\sigma_{\xi,e})$ satisfies for some exponent $\alpha_\xi>0$ and some constant $c_{\xi}>0$ depending on $\lambda$, $p$, $d$, and $|\xi|$, \begin{equation}\label{e.bdd-grad-corrL} \expec{\exp(c_{\xi}|\nabla (\tilde\phi_{\xi,e},\tilde\sigma_{\xi,e})|^{\alpha_\xi})}\le 2. \end{equation} For all $g \in L^2(\mathbb{R}^d)$, averages of $(\nabla \tilde\phi_{\xi,e},\nabla \tilde\sigma_{\xi,e})$ display the CLT scaling in the form \begin{equation}\label{e:corr-L-CLT} \Big| \int g (\nabla \tilde \phi_{\xi,e} ,\nabla \tilde \sigma_{\xi,e})\Big|\,\le \, C_{\xi,g} \Big(\int |g|^2\Big)^\frac12, \end{equation} where $C_{\xi,g}$ is a random variable with stretched exponential moments $ \expec{\exp(c_{\xi} C_{\xi,g}^{\alpha_\xi})}\le 2, $ for some exponent $\alpha_\xi>0$ and some constant $c_{\xi}>0$ depending on $p$, $\lambda$, $d$, and $|\xi|$. This directly implies that for all $x\in \mathbb{R}^d$, we have $|(\tilde\phi_{\xi,e},\tilde\sigma_{\xi,e})(x)| \,\le\, C_{x,\xi} \mu_d(x)$, where $C_{x,\xi}$ is a random variable with the same moment bounds as $C_{\xi,g}$. \end{theorem} \begin{remark} Under Hypothesis~\ref{hypo0}, for $Q$-periodic matrix fields $A$, the linearized correctors exist and are bounded in $C^{1,\alpha}(Q)$ (no restriction on $p\ge 2$). \end{remark} This finally gives the desired control of nonlinear corrector differences. \begin{corollary}[Control of nonlinear corrector differences]\label{coro:corr-diff} Under Hypothesis~\ref{hypo}, for all $K>0$ and all $\xi_1,\xi_2 \in \mathbb{R}^d$ with $|\xi_1|,|\xi_2|\le K$, we have for all $x\in \mathbb{R}^d$, $$ |\nabla (\phi_{\xi_1}-\phi_{\xi_2}, \sigma_{\xi_1}-\sigma_{\xi_2})(x)|\,\le\, C_{x,K}|\xi_1-\xi_2|, \quad |(\phi_{\xi_1}-\phi_{\xi_2}, \sigma_{\xi_1}-\sigma_{\xi_2})(x)|\,\le\, C_{x,K} |\xi_1-\xi_2| \mu_d(x), $$ where $C_{x,K}$ is a random variable with finite stretched exponential moment depending only on $d$, $p$, $\lambda$, and $K$. In particular, $\xi \mapsto \bar a(\xi)$ is locally $C^{1,1}$. \end{corollary} \begin{remark} Under Hypothesis~\ref{hypo0}, for $Q$-periodic matrix fields $A$, corrector differences are controlled by $|\xi_1-\xi_2|$ in $C^{1,\alpha}(Q)$ (no restriction on $p\ge 2$), and $\bar a$ is $C^{1,1}$ as well. \end{remark} In order to establish these estimates on nonlinear and linearized correctors, we first use an approximation argument which allows us to discard the long-range correlations induced by the elliptic character of the equation. In this contribution, we proceed by periodization in law, which has the advantage of keeping differential relations neat in the approximation (in particular the identity \eqref{e.div-sig}). For all $L>0$, we introduce in Definition~\ref{defi:PL} (see Appendix~\ref{append:per}) a probability measure $\mathbb P_L$ supported on $Q_L=[-\frac L2,\frac L2)^d$-periodic functions. The associated maps $x\mapsto a(x,\xi)$ are therefore $Q_L$-periodic $\mathbb P_L$-almost surely, and the corrector equations are posed on the bounded domain $Q_L$. The coupling between $\mathbb P$ and $\mathbb P_L$ given in Lemma~\ref{approxcoef} then allows us to infer results on $\mathbb P$ from corresponding results on $\mathbb P_L$, see in particular Proposition~\ref{convergenceofperiodiccorrectors}. The choice of periodization in law is convenient but not essential.
In the linear setting one often adds a massive term to the equation (which yields an exponential cut-off for long-range interactions) \cite{GO1,GO2,Gloria-Otto-10b,GNO-reg,GNO-quant} or disintegrates scales via a semi-group approach \cite{GNO-quant,GO4,Clozeau-20}. All our estimates are proved for fixed periodization and the above results follow by letting the periodization parameter go to infinity. \subsection{Extensions and limitations}\label{sec:extension} Hypothesis~\ref{hypo} makes several assumptions on the monotone operator and the randomness: \begin{itemize} \item The underlying probability law is Gaussian with integrable correlations; \item The monotone map $a(x,\xi)$ is a multiple of $(1+|\xi|^{p-2})\xi$, the randomness is multiplicative (in form of a random matrix field), and the admissible range of $p$ depends on $d$; \item The spatial dependence $x \mapsto a(x,\xi)$ is smooth on a deterministic level; \item If it admits a variational form, the operator is associated with a convex energy functional. \end{itemize} Several of these assumptions can be slightly relaxed, while others are crucial. \subsubsection{Probability laws} Consider our multiplicative model. Our approach is based on a sensitivity calculus which allows us to linearize quantities with respect to the randomness (say, wrt $A$) and on functional inequalities which allow us to control variances using this sensitivity calculus. On the one hand, in Hypothesis~\ref{hypo} we consider a Gaussian random field with integrable correlation, and one might wonder to what extent this last assumption is necessary. Our argument strongly relies on the CLT scaling of spatial averages of the corrector gradient, which essentially follows from the same property for $a(x,\xi)-\expec{a(x,\xi)}$ and therefore requires the integrability of the correlation function. (Since there is a little room in the argument, one could consider a covariance function such that $\int_{\mathbb{R}^d} |c(x)|(1+|x|)^{-\beta}dx<\infty$ provided $0<\beta\ll 1$, but this is a detail.) In particular, the only way to deal with strongly correlated Gaussian coefficient fields as in \cite{GNO-reg,GNO-quant,Clozeau-20} would be to have large-scale regularity, which, as mentioned in difficulty (I), is quite unclear at the moment. On the other hand, sensitivity calculus and functional inequalities are not limited to Gaussian fields: they can be developed as soon as the stationary field $A$ is constructed via a ``hidden'' product structure. In particular, the random checkerboard and various Poisson-based processes also enjoy such tools, and we refer the reader to \cite{DG1,DG2} for a systematic study of sensitivity calculus and (multiscale) functional inequalities for random fields commonly used in the mechanics of composite materials \cite{Torquato-02}. Such models could be considered here as well. \subsubsection{Form of the monotone map} There are three different assumptions when considering a monotone map of the form $(x,\xi)\mapsto a(x,\xi)\,=\,A(x)(1+|\xi|^{p-2})\xi$: coercivity conditions, regularity with respect to $\xi$, and multiplicative character of the randomness. To start with, we must assume that $\xi \mapsto a(x,\xi)$ is twice-differentiable (for all $x$) in order to apply sensitivity calculus to the linearized corrector. \medskip \textbf{Multiplicative models.} The form of $a$ is such that one can easily differentiate $a$ with respect to the randomness. This is not strictly necessary but quite convenient. Any model having such a property would do, and we can consider coefficients of the form $a(x,\xi)=\rho(A(x),\xi)\xi$ provided $M\mapsto \rho(M,\cdot)$ satisfies $\vert D_M\rho(M,\xi)\vert\lesssim 1+\vert\xi\vert^{p-1}$ and $\vert D_M \partial_\xi\rho(M,\xi)\vert\lesssim 1+\vert\xi\vert^{p-2}$. This holds for instance for \begin{equation}\label{e.2phase} a(x,\xi)=\chi(x) a_1(\xi)+(1-\chi(x)) a_2(\xi), \end{equation} where $\chi:\mathbb{R}^d \to [0,1]$ is a smooth random field (with a sensitivity calculus and a suitable functional inequality) and $a_1$ and $a_2$ are two given (suitable) monotone maps. This model is more in line with composite materials.
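As a simple sanity check of the above derivative bounds for the two-phase model \eqref{e.2phase} (assuming $a_1,a_2\in\mathcal{M}(p,1,2,C)$ are twice-differentiable, in line with the requirement stated above), differentiating with respect to the field $\chi$ gives $$ \partial_\chi a(x,\xi)\,=\,a_1(\xi)-a_2(\xi),\qquad \partial_\chi \partial_\xi a(x,\xi)\,=\,Da_1(\xi)-Da_2(\xi), $$ and the defining properties of the class $\mathcal{M}(p,1,2,C)$ then yield $\vert a_1(\xi)-a_2(\xi)\vert\lesssim 1+\vert\xi\vert^{p-1}$ and $\vert Da_1(\xi)-Da_2(\xi)\vert\lesssim 1+\vert\xi\vert^{p-2}$, in line with the required growth conditions.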
\medskip \textbf{Coercivity conditions.} What is crucial is the uniform ellipticity in form of \begin{equation}\label{e.coer-below} (a(x,\xi_1)-a(x,\xi_2),\xi_1-\xi_2) \ge \frac1C (s+|\xi_1|+|\xi_2|)^{p-2}|\xi_1-\xi_2|^2 \end{equation} for some $s>0$ (which we take to be 1 without loss of generality). Whereas local regularity would hold with $s=0$, the choice $s>0$ is forced upon us to rule out the degeneracy of the linearized operator (cf. difficulty (II.1) above). In particular, this condition does not hold for the $p$-Laplacian, to which our results do not apply. In our opinion, relaxing this condition constitutes a very challenging problem. \medskip \textbf{Restriction on $p$.} Our specific form of $a$ yields the growth condition from above \begin{equation}\label{e.coer-above} |a(x,\xi)|\le C(1+|\xi|)^{p-1}, \end{equation} and the operator therefore has $p$-growth from above and below. In dimensions $d>3$, we have to impose the condition $2\le p < \frac{2(d-1)}{d-3}$. This is forced upon us to deal with the linearized corrector equation (as mentioned in Lemma~\ref{lem:def-lincorr}, already for $d=3$ defining the linearized correctors for $p\ge 6$ is subtle). \subsubsection{Local regularity} It is quite tempting to assert that quantitative homogenization is a matter of large scales (or say, low frequencies), and that local regularity assumptions might be convenient but are not necessary. This is indeed quite relevant provided small scales do not interact with large scales. A convincing counterexample is the quasiperiodic (and almost periodic) setting, where small and large scales indeed interact via a weak Poincar\'e inequality in a high-dimensional torus, cf.~\cite{AGK-16}. In our nonlinear setting, local regularity is not needed for the nonlinear correctors, but it seems unavoidable for the linearization part. This regularity requirement could be weakened in several directions: \begin{itemize} \item Only a local $C^\alpha$-control of the spatial dependence is needed for some $\alpha>0$, and the control of this local norm can be random itself provided the latter has good moment bounds. In particular, with the same notation as in Hypothesis~\ref{hypo}, this is the case for coefficients of the form $A(y)=B(G(y))$ provided the (non-negative) Fourier transform $\hat c$ of the covariance function satisfies $\hat c(k) \le (1+|k|)^{-d-2\alpha'}$ (for some $\alpha'>\alpha$). Then $x\mapsto \|A\|_{C^\alpha(B(x))}$ is stationary and has finite Gaussian moments (as a slight quantification of \cite[Appendix~A.3]{josien2020annealed} shows). \item In the proofs we use local regularity to control pointwise values of the (nonlinear and linear) corrector gradient by its local averages, and therefore control a local supremum by a local $C^\alpha$-norm.
Such a control would also follow from a local broken $C^\alpha$-norm, so that one could in principle deal with some $A$ (or $\chi$ in \eqref{e.2phase}) that is piecewise smooth (and a fortiori piecewise constant with smooth boundaries, covering the case of smooth inclusions in a background material). This constitutes a question of classical regularity theory. For linear equations and systems, this is proved in \cite{LiVog-00,LiNiren-03}, and for monotone operators and $p=2$ in \cite{NeukSch-19}. The case $p>2$ will be the object of future investigation. \item The state of the art of local regularity is as follows. For scalar equations, the structure can be quite general, and only requires the Hölder continuity of the map $x\mapsto a(x,\cdot)$ in the sense (see \cite[Theorem 13]{kuusi2014guide}) \begin{equation}\label{darkside} \sup_{r>0}\int_{0}^r \frac{(\omega(\rho))^{\frac{2}{p}}}{\rho^{\alpha}}\frac{\mathrm{d}\rho}{\rho}<+\infty, \end{equation} for $$\omega : r\in (0,+\infty)\mapsto \Big(\sup_{z\in\mathbb{R}^d, B_r(x)\subset\mathbb{R}^d}\fint_{B_r(x)}\left(\frac{a(y,z)-(a)_{x,r}(z)}{(\vert z\vert+1)^{p-1}}\right)^2\mathrm{d} y\Big)^{\frac{1}{2}},$$ for some $\alpha>0$ and $(a)_{x,r}(z):=\fint_{B_r(x)} a(y,z)\mathrm{d} y$. For systems however, we are restricted to quasi-diagonal structures of the form $a(x,\xi)=\rho(x,\vert\xi\vert)\xi$, for some $\rho : \mathbb{R}^d\times\mathbb{R}_+\rightarrow \mathbb{R}$ (the so-called Uhlenbeck structure, see \cite{Uhlenbeck}). \end{itemize} \subsubsection{Non-convex energy functionals?} It would be natural to try to extend these results to the setting of nonlinear elasticity, for which a large part of the qualitative theory has been established (cf.~\cite{Muller-87,Braides-85,Messaoudi-Michaille}, and \cite{DG-16b} for the most general results in this context). Besides the much more delicate regularity theory (cf.~\cite{DarkSide}), non-convexity essentially prevents us from using the corrector equation efficiently (cf.~the counter-examples to the cell formula in the periodic setting by M\"uller \cite{Muller-87}, see also \cite{Barchiesi-Gloria-10}), and may cause loss of ellipticity upon linearization (see \cite{GMT-93} at the nonlinear level, and \cite{G99,BF-15,FG-16,GR-19} at the linear level) -- except in the vicinity of the identity (cf. \cite{MN-11,GN-11}, and the further use of rigidity \cite{FJM-02} to establish quantitative results in this regime \cite{NS-18}). Hence, quantitative results in homogenization of nonlinear nonconvex models of elasticity remain widely out of reach today. \section{Perturbative regularity theory for the linearized operator}\label{mainresultNLunifL} In this section we consider periodized random operators $a_L$ distributed according to the law $\mathbb P_L$ given in Definition~\ref{defi:PL}. In particular, for all $L\ge 1$, $a_L$ is almost surely $Q_L$-periodic in its space variable, and remains random and stationary (this owes to the fact that we use periodization in law rather than naive periodization, cf.~Appendix~\ref{append:per}). This implies that $\phi_\xi$ and $\sigma_\xi$ are necessarily $Q_L$-periodic fields almost surely, so that the equations \eqref{e.cor-eq} and \eqref{e.Laplace-sig} can be posed on $Q_L$ rather than $\mathbb{R}^d$ -- and likewise for the linearized correctors. For all $L\ge 1$ we use the notation $H^1_{\mathrm{per}}(Q_L)$ (resp. $W^{1,p}_{\mathrm{per}}(Q_L)$) for $Q_L$-periodic fields of $H^1_\mathrm{loc}(\mathbb{R}^d)$ (resp.
$W^{1,p}_\mathrm{loc}(\mathbb{R}^d)$) for $Q_L$-periodic fields of $H^1_\mathrm{loc}(\mathbb{R}^d)$ (resp. $W^{1,p}_\mathrm{loc}(\mathbb{R}^d)$) with vanishing average. Our aim is to prove regularity statements and bounds that are uniform in the periodization parameter $L\ge 1$. \subsection{The Meyers minimal radius} In this paragraph we introduce the notion of Meyers minimal radius, a stationary random field which quantifies the scale at which Meyers' estimates hold for the linearized operator. We start with a definition. \begin{definition}[Meyers minimal radius]\label{minimalscaleNL}Let $\xi\in\mathbb{R}^d$, $L\geq 1$ and $c>0$. If it exists, the ($Q_L$-periodic) minimal radius $r_{\star,\xi,L}(\cdot,c)$ is defined for all $x\in\mathbb{R}^d$ via \begin{equation} r_{\star,\xi,L}(\cdot,c): x\in\mathbb{R}^d\mapsto \inf_{y\in\mathbb{R}^d}\Big( {\underline r_{\star,\xi,L}}(y,c)+\ell \vert x-y \vert\Big), \label{defr*NL} \end{equation} where $\ell = \frac 1{9C \sqrt{d}} \wedge \frac1{16}$ (with $C$ defined in Lemma~\ref{addfint*}) and for all $y\in\mathbb{R}^d$ \begin{equation} {\underline r_{\star,\xi,L}}(y,c)\,:=\,\inf\bigg\{ r=2^N,\, N\in\mathbb{N} \ \Big\vert\ \fint_{B_R(y)}\vert \nabla \phi_{\xi}\vert^{p} \leq c (1+\vert \xi\vert^p) \text{ for all } R\geq r \bigg\}. \label{defr*NL2} \end{equation} \end{definition} We now argue that $r_{\star,\xi,L}(\cdot,c)$ is a well-defined bounded random field if $c$ is chosen large enough. \begin{lemma}[Well-posedness of $r_{\star,\xi,L}$]\label{uniformboundr*NL} Let $(x,\xi)\in\mathbb{R}^d\times\mathbb{R}^d$ and $L\geq 1$. There exist two constants $c_1,c_2>0$ depending on $d$, $p$, and $\lambda$ such that, $\mathbb P_L$-almost surely, $r_{\star,\xi,L}$ satisfies \begin{equation} {\underline r_{\star,\xi,L}}(x,c_2)\leq r_{\star,\xi,L}(x,c_1)\leq {\underline r_{\star,\xi,L}}(x,c_1), \label{encadrementrNL} \end{equation} and \begin{equation} {\underline r_{\star,\xi,L}}(x,c_1)\leq L. \label{unifboundr*NLeq} \end{equation} \end{lemma} \begin{proof} Without loss of generality, we may assume that $x=0$. We start with the proof of \eqref{unifboundr*NLeq}, and then turn to the proof of \eqref{encadrementrNL}. We let $c$ denote a constant depending only on $d$, $\lambda$, and $p$, that may change from line to line. \medskip \step1 Proof of \eqref{unifboundr*NLeq}. \noindent From the defining equation \eqref{e.cor-eq} for $\phi_\xi$, we have $$-\nabla\cdot (a(\cdot,\xi+\nabla\phi_{\xi})-a(\cdot,\xi))=\nabla\cdot a(\cdot,\xi)\text{ in $Q_L$},$$ so that by testing the equation with $\phi_\xi$ and using the monotonicity \eqref{e.coer-below} and boundedness \eqref{e.coer-above}, we obtain $$ \fint_{Q_L}\vert\nabla\phi_{\xi}(x)\vert^2(1+|\xi|^{p-2}+\vert\xi+\nabla\phi_{\xi}(x)\vert^{p-2})\mathrm{d} x\leq c \fint_{Q_L}|\xi|(1+|\xi|^{p-2})|\nabla \phi_\xi|. $$ By absorbing part of the right-hand side into the left-hand side, this yields \begin{equation*} \fint_{Q_L}\vert\nabla\phi_{\xi}(x)\vert^2(1+|\xi|^{p-2}+\vert\xi+\nabla\phi_{\xi}(x)\vert^{p-2})\mathrm{d} x\leq c \fint_{Q_L}|\xi|^2(1+|\xi|^{p-2}). \end{equation*} By the triangle inequality in form of $\vert\xi+\nabla\phi_{\xi}(x)\vert^{p-2} \gtrsim \vert\nabla\phi_{\xi}(x)\vert^{p-2}-\vert\xi \vert^{p-2}$, and using the above twice, we obtain $$\fint_{Q_L}\vert\nabla\phi_{\xi}(x)\vert^2(1+ \vert\nabla\phi_{\xi}(x)\vert^{p-2})\mathrm{d} x\leq c \fint_{Q_L}|\xi|^2(1+|\xi|^{p-2}) \,\le\, c(1+|\xi|^p).$$ Assume first that $L$ is dyadic.
Given now $R \ge L$, we cover $B_R$ by $N_{R,L}\le c_d (\frac{R}L)^d$ translations of $Q_L$ (where $c_d$ only depends on the dimension), which we denote by $Q_L^j$ for $1\le j \le N_{R,L}$. This yields \begin{eqnarray*} { \fint_{B_R(y)}\vert \nabla \phi_{\xi}\vert^{2}+\vert \nabla \phi_{\xi}\vert^{p}} &\le & \frac{L^d}{|B_R|} \sum_{j=1}^{N_{R,L}} \fint_{Q_L^j}\vert \nabla \phi_{\xi}\vert^{2}+\vert \nabla \phi_{\xi}\vert^{p} \\ &\le &c_d \frac{R^d}{L^d} \frac{L^d}{|B_R|} c (1 +|\xi|^{p}) \,=\, c_1 (1+|\xi|^{p}) \end{eqnarray*} for the choice $c_1:= c_d|B|^{-1}c$ (with $|B|$ the volume of the unit ball), which only depends on $d$, $\lambda$, and $p$. This yields \eqref{unifboundr*NLeq}. If $L$ is not dyadic, we cover $B_R$ by cubes of sidelength $2^l$ with $l$ such that $2^l \le L < 2^{l+1}$, and obtain the result at the price of increasing~$c_1$. \medskip \step2 Proof of \eqref{encadrementrNL}. \noindent By definition~\eqref{defr*NL} of $r_{\star,\xi,L}$, we have $r_{\star,\xi,L}(0,c_1)\le {\underline r_{\star,\xi,L}}(0,c_1)$ by testing the infimum problem with $y=0$. Let us now prove that there exists $c_2$ such that for all $R \ge 1$ we have the implication $r_{\star,\xi,L}(0,c_1) \le R \implies {\underline r_{\star,\xi,L}}(0,c_2) \le R$, from which we deduce~\eqref{encadrementrNL}. By definition \eqref{defr*NL} of $r_{\star,\xi,L}$, if $r_{\star,\xi,L}(0,c_1)\le R$, there exists $y \in \mathbb{R}^d$ such that $|y| \le \frac R\ell$ and ${\underline r_{\star,\xi,L}}(y,c_1)\le R$. This implies that $B_{R} \subset B_{\bar R}(y)$ with $\bar R:=(\frac 1\ell+1)R$, so that \begin{equation*} {\fint_{B_R} \vert\nabla \phi_{\xi}\vert^{p}} \,\le \, (\tfrac{\bar R}{R})^{d}\fint_{B_{\bar R}(y)}\vert\nabla \phi_{\xi}\vert^{p}\,\le \,(\tfrac 1\ell+1)^{d}c_1(1+\vert\xi\vert^p). \end{equation*} Since the same bound holds with $R$ replaced by any $R'\ge R$, this yields, with $c_2:=(\tfrac 1\ell+1)^{d}c_1$, ${\underline r_{\star,\xi,L}}(0,c_2)\le R$, and therefore~\eqref{encadrementrNL}. \end{proof} In the rest of the paper, the notation $r_{\star,\xi,L}$ refers to the minimal scales $r_{\star,\xi,L}(\cdot,c_1)$ for which Lemma~\ref{uniformboundr*NL} holds. When no confusion occurs, we simply write $\r$ for $r_{\star,\xi,L}$, and use the short-hand notation $B_{\star}(x)$ for $B_{r_{\star,\xi,L}(x)}(x)$. \medskip We conclude this paragraph by showing that the Meyers minimal radius controls local averages of the nonlinear corrector. \begin{lemma}[Control of averages of the nonlinear correctors]\label{ctrlavNL} There exists a nonlinear hole-filling exponent $0<\delta \le d$ depending on $d$, $p$, and $\lambda$ such that for all $(x,\xi)\in\mathbb{R}^d\times\mathbb{R}^d$, we have for all $r>0$ \begin{eqnarray} \fint_{B_r(x)}\vert\xi+\nabla\phi_{\xi}\vert^2+\vert\xi+\nabla\phi_{\xi}\vert^p&\lesssim_{d,\lambda,p}&(1+\vert\xi\vert^p)\Big(\frac{\r(x) \vee r}{r}\Big)^{d-\delta}. \label{controlunitball} \end{eqnarray} \end{lemma} \begin{proof} Without loss of generality, we may assume that $x=0$. We use the short-hand notation $\rho:=\r \vee r \ge \r$. By the hole-filling estimate \eqref{HolefillingNL} applied to the defining equation~\eqref{e.cor-eq} for $\phi_\xi$, there exists $\delta>0$ depending on $d$, $p$, and $\lambda$ such that \begin{equation*} {\fint_{B_r}\vert\xi+\nabla\phi_{\xi}\vert^2+\vert\xi+\nabla\phi_{\xi}\vert^p} \,\lesssim \, \Big(\frac{\rho}{r}\Big)^{d-\delta}\fint_{B_{\rho}}\vert\xi+\nabla\phi_{\xi}\vert^2+\vert\xi+\nabla\phi_{\xi}\vert^p \,\lesssim\,\Big(\frac{\rho}{r}\Big)^{d-\delta}\fint_{B_{\rho}}|\xi|^2+|\xi|^p+\vert\nabla\phi_{\xi}\vert^2+\vert\nabla\phi_{\xi}\vert^p.
\end{equation*} Using then \eqref{encadrementrNL} in form of $\rho \ge \r(0)\ge {\underline r_{\star,\xi,L}}(0,c_2)$, the definition \eqref{defr*NL2}, and Jensen's inequality, this yields the reformulation of~\eqref{controlunitball} \begin{equation*} {\fint_{B_r}\vert\xi+\nabla\phi_{\xi}\vert^2+\vert\xi+\nabla\phi_{\xi}\vert^p} \,\lesssim \, \Big(\frac{\rho}{r}\Big)^{d-\delta}(c_2+1)(1+|\xi|^p). \end{equation*} \end{proof} \subsection{Quenched perturbative regularity in the large}\label{perturbativeregsection} \subsubsection{Quenched Meyers' estimate in the large} Recall that $a_\xi:=D a(\cdot,\xi+\nabla \phi_\xi)$. The elliptic operator $-\nabla\cdot a_{\xi}\nabla$ has unbounded coefficients, whose growth depends on the nonlinear corrector $\nabla\phi_{\xi}$: There exists $(c,C)\in\mathbb{R}_+\times \mathbb{R}_+$, depending on $\lambda$ and $p$, such that for all $h\in\mathbb{R}^d$ \begin{equation} c\vert h\vert^2\mu_{\xi}\leq h\cdot a_{\xi} h\leq C\vert h\vert^2\mu_{\xi}, \label{growthconditionlineaxi} \end{equation} where \begin{equation}\label{e.def-mu} \mu_{\xi}:=1+\vert\xi+\nabla\phi_{\xi}\vert^{p-2}. \end{equation} In addition, by \eqref{controlunitball} in Lemma~\ref{ctrlavNL} we have for all $r\geq \r$ \begin{equation} \|\mu_{\xi}\|_{L^{\frac{p}{p-2}}(B_r)}^{\frac{p}{p-2}} \lesssim_{d,\lambda,p}r^{d }(1+\vert\xi\vert^p). \label{boundnormweight} \end{equation} The main result of this section is the following quenched Meyers estimate in the large. \begin{theorem}[Quenched Meyers' estimate in the large]\label{unweightmeyers} Under Hypothesis~\ref{hypo}, for all $\xi\in\mathbb{R}^d$, there exists $\bar m>2$ depending on $d$, $p$ and $\vert\xi\vert$, such that for all exponents $2 \le m \le \bar m$, all $Q_L$-periodic fields $g$ and $u$ related via \begin{equation}\label{LSMequationu} -\nabla\cdot a_{\xi}\nabla u=\nabla \cdot (g\sqrt{\mu_{\xi}}), \end{equation} and all $r>0$, we have \begin{equation} \fint_{B_r}\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x\lesssim_{\vert\xi\vert}\Big(\fint_{B_{2r}}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)\,\mathrm{d} x\Big)^{\frac{m}{2}} +\fint_{B_{2r}}\Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x. \label{unweightedmeyerslocal} \end{equation} In particular, \begin{equation} \int_{Q_L}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)^{\frac{m}{2}}\mathrm{d} x\lesssim_{\vert\xi\vert} \int_{Q_L}\Big(\fint_{B_{\star}(x)}\vert g\vert^2\Big)^{\frac{m}{2}}\mathrm{d} x. \label{unweightedmeyers} \end{equation} The same result holds with $a_{\xi}$ replaced by $a_{\xi}^*$ (the pointwise transpose field). \end{theorem} We follow the standard strategy based on a reverse H\"older inequality and Gehring's lemma to prove this Meyers estimate. We start with the reverse H\"older inequality: \begin{lemma}[Reverse H\"older inequality]\label{improvedholder} Let Hypothesis~\ref{hypo} hold. Set $q=\frac{p}{p-2}$.
For all $x\in\mathbb{R}^d$, $r \geq \r(x)$, and all $g$ and $u$ related via \begin{equation} -\nabla\cdot a_{\xi}\nabla u=\nabla \cdot (g\sqrt{\mu_\xi}) \text{ in $B_{\frac{17}{12}r}(x)$}, \label{reverseholderequationu} \end{equation} we have \begin{equation} \Big(\fint_{B_{\frac{67}{48} r}(x)}\vert \nabla u\vert^2\mu_{\xi}\Big)^{\frac{1}{2}}\lesssim (1+ \vert\xi\vert^p)^{\frac{p-2}{2p}}\Big(\fint_{B_{\frac{17}{12}r}(x)}\vert\nabla u \vert^{q_*}\Big)^{\frac{1}{q_*}}+\Big(\fint_{B_{\frac{17}{12}r}(x)}\vert g\vert^2 \Big)^{\frac{1}{2}}, \label{reverseholderesti} \end{equation} with $1\le q_*<2$ given by \begin{equation} \frac{1}{q_*} = \left\{ \begin{array}{ll} 1 & \text{ for $d=2$},\\ \frac{1}{2}-\frac{1}{2q}+\frac{1}{d-1} & \text{ for $d\geq 3$}. \end{array} \right. \label{exponentlemmabella} \end{equation} (Note that the requirement $q_*<2$ amounts to $q>\frac{d-1}{2}$ for $d\ge 3$, that is, to $p<\frac{2(d-1)}{d-3}$ when $d>3$, in line with the standing restriction on $p$; also, the choice of $\frac{67}{48}$ and $\frac{17}{12}$ is convenient for the sequel, but obviously not essential.) The same result holds with $a_{\xi}$ replaced by $a_{\xi}^*$ (the pointwise transpose field). \end{lemma} Not surprisingly, this estimate follows from the Caccioppoli and the Poincar\'e-Sobolev inequalities. As opposed to the case of uniformly bounded coefficients, the weight $\mu_\xi$ is in the way (it is not a Muckenhoupt weight and cannot be treated as one). In order to get the entire range of exponents $2\le p <\infty$ in dimension $d=3$, we have to be careful in the Caccioppoli inequality. Inspired by \cite[Lemma 1]{bella2019local}, we optimize with respect to the cut-off in Caccioppoli's inequality, which allows us to appeal to Poincar\'e-Sobolev in dimension $d-1$ rather than $d$ (and therefore improve the integrability). \begin{lemma}\label{lemmabella} Let $q\in [1,+\infty)$, assume that $q>\frac{d-1}{2}$ if $d\geq 3$, and let $q_*$ be given by \eqref{exponentlemmabella}. For $0<\rho<\sigma<+\infty$, $v\in W^{1,q_*}(B_{\sigma})$ and $\mu\in L^{q}_{\text{loc}}(\mathbb{R}^d)$, the quantity \begin{equation} \mathcal{J}(\rho,\sigma,\mu,v):=\inf\Big\{\int_{B_{\sigma}}\mu v^2\vert\nabla\eta\vert^2 \Big\vert\eta\in C^{1}_c(B_{\sigma}),\, 0\leq \eta\leq 1,\, \eta\equiv 1 \text{ in } B_{\rho}\Big\} \label{defJinfbella} \end{equation} satisfies \begin{equation} \mathcal{J}(\rho,\sigma,\mu,v)\lesssim (\sigma-\rho)^{-\frac{2d}{d-1}}\|\mu\|_{L^q(B_{\sigma}\backslash B_{\rho})}\left(\|\nabla v\|^2_{L^{q_*}(B_{\sigma}\backslash B_{\rho})}+\rho^{-2}\|v\|^2_{L^{q_*}(B_{\sigma}\backslash B_{\rho})}\right). \label{estilemmabella} \end{equation} \end{lemma} The proof of Lemma~\ref{lemmabella}, which closely follows the proof of~\cite[Lemma 1]{bella2019local}, is postponed to Appendix~\ref{append:standard-ineq}. We now prove Lemma~\ref{improvedholder}. \begin{proof}[Proof of Lemma~\ref{improvedholder}] Without loss of generality, we may assume $x=0$ and $\int_{B_{\frac{17}{12}r}\backslash B_{\frac{67}{48} r}} u =0$. We first apply the Caccioppoli inequality \eqref{esticaccioppounbounded} with $\mu=\mu_{\xi}$ and $c_1=\frac{67}{48}<\frac{17}{12}=c_2$, and obtain with the notation \eqref{defJinfbella} \begin{equation} \int_{B_{\frac{67}{48} r}}\vert\nabla u \vert^2\mu_{\xi}\, \lesssim \, \mathcal{J}(\tfrac{67}{48} r,\tfrac{17}{12}r,\mu_{\xi},u)+\int_{B_{\frac{17}{12}r}}\vert g \vert^2 .
\label{lemmabellaapp} \end{equation} We then apply Lemma \ref{lemmabella} with exponent $q=\frac{p}{p-2}$ for $d\geq 3$ and $q=1$ for $d=2$, to the effect that \begin{equation} \mathcal{J}(\tfrac{67}{48} r,\tfrac{17}{12}r,\mu_{\xi},u)\lesssim r^{-\frac{2d}{d-1}}\|\mu_{\xi}\|_{L^q(B_{\frac{17}{12}r}\backslash{B_{\frac{67}{48} r}})}\left(\|\nabla u\|^2_{L^{q_*}(B_{\frac{17}{12}r}\backslash{B_{\frac{67}{48} r}})}+r^{-2}\|u\|^2_{L^{q_*}(B_{\frac{17}{12}r}\backslash{B_{\frac{67}{48} r}})}\right). \label{lemmabellaapp2} \end{equation} Since $r\geq \r(0)$, \eqref{boundnormweight} yields $\|\mu_{\xi}\|_{L^q(B_{\frac{17}{12}r}\backslash{B_{\frac{67}{48} r}})}\lesssim (1+\vert \xi\vert^{p})^{\frac{p-2}{p}}r^{\frac{d}{q}}$, whereas Poincar\'e's inequality in $L^{q_*}$ on the annulus $B_{\frac{17}{12}r}\backslash{B_{\frac{67}{48} r}}$ (recall the normalization $\int_{B_{\frac{17}{12}r}\backslash B_{\frac{67}{48} r}} u =0$) yields $r^{-2}\|u\|^2_{L^{q_*}(B_{\frac{17}{12}r}\backslash{B_{\frac{67}{48} r}})}\lesssim \|\nabla u\|^2_{L^{q_*}(B_{\frac{17}{12}r}\backslash{B_{\frac{67}{48} r}})}$. Hence, \eqref{lemmabellaapp2} turns into $$ \mathcal{J}(\tfrac{67}{48} r,\tfrac{17}{12}r,\mu_{\xi},u)\lesssim r^{-\frac{2d}{d-1}+\frac{d}{q}}(1+\vert\xi\vert^p)^{\frac{p-2}{p}}\|\nabla u\|^2_{L^{q_*}(B_{\frac{17}{12}r}\backslash{B_{\frac{67}{48} r}})}. $$ Combined with \eqref{lemmabellaapp}, this entails $$ \fint_{B_{\frac{67}{48} r}}\vert\nabla u \vert^2\mu_{\xi}\,\lesssim \,r^{-\frac{2d}{d-1}+\frac{d}{q}-d+\frac{2d}{q_*}}(1+ \vert\xi\vert^p)^{\frac{p-2}{p}}\Big(\fint_{B_{\frac{17}{12}r}}\vert\nabla u\vert^{q_*} \Big)^{\frac{2}{q_*}}+\fint_{B_{\frac{17}{12}r}}\vert g \vert^2, $$ which concludes the proof since, by definition \eqref{exponentlemmabella} of $q_*$, $-\frac{2d}{d-1}+\frac{d}{q}-d+\frac{2d}{q_*}=0$. \end{proof} Theorem~\ref{unweightmeyers} relies on the combination of Lemma~\ref{improvedholder} with Gehring's inequality in form of \begin{lemma}[Gehring's lemma]\label{gehring} Let $s>1$, and let $f$ and $h$ be two non-negative measurable functions in $L^s_\mathrm{loc}(\mathbb{R}^d)$ such that there exists $C>0$ for which for all $r>0$ and $x\in\mathbb{R}^d$ $$ \Big(\fint_{B_r(x)} f^s\Big)^{\frac{1}{s}}\leq C\Big(\fint_{B_{2r}(x)}f+\Big(\fint_{B_{2r}(x)}h^s\Big)^{\frac{1}{s}}\Big). $$ Then, there exists $\bar{s}> s$ depending on $d$, $s$, and $C$ such that for all $r>0$ and $x\in\mathbb{R}^d$, we have $$ \Big(\fint_{B_r(x)}f^{\bar{s}}\Big)^{\frac{1}{\bar{s}}}\lesssim \fint_{B_{2r}(x)}f+\Big(\fint_{B_{2r}(x)}h^{\bar{s}}\Big)^{\frac{1}{\bar{s}}}. $$ \end{lemma} We are now in a position to prove Theorem \ref{unweightmeyers}. \begin{proof}[Proof of Theorem \ref{unweightmeyers}] Let $1\le q_* <2$ be given by \eqref{exponentlemmabella}. We first prove that for all $r>0$ \begin{equation} \fint_{B_r}\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi}\Big)\mathrm{d} x\lesssim \Big(\fint_{B_{2r}}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)^\frac{q_*}2 \mathrm{d} x\Big)^{\frac{2}{q_*}} +\fint_{B_{2r}}\Big(\fint_{B_{\star}(x)}\vert g \vert^2\Big)\mathrm{d} x . \label{reverseholder} \end{equation} If $r\le 3 \r(0)$ this estimate follows from Lemma~\ref{reverse}, and it remains to treat the case $r\geq 3\r(0)$. We first use \eqref{fintout} with $f=\vert\nabla u\vert^2\mu_{\xi}$ to the effect that \begin{equation} \fint_{B_r}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi} \Big) \mathrm{d} x\lesssim \fint_{B_{\frac{67}{48}r}}\vert\nabla u\vert^2\mu_{\xi}.
\label{propunweightmeyers1} \end{equation} Then, by the reverse H\"older inequality \eqref{reverseholderesti} followed by \eqref{fintint}, we obtain \begin{eqnarray} \fint_{B_{\frac{67}{48}r}}\vert\nabla u \vert^2\mu_{\xi} &\stackrel{\eqref{reverseholderesti}}{\lesssim}& (1+ \vert\xi\vert^p)^{\frac{p-2}{p}}\Big(\fint_{B_{\frac{17}{12} r}}\vert\nabla u \vert^{q_*} \Big)^{\frac{2}{q_*}}+\fint_{B_{\frac{17}{12}r}}\vert g \vert^2 \nonumber \\ &\stackrel{\eqref{fintint}}{\lesssim}&(1 +\vert\xi\vert^p)^{\frac{p-2}{p}}\Big(\fint_{B_{2r}}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^{q_*}\Big)\, \mathrm{d} x\Big)^{\frac{2}{q_*}} +\fint_{B_{2r}}\Big(\fint_{B_{\star}(x)}\vert g\vert^2\Big)\, \mathrm{d} x\label{propunweightmeyers2}. \end{eqnarray} We then slightly reformulate the first right-hand side term using Jensen's inequality in the inner integral (since $q_*<2$) and the lower bound $\mu_{\xi}\geq 1$, so that \begin{align} \Big(\fint_{B_{2r}}\fint_{B_{\star}(x)}\vert \nabla u\vert^{q_*}\, \mathrm{d} x\Big)^{\frac{2}{q_*}}&\leq \Big(\fint_{B_{2r}}\Big(\fint_{B_{\star}(x)}\vert \nabla u\vert^2 \Big)^{\frac{q_*}2} \mathrm{d} x\Big)^{\frac{2}{q_*}}\nonumber\\ &\leq \Big(\fint_{B_{2r}}\Big(\fint_{B_{\star}(x)}\vert \nabla u\vert^2 \mu_{\xi}\Big)^{\frac{q_*}2} \mathrm{d} x\Big)^{\frac{2}{q_*}}. \label{propunweightmeyers3} \end{align} The combination of \eqref{propunweightmeyers1}, \eqref{propunweightmeyers2}, and \eqref{propunweightmeyers3} yields the claimed estimate \eqref{reverseholder}. To conclude, we apply Lemma~\ref{gehring} with $$f:x\mapsto \Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^{2}\mu_{\xi}\Big)^{\frac {q_*}{2}}, \quad h:x\mapsto \Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^{\frac{q_*}2}, \quad s=\tfrac{2}{q_*}>1. $$ This yields \eqref{unweightedmeyerslocal} with $\bar m:=q_*\bar s$, whereas \eqref{unweightedmeyers} follows by applying \eqref{unweightedmeyerslocal} for $B_r$ with $r=\frac{\sqrt{d}}2L$ and using the periodicity of the quantities involved together with the plain energy estimate $ \int_{Q_L}\vert\nabla u \vert^2\mu_{\xi} \lesssim \int_{Q_L}\vert g \vert^2. $ \end{proof} \subsubsection{Quenched weighted Meyers' estimate in the large} The main result of this paragraph is the following upgrade of Theorem~\ref{unweightmeyers}. \begin{theorem}[Quenched weighted Meyers estimates in the large]\label{largescalereg} Under Hypothesis~\ref{hypo}, for all $\xi\in\mathbb{R}^d$, there exists $\beta >0$ depending only on $d$, $p$, and $|\xi|$ such that for all $2\le m \le \bar m$ (cf.~Theorem~\ref{unweightmeyers}), $0\le 2\varepsilon \le \beta$ and all $Q_L$-periodic fields $g$ and $u$ related via \eqref{LSMequationu}, we have \begin{equation} \int_{Q_L}\omega_{\varepsilon,r}(x)\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x\lesssim_{\vert\xi\vert}\int_{Q_L}\omega_{2\varepsilon,r}(x)\Big(\fint_{B_{\star}(x)}\vert g\vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x, \label{weightedLp} \end{equation} where for all $x\in Q_L$ \begin{equation} \omega_{\varepsilon,r}(x):=\Big(1+\frac{|x|+\r(0)}{r}\Big)^{\varepsilon}. \label{weightmeyers} \end{equation} The same result holds with $a_{\xi}$ replaced by $a_{\xi}^*$ (the pointwise transpose field). \end{theorem} We proceed in two steps: From Theorem~\ref{unweightmeyers} we first prove a suitable hole-filling estimate, which we use in turn to upgrade Theorem~\ref{unweightmeyers}.
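Before turning to the proofs, let us record for orientation the elementary behavior of the weight \eqref{weightmeyers}; this is a heuristic reading only (it uses nothing beyond the definition \eqref{weightmeyers} and $\r\ge 1$) and is not used in the sequel:
\begin{equation*}
\omega_{\varepsilon,r}(x)\,\simeq\,
\begin{cases}
\big(1+\tfrac{\r(0)}{r}\big)^{\varepsilon} & \text{for } \vert x\vert \lesssim r\vee\r(0),\\[1mm]
\big(\tfrac{\vert x\vert}{r}\big)^{\varepsilon} & \text{for } \vert x\vert \gtrsim r\vee\r(0).
\end{cases}
\end{equation*}
In other words, \eqref{weightedLp} upgrades the unweighted estimate \eqref{unweightedmeyers} by an algebraic gain away from $B_r$, up to the threshold $2\varepsilon\le\beta$ set by the linear hole-filling exponent of Corollary~\ref{Lholefilling} below; the doubling of the exponent from $\varepsilon$ to $2\varepsilon$ between the two sides of \eqref{weightedLp} is the price paid for the dyadic resummation in Step~4 of the proof of Theorem~\ref{largescalereg}.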
\begin{corollary}[Linear hole-filling estimate in the large]\label{Lholefilling} Under Hypothesis~\ref{hypo}, for all $\xi\in\mathbb{R}^d$ there exist an exponent $\beta >0$, depending only on $d$, $p$ and $\vert\xi\vert$, and a constant $c_d\ge 1$ with the following properties. Let $u$ be a $Q_L$-periodic function which is $a_{\xi}$-harmonic in $Q_R(x)$ for some $x\in \mathbb{R}^d$ and $L\ge R \ge c_d \r(x)$, that is $$-\nabla\cdot a_{\xi}\nabla u= 0 \text{ in $Q_R(x)$}.$$ Then for all $\r(x)\le r \le R$, \begin{equation} \int_{Q_r(x)}\vert \nabla u \vert^2\mu_{\xi} \lesssim_{\vert\xi\vert} (\tfrac{r}{R})^{\beta}\int_{Q_R(x)}\vert\nabla u \vert^2\mu_{\xi} . \label{Lholefillingesti} \end{equation} The same result holds with $a_{\xi}$ replaced by $a_{\xi}^*$ (the pointwise transpose field). \end{corollary} \begin{proof}[Proof of Corollary \ref{Lholefilling}] Without loss of generality, we may assume that $x=0$, $r \ge \r(0)$, and that $2c r \le \frac R4$ with $c= 3 \vee \frac{\sqrt d}{2}$ (the estimate being trivial otherwise). By \eqref{fintout}, \eqref{fintint}, the H\"older inequality with exponents $(\bar{m} ,\frac{\bar m}{\bar m -1})$ (with $\bar m$ as in Theorem \ref{unweightmeyers}) and the unweighted Meyers estimate~\eqref{unweightedmeyerslocal}, we have with $\beta :=d(1-\frac{1}{\bar m})$ \begin{eqnarray*} \int_{Q_r}\vert \nabla u\vert^2\mu_{\xi} & \le & \int_{B_{cr}}\vert \nabla u\vert^2\mu_{\xi} \\ &\stackrel{\eqref{fintint}}{\lesssim}&r^d\fint_{B_{2cr}}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi} \Big)\,\mathrm{d} x\\ &\leq &r^d\Big(\fint_{B_{2cr}}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi} \Big)^{\bar m}\mathrm{d} x\Big)^{\frac{1}{\bar m}}\\ &\leq& r^d(\tfrac{R}{r})^{\frac{d}{\bar m}} \Big(\fint_{B_{\frac R4}}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi} \Big)^{\bar m}\mathrm{d} x\Big)^{\frac{1}{\bar m}}\\ &\stackrel{\eqref{unweightedmeyerslocal}}{\lesssim}&r^d(\tfrac{R}{r})^{\frac{d}{\bar m}}\fint_{B_{\frac R4}}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)\,\mathrm{d} x \,\stackrel{\eqref{fintout}}{\lesssim} \,(\tfrac{r}{R})^{\beta }\int_{B_{\frac R2}}\vert\nabla u \vert^2\mu_{\xi} \le (\tfrac{r}{R})^{\beta }\int_{Q_R}\vert\nabla u \vert^2\mu_{\xi}. \end{eqnarray*} \end{proof} We now prove Theorem \ref{largescalereg}. \begin{proof}[Proof of Theorem \ref{largescalereg}] We split the proof into four steps. In the first step, we show that for compactly supported right-hand sides $g$ the solution gradient decays algebraically away from the source in $L^2$, based on hole-filling. We then upgrade this $L^2$ estimate into an $L^m$ estimate for some $m>2$ using Meyers' estimate~\eqref{unweightedmeyerslocal}. In the third step, we remove the assumption that $g$ be compactly supported by using a dyadic decomposition of scales. In the last step we exploit the algebraic decay to add the desired weight. Since the proof relies on a dyadic decomposition of the torus, it is convenient to work with cubes rather than balls when taking averages (which makes constants slightly cumbersome). \medskip \step1 $L^2$ algebraic decay rate. \noindent With $\beta>0$ the exponent of Corollary~\ref{Lholefilling}, we prove that for all $L\ge R \ge r\geq c_d\r(0)$ and all $g$ compactly supported in $Q_r$ we have \begin{equation} \int_{Q_L \backslash Q_R}\vert\nabla u \vert^2\mu_\xi \lesssim (\tfrac{r}{R})^{\beta}\int_{Q_r}\vert g \vert^2 .
\label{holefil} \end{equation} We proceed by duality in form of \begin{equation}\label{e.dua1} \int_{Q_L \backslash Q_R}\vert\nabla u\vert^2\mu_\xi \,=\, \sup_{h } \int_{Q_L \backslash Q_R}h \cdot \nabla u \sqrt{\mu_\xi } , \end{equation} where the supremum runs over functions $h \in L^2(Q_L \backslash Q_R)^d$ with $\|h\|_{ L^2(Q_L \backslash Q_R)^d}=1$. Consider such a test function $h$ (implicitly extended by zero on $Q_R$) and denote by $v$ the unique weak solution in $H^1_{\mathrm{per}}(Q_L)$ of \begin{equation} -\nabla \cdot a_{\xi}^*\nabla v=\nabla\cdot (h\sqrt{\mu_{\xi}}), \label{LSMequationdualv} \end{equation} which is well-posed since $\mu_{\xi}$ is bounded on $Q_L$ almost surely by Lemma~\ref{regestiNL}. By testing \eqref{LSMequationdualv} with $u$ and \eqref{LSMequationu} with $v$, we obtain by Cauchy-Schwarz' inequality and the support condition on $g$ \begin{equation} \Big|\int_{Q_L}h\cdot \nabla u\sqrt{\mu_{\xi} }\Big|=\Big|\int_{Q_{L}}g \cdot \nabla v \sqrt{\mu_{\xi} } \Big|\leq \Big(\int_{Q_r}\vert g \vert^2 \Big)^{\frac{1}{2}}\Big(\int_{Q_r}\vert\nabla v \vert^2\mu_{\xi} \Big)^{\frac{1}{2}}. \label{weightedmeyerstep11} \end{equation} Since $h$ vanishes on $Q_R$, $v$ is $a_{\xi}^*$-harmonic in $Q_R$, and the hole-filling estimate \eqref{Lholefillingesti} with exponent $\beta>0$ yields in combination with the plain energy estimate $\int_{Q_{L}}\vert \nabla v \vert^2\mu_{\xi} \lesssim \int_{Q_{L}}\vert h \vert^2 $ and the assumption $\int_{Q_L}|h|^2=1$ \begin{equation} \int_{Q_r}\vert\nabla v\vert^2\mu_{\xi} \lesssim (\tfrac{r}{R})^{\beta}\int_{Q_R}\vert\nabla v\vert^2\mu_{\xi} \lesssim (\tfrac{r}{R})^{\beta}\int_{Q_{L}}\vert h \vert^2 =(\tfrac{r}{R})^{\beta}. \label{weightedmeyerstep12} \end{equation} The claim \eqref{holefil} now follows from \eqref{e.dua1}, \eqref{weightedmeyerstep11} and \eqref{weightedmeyerstep12}. \medskip \step2 $L^m$ algebraic decay rate for $2 \le m \le \bar m$. \noindent In this step, we prove that, with $C_d=4C \vee c_d \vee 16$ (with $C\ge 1$ as in \eqref{fintoutC}), for all $L> R \ge 2 r$ with $r\geq C_d\r(0)$, and all $g$ compactly supported in $Q_r$, we may upgrade \eqref{holefil} to \begin{equation} \int_{Q_{L}\backslash Q_R}\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2 \mu_{\xi}\Big)^{\frac{m}{2}}\mathrm{d} x\lesssim R^{d(1-\frac{m}{2})}(\tfrac{r}{R})^{\beta \frac{m}{2}}\Big(\int_{Q_r}\vert g\vert^2\Big)^{\frac{m}{2}} \label{holefillingLp} \end{equation} for all $2\le m \le \bar m$, where $\bar{m}$ is the Meyers exponent of Theorem~\ref{unweightmeyers}. Let $J \in \mathbb{N}$ be such that $2^JR < L \le 2^{J+1}R$. By writing $Q_L \setminus Q_R= (Q_L \setminus Q_{2^JR})\cup \bigcup_{j=1}^J (Q_{2^{j}R}\setminus Q_{2^{j-1}R})$ (with the convention that the second union is empty if $J=0$), it is enough to prove that for all $1\le j\le J+1$, we have \begin{equation} \int_{Q_{(2^{j}R) \wedge L}\backslash Q_{2^{j-1}R}}\Big(\fint_{B_{\star}(x)}\vert \nabla u \vert^2 \mu_\xi \Big)^{\frac{m}{2}}\mathrm{d} x\lesssim (2^jR)^{d(1-\frac{m}{2})}(\tfrac{r}{2^jR})^{\beta \frac{m}{2}}\Big(\int_{Q_r}\vert g\vert^2 \Big)^{\frac{m}{2}}. \label{dyadicLp} \end{equation} Indeed, for all $m\ge 2$ we have $d(1-\frac{m}{2})-\beta \frac m 2 \le -\beta<0$, so that the corresponding geometric series converges and the dyadic terms sum to \eqref{holefillingLp}. To start with, reverting from balls to cubes, one may reformulate Theorem~\ref{unweightmeyers} with cubes instead of balls, and replace $B_r$ and $B_{2r}$ by $Q_r$ and $Q_{C_1 r}$, respectively (for some $C_1$ depending only on the dimension).
Let $1\le j\le J$ be fixed (the case $j=J+1$ can be treated similarly). We partition $Q_{2^{j}R}\setminus Q_{2^{j-1}R}$ into the union of cubes $\{Q^k\}_{k=1,\dots,N}$ of side-length $\frac1{C_2} 2^jR$ for some $C_2$ to be fixed later (the number $N$ of such cubes then depends on $d$ and $C_2$, but not on $j$ or $R$), to the effect that for all $m\ge 2$ we have \begin{equation} \int_{Q_{2^{j}R}\backslash Q_{2^{j-1}R}}\Big(\fint_{B_{\star}(x)}\vert \nabla u \vert^2 \mu_\xi\Big)^{\frac{m}{2}}\mathrm{d} x= \sum_{k=1}^{N}\int_{Q^k}\Big(\fint_{B_{\star}(x)}\vert \nabla u \vert^2 \mu_\xi \Big)^{\frac{m}{2}}\mathrm{d} x. \label{decomposeint1} \end{equation} By Theorem \ref{unweightmeyers}, for all $2\le m \le \bar m$ and $1\le k \le N$, \begin{equation} \fint_{Q^k}\Big(\fint_{B_{\star}(x)}\vert \nabla u \vert^2 \mu_\xi \Big)^{\frac{m}{2}}\mathrm{d} x\lesssim \, \Big(\fint_{\bar Q^k}\Big(\fint_{B_{\star}(x)}\vert \nabla u \vert^2 \mu_\xi\Big) \, \mathrm{d} x\Big)^{\frac{m}{2}}+\fint_{\bar Q^k}\Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x, \label{meyersesti1} \end{equation} where $\bar Q^k\supset Q^k$ denotes the cube of side-length $\frac{C_1}{C_2} 2^jR$ centered at the center $x_k \in Q_{2^jR}\setminus Q_{2^{j-1}R}$ of $Q^k$. We now control the two right-hand side terms of \eqref{meyersesti1}. On the one hand, by the $\ell$-Lipschitz property of $\r$ and the assumption $R \ge 2C_d\r(0)$, for all $x \in \bar Q^k$, we have $|x|\le |x_k|+\frac{\sqrt{d}}{2}\frac{C_1}{C_2} 2^j R \le \frac{\sqrt{d}}{2}(1+\frac{C_1}{C_2})2^j R$, and therefore \begin{equation} \label{e.ugly} \r(x)\le \r(0)+ \ell |x| \,\le \, R(\tfrac1{2C_d} +\ell \tfrac{\sqrt d}{2}2^{j} (1+\tfrac{C_1}{C_2})). \end{equation} Recall the constant $C$ in \eqref{fintoutC} and that $C_1$ only depends on the dimension. We now choose $C_2:=8C_1$. For our choice $C_d = 4C\vee c_d \vee 16$ and $R \ge 2C_d \r(0)$, and since $0<\ell = \frac 1{9C \sqrt{d}} \wedge \frac1{16}$, we have $\frac{C_1}{C_2}2^j R= 2^{j-3}R$ and $$ C\Big(\frac1{2C_d}+\frac{\ell\sqrt d}{2}2^j \big(1+\frac{C_1}{C_2}\big)\Big) \le C\Big(\frac1{8C}+\frac1{18C}2^j\big(1+\frac18\big)\Big) \le 2^{j-3}, $$ which, by \eqref{e.ugly}, entails $\frac{C_1}{C_2}2^j R \ge C\r(x)$, a condition under which \eqref{fintoutC} yields \begin{equation*} \int_{\bar Q^k}\Big(\fint_{B_{\star}(x)}\vert \nabla u \vert^2 \mu_\xi\Big)\, \mathrm{d} x\lesssim \int_{\tilde Q^k}\vert\nabla u \vert^2 \mu_\xi , \end{equation*} where $\tilde Q^k$ denotes the cube of side-length $\frac{C_1}{C_2} 2^{j+1}R=2^{j-2}R$ centered at $x_k$, so that $\tilde Q^k \mod L\mathbb{Z}^d \subset Q_{2^{j+1}R \wedge L}\backslash Q_{2^{j-2}R}$. Hence, by \eqref{holefil}, \begin{equation}\label{e.lp1} \int_{\bar Q^k}\Big(\fint_{B_{\star}(x)}\vert \nabla u \vert^2 \mu_\xi\Big)\, \mathrm{d} x\lesssim \int_{Q_{2^{j+1}R \wedge L}\backslash Q_{2^{j-2}R}}\vert\nabla u \vert^2 \mu_\xi \, \lesssim\, (\tfrac{r}{2^jR})^{\beta}\int_{Q_r}\vert g \vert^2 . \end{equation} On the other hand, the same argument implies \begin{equation}\label{e.lp2} \int_{\bar Q^k}\Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)\, \mathrm{d} x\lesssim \int_{Q_{2^{j+1}R \wedge L}\backslash Q_{2^{j-2}R}}\vert g \vert^2 = 0, \end{equation} where we used that $g$ is supported in $Q_r$ and $r\le \frac R2$. The claim \eqref{dyadicLp} then follows from \eqref{decomposeint1}, \eqref{meyersesti1}, \eqref{e.lp1}, and \eqref{e.lp2}, and the identity $|Q^k|=(\tfrac{2^{j}R}{C_2})^d\simeq (2^{j}R)^d$. \medskip \step3 Extension to general $g$.
\noindent In this step, we relax the support assumption on $g$ in \eqref{holefillingLp}, and claim that for all $L\ge R \ge 2C_d \r(0)$ and all $2\le m \le \bar m$, \begin{multline} \Big(\int_{Q_{L}\backslash Q_R}\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^{\frac{1}{m}}\, \lesssim\, \Big(\int_{Q_{L}\backslash Q_{\frac R4}}\Big(\fint_{B_{\star}(x)}\vert g\vert^2\Big)^{\frac{m}{2}}\mathrm{d} x\Big)^{\frac{1}{m}} \\ +\Big(\int_{Q_{R}}(\tfrac{|x|+\r(0)}R)^{\frac{\beta m}{4}}\Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^{\frac{1}{m}}. \label{holefilling2} \end{multline} Let $N \in \mathbb{N}$ be such that $2^{N}C_d\r(0) \le R < 2^{N+1}C_d\r(0)$ (note that $N\ge 0$ since $R \ge 2 C_d \r(0)$). We decompose $g$ as $g=\sum_{i=0}^{N} g_i$ with $g_0:=g \mathds{1}_{Q_{C_d \r(0)}}$, $g_i:= g \mathds{1}_{Q_{2^i C_d \r(0)} \setminus Q_{2^{i-1} C_d \r(0)} }$ for all $1\le i \le N-1$, and $g_{N}:=g \mathds{1}_{Q_{L}\setminus Q_{2^{N-1}C_d \r(0)}}$. By linearity (and uniqueness of the solution) of the equation, we have $u = \sum_{i=0}^{N} u_i$ where $u_i$ denotes the (unique) weak solution in $H^1_{\mathrm{per}}(Q_L)$ of $$ -\nabla\cdot a_{\xi}\nabla u_{i}=\nabla\cdot (g_{i}\sqrt{\mu_{\xi}}). $$ By the triangle inequality, we then have for $2 \le m \le \bar m$, \begin{equation} \Big(\int_{\ Q_{L}\backslash Q_R}\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^{\frac{1}{m}} \, \le\,\sum_{i=0}^{N}\Big(\int_{Q_{L}\backslash Q_R}\Big(\fint_{B_{\star}(x)}\vert\nabla u_{i}(y)\vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^{\frac{1}{m}}.\label{decompdyadic} \end{equation} We start by estimating the term for $i=N$, for which we use the Meyers estimate~\eqref{unweightedmeyers} to the effect that $$ \Big(\int_{Q_L} \Big(\fint_{B_{\star}(x)} |\nabla u_{N} |^2 \mu_\xi \Big)^\frac m2 \mathrm{d} x \Big)^\frac 1m \,\stackrel{\eqref{unweightedmeyers}}\lesssim \, \Big(\int_{Q_L} \Big(\fint_{B_{\star}(x)} |g_{N}|^2 \Big)^\frac m2 \mathrm{d} x\Big)^\frac1m. $$ We then reformulate the right-hand side using the support condition on $g_{N}$. For $x \in Q_{2^{N-2}C_d \r(0)}$, since $\ell = \frac 1{9C \sqrt{d}} \wedge \frac1{16}$, $C\ge 1$, $N\ge 0$, and $C_d \ge 16$, we have $$ \r(x)\le \r(0)+\ell |x| \le \r(0)(1+\ell \tfrac{\sqrt{d}}{2}2^{N-2}C_d) \le 2^{N-2} C_d\r(0) (\tfrac14 + \tfrac19 ) \le 2^{N-3} C_d \r(0), $$ so that we have the implication $$ y \in B_{\star}(x) \,\implies \, y \in Q_{2^{N-2} C_d \r(0)}(x) \,\implies \, y\in Q_{2^{N-1}C_d \r(0)}(0)\,\implies \, g_N(y)=0. $$ Since $R < 2^{N+1}C_d\r(0)=4\, 2^{N-1}C_d\r(0)$, $Q_{\frac R4} \subset Q_{2^{N-1}C_d \r(0)}$, and the above implies \begin{equation}\label{holefilling2-2} \Big(\int_{Q_L} \Big(\fint_{B_{\star}(x)} |\nabla u_{N} |^2 \mu_\xi \Big)^\frac m2 \mathrm{d} x \Big)^\frac 1m \, \lesssim \, \Big(\int_{Q_L\setminus Q_{\frac R4}} \Big(\fint_{B_{\star}(x)} |g|^2\Big)^\frac m2 \mathrm{d} x\Big)^\frac1m. 
\end{equation} We then turn to the contributions for $0\le i\le N-1$, for which we appeal to \eqref{holefillingLp} with $r=2^i C_d \r(0)\ge C_d \r(0)$ and $R\ge 2^NC_d\r(0)\ge 2r$, and obtain \begin{eqnarray*} \int_{Q_{L}\backslash Q_R}\Big(\fint_{B_{\star}(x)}\vert\nabla u_i\vert^2 \mu_{\xi}\Big)^{\frac{m}{2}}\mathrm{d} x&\stackrel{\eqref{holefillingLp}}\lesssim &R^{d(1-\frac{m}{2})}(\tfrac{r}{R})^{\beta \frac{m}{2}}\Big(\int_{Q_r\setminus Q_{r/2}}\vert g\vert^2 \Big)^{\frac{m}{2}} \\ &\lesssim &R^{d(1-\frac{m}{2})}(\tfrac{r}{R})^{\beta \frac{m}{4}} \Big(\int_{Q_r\setminus Q_{r/2}}(\tfrac{|y|+\r(0)}{R})^\frac \beta 2\vert g(y)\vert^2\mathrm{d} y\Big)^{\frac{m}{2}}. \end{eqnarray*} We then appeal to \eqref{fintintC} (which holds for $r$ since $r=2^i C_d \r(0) \ge 2^i C\r(0)$ by definition of $C_d$), to Jensen's inequality, and to the Lipschitz regularity of $\r$ in form of $\r(x)\le \r(0)+|x|$, and get \begin{eqnarray} \lefteqn{\Big(\int_{Q_{L}\backslash Q_R}\Big(\fint_{B_{\star}(x)}\vert\nabla u_i\vert^2 \mu_{\xi}\Big)^{\frac{m}{2}}\mathrm{d} x\Big)^\frac1m} \nonumber \\ &\stackrel{\eqref{fintintC} }\lesssim& R^{d \frac{2-m}{2m}} (\tfrac rR)^\frac \beta 4 \Big(\int_{Q_{2r}}\fint_{B_{\star}(x)}(\tfrac{|y|+\r(0)}{R})^\frac \beta 2\vert g(y)\vert^2\mathrm{d} y \mathrm{d} x\Big)^{\frac{1}{2}} \nonumber \\ & \le& R^{d \frac{2-m}{2m}} (\tfrac rR)^\frac \beta 4 (2r)^{d(\frac12-\frac1m)}\Big(\int_{Q_{2r}}\Big(\fint_{B_{\star}(x)}(\tfrac{|y|+\r(0)}{R})^\frac \beta 2\vert g(y)\vert^2\mathrm{d} y\Big)^\frac m2 \mathrm{d} x\Big)^{\frac{1}{m}} \nonumber \\ &\lesssim& (\tfrac rR)^{\frac \beta 4 +d\frac{m-2}{2m}} \Big(\int_{Q_{R}}(\tfrac{|x|+\r(0)}{R})^\frac {\beta m}4\Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^\frac m2 \mathrm{d} x\Big)^{\frac{1}{m}} \nonumber\\ &\le & (2^{\frac \beta 4 +d\frac{m-2}{2m}})^{i-N} \Big(\int_{Q_{R}}(\tfrac{|x|+\r(0)}{R})^\frac {\beta m}4\Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^\frac m2 \mathrm{d} x\Big)^{\frac{1}{m}}. \label{holefilling2-1} \end{eqnarray} The claimed estimate \eqref{holefilling2} then follows from \eqref{decompdyadic}, \eqref{holefilling2-2}, and \eqref{holefilling2-1}. \medskip \step4 Proof of \eqref{weightedLp}. \noindent If $L \le 2C_d\r(0)$ (recall that $\r(0)\le L$), then the weight is essentially constant: for all $x\in Q_L$, $\omega_{\varepsilon,r} (x)\simeq (1+\frac{L}{r})^\varepsilon$, and the conclusion~\eqref{weightedLp} is obviously satisfied. In the rest of this step we thus assume that $L>2C_d\r(0)$. Let $2C_d\r(0)< r\le L$ (the case $0<r\le 2C_d\r(0)$ reduces to the case $r=2C_d\r(0)$ by homogeneity). Let $N \in \mathbb{N}$ be such that $2^N C_d\r(0) \le L < 2^{N+1} C_d \r(0)$ and let $N_0 \le N$ be such that $2^{N_0} C_d\r(0) \le r < 2^{N_0+1}C_d\r(0)$. We then have \begin{multline} \int_{Q_L}\omega_{\frac{\varepsilon}{2},r}(x)\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x \,= \,\int_{Q_{2^{N_0}C_d\r(0)}}\omega_{\frac{\varepsilon}{2},r}(x)\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x \\ +\sum_{i=N_0}^{N-1} \int_{Q_{2^{i+1}C_d\r(0)}\setminus Q_{2^{i}C_d\r(0)}}\omega_{\frac{\varepsilon}{2},r}(x)\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x \\ + \int_{Q_{L}\setminus Q_{2^{N}C_d\r(0)}}\omega_{\frac{\varepsilon}{2},r}(x)\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x. \label{e.weight-ant0} \end{multline} We then control each right-hand side term separately.
For the first term, we have $$ \sup_{Q_{2^{N_0}C_d \r(0)}} \omega_{\frac{\varepsilon}{2}, r} \lesssim \omega_{\frac{\varepsilon}{2},r}(0) \lesssim \omega_{\frac{\varepsilon}{2},r}(x) \quad \forall \, x \in Q_L, $$ so that by Theorem~\ref{unweightmeyers} \begin{eqnarray} \int_{Q_{2^{N_0}C_d\r(0)}}\omega_{\frac{\varepsilon}{2},r}(x)\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x &\lesssim& \omega_{\frac{\varepsilon}{2},r}(0) \int_{Q_L} \Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x \nonumber \\ &\lesssim& \int_{Q_L} \omega_{\frac{\varepsilon}{2},r}(x)\Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x. \label{e.weight-ant1} \end{eqnarray} For all $N_0\le i\le N-1$, we combine the bound $\omega_{\frac{\varepsilon}{2}, r}|_{Q_{2^{i+1}C_d \r(0)}\setminus Q_{2^{i}C_d \r(0)}} \, \simeq \,2^{\frac{\varepsilon}{2}(i-N_0)}$ with \eqref{holefilling2} to the effect that (using that $2\varepsilon \le \beta$) \begin{eqnarray} \lefteqn{\Big( \int_{Q_{2^{i+1}C_d\r(0)}\setminus Q_{2^{i}C_d\r(0)}}\omega_{\frac{\varepsilon}{2},r}(x)\Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^\frac1m} \nonumber\\ &\lesssim &2^{\frac{\varepsilon}{2m}(i-N_0)} \Big(\int_{ Q_{L} \setminus Q_{2^{i}C_d\r(0)}} \Big(\fint_{B_{\star}(x)}\vert\nabla u \vert^2\mu_{\xi} \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^\frac1m \nonumber\\ &\stackrel{\eqref{holefilling2},\,2\varepsilon \le \beta}\lesssim&2^{\frac{\varepsilon}{2m}(i-N_0)}\Big(\int_{Q_{L}\setminus Q_{2^{i-2}C_d\r(0)}} \Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^\frac1m \nonumber \\ &&+2^{\frac{\varepsilon}{2m}(i-N_0)} \Big(\int_{Q_{2^{i}C_d\r(0)}}(\tfrac{|x|+\r(0)}{2^{i}C_d\r(0)})^{\frac{\varepsilon m}{2}}\Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^{\frac{1}{m}}. \label{e.weight-ant2} \end{eqnarray} For the first right-hand side term, we use that for all $x \in Q_L\setminus Q_{2^{i-2}C_d \r(0)}$ we have $2^{\frac{\varepsilon}{2}(i-N_0)} \lesssim 2^{-\frac{\varepsilon}{2}(i-N_0)} \omega_{\varepsilon,r}(x)$, so that \begin{equation}\label{e.weight-ant3} 2^{\frac{\varepsilon}{2m}(i-N_0)}\Big(\int_{Q_{L}\setminus Q_{2^{i-2}C_d\r(0)}} \Big(\fint_{B_{\star}(x)}\vert g\vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^\frac1m\,\lesssim\, 2^{-\frac{\varepsilon}{2m}(i-N_0)}\Big(\int_{Q_{L}} \omega_{\varepsilon,r}\Big(\fint_{B_{\star}(x)}\vert g\vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^\frac1m. \end{equation} For the second term, we rather use that for all $x \in Q_{2^{i}C_d \r(0)}$ we have by definition of $N_0$ and since $m\ge 2$ \begin{eqnarray*} 2^{\frac{\varepsilon}{2}(i-N_0)}(\tfrac{|x|+\r(0)}{2^{i}C_d\r(0)})^{\frac{\varepsilon m}{2}} &\lesssim & 2^{\frac{\varepsilon}{2}(i-N_0)}(\tfrac{|x|+\r(0)}{2^{i}C_d\r(0)})^{\varepsilon} \\ &\lesssim & 2^{\frac{\varepsilon}{2}(i-N_0)} 2^{-\varepsilon (i-N_0)} (\tfrac{|x|+\r(0)}{r})^{\varepsilon} \lesssim\, 2^{-\frac{\varepsilon}{2}(i-N_0)} \omega_{\varepsilon,r}(x), \end{eqnarray*} so that \begin{equation}\label{e.weight-ant4} 2^{\frac{\varepsilon}{2m}(i-N_0)} \Big(\int_{Q_{2^{i}C_d\r(0)}}(\tfrac{|x|+\r(0)}{2^{i}C_d\r(0)})^{\frac{\varepsilon m}{2}}\Big(\fint_{B_{\star}(x)}\vert g\vert^2\Big)^{\frac{m}{2}}\mathrm{d} x\Big)^{\frac{1}{m}} \, \lesssim\, 2^{-\frac{\varepsilon}{2m}(i-N_0)}\Big(\int_{Q_{L}} \omega_{\varepsilon,r}\Big(\fint_{B_{\star}(x)}\vert g\vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x\Big)^\frac1m.
\end{equation} Summing \eqref{e.weight-ant2}--\eqref{e.weight-ant4} over $i$ from $N_0$ to $N-1$, we then obtain \begin{equation}\label{e.weight-ant5} \sum_{i=N_0}^{N-1} \int_{Q_{2^{i+1}C_d\r(0)}\setminus Q_{2^{i}C_d\r(0)}}\omega_{\frac{\varepsilon}{2},r}(x)\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)^{\frac{m}{2}}\mathrm{d} x \, \lesssim \, \int_{Q_{L}} \omega_{\varepsilon,r}\Big(\fint_{B_{\star}(x)}\vert g \vert^2 \Big)^{\frac{m}{2}}\mathrm{d} x. \end{equation} Controlling the last right-hand side term of \eqref{e.weight-ant0} the same way, \eqref{weightedLp} follows from \eqref{e.weight-ant1} and \eqref{e.weight-ant5}. \end{proof} \subsection{Control of the Meyers minimal radius: sensitivity estimate and buckling} The main result of this section is the following control of the Meyers minimal radius. \begin{theorem}\label{boundrNLprop} Under Hypothesis~\ref{hypo}, for all $\xi\in\mathbb{R}^d$, there exist an exponent $\gamma>0$ depending on $d$, $\lambda$, and $p$, and a constant $c_{\xi}$ depending additionally on $|\xi|$ (both independent of $L\ge 1$) such that \begin{equation} \expecL{ \exp(c_{\xi} r_{\star,\xi,L}^{\gamma}) } \,\le\,2. \label{boundrNL} \end{equation} \end{theorem} The proof of Theorem~\ref{boundrNLprop} relies on the combination of the following sensitivity estimate (based on the quenched weighted Meyers estimate of Theorem~\ref{largescalereg}) with the Caccioppoli inequality via a buckling argument. \begin{proposition}\label{weakNL} Under Hypothesis~\ref{hypo}, for all $\xi \in \mathbb{R}^d$, denote by $\bar m>2$ the Meyers exponent and by $\delta>0$ and $\beta>0$ the nonlinear and linear hole-filling exponents, respectively (cf.~Theorem~\ref{unweightmeyers}, Lemma~\ref{ctrlavNL}, and Corollary~\ref{Lholefilling}). Then, for all $r\ge 1$ and $0<\tau <1$, the random variable $\mathcal{F}:=\fint_{B_r}\nabla\phi_{\xi}$ satisfies \begin{equation} \expecL{|\mathcal{F}|^{2q}}^{\frac{1}{q}}\lesssim_{\vert\xi\vert}qr^{-d}\expecL{r_{\star,\xi,L}^{\frac{d-\delta}{1-\tau}q}}^{\frac{1-\frac \tau2}{q}} \label{sensimomentboundNL} \end{equation} for all $q\geq 1+\frac{d+1}{\varepsilon}$, where \begin{equation}\label{e-def-eps} \varepsilon\,:=\, (\tfrac \beta 2) \wedge (\tfrac{(d+1)(\bar m-2)}{2}) \wedge (\tfrac{\tau(d-\delta)}{4(1-\tau)}). \end{equation} \end{proposition} We start with the proof of Theorem~\ref{boundrNLprop}, and then turn to the proof of Proposition~\ref{weakNL}. \begin{proof}[Proof of Theorem~\ref{boundrNLprop}] We use the short-hand notation ${\underline r_{\star}}:={\underline r_{\star,\xi,L}}(0,c_1)$ (cf.~\eqref{defr*NL2} and Lemma~\ref{uniformboundr*NL}). We split the proof into two steps. In the first step, we control the probability of the level set $\{{\underline r_{\star}}=R\}$ for all dyadic $R\in [1,L]$ using averages of $\nabla \phi_\xi$, which we combine with Proposition~\ref{weakNL} and the bound $\r \le {\underline r_{\star}}$ to buckle on moments of ${\underline r_{\star}}$, and therefore on $\r$ in the second step. \medskip \step1 We claim that there exist $\theta \in (0,1)$ and $c>0$, depending on $p$, $d$, and $\lambda$, such that for all dyadic $R\in [1,L]$ and all exponents $q\ge 1$, \begin{equation} \mathbb P_L[{\underline r_{\star}} =R]\, \le \, c^q (1+\vert\xi\vert^p)^{-q}\expecL{\Big\vert\fint_{B_{\theta R}}\nabla\phi_{\xi}\Big\vert^{pq}}. \label{estiprobar*} \end{equation} Assume that ${\underline r_{\star}} =R$.
By the definition~\eqref{defr*NL2} of ${\underline r_{\star}}$, we then have \begin{eqnarray} c_2 (1+|\xi|^p)& \ge& \fint_{B_{2R}} \vert \nabla \phi_{\xi}\vert^{p}, \label{boundnabla2} \\ \fint_{B_{\frac R2}} \vert \nabla \phi_{\xi}\vert^{p} &>& c_2 (1+\vert \xi\vert^p). \label{boundnabla1.0} \end{eqnarray} By the Caccioppoli inequality \eqref{cacciopoNLesti},~\eqref{boundnabla1.0} turns into \begin{equation} \inf_{\eta\in\mathbb{R}} \fint_{B_R} \tfrac{1}{R^2} \vert\phi_{\xi}(x)-\eta\vert^2 +\tfrac1{R^p}\vert\phi_{\xi}(x)-\eta\vert^{p} \,\gtrsim\, 1 +\vert \xi\vert^p, \label{boundnabla1.1} \end{equation} which we shall use in the stronger form \begin{equation} \inf_{\eta\in\mathbb{R}} \fint_{B_R}\tfrac1{R^p}\vert\phi_{\xi}(x)-\eta\vert^{p} \,\gtrsim\, 1 +\vert \xi\vert^p. \label{boundnabla1} \end{equation} Indeed, by Jensen's inequality, with the short-hand notation $\alpha:=\inf_{\eta\in\mathbb{R}} \fint_{B_R} \tfrac{1}{R^p} \vert\phi_{\xi}(x)-\eta\vert^p$, \eqref{boundnabla1.1} yields $\alpha^\frac2p+\alpha \gtrsim 1+\vert \xi\vert^p$ so that $\alpha \gtrsim 1$, which implies $\alpha \gtrsim \alpha^\frac 2p$, whence the reformulation~\eqref{boundnabla1}. \medskip Let $\theta\in (0,1)$ (the value of which we shall choose below), and set $c_R:=\fint_{B_R}\fint_{B_{\theta R}(x)}\phi_{\xi}(y)\mathrm{d} y\, \mathrm{d} x$. By the triangle inequality and Poincar\'e's inequality in $L^p(B_{R})$, we obtain \begin{eqnarray} \inf_{\eta\in\mathbb{R}} \fint_{B_R} \tfrac{1}{R^p} \vert\phi_{\xi}-\eta\vert^p &\lesssim &\fint_{B_{R}}\tfrac{1}{R^p}\Big\vert\phi_{\xi}(x)-\fint_{B_{\theta R}(x)}\phi_{\xi}\Big\vert^p\mathrm{d} x+\fint_{B_{R}}\tfrac{1}{R^p}\Big\vert\fint_{B_{\theta R}(x)}\phi_{\xi}-c_R\Big\vert^p\mathrm{d} x\nonumber\\ &\lesssim &\theta^p \fint_{B_{2R}}\vert\nabla\phi_{\xi}(x)\vert^p\mathrm{d} x+\fint_{B_{R}}\Big\vert\fint_{B_{\theta R}(x)}\nabla\phi_{\xi} \Big\vert^p\mathrm{d} x.\label{boundbanbla31} \end{eqnarray} Combined with \eqref{boundbanbla31}, \eqref{boundnabla1} turns into \begin{equation} 1+\vert \xi\vert^p \, \lesssim \, \theta^p \fint_{B_{2R}} \vert\nabla\phi_{\xi}\vert^p+\fint_{B_{R}} \Big\vert\fint_{B_{\theta R}(x)}\nabla\phi_{\xi}\Big\vert^p\mathrm{d} x.\label{boundbanbla3} \end{equation} Using now \eqref{boundnabla2}, we may absorb the first right-hand side term into the left-hand side for $\theta$ small enough (independent of $R$), and therefore conclude that for some $c>0$ (depending only on $d$, $p$, $\lambda$) $$\{{\underline r_{\star}} =R\} \subset \left\{\fint_{B_{R}}\Big\vert\fint_{B_{\theta R}(x)}\nabla\phi_{\xi}\Big\vert^p \mathrm{d} x \ge \frac 1c(1+\vert\xi\vert^p)\right\},$$ which yields \eqref{estiprobar*} by Markov's inequality and the stationarity of $\nabla\phi_{\xi}$. \medskip \step2 Buckling argument. \noindent Fix $\tau := 1-\frac{d-\delta}{d-\frac{\delta}{2}}=\frac\delta{2d-\delta}>0$, to the effect that $$ \frac{d-\delta}{1-\tau}=d-\frac{\delta}{2}, \quad 1-\frac \tau 2= 1-\frac\delta{2(2d-\delta)}, \quad \varepsilon := (\tfrac{(d+1)(\bar m-2)}{2}) \wedge (\tfrac{\delta}8 ). 
$$ (Note that for this choice of $\tau$ the constraint $\varepsilon\le\frac\beta2$ in \eqref{e-def-eps} is automatically satisfied: since $\delta\le d$ and, by the proof of Corollary~\ref{Lholefilling}, $\beta=d(1-\frac1{\bar m})\ge\frac d2$, we have $\frac\delta8\le\frac d8\le\frac\beta2$, so that the three-term minimum reduces to the displayed two-term one.) For all dyadic $1\le R \le L$, by~\eqref{estiprobar*} and by Proposition~\ref{weakNL} with this choice of $\tau$ and $r=\theta R$, we obtain for all $q$ with $q \frac p2 \ge 1+\frac{d+1}\varepsilon$, \begin{equation} \mathbb P_L[{\underline r_{\star}}=R ]\,\le\, c_{\xi}^q q^{\frac{p}{2}q}R^{-d\frac{p}{2}q}\expecL{\r^{(d-\frac{\delta}{2})\frac p2q}}^{ 1-\frac\delta{2(2d-\delta)}} \,\stackrel{\eqref{encadrementrNL}}{\le} \, c_{\xi}^q q^{\frac{p}{2}q}R^{-d\frac{p}{2}q}\expecL{{\underline r_{\star}}^{(d-\frac{\delta}{2})\frac p2q}}^{ 1-\frac\delta{2(2d-\delta)}}. \label{boundproba2} \end{equation} Therefore, using a dyadic decomposition (the sum is actually finite since $\r \le L$), we deduce that (up to changing the value of $c_{\xi}$) \begin{eqnarray*} \expecL{{\underline r_{\star}}^{(d-\frac{\delta}{2})\frac{p}{2}q}}&\leq &1+\sum_{n=1}^{+\infty} (2^{n})^{(d-\frac{\delta}{2})\frac{p}{2}q}\mathbb P_L[ {\underline r_{\star}}=2^n ]\\ &\stackrel{\eqref{boundproba2}}{\le}& 1 + c_{\xi}^qq^{q\frac{p}{2}}\expecL{{\underline r_{\star}}^{(d-\frac{\delta}{2})\frac p2q}}^{ 1-\frac\delta{2(2d-\delta)}}\sum_{n=1}^{+\infty}2^{(d-\frac{\delta}{2})q\frac{p}{2}n}2^{-dq\frac{p}{2}n}\\ &\le& 1+ c_{\xi}^qq^{q\frac{p}{2}}\expecL{{\underline r_{\star}}^{(d-\frac{\delta}{2})\frac p2q}}^{ 1-\frac\delta{2(2d-\delta)}}. \end{eqnarray*} Since both terms of this inequality are finite, this gives by Young's inequality, provided $q \frac p2 \ge 1+\frac{d+1}\varepsilon$, $$ \expecL{{\underline r_{\star}}^{(d-\frac{\delta}{2})\frac{p}{2}q}}^\frac1q \,\lesssim\, c_{\xi}q^{p \frac{2d-\delta}{\delta}}, $$ from which the stretched exponential moment bound~\eqref{boundrNL} follows with $\gamma:=\frac\delta 8$ (cf.~Lemma~\ref{momentexp}); this value of $\gamma$ is not expected to be sharp. \end{proof} We conclude this section with the proof of Proposition~\ref{weakNL}. \begin{proof}[Proof of Proposition~\ref{weakNL}] We split the proof into three steps. In the first step, we compute the functional derivative of $\mathcal{F}$, in the sense of \eqref{deffunctioderivper}, and apply the logarithmic-Sobolev inequality in the second step to control moments of $\mathcal{F}$. In the third step, we then control these moments by suitable moments of $\r$ using the quenched weighted Meyers estimate in the large of Theorem~\ref{largescalereg}. \medskip \step1 Sensitivity calculus. \noindent In this step, we consider a slightly more general version of $\mathcal{F}$ (this will be further used in the proof of Theorem~\ref{th:corrNL}), which we define, for some given $g \in L^2(Q_L)^d$ (extended by periodicity on $\mathbb{R}^d$), by $$ \mathcal{F}:=\int_{Q_L} \nabla \phi_\xi \cdot g. $$ We then argue that for all $x\in Q_L$, \begin{equation} \partial_x\mathcal{F} = \int_{B(x)}\vert \aa(\xi+\nabla\phi_{\xi})\otimes \nabla u\vert, \label{functionalderivesensiNLformula} \end{equation} with the short-hand notation $\aa(\zeta):=(1+|\zeta|^{p-2})\zeta$, where $u$ is the unique weak $Q_L$-periodic solution (with zero average) of \begin{equation} -\nabla\cdot a_{\xi}^*\nabla u=\nabla\cdot g. \label{equationdual} \end{equation} We recall that $a_{\xi}^*$ is bounded from above and below since $a$ is assumed to be smooth, and that it satisfies \eqref{growthconditionlineaxi}, cf.~Lemma~\ref{regestiNL}. Denote by $h$ a sequence of real numbers that goes to zero and by $\delta A$ a coefficient field supported in $B(x)$ (and extended by $Q_L$-periodicity) such that $\|\delta A\|_{L^{\infty}(\mathbb{R}^d)}\leq 1$.
We let $h$ be small enough so that $A+h\delta A$ is uniformly elliptic, and define \begin{eqnarray} \delta^{h}\mathcal{F}&:=&\frac{\mathcal{F}(A+h\delta A)-\mathcal{F}(A)}{h},\nonumber\\ \delta^{h}\phi_{\xi}&:=&\frac{\phi_{\xi}(A+h\delta A )-\phi_{\xi}(A)}{h},\label{functionderivNLnota1}\\ a_{\xi}^{h}&:=&\int_{0}^1 D a(\cdot,\xi+t\nabla\phi_{\xi}(A+h\delta A)+(1-t)\nabla\phi_{\xi}(A))\mathrm{d} t.\label{functionderivNLnota2} \end{eqnarray} By the definition of $\mathcal{F}$, we have $\delta^{h}\mathcal{F}=\int_{Q_L}\nabla\delta^{h}\phi_{\xi}\cdot g$, and we need to characterize $\delta^{h}\phi_{\xi}$. By the defining equation \eqref{e.cor-eq}, we obtain \begin{equation}\label{e.NLgr-1} -\nabla \cdot \Big(a(\cdot, \xi+\nabla \phi_\xi +h \nabla \delta^h \phi_\xi)-a(\cdot,\xi+\nabla \phi_\xi)\Big) \,=\,h \nabla \cdot \delta A \aa(\xi+\nabla \phi_\xi(A+h\delta A)), \end{equation} which we rewrite, by the fundamental theorem of calculus and the definition of $a_{\xi}^{h}$, as \begin{equation} -\nabla\cdot a_{\xi}^h\nabla\delta^h\phi_{\xi}=\nabla\cdot \delta A \aa(\xi+\nabla\phi_{\xi}(A+h\delta A)). \label{sensiNLequation1} \end{equation} Assume for the moment that $\delta^h\phi_{\xi}$ converges weakly in $H^1_{\mathrm{per}}(Q_L)$ to the solution $\delta\phi_{\xi} \in H^1_{\mathrm{per}}(Q_L)$ of \begin{equation} -\nabla\cdot a_{\xi} \nabla\delta\phi_{\xi}=\nabla\cdot \delta A \aa(\xi+\nabla\phi_{\xi}). \label{sensiNLequation1+} \end{equation} Then $\lim_{h\downarrow 0} \delta^h \mathcal{F} = \delta \mathcal{F}=\int_{Q_L} g \cdot \nabla\delta\phi_{\xi}$, which we now rewrite by duality. Testing \eqref{equationdual} with $\delta \phi_{\xi}$ and then \eqref{sensiNLequation1+} with $u$, we obtain $$ \delta \mathcal{F} = \int_{Q_L} \nabla u \cdot \delta A \,\aa(\xi+\nabla \phi_\xi), $$ and the claim \eqref{functionalderivesensiNLformula} follows by taking the supremum over $\delta A$. It remains to argue in favor of the convergence of $\delta^h\phi_{\xi}$ to $\delta\phi_{\xi}$, which actually holds in $C^{1,\alpha}(Q_L)$. First, recall that $\{\phi_\xi(A+h\delta A)\}_{h}$ is a bounded set in $C^{1,\alpha}(Q_L)$ by Lemma~\ref{regestiNL}. By testing \eqref{e.NLgr-1} with $h\delta^h \phi_\xi$, we obtain by monotonicity $$ \int_{Q_L} |\nabla \phi_\xi(A+h\delta A)-\nabla \phi_\xi(A)|^2 + |\nabla \phi_\xi(A+h\delta A)-\nabla \phi_\xi(A)|^p \lesssim h^2, $$ so that $\nabla \phi_\xi(A+h\delta A) \to \nabla \phi_\xi(A)$ in $L^p(Q_L)$, and therefore $\phi_\xi(A+h\delta A) \to \phi_\xi(A)$ in $C^{1, \alpha}(Q_L)$ by the Arzel\`a-Ascoli theorem, as claimed. \medskip \step2 Application of the logarithmic-Sobolev inequality: For all $q\ge 1$, \begin{equation}\label{e.sensi-estimNL-ant} \expecL{\mathcal{F}^{2q}}^{\frac{1}{q}}\,\lesssim\, q(1+\vert\xi\vert^p)\expecL{\Big(\int_{Q_L}\r(x)^{d-\delta}\Big(\int_{B(x)}\vert\nabla u\vert^2\mu_{\xi}\Big) \mathrm{d} x\Big)^q}^{\frac{1}{q}}. \end{equation} Since $\mathbb E_L[{\nabla \phi_\xi}]=0$, by \eqref{SGinegp1} and \eqref{functionalderivesensiNLformula}, we have for all $q\ge 1$ \begin{equation} \expecL{\mathcal{F}^{2q}}^{\frac{1}{q}}\, \lesssim \,q \expecL{\Big(\int_{Q_L}\Big(\int_{B(x)}\vert \aa(\xi+\nabla\phi_{\xi})\vert\,\vert \nabla u\vert \Big)^2\mathrm{d} x \Big)^q}^{\frac{1}{q}}.
\label{SGsensitibityNL} \end{equation} By Cauchy-Schwarz' inequality, the definition~\eqref{e.def-mu} of $\mu_\xi$, and \eqref{controlunitball}, we have for all $x\in Q_L$ \begin{eqnarray*} \lefteqn{\Big(\int_{B(x)}\vert \aa(\xi+\nabla\phi_{\xi})\vert\,\vert \nabla u\vert\Big)^2}\\ &\lesssim &\int_{B(x)}\vert\xi+\nabla\phi_{\xi}\vert^2(1+\vert\xi+\nabla\phi_{\xi}\vert^{p-2}) \int_{B(x)}\vert\nabla u\vert^2\mu_{\xi} \\ &\stackrel{\eqref{controlunitball}}{\lesssim}& (1+\vert\xi\vert^p)\r(x)^{d-\delta}\int_{B(x)}\vert\nabla u\vert^2\mu_{\xi}. \end{eqnarray*} The claim \eqref{e.sensi-estimNL-ant} then follows in combination with~\eqref{SGsensitibityNL}. \medskip \step3 Proof of \eqref{sensimomentboundNL}. \noindent For $0<\tau <1$ given, we define $\varepsilon$ as in \eqref{e-def-eps}, and set $m:=2+\frac{2\varepsilon}{d+1}$, to the effect that $m\le \bar m$ and $\frac{2\varepsilon}{m-2}=d+1$. Since $\r$ is $\frac{1}{16}$-Lipschitz and $\r \ge 1$, we have $$ \int_{Q_L}\r(x)^{d-\delta}\Big(\fint_{B(x)}\vert\nabla u\vert^2\mu_{\xi}\Big) \mathrm{d} x \,\lesssim\, \int_{Q_L} \Big(\fint_{B(x)}\r^{d-\delta}\vert\nabla u\vert^2\mu_{\xi} \Big) \mathrm{d} x, $$ so that by \eqref{fintintC} combined with the estimate $\r \le L$, with periodicity, and using again the Lipschitz-continuity of $\r$, we obtain $$ \int_{Q_L}\r(x)^{d-\delta}\Big(\fint_{B(x)}\vert\nabla u\vert^2\mu_{\xi} \Big) \mathrm{d} x \,\lesssim\, \int_{Q_L}\r(x)^{d-\delta} \Big(\fint_{B_{\star}(x)} \vert\nabla u\vert^2\mu_{\xi} \Big) \mathrm{d} x. $$ Inserting the weight $(1+\frac{|x|}{r})^\frac{2\varepsilon}{m}(1+\frac{|x|}{r})^{-\frac{2\varepsilon}{m}}$, and using H\"older's inequality in space with exponents $(\frac m2,\frac{m}{m-2})$ followed by H\"older's inequality in probability with exponents $(\frac1{1-\tau},\frac1\tau)$, \eqref{e.sensi-estimNL-ant} turns into \begin{align} &{\frac{1}{q(1+|\xi|^p)} \expecL{\mathcal{F}^{2q}}^{\frac{1}{q}}} \label{term0} \\ \lesssim\,& \expecL{\Big(\int_{Q_L}(1+\tfrac{|x|}{r})^{-d-1}\r(x)^{\frac{m}{m-2}(d-\delta)}\, \mathrm{d} x\Big)^{q\frac{m-2}{m}} \Big(\int_{Q_L}(1+\tfrac{|x|}{r})^{\varepsilon}\Big(\fint_{B_{\star}(x)}|\nabla u|^2\mu_{\xi}\Big)^{\frac{m}{2}} \mathrm{d} x\Big)^{q\frac2m}}^\frac1q \nonumber\\ \le \,& \expecL{\Big(\int_{Q_L}(1+\tfrac{|x|}{r})^{-d-1}\r(x)^{\frac{m}{m-2}(d-\delta)}\, \mathrm{d} x\Big)^{q\frac{m-2}{m(1-\tau)}}}^{\frac{1-\tau}{q}}\expecL{\Big(\int_{Q_L}(1+\tfrac{|x|}{r})^{\varepsilon}\Big(\fint_{B_{\star}(x)}|\nabla u|^2\mu_{\xi} \Big)^{\frac{m}{2}} \mathrm{d} x\Big)^{q\frac{2}{m\tau}}}^{\frac{\tau}{q}}.\nonumber \end{align} By the change of variables $\frac x r \leadsto x$, Jensen's inequality in space provided $q\geq \frac{m}{m-2} = 1+\frac{d+1}\varepsilon$, and the stationarity of $\r$, we control the first right-hand side term of \eqref{term0} by \begin{equation} \expecL{\Big(\int_{Q_L}(1+\tfrac{|x|}{r})^{-d-1} \r(x)^{\frac{m}{m-2}(d-\delta)}\, \mathrm{d} x\Big)^{q\frac{m-2}{m(1-\tau)}}}^{\frac{1-\tau}{q}}\, \lesssim \, r^{d\frac{m-2}{m}}\expecL{ \r^{q\frac{d-\delta}{1-\tau}}}^{\frac{1-\tau}{q}}. \label{firstterm} \end{equation} For the second right-hand side term of \eqref{term0}, we appeal to the quenched weighted Meyers estimate~\eqref{weightedLp}, which we may apply to equation~\eqref{equationdual} (rewriting the right-hand side as $\frac{1}{\sqrt{\mu_{\xi}}}g\sqrt{\mu_{\xi}}$) with weight $\omega_{\varepsilon,r}$ since $\varepsilon \le \frac \beta 2$.
By stationarity of $\r$, this yields \begin{eqnarray} \expecL{\Big(\int_{Q_L}(1+\tfrac{|x|}{r})^{\varepsilon}\Big(\fint_{B_{\star}(x)}|\nabla u|^2\mu_{\xi}\Big)^{\frac{m}{2}} \mathrm{d} x\Big)^{\frac{2}{m\tau}q}}^{\frac{\tau}{q}} &\lesssim_{\vert\xi\vert}&\expecL{\Big(\int_{Q_L}\omega_{2\varepsilon,r}(x)\Big(\fint_{B_{\star}(x)}| g|^2 \tfrac{1}{\mu_{\xi}}\Big)^{\frac{m}{2}} \mathrm{d} x\Big)^{\frac{2}{m\tau}q}}^{\frac{\tau}{q}}\nonumber\\ &\lesssim& r^{-2d+\frac{2}{m}d}\expecL{ \r^{\frac{4\varepsilon}{m\tau}q}}^{\frac{\tau}{q}}\label{secondterm}, \end{eqnarray} where we used that $g=|B_r|^{-1} \mathds1_{B_r}$, that $\mu_\xi \ge 1$, and~\eqref{fintintC}. By \eqref{e-def-eps} and our choice $m=2+\frac{2 \varepsilon}{d+1}$, $\frac{4\varepsilon}{m\tau}\leq \frac{d-\delta}{1-\tau}$, to the effect that $\expecL{\r^{\frac{4\varepsilon}{m\tau}q}}^{\frac{\tau}{q}}\leq \expecL{\r^{q\frac{d-\delta}{1-\tau}}}^{\frac{4\varepsilon(1-\tau)}{q m(d-\delta)}}$ by H\"older's inequality. Using \eqref{e-def-eps} again, this time in form of $\frac{4\varepsilon(1-\tau)}{m(d-\delta)}\le \frac \tau 2$, and the lower bound $\r \ge 1$, \eqref{secondterm} turns into \begin{equation} \expecL{\Big(\int_{Q_L}(1+\tfrac{|x|}{r})^{\varepsilon}\Big(\fint_{B_{\star}(x)}|\nabla u|^2\mu_{\xi}\Big)^{\frac{m}{2}} \mathrm{d} x\Big)^{\frac{2}{m\tau}q}}^{\frac{\tau}{q}}\, \lesssim_{\vert\xi\vert}\, r^{-2d+\frac{2}{m}d}\expecL{\r^{q\frac{d-\delta}{1-\tau}}}^{\frac{\tau}{2q}}\label{secondterm+}. \end{equation} The claim \eqref{sensimomentboundNL} then follows from \eqref{term0}, \eqref{firstterm} and \eqref{secondterm+}. \end{proof} \subsection{Annealed Meyers' estimate} The annealed Meyers (or perturbative Calder\'on-Zygmund) estimates recently introduced by Duerinckx and Otto in \cite{DO-20} (see also \cite{josien2020annealed}) constitute a very versatile upgrade of their quenched counterpart in stochastic homogenization. In the present setting the annealed Meyers estimates take the following form. \begin{theorem}[Annealed Meyers' estimate]\label{th:annealedmeyers} Under Hypothesis~\ref{hypo}, for all $\xi \in \mathbb{R}^d$, with $\kappa := \frac{(\bar m -2)\wedge 1}{8}>0$ (where $\bar m$ is the Meyers exponent of Theorem~\ref{unweightmeyers}), for all $Q_L$-periodic random fields $g$ and $u$ related via \eqref{LSMequationu}, we have for all exponents $2-\kappa \le q,m \le 2+\kappa$ and $0< \delta \le \frac12$, \begin{equation} \int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)^{\frac{q}{2}}}^\frac mq dx\lesssim_{\vert\xi\vert}\, \delta^{-\frac14} |\log \delta |^\frac12\int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert g \vert^2\Big)^{\frac{q(1+\delta)}{2}}}^\frac m{q(1+\delta)} dx. \label{annealedmeyers} \end{equation} The same result holds with $a_{\xi}$ replaced by $a_{\xi}^*$ (the pointwise transpose field). \end{theorem} The proof is based on the quenched Meyers estimate in the large of Theorem~\ref{unweightmeyers}, on the moment bounds of Theorem~\ref{boundrNLprop} on the Meyers minimal radius (which allows us to use duality at the price of a loss of stochastic integrability), real interpolation, and the following refined dual version of the Calder\'on-Zygmund lemma due to Shen~\cite[Theorem~3.2]{Shen-07}, based on ideas by Caffarelli and Peral~\cite{CP-98}. \begin{lemma}[\cite{CP-98,Shen-07}]\label{lem:Shen} Given $1\le q<m\le\infty$, let $F,G\in L^{q}\cap L^{m}(Q_L)$ be nonnegative $Q_L$-periodic functions and let $C_0>0$. 
Assume that for all balls $D$ (of radius $\lesssim L$) there exist measurable functions $F_{D,1}$ and $F_{D,2}$ such that $F\le F_{D,1}+F_{D,2}$ and $F_{D,2}\le F+F_{D,1}$ on $D$, and such that \begin{equation*} \Big(\fint_{D}F_{D,1}^{q}\Big)^\frac1{q}\,\le\,C_0\Big(\fint_{C_0D}G^{q}\Big)^\frac1{q},\quad \Big(\fint_{\frac1{C_0}D}F_{D,2}^{m}\Big)^\frac1{m}\,\le\,C_0\Big(\fint_{D}F_{D,2}^{q}\Big)^\frac1{q}. \end{equation*} Then, for all $q<s<m$, \[\Big(\int_{Q_L}F^s\Big)^\frac1s\,\lesssim_{C_0,q,s,m}\,\Big(\int_{Q_L}G^s\Big)^\frac1s.\] \end{lemma} Before we prove Theorem~\ref{th:annealedmeyers}, let us note that Theorem~\ref{boundrNLprop} allows one to pass from averages on $B_{\star}(x)$ to averages on $B(x)$ using \cite[Lemma~6.7]{DO-20} in the (slightly more general) form of \begin{lemma}\label{lem:postproc} Let $\r$ be a stationary random field satisfying $\expecL{\exp(c \r^\alpha)}\le 2$ for some $\alpha>0$ and $c\simeq 1$. Set $B_{\star}(x):=B_{\r(x)}(x)$ for all $x \in Q_L$. For all $f\in C^\infty_{\mathrm{per}}(Q_L;L^\infty(\Omega))$ and $1\le q_1\le q_2<\infty$, we have \begin{itemize} \item[(i)] for all $r>q_1$, \begin{equation*} \Big(\int_{Q_L}\mathbb{E}_L\Big[\Big(\fint_{B_{\star}(x)}|f|^2\Big)^\frac{q_1}2\Big]^\frac{q_2}{q_1}dx\Big)^\frac1{q_2} \,\lesssim\,(\tfrac1{q_1}-\tfrac1r)^{-(\frac{1}{q_1}-\frac{1}2)_+}\zeta(\tfrac1{q_1}-\tfrac1r)^{\frac{1}{q_1}-\frac1{q_2}}\Big(\int_{Q_L}\mathbb{E}_L\Big[\Big(\fint_{B(x)}|f|^2\Big)^\frac{r}2\Big]^\frac{q_2}{r}\Big)^\frac1{q_2}; \end{equation*} \item[(ii)] for all $r<q_1$, \begin{equation*} \Big(\int_{Q_L}\mathbb{E}_L\Big[\Big(\fint_{B_{\star}(x)}|f|^2\Big)^\frac{q_1}2\Big]^\frac{q_2}{q_1}dx\Big)^\frac1{q_2} \,\gtrsim\,(\tfrac1r-\tfrac1{q_1})^{(\frac12-\frac1{q_2})_+}\zeta(\tfrac1r-\tfrac1{q_1})^{-(\frac1{q_1}-\frac1{q_2})}\Big(\int_{Q_L}\mathbb{E}_L\Big[\Big(\fint_{B(x)}|f|^2\Big)^\frac{r}2\Big]^\frac{q_2}{r}\Big)^\frac1{q_2}; \end{equation*} \end{itemize} where we have set $\zeta(t):=\log(2+\frac1t)$, and the multiplicative constants depend on $q_1,q_2,\alpha$. \end{lemma} The proof of this result is identical to that of \cite[Lemma~6.7]{DO-20}, noting that the assumption $\expecL{\exp(\frac1C \r^d)}\le 2$ can be weakened to $\expecL{\exp(\frac1C \r^\alpha)}\le 2$ for any $\alpha>0$ at the price of adding a dependence on $\alpha$ in the multiplicative factors in the estimates, and $\mathbb{R}^d$ can be replaced by $Q_L$. \begin{proof}[Proof of Theorem~\ref{th:annealedmeyers}] We split the proof into three steps. In the first step, we upgrade Theorem~\ref{unweightmeyers} by adding expectations using Lemma~\ref{lem:Shen} in a suitable way. At the price of a loss of stochastic integrability we then remove the local averages at scale $\r$ in Step~2 by using Lemma~\ref{lem:postproc}. The formulation with local averages at unit scale allows us to conclude using a standard duality argument, and real interpolation. \medskip \step1 Proof that for all $2 \le q < m < \bar m$, we have \begin{equation} \int_{Q_L}\expecL{\Big(\fint_{B_{\star}(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)^{\frac{q}{2}}}^\frac mq dx\lesssim_{\vert\xi\vert} \int_{Q_L}\expecL{\Big(\fint_{B_{5\star}(x)}\vert g \vert^2\Big)^{\frac{q}{2}}}^\frac m{q} dx \label{annealed-1.1} \end{equation} with the short hand notation $B_{5\star}(x):=B_{5\r(x)}(x)$. \noindent Let $2\le q_1 \le m_1 \le \bar m$. 
Let $D$ be a ball centered at $x \in Q_L$ and of radius $0<r_D \lesssim L$, we define $D_{\star}:=B_{r_D \vee (2\r(x))}(x)$, and let $N$ be the smallest integer so that $D_{\star} \subset Q_{NL}$. We then decompose $u$ as $u=u_{D,1}+u_{D,2}$, where $u_{D,1}$ is the $Q_{NL}$-periodic solution of $-\nabla\cdot a_\xi \nabla u_{D,1}=\nabla \cdot g \sqrt{\mu_\xi} \mathds 1_{D_{\star}}$. Note that $u_{D,2}$ is $a_\xi$-harmonic on $D_{\star}$. We start with the control of $u_{D,1}$ and claim that \begin{equation}\label{annealed-1.2} \int_D \expecL{\Big(\fint_{B_{\star}(y)}|\nabla u_{D,1}|^2 \mu_\xi\Big)^\frac{q_1}{2}}\mathrm{d} y \,\lesssim\, \expecL{\int_{8D} \Big(\fint_{B_{5\star}(y)} |g|^2 \Big)^\frac{q_1}{2}\mathrm{d} y}. \end{equation} Assume first that $r_D \ge 2\r(x)$, so that $D_{\star}=D$. By taking the expectation in Theorem~\ref{unweightmeyers}, we have \begin{multline*} \int_D \expecL{\Big(\fint_{B_{\star}(y)}|\nabla u_{D,1}|^2 \mu_\xi\Big)^\frac{q_1}{2}}\mathrm{d} y \,\le\, \expecL{\int_{Q_{NL}} \Big(\fint_{B_{\star}(y)}|\nabla u_{D,1}|^2 \mu_\xi\Big)^\frac{q_1}{2}\mathrm{d} y} \\ \stackrel{\eqref{unweightedmeyers}}\lesssim \, \expecL{\int_{Q_{NL}} \Big(\fint_{B_{\star}(y)} |g|^2 \mathds 1_{D}\Big)^\frac{q_1}{2}\mathrm{d} y}. \end{multline*} By the $\frac1{16}$-Lipschitz property of $\r$, we have the implication for all $z \in B_{\star}(y)$ \begin{align*} |y-x|\ge 2r_D \,\implies \, |z-x|\ge |y-x|-\r(y) & \ge |y-x|-(\r(x)+\tfrac1{16}|y-x|) \\ &\ge \frac{15}{16}|y-x|-\r(x) \ge (\tfrac{15}{8}-\tfrac12)r_D \ge r_D \, \implies \, \mathds{1}_D(z)=0, \end{align*} so that \eqref{annealed-1.2} follows for $r_D \ge 2\r(x)$ in the stronger form \begin{equation*} \int_D \expecL{\Big(\fint_{B_{\star}(y)}|\nabla u_{D,1}|^2 \mu_\xi\Big)^\frac{q_1}{2}}\mathrm{d} y \,\lesssim\, \expecL{\int_{2D} \Big(\fint_{B_{\star}(y)} |g|^2 \Big)^\frac{q_1}{2}\mathrm{d} y}. \end{equation*} If $r_D \le 2 \r(x)$, then $\sup_{D} \r \lesssim \inf_D \r$, $D_{\star}=B_{2\r(x)}(x)=:B_{2\star}(x)$, and a plain energy estimate yields \begin{multline*} \int_D \expecL{\Big(\fint_{B_{\star}(y)}|\nabla u_{D,1}|^2 \mu_\xi\Big)^\frac{q_1}{2}}\mathrm{d} y \,\lesssim\, \expecL{|D| \r(x)^{-d\frac {q_1}2}\Big(\int_{Q_{NL}} |\nabla u_{D,1}|^2 \mu_\xi\Big)^\frac{q_1}{2}} \\ \lesssim \, \expecL{|D| \r(x)^{-d\frac {q_1}2}\Big(\int_{D_{\star}} |g|^2 \Big)^\frac{q_1}{2}} \,\lesssim\, \expecL{|D| \Big(\fint_{B_{2\star}(x)} |g|^2 \Big)^\frac{q_1}{2}} , \end{multline*} and it remains to turn the right-hand side into an integral over $D$. For all $y \in D$, we have $\r(y)\ge \r(x)-\frac{1}{16} r_D \ge \frac78 \r(x)$, and therefore for all $z\in B_{2\star}(x)$, $|z-y|\le |z-x|+|x-y|\le 4\r(x) \le 5\r(y)$, to the effect that $B_{2\star}(x) \subset B_{5\star}(y)$. Recalling that $\sup_{D} \r \lesssim \inf_D \r$, this implies the following stronger form of~\eqref{annealed-1.2} \begin{equation*} \int_D \expecL{\Big(\fint_{B_{\star}(y)}|\nabla u_{D,1}|^2 \mu_\xi\Big)^\frac{q_1}{2}}\mathrm{d} y \lesssim \, \expecL{ \int_D \Big(\fint_{B_{5\star}(y)} |g|^2 \Big)^\frac{q_1}{2}\mathrm{d} y} . \end{equation*} We now turn to the control of $u_{D,2}$, and claim that \begin{equation}\label{annealed-1.2b} \Big(\fint_{\frac18D} \expecL{\Big(\fint_{B_{\star}(y)}|\nabla u_{D,2}|^2 \mu_\xi\Big)^\frac{q_1}{2}}^\frac{m_1}{q_1}\mathrm{d} y \Big)^\frac1{m_1} \,\lesssim\, \Big(\fint_{D} \expecL{\Big(\fint_{B_{\star}(y)} |\nabla u_{D,2}|^2 \mu_\xi \Big)^\frac{q_1}{2}}\mathrm{d} y\Big)^\frac{1}{q_1}. 
\end{equation} The starting point is the Minkowski inequality: Since $\frac{m_1}{q_1}\ge 1$, \begin{equation}\label{annealed-1.3} \Big(\fint_{\frac18D} \expecL{\Big(\fint_{B_{\star}(y)}|\nabla u_{D,2}|^2 \mu_\xi\Big)^\frac{q_1}{2}}^\frac{m_1}{q_1}\mathrm{d} y \Big)^\frac1{m_1} \,\le\, \expecL{\Big(\fint_{\frac18D} \Big(\fint_{B_{\star}(y)}|\nabla u_{D,2}|^2 \mu_\xi\Big)^\frac{m_1}{2}\mathrm{d} y\Big)^\frac{q_1}{m_1}}^\frac{1}{q_1}. \end{equation} We then appeal to the local Meyers estimate~\eqref{unweightedmeyerslocal} to bound the right-hand side \begin{equation*} \fint_{\frac18D}\Big(\fint_{B_{\star}(y)} |\nabla u_{D,2}|^2\mu_{\xi}\Big)^{\frac{m_1}{2}}\mathrm{d} y \, \lesssim_{\vert\xi\vert}\, \Big(\fint_{\frac14D}\Big(\fint_{B_{\star}(y)} |\nabla u_{D,2}|^2\mu_{\xi}\Big) \,\mathrm{d} y\Big)^{\frac{m_1}{2}} +\fint_{\frac14D}\Big(\fint_{B_{\star}(y)}| g|^2 (1-\mathds{1}_{D_{\star}})\Big)^{\frac{m_1}{2}}\mathrm{d} y. \end{equation*} Since for all $y \in \frac14D$, one has $\r(y)\le \r(x)+\frac{1}{16} \frac14 r_D \le \frac34 (r_D \vee (2\r(x)))$, $B_{\star}(y) \subset D_{\star}$ and the second right hand side term vanishes identically. Combined with \eqref{annealed-1.3} and Jensen's inequality in space (using that $\frac{q_1}2\ge 1$), this entails \begin{eqnarray*} \Big(\fint_{\frac18D} \expecL{\Big(\fint_{B_{\star}(y)}|\nabla u_{D,2}|^2 \mu_\xi\Big)^\frac{q_1}{2}}^\frac{m_1}{q_1}\mathrm{d} y \Big)^\frac1{m_1} &\le& \expecL{\Big(\fint_{\frac14D}\Big(\fint_{B_{\star}(y)} |\nabla u_{D,2}|^2\mu_{\xi}\Big) \,\mathrm{d} y\Big)^\frac{q_1}{2}}^\frac{1}{q_1} \nonumber \\ &\le &\expecL{\fint_{\frac14D}\Big(\fint_{B_{\star}(y)} |\nabla u_{D,2}|^2\mu_{\xi}\Big)^\frac{q_1}{2} \,\mathrm{d} y}^\frac{1}{q_1}, \end{eqnarray*} from which \eqref{annealed-1.2b} follows. \noindent We are in the position to conclude. Setting $F:x \mapsto \expecL{\Big(\fint_{B_{\star}(x)} |\nabla u|^2\mu_\xi\Big)^\frac{q_1}{2}}^\frac1{q_1}$, $G:x\mapsto \expecL{\Big(\fint_{B_{5\star}(x)} |g|^2\Big)^\frac{q_1}2}^\frac{1}{q_1}$, $F_{D,1}:x \mapsto \expecL{\Big(\fint_{B_{\star}(x)}|\nabla u_{D,1}|^2 \mu_\xi \Big)^\frac{q_1}{2}}^\frac1{q_1}$, and $F_{D,2}:x \mapsto \expecL{\Big(\fint_{B_{\star}(x)}|\nabla u_{D,2}|^2 \mu_\xi \Big)^\frac{q_1}{2}}^\frac1{q_1}$, the assumptions of Lemma~\ref{lem:Shen}, and the claimed estimate \eqref{annealed-1.1} follows. \medskip \step2 Reformulation of \eqref{annealed-1.1}. \noindent Since both $\r$ and $5\r$ satisfy stretched exponential moment bounds, Lemma~\ref{lem:postproc} allows us to reformulate~\eqref{annealed-1.1} as: For all $2 \le q < m < \bar m$ and $0<r \le \frac 12$, \begin{equation} \int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)^{\frac{q}{2}}}^\frac m{q} \mathrm{d} x\lesssim_{\vert\xi\vert,r} r^{-\frac{m-2}{2m}} |\log r |^\frac{2(m-q)}{qm}\int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert g \vert^2\Big)^{\frac{q+r}{2}}}^\frac m{q+r} \mathrm{d} x. \label{annealed-2.1} \end{equation} \medskip \step3 Proof of \eqref{annealedmeyers}. \noindent First, we show that for all $\bar m' <m< q \le 2$ and $0<r \ll 1$, \begin{equation} \int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)^{\frac{q}{2}}}^\frac m{q} \mathrm{d} x\lesssim_{\vert\xi\vert} r^{-\frac{2-m}{2m}} |\log r |^\frac{2(q-m)}{qm} \int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert g \vert^2\Big)^{\frac{q-r}{2}}}^\frac m{q-r} \mathrm{d} x. 
\label{annealed-3.1} \end{equation} Indeed, by duality we have \begin{equation*} \Big(\int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert\nabla u\vert^2\mu_{\xi}\Big)^{\frac{q}{2}}}^\frac m{q} \mathrm{d} x\Big)^\frac1m\, \,=\, \sup_{h}\Big\{\expecL{\int_{Q_L}\nabla u \cdot h \sqrt{\mu_\xi}} \Big\}, \end{equation*} where the supremum runs over maps $h \in C^\infty_{\mathrm{per}}(Q_L,L^\infty(d\mathbb P_L))^d$ such that $\int_{Q_L}\expecL{\Big(\fint_{B(x)}|h|^2\Big)^{\frac{q'}{2}}}^\frac {m'}{q'} \mathrm{d} x =1$. For such $h$, denote by $v_h$ the unique $Q_L$-periodic solution of $ -\nabla \cdot a_\xi^* \nabla v_h =\nabla \cdot (h\sqrt{\mu_\xi}). $ Testing this equation with $u$ and the defining equation \eqref{LSMequationu} for $u$ by $v_h$, we obtain (using periodicity in the last equality) $$ \int_{Q_L}\nabla u \cdot h \sqrt{\mu_\xi}=\int_{Q_L}\nabla v_h \cdot g \sqrt{\mu_\xi} \,=\, \int_{Q_L} \Big(\fint_{B(x)} \nabla v_h \cdot g \sqrt{\mu_\xi} \Big)\mathrm{d} x. $$ By Cauchy-Schwarz' inequality on $B(x)$, followed by H\"older's inequality with exponents $(q-r,\frac{q-r}{q-r-1})$ on $Q_L$ and with exponent $(m,m')$ in probability, this yields \begin{equation*} {\Big|\expecL{\int_{Q_L}\nabla v_h \cdot g \sqrt{\mu_\xi}}\Big|} \,\le \,\Big(\int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert g \vert^2\Big)^{\frac{q-r}{2}}}^\frac m{q-r} \mathrm{d} x\Big)^\frac1m\Big(\int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert \nabla v_h \vert^2 \mu_\xi\Big)^{\frac{(q-r)'}{2}}}^\frac {m'}{(q-r)'} \mathrm{d} x\Big)^\frac1{m'}. \end{equation*} Since $(q-r)'-q'=\frac{r}{(q-1)(q-1-r)}$, we may apply \eqref{annealed-2.1} to $\nabla v_h$ to the effect that \begin{eqnarray*} \lefteqn{\Big|\expecL{\int_{Q_L}\nabla v_h \cdot g \sqrt{\mu_\xi}}\Big|} \\ &\lesssim& r^{-\frac{m'-2}{2m'}} |\log r |^\frac{2(m'-q')}{q'm'} \Big(\int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert g \vert^2\Big)^{\frac{q-r}{2}}}^\frac m{q-r} \mathrm{d} x\Big)^\frac1m\Big(\int_{Q_L}\expecL{\Big(\fint_{B(x)}|h|^2\Big)^{\frac{q'}{2}}}^\frac {m'}{q'} \mathrm{d} x\Big)^\frac1{m'}, \end{eqnarray*} from which \eqref{annealed-3.1} follows by the arbitrariness of $h$ and the identities $\frac{m'-2}{2m'}=\frac{2-m}{2m}$ and $\frac{2(m'-q')}{q'm'} =\frac{2(q-m)}{qm}$. \medskip Replacing $r$ by $qr$ in \eqref{annealed-2.1} and \eqref{annealed-3.1}, and using the bounds $\frac{m-2}{2m} \le \frac14$ and $\frac{2(m-q)}{qm} \le \frac 12$ for $2\le q \le m \le 3$ and $\frac{2-m}{2m} \le \frac14$ and $\frac{2(q-m)}{qm} \le \frac 12$ for $\frac 32 \le m \le q \le 2$, we have thus proved that \eqref{annealedmeyers} holds for all $2 \le q < m < \bar m \wedge 3$ and for all $\bar m' \vee \frac32<m< q \le 2$. By choosing $\kappa=\frac{(\bar m-2)\wedge 1}8$, the validity of \eqref{annealedmeyers} in the full range of exponents $2-\kappa \le m,q\le 2+\kappa$ then follows by real interpolation. \end{proof} We conclude this subsection by the annealed version of the maximal regularity for the Laplacian. \begin{theorem}\label{th:annealed-lap} Let $L\ge 1$. For all $Q_L$-periodic random fields $g$ and $u$ related via \begin{equation*} -\triangle u=\nabla \cdot g, \end{equation*} we have for all exponents $1<m,q<\infty$, \begin{equation} \Big(\int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert\nabla u\vert^2 \Big)^{\frac{q}{2}}}^\frac mq \mathrm{d} x \Big)^\frac1m \lesssim_{m,q} \, \Big( \int_{Q_L}\expecL{\Big(\fint_{B(x)}\vert g \vert^2\Big)^{\frac{q }{2}}}^\frac m{q} \mathrm{d} x\Big)^\frac1m. 
\label{annealedmeyers-lap} \end{equation} \end{theorem} A proof of this result can be found in \cite[Section~7.1]{josien2020annealed}. A simpler argument (based on CZ estimates for Hilbert-valued operators and interpolation) would show that the multiplicative constant in \eqref{annealedmeyers-lap} is of the order $m+m'+q+q'$ (this finer result will not be used here). \section{Control of correctors: Proof of Theorem~\ref{th:corrNL}} The proof relies on the following upgrade of Proposition~\ref{weakNL} based on Theorem~\ref{boundrNLprop} and on Theorem~\ref{th:annealedmeyers}. \begin{corollary}\label{cor:average-per-NL} Under Hypothesis~\ref{hypo}, there exists $\gamma>0$ such that for all $\xi \in \mathbb{R}^d$, $L \ge 1$, and all $g \in L^2(\mathbb{R}^d)$ compactly supported in $Q_L$, the random field $\mathcal{F}:=\int_{Q_L}g(\nabla\phi_\xi,\nabla \sigma_\xi)$ satisfies for all $q \ge 1$ \begin{equation}\label{e.cor:average-per-NL} \expecL{|\mathcal{F}|^{2q}}^{\frac{1}{q}}\lesssim_{|\xi|} q^\gamma \int_{Q_L}|g|^2. \end{equation} \end{corollary} For future reference, we state the following consequence of local regularity and of the hole-filling estimate. \begin{lemma}\label{lem:supnablaphi} Under Hypothesis~\ref{hypo}, with $0<{\delta}\le d$ the nonlinear hole-filling exponent of Lemma~\ref{ctrlavNL}, we have for all $\xi \in \mathbb{R}^d$ and $x\in \mathbb{R}^d$ \begin{equation} \|\xi+\nabla\phi_{\xi}\|_{\text{C}^{\alpha}(B(x))}\, \lesssim \, (1+|\xi|) (\r(x))^{\frac{d-{\delta}}{p}}. \label{deterboundproof1} \end{equation} \end{lemma} \begin{proof} By the deterministic regularity theory of Lemma~\ref{regestiNL} applied to the equation~\eqref{e.cor-eq} combined with the estimate \eqref{controlunitball}, we indeed have \begin{align*} \|\xi+\nabla\phi_{\xi}\|_{\text{C}^{\alpha}(B(x))}&\lesssim_{\|A\|_{C^{0,\alpha}(\mathbb{R}^d)}} \Big(\fint_{B_{2}(x)}\vert \xi+\nabla\phi_{\xi} \vert^p\Big)^{\frac{1}{p}}\nonumber\\ &\stackrel{\eqref{controlunitball}}{\leq}_{\|A\|_{C^{0,\alpha}(\mathbb{R}^d)}}(1+|\xi|) (\r(x))^{\frac{d-{\delta}}{p}}. \end{align*} \end{proof} Before we turn to the proof of Corollary~\ref{cor:average-per-NL}, let us quickly argue that it yields Theorem~\ref{th:corrNL}. \begin{proof}[Proof of Theorem~\ref{th:corrNL}] By \eqref{deterboundproof1} and Theorem~\ref{boundrNLprop}, assumption~\eqref{convergenceofthecor-hyp} in Proposition~\ref{convergenceofperiodiccorrectors} is satisfied for $\nabla \phi_\xi$. Let us show that this also yields assumption~\eqref{convergenceofthecor-hyp} for $\nabla \sigma_\xi$. Indeed, by maximal regularity for the Laplacian applied to equation \eqref{e.Laplace-sig} we have for all $q> 1$, $ \Big(\int_{Q_L} |\nabla \sigma_\xi|^q\Big)^\frac1q \lesssim q \Big(\int_{Q_L} |\xi+\nabla \phi_\xi|^q\Big)^\frac1q, $ so that assumption~\eqref{convergenceofthecor-hyp} for $\nabla \sigma_\xi$ follows from taking the expectation of the $q$-th power of this estimate and using the stationarity of the extended corrector gradient together with the moment bound on $\nabla \phi_\xi$. By \eqref{convergenceofthecor}, we can then pass to the limit $L\uparrow +\infty$ in the moment bounds on the extended corrector gradient for the periodized ensemble, and obtain~\eqref{e.bdd-grad-corrNL}. Likewise, the claimed estimate~\eqref{e:corr-NL-CLT} follows from Corollary~\ref{cor:average-per-NL} for $g$ compactly supported by passing to the limit $L\uparrow \infty$ using \eqref{convergenceofthecor}. The result for general $g \in L^2(\mathbb{R}^d)$ is then obtained by approximation. 
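One possible way to carry out this approximation, for the reader's convenience: given $g\in L^2(\mathbb{R}^d)$, set $g_k:=g\mathds 1_{B_k}$ and $\mathcal F_k:=\int g_k(\nabla\phi_\xi,\nabla \sigma_\xi)$ (the notation $g_k$, $\mathcal F_k$ is only used in this sketch). Applying the estimate for compactly supported data to the differences $g_k-g_l$ yields
\begin{equation*}
\expec{|\mathcal F_k-\mathcal F_l|^{2q}}^{\frac1q}\,\lesssim_{|\xi|}\, q^\gamma \int_{\mathbb{R}^d}|g_k-g_l|^2 \,\xrightarrow[k,l\uparrow\infty]{}\,0,
\end{equation*}
so that $(\mathcal F_k)_k$ converges in $L^{2q}(d\mathbb P)$ and \eqref{e:corr-NL-CLT} passes to the limit.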
The control \eqref{e.growth-nlc} of the growth of the extended corrector is a direct consequence of~\eqref{e:corr-NL-CLT} by ``integration'' (see for instance \cite[Proof of Theorem~4.2, Step~3]{DG-21} -- the argument is also displayed in the proof of Corollary~\ref{coro:corr-diff}). \end{proof} It remains to prove Corollary~\ref{cor:average-per-NL}. \begin{proof}[Proof of Corollary~\ref{cor:average-per-NL}] We split the proof into two steps, first treat averages of $\nabla \phi_\xi$ and then turn to averages of $\nabla \sigma_\xi$. \medskip \step1 Averages of $\nabla \phi_\xi$. \noindent In this step we set $\mathcal{F}:=\int_{Q_L}g \cdot \nabla\phi_\xi$ for some $g \in L^2(\mathbb{R}^d)^d$ compactly supported in $Q_L$. The starting point is the estimate~\eqref{e.sensi-estimNL-ant} in the proof of Proposition~\ref{weakNL}, which takes the form for all $q\ge 1$ of \begin{equation*} \expecL{\mathcal{F}^{2q}}^{\frac{1}{q}}\,\lesssim\, q(1+\vert\xi\vert^p)\expecL{\Big(\int_{Q_L}\r(x)^{d-\delta}\Big(\int_{B(x)}\vert\nabla u\vert^2\mu_{\xi}\Big) \mathrm{d} x\Big)^q}^{\frac{1}{q}}, \end{equation*} where $u$ is the unique weak $Q_L$-periodic solution (with zero average) of \eqref{equationdual}, that is, $-\nabla\cdot a_{\xi}^*\nabla u=\nabla\cdot g$. By duality, we may reformulate the right-hand side as \begin{equation*} {\expecL{\Big(\int_{Q_L}\r(x)^{d-\delta}\Big(\int_{B(x)}\vert\nabla u\vert^2\mu_{\xi}\Big) \mathrm{d} x\Big)^q}^{\frac{1}{q}}} \,=\, \sup_{\mathbb E_L[|X|^{2q'}]=1} \expecL{\int_{Q_L}\r(x)^{d-\delta}\Big(\int_{B(x)}\vert\nabla X u\vert^2\mu_{\xi}\Big) \mathrm{d} x}, \end{equation*} where the supremum runs over random variables $X \in L^{2q'}(d\mathbb P_L)$ which are independent of the space variable. Let $0<\eta<1$ be some exponent (to be fixed later) small enough so that $\frac{q'}{1+\eta}>1$. We then appeal to H\"older's inequality with exponents $(\frac{q'}{q'-1-\eta},\frac{q'}{1+\eta})$ and to the stationarity of $\r$ to the effect that \begin{equation*} \expecL{\int_{Q_L}\r(x)^{d-\delta}\Big(\int_{B(x)}\vert\nabla X u\vert^2\mu_{\xi}\Big) \mathrm{d} x} \, \le \, \expec{\r^{\frac{q'}{q'-1-\eta}(d-\delta)}}^\frac{q'-1-\eta}{q'} \int_{Q_L} \expecL{\Big(\fint_{B(x)}\vert\nabla X u\vert^2\mu_{\xi}\Big)^{\frac{q'}{1+\eta}}}^\frac{1+\eta}{q'} \mathrm{d} x. \end{equation*} Provided $2q'\le 2+\kappa$, we may appeal to Theorem~\ref{th:annealedmeyers} on the second right-hand side factor, which yields (recall that $X$ does not depend on the space variable, that $\mathbb E_L[|X|^{2q'}]=1$ and that $\mu_\xi \ge 1$) \begin{multline*} \int_{Q_L} \expecL{\Big(\fint_{B(x)}\vert\nabla X u\vert^2\mu_{\xi}\Big)^{\frac{q'}{1+\eta}}}^\frac{1+\eta}{q'} \mathrm{d} x \\ \lesssim \, \eta^{-\frac14}|\log(\eta)|^\frac12 \int_{Q_L} \expecL{|X|^{2q'}\Big(\fint_{B(x)}\vert g \vert^2\tfrac{1}{\mu_{\xi}}\Big)^{q'}}^\frac{1}{q'} \mathrm{d} x \, \lesssim \, \eta^{-\frac14}|\log(\eta)|^\frac12 \int |g|^2. \end{multline*} The choice $\eta=\frac12(q'-1)=\frac1{2(q-1)}$ is legitimate provided $q\gg 1$, in which case the above combined with the moment bound on $\r$ of Theorem~\ref{boundrNLprop} yields \begin{equation*} \expecL{\mathcal{F}^{2q}}^{\frac{1}{q}}\,\lesssim\, q^\nu (1+\vert\xi\vert^p)\int |g|^2. \end{equation*} for some exponent $\nu>0$ independent of $q$. This entails \eqref{e.cor:average-per-NL} for $\nabla \phi_\xi$ for a suitable exponent $\gamma>0$ (depending only on $\nu$). \medskip \step2 Averages of $\nabla \sigma_\xi$. \noindent Fix $1\le i,j \le d$. 
We proceed as for $\nabla \phi_\xi$: We first derive a representation formula for the sensitivity of $\mathcal{F}:= \int_{Q_L}g \cdot \nabla \sigma_{\xi,ij}$ with respect to changes of the coefficient $A$, and then use the annealed estimates of Theorems~\ref{th:annealedmeyers} and \ref{th:annealed-lap}, and the moment bounds on $\r$ to conclude. \medskip \substep{2.1} Sensitivity calculus. \noindent Recall the defining equation for $\sigma_{\xi,ij}$ \begin{equation*} -\triangle \sigma_{\xi,ij} \,=\, \partial_i (a(\cdot,\xi+\nabla\phi_{\xi})\cdot e_j)-\partial_j(a(\cdot,\xi+\nabla\phi_{\xi})\cdot e_i). \end{equation*} As in Step~1 of the proof of Proposition~\ref{weakNL}, we proceed by duality. This time we introduce two auxiliary functions $u_1$ and $u_2$ as $Q_L$-periodic solutions of $$ -\triangle u_1 = \nabla \cdot g, \quad -\nabla \cdot a_\xi^* \nabla u_2 = \nabla \cdot a_\xi^*(\partial_i u_1 e_j-\partial_j u_1 e_i), $$ and claim that \begin{equation} \delta_x \mathcal{F} = \int_{B(x)}\vert \aa(\xi+\nabla\phi_{\xi})\otimes (\nabla u_2+\partial_i u_1 e_j-\partial_j u_1 e_i)\vert. \label{e.sens-sigma} \end{equation} Let us quickly argue in favor of \eqref{e.sens-sigma}. With the notation of Step~1 of the proof of Proposition~\ref{weakNL}, and $\delta A$ an increment of $A$ localized in $B(x)$, we have by the defining equations for $\sigma_{\xi,ij}$ and $u_1$ $$ \delta^h \mathcal{F} := \frac{\mathcal{F}(A+h\delta A)-\mathcal{F}(A)}{h} \,=\, \int (\partial_i u_1 e_j-\partial_j u_1 e_i) \cdot \delta^h \Big(a(\xi+\nabla \phi_\xi)\Big), $$ where \begin{eqnarray*} \delta^h \big(a(\xi+\nabla \phi_\xi)\big)&=& \frac{(A+h\delta A)\aa(\xi+\nabla \phi_\xi(A+h\delta A))-A\aa(\xi+\nabla \phi_\xi)}{h} \\ &=&\delta A \aa(\xi+\nabla \phi_\xi(A+h\delta A))+ a^h_\xi \nabla \delta^h \phi_\xi. \end{eqnarray*} Passing to the limit $h\downarrow 0$, and testing the equation for $u_2$ with $\delta \phi_\xi$ and equation \eqref{sensiNLequation1+} with $u_2$, we obtain \begin{eqnarray*} \delta \mathcal{F} \,=\,\lim_{h\downarrow 0} \delta^h \mathcal{F} &=& \int (\partial_i u_1 e_j-\partial_j u_1 e_i) \cdot \Big(\delta A \aa(\xi+\nabla \phi_\xi)+a_\xi \nabla \delta \phi_\xi\Big) \\ &=&\int (\nabla u_2+\partial_i u_1 e_j-\partial_j u_1 e_i) \cdot \delta A \aa(\xi+\nabla \phi_\xi), \end{eqnarray*} and the claim follows by taking the supremum over $\delta A$. \medskip \substep{2.2} Proof of \eqref{e.cor:average-per-NL}. \noindent Combining \eqref{e.sens-sigma} with the logarithmic-Sobolev inequality, we obtain for all $q\ge 1$ \begin{equation*} \expecL{|\mathcal{F}|^{2q}}^{\frac{1}{q}}\,\lesssim\, q \expecL{\Big(\int_{Q_L} \Big(\fint_{B(x)}|\aa(\xi+\nabla \phi_\xi)|\vert\nabla u_2+\partial_i u_1 e_j-\partial_j u_1 e_i\vert \Big)^2 \mathrm{d} x\Big)^q}^{\frac{1}{q}}. \end{equation*} We treat differently the terms involving $u_1$ and $u_2$. For $u_2$ we proceed as in Step~3 of the proof of Proposition~\ref{weakNL} (using the definition~\eqref{e.def-mu} of $\mu_\xi$ and \eqref{controlunitball}), whereas for $u_1$ we directly use \eqref{deterboundproof1}. This yields \begin{multline*} \expecL{|\mathcal{F}|^{2q}}^{\frac{1}{q}}\,\lesssim_{|\xi|}\, q \expecL{\Big(\int_{Q_L}\r(x)^{d-\delta}\Big(\int_{B(x)}\vert\nabla u_2\vert^2\mu_{\xi}\Big) \mathrm{d} x\Big)^q}^{\frac{1}{q}} \\ +q \expecL{\Big(\int_{Q_L}\r(x)^{\frac{2(p-1)}p(d-\delta)}\Big(\int_{B(x)}\vert\nabla u_1\vert^2\Big) \mathrm{d} x\Big)^q}^{\frac{1}{q}}. 
\end{multline*} As in Step~1, this entails \begin{multline*} \expecL{|\mathcal{F}|^{2q}}^{\frac{1}{q}}\,\lesssim_{|\xi|}\,q \sup_{\mathbb E_L[|X|^{2q'}]=1} \expecL{\int_{Q_L}\r(x)^{d-\delta}\Big(\int_{B(x)}\vert \nabla X u_2\vert^2\mu_{\xi}\Big) \mathrm{d} x} \\ +q \sup_{\mathbb E_L[|X|^{2q'}]=1} \expecL{ \int_{Q_L}\r(x)^{\frac{2(p-1)}p(d-\delta)}\Big(\int_{B(x)}\vert\nabla Xu_1\vert^2\Big) \mathrm{d} x }. \end{multline*} For the second right-hand side term, we proceed as in Step~1 (using Theorem~\ref{th:annealed-lap} in place of Theorem~\ref{th:annealedmeyers}), and it remains to treat the first right-hand side term. We then use H\"older's inequality with exponents $(\frac{q'}{q'-(1+\eta)^2},\frac{q'}{(1+\eta)^2})$ for some $0<\eta<1$ (so that $q'>(1+\eta)^2$) to be chosen below to the effect that \begin{multline*} \expecL{\int_{Q_L}\r(x)^{d-\delta}\Big(\int_{B(x)}\vert\nabla X u_2\vert^2\mu_{\xi}\Big) \mathrm{d} x} \\ \le \, \expec{\r^{\frac{q'}{q'-(1+\eta)^2}(d-\delta)}}^\frac{q'-(1+\eta)^2}{q'} \int_{Q_L} \expecL{\Big(\fint_{B(x)}\vert\nabla X u_2\vert^2\mu_{\xi}\Big)^{\frac{q'}{(1+\eta)^2}}}^\frac{(1+\eta)^2}{q'} \mathrm{d} x. \end{multline*} We then appeal to the annealed Meyers estimate of Theorem~\ref{th:annealedmeyers} under the condition that $2\le \frac{2q'}{(1+\eta)^2}\le 2+\kappa$, and obtain \begin{equation*} \int_{Q_L} \expecL{\Big(\fint_{B(x)}\vert\nabla X u_2\vert^2\mu_{\xi}\Big)^{\frac{q'}{(1+\eta)^2}}}^\frac{(1+\eta)^2}{q'} \mathrm{d} x \, \lesssim \,\eta^\frac14 |\log \eta|^\frac12 \int_{Q_L} \expecL{\Big(\fint_{B(x)}\vert \mu_\xi \nabla Xu_1 \vert^2\tfrac{1}{\mu_{\xi}}\Big)^{\frac{q'}{1+\eta}}}^\frac{1+\eta}{q'} \mathrm{d} x \end{equation*} since under the assumption $0<\eta<\frac12$, we have $(1+\eta)^2-1\lesssim \eta$. Bounding $\mu_\xi$ by $\r^{\frac{p-2}p(d-\delta)}$ (cf.~Lemma~\ref{ctrlavNL}) and using H\"older's inequality with exponents $(\frac{1+\eta}{\eta},1+\eta)$, the integral in the right-hand side is controlled by $$ \int_{Q_L} \expecL{\Big(\fint_{B(x)}\vert \mu_\xi \nabla Xu_1 \vert^2\tfrac{1}{\mu_{\xi}}\Big)^{\frac{q'}{1+\eta}}}^\frac{1+\eta}{q'} \,\lesssim \, \expecL{\r^{\frac{q'}{\eta} \frac{p-2}p(d-\delta) }}^\frac{\eta}{q'} \int_{Q_L} \expecL{\Big(\fint_{B(x)}\vert \nabla Xu_1 \vert^2 \Big)^{ q' }}^\frac{1}{q'}. $$ We finally estimate the integral term by Theorem~\ref{th:annealed-lap}, which yields (since there is no loss in the stochastic exponent, $g$ is deterministic, and $1\le q'\le 2$) $$ \int_{Q_L} \expecL{\Big(\fint_{B(x)}\vert \nabla Xu_1 \vert^2 \Big)^{ q' }}^\frac{1}{q'} \,\lesssim\, \expecL{|X|^{2q'}} \int_{Q_L}|g|^2 = \int_{Q_L}|g|^2. $$ The conclusion follows by choosing $\eta=\frac14(q'-1)$ and $q\gg 1$, and using the moment bound on $\r$ of Theorem~\ref{boundrNLprop}. \end{proof} \section{Control of corrector differences: Proof of Theorem~\ref{th:corrL}} \subsection{Reduction argument} As for nonlinear correctors, by Proposition~\ref{convergenceofperiodiccorrectors} it is enough to prove estimates for $L$-periodic ensembles that are uniform with respect to $L$. We split the version of Corollary~\ref{cor:average-per-NL} for the linearized corrector into two statements: Proposition~\ref{prop:average-per-L} below shows that averages of the gradient of the extended linearized corrector decay at the CLT scaling provided we have good control of moments of $\nabla \tilde \phi_{\xi,e}$, whereas Proposition~\ref{prop:average-per-L+} provides the latter. 
\begin{proposition}\label{prop:average-per-L} Under Hypothesis~\ref{hypo}, for all $\xi \in \mathbb{R}^d$ and all $0<\theta<1$ there exists $\gamma>0$ (depending on $|\xi|$ and $\theta$) such that for all $L \ge 1$, all $g \in L^2(\mathbb{R}^d)$ compactly supported in $Q_L$, and all unit vectors $e \in \mathbb{R}^d$, the random field $\mathcal{F}:=\int_{Q_L}g(\nabla\tilde \phi_{\xi,e},\nabla \tilde \sigma_{\xi,e})$ satisfies for all $q \ge 1$ such that $2q'\le 2+\kappa$ (where $\kappa>0$ is as in Theorem~\ref{th:annealedmeyers}) \begin{equation}\label{e.prop:average-per-L} \expecL{|\mathcal{F}|^{2q}}^{\frac{1}{q}}\lesssim_{|\xi|,\theta} q^\gamma \expecL{\Big(\sup_B |\nabla\tilde\phi_{\xi,e}+e|^2 \mu_\xi\Big)^{q(1+\theta)}}^\frac1{q(1+\theta)} \Big(\int_{Q_L}|g|^2\Big) \end{equation} \end{proposition} The proof of Proposition~\ref{prop:average-per-L} relies on a sensitivity estimate by duality combined with the annealed Meyers estimate of Theorem~\ref{th:annealedmeyers}. \begin{proposition}[Control of moments]\label{prop:average-per-L+} Under Hypothesis~\ref{hypo}, for all $\xi \in \mathbb{R}^d$, there exists $\gamma>0$ (depending on $|\xi|$) such that for all $L \ge 1$ and all unit vectors $e \in \mathbb{R}^d$ we have \begin{equation}\label{e.prop:moment-per-L+} \expecL{\Big(\sup_B |\nabla\tilde \phi_{\xi,e}+e|^2 \mu_\xi\Big)^{q}}^\frac1{q} \,\lesssim \, q^\gamma. \end{equation} \end{proposition} The proof of Proposition~\ref{prop:average-per-L+} is based on Proposition~\ref{prop:average-per-L} and a buckling argument. Because the linearized corrector equation has unbounded coefficients, we cannot use the elegant approach of \cite{Otto-Tlse} (see also \cite[Proposition~4.5]{DG-21}) to buckle on moments of $\nabla \tilde \phi_{\xi,e}$ themselves. Instead, as we did for $r_{\star,\xi,L}$, we have to go through the super levelsets of some minimal radius controlling the growth of averages of $|\nabla \tilde \phi_{\xi,e}|^2\mu_\xi$. \medskip Before we turn to the proofs, let us show how bounds on linearized correctors allow us to derive bounds on nonlinear corrector differences in form of Corollary~\ref{coro:corr-diff}. \begin{proof}[Proof of Corollary~\ref{coro:corr-diff}] For simplicity, we only treat $\phi_\xi$. \medskip \step1 Statement for differences of corrector gradients. \noindent By \eqref{convergenceofthecor} in Proposition~\ref{convergenceofperiodiccorrectors} in form of (note the difference of expectations) $$ \expec{|\nabla (\phi_\xi-\phi_{\xi'})|^q}^\frac1q \,=\, \lim_{L\uparrow +\infty}\expecL{\vert \nabla (\phi_\xi-\phi_{\xi'})\vert^q}^\frac1q, $$ it suffices to prove the statement for the periodized ensemble. By Lemma~\ref{lemmadiffcor}, $\mathbb P_L$-almost surely, $\xi \mapsto \nabla \phi_\xi$ is differentiable and we have by the fundamental theorem of calculus for all $e \in \mathbb{R}^d$ \begin{equation}\label{e.good-diff} e\cdot (\nabla \phi_\xi-\nabla \phi_{\xi'})= \int_0^1 \nabla \tilde \phi_{\xi+t(\xi'-\xi),e} \cdot (\xi'-\xi)dt, \end{equation} so that by taking the $q$-th moment and using Proposition~\ref{prop:average-per-L+}, one obtains \begin{equation}\label{e.approxL-per-cor+} \expecL{|\nabla \phi_\xi-\nabla \phi_{\xi'}|^q}^\frac1q \,\le \, |\xi-\xi'| \sum_i \int_0^1 \expecL{|\nabla \phi_{\xi+t(\xi'-\xi),e_i}|^q}^\frac1q \, \lesssim \, q^\gamma |\xi-\xi'|, \end{equation} which yields the claim by taking the limit $L\uparrow \infty$. \medskip \step2 Statement for corrector differences. 
\noindent By \eqref{convergenceofthecor+}, since $\int_B \phi_\xi =0$, for all $x\in \mathbb{R}^d$ we have for all $q\ge 1$ \begin{equation}\label{e.approxL-per-cor} \expec{\Big(\int_{B(x)} |\phi_\xi-\phi_{\xi'}|^2\Big)^\frac{q}2}^\frac1q =\lim_{L\uparrow \infty}\expecL{\Big(\int_{B(x)} \Big|\phi_\xi-\phi_{\xi'}-\fint_B \phi_\xi-\phi_{\xi'}\Big|^2\Big)^\frac{q}2}^\frac1q. \end{equation} To control the right-hand side, we shall bound moments of periodic random fields $\zeta$ by moments of averages of their gradients $\nabla \zeta$. Indeed, by Poincar\'e's inequality on $B(x)$ for $x \in Q_L$, we have for $c=\fint_{B} \zeta$ \begin{equation}\label{eq:phi-bnd0} \expecL{ \Big( \int_{B(x)} (\zeta-c)^2 \Big)^\frac q2}^\frac1q \,\lesssim\, \expecL{|\nabla \zeta|^q}^\frac1q +\expecL{\Big| \fint_{B(x)} \zeta -c \Big|^{q}}^\frac1q, \end{equation} and it remains to estimate the second right-hand side term. For that purpose, we write \[\fint_{B(x)}\zeta-\fint_{B}\zeta=\int_{Q_L}\nabla\zeta\cdot\nabla h_{x},\] where $h_x$ denotes the unique weak periodic solution in $Q_L$ of $-\triangle h_x=\tfrac{1}{|B|}(\mathds1_{B(x)}-\mathds1_B)$. We apply this to $\zeta=\phi_\xi-\phi_{\xi'}$ and rewrite the gradient as $e\cdot \nabla \zeta = \int_0^1 \nabla \phi_{\xi+t(\xi'-\xi),e} \cdot (\xi'-\xi)dt$, to the effect that (with implicit sum on the repeated index $i$) $$ \fint_{B(x)}(\phi_\xi-\phi_{\xi'})-\fint_{B}(\phi_\xi-\phi_{\xi'})=(\xi'-\xi) \cdot \int_0^1 \int_{Q_L}\nabla \phi_{\xi+t(\xi'-\xi),e_i} \nabla_i h_{x}. $$ Using Propositions~\ref{prop:average-per-L} and~\ref{prop:average-per-L+}, this yields $$ \expecL{\Big|\fint_{B(x)}(\phi_\xi-\phi_{\xi'})-\fint_{B}(\phi_\xi-\phi_{\xi'})\Big|^q}^\frac1q \,\le_{|\xi|,|\xi'|} \,|\xi-\xi'| q^\gamma \Big(\int_{Q_L} |\nabla h_x|^2\Big)^\frac12. $$ A direct computation with Green's kernel gives $\|\nabla h_{x}\|_{L^2(Q_L)}\,\lesssim\,\mu_d(x)$, and thus $$ \expecL{\Big|\fint_{B(x)}(\phi_\xi-\phi_{\xi'})-\fint_{B}(\phi_\xi-\phi_{\xi'})\Big|^q}^\frac1q \,\le_{|\xi|,|\xi'|} \,|\xi-\xi'| q^\gamma \mu_d(x). $$ Combined with \eqref{eq:phi-bnd0}, \eqref{e.approxL-per-cor}, and \eqref{e.approxL-per-cor+}, this entails $ \expec{\Big(\int_{B(x)} |\phi_\xi-\phi_{\xi'}|^2\Big)^\frac{q}2}^\frac1q \, \lesssim\, |\xi-\xi'|q^\gamma \mu_d(x), $ from which the claim follows using local regularity in form of Lemma~\ref{regestiNL} and \eqref{e.approxL-per-cor+} in the limit $L\uparrow +\infty$. \medskip \step3 Regularity of $\xi \mapsto \bar a(\xi)$. \noindent The starting point is the definition $\bar a(\xi):=\expec{a(\xi+\nabla \phi_\xi)}=\expec{A(0)\aa(\xi+\nabla \phi_\xi(0))}$ and of its approximation by periodization $\bar a_L(\xi):=\expecL{A(0)\aa(\xi+\nabla \phi_\xi(0))}$ for all $L\ge 1$. Since $\bar a_L(\xi) \to \bar a(\xi)$ as $L\uparrow +\infty$, it is enough to prove that $D\bar a_L$ is Lipschitz-continuous uniformly with respect to $L$ and given for all $\xi,e \in \mathbb{R}^d$ by $$ D\bar a_L(\xi) e:= \bar a_{L,\xi} e=\expecL{A(0)D\aa(\xi+\nabla \phi_\xi(0))(e+\nabla \tilde \phi_{\xi,e}(0))}. $$ The differentiability of $\xi \mapsto \bar a_L(\xi)$ and the formula for $D \bar a_L(\xi)$ follow from \eqref{e.good-diff}, the continuity of $\xi \mapsto \nabla {\tilde \phi_{\xi,e}}$, and the moment bounds on $\nabla {\tilde \phi_{\xi,e}}$. It remains to argue that $\xi \mapsto D\bar a_L(\xi)$ is Lipschitz-continuous. 
Since $\xi \mapsto \nabla \phi_\xi$ is continuously differentiable with stretched exponential moment bounds, it is enough to prove that $\xi \mapsto \nabla \tilde \phi_{\xi,e}$ is itself Lipschitz-continuous in $L^2(d\mathbb P)$. This is a direct consequence of the defining equation~\eqref{e.Lcorr} in the form for all $\xi,\xi' \in \mathbb{R}^d$ of \begin{equation*} -\nabla \cdot D a(\cdot,\xi+\nabla \phi_\xi)\nabla (\tilde \phi_{\xi,e}-\tilde \phi_{\xi',e}) \,=\, \nabla \cdot (D a(\cdot,\xi+\nabla \phi_\xi)-D a(\cdot,\xi'+\nabla \phi_{\xi'})) (e+\nabla \tilde \phi_{\xi',e}) \end{equation*} combined with the differentiability of $\xi \mapsto \nabla \phi_\xi$, uniform moment bounds on $\nabla\tilde \phi_{\xi',e}$ and $\nabla \phi_{\xi}$, and an energy estimate. \end{proof} \subsection{CLT-scaling: Proof of Proposition~\ref{prop:average-per-L}} In this paragraph, we fix $e$ and $\xi$, and use the short-hand notation $\r$ for $r_{\star,\xi,L}$, $\phi$ for $\phi_\xi$, $\mu$ for $\mu_\xi$, $\tilde \phi$ for $\tilde \phi_{\xi,e}$, $\tilde \sigma$ for $\tilde \sigma_{\xi,e}$. We split the proof into three steps. In the first two steps, we derive sensitivity estimates for averages of $\nabla \tilde \phi$ and of $\nabla \tilde \sigma$, respectively, and then conclude in the third step using Theorems~\ref{th:annealedmeyers} and~\ref{th:annealed-lap}. \medskip \step1 Sensivity formula for $\nabla \tilde \phi$: The random variable $\mathcal{F}_1:= \int_{Q_L} g \cdot \nabla \tilde \phi$ (where $g$ abusively denotes $g e'$ for some unit vector $e' \in \mathbb{R}^d$) satisfies for all $x\in Q_L$ \begin{equation} \delta_x \mathcal{F}_1\,=\, \int_{B(x)}\vert D\aa(\xi+\nabla\phi)(e+\nabla\tilde \phi)\otimes \nabla u_1+\aa(\xi+\nabla\phi)\otimes \nabla u_2\vert, \label{functioderivL} \end{equation} where we recall that $\aa: \xi\in\mathbb{R}^d\mapsto (1+\vert\xi\vert^{p-2})\xi$, and where $u_1,u_2\in H^1_{\mathrm{per}}(Q_L)$ are the unique weak solutions of \begin{equation} -\nabla\cdota^*_{\xi}\nabla u_1=\nabla\cdot g \label{equationdualL1} \end{equation} and (with an implicit sum over the repeated index $k$) \begin{equation} -\nabla\cdota^*_{\xi}\nabla u_2=\partial_k(D^2a(\xi+\nabla\phi)(e+\nabla\tilde \phi)e_k\cdot \nabla u_1). \label{equationdual2} \end{equation} (These equations are well-posed since the $Q_L$-periodic maps $\nabla \phi$ and $a^*_{\xi}$ are bounded almost surely.) \medskip Let us give the quick argument. Following Step~1 of the proof of Proposition~\ref{weakNL}, we let $\delta A$ be an increment of $A$ localized in $B(x)$ and consider for $h$ small enough \begin{align*} \delta^h\mathcal{F}_{1}&:=\frac{\mathcal{F}_{1}(A+h\delta A)-\mathcal{F}_{1}(A)}{h}=\int_{Q_L}g\cdot \nabla\delta^h\tilde \phi,\quad \delta^h\tilde \phi:=\frac{\tilde \phi(A+h\delta A)-\tilde \phi(A)}{h},\\ b^h_\xi&:=A\int_{0}^1D^2 \aa(\xi+t\nabla\phi(A+h\delta A)+(1-t)\nabla\phi(A))\mathrm{d} t, \end{align*} and recall the notation \eqref{functionderivNLnota1} and \eqref{functionderivNLnota2}. 
By the defining equation~\eqref{e.Lcorr} for the linearized corrector, we obtain \begin{align*} -\nabla\cdota_{\xi}\nabla\delta^h\tilde \phi=&\nabla\cdot \delta A D\aa(\xi+\nabla \phi(A+h\delta A))(e+\nabla\tilde \phi(A+h\delta A))\nonumber\\ &+\frac{1}{h}\nabla\cdot A(D\aa(\xi+\nabla\phi(A+h\delta A))-D\aa(\xi+\nabla\phi(A))(e+\nabla\tilde \phi(A+h\delta A)), \end{align*} which we rewrite, by the fundamental theorem of calculus and the definition of $b^h_{\xi}$, as \begin{equation*} -\nabla\cdot a_{\xi}\nabla\delta^h\tilde \phi=\nabla\cdot \delta A D\aa(\xi+\nabla \phi(A+h\delta A))(e+\nabla\tilde \phi(A+h\delta A))+\nabla\cdot b^h_\xi \nabla\delta^h\phi(e+\nabla\tilde \phi(A+h\delta A)). \end{equation*} As in Step~1 of the proof of Proposition~\ref{weakNL}, we can pass to the limit as $h\downarrow 0$ and obtain that $\delta^h\tilde \phi$ converges in $C^{1,\alpha}(Q_L)$ to the solution $\delta\tilde \phi \in H^1_{\mathrm{per}}(Q_L)$ of \begin{equation} -\nabla\cdot a_{\xi}\nabla\delta\tilde \phi=\nabla\cdot \delta A D\aa(\xi+\nabla \phi )(e+\nabla\tilde \phi )+\nabla\cdot b_\xi \nabla\delta\phi(e+\nabla\tilde \phi), \label{sensiLequation1+} \end{equation} with $b_\xi:=D^2 a(\xi+ \nabla\phi)$. We now proceed by duality. First, we test \eqref{sensiLequation1+} with $u_1$ and \eqref{equationdualL1} with $\delta\tilde \phi$ to obtain \begin{equation} \delta\mathcal{F}_{1}=\lim_{h\downarrow 0} \delta^h \mathcal{F}_1=\int_{Q_L}\nabla u_1\cdot \delta A D\aa(\xi+\nabla \phi)(e+\nabla\tilde \phi)+\int_{Q_L}\nabla u_1\cdot b_\xi \nabla\delta \phi(e+\nabla\tilde \phi ). \label{sensiLequation3} \end{equation} Second, we test \eqref{sensiNLequation1+} with $u_2$ and \eqref{equationdual2} with $\delta\phi$ to get \begin{equation} \int_{Q_L}\nabla u_1\cdot b_\xi \nabla\delta \phi(e+\nabla\tilde \phi )=\int_{Q_L}\nabla u_2\cdot \delta A \aa(\xi+\nabla\phi ). \label{sensiLequation4} \end{equation} The combination of \eqref{sensiLequation3} and \eqref{sensiLequation4} then entails the claim \eqref{functioderivL} by taking the supremum over $\delta A$. \medskip \step2 Sensitivity formula for $\nabla \tilde \sigma_{ij}$ (for $i,j$ fixed): The random variable $\mathcal{F}_2:= \int_{Q_L} g \cdot \nabla \tilde \sigma_{ij}$ satisfies for all $x\in Q_L$ \begin{equation} \delta_x \mathcal{F}_2\,=\, \int_{B(x)}\vert D\aa(\xi+\nabla \phi)(e+\nabla \tilde \phi) \otimes (\nabla w_1+\partial_i v e_j-\partial_j v e_i)+\aa(\xi+\nabla \phi)\otimes \nabla w_2 \vert , \label{functioderiv-sigL} \end{equation} where the functions $v,w_1,w_2 \in H^1_{\mathrm{per}}(Q_L)$ solve (with an implicit sum over the repeated index $k$) \begin{eqnarray} -\triangle v&=&\nabla \cdot g,\label{e.sens-ant-v} \\ -\nabla \cdot a_\xi^* \nabla w_1&=& \nabla \cdot a_\xi^* (\partial_i v e_j-\partial_j v e_i),\label{e.sens-ant-w1} \\ -\nabla \cdot a_\xi^* \nabla w_2&=&\partial_k\big(D^2a(\xi+\nabla\phi)(e+\nabla\tilde \phi)e_k\cdot (\nabla w_1+\partial_i v e_j-\partial_j v e_j)\big).\label{e.sens-ant-w2} \end{eqnarray} We only display the algebra of the argument (passing already to the limit $h\downarrow 0$, which entails that $\delta=\lim_{h\downarrow 0}\delta^h$ satisfies the Leibniz rule). Recall the defining equation for $\tilde \sigma_{ij}$ with the notation $a_\xi=D a(\xi+\nabla \phi)$ \begin{equation*} -\triangle \tilde \sigma_{ij} \,=\, \partial_i (a_\xi(e+\nabla \tilde \phi)\cdot e_j)-\partial_j(a_\xi(e+\nabla \tilde \phi)\cdot e_i). 
\end{equation*} First, by \eqref{e.sens-ant-v}, $$ \delta \mathcal{F}_2 = \int (\partial_i v e_j-\partial_j v e_i) \cdot \delta \Big( D a(\xi+\nabla \phi)(e+\nabla \tilde \phi)\Big). $$ Since $\delta$ satisfies the Leibniz rule, we have \begin{equation*} \delta \big( D a(\xi+\nabla \phi)(e+\nabla \tilde \phi)\big) \,=\, \delta A D\aa(\xi+\nabla \phi)(e+\nabla \tilde \phi)+ D^2 a(\xi+\nabla \phi) \nabla \delta \phi (e+\nabla \tilde \phi)+a_\xi \nabla \delta \tilde \phi. \end{equation*} The first right-hand term directly gives the right-hand side contribution of \eqref{functioderiv-sigL} involving $\nabla v$. For the second term, we introduce the solutions $w_{2,1}$ and $w_{2,2}$ of $-\nabla \cdot a_\xi^* \nabla w_{2,1}=\partial_k\big(D^2a(\xi+\nabla\phi)(e+\nabla\tilde \phi)e_k\cdot (\partial_i v e_j-\partial_j v e_j)\big)$ and $-\nabla \cdot a_\xi^* \nabla w_{2,2}=\partial_k\big(D^2a(\xi+\nabla\phi)(e+\nabla\tilde \phi)e_k\cdot \nabla w_1 \big)$ to the effect that $w_2=w_{2,1}+w_{2,2}$. By using \eqref{sensiNLequation1+}, we obatin $$ \int (\partial_i u e_j-\partial_j u e_i) \cdot D^2 a(\xi+\nabla \phi) \nabla \delta \phi (e+\nabla \tilde \phi)\,=\, \int \nabla w_{2,1} \cdot \delta A \aa(\xi+\nabla \phi). $$ This yields part of the right-hand side contribution of \eqref{functioderiv-sigL} involving $\nabla w_2$. We conclude with the third term. Using first \eqref{e.sens-ant-w1} we obtain $$ \int (\partial_i u e_j-\partial_j u e_i) \cdot a_\xi \nabla \delta \tilde \phi = -\int \nabla w_1\cdot a_\xi \nabla \delta \tilde \phi, $$ and therefore using \eqref{sensiLequation1+} $$ \int (\partial_i u e_j-\partial_j u e_i) \cdot a_\xi \nabla \delta \tilde \phi = \int \nabla w_1\cdot \Big(\delta A D\aa(\xi+\nabla \phi )(e+\nabla\tilde \phi )+ D^2a(\xi+\nabla \phi) \nabla\delta\phi(e+\nabla\tilde \phi)\Big). $$ The first right-hand side term yields the right-hand side contribution of \eqref{functioderiv-sigL} involving $\nabla w_1$. For the second term, we use $w_{2,2}$, and conclude using \eqref{sensiNLequation1+} that $$ \int \nabla w_1\cdot D^2a(\xi+\nabla \phi) \nabla\delta\phi(e+\nabla\tilde \phi)\,=\, \int \nabla w_{2,2} \cdot \delta A \aa(\xi+\nabla \phi). $$ This gives the second part of the right-hand side contribution of \eqref{functioderiv-sigL} involving $\nabla w_2$, recalling that $\nabla w_2=\nabla w_{2,1}+\nabla w_{2,2}$. \medskip \step3 Proof of \eqref{e.prop:average-per-L}. \noindent From the logarithmic-Sobolev inequality, and Steps~1 and~2, we deduce by the triangle inequality that for all $q\ge 1$, \begin{multline*} \expecL{ |\mathcal{F}|^{2q}}^\frac1q \,\lesssim \, \underbrace{q \expecL{\Big(\int_{Q_L} \Big(\int_{B(x)}| D\aa(\xi+\nabla\phi)||e+\nabla\tilde \phi|(|\nabla u_1|+|\nabla v|+|\nabla w_1|)\Big)^2\mathrm{d} x\Big)^q}^\frac1q}_{\displaystyle =:I_1} \\ +\underbrace{q \expecL{\Big(\int_{Q_L} \Big(\int_{B(x)}|\aa(\xi+\nabla\phi)|(|\nabla u_2|+|\nabla w_2|)\Big)^2\mathrm{d} x\Big)^q}^\frac1q}_{\displaystyle =:I_2}. \end{multline*} To control these terms we proceed as in the proof of Corollary~\ref{cor:average-per-NL}: using duality in probability and Theorems~\ref{th:annealedmeyers} and~\ref{th:annealed-lap}. We treat the two right-hand sides separately. (In what follows, $\gamma$ denotes finite positive exponents independent of $q$, the precise value of which we are not interested in.) 
\medskip \substep{3.1} Proof of \begin{equation}\label{e.sens-ant-3.1} I_1 \,\lesssim \, q^\gamma \expecL{\Big(\int_{B(x)} |e+\nabla\tilde \phi|^2 \mu_\xi\Big)^{q(1+\theta)}}^\frac{1}{q(1+\theta)} \int_{Q_L} |g|^2. \end{equation} The most technical term to treat is the one involving $w_1$ (which is defined by solving two equations successively, whereas $u_1$ and $v$ are defined by solving one equation only). By Cauchy-Schwarz' inequality, and the definitions of $\aa$ and $\mu_\xi$, \begin{multline*} \expecL{\Big(\int_{Q_L} \Big(\int_{B(x)}| D\aa(\xi+\nabla\phi)||e+\nabla\tilde \phi||\nabla w_1|\Big)^2\mathrm{d} x\Big)^q}^\frac1q\\ \lesssim \, \expecL{\Big(\int_{Q_L} \Big(\int_{B(x)} |e+\nabla\tilde \phi|^2 \mu_\xi\Big)\Big(\int_{B(x)} |\nabla w_1|^2 \mu_\xi\Big)\mathrm{d} x\Big)^q}^\frac1q. \end{multline*} By duality (in probability), this entails \begin{eqnarray*} \lefteqn{\expecL{\Big(\int_{Q_L} \Big(\int_{B(x)}| D\aa(\xi+\nabla\phi)||e+\nabla\tilde \phi||\nabla w_1|\Big)^2\mathrm{d} x\Big)^q}^\frac1q}\\ &\lesssim & \sup_{X} \expecL{\int_{Q_L} \Big(\int_{B(x)} |e+\nabla\tilde \phi|^2 \mu_\xi\Big)\Big(\int_{B(x)} |\nabla X w_1|^2 \mu_\xi\Big)\mathrm{d} x}, \end{eqnarray*} where the supremum runs over random variables $X$ (independent of the space variable) such that $\expec{|X|^{2q'}}=1$. To obtain the claimed dependence on the moments of $\int_{B(x)} |e+\nabla\tilde \phi|^2 \mu_\xi$, we set $\eta_\circ:=\frac{\theta}{(1+\theta)(q-1)}$, to the effect that $q'>1+\eta_\circ$ and $\frac{q'}{q'-(1+\eta_\circ)}=q(1+\theta)$, and use H\"older's inequality with exponents $(\frac{q'}{q'-(1+\eta_\circ)},\frac{q'}{1+\eta_\circ})$, so that the above turns into \begin{eqnarray*} \lefteqn{\expecL{\Big(\int_{Q_L} \Big(\int_{B(x)}| D\aa(\xi+\nabla\phi)||e+\nabla\tilde \phi||\nabla w_1|\Big)^2\mathrm{d} x\Big)^q}^\frac1q}\\ &\lesssim & \expecL{\Big(\int_{B(x)} |e+\nabla\tilde \phi|^2 \mu_\xi\Big)^{q(1+\theta)}}^\frac{1}{q(1+\theta)} \sup_{X} \int_{Q_L}\expecL{\Big(\int_{B(x)} |\nabla X w_1|^2 \mu_\xi\Big)^\frac{q'}{1+\eta_\circ}}^\frac{1+\eta_\circ}{q'}\mathrm{d} x. \end{eqnarray*} For convenience, we rewrite $1+\eta_\circ$ as $(1+\eta)^2$, and apply Theorem~\ref{th:annealedmeyers} to \eqref{e.sens-ant-w1}, which yields provided $ 2q' \le 2+ \kappa$, $$ \int_{Q_L}\expecL{\Big(\int_{B(x)} |\nabla X w_1|^2 \mu_\xi\Big)^\frac{q'}{(1+\eta)^2}}^\frac{(1+\eta)^2}{q'}\mathrm{d} x\,\lesssim \, \zeta(\eta_\circ) \int_{Q_L}\expecL{\Big(\int_{B(x)} |\nabla X v|^2 \mu_\xi\Big)^\frac{q'}{1+\eta}}^\frac{1+\eta}{q'}\mathrm{d} x, $$ where $\zeta:t \mapsto t^{-\frac14}|\log t|^\frac12$ (since for $0<\eta_\circ<\frac12$, $\zeta(\eta)=\zeta(\sqrt{1+\eta_\circ}-1) \lesssim \zeta(\eta_\circ)$). 
By the bound $\mu_\xi\lesssim \r^{(d-\delta)\frac{p-2}{p}}$ and H\"older's inequality with exponents $(\frac{1+\eta}{\eta},1+\eta)$, followed by Theorem~\ref{th:annealed-lap} applied to \eqref{e.sens-ant-v} (with exponent $q' \lesssim 1$) we further have \begin{eqnarray*} {\int_{Q_L}\expecL{\Big(\int_{B(x)} |\nabla X v|^2 \mu_\xi\Big)^\frac{q'}{1+\eta}}^\frac{1+\eta}{q'}\mathrm{d} x} &\lesssim & \expecL{ \r^{\frac{q'}{\eta}(d-\delta)\frac{p-2}{p}}}^\frac{\eta}{q'} \int_{Q_L}\expecL{\Big(\int_{B(x)} |\nabla X v|^2 \Big)^{q'}}^\frac{1}{q'}\mathrm{d} x \\ &\lesssim& \expecL{ \r^{ \frac{q'}{\eta} (d-\delta)\frac{p-2}{p}}}^\frac{\eta}{q'} \expecL{|X|^{2q'}}^\frac{1}{q'}\int_{Q_L} |g|^2 \\ &=& \expecL{ \r^{ \frac{q'}{\eta} (d-\delta)\frac{p-2}{p}}}^\frac{\eta}{q'} \int_{Q_L} |g|^2, \end{eqnarray*} where we used that $g$ is deterministic and $\expec{|X|^{2q'}}=1$. We have thus proved that \begin{multline*} \expecL{\Big(\int_{Q_L} \Big(\int_{B(x)}| D\aa(\xi+\nabla\phi)||e+\nabla\tilde \phi||\nabla w_1|\Big)^2\mathrm{d} x\Big)^q}^\frac1q\\ \lesssim \, \expecL{\Big(\int_{B(x)} |e+\nabla\tilde \phi|^2 \mu_\xi\Big)^{q(1+\theta)}}^\frac{1}{q(1+\theta)} \zeta(\eta_\circ) \expecL{ \r^{\frac{q'}{\sqrt{1+\eta_\circ}-1}\frac{p-2}{p}(d-\delta)}}^\frac{\sqrt{1+\eta_\circ}-1}{q'} \int_{Q_L} |g|^2. \end{multline*} Since $\eta_\circ=\frac{\theta}{(1+\theta)(q-1)}$, by definition of $\zeta$ and by the moment bound on $\r$ of Theorem~\ref{boundrNLprop}, $$ q\zeta(\eta_\circ) \expecL{ \r^{ \frac{q'}{\sqrt{1+\eta_\circ}-1}\frac{p-2}{p}(d-\delta)}}^\frac{\sqrt{1+\eta_\circ}-1}{q'} \,\lesssim \, q^\gamma $$ for some exponent $\gamma>0$ independent of $q$. This entails the claimed estimate \eqref{e.sens-ant-3.1}. \medskip \substep{3.2} Proof of \begin{equation}\label{e.sens-ant-3.2} I_2\, \lesssim\, q^\gamma \expecL{\sup_{B}\{ |e+\nabla\tilde \phi|^2 \mu_\xi\}^{q(1+\theta)}}^\frac{1}{q(1+\theta)} \int_{Q_L}|g|^2. \end{equation} We only display the argument for the term involving $\nabla w_2$, which is defined by solving three equations successively (which will compel us to appeal to Theorem~\ref{th:annealedmeyers} twice in a row, and then to Theorem~\ref{th:annealed-lap}). By Cauchy-Schwarz' inequality, and the definition of $\aa$ and $\mu_\xi$, $$ \expecL{\Big(\int_{Q_L} \Big(\int_{B(x)}|\aa(\xi+\nabla\phi)||\nabla w_2|\Big)^2\mathrm{d} x\Big)^q}^\frac1q \,\lesssim \, \expecL{\Big(\int_{Q_L} \Big(\int_{B(x)} \mu_\xi\Big) \Big( \int_{B(x)}\mu_\xi |\nabla w_2|^2\Big)\mathrm{d} x\Big)^q}^\frac1q. $$ By duality and the bound $\mu_\xi\lesssim \r^{(d-\delta)\frac{p-2}{p}}$, we have \begin{equation*} {\expecL{\Big(\int_{Q_L} \Big(\int_{B(x)}|\aa(\xi+\nabla\phi)||\nabla w_2|\Big)^2\mathrm{d} x\Big)^q}^\frac1q} \,\lesssim\,\sup_X\expecL{\int_{Q_L} \r^{(d-\delta)\frac{p-2}{p}} \Big( \int_{B(x)}\mu_\xi |\nabla Xw_2|^2\Big) \mathrm{d} x}, \end{equation*} where the supremum runs over random variables $X$ (thus independent of the space variable) such that $\expec{|X|^{2q'}}=1$. We now introduce exponents: $\eta_2:=\frac{1}{q-1} \frac{\theta}{8(1+\theta)}$ and $\eta_1:=\frac{1}{(q-1)(1+\eta_2)^2(1+\theta)}$ which are chosen so that $\frac{q'}{(1+\eta_2)^2\eta_1}=q(1+\theta)$ and $\frac{q'}{(1+\eta_2)^3(1+\eta_1)}>1$. Let us quickly check the second property: $$ (1+\eta_2)^3(1+\eta_1)=(1+\eta_2)^3+\frac{1+\eta_2}{(q-1)(1+\theta)} \le 1+(7+\frac1{(q-1)(1+\theta)})\eta_2+\frac1{(q-1)(1+\theta)} <1+\frac{1}{q-1}=q'. 
$$ With these exponents at hands, we first use H\"older's inequality with exponents $(\frac{q'}{q'-(1+\eta_2)^3(1+\eta_1)},\frac{q'}{(1+\eta_2)^3(1+\eta_1)})$ together with the stationarity of $\r$, and obtain \begin{multline*} \expecL{\int_{Q_L} \r^{(d-\delta) \frac{p-2)}{p}} \Big( \int_{B(x)}\mu_\xi |\nabla Xw_2|^2\Big) \mathrm{d} x} \\ \lesssim\, \expecL{\r^{\frac{q'}{q'-(1+\eta_2)^3(1+\eta_1)}{(d-\delta)\frac{p-2}{p}}}}^\frac{q'-(1+\eta_2)^3(1+\eta_1)}{q'} \int_{Q_L} \expec{\Big( \fint_{B(x)}\mu_\xi |\nabla Xw_2|^2\Big)^{\frac{q'}{(1+\eta_2)^3(1+\eta_1)}}}^\frac{(1+\eta_2)^3(1+\eta_1)}{q'} \mathrm{d} x. \end{multline*} Provided $2q'\le 2+\kappa$, Theorem~\ref{th:annealedmeyers} applied to~\eqref{e.sens-ant-w2} yields \begin{multline*} \int_{Q_L} \expec{\Big( \fint_{B(x)}\mu_\xi |\nabla Xw_2|^2\Big)^{\frac{q'}{(1+\eta_2)^3(1+\eta_1)}}}^\frac{(1+\eta_2)^3(1+\eta_1)}{q'} \mathrm{d} x \\ \lesssim \, \zeta(\eta_2) \int_{Q_L} \expecL{\Big( \fint_{B(x)}\mu_\xi^{-1}|D^2a(\xi+\nabla\phi)|^2|e+\nabla\tilde \phi|^2(|\nabla Xw_1|^2+|\nabla Xv|^2)\Big)^{\frac{q'}{(1+\eta_2)^2(1+\eta_1)}}}^\frac{(1+\eta_2)^2(1+\eta_1)}{q'} \mathrm{d} x. \end{multline*} Since $\mu_\xi\ge 1$ and $|D^2a(\xi+\nabla\phi)|\le \mu_\xi$, this yields \begin{multline*} \int_{Q_L} \expec{\Big( \fint_{B(x)}\mu_\xi |\nabla Xw_2|^2\Big)^{\frac{q'}{(1+\eta_2)^3(1+\eta_1)}}}^\frac{(1+\eta_2)^3(1+\eta_1)}{q'} \mathrm{d} x \\ \lesssim \, \zeta(\eta_2) \int_{Q_L} \expecL{\sup_{B(x)}\{ |e+\nabla\tilde \phi|^2 \mu_\xi\}^{\frac{q'}{(1+\eta_2)^2(1+\eta_1)}}\Big( \fint_{B(x)}(|\nabla Xw_1|^2+|\nabla Xv|^2)\Big)^{\frac{q'}{(1+\eta_2)^2(1+\eta_1)}}}^\frac{(1+\eta_2)^2(1+\eta_1)}{q'} \mathrm{d} x. \end{multline*} We only treat the term involving $w_1$, which is the most subtle of the two. We then apply H\"older's inequality with exponents $(\frac{1+\eta_1}{\eta_1},1+\eta_1)$, and use the stationarity of $x\mapsto \sup_{B(x)}\{ |e+\nabla\tilde \phi|^2 \mu_\xi\}$ and the definition of $\eta_1$ and $\eta_2$ to the effect that \begin{multline*} \int_{Q_L} \expecL{\sup_{B(x)}\{ |e+\nabla\tilde \phi|^2 \mu_\xi\}^{\frac{q'}{(1+\eta_2)^2(1+\eta_1)}}\Big( \fint_{B(x)}(|\nabla Xw_1|^2+|\nabla Xv|^2)\Big)^{\frac{q'}{(1+\eta_2)^2(1+\eta_1)}}}^\frac{(1+\eta_2)^2(1+\eta_1)}{q'}\mathrm{d} x \\ \le \, \expecL{\sup_{B}\{ |e+\nabla\tilde \phi|^2 \mu_\xi\}^{q(1+\theta)}}^\frac{1}{q(1+\theta)} \int_{Q_L} \expecL{ \Big( \fint_{B(x)}|\nabla Xw_1|^2\Big)^{\frac{q'}{(1+\eta_2)^2}}}^\frac{(1+\eta_2)^2}{q'} \mathrm{d} x. \end{multline*} In view of equation~\eqref{e.sens-ant-w1}, one may appeal to Theorem~\ref{th:annealedmeyers}, and obtain $$ \int_{Q_L} \expecL{ \Big( \fint_{B(x)}|\nabla Xw_1|^2\Big)^{\frac{q'}{(1+\eta_2)^2}}}^\frac{(1+\eta_2)^2}{q'} \mathrm{d} x\,\lesssim \, \zeta(\eta_2) \int_{Q_L} \expecL{ \Big( \fint_{B(x)}\mu_\xi |\nabla X v|^2\Big)^{\frac{q'}{1+\eta_2}}}^\frac{1+\eta_2}{q'} \mathrm{d} x. 
$$ We finally bound $\mu_\xi$ using $\r$, use H\"older's inequality with exponents $(\frac{1+\eta_2}{\eta_2},1+\eta_2)$ and we apply Theorem~\ref{th:annealed-lap} to equation~\eqref{e.sens-ant-v} \begin{eqnarray*} \lefteqn{ \int_{Q_L} \expecL{ \Big( \fint_{B(x)}\mu_\xi |\nabla X v|^2\Big)^{\frac{q'}{1+\eta_2}}}^\frac{1+\eta_2}{q'} \mathrm{d} x} \\ &\le&\expecL{\r^{\frac{q'}{\eta_2}{(d-\delta)\frac{p-2}{p}}}}^\frac{\eta}{q'} \int_{Q_L} \expecL{ \Big( \fint_{B(x)} |\nabla X v|^2\Big)^{q'}}^\frac1{q'} \mathrm{d} x \\ &\lesssim &\expecL{\r^{\frac{q'}{\eta}{(d-\delta)\frac{p-2}{p}}}}^\frac{\eta_2}{q'} \expecL{|X'|^{2q'}} \int_{Q_L} |g|^2 = \expecL{\r^{\frac{q'}{\eta_2}{(d-\delta)\frac{p-2}{p}}}}^\frac{\eta_2}{q'}\int_{Q_L} |g|^2. \end{eqnarray*} As in Substep~3.1, the above estimates combine to \eqref{e.sens-ant-3.2} using Theorem~\ref{boundrNLprop} and our choice of $\eta_2$. \subsection{Control of level sets: Proof of Proposition~\ref{prop:average-per-L+}} As mentioned above, we do not buckle on moments of $\nabla \tilde \phi_{\xi,e}$ but rather on a minimal scale that controls the growth of $R\mapsto \fint_{B_R}|\nabla \tilde \phi_{\xi,e}|^2 \mu_\xi$ by the growth of $\fint_{B_{2R}} \mu_\xi$. \begin{definition}[Linear minimal scale]\label{defminimalscaleL} Let $\xi\in\mathbb{R}^d$, $L\geq 1$, $\vert e\vert=1$ and $C>0$. For all $x\in Q_L$, we define the linear minimal scale $\tilde{r}_{\star,\xi,e,L}(x,C)$ via \begin{equation} \tilde{r}_{\star,\xi,e,L}(x,C): = \inf_{r=2^N, N\in\mathbb{N}}\left\{\forall R\geq r\,:\,\fint_{B_R}|\nabla \tilde \phi_{\xi,e}|^2 \mu_\xi \le C \fint_{B_{2R}} \mu_\xi \right\}. \label{defr*L} \end{equation} \end{definition} As for the Meyers minimal radius, $\tilde{r}_{\star,\xi,e,L}(\cdot,C)$ is bounded by $L$ as soon as $C$ is large enough, due to periodicity and to the plain energy estimates for $\tilde \phi_{\xi,e}$ in form of $ \int_{Q_L} |\nabla \tilde \phi_{\xi,e}|^2 \mu_\xi \,\lesssim \, \int_{Q_L} \mu_\xi. $ In what follows we fix such a constant $C$, fix $e$ and $\xi$, and use the short-hand notation $\tilde{r}_{\star}$ for $\tilde{r}_{\star,\xi,e,L}(\cdot,C)$, $\r$ for $r_{\star,\xi,L}$, $\phi$ for $\phi_\xi$, and $\tilde \phi$ for $\tilde \phi_{\xi,e}$. The upcoming lemma uses local regularity and hole-filling to control $\sup_B |\nabla\tilde \phi +e|^2 \mu_\xi$ by $\tilde{r}_{\star}$ and $\r$. \begin{lemma}[Quenched bounds on the linearized correctors]\label{smallscalereg} For all $\xi \in \mathbb{R}^d$, there exist two exponents $0<\beta\le d$ (the linear hole-filling exponent of Lemma~\ref{ctrlavNL}) and $\gamma>0$ and a non-negative stationary random field $\chi$ (depending on $\r$, $\|A\|_{C^{0,\alpha}(\mathbb{R}^d)}$ and $\vert\xi\vert$) with the following properties: For all $x\in\mathbb{R}^d$ \begin{equation} \sup_{B(x)} |e+\nabla\tilde \phi|^2 \mu_\xi \le \chi(x) (\tilde{r}_{\star}(x))^{d-\beta}, \label{deterboundL} \end{equation} and all $q \ge 1$ \begin{equation} \mathbb E_L[\chi^q]^{\frac{1}{q}}\, \lesssim_{|\xi|}\, q^{\gamma}. \label{momentchireg} \end{equation} \end{lemma} \begin{proof} We split the proof into two steps. In the first step, we control the $C^{\alpha}$-norm of $a_\xi$ that we use in the second step to control the linearized corrector via classical Schauder theory for elliptic systems. 
W.l.o.g we may assume that $x=0$.\newline \newline \step1 Proof that \begin{equation} \|a_{\xi}\|_{C^{\alpha}(B)}\leq C \r^{(d-\delta)\frac{p-2}{p}}, \label{deterboundNL} \end{equation} for some constant $C>0$ depending on $d$, $p$, $\|A\|_{C^{0,\alpha}(\mathbb{R}^d)}$, and $|\xi|$, where $0<\delta\le d$ is the nonlinear hole-filling exponent of Lemma~\ref{ctrlavNL}. (We recall that $\|X\|_{C^\alpha(B)}=\|X\|_{L^\infty(B)}+\|X\|_{C^{0,\alpha}(B)}$.) On the one hand, by Lemma~\ref{regestiNL} applied to the equation \eqref{e.cor-eq} combined with the estimate \eqref{controlunitball}, we have \begin{equation} \|\xi+\nabla\phi\|_{C^{\alpha}(B)}\,\lesssim_{\|A\|_{C^{0,\alpha}(\mathbb{R}^d)}} \,\Big(\fint_{B_{2}}\vert \xi+\nabla\phi \vert^p \Big)^{\frac{1}{p}}\,\stackrel{\eqref{controlunitball}}{\leq}_{\|A\|_{C^{0,\alpha}(\mathbb{R}^d)}}(1+|\xi|) \r^{\frac{d-{\delta}}{p}}. \label{deterboundproof1+} \end{equation} On the other hand, recall that $a_{\xi}=AD\aa(\xi+\nabla\phi)$ with $\aa : \zeta\in\mathbb{R}^d\mapsto (1+\vert\zeta\vert^{p-2})\zeta$, and thus for all $\zeta\in\mathbb{R}^d$ \begin{equation} \vert D\aa(\zeta)\vert\lesssim 1+\vert\zeta\vert^{p-2} \text{ and } \vert D^2\aa(\zeta)\vert\lesssim 1+\vert\zeta\vert^{p-3}. \label{deterboundproof8} \end{equation} Therefore, by \eqref{deterboundproof1+} and \eqref{deterboundproof8}, \begin{align} \|D\aa(\xi+\nabla\phi)\|_{C^{\alpha}(B)}&\leq \|D\aa(\xi+\nabla\phi)\|_{L^\infty(B)}+\|D^2\aa(\xi+\nabla\phi)\|_{L^{\infty}(B)}\|\xi+\nabla\phi\|_{\text{C}^{0,\alpha}(B)}\nonumber\\ &\stackrel{\eqref{deterboundproof8}}{\lesssim} 1+ \|\xi+\nabla\phi\|^{p-2}_{C^\alpha(B )} \,\stackrel{\eqref{deterboundproof1}}{\lesssim}_{\|A\|_{C^{0,\alpha}(\mathbb{R}^d)} } \, (1+|\xi|)^{p-2} \r^{(d-{\delta})\frac{p-2}{p}}, \label{deterboundproof3} \end{align} from which the claim \eqref{deterboundNL} follows since $\|a_{\xi}\|_{C^{\alpha}(B)}\,\leq\, \|A\|_{C^\alpha(B)}\|D\aa(\xi+\nabla\phi)\|_{C^{\alpha}(B)}. $% \medskip \step2 Proof of \eqref{deterboundL}. \noindent We first argue that \begin{equation} \int_{B}\vert e+\nabla\tilde \phi \vert^2\mu_\xi \,\lesssim\, \tilde{r}_{\star}^{d-\beta} \r^{\frac{p-2}{p}(d-\delta)+\beta}.\label{deterboundproof5} \end{equation} If $\tilde{r}_{\star} < \r$, the claim follows from the defining property~\eqref{defr*L} in form of $$ \int_{B}\vert e+\nabla\tilde \phi \vert^2\mu_\xi \,\lesssim \, 2\tilde{r}_{\star}^{d}\fint_{B_{\tilde{r}_{\star}}}(1+|\nabla\tilde \phi|^2)\mu_\xi \, {\le} \,2(C+1) \tilde{r}_{\star}^{d} \fint_{B_{2\tilde{r}_{\star}}} \mu_\xi \, \lesssim \, \tilde{r}_{\star}^d \r^{(d-\delta)\frac{p-2}{p}} \,\lesssim \,\tilde{r}_{\star}^{d-\beta} \r^{(d-\delta)\frac{p-2}{p}+\beta}. $$ If $\tilde{r}_{\star} \geq \r$, we appeal to the hole filling estimate \eqref{Lholefillingesti}, to the defining property~\eqref{defr*L}, and use \eqref{defr*NL2} \&~\eqref{encadrementrNL}, to the effect that \begin{equation*} \int_{B}\vert e+\nabla\tilde \phi \vert^2\mu_\xi \,\lesssim \,\r^{d}\fint_{B_{\r}}\vert e+\nabla\tilde \phi\vert^2\mu_\xi \,\stackrel{\eqref{Lholefillingesti}}{\lesssim}\, \tilde{r}_{\star}^{d-\beta}\r^{\beta}\fint_{B_{\tilde{r}_{\star}}}\vert e+\nabla\tilde \phi\vert^2\mu \, \lesssim \, \tilde{r}_{\star}^{d-\beta}\r^{\beta } \fint_{B_{\tilde{r}_{\star}}} \mu_\xi \,\lesssim\, \tilde{r}_{\star}^{d-\beta} \r^{\beta }. \end{equation*} We now argue that \eqref{deterboundproof5} entails~\eqref{deterboundL}. 
By the Schauder estimate \cite[Theorem~5.19]{giaquinta2013introduction} applied to~\eqref{e.Lcorr} (for which the constant depends algebraically on the ellipticity ratio and the $C^{0,\alpha}$-seminorm of the coefficients, which we may encapsulate in the $C^{\alpha}$-norm since $\mu_\xi \ge 1$), and the bound~\eqref{deterboundNL} on the coefficient and \eqref{deterboundproof5}, there is some $\gamma>0$ (depending on $\alpha$ and $d$) such that \begin{equation*} \|e+\nabla\tilde \phi\|_{L^{\infty}(B)}\,\lesssim\,\|a_{\xi}\|^{\gamma}_{C^{\alpha}(B)}\Big(\fint_{B_2}\vert e+\nabla\tilde \phi\vert^2 \Big)^{\frac{1}{2}}\,\stackrel{\eqref{deterboundNL},\eqref{deterboundproof5}}{\lesssim} \, \r^{\gamma(d-{\delta})\frac{p-2}{p}}\tilde{r}_{\star}^{\frac{1}2 (d-\beta)}\r^{\frac12((d-\delta)\frac{p-2}{p} +\beta)}, \end{equation*} which yields~\eqref{deterboundL} for $\chi:=C \r^{2 (\gamma+1)(d-\delta)\frac{p-2}p+\beta}$ (for some constant $C >0$ depending on $d$, $p$, $\vert\xi\vert$ and $\|A\|_{C^{0,\alpha}(\mathbb{R}^d)}$). The claimed moment bounds on $\chi$ follow from Theorem~\ref{boundrNLprop} (for a suitable $\gamma>0$). \end{proof} The main result of this section is the following control of $\tilde{r}_{\star}$, which implies Proposition~\ref{prop:average-per-L+} in combination with Lemma~\ref{smallscalereg}. \begin{proposition}\label{boundrLprop} There exists an exponent $\gamma>0$ depending on $|\xi|$ such that for all $q\ge 1$, $ \mathbb E_L[\tilde{r}_{\star}^q]^{\frac{1}{q}}\, \lesssim\, q^{\gamma}. $ \end{proposition} \begin{proof} We split the proof into three steps. In the first step, we control the level set $\{\tilde{r}_{\star}=R\}$ for all dyadic $R$ using averages of the corrector gradient. In Step~2, we use Proposition~\ref{prop:average-per-L} to reformulate the right-hand side using moments of $\tilde{r}_{\star}$ itself, and then buckle in Step~3 by exploiting the gain of integrability provided by the hole-filling exponent $\beta>0$. \medskip \step1 Control of level sets of $\tilde{r}_{\star}$. \noindent We claim that there exists a constant $c>0$ (depending on $\xi$, $d$, $p$) such that for all dyadic $R \in [1,L]$, and all $0<\kappa,\varepsilon<1$ and $q\ge 1$ \begin{equation} \mathbb P_L[\tilde{r}_{\star}=R]\, \le \, c^qR^{-(d-\beta+ 2(1-\kappa)-\varepsilon)q}\expecL{ \mathcal{C}_{\star,R}^q \tilde{r}_{\star}^{(d-\beta)q}}+c^qR^{\varepsilon q}\expecL{ \mathcal{C}_{\star,R}^q\Big(\fint_{B_{R}}\big\vert\fint_{B_{R^{\kappa}}(x)}\nabla\tilde \phi \big\vert^2\mathrm{d} x\Big)^q} , \label{estimomentboundrL} \end{equation} where $\mathcal{C}_{\star,R}:= R^{-\varepsilon}\|\mu_\xi\|_{L^{\infty}(B_{4R})}^2$. By the defining property \eqref{defr*L} of $\tilde{r}_{\star}$ (with a constant $C$ to be chosen below), we have \begin{eqnarray} \fint_{B_{2R}}\vert\nabla\tilde \phi\vert^2\mu_\xi &\le & C \fint_{B_{4R}} \mu_\xi,\label{e.rL-ant1} \\ \fint_{B_{R/2}}\vert\nabla\tilde \phi\vert^2\mu_\xi &\ge & C \fint_{B_{R}} \mu_\xi.\label{e.rL-ant2} \end{eqnarray} By the Caccioppoli inequality of Lemma~\ref{cacciopounbounded}, \eqref{e.rL-ant2} yields $$ \inf_{c\in\mathbb{R}^d}\frac{1}{R^2}\fint_{B_{R}}\vert \tilde \phi-c\vert^2\mu_\xi+ \fint_{B_R} \mu_\xi \,\gtrsim\, C \fint_{B_{R}} \mu_\xi, $$ so that, provided $C$ is chosen large enough in \eqref{defr*L}, we have $$ \inf_{c\in\mathbb{R}^d}\frac{1}{R^2}\fint_{B_{R}}\vert \tilde \phi-c\vert^2\mu_\xi \,\gtrsim\, \fint_{B_R} \mu_\xi \,\gtrsim\, 1. $$ Set $c_R:=\fint_{B_{R}}\fint_{B_{R^{\kappa}}(x)}\tilde \phi(y)\mathrm{d} y\, \mathrm{d} x$. 
By the triangle inequality, Poincar\'e's inequality in $L^2(B_R)$, and the definition of $\mathcal{C}_{\star,R}$, the above turns into
\begin{eqnarray*}
1\, \lesssim \, \inf_{c\in\mathbb{R}^d}\frac{1}{R^2}\fint_{B_{R}}\vert \tilde \phi-c\vert^2\mu_\xi &\leq &\sqrt{\mathcal{C}_{\star,R}}R^{\frac\e2} \frac{1}{R^2}\fint_{B_{R}}\vert\tilde \phi -c_R\vert^2 \\
&\lesssim& \sqrt{\mathcal{C}_{\star,R}}R^{\frac\e2}\Big(\frac{1}{R^2}\fint_{B_{R}}\big\vert\tilde \phi(x)-\fint_{B_{R^{\kappa}}(x)}\tilde \phi\big \vert^2\mathrm{d} x+\frac{1}{R^2}\fint_{B_{R}}\big\vert\fint_{B_{R^{\kappa}}(x)}\tilde \phi-c_R\big\vert^2\mathrm{d} x\Big)\\
&\lesssim & \sqrt{\mathcal{C}_{\star,R}}R^{\frac\e2}\Big(R^{2(\kappa-1)}\fint_{B_{2R}}\vert\nabla\tilde \phi\vert^2+\fint_{B_{R}}\big\vert\fint_{B_{R^{\kappa}}(x)}\nabla\tilde \phi\big\vert^2\mathrm{d} x\Big)\\
&\stackrel{\eqref{e.rL-ant1}}{\lesssim}& \sqrt{\mathcal{C}_{\star,R}}R^{\frac\e2}\left(R^{2(\kappa-1)} \sqrt{\mathcal{C}_{\star,R}}R^{\frac\e2}+\fint_{B_{R}}\big\vert\fint_{B_{R^{\kappa}}(x)}\nabla\tilde \phi\big\vert^2\mathrm{d} x\right) \\
&\stackrel{\tilde{r}_{\star}=R}=& {\mathcal{C}_{\star,R}}R^{\varepsilon}\left(R^{2(\kappa-1)}R^{-d+\beta} \tilde{r}_{\star}^{d-\beta}+\fint_{B_{R}}\big\vert\fint_{B_{R^{\kappa}}(x)}\nabla\tilde \phi\big\vert^2\mathrm{d} x\right).
\end{eqnarray*}
The claim now follows from Markov's inequality.
\medskip

\step2 Control of the right-hand side of \eqref{estimomentboundrL}: For all $0<\varepsilon,\kappa,\theta < 1$, and all dyadic $R$ and exponents $q\ge 1$
\begin{equation}
\mathbb P_L[\tilde{r}_{\star}=R]\, \le \, c^qq^\gamma (R^{-(d-\beta+ 2(1-\kappa)-\varepsilon)q}+R^{-(d\kappa-\varepsilon)q}) \expecL{ \tilde{r}_{\star}^{(d-\beta) (1+\theta)^3 q}}^\frac1{(1+\theta)^3} ,
\label{estimomentboundrL+}
\end{equation}
for some constant $c>0$ depending on $|\xi|$, $p$, $d$, $\varepsilon$, $\kappa$, $\theta$, but not on $R$ and $q$. Since $\tilde{r}_{\star} \le L$, it suffices to establish the statement for dyadic $R \le L$. By Lemma~\ref{unifproba} and Theorem~\ref{boundrNLprop}, there exists $\gamma>0$ such that for all $q\ge \frac d\varepsilon$ and $R\ge 1$, we have
\begin{equation}\label{e.bdC*-ant}
\mathbb E_L[\mathcal{C}_{\star,R}^q]^\frac1q \lesssim q^\gamma,
\end{equation}
where the multiplicative constant does not depend on $R$ and $\varepsilon$. By H\"older's inequality with exponents $(\frac{1+\theta}{\theta},1+\theta)$, we then get for the first right-hand side term of \eqref{estimomentboundrL}
\begin{equation}\label{e.bdC*-ant2}
\expecL{ \mathcal{C}_{\star,R}^q \tilde{r}_{\star}^{(d-\beta)q}}^\frac1q \, \le\, \expecL{ \mathcal{C}_{\star,R}^{q\frac{1+\theta}{\theta}}}^\frac{\theta}{q(1+\theta)} \expecL{ \tilde{r}_{\star}^{(d-\beta)(1+\theta)q}}^\frac1{q(1+\theta)} \,\lesssim_\theta \, q^\gamma \expecL{ \tilde{r}_{\star}^{(d-\beta)(1+\theta)q}}^\frac1{q(1+\theta)}.
\end{equation}
We turn to the second right-hand side term of \eqref{estimomentboundrL}.
By H\"older's inequality with exponents $(\frac{1+\theta}{\theta},1+\theta)$, stationarity of $\nabla \tilde \phi$, and \eqref{e.bdC*-ant}, we first have \begin{eqnarray*} \expecL{ \mathcal{C}_{\star,R}^q\Big(\fint_{B_{R}}\big\vert\fint_{B_{R^{\kappa}}(x)}\nabla\tilde \phi\big\vert^2\mathrm{d} x\Big)^q}^\frac1q &\le& \expecL{ \mathcal{C}_{\star,R}^{q\frac{1+\theta}\theta}}^\frac{\theta}{q(1+\theta)} \expecL{\Big(\fint_{B_{R}}\big|\fint_{B_{R^{\kappa}}(x)}\nabla\tilde \phi\big|^2\mathrm{d} x\Big)^{q(1+\theta)}}^\frac1{q(1+\theta)} \\ &\lesssim _\theta &q^\gamma \expecL{\big|\fint_{B_{R^{\kappa}}}\nabla\tilde \phi\big|^{2q(1+\theta)}}^\frac1{q(1+\theta)}. \end{eqnarray*} Then, by Proposition~\ref{prop:average-per-L} applied to $g=|B_{R^\kappa}|^{-1}\mathds 1_{B_{R^\kappa}}$, followed by Lemma~\ref{smallscalereg}, by H\"older's inequality with exponent $(\frac{1+\theta}{\theta},1+\theta)$, and \eqref{momentchireg}, we have \begin{eqnarray*} \expecL{\big|\fint_{B_{R^{\kappa}}}\nabla\tilde \phi\big|^{2q(1+\theta)}}^\frac1{q(1+\theta)} &\lesssim_{|\xi|,\theta}& q^\gamma \expecL{\Big(\sup_B |\nabla\tilde\phi_{\xi,e}+e|^2 \mu_\xi\Big)^{q(1+\theta)^2}}^\frac1{q(1+\theta)^2} \Big(\int_{Q_L}|g|^2\Big) \\ &\lesssim & q^\gamma \expecL{\chi^{q(1+\theta)^2} \tilde{r}_{\star}^{(d-\beta) q(1+\theta)^2}}^\frac1{q(1+\theta)^2} R^{-d\kappa} \\ &\lesssim&q^\gamma R^{-d\kappa} \expecL{ \tilde{r}_{\star}^{(d-\beta) q(1+\theta)^3}}^\frac1{q(1+\theta)^3} \end{eqnarray*} (where we changed the value of $\gamma$ from one line to the other). Combined with \eqref{e.bdC*-ant2}, this entails \eqref{estimomentboundrL+} by redefining $\gamma$ once more. \medskip \step3 Buckling argument. \noindent Recall that all the quantities we consider are finite since $\tilde{r}_{\star} \le L$. We now express moments of $\tilde{r}_{\star}$ using its level sets and obtain by \eqref{estimomentboundrL+} for some $K>1$ to be fixed below and all $q\ge 1$ \begin{eqnarray*} \expecL{\tilde{r}_{\star}^{q(d-\frac{\beta}{K})}}&\le & 1+\sum_{n=1}^\infty 2^{nq(d-\frac{\beta}{K})} \mathbb P_L[\tilde{r}_{\star}=2^n] \\ &\stackrel{\eqref{estimomentboundrL+}}\le & 1+\sum_{n=1}^\infty 2^{nq(d-\frac{\beta}{K})} c^qq^\gamma (2^{-nq(d-\beta+ 2(1-\kappa)-\varepsilon)}+2^{-nq(d\kappa-\varepsilon)}) \expecL{ \tilde{r}_{\star}^{q(d-\beta)(1+\theta)^3}}^\frac1{(1+\theta)^3} \\ &\le& 1+\expecL{ \tilde{r}_{\star}^{q(d-\beta)(1+\theta)^3}}^\frac1{(1+\theta)^3}c^qq^\gamma \sum_{n=1}^\infty (2^{nq(-\frac\beta K+\beta-2(1-\kappa)+\varepsilon)}+2^{nq(d(1-\kappa)+\varepsilon-\frac\beta K)}). \end{eqnarray*} We now choose the exponents. We first fix $0\le \kappa < 1$ so that $d(1-\kappa)=\frac\beta 2$, and then set $\varepsilon:=\frac{\beta}{5d}$ and $\frac1K:=1-\frac1{5d}$, to the effect that $$ \frac12(2^{nq(-\frac\beta K+\beta-2(1-\kappa)+\varepsilon)}+2^{nq(d(1-\kappa)+\varepsilon-\frac\beta K)}) \,\le \,2^{-nq \frac \beta{5d}}. $$ With this choice, the series is summable and the above turns into \begin{eqnarray*} \expec{\tilde{r}_{\star}^{q(d-\frac{\beta}{K})}}&\le & 1+c^qq^\gamma\expecL{ \tilde{r}_{\star}^{q(d-\beta)(1+\theta)^3}}^\frac1{(1+\theta)^3} \end{eqnarray*} for some redefined constant $c$. We may then absorb part of the right-hand side into the left-hand side by Young's inequality upon choosing $0<\theta <1$ so small that $(d-\beta)(1+\theta)^3 < d-\frac{\beta}{K}$ (which is possible since $K>1$), and the claimed moment bound follows for some suitable choice of $\gamma>0$. 
\end{proof}
\section{Quantitative two-scale expansion: Proof of Theorem~\ref{th:2s}}
We assume $\delta \le 1$, and split the proof into four steps. In the first step, we show that the two-scale expansion error satisfies a nonlinear PDE in conservative form (crucially using the flux corrector). In the second step we give a bound for the $H^{-1}(\mathbb{R}^d)$-norm of the right-hand side, the moments of which we control in the third step. We then conclude in the fourth step by using the monotonicity of the heterogeneous operator $a_\varepsilon$. In the following, we use the short-hand notation $\xi_k:=(\nabla\bar{u})_{k,\delta}$.
\medskip

\step1 Equation for the two-scale expansion error:
\begin{equation}
-\nabla\cdot (a(\tfrac x\varepsilon,\nabla\bar{u}^{2s}_{\varepsilon,\delta}(x))-a(\tfrac x\varepsilon,\nabla u_{\varepsilon}(x)))=\nabla\cdot R_{\varepsilon,\delta}(x),
\label{2sc:Eq1}
\end{equation}
where
\begin{eqnarray*}
R_{\varepsilon,\delta}(x) &=&\Big(\sum_{k\in\delta\mathbb{Z}^d}\eta_k(x)(\bar{a}(\xi_k)-\bar{a}(\nabla\bar{u}(x)))\Big)- \Big(\sum_{k\in\delta\mathbb{Z}^d}\varepsilon\sigma_{\xi_k}(\tfrac x\varepsilon)\nabla\eta_k(x)\Big)\nonumber\\
&&+ \Big(\sum_{k\in\delta\mathbb{Z}^d}\eta_k(x)(a(\tfrac x\varepsilon,\nabla\bar{u}(x)+\nabla\phi_{\xi_k}(\tfrac x\varepsilon))-a(\tfrac x\varepsilon,\xi_k+\nabla\phi_{\xi_k}(\tfrac x\varepsilon)))\Big)\nonumber\\
&&+ \Big(a\Big(\tfrac x\varepsilon,\nabla\bar{u}(x)+\sum_{k\in\delta\mathbb{Z}^d}\nabla\phi_{\xi_k}(\tfrac x\varepsilon)\eta_k(x)\Big)-\sum_{k\in\delta\mathbb{Z}^d}\eta_k(x) a(\tfrac x\varepsilon,\nabla\bar{u}(x)+\nabla\phi_{\xi_k}(\tfrac x\varepsilon))\Big)\nonumber\\
&&+ \Big(a\Big(\tfrac x\varepsilon,\nabla\bar{u}(x)+\sum_{k\in\delta\mathbb{Z}^d}\nabla\phi_{\xi_k}(\tfrac x\varepsilon)\eta_k(x)+\varepsilon \phi_{\xi_k}(\tfrac x\varepsilon)\nabla\eta_k(x)\Big)-a\Big(\tfrac x\varepsilon,\nabla\bar{u}(x)+\sum_{k\in\delta\mathbb{Z}^d}\nabla\phi_{\xi_k}(\tfrac x\varepsilon)\eta_k(x)\Big)\Big).
\end{eqnarray*}
To start with, we expand $\nabla \bar{u}^{2s}_{\varepsilon,\delta}$ as
$$
\nabla\cdot a(\tfrac x\varepsilon,\nabla \bar{u}^{2s}_{\varepsilon,\delta})=\nabla\cdot a\Big(\tfrac x\varepsilon,\nabla\bar{u}(x)+ \sum_{k\in \delta\mathbb{Z}^d} \varepsilon \phi_{\xi_k}(\tfrac x\varepsilon)\nabla\eta_k(x)+\nabla\phi_{\xi_k}(\tfrac x\varepsilon)\eta_k(x)\Big),
$$
which we rewrite in the form of the telescopic sum (using that $\sum_{k\in\delta\mathbb{Z}^d}\eta_k\equiv 1$)
\begin{eqnarray*}
\lefteqn{\nabla\cdot a(\tfrac x\varepsilon,\nabla \bar{u}^{2s}_{\varepsilon,\delta}(x))-\nabla\cdot \bar{a}(\nabla\bar{u}(x))} \\
&=&\nabla\cdot \Big(\sum_{k\in\delta\mathbb{Z}^d}\eta_k(x)(\bar{a}(\xi_k)-\bar{a}(\nabla\bar{u}(x)))\Big)+\nabla\cdot\Big(\sum_{k\in\delta\mathbb{Z}^d}\eta_k(x)(a(\tfrac x\varepsilon,\xi_k+\nabla\phi_{\xi_k}(\tfrac x\varepsilon))-\bar{a}(\xi_k))\Big)\nonumber\\
&&+\nabla\cdot \Big(\sum_{k\in\delta\mathbb{Z}^d}\eta_k(x)(a(\tfrac x\varepsilon,\nabla\bar{u}(x)+\nabla\phi_{\xi_k}(\tfrac x\varepsilon))-a(\tfrac x\varepsilon,\xi_k+\nabla\phi_{\xi_k}(\tfrac x\varepsilon)))\Big)\nonumber\\
&&+\nabla\cdot\Big(a\Big(\tfrac x\varepsilon,\nabla\bar{u}(x)+\sum_{k\in\delta\mathbb{Z}^d}\nabla\phi_{\xi_k}(\tfrac x\varepsilon)\eta_k(x)\Big)-\sum_{k\in\delta\mathbb{Z}^d}\eta_k(x) a(\tfrac x\varepsilon,\nabla\bar{u}(x)+\nabla\phi_{\xi_k}(\tfrac x\varepsilon))\Big)\nonumber\\
&&+\nabla\cdot\Big(a\Big(\tfrac x\varepsilon,\nabla\bar{u}(x)+\sum_{k\in\delta\mathbb{Z}^d}\nabla\phi_{\xi_k}(\tfrac x\varepsilon)\eta_k(x)+\varepsilon \phi_{\xi_k}(\tfrac x\varepsilon)\nabla\eta_k(x)\Big)-a\Big(\tfrac x\varepsilon,\nabla\bar{u}(x)+\sum_{k\in\delta\mathbb{Z}^d}\nabla\phi_{\xi_k}(\tfrac x\varepsilon)\eta_k(x)\Big)\Big).
\end{eqnarray*}
First, using \eqref{e.eps-eq} and \eqref{e.hom-eq} we may replace $-\nabla\cdot \bar{a}(\nabla\bar{u}(x))$ by $-\nabla\cdot a(\tfrac x\varepsilon,\nabla u_\varepsilon(x))$ on the left-hand side. On the right-hand side, all the terms converge strongly to zero in $H^{-1}(\mathbb{R}^d)$ (and are present in the definition of $R_{\varepsilon,\delta}$) except the second term, which we need to reformulate. More precisely, using the flux corrector $\sigma$ (see Definition \ref{defsigmaNL}) in form of the property \eqref{e.div-sig}, we have for all $k\in\delta\mathbb{Z}^d$ (implicitly summing over the repeated indices $i,j$)
\begin{align}
\nabla\cdot \eta_k(x)(a(\tfrac x\varepsilon,\xi_k+\nabla\phi_{\xi_k}(\tfrac x\varepsilon))-\bar{a}(\xi_k))&=\nabla\cdot(\eta_k(x)\nabla\cdot \sigma_{\xi_k}(\tfrac x\varepsilon))=\partial_j(\eta_k(x)\partial_{i}\sigma_{\xi_k,ji})\nonumber\\
&=\varepsilon\partial_i(\partial_j\eta_k(x)\sigma_{\xi_k,ji}(\tfrac x\varepsilon))-\partial_i\partial_j(\eta_k(x))\sigma_{\xi_k,ji}(\tfrac x\varepsilon),\label{2sc:Eq4}
\end{align}
where the last term vanishes thanks to the skew-symmetry of $(\sigma_{\xi_k,ji})_{j,i}$ and the symmetry of $(\partial_i\partial_j\eta_k)_{j,i}$. By the skew-symmetry of $\sigma_\xi$, one has $\varepsilon\partial_i(\partial_j\eta_k(x)\sigma_{\xi_k,ji}(\tfrac x\varepsilon))=-\varepsilon\nabla\cdot(\sigma_{\xi_k}(\tfrac x\varepsilon)\nabla\eta_k(x))$, and we thus deduce
$$
\nabla\cdot \sum_{k\in\delta\mathbb{Z}^d}\eta_k(x)(a(\tfrac x\varepsilon,\xi_k+\nabla\phi_{\xi_k}(\tfrac x\varepsilon))-\bar{a}(\xi_k))=-\varepsilon\nabla\cdot\Big(\sum_{k\in\delta\mathbb{Z}^d}\sigma_{\xi_k}(\tfrac x\varepsilon)\nabla\eta_k(x)\Big).
$$
This yields \eqref{2sc:Eq1}.
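For completeness, the cancellation of the last term in \eqref{2sc:Eq4} can be spelled out as the standard index-contraction argument: relabeling the summed indices and using the skew-symmetry of $\sigma_{\xi_k}$,
$$
\partial_i\partial_j(\eta_k)\,\sigma_{\xi_k,ji}\,=\,\partial_j\partial_i(\eta_k)\,\sigma_{\xi_k,ij}\,=\,-\,\partial_i\partial_j(\eta_k)\,\sigma_{\xi_k,ji},
$$
where the first equality exchanges the names of the indices $i$ and $j$, and the second uses $\sigma_{\xi_k,ij}=-\sigma_{\xi_k,ji}$ together with the symmetry of second derivatives; the contraction is therefore equal to its own negative and vanishes.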
\medskip

\step2 Control by continuity of the operators: The remainder $R_{\varepsilon,\delta}$ satisfies
\begin{eqnarray}
\lefteqn{\int_{\mathbb{R}^d} |R_{\varepsilon,\delta}|^2 \,\lesssim\,\sum_{k} \int_{\mathbb{R}^d} \eta_k |\xi_k-\nabla\bar{u}|^2(1+|\xi_k|+|\nabla \bar u|+|\nabla\phi_{\xi_k}(\tfrac \cdot\varepsilon)|)^{2(p-2)}} \nonumber\\
&&+ \int_{\mathbb{R}^d} \sum_{k} \eta_{k} \Big|\sum_{k'}\varepsilon (\phi_{\xi_{k'}}-\phi_{\xi_k},\sigma_{\xi_{k'}}-\sigma_{\xi_k})(\tfrac \cdot\varepsilon)\nabla\eta_{k'}\Big|^2 \Big(1+|\nabla \bar u|+\Big|\sum_{k''}\nabla\phi_{\xi_{k''}}(\tfrac \cdot\varepsilon)\eta_{k''}\Big|\Big)^{2(p-2)} \nonumber\\
&&+\int_{\mathbb{R}^d} \sum_{k} \eta_{k}\Big|\sum_{k'}\varepsilon (\phi_{\xi_{k'}}-\phi_{\xi_k})(\tfrac \cdot\varepsilon)\nabla\eta_{k'}\Big|^{2(p-1)} +\sum_{k } \int_{\mathbb{R}^d} \eta_k \Big|\sum_{k' }\nabla (\phi_{\xi_k}-\phi_{\xi_{k'}})(\tfrac \cdot \varepsilon) \eta_{k'} \Big|^2 (1+|\nabla \bar u|)^{2(p-2)} \nonumber \\
&&+\sum_{k\in\delta\mathbb{Z}^d} \int_{\mathbb{R}^d} \eta_k \Big| \sum_{k'\in\delta\mathbb{Z}^d}\nabla ( \phi_{\xi_k}-\phi_{\xi_{k'}})(\tfrac \cdot \varepsilon) \eta_{k'} \Big|^{2(p-1)}.\label{2sc:Eq2}
\end{eqnarray}
This estimate directly follows from the definition of $R_{\varepsilon,\delta}$ together with the continuity of the operator in form of
$$
|\tilde a (\xi_1)-\tilde a(\xi_2)|\lesssim |\xi_1-\xi_2|(1+|\xi_1|+|\xi_1-\xi_2|)^{p-2}
$$
for $\tilde a=a_\varepsilon$ and $\tilde a = \bar a$, and with the observation that $\sum_{k'} \nabla \eta_{k'}=0$ so that for all maps $(\zeta_k)_k$ one has
$$
\sum_{k'}\zeta_{k'} \nabla\eta_{k'}\,=\,\sum_{k'}(\zeta_{k'}-\zeta_{k})\nabla\eta_{k'},
$$
which we applied to $\zeta_k=\varepsilon (\phi_{\xi_k},\sigma_{\xi_k}) (\tfrac \cdot \varepsilon)$.
\medskip

\step3 Control of moments of $\int_{\mathbb{R}^d} |R_{\varepsilon,\delta}|^2$: For all $q\ge 1$,
\begin{equation}\label{e.momentbd-remain}
\expec{\Big(\int_{\mathbb{R}^d} |R_{\varepsilon,\delta}|^2\Big)^\frac q2}^\frac1q \,\leq C \, q^\gamma (\varepsilon+\delta)\mu_d(\tfrac1\varepsilon)\|\mu_d\nabla^2\bar{u}\|_{L^2(\mathbb{R}^d)},
\end{equation}
for some constant $C$ and an exponent $\gamma>0$ depending on $\|\nabla\overline{u}\|_{L^{\infty}(\mathbb{R}^d)}$. We treat the second right-hand side term of \eqref{2sc:Eq2} (which we denote by $\tilde{R}_{\varepsilon,\delta}$) -- the other terms are easier and can be treated similarly. Since for all $k'$, $\vert\nabla\eta_{k'}\vert\lesssim \delta^{-1}\mathds{1}_{Q_{\delta}(k')}$, we have
\begin{align*}
\sum_{k} \eta_{k} \Big|\sum_{k'}\varepsilon (\phi_{\xi_{k'}}-\phi_{\xi_k},\sigma_{\xi_{k'}}-\sigma_{\xi_k})(\tfrac \cdot\varepsilon)\nabla\eta_{k'}\Big|^2\lesssim (\tfrac{\varepsilon}{\delta})^2\sum_{k}\eta_k\sum_{k'}\mathds{1}_{Q_{\delta}(k')} | (\phi_{\xi_{k'}}-\phi_{\xi_k},\sigma_{\xi_{k'}}-\sigma_{\xi_k})(\tfrac \cdot\varepsilon)|^2.
\end{align*}
Inserting this estimate in $\tilde{R}_{\varepsilon,\delta}$, and using the assumption $\nabla \bar {u}\in L^{\infty}(\mathbb{R}^d)$, we obtain for all $q\ge 1$ by the Cauchy-Schwarz inequality followed by Minkowski's inequality in probability, the support condition $Q_{\delta}(k)\cap Q_{\delta}(k')\neq \emptyset \Rightarrow \vert k-k'\vert<2\delta$, and the stationarity of $\nabla \phi_{\xi_{k''}}$,
\begin{multline*}
\mathbb{E}\Big[\Big(\int_{\mathbb{R}^d}\vert\tilde{R}_{\varepsilon,\delta}\vert^2\Big)^{\frac{q}{2}}\Big]^{\frac{1}{q}} \\
\,\lesssim \, \frac{\varepsilon}{\delta}\Big(\sum_k\sum_{k'\in Q_{2\delta}(k)}\int_{Q_{\delta}(k)}\mathbb{E}[|(\phi_{\xi_{k'}}-\phi_{\xi_k},\sigma_{\xi_{k'}}-\sigma_{\xi_k})(\tfrac \cdot\varepsilon)|^{2q}]^{\frac{1}{q}}\Big(1+\|\nabla\bar {u}\|^{2(p-2)}_{L^{\infty}(\mathbb{R}^d)}+ \sum_{k''\in Q_{2\delta}(k)}\mathbb{E}[\vert\nabla\phi_{\xi_{k''}}\vert^{2q(p-2)}]^{\frac{1}{q}}\Big)\Big)^{\frac{1}{2}}.
\end{multline*}
By Theorem~\ref{th:corrNL} and Corollary~\ref{coro:corr-diff}, and using that $\mu_d$ satisfies $\mu_d(t_1t_2)\lesssim \mu_d(t_1)\mu_d(t_2)$ and $\sup_{Q_{4\delta}(k)} \mu_d \lesssim \inf_{Q_{4\delta}(k)} \mu_d$, this turns into
\begin{equation}\label{2sc:Eq14}
\mathbb{E}\Big[\Big(\int_{\mathbb{R}^d}\vert\tilde{R}_{\varepsilon,\delta}\vert^2\Big)^{\frac{q}{2}}\Big]^{\frac{1}{q}}\leq Cq^{\gamma}(\tfrac{\varepsilon}{\delta})\mu_d(\tfrac1\varepsilon)\Big(\sum_k(\inf_{Q_{4\delta}(k)} \mu_d)\sum_{k'\in Q_{2\delta}(k)}\vert\xi_{k'}-\xi_k\vert^2\vert Q_{\delta}\vert\Big)^{\frac{1}{2}},
\end{equation}
for some constant $C$ and an exponent $\gamma>0$ depending on $\|\nabla\overline{u}\|_{L^{\infty}(\mathbb{R}^d)}$. It remains to reformulate the right-hand side sum. By Poincar\'e's inequality on $Q_{4\delta }(k)$, we have
\begin{equation*}
\sum_{k'\in Q_{2\delta}(k)}\vert\xi_k-\xi_{{k'}}\vert^2\vert Q_{\delta}\vert\lesssim \delta^2\int_{Q_{4\delta}(k)}\vert\nabla^2 \bar{u}\vert^2,
\end{equation*}
so that~\eqref{e.momentbd-remain} follows from~\eqref{2sc:Eq14}.
\medskip

\step4 Conclusion by monotonicity.

\noindent We test \eqref{2sc:Eq1} with $u_\varepsilon-\bar{u}^{2s}_{\varepsilon,\delta}$, and deduce by monotonicity of $a_\varepsilon$ that
$$
\int_{\mathbb{R}^d} |\nabla (u_\varepsilon-\bar{u}^{2s}_{\varepsilon,\delta})|^2+ |\nabla (u_\varepsilon-\bar{u}^{2s}_{\varepsilon,\delta})|^p \, \lesssim \, \int_{\mathbb{R}^d} R_{\varepsilon,\delta} \cdot \nabla (u_\varepsilon-\bar{u}^{2s}_{\varepsilon,\delta}).
$$
By Young's inequality, we may absorb part of the right-hand side into the left-hand side, and obtain after taking the $q$-th moment of this inequality
$$
\expec{\Big(\int_{\mathbb{R}^d} |\nabla (u_\varepsilon-\bar{u}^{2s}_{\varepsilon,\delta})|^2+ |\nabla (u_\varepsilon-\bar{u}^{2s}_{\varepsilon,\delta})|^p \Big)^q}^\frac1q\, \lesssim \, \expec{\Big(\int_{\mathbb{R}^d} |R_{\varepsilon,\delta}|^2\Big)^q}^\frac1q.
$$
This entails the claim in combination with \eqref{e.momentbd-remain} and the choice $\delta=\varepsilon$.
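For the reader's convenience, the last combination can be made explicit: writing $Y:=\int_{\mathbb{R}^d} |R_{\varepsilon,\delta}|^2$ and applying \eqref{e.momentbd-remain} with $q$ replaced by $2q$, we obtain
$$
\expec{Y^q}^\frac1q\,=\,\Big(\expec{Y^{\frac{2q}2}}^\frac1{2q}\Big)^2\,\lesssim\, q^{2\gamma}(\varepsilon+\delta)^2\mu_d(\tfrac1\varepsilon)^2\|\mu_d\nabla^2\bar{u}\|^2_{L^2(\mathbb{R}^d)},
$$
so that, for the choice $\delta=\varepsilon$, the $q$-th moment of the two-scale expansion error energy is of order $q^{2\gamma}\varepsilon^2\mu_d(\tfrac1\varepsilon)^2\|\mu_d\nabla^2\bar{u}\|^2_{L^2(\mathbb{R}^d)}$.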
\section{Introduction}
\IEEEPARstart{S}{ampling} theory lies at the heart of all modern digital processing systems. The original sampling problem entails identifying a continuous function on Euclidean space from discrete data samples. It is addressed by the classical sampling theorem, commonly and variously attributed to Cauchy \cite{cauchy1841memoire}, de La Vallée Poussin \cite{poussin1908sur}, Whittaker \cite{whittaker1915functions}, Ogura \cite{ogura1920certain}, Kotel\'{n}ikov \cite{kotelnikov1933on}, Raabe \cite{raabe1939untersuchungen}, Shannon \cite{shannon1949communication}, and/or Someya \cite{someya1949waveform}. A seminal result in this context, referred to as the Whittaker-Kotel\'{n}ikov-Shannon (or, simply, Shannon's) theorem, states that it is possible to fully recover a bandlimited function from values measured on a regular sampling grid as long as the bandlimitation is an interval with length not exceeding the density of the sampling grid. Restating this in signal processing terms, a lowpass bandlimited signal can be perfectly reconstructed from its discrete samples taken uniformly at a sampling frequency that is at least the Nyquist rate, i.e., twice the signal bandwidth. During the past few decades, several variants and extensions of this result have solidified the extensive role of sampling theory in science and engineering \cite{jerri1977shannon,hogan2012duration,pfander2015sampling}. Shannon's theorem assumes the existence of samples that are of infinite precision and infinite dynamic range (DR). In practice, however, sampling is realized by the quantization of the signals through analog-to-digital converters (ADCs) that clip or saturate whenever the signal amplitude exceeds the maximum recordable ADC voltage, leading to a significant information loss. The effects of finite-precision quantization are characterized in the form of rate-distortion theory \cite{kailath1967application,berger2003rate}. However, investigations into finite DR or clipping effects are relatively recent \cite{olofsson2005deconvolution,adler2011audio,abel1991restoring,ting2013mitigation}. Substantial work has been done and is still ongoing to overcome this problem, and the literature is too large to summarize here; see, e.g., \cite{bhandari2020unlimited} and the references therein for comparisons of various techniques. Overall, these approaches require declipping \cite{esqueda2016aliasing}, multiple ADCs \cite{gregers2001stacked}, and scaling techniques \cite{prasanna2021application}, which are expensive and cumbersome. Recently, some studies \cite{bhandari2020unlimited,bhandari2017unlimited,rudresh2018wavelet} have proposed the \emph{unlimited sampling} architecture to fully overcome this limitation by employing modular arithmetic. To perfectly reconstruct the signal of interest from modulo samples (up to an unknown constant), the unlimited sampling theory suggests that the sampling rate be slightly higher than the Nyquist rate and that an estimate of the norm of the bandlimited signal be known. Conventional multi-bit ADCs require a very large number of quantization levels to represent the original continuous signal in high-resolution settings. Sampling at high data rates with high-resolution ADCs, however, would dramatically increase the overall power consumption and the manufacturing cost of such ADCs \cite{ameri2018one}. This problem is exacerbated in systems that require multiple ADCs such as large array receivers \cite{ho2019antithetic}. An immediate solution to such challenges is to use fewer bits for sampling.
Therefore, in recent years, the design of receivers with low-complexity \textit{one-bit ADCs} has been emphasized to meet the requirements of both wide signal bandwidth and low cost/power. \emph{One-bit quantization} is an extreme quantization scenario, in which the ADCs merely compare the signals with given threshold levels, producing sign ($\pm1$) outputs. This enables the signal processing equipment to sample at a very high rate yet with considerably lower cost and energy consumption than conventional ADCs \cite{instrumentsanalog,mezghani2018blind,eamaz2021modified,ameri2018one,sedighi2020one}. In the classical problem of one-bit sampling, the signal is reconstructed by comparing it with a fixed, usually zero, threshold. This leads to difficulties in estimating signal parameters. In particular, when a zero threshold is used, the power information of the input signal $\mathbf{x}$ is lost in the one-bit data because the signs of $\mathbf{x}$ and $\eta\mathbf{x}$ are identical for $\eta>0$. This problem has been addressed in a few recent works \cite{eamaz2021modified,qian2017admm,gianelli2016one,eamaz2022phase,eamaz2022covariance,wang2017angular,xi2020gridless}, which show that time-varying sampling thresholds enable better estimation of the signal characteristics. In particular, time-varying thresholds were considered for the covariance recovery from one-bit measurements in \cite{eamaz2021modified}. This was extended in \cite{eamaz2022covariance} for a significantly improved estimation of signal autocorrelation via the \emph{modified arcsine law}. In non-stationary scenarios, \cite{eamaz2022covariance} applied the modified arcsine law to utilize time-varying sampling thresholds. Applications of one-bit sampling to diverse problems such as sparse parameter estimation \cite{gianelli2016one}, localization \cite{sedighi2021localization}, and phase retrieval \cite{eamaz2022phase} have also appeared in the contemporary literature. Evidently, one-bit and unlimited sampling frameworks address complementary requirements. A one-bit ADC only compares an input signal with a given threshold. Therefore, essentially, one-bit sampling is indifferent to DR because, apart from the comparison bit, other information such as the distance between the signal value and the threshold is not stored. On the other hand, the self-reset ADC in unlimited sampling provides a natural approach to producing judicious time-varying thresholds for one-bit ADCs.
In this paper, to harness the advantages of both methods, we propose \emph{un}limited \emph{o}ne-bit (UNO) sampling to design sampling thresholds which are highly informative about the signal of interest.
\subsection{Prior Art}
\label{subsec:prior}
Unlimited sampling of continuous-time signals that are sparse in the Fourier domain was discussed in \cite{bhandari2018unlimited}. Extensions to graph signals \cite{ji2019folded}, multi-channel arrays \cite{fernandez2021doa}, and sparse outliers (noise) \cite{bhandari2022unlimited} have also been proposed. Reconstruction algorithms have included wavelet-based \cite{rudresh2018wavelet}, generalized approximate message passing \cite{musa2018generalized}, and local average \cite{florescu2022unlimited} techniques. Very recently, non-idealities in hardware prototyping were considered in \cite{bhandari2021unlimited,beckmann2020hdr}; a computational sampling strategy in the form of \textit{unlimited sampling with hysteresis} \cite{florescu2021unlimited} was found to be more flexible for circuit design specifications. To reconstruct the full-precision signal from one-bit sampled data, conventional approaches \cite{khobahi2018signal,zymnis2009compressed} include maximum likelihood estimation (MLE) and weighted least squares. However, these methods have high computational cost, especially for high-dimensional input signals. To this end, we propose using the randomized Kaczmarz algorithm (RKA) \cite{strohmer2009randomized,leventhal2010randomized}, which is an iterative algorithm for solving the systems of linear inequalities that arise naturally in the one-bit quantization framework. While the deterministic version \cite{kaczmarz1937angenaherte} of the Kaczmarz method usually selects the linear equations sequentially, the RKA selects them at random in each iteration, leading to faster convergence. The RKA is simple to implement and performs comparably with the state-of-the-art optimization methods. Among prior studies involving both one-bit and unlimited frameworks, state-of-the-art results in \cite{graf2019one} proposed \textit{one-bit $\Sigma\Delta$ quantization via unlimited sampling}, whose objective is to shrink the \textit{DR gap between the input signal and its one-bit samples}. This study developed a guaranteed reconstruction as long as the DR of the input signal is less than the DR of the one-bit data (i.e., 1). However, when the ratio of the input signal amplitude to the ADC threshold is large, the imperfect noise shaping in sigma-delta conversion degrades this reconstruction. Contrary to this work, our proposed UNO technique focuses on a different problem, i.e., shrinking the \textit{DR gap between the input signal and the time-varying sampling thresholds}. One-bit sampling is typically performed at significantly high rates. As a result, the observation inequalities form an overdetermined system. When the difference between the DR of the input signal and that of the thresholds increases, the reconstruction degrades significantly. We show that jointly exploiting both unlimited and one-bit sampling techniques provides a more efficient solution by a considerable reduction of the aforementioned gap. In practice, errors arising from quantization noise degrade the reconstruction quality in the unlimited sampling framework. In this context, \cite{bhandari2020unlimited} derived reconstruction guarantees by including this error as bounded additive noise to the modulo samples.
Contrary to this approach, we consider the more realistic case of additive noise to the input signal. We show that our RKA-based reconstruction is also effective for noisy one-bit sampled signals because it is independent of the statistical properties of the modulo samples.
\subsection{Our Contributions}
\label{subsec:contrib}
Our main contributions in this paper are:\\
\textbf{1) Combined unlimited and one-bit sampling framework.} In the proposed UNO framework, we leverage the benefits of both one-bit and unlimited sampling techniques. The result is a sampling approach that yields unlimited DR and a low-cost, low-power receiver while retaining a high sampling rate. We design time-varying sampling thresholds for one-bit quantization, whose DR is closer to that of the original signal. This aids in accurately capturing the distance information between the signal values and the thresholds for use in the signal reconstruction task. We show that compared to the one-bit reconstruction with random thresholds \cite{ameri2018one}, our proposed UNO sampling based on time-varying thresholds performs better, especially for high-DR signals. \\
\textbf{2) RKA-based reconstruction.} The signal reconstruction from one-bit measurements requires solving an overdetermined linear feasibility problem that we recast as a one-bit polyhedron and efficiently solve via the RKA. By generating an abundant number of one-bit samples, we show that the singular values of the one-bit data matrix that creates the one-bit polyhedron are all equal to the square root of the number of time-varying threshold sequences employed in one-bit sampling. Further, we numerically investigate the effects of the ADC threshold and signal amplitude in the RKA-based UNO reconstruction. \\
\textbf{3) Performance guarantees.} Our theoretical analyses show that a proper selection of a sufficient number of samples further enhances the reconstruction performance of the UNO. We prove that the convergence rate of the RKA when applied to the one-bit polyhedron depends on the size of the input signal and the total number of RKA iterations. In this context, we also obtain a lower bound on the number of required iterations for perfect reconstruction. \\
\textbf{4) Reconstruction in the presence of additive noise.} When the input signal is contaminated with additive noise, we apply the recently introduced plug-and-play (PnP) priors \cite{venkatakrishnan2013plug} to the alternating direction method of multipliers (ADMM) as an additional reconstruction algorithm step. In image denoising problems, the PnP-ADMM is used to replace the shrinkage step of the standard ADMM algorithm with any off-the-shelf denoising algorithm to ensure the noise variance is sufficiently suppressed. Although PnP-ADMM appears \textit{ad hoc}, it yields a better performance than state-of-the-art methods in several different inverse problems \cite{venkatakrishnan2013plug,chan2016plug}. For the noisy UNO, we deploy this algorithm to reconstruct the original signal from overdetermined and underdetermined noisy systems. Moreover, we show that the additive noise to the input signal contaminates the modulo samples with noise that is expressed in terms of the input noise.
\subsection{Organization and Notations}
In the next section, we provide an introduction to one-bit quantization with time-varying sampling thresholds. Particularly, the one-bit sampled signal reconstruction problem is formulated as an overdetermined system of linear inequalities.
In Section~\ref{sec:unlimited}, we recall the concept of unlimited sampling as proposed in \cite{bhandari2017unlimited,bhandari2020unlimited}. We introduce the RKA in the context of signal reconstruction in Section~\ref{sec:onebit_rec}. This is a prelude to Section~\ref{sec:uno}, which proposes UNO sampling to design judicious thresholds and guarantee the one-bit signal reconstruction in the high-DR regime. In Section~\ref{numerical_unlim}, we provide several numerical experiments to illustrate UNO-based sampling and analyze the reconstruction error. We consider the noisy measurement scenario in Section~\ref{sec:noise} and conclude in Section~\ref{sec:summ}. Throughout this paper, we use boldface lowercase, boldface uppercase, and calligraphic letters for vectors, matrices, and sets, respectively. The notations $\mathbb{C}$, $\mathbb{R}$, and $\mathbb{Z}$ represent the sets of complex, real, and integer numbers, respectively. We represent a vector $\mathbf{x}$ in terms of its elements $\{x_{i}\}$ or $\left(\mathbf{x}\right)_{i}$ as $\mathbf{x}=[x_{i}]$. We use $(\cdot)^{\top}$ and $(\cdot)^{\mathrm{H}}$ to denote the vector/matrix transpose and the Hermitian transpose, respectively. The identity matrix of size $N$ is $\mathbf{I}_{N}\in \mathbb{R}^{N\times N}$. The Frobenius norm of a matrix $\mathbf{B}\in \mathbb{C}^{M\times N}$ is defined as $\|\mathbf{B}\|_{\mathrm{F}}=\sqrt{\sum^{M}_{r=1}\sum^{N}_{s=1}\left|b_{rs}\right|^{2}}$, where $b_{rs}$ is the $(r,s)$-th entry of $\mathbf{B}$. The function $\text{diag}(\cdot)$ outputs a diagonal matrix with the input vector along its main diagonal. The $\ell_{p}$-norm of a vector $\mathbf{b}$ is $\|\mathbf{b}\|_{p}=\left(\sum_{i}\left|b_{i}\right|^{p}\right)^{1/p}$. The infinity or max-norm of a function $x$ is $\|x\|_{\infty}=\operatorname{inf}\left\{c_{0}\geq 0: |x(t)|\leq c_{0} \text{ for all } t\right\}$, where $\textrm{inf}(\cdot)$ denotes the infimum of its argument; for vectors, we have $\|\mathbf{x}\|_{\infty}=\max_{k}|x_{k}|$. For a vector $\mathbf{x}$, $\Delta\mathbf{x}=x_{k+1}-x_{k}$ denotes the finite difference, and recursively applying the same yields the $N$-th order difference, $\Delta^{N}\mathbf{x}$. We denote the $\Omega$-bandlimited Paley-Wiener subspace of the square-integrable function space $L^{2}$ by $\textrm{PW}_{\Omega}$ such that $\textrm{PW}_{\Omega}=\{f: f, \widehat{f} \in L^2,\;\operatorname{supp}(\widehat{f}) \subset[-\Omega, \Omega]\}$, where $\widehat{f}$ is the Fourier transform of $f$. The Hadamard (element-wise) product of two matrices $\mathbf{B}_{1}$ and $\mathbf{B}_{2}$ is $\mathbf{B}_{1}\odot \mathbf{B}_{2}$. The column-wise vectorized form of a matrix $\mathbf{B}$ is $\operatorname{vec}(\mathbf{B})$. Given a scalar $x$, we define the operator $(x)^{+}$ as $\max\left\{x,0\right\}$. For an event $\mathcal{E}$, $\mathbb{1}_{(\mathcal{E})}$ is the indicator function for that event, meaning that $\mathbb{1}_{(\mathcal{E})}$ is $1$ if $\mathcal{E}$ occurs; otherwise, it is zero. The function $\operatorname{sgn}(\cdot)$ yields the sign of its argument. In the context of numerical computations, $\lfloor\cdot\rfloor$ and $\lceil\cdot\rceil$ denote the floor and ceiling functions, respectively. The function $\log(\cdot)$ denotes the natural logarithm, unless its base is otherwise stated. The notation $x \sim \mathcal{U}(a,b)$ means a random variable drawn from the uniform distribution over the interval $[a,b]$, and $x \sim \mathcal{N}(\mu,\sigma^2)$ represents the normal distribution with mean $\mu$ and variance $\sigma^2$.
The operator $\operatorname{mod}(a,b)$ between two values $a$ and $b$ returns the remainder of the division operation $a/b$.
\section{One-Bit Sampling: Overdetermined Linear System Formulation}
\label{sec:onebit}
Several approaches have been proposed in the literature to reconstruct the signal of interest from one-bit samples, with most of them formulating this task as an optimization problem. For example, the covariance matrix formulation of \cite{ameri2018one} employs the cyclic optimization method to recover the input autocorrelation elements. A convex program based on the Gauss-Legendre integration to recover the input covariance matrix from one-bit sampled data was suggested in \cite{eamaz2022covariance}. Other recent works exploit sparsity of the signal and apply techniques such as $\ell_1$-norm minimization \cite{zahabi2020one,knudson2016one}, $\ell_{1}$-regularized MLE formulation \cite{zymnis2009compressed,khobahi2019model}, log-relaxation \cite{zhu2020target}, and Lasserre's semidefinite program relaxation \cite{sedighi2021localization} to lay the ground for signal reconstruction. In the following, we explain our one-bit polyhedron formulation, wherein a powerful, efficient, and easily implementable solver of linear feasibility problems is applied in place of the aforementioned application-specific methods.
\subsection{One-Bit Quantization Using Time-Varying Thresholds}
\label{sec:survey}
Consider a bandlimited continuous-time signal $x\in \textrm{PW}_{\Omega}$ that we represent via Shannon's sampling theorem as \cite{hogan2012duration}
\begin{equation}
\label{eq:1}
0<\mathrm{T} \leqslant \frac{\pi}{\Omega}, \quad x(t)=\sum_{k=-\infty}^{+\infty} x(k \mathrm{T}) \operatorname{sinc}\left(\frac{t}{\mathrm{T}}-k\right),
\end{equation}
where $1/\mathrm{T}$ is the sampling rate, $\Omega$ is the signal bandwidth, and $\operatorname{sinc}(t)=\frac{\sin (\pi t)}{\pi t}$ is an \emph{ideal} low-pass filter. Denote the uniform samples of $x(t)$ with the sampling rate $1/\mathrm{T}$ by $x_{k}=x(k\mathrm{T})$. In practice, the discrete-time samples occupy pre-determined quantized values. We denote the quantization operation on $x_{k}$ by the function $Q(\cdot)$. This yields the quantized signal as $r_{k} = Q(x_{k})$. In one-bit quantization, compared to zero or constant thresholds, time-varying sampling thresholds yield a better reconstruction performance \cite{ameri2018one,eamaz2022covariance}.
These thresholds may be chosen from any distribution. In this work, to be consistent with the state of the art \cite{ameri2018one,khobahi2018signal,eamaz2021modified}, we consider a Gaussian non-zero time-varying threshold vector $\boldsymbol{\uptau}=\left[\tau_{k}\right]$ that follows the distribution $\boldsymbol{\uptau} \sim \mathcal{N}\left(\mathbf{d}=\mathbf{1}d,\boldsymbol{\Sigma}\right)$. For one-bit quantization with such time-varying sampling thresholds, $r_{k} = \operatorname{sgn}\left(x_{k}-\tau_{k}\right)$.
\subsection{One-Bit Polyhedron}
\label{subsec:overdetermined}
The information gathered through one-bit sampling with time-varying thresholds may be formulated in terms of an overdetermined linear system of inequalities. We have $r_{k}=+1$ when $x_{k}>\tau_{k}$ and $r_{k}=-1$ when $x_{k}<\tau_{k}$. Collecting all the elements in the vectors $\mathbf{x}=[x_{k}] \in \mathbb{R}^{n}$ and $\mathbf{r}=[r_{k}] \in \mathbb{R}^{n}$, one can therefore formulate the geometric location of the signal as
\begin{equation}
\label{eq:4}
r_{k}\left(x_{k}-\tau_{k}\right) \geq 0.
\end{equation}
The vectorized representation of (\ref{eq:4}) is then $\mathbf{r} \odot \left(\mathbf{x}-\boldsymbol{\uptau}\right) \succeq \mathbf{0}$, or equivalently,
\begin{equation}
\label{eq:6}
\begin{aligned}
\boldsymbol{\Omega} \mathbf{x} &\succeq \mathbf{r} \odot \boldsymbol{\uptau},
\end{aligned}
\end{equation}
where $\boldsymbol{\Omega} \triangleq \operatorname{diag}\left(\mathbf{r}\right)$. Suppose $\mathbf{x},\boldsymbol{\uptau} \in \mathbb{R}^{n}$, and that $\boldsymbol{\uptau}^{(\ell)}$ denotes the time-varying sampling threshold in the $\ell$-th signal sequence, where $\ell\in\mathcal{L}=\{1,\cdots,m\}$. For the $\ell$-th signal sequence, (\ref{eq:6}) becomes
\begin{equation}
\label{eq:7}
\begin{aligned}
\boldsymbol{\Omega}^{(\ell)} \mathbf{x} &\succeq \mathbf{r}^{(\ell)} \odot \boldsymbol{\uptau}^{(\ell)}, \quad \ell \in \mathcal{L},
\end{aligned}
\end{equation}
where $\boldsymbol{\Omega}^{(\ell)}=\operatorname{diag}\left(\mathbf{r}^{(\ell)}\right)$. Denote the concatenation of all $m$ sign matrices as
\begin{equation}
\label{eq:9}
\Tilde{\boldsymbol{\Omega}}=\left[\begin{array}{c|c|c}
\boldsymbol{\Omega}^{(1)} &\cdots &\boldsymbol{\Omega}^{(m)}
\end{array}\right]^{\top} \in \mathbb{R}^{m n\times n}.
\end{equation}
Rewrite the $m$ linear systems of inequalities in (\ref{eq:7}) as
\begin{equation}
\label{eq:8}
\Tilde{\boldsymbol{\Omega}} \mathbf{x} \succeq \operatorname{vec}\left(\mathbf{R}\right)\odot \operatorname{vec}\left(\boldsymbol{\Gamma}\right),
\end{equation}
where $\mathbf{R}$ and $\boldsymbol{\Gamma}$ are matrices whose columns are the sequences $\left\{\mathbf{r}^{(\ell)}\right\}_{\ell=1}^{m}$ and $\left\{\boldsymbol{\uptau}^{(\ell)}\right\}_{\ell=1}^{m}$, respectively. The linear system of inequalities in (\ref{eq:8}) associated with the one-bit sampling scheme is overdetermined. We recast (\ref{eq:8}) into a \textit{one-bit polyhedron} as
\begin{equation}
\label{eq:8n}
\begin{aligned}
\mathcal{P} = \left\{\mathbf{x} \mid \Tilde{\boldsymbol{\Omega}} \mathbf{x} \succeq \operatorname{vec}\left(\mathbf{R}\right)\odot \operatorname{vec}\left(\boldsymbol{\Gamma}\right)\right\}.
\end{aligned}
\end{equation}
Instead of complex high-dimensional optimization with techniques such as MLE, our objective is to employ the polyhedron (\ref{eq:8n}) that encapsulates the desired signal $\mathbf{x}$ and leads to solving linear inequalities with linear convergence in expectation.
\section{Unlimited Sampling}
\label{sec:unlimited}
In a variety of applications, clipping or saturation poses a serious problem to signal reconstruction. For instance, in scientific imaging systems such as ultrasound \cite{olofsson2005deconvolution}, radar \cite{cassidy2009ground}, and seismic imaging \cite{zhang2016restoration}, strong reflections or pulse echoes blind the sensor. In audio processing, clipped sound results in high-frequency artifacts \cite{adler2011audio}. In this context, unlimited sampling suggests that, instead of point-wise samples of the bandlimited function $x(t)$, the signal is digitized using a self-reset ADC with an appropriately selected threshold $\lambda>0$ such that any signal value outside the range $\left[-\lambda,\lambda\right]$ is \emph{folded} to the same range \cite{bhandari2017unlimited,bhandari2020unlimited}. The folding corresponds to introducing a non-linearity in the sensing process \cite{bhandari2017unlimited,bhandari2020unlimited}. We denote the folding by the modulo operator $\mathcal{M}_{\lambda}$ that represents the following mapping:
\begin{equation}
\label{eq:18}
\mathcal{M}_{\lambda}(x_{k}): \Tilde{x}_{k} = x_{k}-2\lambda \left\lfloor\frac{x_{k}}{2\lambda}+\frac{1}{2}\right\rfloor,
\end{equation}
where $\Tilde{x}_{k}$ are the modulo samples of $x(t)$. The \emph{unlimited sampling theorem} \cite{bhandari2020unlimited} (reproduced below) states that, if an estimate of the norm of the bandlimited signal is known, then its perfect reconstruction (up to additive multiples of $2\lambda$) from its modulo samples is possible provided the sampling period satisfies $T \leq(2 \pi e)^{-1}$, where $e$ is Euler's number and the signal bandwidth has been normalized to $\pi$.
\begin{theorem}[Unlimited sampling theorem \cite{bhandari2020unlimited}]
\label{theorem_1}
Assume $x(t)$ to be a finite-energy, bandlimited signal with maximum frequency $\Omega_{\textrm{max}}$ and let $\Tilde{x}_{k}$, $k \in \mathbb{Z}$, in (\ref{eq:18}) be the modulo samples of $x(t)$ with sampling rate $1/T$. Then a sufficient condition for the reconstruction of $x(t)$ from $\left\{\Tilde{x}_{k}\right\}$ (up to additive multiples of $2\lambda$) is that $T \leq \frac{1}{2\Omega_{\textrm{max}} e}$.
\end{theorem}
Theorem~\ref{theorem_1} implies that the sampling rate depends only on the bandwidth and is independent of the ratio of the ADC threshold $\lambda$ to the signal amplitude. In other words, the DR of the input signal is \emph{unlimited}. Recently, stable unlimited sampling reconstruction in the presence of noise has also been obtained \cite{bhandari2020unlimited}. The reconstruction of the bandlimited function $x(t)$ from its modulo samples $\left\{\Tilde{x}_{k}\right\}$ is achieved as follows.
Assume that $x(t)$ admits a decomposition \cite{bhandari2017unlimited,bhandari2020unlimited},
\begin{equation}
\label{eq:19}
x(t) = \Tilde{x}(t) + \epsilon_{x}(t),
\end{equation}
where $\Tilde{x}(t)=\mathcal{M}_{\lambda}\left(x(t)\right)$ and the error $\epsilon_{x}$ between the input signal and its modulo samples is
\begin{equation}
\label{eq:20}
\epsilon_{x}(t)=2\lambda \sum_{u \in \mathbb{Z}}e_{u}\mathbb{1}_{\mathcal{D}_{u}}(t), \quad e_{u} \in \mathbb{Z},
\end{equation}
where $\bigcup_{u \in \mathbb{Z}} \mathcal{D}_{u}=\mathbb{R}$ is a partition of the real line into intervals $\mathcal{D}_{u}$. As indicated by (\ref{eq:19}), if $\epsilon_{x}$ is known, then $x$ can be reconstructed from $\Tilde{x}$. It follows from \eqref{eq:20} that $\epsilon_{x}$ takes only values that are integer multiples of $2\lambda$, thereby leading to a robust reconstruction algorithm \cite{bhandari2020unlimited}. To obtain $\epsilon_{x}$ (up to an unknown additive constant) and subsequently the desired signal $x(t)$, the reconstruction procedure in \cite{bhandari2017unlimited,bhandari2020unlimited} requires the higher-order differences of $\Tilde{\mathbf{x}}=[\Tilde{x}_{k}]$ to obtain $\Delta^{N}\boldsymbol{\epsilon}_{x}=\mathcal{M}_{\lambda}\left(\Delta^{N}\Tilde{\mathbf{x}}\right)-\Delta^{N}\Tilde{\mathbf{x}}$, where $\boldsymbol{\epsilon}_{x}=[\epsilon_{x}]$. Define the inverse-difference operator as the running sum of a real sequence $\{s_{b}\}$, i.e.,
\begin{equation}
\label{eq:23}
\nabla: \{s_{k}\}_{k \in \mathbb{Z}^{+}} \rightarrow \sum_{b=1}^{k}s_{b}.
\end{equation}
Then, repeatedly applying $\nabla$ to $\Delta^{N}\boldsymbol{\epsilon}_{x}$ and rounding the result to the nearest multiple of $2\lambda$ at each stage yields $\boldsymbol{\epsilon}_{x}$ (cf. Algorithm~\ref{algorithm_1}). For a guaranteed and stable reconstruction performance, a suitable choice for the difference order $N$ is \cite{bhandari2020unlimited},
\begin{equation}
\label{eq:21}
N \geq \left\lceil \frac{\log \lambda -\log \beta_{x}}{\log\left(T\Omega e\right)} \right\rceil,
\end{equation}
where $\beta_{x}$ is chosen such that $\beta_{x} \in 2\lambda \mathbb{Z}$ and $\|x\|_{\infty} \leq \beta_{x}$. Algorithm~\ref{algorithm_1} summarizes the unlimited sampling reconstruction procedure.
\begin{algorithm}[H]
\caption{Input signal reconstruction from modulo folded samples.}
\label{algorithm_1}
\begin{algorithmic}[1]
\Statex \textbf{Input:} $\Tilde{x}_{k}=\mathcal{M}_{\lambda}\left(x_{k}\right)$, ADC threshold $\lambda$, and $2\lambda \mathbb{Z}\ni\beta_{x}\geq \|x\|_{\infty}$.
\Statex \textbf{Output:} The approximation of the input signal $\bar{\mathbf{x}}$.
\State $N \gets \left\lceil \frac{\log \lambda -\log \beta_{x}}{\log\left(T\Omega e\right)} \right\rceil$ using (\ref{eq:21}).
\State $\Delta^{N}\boldsymbol{\epsilon}_{x} \gets \mathcal{M}_{\lambda}\left(\Delta^{N}\Tilde{\mathbf{x}}\right)-\Delta^{N}\Tilde{\mathbf{x}}$.
\State $\mathbf{s}_{0}\gets\Delta^{N}\boldsymbol{\epsilon}_{x}$.
\For{$p=0:N-2$}
\State $\mathbf{s}_{p+1}\gets\nabla\mathbf{s}_{p}$ $\triangleright$ $\nabla$ is the inverse-difference operator defined in (\ref{eq:23}).
\State $\mathbf{s}_{p+1}\gets2\lambda\left\lceil\frac{\left\lfloor\mathbf{s}_{p+1}/\lambda\right\rfloor}{2}\right\rceil$ $\triangleright$ rounding to $2\lambda\mathbb{Z}$.
\State $\kappa_{p}\gets\left\lfloor\frac{\left(\nabla^{2}\Delta^{p}\boldsymbol{\epsilon}_{x}\right)_{1}-\left(\nabla^{2}\Delta^{p}\boldsymbol{\epsilon}_{x}\right)_{J+1}}{12\beta_{x}}+\frac{1}{2}\right\rfloor$ $\triangleright$ $J=\frac{6\beta_{x}}{\lambda}$.
\State $\mathbf{s}_{p+1}\gets\mathbf{s}_{p+1}+2\lambda\kappa_{p}$.
\EndFor
\State \Return $\bar{\mathbf{x}} \gets \nabla\mathbf{s}_{N-1}+\Tilde{\mathbf{x}}+2a\lambda, \quad a \in \mathbb{Z}$.
\end{algorithmic}
\end{algorithm}
\section{One-Bit Signal Reconstruction}
\label{sec:onebit_rec}
To reconstruct $\mathbf{x}$ from the sign data $\left\{\mathbf{r}^{(\ell)}\right\}^{m}_{\ell=1}$, we solve the polyhedron search problem through the RKA because of its optimal projection and linear convergence in expectation \cite{eamaz2022phase,briskman2015block,leventhal2010randomized}.
\subsection{Basic Theory of RKA}
\label{RKA}
The RKA is a \emph{subconjugate gradient method} to solve overdetermined linear systems, i.e., $\mathbf{C}\mathbf{x}\leq\mathbf{b}$, where $\mathbf{C}$ is an $m^{\prime}\times n^{\prime}$ matrix with $m^{\prime}>n^{\prime}$ \cite{leventhal2010randomized,strohmer2009randomized}. The conjugate-gradient methods turn this inequality into an equality of the following form:
\begin{equation}
\label{eq:10}
\left(\mathbf{C}\mathbf{x}-\mathbf{b}\right)^{+}=0,
\end{equation}
and then solve it as any other system of equations. Given a sample index set $\mathcal{J}$, without loss of generality, rewrite (\ref{eq:10}) as the polyhedron
\begin{equation}
\label{eq:11}
\begin{aligned}
\begin{cases}\mathbf{c}_{j} \mathbf{x} \leq b_{j} & \left(j \in \mathcal{I}_{\leq}\right), \\ \mathbf{c}_{j} \mathbf{x}=b_{j} & \left(j \in \mathcal{I}_{=}\right),\end{cases}
\end{aligned}
\end{equation}
where $\{\mathbf{c}_{j}\}$ are the rows of $\mathbf{C}$ and the disjoint index sets $\mathcal{I}_{\leq}$ and $\mathcal{I}_{=}$ partition $\mathcal{J}$. The projection coefficient $\beta_{i}$ of the RKA is \cite{leventhal2010randomized,briskman2015block,dai2013randomized}:
\begin{equation}
\label{eq:12}
\beta_{i}= \begin{cases}\left(\mathbf{c}_{j} \mathbf{x}_{i}-b_{j}\right)^{+} & \left(j \in \mathcal{I}_{\leq}\right), \\ \mathbf{c}_{j} \mathbf{x}_{i}-b_{j} & \left(j \in \mathcal{I}_{=}\right).\end{cases}
\end{equation}
The unknown column vector $\mathbf{x}$ is iteratively updated as
\begin{equation}
\label{eq:13}
\mathbf{x}_{i+1}=\mathbf{x}_{i}-\frac{\beta_{i}}{\left\|\mathbf{c}_{j}\right\|^{2}_{2}} \mathbf{c}^{\mathrm{H}}_{j},
\end{equation}
where, at each iteration $i$, the index $j$ is drawn from the set $\mathcal{J}$ independently at random following the distribution
\begin{equation}
\label{eq:14}
\operatorname{Pr}\{j=k\}=\frac{\left\|\mathbf{c}_{k}\right\|^{2}_{2}}{\|\mathbf{C}\|_{\mathrm{F}}^{2}}.
\end{equation}
Note that (\ref{eq:8n}) has only the inequality partition $\mathcal{I}_{\leq}$. Herein, $m^{\prime}=m\times n$ and $n^{\prime}=n$. The row vector $\mathbf{c}_{j}$ and the scalar $b_{j}$ in the RKA (\ref{eq:11})-(\ref{eq:14}) are the $j$-th row of $-\Tilde{\boldsymbol{\Omega}}$ and the $j$-th element of $-\left(\operatorname{vec}\left(\mathbf{R}\right)\odot\operatorname{vec}\left(\boldsymbol{\Gamma}\right)\right)$, respectively. It may be readily verified that the distribution of choosing a specific sample index $j$ for the inequalities in (\ref{eq:8n}) is uniform, i.e., $\operatorname{Pr}\{j=k\}=\frac{1}{mn}$. In one-bit reconstruction, $\mathbf{c}_{j}=-\upomega_{j}$, where $\upomega_{j}$, the $j$-th row of $\Tilde{\boldsymbol{\Omega}}$, is a $j^{\prime}$-th \emph{coordinate vector} with $\pm1$ as its $j^{\prime}$-th element, where
\begin{equation}
\label{Negindex}
\begin{aligned}
j^{\prime}=\begin{cases} \operatorname{mod}(j,n), & j\neq k n, \\ n, & j=k n,\end{cases}
\end{aligned}
\end{equation}
with $1\leq k \leq m$.
This property makes the update process \eqref{eq:13} similar to that of the \emph{randomized Gauss-Seidel} method using the coordinate vector in each iteration \cite{leventhal2010randomized,ma2015convergence}. This approach is commonly used for solving high-dimensional linear feasibility problems by updating only one coordinate in each iteration. The structure of the matrix $\Tilde{\boldsymbol{\Omega}}$ leads to a similarly efficient RKA implementation by updating only the generic element $j^{\prime}$ at each iteration, i.e., $\left(\mathbf{x}_{i+1}\right)_{j^{\prime}}=\left(\mathbf{x}_{i}\right)_{j^{\prime}}+\beta_{i} r_{j^{\prime}}$, where $r_{j^{\prime}}$ is the one-bit data at index $j^{\prime}$.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{rka_recovery_DR.png}
\caption{ (a) The input sawtooth wave signal $\mathbf{x}$ is reconstructed from one-bit measurements using the RKA to yield $\bar{\mathbf{x}}$. Here, $\text{DR}_{\mathbf{x}}=1$ and $\text{DR}_{\boldsymbol{\uptau}}=1$. The inset shows the same plot on a larger scale. (b) As in (a) but for the bandlimited input signal from \cite{bhandari2020unlimited} with $\text{DR}_{\mathbf{x}}=5$. (c) As in (b) but for $\text{DR}_{\mathbf{x}}=8$. }
\label{figure_1}
\end{figure*}
\subsection{Error Reconstruction Bound}
\label{ERROR_BOUND}
At the $i$-th iteration, the error between the RKA estimate $\mathbf{x}_{i}$ and the optimal solution $\mathbf{x}^{\star}$ has been shown to follow the convergence bound \cite{strohmer2009randomized,leventhal2010randomized,briskman2015block,polyak1964gradient}
\begin{equation}
\label{eq:15}
\mathbb{E}\left\{\left\|\mathbf{x}_{i}-\mathbf{x}^{\star}\right\|_{2}^{2}\right\} \leq q^{i}\left\|\mathbf{x}_{0}-\mathbf{x}^{\star}\right\|_{2}^{2},
\end{equation}
where $q=1-\frac{1}{\kappa\left(\Tilde{\boldsymbol{\Omega}}\right)}\;\in \left(0,1\right)$ and $\kappa\left(\Tilde{\boldsymbol{\Omega}}\right)=\|\Tilde{\boldsymbol{\Omega}}\|^{2}_{\mathrm{F}}\|\Tilde{\boldsymbol{\Omega}}^{\dagger}\|^{2}_{2}$ is the \textit{scaled condition number} \cite{edelman1992distribution} of $\Tilde{\boldsymbol{\Omega}}$, which is a block matrix of $m$ diagonal matrices per \eqref{eq:9}. We have
\begin{equation}
\label{fro}
\|\Tilde{\boldsymbol{\Omega}}\|^{2}_{\mathrm{F}}=\sum^{m n}_{j=1}r^{2}_{j}=\sum^{m n}_{j=1}1=m n.
\end{equation}
Moreover, $\|\Tilde{\boldsymbol{\Omega}}^{\dagger}\|^{2}_{2}=\frac{1}{\sigma^{2}_{\textrm{min}}}$, where $\sigma_{\textrm{min}}=\min\left\{\sigma_{i}\right\}$ is the minimum singular value of $\Tilde{\boldsymbol{\Omega}}$ \cite{van1996matrix} (the maximum singular value $\sigma_{\textrm{max}}$ is similarly defined). The following Lemma~\ref{lemma_1} evaluates the singular values of $\Tilde{\boldsymbol{\Omega}}$.
\begin{lemma}
\label{lemma_1}
Consider the concatenation of all $m$ sign data matrices in \eqref{eq:9}, i.e., $\Tilde{\boldsymbol{\Omega}} \in \mathbb{R}^{m n\times n}$, where $n$ is the size of the input signal and $m$ is the number of time-varying sampling thresholds. The matrix $\Tilde{\boldsymbol{\Omega}}$ is full-rank and its singular values are
\begin{equation}
\label{singular}
\sigma_{1}=\sigma_{2}=\cdots=\sigma_{n}= \sqrt{m}.
\begin{figure*} \centering \includegraphics[width=1.0\textwidth]{rka_recovery_DR.png} \caption{ (a) The input sawtooth wave signal $\mathbf{x}$ is reconstructed from one-bit measurements using the RKA to yield $\bar{\mathbf{x}}$. Here, $\text{DR}_{\mathbf{x}}=1$ and $\text{DR}_{\boldsymbol{\uptau}}=3$. The inset shows the same plot on a larger scale. (b) As in (a) but for the bandlimited input signal from \cite{bhandari2020unlimited} with $\text{DR}_{\mathbf{x}}=5$. (c) As in (b) but for $\text{DR}_{\mathbf{x}}=8$. } \label{figure_1} \end{figure*} \subsection{Reconstruction Error Bound} \label{ERROR_BOUND} At the $i$-th iteration, the error between the RKA estimate $\mathbf{x}_{i}$ and the optimal solution $\mathbf{x}^{\star}$ has been shown to follow the convergence bound \cite{strohmer2009randomized,leventhal2010randomized,briskman2015block,polyak1964gradient} \begin{equation} \label{eq:15} \mathbb{E}\left\{\left\|\mathbf{x}_{i}-\mathbf{x}^{\star}\right\|_{2}^{2}\right\} \leq q^{i}\left\|\mathbf{x}_{0}-\mathbf{x}^{\star}\right\|_{2}^{2}, \end{equation} where $q=1-\frac{1}{\kappa\left(\Tilde{\boldsymbol{\Omega}}\right)}\;\in \left(0,1\right)$ and $\kappa\left(\Tilde{\boldsymbol{\Omega}}\right)=\|\Tilde{\boldsymbol{\Omega}}\|^{2}_{\mathrm{F}}\|\Tilde{\boldsymbol{\Omega}}^{\dagger}\|^{2}_{2}$ is the \textit{scaled condition number} \cite{edelman1992distribution} of $\Tilde{\boldsymbol{\Omega}}$, which is a block matrix of $m$ diagonal matrices per \eqref{eq:9}. We have \begin{equation} \label{fro} \|\Tilde{\boldsymbol{\Omega}}\|^{2}_{\mathrm{F}}=\sum^{m n}_{j=1}r^{2}_{j}=\sum^{m n}_{j=1}1=m n. \end{equation} Moreover, $\|\Tilde{\boldsymbol{\Omega}}^{\dagger}\|^{2}_{2}=\frac{1}{\sigma^{2}_{\textrm{min}}}$, where $\sigma_{\textrm{min}}=\min\left\{\sigma_{i}\right\}$ is the minimum singular value of $\Tilde{\boldsymbol{\Omega}}$ \cite{van1996matrix} (the maximum singular value $\sigma_{\textrm{max}}$ is defined similarly). The following Lemma~\ref{lemma_1} evaluates the singular values of $\Tilde{\boldsymbol{\Omega}}$. \begin{lemma} \label{lemma_1} Consider the concatenation of all $m$ sign data matrices in \eqref{eq:9}, i.e., $\Tilde{\boldsymbol{\Omega}} \in \mathbb{R}^{m n\times n}$, where $n$ is the size of the input signal and $m$ is the number of time-varying sampling thresholds. The matrix $\Tilde{\boldsymbol{\Omega}}$ is full-rank and its singular values are \begin{equation} \label{singular} \sigma_{1}=\sigma_{2}=\cdots=\sigma_{n}= \sqrt{m}. \end{equation} \end{lemma} \begin{IEEEproof} Compute the square matrix \begin{equation} \label{proof_th} \begin{aligned} \mathbf{P} &= \Tilde{\boldsymbol{\Omega}}^{\top}\Tilde{\boldsymbol{\Omega}} = \left[\begin{array}{c|c|c} \boldsymbol{\Omega}^{(1)} &\cdots &\boldsymbol{\Omega}^{(m)} \end{array}\right]\left[\begin{array}{c|c|c} \boldsymbol{\Omega}^{(1)} &\cdots &\boldsymbol{\Omega}^{(m)} \end{array}\right]^{\top}\\ &=\boldsymbol{\Omega}^{(1)}\left(\boldsymbol{\Omega}^{(1)}\right)^{\top}+\boldsymbol{\Omega}^{(2)}\left(\boldsymbol{\Omega}^{(2)}\right)^{\top}+\cdots+\boldsymbol{\Omega}^{(m)}\left(\boldsymbol{\Omega}^{(m)}\right)^{\top},\\ &= m\mathbf{I}. \end{aligned} \end{equation} Hence, all eigenvalues of $\mathbf{P}$ are equal to $m$; in other words, the singular values of $\Tilde{\boldsymbol{\Omega}}$ are $\left\{\sigma_{i}\right\}_{i=1}^{n}=\sqrt{m}$. \end{IEEEproof} It follows from (\ref{fro}) and Lemma~\ref{lemma_1} that $\kappa\left(\Tilde{\boldsymbol{\Omega}}\right)=\frac{m n}{\sigma^{2}_{\textrm{min}}} = n$. Conventionally, the condition number of a matrix is defined as $\frac{\sigma_{\textrm{max}}}{\sigma_{\textrm{min}}}$. From Lemma~\ref{lemma_1}, all singular values are equal, so $\kappa\left(\Tilde{\boldsymbol{\Omega}}\right) = n \frac{\sigma_{\textrm{max}}}{\sigma_{\textrm{min}}}$ is indeed the condition number scaled by $n$. This leads to \begin{align} \label{cond} q=\frac{n-1}{n}. \end{align} Set the algorithm termination criterion to the condition \begin{equation} \label{eq:6565} \mathbb{E}\left\{\left\|\mathbf{x}_{i}-\mathbf{x}^{\star}\right\|_{2}^{2}\right\}\leq \epsilon_{1}, \end{equation} where $\epsilon_{1}$ is a positive constant. Based on this criterion and (\ref{eq:15}), the following Proposition~\ref{lemma_3} states the lower bound on the number of required RKA iterations. \begin{proposition} \label{lemma_3} The number of RKA iterations $i$ required to achieve the optimal solution $\mathbf{x}^{\star}$ of length $n$ from its one-bit samples within the error specified by (\ref{eq:6565}) is \begin{equation} \label{bound5} \begin{aligned} i&\geq \frac{\log\left(\frac{\omega_{0}}{\epsilon_{1}}\right)}{\log\left(\frac{1}{1-\frac{1}{n}}\right)}, \end{aligned} \end{equation} where $\omega_{0}=\left\|\mathbf{x}_{0}-\mathbf{x}^{\star}\right\|_{2}^{2}$ is the initial squared error (at $i=0$) and $\epsilon_{1}$ is a positive constant. \end{proposition} \begin{IEEEproof} Impose $q^{i}\left\|\mathbf{x}_{0}-\mathbf{x}^{\star}\right\|_{2}^{2}\leq \epsilon_{1}$, or equivalently, \begin{equation} \label{bound311} \begin{aligned} q^{i}&\leq \frac{\epsilon_{1}}{\omega_{0}}. \end{aligned} \end{equation} Note that $\omega_{0}$ is a constant scalar that depends only on the initial and optimal solutions. Substituting (\ref{cond}) in \eqref{bound311} and taking the logarithm of both sides yields \begin{equation} \label{bound30} i\log\left(1-\frac{1}{n}\right)\leq \log\left(\frac{\epsilon_{1}}{\omega_{0}}\right). \end{equation} Since $\log\left(1-\frac{1}{n}\right)<0$, dividing both sides by it reverses the inequality and gives (\ref{bound5}). \end{IEEEproof} Since the optimal solution is unknown, $\omega_{0}$ may not be precisely determined. However, a suitable number of required iterations may still be selected following Proposition~\ref{lemma_3} with a reasonable guess for $\omega_{0}$. For instance, an initial value $\mathbf{x}_{0}$ may be chosen in the direction of the optimal solution $\mathbf{x}^{\star}$ so that a reasonable estimate of $\omega_{0}$ is obtained \cite{strohmer2009randomized,leventhal2010randomized}.
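Evaluating the bound (\ref{bound5}) numerically is straightforward; the short sketch below does so for an assumed initial-error guess $\omega_{0}$ (the helper name is ours).

\begin{verbatim}
import numpy as np

def rka_min_iterations(n, omega0, eps1):
    """Lower bound on the RKA iteration count for signal length n,
    initial squared-error guess omega0, and target error eps1."""
    return int(np.ceil(np.log(omega0 / eps1)
                       / np.log(1.0 / (1.0 - 1.0 / n))))

# e.g., n=1000, omega0=10, eps1=1e-4 gives roughly 1.2e4 iterations
\end{verbatim}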
\subsection{Numerical Example} \label{NUM_RKA} Fig.~\ref{figure_1}a illustrates the RKA reconstruction of a sawtooth signal from the one-bit polyhedron in (\ref{eq:8n}) for $10$ sweeps (periods) with a fundamental frequency of $50~\mathrm{Hz}$. We discretized the generated signal $x(t)$ at a sampling rate of $1~\mathrm{kHz}$ (sampling interval $T=0.001~\mathrm{s}$). The time-varying sampling thresholds were drawn from the distribution $\boldsymbol{\uptau}^{(\ell)}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)$, for all $\ell\in\mathcal{L}$. Define the normalized mean squared error, $\operatorname{NMSE} \triangleq \frac{\left\|\mathbf{x}-\bar{\mathbf{x}}\right\|_{2}^{2}}{\left\|\mathbf{x}\right\|_{2}^{2}}$, where $\mathbf{x}$ and $\bar{\mathbf{x}}$ denote the true (discretized) signal and its reconstructed version, respectively. Since the RKA selects a hyperplane randomly in each iteration, we repeated the reconstruction in Fig.~\ref{figure_1}a $15$ times. The NMSE averaged over all experiments is only $\sim 0.0012$, or $-29.2082~\mathrm{dB}$. \subsection{Limitations of Conventional One-Bit Reconstruction} \label{Guaranteed} Denote the DRs of the desired signal $\mathbf{x}$ and the time-varying threshold $\boldsymbol{\uptau}$ by $\text{DR}_{\mathbf{x}}$ and $\text{DR}_{\boldsymbol{\uptau}}$, respectively, where we define the DR of a vector as its $\ell_{\infty}$-norm. \emph{If $\text{DR}_{\mathbf{x}} \leq \text{DR}_{\boldsymbol{\uptau}}$, then the reconstructed signal $\mathbf{x}^{\star}$ may be found inside the polyhedron (\ref{eq:8n}) with a high probability for an \textup{adequate} number of samples. Otherwise, if $\text{DR}_{\mathbf{x}} > \text{DR}_{\boldsymbol{\uptau}}$, there is no guarantee to obtain $\mathbf{x}^{\star}$ since the desired solution cannot be inside the finite-volume space imposed by the set of inequalities in (\ref{eq:8n}), indicating an irretrievable information loss.} We demonstrate this as follows. Without loss of generality, consider an entry $x_{k}>0$ with $x_{k}=\text{DR}_{\mathbf{x}}$, and let $\tau_{k}^{\star}=\max_{\ell}~\tau_{k}^{(\ell)}$. Since $\text{DR}_{\boldsymbol{\uptau}}=\|\boldsymbol{\uptau}\|_{\infty}$, we have $\tau_{k}^{\star}\leq\text{DR}_{\boldsymbol{\uptau}}$. If $\text{DR}_{\mathbf{x}} > \text{DR}_{\boldsymbol{\uptau}}$, then $\tau_{k}^{\star}<\text{DR}_{\mathbf{x}}=x_{k}$. Therefore, in reconstructing the $k$-th entry $x_{k}$ of the input signal, there is always a gap $\delta=x_{k}-\tau_{k}^{\star}>0$ that no threshold sample covers, so the amplitude information of $\mathbf{x}$ within this gap is not captured. Hence, the desired signal is not found inside the finite-volume space imposed by the inequalities in (\ref{eq:8n}). In Fig.~\ref{figure_1}a, $\text{DR}_{\boldsymbol{\uptau}}=3$ is larger than $\text{DR}_{\mathbf{x}}=1$, thereby leading to a low reconstruction NMSE. We now consider $x$ to be a bandlimited function whose piecewise-constant Fourier transform takes values drawn uniformly at random, i.e., $\widehat{x}(\omega)\sim\operatorname{unif}(0,1)$. This signal is the same as the one used in \cite{bhandari2020unlimited}. The time-varying sampling thresholds were generated following the procedure explained in Section~\ref{NUM_RKA}. Fig.~\ref{figure_1}b shows the RKA-based reconstruction of the bandlimited signal from the polyhedron (\ref{eq:8n}). Around $t=0$ (corresponding to sample index $364$ in the plot), the reconstruction severely degrades because $\text{DR}_{\mathbf{x}}=5$ is set to be larger than $\text{DR}_{\boldsymbol{\uptau}}=3$. Indeed, when the difference between $\text{DR}_{\mathbf{x}}$ and $\text{DR}_{\boldsymbol{\uptau}}$ increases further, we observe a significant loss of information in the reconstructed signal (Fig.~\ref{figure_1}c).
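The following sketch, reusing \texttt{rka\_onebit} from above, reproduces the flavor of this experiment: a unit-DR sawtooth is recovered well from thresholds $\boldsymbol{\uptau}^{(\ell)}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$, while rescaling the same signal to a DR of $8$ degrades the NMSE. The exact signal parameters behind Fig.~\ref{figure_1} are not reproduced here, so the numbers this prints are only indicative.

\begin{verbatim}
import numpy as np

# 10 periods of a 50 Hz sawtooth sampled at 1 kHz (200 samples)
t = np.arange(0, 0.2, 1e-3)
x = 2 * (50 * t - np.floor(0.5 + 50 * t))      # unit-DR sawtooth

rng = np.random.default_rng(1)
m = 400
for dr in (1.0, 8.0):          # DR_x <= DR_tau vs. DR_x > DR_tau
    xs = dr * x
    Gamma = rng.standard_normal((xs.size, m))  # tau ~ N(0, I), DR ~ 3
    R = np.sign(xs[:, None] - Gamma)
    x_bar = rka_onebit(R, Gamma, n_iter=200_000)
    nmse = np.sum((xs - x_bar) ** 2) / np.sum(xs ** 2)
    print(f"DR_x = {dr}: NMSE = {10 * np.log10(nmse):.1f} dB")
\end{verbatim}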
\section{Toward a Reconstruction Guarantee for One-Bit Sampling} \label{sec:uno} Since the RKA does not guarantee an exact signal reconstruction from the one-bit measurements in (\ref{eq:8n}) when the DR of the signal exceeds that of the time-varying sampling threshold, it is pertinent to design the time-varying sampling threshold such that $\text{DR}_{\mathbf{x}}\leq\text{DR}_{\boldsymbol{\uptau}}$. This is not always possible because the desired signal is unknown. We address this limitation via UNO, our proposed one-bit sampling method based on the concept of unlimited sampling. As discussed in Section~\ref{sec:unlimited}, unlimited sampling yields signal amplitudes folded into the range $\left[-\lambda,\lambda\right]$. This suggests an alternative time-varying threshold with the same DR as the modulo samples $\Tilde{\mathbf{x}}=[\Tilde{x}_{k}]$, i.e., $\text{DR}_{\boldsymbol{\uptau}}=\lambda$. In other words, the thresholds are moved closer to the clipping value and the self-reset ADC is integrated with one-bit sampling. We summarize this UNO sampling framework as follows: \begin{enumerate} \item Apply the modulo operator defined in (\ref{eq:18}) to the input signal $\mathbf{x}$ and obtain the modulo samples $\Tilde{\mathbf{x}}=\mathcal{M}_{\lambda}\left(\mathbf{x}\right)$. \item Design sequences of the time-varying sampling threshold $\left\{\boldsymbol{\uptau}^{(\ell)}\right\}_{\ell=1}^{m}$ such that $\left|\text{DR}_{\boldsymbol{\uptau}^{(\ell)}}-\lambda\right|\leq \varepsilon_{0}$ for all $\ell\in\mathcal{L}=\{1,\cdots,m\}$ and a small $\varepsilon_{0}>0$. \item Apply one-bit quantization to the modulo samples as $\mathbf{r}^{(\ell)}=\operatorname{sgn}\left(\Tilde{\mathbf{x}}-\boldsymbol{\uptau}^{(\ell)}\right)$. \end{enumerate} Fig.~\ref{figure_30} illustrates the various steps of our UNO sampling technique. The following Proposition~\ref{UNO_prop} states the UNO threshold design. \begin{proposition}[Judicious threshold design] \label{UNO_prop} Under the UNO sampling framework, the following \textup{DR guarantee} holds: assume each one-bit sampling threshold sequence is distributed as $\boldsymbol{\uptau}^{(\ell)}\sim \mathcal{N}\left(\mathbf{0},\sigma^{2}_{\boldsymbol{\uptau}}\mathbf{I}\right)$. Then, for the ADC threshold $\lambda$, choosing $\sigma_{\boldsymbol{\uptau}}=\frac{\lambda}{3}$ yields $\text{DR}_{\boldsymbol{\uptau}^{(\ell)}}=\lambda$ with a probability of at least $0.99$. \end{proposition} \begin{IEEEproof} With a probability of at least $0.99$, the DR of each $\boldsymbol{\uptau}^{(\ell)}\sim \mathcal{N}\left(\mathbf{0},\sigma^{2}_{\boldsymbol{\uptau}}\mathbf{I}\right)$ is $3\sigma_{\boldsymbol{\uptau}}$ \cite{kendall1987kendall}. Hence, when $\sigma_{\boldsymbol{\uptau}}=\frac{\lambda}{3}$, the time-varying sampling threshold has a DR of $\lambda$ with a probability of at least $0.99$. \end{IEEEproof} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{didid_v03.png} \caption{The UNO sampling architecture. The proper choice of the sampling interval $T$ in the middle block is specified by Theorem~\ref{Negar_Theorem}.
} \label{figure_30} \end{figure} \begin{figure}[t] \centering \subfloat[Transfer function of 1-bit ADC] {\includegraphics[width=0.48\textwidth]{signum_1.png}}\quad \subfloat[Input signal and threshold] {\includegraphics[width=0.48\textwidth]{threshold_1.png}} \qquad \subfloat[Transfer function of UNO ADC] {\includegraphics[width=0.48\textwidth]{signum.png}}\quad \subfloat[Modulo samples and threshold] {\includegraphics[width=0.48\textwidth]{threshold.png}} \caption{(a) Transfer function of the conventional one-bit ADC, where the $i$-th element of the input signal $x=(\mathbf{x})_i$ is compared with a randomly selected threshold $\boldsymbol{\uptau}$. (b) High-DR input signal $\mathbf{x}$ and its threshold samples $\boldsymbol{\uptau}$. (c) As in (a), but for UNO with the judicious time-varying threshold and ADC threshold $\lambda$. (d) The unlimited (modulo) samples $\Tilde{\mathbf{x}}$ compared with the threshold samples $\boldsymbol{\uptau}$ and $\lambda$. } \label{figure_7nm} \end{figure} \begin{figure*}[t] \centering \subfloat[$m=2$] {\includegraphics[width=0.33\textwidth]{cube_2.png}} \subfloat[$m=6$] {\includegraphics[width=0.33\textwidth]{cube_6.png}} \subfloat[$m=20$] {\includegraphics[width=0.33\textwidth]{cube_20.png}} \qquad \subfloat[$m=2$] {\includegraphics[width=0.33\textwidth]{2D_2.png}} \subfloat[$m=6$] {\includegraphics[width=0.33\textwidth]{2D_6.png}} \subfloat[$m=20$] {\includegraphics[width=0.33\textwidth]{2D_20.png}} \caption{Top: Trihedron space (polyhedron (\ref{eq:24}) in $3$ dimensions) (blue), unlimited sampling cube (red), and true value of the modulo signal $\Tilde{\mathbf{x}}\in\mathbb{R}^{3}$ (yellow) for (a) $m=2$, (b) $m=6$, and (c) $m=20$. Bottom: As in the top panel, but only a cross-section (unshaded, with the same color boundaries) at the $Z=0$ plane is shown for (d) $m=2$, (e) $m=6$, and (f) $m=20$. Each inequality constraint is shown by a half-space whose feasible region is marked by black arrows. } \label{figure_1n} \end{figure*} In Proposition~\ref{UNO_prop}, we design the time-varying sampling threshold sequences so that their DR is close to that of the folded (modulo) signal. This enables one-bit sampling to retain the information on the distance between the folded signal and the thresholds without any loss. Fig.~\ref{figure_7nm} compares conventional one-bit sampling and UNO in the high-DR scenario; the transfer function of the former is plotted in Fig.~\ref{figure_7nm}a. We consider the same bandlimited signal as in Section~\ref{Guaranteed} and a random threshold $\boldsymbol{\uptau}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)$. In conventional one-bit sampling, the signal values and thresholds differ considerably at some points (Fig.~\ref{figure_7nm}b) and, consequently, the information on the distance between the signal values and the threshold samples is completely lost. For UNO, the threshold is chosen closer to the folded signal with $\lambda=0.5$ (Fig.~\ref{figure_7nm}c). This preserves the information of the input signal in the modulo samples (Fig.~\ref{figure_7nm}d). For reconstruction of the signal of interest $\mathbf{x}$ from UNO samples, we reformulate the polyhedron (\ref{eq:8n}) for the modulo samples as \begin{equation} \label{eq:24} \Tilde{\mathcal{P}} = \left\{\Tilde{\mathbf{x}} \mid \Tilde{\boldsymbol{\Omega}} \Tilde{\mathbf{x}} \succeq \operatorname{vec}\left(\mathbf{R}\right)\odot \operatorname{vec}\left(\boldsymbol{\Gamma}\right)\right\}.
\end{equation} This overdetermined system of linear inequalities in (\ref{eq:24}) is then solved via the RKA and, from the resulting approximate modulo samples, we obtain $\mathbf{x}$ via Algorithm~\ref{algorithm_1}. Algorithm~\ref{algorithm_2} summarizes these steps of the \emph{UNO algorithm}. \clearpage \begin{algorithm}[H] \caption{Signal reconstruction in UNO.} \label{algorithm_2} \begin{algorithmic}[1] \Statex \textbf{Input:} Sequences of one-bit measurements $\left\{\mathbf{r}^{(\ell)}=\operatorname{sgn}\left(\mathcal{M}_{\lambda}\left(\mathbf{x}\right)-\boldsymbol{\uptau}^{(\ell)}\right)\right\}_{\ell=1}^{m}$, $\boldsymbol{\uptau}^{(\ell)}\sim \mathcal{N}\left(\mathbf{0},\sigma^{2}_{\boldsymbol{\uptau}}\mathbf{I}\right)$, ADC threshold $\lambda$, total number of iterations $i_{\textrm{max}}$. \Statex \textbf{Output:} The approximation of the input signal $\bar{\mathbf{x}}$. \vspace{1.2mm} \State $\mathbf{R} \gets \left\{\mathbf{r}^{(\ell)}\right\}_{\ell=1}^{m}$. \vspace{1.2mm} \State $\sigma_{\boldsymbol{\uptau}} \gets \frac{\lambda}{3}$ \vspace{1.2mm} \State $\boldsymbol{\Gamma} \gets \left\{\boldsymbol{\uptau}^{(\ell)}\right\}_{\ell=1}^{m}$ \vspace{1.2mm} \State $\boldsymbol{\Omega}^{(\ell)} \gets \operatorname{diag}\left(\mathbf{r}^{(\ell)}\right)$ \vspace{1.2mm} \State $\Tilde{\boldsymbol{\Omega}}\gets\left[\begin{array}{c|c|c} \boldsymbol{\Omega}^{(1)} &\cdots &\boldsymbol{\Omega}^{(m)} \end{array}\right]^{\top}$ \vspace{1.2mm} \State $\Tilde{\mathcal{P}}\gets \left\{\Tilde{\mathbf{x}} \mid \Tilde{\boldsymbol{\Omega}} \Tilde{\mathbf{x}}\succeq \operatorname{vec}\left(\mathbf{R}\right)\odot \operatorname{vec}\left(\boldsymbol{\Gamma}\right)\right\}$. \vspace{1.2mm} \State Find the modulo signal in $\Tilde{\mathcal{P}}$ via the RKA. \For{$i=1:i_{\textrm{max}}$} \State $\Tilde{\mathbf{x}}_{i+1}\gets\Tilde{\mathbf{x}}_{i}+\left(-\upomega_{j}\Tilde{\mathbf{x}}_{i}+\upomega_{j}\boldsymbol{\uptau}^{(\ell)}\right)^{+}\upomega^{\top}_{j}$ \EndFor \State $\bar{\Tilde{\mathbf{x}}}\gets \Tilde{\mathbf{x}}_{i_{\textrm{max}}}$ \State Reconstruct the input signal via Algorithm~\ref{algorithm_1} from $\bar{\Tilde{\mathbf{x}}}$. \State \Return $\bar{\mathbf{x}}$ \end{algorithmic} \end{algorithm} In Fig.~\ref{figure_1n}, we show that increasing the number $m$ of time-varying sampling threshold sequences guarantees the RKA-based reconstruction, as it shrinks the space formed by the intersection of half-spaces (the inequality constraints in \eqref{eq:24}) around the true modulo signal $\Tilde{\mathbf{x}}$ inside the volume imposed by unlimited sampling. This volume is a \emph{cube} because the constraints applied to the modulo samples are $-\lambda\leq\Tilde{x}_{k}\leq\lambda$. As the number of one-bit sampling thresholds increases, the blue planes/lines representing the linear inequalities form a finite-volume space around the optimal point (the yellow circle inside the cube). The top panel shows the specific case of a trihedron (i.e., modulo samples $\Tilde{\mathbf{x}}\in\mathbb{R}^{3}$) to illustrate the effect of increasing the number of threshold sequences on the reconstruction performance. The bottom panel shows the same effect for a 2-D cross-section of the trihedron. The constraints are not enough to create a finite-volume space in Figs.~\ref{figure_1n}a and d. In Figs.~\ref{figure_1n}b and e, the constraints create the desired finite-volume polyhedron but are unable to capture the optimal point. Finally, in Figs.~\ref{figure_1n}c and f, the optimal point is successfully captured by the resulting finite-volume space.
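This shrinkage can also be checked numerically. The following Monte-Carlo sketch, under our own toy setup in $\mathbb{R}^{3}$, estimates the fraction of the cube $[-\lambda,\lambda]^{3}$ that satisfies all the inequality constraints in (\ref{eq:24}) as $m$ grows:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
lam, n = 0.5, 3
x_mod = rng.uniform(-lam, lam, n)           # a modulo signal in the cube
pts = rng.uniform(-lam, lam, (100_000, n))  # Monte-Carlo cube samples

for m in (2, 6, 20):
    Gamma = rng.normal(0.0, lam / 3, (n, m))  # judicious thresholds
    R = np.sign(x_mod[:, None] - Gamma)       # one-bit data
    # a point p is feasible iff r_k^(l)*(p_k - tau_k^(l)) >= 0 for all k, l
    feas = np.all(R * (pts[:, :, None] - Gamma) >= 0, axis=(1, 2))
    print(f"m = {m:2d}: feasible volume fraction = {feas.mean():.4f}")
\end{verbatim}

The feasible fraction shrinks rapidly with $m$, mirroring the behavior in Fig.~\ref{figure_1n}.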
The following theorem summarizes the UNO guarantees. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{rka_reconst.png} \caption{Reconstruction of the input signal from one-bit measurements using UNO when the ADC threshold is (a) $\lambda=1$, (b) $\lambda=0.5$, and (c) $\lambda=0.2$. (d)-(f) As in (a)-(c), respectively, but comparing the true unlimited samples with their reconstructed counterparts. } \label{figure_4} \end{figure*} \begin{theorem}[UNO sampling theorem] \label{Negar_Theorem} Assume $x(t)$ is a finite-energy bandlimited signal with maximum frequency $\Omega_{\textrm{max}}$. Let $\Tilde{x}_{k}$, $k \in \mathbb{Z}$, introduced in (\ref{eq:18}), be the modulo samples of $x(t)$ with sampling rate $1/T$. Assume $\bar{\Tilde{\mathbf{x}}}$ contains the modulo samples reconstructed by the RKA and define the reconstruction error as $\mathbf{e}=\bar{\Tilde{\mathbf{x}}}-\Tilde{\mathbf{x}}$. Then, the sufficient condition for the reconstruction of the bandlimited samples $x_{k}$ from the UNO samples $\left\{\mathbf{r}^{(\ell)}=\operatorname{sgn}\left(\mathcal{M}_{\lambda}\left(\mathbf{x}\right)-\boldsymbol{\uptau}^{(\ell)}\right)\right\}_{\ell=1}^{m}$, where $\boldsymbol{\uptau}^{(\ell)}\sim \mathcal{N}\left(\mathbf{0}, \frac{\lambda^{2}}{9}\mathbf{I}\right)$, up to additive multiples of $2\lambda$ is \begin{equation} \label{rageN} T \leq \frac{1}{2^{h}\Omega_{\textrm{max}} e}, \end{equation} where $h\in\mathbb{N}$ is given by \begin{equation} \label{rageN1} h\geq \frac{\log\left(\frac{2\beta_{x}}{\lambda}\right)}{\log\left(\frac{\lambda}{4\left\|\mathbf{e}\right\|_{\infty}}\right)}, \end{equation} and \begin{equation} \label{rageN2} \lambda\geq 4 \zeta\left\|\mathbf{e}\right\|_{\infty}, \quad \zeta>1. \end{equation} \end{theorem} \begin{IEEEproof} \label{Negarianproof} While reconstructing the modulo samples from the one-bit data via the RKA, the reconstructed modulo samples are represented by the linear model \begin{equation} \label{rageN3} \bar{\Tilde{\mathbf{x}}}=\Tilde{\mathbf{x}}+\mathbf{e}. \end{equation} The error in the RKA reconstruction may be viewed as \emph{noise} on the modulo samples. According to \cite[Theorem 3]{bhandari2021unlimited}, the sampling rate for the \emph{contaminated} modulo samples in \eqref{rageN3} to reconstruct the bandlimited samples $x_{k}$ such that $\bar{x}_{k}=x_{k}+e_{k}$ is $T \leq \frac{1}{2^{h}\Omega_{\textrm{max}} e}$, where $h\in\mathbb{N}$, and \begin{equation} \label{rageN4} \left\|\mathbf{e}\right\|_{\infty}\leq\frac{\lambda}{4} \left(\frac{2\beta_{x}}{\lambda}\right)^{-\frac{1}{h}}. \end{equation} Rearranging \eqref{rageN4} and taking logarithms yields \eqref{rageN1}. Moreover, to ensure that the denominator of \eqref{rageN1} is positive, we require $\frac{\lambda}{4\left\|\mathbf{e}\right\|_{\infty}}\geq\zeta>1$, leading to the lower bound $\lambda\geq 4\zeta\left\|\mathbf{e}\right\|_{\infty}$ on the ADC threshold. This completes the proof. \end{IEEEproof} Theorem~\ref{Negar_Theorem} provides the lower bound on the ADC threshold $\lambda$ in (\ref{rageN2}). The upper bound on $T$ for UNO sampling is lower than or equal to that of unlimited sampling (with equality when $h=1$), which corresponds to a higher sampling rate in UNO. As mentioned later in Section~\ref{subsec:sig_amp}, oversampling is a common scenario in one-bit quantization techniques and not a major concern in UNO implementation.
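To make the UNO front end concrete, a minimal sketch of the sampling pipeline (modulo folding followed by one-bit quantization against judicious thresholds) is given below. The centered-modulo expression matches the expansion used later in the proof of Theorem~\ref{theorem_4}; the function names are our own.

\begin{verbatim}
import numpy as np

def modulo_fold(x, lam):
    """Centered modulo: M_lam(x) = x - 2*lam*floor(x/(2*lam) + 1/2),
    folding amplitudes into [-lam, lam)."""
    return x - 2 * lam * np.floor(x / (2 * lam) + 0.5)

def uno_samples(x, lam, m, rng=None):
    """One-bit UNO measurements r^(l) = sgn(M_lam(x) - tau^(l)) with
    judicious thresholds tau^(l) ~ N(0, (lam/3)^2 I)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x_mod = modulo_fold(x, lam)
    Gamma = rng.normal(0.0, lam / 3, (x.size, m))
    R = np.sign(x_mod[:, None] - Gamma)
    return R, Gamma
\end{verbatim}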
Note that the resulting error $\mathbf{e}$ of the RKA is different from the noise considered in \cite[Theorem 3]{bhandari2021unlimited} in the sense that, unlike the latter, the corresponding reconstructed modulo samples in UNO obey $\left|\bar{\Tilde{x}}_{k}\right|<\lambda$. This ensures that $N$ in (\ref{eq:21}) guarantees $\Delta^{N}\bar{\mathbf{x}} \equiv \mathcal{M}_\lambda\left(\Delta^{N} \bar{\mathbf{x}}\right)$ or, equivalently, $\Delta^{N}\bar{\mathbf{x}}\equiv \mathcal{M}_{\lambda}\left(\Delta^{N}\bar{\Tilde{\mathbf{x}}}\right)$; we refer the reader to \cite{bhandari2020unlimited} for more details on this aspect. As a result, UNO reconstructs the input samples $x_{k}$ in the sense that $\bar{x}_{k}=x_{k}+e_{k}$ (up to additive multiples of $2\lambda$) with the same $N$ as in the noiseless unlimited sampling reconstruction of \cite[Section IV.B]{bhandari2021unlimited}.
\section{UNO Reconstruction: Numerical Illustrations and Error Analyses} \label{numerical_unlim} We assess the performance of the UNO reconstruction through extensive numerical experiments. In particular, we validate that the size of the cube imposed by self-reset ADCs (red contours and shaded regions in Fig.~\ref{figure_1n}) and, hence, the reconstruction error depend on the ADC threshold $\lambda$. We then investigate the effect of the input signal amplitude $\left\|\mathbf{x}\right\|_{\infty}$ on the reconstruction performance. In all experiments, we consider the same high-DR input signal as in Section~\ref{Guaranteed}. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{unlimited_amp.png} \caption{Reconstruction of the input signal from one-bit measurements using the UNO algorithm (Algorithm~\ref{algorithm_2}) when the ADC threshold is set to $\lambda=0.5$ and the input signal amplitude $\left\|\mathbf{x}\right\|_{\infty}$ is (a) $10$, (b) $15$, and (c) $20$. (d) As in (a) but for $\lambda=1$ and $\left\|\mathbf{x}\right\|_{\infty}=1000$. The inset shows the same plot on a larger scale. } \label{figure_5} \end{figure*} \subsection{Varying ADC Threshold} The number of time-varying sampling thresholds was set to $m=400$. In each experiment, the generated signals have the same $\text{DR}_{\mathbf{x}}=8$ but the ADC threshold $\lambda$ changes. For a given $\lambda$, the sequences of the time-varying sampling threshold are drawn randomly following the distribution $\left\{\boldsymbol{\uptau}^{(\ell)}\sim \mathcal{N}\left(\mathbf{0},\frac{\lambda^{2}}{9}\mathbf{I}\right)\right\}_{\ell=1}^{m}$. Fig.~\ref{figure_4} illustrates accurate UNO reconstruction for different values of $\lambda \in \{0.2,0.5,1\}$. Table~\ref{table_1} lists the reconstruction NMSE (as $10\log_{10}\operatorname{NMSE}$), averaged over $15$ experiments, for different values of $\lambda$. We observe that increasing $\lambda$ leads to a higher NMSE because the volume of the unlimited sampling cube grows and, consequently, more hyperplanes may be required to enclose a given volume around the optimal point in the feasible region. \subsection{Varying Input Signal Amplitude} \label{subsec:sig_amp} Here, we generated input signals with varying DRs. In each experiment, the ADC threshold was fixed to $\lambda=0.5$, for which we generated sequences of the time-varying sampling threshold as $\left\{\boldsymbol{\uptau}^{(\ell)}\sim \mathcal{N}\left(\mathbf{0},\frac{1}{36}\mathbf{I}\right)\right\}_{\ell=1}^{m}$. Fig.~\ref{figure_5} shows accurate UNO reconstruction for different values of $\left\|\mathbf{x}\right\|_{\infty}$. Table~\ref{table_2} reports the corresponding NMSE averaged over $15$ experiments. Next, we study the reconstruction of a signal with an extremely high DR, $\left\|x(t)\right\|_{\infty}=1000$. In theory, the unlimited sampling theorem guarantees reconstruction with $T \leq \frac{1}{2\Omega_{\textrm{max}} e}$. However, in practice, signal reconstruction from unlimited samples has its own limitations due to error propagation by the finite-difference operator. Specifically, for a large input-signal DR compared with the ADC threshold $\lambda$, the order $N$ of the difference operator must also be large. But a large $N$ amplifies the quantization/round-off noise, leading to an unstable reconstruction. In this scenario, more samples (given by the oversampling factor) are required to decrease $N$. Note that, unlike in conventional ADCs, an abundant number of samples does not lead to an increase in power consumption, manufacturing cost, and per-bit chip area in one-bit ADCs. Fig.~\ref{figure_5}d shows an accurate UNO reconstruction for $\lambda=1$ and a sampling rate $1/T$ that is $40$ times higher than in the previous experiments.
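A harness for such sweeps, assuming a \texttt{uno\_reconstruct} routine that chains the RKA step with the unfolding of Algorithm~\ref{algorithm_1}, could look as follows; the routine name is a placeholder for that full chain, not an implemented function.

\begin{verbatim}
import numpy as np

def average_nmse_db(x, lam, m, uno_reconstruct, n_trials=15):
    """Average UNO reconstruction NMSE (in dB) over repeated trials.
    `uno_reconstruct` stands in for the full RKA + unfolding chain."""
    errs = []
    for trial in range(n_trials):
        rng = np.random.default_rng(trial)
        R, Gamma = uno_samples(x, lam, m, rng)  # from the earlier sketch
        x_bar = uno_reconstruct(R, Gamma, lam)
        errs.append(np.sum((x - x_bar) ** 2) / np.sum(x ** 2))
    return 10 * np.log10(np.mean(errs))

# e.g., sweeping lam over {0.2, 0.5, 1.0} at m = 400 mirrors the
# lambda experiments reported above
\end{verbatim}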
\begin{table}[t] \caption{Averaged UNO reconstruction NMSE for fixed $\mathbf{x}$} \centering \begin{tabular}{ c || c | c | c } \hline $\lambda$ & $0.2$ & $0.5$ & $1$ \\ \hline $10\log_{10}\operatorname{NMSE}$ & $-72.721$ & $-67.660$ & $-60.987$ \\ \hline \end{tabular} \label{table_1} \end{table} Although UNO and the one-bit $\Sigma\Delta$ method \cite{graf2019one} differ in their theoretical foundations and applications, here we compare their reconstruction performance for the same signal. The ADC threshold was set to $\lambda=1$ and the sequences of the time-varying sampling threshold were drawn as $\left\{\boldsymbol{\uptau}^{(\ell)}\sim \mathcal{N}\left(\mathbf{0},\frac{1}{9}\mathbf{I}\right)\right\}_{\ell=1}^{m}$. For the specific case of $\left\|\mathbf{x}\right\|_{\infty}=40$, Fig.~\ref{figure_6m} compares the UNO-reconstructed signal $\bar{\mathbf{x}}$ with the one-bit unlimited $\Sigma\Delta$-reconstructed signal $\bar{\mathbf{x}}_{\Sigma\Delta}$ when the ratio between the input signal amplitude and the ADC threshold, $\eta=\frac{\left\|\mathbf{x}\right\|_{\infty}}{\lambda}$, is large. The one-bit unlimited $\Sigma\Delta$ degenerates over some parts of the input samples, while UNO accurately reconstructs the signal. Table~\ref{table_3} further compares the reconstruction NMSE, averaged over $15$ experiments, of both sampling methods for different amplitudes $\left\|\mathbf{x}\right\|_{\infty}\in\left\{20,50\right\}$. The degradation in the one-bit $\Sigma\Delta$ reconstruction for large $\eta$ is due to round-off noise in software and, primarily, imperfect noise shaping in the $\Sigma\Delta$ conversion, which corrupts some samples. \begin{table}[t] \caption{Averaged UNO reconstruction NMSE for $\lambda=0.5$} \centering \begin{tabular}{ c || c | c | c } \hline $\left\|\mathbf{x}\right\|_{\infty}$ & $10$ & $15$ & $20$ \\[0.5 ex] \hline $10\log_{10}\operatorname{NMSE}$ & $-63.925$ & $-65.820$ & $-63.969$ \\ \hline \end{tabular} \label{table_2} \end{table} \begin{figure}[t] \center{\includegraphics[width=0.55\textwidth]{Comparison_sigmadelta.png}} \caption{A comparison of reconstruction via UNO and one-bit unlimited $\Sigma\Delta$ when $\lambda=1$ and $\left\|\mathbf{x}\right\|_{\infty}=40$. } \label{figure_6m} \end{figure} \begin{table}[t] \centering \caption{Reconstruction $10\log_{10}\operatorname{NMSE}$ for $\lambda=1$} \begin{tabular}{ c | c | c } \hline $\left\|\mathbf{x}\right\|_{\infty}$ & \textbf{One-bit unlimited} $\boldsymbol{\Sigma}\boldsymbol{\Delta}$ & \textbf{UNO}\\ [0.5 ex] \hline \hline $20$ & $0.402$ & $-63.969$ \\ \hline $50$ & $3.777$ & $-62.081$ \\ \hline \end{tabular} \label{table_3} \end{table} \subsection{Analysis of Reconstruction Error} \label{error_UNO} To ensure a bounded reconstruction error, the feasible region in (\ref{eq:24}) cannot have an infinite volume once the amplitude constraints of unlimited sampling are imposed. As mentioned before, by introducing more samples, it is possible to obtain a polyhedron with a bounded volume that contains the desired point.
Further, as illustrated in Fig.~\ref{figure_1n}, adding more inequality constraints to (\ref{eq:24}) shrinks this polyhedron. We now prove, in a probabilistic sense, that increasing the number of samples drives the reconstruction error toward zero and that the resulting overdetermined linear system of inequalities guarantees the convergence of the RKA \cite{eamaz2022phase,leventhal2010randomized,briskman2015block}. In other words, with an abundant number of samples (i.e., one-bit oversampling), the probability of creating the finite-volume space around the desired point $\Tilde{\mathbf{x}}^{\star}$ increases. Define the distance between the optimal solution $\Tilde{\mathbf{x}}^{\star}$ and the $j$-th hyperplane of (\ref{eq:24}) as \begin{equation} \label{distance} \begin{aligned} d_{j}\left(\Tilde{\mathbf{x}}^{\star},\boldsymbol{\uptau}^{(\ell)}\right) &= \left\|\upomega_{j}\left(\Tilde{\mathbf{x}}^{\star}-\boldsymbol{\uptau}^{(\ell)}\right)\right\|^{2}_{2},\quad j\in\left\{1,\cdots,m^{\prime}\right\}, \end{aligned} \end{equation} where $\upomega_{j}$ is the $j$-th row of $\Tilde{\boldsymbol{\Omega}}$. This distance is also the residual error\footnote{In a linear feasibility problem $\mathbf{C}\mathbf{x}\leq \mathbf{b}$, the \emph{residual error} is defined as $\left\|\mathbf{C}\mathbf{x}^{\star}-\mathbf{b}\right\|_{2}^{2}$ \cite{van1996matrix}.} of (\ref{eq:24}). Intuitively, reducing the distances between $\Tilde{\mathbf{x}}^{\star}$ and the constraint-associated hyperplanes generally increases the possibility of capturing the optimal point. For a specific sample size $m^{\prime}=m n$, when the volume of the finite space around the optimal point is reduced, the mean of $\left\{d_{j}\left(\Tilde{\mathbf{x}}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)\right\}_{j=1}^{m^{\prime}}$ \cite{leventhal2010randomized}, i.e., $T_{\text{ave}}=\frac{1}{m^{\prime}}\sum^{m^{\prime}}_{j=1}d_{j}\left(\Tilde{\mathbf{x}}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)$, also decreases. Denote $D_{\ell}\left(\Tilde{\mathbf{x}}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)=\left\|\Tilde{\mathbf{x}}^{\star}-\boldsymbol{\uptau}^{(\ell)}\right\|^{2}_{2}$. Since each block of $n$ coordinate rows associated with the $\ell$-th threshold sequence contributes $D_{\ell}\left(\Tilde{\mathbf{x}}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)$ to the sum, $T_{\text{ave}}$ becomes \begin{equation} \label{6566} \begin{aligned} T_{\text{ave}} = \frac{1}{m^{\prime}}\sum^{m}_{\ell=1}D_{\ell}\left(\Tilde{\mathbf{x}}^{\star},\boldsymbol{\uptau}^{(\ell)}\right). \end{aligned} \end{equation} In the one-bit phase retrieval approach studied in \cite{eamaz2022phase}, a \emph{Chernoff bound} was derived to quantify the possibility of creating the above-mentioned finite-volume space and the number of samples required by the RKA. We apply this result below in Theorem~\ref{theorem_0} to the UNO reconstruction from one-bit samples. Here, we have replaced the error between the true signal and the initial value, $\left\|\Tilde{\mathbf{x}}_{0}-\Tilde{\mathbf{x}}^{\star}\right\|^{2}_{2}$, with the residual error $\left\|\upomega_{j}\left(\Tilde{\mathbf{x}}^{\star}-\boldsymbol{\uptau}^{(\ell)}\right)\right\|^{2}_{2}$ in the Chernoff bound. The latter captures the distance between the hyperplanes and the true value $\Tilde{\mathbf{x}}^{\star}$ by including the sampling threshold sequences in its expression.
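Under the same toy setup as in the earlier sketches, the average distance (\ref{6566}) can be estimated directly; the helper below is our own illustration of why thresholds whose DR matches that of the folded signal keep $T_{\text{ave}}$ small.

\begin{verbatim}
import numpy as np

def t_ave(x_star, Gamma):
    """Average residual distance T_ave for coordinate-type rows:
    T_ave = (1/(m*n)) * sum_l ||x_star - tau^(l)||^2."""
    n, m = Gamma.shape
    D = np.sum((x_star[:, None] - Gamma) ** 2, axis=0)  # D_l for each l
    return D.sum() / (m * n)
\end{verbatim}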
\begin{theorem}\cite[Theorem 1]{eamaz2022phase} \label{theorem_0} Assume the distances $\left\{d_{j}\left(\Tilde{\mathbf{x}}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)\right\}_{j=1}^{m^{\prime}}$ between the desired point $\Tilde{\mathbf{x}}^{\star}$ and the hyperplanes of the polyhedron defined in (\ref{eq:24}) are independent and identically distributed random variables. Then, \begin{enumerate} \item The Chernoff bound of $T_{\text{ave}}$ is \begin{equation} \label{eq:theorem_cher} \operatorname{Pr}\left(\frac{1}{m^{\prime}}\sum^{m^{\prime}}_{j=1}d_{j}\left(\Tilde{\mathbf{x}}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)\leq a\right)\geq 1-\inf_{t\geq 0}\frac{M_{T}}{e^{t a}}, \end{equation} where $a$ is the average distance at which the finite-volume space around the desired signal is created, and \begin{equation} \label{eq:psi} M_{T} = \left(1+t\frac{\mu^{(1)}_{d_{j}}}{m^{\prime}}+\cdots+t^{\kappa}\frac{\mu^{(\kappa)}_{d_{j}}}{\kappa!m^{\prime\kappa}}+\mathcal{O}\left(m^{\prime}\right)\right)^{m^{\prime}}, \end{equation} is the moment generating function (MGF) of $T_{\text{ave}}$, $\mu^{(\kappa)}_{d_{j}}=\mathbb{E}\left\{d^{\kappa}_{j}\right\}$, and $\mathcal{O}\left(m^{\prime}\right)$ denotes the higher-order terms. \item $M_{T}$ decreases with an increase in the sample size, leading to an increase in the lower bound in (\ref{eq:theorem_cher}). \end{enumerate} \end{theorem} Theorem~\ref{theorem_0} states that the abundant number of samples in conventional one-bit quantization significantly affects the reconstruction performance of the RKA for the system of linear inequalities in (\ref{eq:24}). Based on this result, Claim~\ref{clm} shows the efficacy of UNO sampling. \begin{claim} \label{clm} Increasing the number of time-varying sampling threshold sequences $m$ is not, by itself, an effective approach to guarantee the desired signal reconstruction with the RKA without using unlimited sampling. \end{claim} \begin{IEEEproof} For the RKA-based reconstruction in Section~\ref{sec:onebit_rec}, assume that we increase the number of time-varying sampling threshold sequences from $m$ to $m+\kappa$. Therefore, from (\ref{eq:theorem_cher}) of Theorem~\ref{theorem_0}, the Chernoff bound of the reconstruction error is \begin{equation} \label{cor_1} \operatorname{Pr}\left(\frac{1}{(m+\kappa)n}\sum^{(m+\kappa)n}_{j=1}d_{j}\left(\mathbf{x}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)\leq a\right)\geq 1-P_{T}, \end{equation} where $P_{T}=\inf_{t\geq 0}\frac{M_{T}}{e^{ta}}$ and $d_{j}\left(\mathbf{x}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)=\left\|\upomega_{j}\left(\mathbf{x}^{\star}-\boldsymbol{\uptau}^{(\ell)}\right)\right\|^{2}_{2}$. Without loss of generality, assume $x_{k}=\text{DR}_{\mathbf{x}}$ for $x_{k}>0$, and $\delta=x_{k}-\text{DR}_{\boldsymbol{\uptau}}$ when $\text{DR}_{\mathbf{x}}>\text{DR}_{\boldsymbol{\uptau}}$. The infimum of the distance $\frac{1}{(m+\kappa)n}\sum^{(m+\kappa)n}_{j=1}d_{j}\left(\mathbf{x}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)$ in (\ref{cor_1}) occurs when $\tau_{k}^{(\ell)}=\text{DR}_{\boldsymbol{\uptau}}$ for $\ell\in\left\{1,\cdots,m+\kappa\right\}$.
As a result, this infimum is \begin{equation} \label{cor_2} \begin{aligned} \bar{d}&=\inf\left(\frac{1}{(m+\kappa)n}\sum^{(m+\kappa)n}_{j=1}d_{j}\left(\mathbf{x}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)\right),\\&=\inf\left(\frac{1}{(m+\kappa)n}\sum^{(m+\kappa)(n-1)}_{j=1}d_{j}\left(\mathbf{x}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)\right)+\frac{(m+\kappa)\delta^{2}}{(m+\kappa)n},\\&=\inf\left(\frac{1}{(m+\kappa)n}\sum^{(m+\kappa)(n-1)}_{j=1}d_{j}\left(\mathbf{x}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)\right)+\frac{\delta^{2}}{n}. \end{aligned} \end{equation} The term $\frac{\delta^{2}}{n}$ in (\ref{cor_2}) does not depend on the number of time-varying sampling thresholds $m+\kappa$. In other words, increasing the value of $m$ does not guarantee the reconstruction of the desired signal in the polyhedron (\ref{eq:8n}) via the RKA. This phenomenon is also observed in connection with $P_{T}$. A considerable difference between the signal values and the thresholds leads to larger values of $\left\{d_{j}\left(\mathbf{x}^{\star},\boldsymbol{\uptau}^{(\ell)}\right)\right\}$, thereby increasing the moments $\left\{\mu^{(\kappa)}_{d_{j}}\right\}$ and hence the MGF $M_{T}$. Therefore, increasing $m$ alone does not reduce $P_{T}$. \end{IEEEproof} Note that by using unlimited sampling and imposing amplitude constraints, the considered distances become bounded and $T_{\text{ave}}$ in (\ref{6566}) is guaranteed to fall below a small $a$. Then, the volume of the resulting finite space will be smaller than that of the cube imposed by unlimited sampling. In Fig.~\ref{figure_2n}, we show that the UNO reconstruction NMSE, averaged over $15$ experiments, improves significantly as the number of time-varying threshold sequences $m$ increases. The ADC threshold was set to $\lambda=0.5$ and the signal DR was $\left\|\mathbf{x}\right\|_{\infty}=20$. \begin{figure}[t] \centering \includegraphics[width=0.70\textwidth]{number_samples_v02.png} \caption{Average NMSE for RKA-based UNO reconstruction versus the number of time-varying threshold sequences $m$, for $\lambda=0.5$ and $\left\|\mathbf{x}\right\|_{\infty}=20$. } \label{figure_2n} \end{figure} \begin{remark} \label{rageNremark} According to Theorem~\ref{theorem_0}, when the number of time-varying threshold sequences $m$ is increased, the reconstruction error $\mathbf{e}=\bar{\Tilde{\mathbf{x}}}-\Tilde{\mathbf{x}}$ and $\|\mathbf{e}\|_{\infty}$ become smaller. We then have a smaller lower bound on $h$ defined in \eqref{rageN1} and, consequently, a lower sampling rate based on \eqref{rageN}. In other words, a larger $m$ yields a smaller UNO oversampling factor. \end{remark} \section{Reconstruction in the Presence of Noise} \label{sec:noise} In the presence of noise, one-bit $\Sigma\Delta$ sampling currently lacks guarantees similar to those above. In the one-bit noisy models of \cite{zymnis2009compressed,khobahi2020model}, a linear measurement model with additive Gaussian noise was considered, and the input signal was then recovered based on the MLE formulation of the Gaussian likelihood function. However, in the case of non-Gaussian contamination, the MLE objective is nonconvex and the recovered solution is not unique. Moreover, MLE-based reconstruction is computationally more complex for high-dimensional signals.
Previously, for unlimited sampling, \cite{bhandari2020unlimited} has shown recovery of noisy bandlimited samples from their modulo samples up to an unknown additive constant, where the noise is entry-wise additive to the modulo samples, i.e., $\Tilde{\mathbf{y}}=\Tilde{\mathbf{x}}+\boldsymbol{\epsilon}$, and $\boldsymbol{\epsilon}$ is the noise vector. Contrary to this, we propose an approach to reconstruct the unlimited one-bit sampled signal when the noise is additive to the input signal, which itself is linearly related to a desired parameter vector. This linear model for the noisy measurement $\mathbf{y}$ is \begin{equation} \label{eq:1000} \begin{aligned} \mathbf{y}&=\mathbf{x}+\boldsymbol{\epsilon},\\ \mathbf{x} &= \mathbf{A}\boldsymbol{\uptheta},\quad \mathbf{A}\in\mathbb{R}^{r\times s}, \end{aligned} \end{equation} where $\boldsymbol{\uptheta}$ is the desired parameter vector and the noise follows the distribution $\boldsymbol{\epsilon}\sim\mathcal{N}\left(\mathbf{0},\sigma^{2}_{\boldsymbol{\epsilon}} \mathbf{I}_{r}\right)$. Here, the entries of $\mathbf{y}$ may lie outside $[-\lambda, \lambda]$. Our goal is to estimate $\boldsymbol{\uptheta}$ from the UNO samples of the noisy measurement $\mathbf{y}$ obtained as \begin{equation} \label{eq:10001} \begin{aligned} \mathbf{r}^{(\ell)} = \operatorname{sgn}(\mathcal{M}_{\lambda}\left(\mathbf{y}\right)-\boldsymbol{\uptau}^{(\ell)}),\quad \ell\in \mathcal{L}. \end{aligned} \end{equation} Our recovery approach comprises using the RKA and Algorithm~\ref{algorithm_1} (with $N$ specified by (\ref{eq:21})) to reconstruct the noisy measurements from one-bit data, and then exploiting the PnP-ADMM method to estimate the desired parameters from overdetermined or underdetermined linear systems.
\subsection{PnP-ADMM-Based UNO Reconstruction} \label{sec_LASSO} From the UNO samples \eqref{eq:10001}, we reconstruct $\mathbf{y}$ via Algorithm~\ref{algorithm_2}. The reconstructed signal $\bar{\mathbf{y}}$ also follows the linear model (\ref{eq:1000}). Therefore, we use $\bar{\mathbf{y}}$ to estimate $\boldsymbol{\uptheta}$ through the regularized problem
\begin{equation} \label{Neg1} \widehat{\boldsymbol{\uptheta}}= \arg \min_{\boldsymbol{\uptheta}}~\|\bar{\mathbf{y}}-\mathbf{A} \boldsymbol{\uptheta}\|_{2}^{2} + \eta \rho(\boldsymbol{\uptheta}), \end{equation} where $\rho(\boldsymbol{\uptheta})$ is the penalty term and $\eta>0$ is a real-valued regularization parameter. There is a rich body of literature on selecting the penalty function $\rho(\cdot)$, including the $\ell_{1}$-norm \cite{tibshirani1996regression}, the smoothly clipped absolute deviation (SCAD) \cite{fan2001variable}, the adaptive least absolute shrinkage and selection operator (LASSO) \cite{zou2006adaptive}, and the minimax-concave (MC) penalty, which is related to Huber functions \cite{selesnick2017sparse}. One of the standard approaches to solving regularized problems such as (\ref{Neg1}) is the ADMM, which relies on variable splitting \cite{boyd2011distributed}. We consider \begin{equation} \label{Neg2} \widehat{\boldsymbol{\uptheta}}= \arg \min_{\boldsymbol{\uptheta}}~\|\bar{\mathbf{y}}-\mathbf{A} \boldsymbol{\uptheta}\|_{2}^{2} + \eta \rho(\upnu)\quad\text{subject to}\quad\boldsymbol{\uptheta}=\upnu. \end{equation} Using the augmented Lagrangian, we reformulate problem (\ref{Neg2}) as \begin{equation} \label{Neg3} \underset{\boldsymbol{\uptheta}, \upnu}{\textrm{minimize}} \max_{\mathbf{p}}\left\{\|\bar{\mathbf{y}}-\mathbf{A} \boldsymbol{\uptheta}\|_{2}^{2}+\eta \rho(\upnu)+\mathbf{p}^{\top}(\boldsymbol{\uptheta}-\upnu)+\frac{\beta}{2}\|\boldsymbol{\uptheta}-\upnu\|^2\right\}, \end{equation} where $\mathbf{p}$ is the dual variable and $\beta$ is a real-valued design parameter. Denote $\mathbf{u}=\frac{\mathbf{p}}{\beta}$. Then, \begin{equation} \label{Neg4} \underset{\boldsymbol{\uptheta}, \upnu}{\textrm{minimize}} \max_{\mathbf{u}}\left\{\|\bar{\mathbf{y}}-\mathbf{A} \boldsymbol{\uptheta}\|_{2}^{2}+\eta \rho(\upnu)+\frac{\beta}{2}\|\boldsymbol{\uptheta}-\upnu+\mathbf{u}\|^2-\frac{\beta}{2}\|\mathbf{u}\|^2\right\}. \end{equation} The ADMM tackles (\ref{Neg4}) by alternately minimizing over $\boldsymbol{\uptheta}$ and $\upnu$. The $\upnu$-update is essentially a denoising of $\boldsymbol{\uptheta}_{k}+\mathbf{u}_{k-1}$ under the regularization $\eta \rho(\upnu)$. This is the key idea behind PnP-ADMM, where the proximal projection \begin{equation} \label{Neg2000} \upnu_{k}=\arg\min_{\upnu}\left\{\eta\rho(\upnu)+\frac{\beta}{2}\|\upnu-\boldsymbol{\uptheta}_{k}-\mathbf{u}_{k-1}\|^2\right\} \end{equation} is replaced with an appropriate denoiser $\mathcal{D}(\cdot)$. For further details on various denoisers used in PnP techniques, we refer the interested reader to \cite{venkatakrishnan2013plug}. Algorithm~\ref{algorithm_3} summarizes the noisy UNO reconstruction procedure. \begin{algorithm}[H] \caption{Noisy UNO algorithm.} \label{algorithm_3} \begin{algorithmic}[1] \Statex \textbf{Input:} Sequences of one-bit measurements $\left\{\mathbf{r}^{(\ell)}=\operatorname{sgn}\left(\mathcal{M}_{\lambda}\left(\mathbf{y}\right)-\boldsymbol{\uptau}^{(\ell)}\right)\right\}_{\ell=1}^{m}$, where $\mathbf{y}$ follows (\ref{eq:1000}), $\boldsymbol{\uptau}^{(\ell)}\sim \mathcal{N}\left(\mathbf{0},\sigma^{2}_{\boldsymbol{\uptau}}\mathbf{I}\right)$, ADC threshold $\lambda$, design parameters $\eta$ and $\beta$, total number of iterations $k_{\textrm{max}}$. \Statex \textbf{Output:} The approximation of the parameter of interest, $\widehat{\boldsymbol{\uptheta}}$. \vspace{1.2mm} \State $\mathbf{R} \gets \left\{\mathbf{r}^{(\ell)}\right\}_{\ell=1}^{m}$.
\vspace{1.2mm} \State $\sigma_{\boldsymbol{\uptau}} \gets \frac{\lambda}{3}$ \vspace{1.2mm} \State $\boldsymbol{\Gamma} \gets \left\{\boldsymbol{\uptau}^{(\ell)}\right\}_{\ell=1}^{m}$ \vspace{1.2mm} \State $\boldsymbol{\Omega}^{(\ell)} \gets \operatorname{diag}\left(\mathbf{r}^{(\ell)}\right)$ \vspace{1.2mm} \State $\Tilde{\boldsymbol{\Omega}}\gets\left[\begin{array}{c|c|c} \boldsymbol{\Omega}^{(1)} &\cdots &\boldsymbol{\Omega}^{(m)} \end{array}\right]^{\top}$ \vspace{1.2mm} \State $\Tilde{\mathcal{P}}\gets \left\{\bar{\Tilde{\mathbf{y}}} \mid \Tilde{\boldsymbol{\Omega}} \bar{\Tilde{\mathbf{y}}}\succeq \operatorname{vec}\left(\mathbf{R}\right)\odot \operatorname{vec}\left(\boldsymbol{\Gamma}\right)\right\}$ $\triangleright$ $\bar{\Tilde{\mathbf{y}}}$ denotes the modulo samples reconstructed via the RKA. \vspace{1.2mm} \State Reconstruct $\bar{\mathbf{y}}$ from $\bar{\Tilde{\mathbf{y}}}$ with Algorithm~\ref{algorithm_1}. \vspace{1.2mm} \For{$k=1:k_{\textrm{max}}$} \State $\boldsymbol{\uptheta}_{k}\gets\min_{\boldsymbol{\uptheta}}\left\{\|\bar{\mathbf{y}}-\mathbf{A} \boldsymbol{\uptheta}\|_{2}^{2}+\frac{\beta}{2}\|\boldsymbol{\uptheta}-\upnu_{k-1}+\mathbf{u}_{k-1}\|^2\right\}$. \vspace{1.2mm} \State $\upnu_{k}\gets \mathcal{D}\left(\boldsymbol{\uptheta}_{k}+\mathbf{u}_{k-1}\right).$ \vspace{1.2mm} \State $\mathbf{u}_{k}\gets\mathbf{u}_{k-1}+\boldsymbol{\uptheta}_{k}-\upnu_{k}$. \EndFor \vspace{1.2mm} \State \Return $\widehat{\boldsymbol{\uptheta}}\gets \boldsymbol{\uptheta}_{k_{\textrm{max}}}$. \end{algorithmic} \end{algorithm} \subsection{ADC Threshold Selection in Noisy UNO} Theorem~\ref{theorem_4} certifies that additive noise in the input signal results in additive noise in the modulo domain. \begin{theorem} \label{theorem_4} Assume the noise vector in the measurement model $\mathbf{y}=\mathbf{x}+\mathbf{z}$ to be $\mathbf{z}=\left[z_{k}\right]\sim \mathcal{N}\left(\mathbf{0},\sigma^{2}_{\mathbf{z}} \mathbf{I}\right)$. Denote $\Tilde{\mathbf{x}}=\mathcal{M}_{\lambda}\left(\mathbf{x}\right)$ and $\Tilde{\mathbf{z}}=\left[\Tilde{z}_{k}\right]$, with $\Tilde{z}_{k}=\operatorname{mod}\left(z_{k},2\lambda\right)-2(1-q_{k})\lambda$, $q_{k}\in\{0,1\}$. Then, \begin{equation} \label{eq:n_1} \Tilde{\mathbf{y}}=\Tilde{\mathbf{x}}+\Tilde{\mathbf{z}}, \end{equation} where $\Tilde{\mathbf{y}}=\mathcal{M}_{\lambda}\left(\mathbf{y}\right)$. \end{theorem} \begin{IEEEproof} Applying the modulo operator $\mathcal{M}_{\lambda}$ in (\ref{eq:18}) to the noisy measurements $\mathbf{y}$ produces \begin{equation} \label{eq:n_2} \begin{aligned} \Tilde{\mathbf{y}}&=\mathcal{M}_{\lambda}\left(\mathbf{y}\right)=\mathcal{M}_{\lambda}\left(\mathbf{x}+\mathbf{z}\right)\\&=\mathbf{x}+\mathbf{z}-2\lambda\left\lfloor\frac{\mathbf{x}}{2\lambda}+\frac{1}{2}+\frac{\mathbf{z}}{2\lambda}\right\rfloor, \end{aligned} \end{equation} where $\mathbf{z}\sim \mathcal{N}\left(\mathbf{0},\sigma^{2}_{\mathbf{z}} \mathbf{I}\right)$.
Since $\left\lfloor a+b\right\rfloor\geq\left\lfloor a\right\rfloor+\left\lfloor b\right\rfloor$ for two arbitrary real numbers $a$ and $b$, it follows from (\ref{eq:n_2}) that \begin{equation} \label{eq:n_3} \begin{aligned} \mathcal{M}_{\lambda}\left(\mathbf{x}+\mathbf{z}\right)&=\mathbf{x}+\mathbf{z}-2\lambda\left\lfloor\frac{\mathbf{x}}{2\lambda}+\frac{1}{2}+\frac{\mathbf{z}}{2\lambda}\right\rfloor\\&\leq\mathbf{x}-2\lambda\left\lfloor\frac{\mathbf{x}}{2\lambda}+\frac{1}{2}\right\rfloor+\mathbf{z}-2\lambda\left\lfloor\frac{\mathbf{z}}{2\lambda}\right\rfloor\\&=\Tilde{\mathbf{x}}+\mathbf{z}-2\lambda\left\lfloor\frac{\mathbf{z}}{2\lambda}\right\rfloor=\Tilde{\mathbf{x}}+\operatorname{mod}\left(\mathbf{z},2\lambda\right). \end{aligned} \end{equation} Using the identity $\left\lfloor a+b\right\rfloor\leq\left\lfloor a\right\rfloor+\left\lfloor b\right\rfloor+1$, we obtain \begin{equation} \label{eq:n_4} \mathcal{M}_{\lambda}\left(\mathbf{x}+\mathbf{z}\right)\geq\Tilde{\mathbf{x}}+\operatorname{mod}\left(\mathbf{z},2\lambda\right)-2\lambda. \end{equation} Since the two bounds differ by exactly $2\lambda$ and $\mathcal{M}_{\lambda}\left(x_{k}+z_{k}\right)$ differs from $\Tilde{x}_{k}+\operatorname{mod}\left(z_{k},2\lambda\right)$ by an integer multiple of $2\lambda$, each modulo sample equals one of the two right-hand sides of (\ref{eq:n_3}) and (\ref{eq:n_4}), i.e., \begin{equation} \label{morad} \begin{aligned} \Tilde{y}_{k}&=\mathcal{M}_{\lambda}\left(x_{k}+z_{k}\right)\\ &=q_{k}\left(\Tilde{x}_{k}+\operatorname{mod}\left(z_{k},2\lambda\right)\right)+(1-q_{k})\left(\Tilde{x}_{k}+\operatorname{mod}\left(z_{k},2\lambda\right)-2\lambda\right),\\&=\Tilde{x}_{k}+\operatorname{mod}\left(z_{k},2\lambda\right)-2(1-q_{k})\lambda, \end{aligned} \end{equation} where $q_{k}\in\{0,1\}$. Rewriting (\ref{morad}) as \begin{equation} \label{ghoor} \Tilde{y}_{k}=\Tilde{x}_{k}+\Tilde{z}_{k}, \end{equation} where $\Tilde{z}_{k}=\operatorname{mod}\left(z_{k},2\lambda\right)-2(1-q_{k})\lambda$, completes the proof. \end{IEEEproof} It follows from Theorem~\ref{theorem_4} that the noise corruption in the input signal carries over to the modulo samples. The following theorem unveils the UNO reconstruction guarantee in the presence of noise. \begin{figure*}[t] \centering \includegraphics[width=1.0\columnwidth]{noise_reconst.png} \caption{Reconstruction of the desired parameter vector $\boldsymbol{\theta}$ following the linear model (\ref{eq:1000}) using PnP-ADMM-based UNO for (a) an overdetermined system with $\mathbf{A}\in\mathbb{R}^{728\times 100}$ and (b) an underdetermined system with $\mathbf{A}\in\mathbb{R}^{728\times 1000}$. Here, to facilitate a better visual presentation, the number of threshold sequences starts from $m=500$. (c) Reconstruction of the noisy input signal from one-bit measurements using PnP-ADMM-based UNO. } \label{figure_7} \end{figure*} \begin{theorem}[UNO sampling with noise] \label{theorem_5} Assume $x(t)$ is a finite-energy bandlimited signal with maximum frequency $\Omega_{\textrm{max}}$. Let $y(t)$ denote the noisy signal following the linear model $y(t)=x(t)+z(t)$, where $z(t)$ denotes the additive noise signal. Denote the pre-filtered versions of $y(t)$, $x(t)$, and $z(t)$ by, respectively, $y_{\phi}(t)$, $x_{\phi}(t)$, and $z_{\phi}(t)$, where $\phi\in\textrm{PW}_{\Omega}$ is a low-pass filter with cut-off frequency $\Omega_{\textrm{max}}$, so that $y_{\phi}(t)=x_{\phi}(t)+z_{\phi}(t)$.
\begin{theorem} \label{theorem_5}(UNO sampling with noise) Assume $x(t)$ to be a finite energy, bandlimited signal with maximum frequency $\Omega_{\textrm{max}}$. Let $y(t)$ denote the noisy signal following the linear model $y(t)=x(t)+z(t)$, where $z(t)$ denotes the additive noise signal. Denote the pre-filtered $y(t)$, $x(t)$ and $z(t)$ by, respectively, $y_{\phi}(t)$, $x_{\phi}(t)$ and $z_{\phi}(t)$, where $\phi\in\textrm{PW}_{\Omega}$ with cut-off frequency $\Omega_{\textrm{max}}$, so that $y_{\phi}(t)=x_{\phi}(t)+z_{\phi}(t)$. Denote the samples of $y_{\phi}(t)$, $x_{\phi}(t)$, $z_{\phi}(t)$ and the modulo samples of $y_{\phi}(t)$ by, respectively, $\left(y_{\phi}\right)_{k}$, $\left(x_{\phi}\right)_{k}$, $\left(z_{\phi}\right)_{k}$ and $\left(\Tilde{y}_{\phi}\right)_{k}$, where the sampling rate is $1/T$. The modulo samples reconstructed by the RKA are denoted by $\bar{\Tilde{\mathbf{y}}}_{\phi}$, with reconstruction error $\mathbf{e}=\left(\bar{\Tilde{\mathbf{y}}}_{\phi}-\Tilde{\mathbf{y}}_{\phi}\right)$. Then, the sufficient condition to reconstruct bandlimited samples $x_{k}$ from UNO samples $\left\{\mathbf{r}^{(\ell)}=\operatorname{sgn}\left(\mathcal{M}_{\lambda}\left(\mathbf{y}_{\phi}\right)-\boldsymbol{\uptau}^{(\ell)}\right)\right\}_{\ell=1}^{m}$, where $\boldsymbol{\uptau}^{(\ell)}\sim \mathcal{N}\left(\mathbf{0}, \frac{\lambda^{2}}{9}\mathbf{I}\right)$, up to additive multiples of $2\lambda$ in the sense that $\left(\bar{\Tilde{y}}_{\phi}\right)_{k}=x_{k}+\left(\Tilde{z}_{\phi}\right)_{k}+e_{k}$, is \begin{equation} \label{eq:n_5} T \leq \frac{1}{2^{h}\Omega_{\textrm{max}} e}, \end{equation} where $\left(\Tilde{z}_{\phi}\right)_{k}$ is defined in Theorem~\ref{theorem_4} and $h\in\mathbb{N}$ is determined by \begin{equation} \label{eq:n_6} h\geq \frac{\log\left(\frac{2\beta_{x}}{\lambda}\right)}{\log\left(\frac{\lambda}{4\left\|\Tilde{\mathbf{z}}_{\phi}+\mathbf{e}\right\|_{\infty}}\right)}, \end{equation} with \begin{equation} \label{eq:n_7} \lambda\geq 4\zeta\left\|\Tilde{\mathbf{z}}_{\phi}+\mathbf{e}\right\|_{\infty}, \quad \zeta>1. \end{equation} \end{theorem} \begin{IEEEproof} The proof, \textit{ceteris paribus}, follows from repeating the proof of Theorem~\ref{Negar_Theorem} and replacing the RKA error $\mathbf{e}$ with $\Tilde{\mathbf{z}}_{\phi}+\mathbf{e}$. \end{IEEEproof} According to Theorem~\ref{theorem_5}, noisy UNO sampling requires a higher oversampling factor, specified by \eqref{eq:n_5}, than the noiseless case in \eqref{rageN}. This is similar to other conventional noisy samplers. For example, Cadzow denoising \cite{cadzow1988signal}, used to suppress the effect of noise in sparse samplers, similarly requires such an oversampling \cite{rudresh2017finite}. \subsection{Numerical Examples} We investigated PnP-ADMM-based noisy UNO reconstruction with $\mathbf{A}=\left[a_{ij}\right]$, where $a_{ij}\sim\mathcal{N}\left(0,1\right)$, and $\mathbf{y}=\mathbf{y}_{t}+\boldsymbol{\epsilon}$, where $\mathbf{y}_{t}$ was generated as in Section~\ref{Guaranteed} and $\boldsymbol{\epsilon}\sim\mathcal{N}\left(\mathbf{0},\sigma^{2}_{\boldsymbol{\epsilon}} \mathbf{I}_{m}\right)$. Figs.~\ref{figure_7}a and b show accurate noisy UNO reconstruction of the parameter vector with fixed $\sigma^{2}_{\boldsymbol{\epsilon}}=0.1$ in the case of, respectively, overdetermined ($r=728$, $s=100$) and underdetermined ($r=728$, $s=1000$) systems in (\ref{eq:1000}). Fig.~\ref{figure_7}c demonstrates the efficacy of noisy UNO in estimating the desired parameter $\boldsymbol{\uptheta}$ from (\ref{eq:1000}) when only UNO samples of the noisy measurement $\mathbf{y}$ are available. Table~\ref{table_4} reports the reconstruction NMSE of $\boldsymbol{\uptheta}$, i.e., $\operatorname{NMSE}_{\boldsymbol{\uptheta}} \triangleq \frac{\left\|\boldsymbol{\uptheta}-\widehat{\boldsymbol{\uptheta}}\right\|_{2}^{2}}{\left\|\boldsymbol{\uptheta}\right\|_{2}^{2}}$, averaged over $15$ experiments for different noise variances $\sigma^{2}_{\boldsymbol{\epsilon}}\in \left\{0.01,0.05,0.1\right\}$ using the PnP-ADMM-based UNO. Here, following Theorem~\ref{theorem_5}, the ADC threshold was set to $\lambda=1.5$.
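For concreteness, a minimal Python sketch of the PnP-ADMM loop is given below. The soft-thresholding denoiser and all parameter values are illustrative stand-ins, not the configuration used in the reported experiments; the framework admits any plug-in denoiser $\mathcal{D}$, which is the essence of the plug-and-play heuristic \cite{venkatakrishnan2013plug,chan2016plug}.
\begin{verbatim}
# Minimal PnP-ADMM sketch (illustrative; the denoiser is a stand-in).
import numpy as np

def pnp_admm_uno(y_bar, A, beta=1.0, k_max=100, thr=0.05):
    s = A.shape[1]
    theta, nu, u = np.zeros(s), np.zeros(s), np.zeros(s)
    # theta-update: argmin ||y_bar - A@theta||^2
    #                      + (beta/2)*||theta - nu + u||^2
    G = A.T @ A + (beta / 2) * np.eye(s)
    for _ in range(k_max):
        theta = np.linalg.solve(G, A.T @ y_bar + (beta / 2) * (nu - u))
        v = theta + u                                     # denoiser input
        nu = np.sign(v) * np.maximum(np.abs(v) - thr, 0)  # soft threshold
        u = u + theta - nu                                # dual update
    return theta
\end{verbatim}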
\begin{table} [t] \centering \caption{Reconstruction $10\log_{10}\operatorname{NMSE}_{\boldsymbol{\uptheta}}$ (dB) with PnP-ADMM-Based Noisy UNO} \centering \begin{tabular}{ c | c | c } \hline $\sigma^{2}_{\boldsymbol{\epsilon}}$ & \textbf{Overdetermined system} & \textbf{Underdetermined system}\\ [0.5 ex] \hline \hline $0.1$ & $-48.294$ & $-38.128$ \\ \hline $0.05$ & $-52.676$ & $-42.347$ \\ \hline $0.01$ & $-56.815$ & $-45.259$ \\ \hline \end{tabular} \label{table_4} \end{table} \section{Summary} \label{sec:summ} The design of alternative sampling schemes to enable practical implementations of Shannon's theorem -- from theory to praxis -- has been an active research topic for decades. In this context, our proposed UNO presents a framework that merges one-bit quantization and unlimited sampling. This sampling framework naturally facilitates a judicious design of time-varying sampling thresholds by properly utilizing the information on the distance between the signal values and the thresholds in a high DR regime. The noiseless UNO reconstruction relies on the RKA, while the noisy reconstruction is based on the PnP-ADMM heuristic. These low-complexity approaches are preferable to existing, costlier reconstruction optimization approaches \cite{venkatakrishnan2013plug,chan2016plug}. The UNO framework achieves the multiple objectives of a high sampling rate, unlimited DR, and less complex, potentially low-power implementations. Our numerical and theoretical analyses demonstrate accurate reconstruction for several different scenarios. Some theoretical questions remain open, e.g., a closed-form relationship between the number of threshold sequences $m$ and the reconstruction error. This may help in finding the number of threshold sequences required for perfect reconstruction. Further, a hardware verification of UNO along the lines of the unlimited sampling hardware in \cite{bhandari2021unlimited} is also desirable. \section*{Acknowledgement} The authors are grateful to Prof. Ayush Bhandari of Imperial College, London for helpful discussions related to his work in \cite{graf2019one} and for providing the relevant source code to facilitate a meaningful comparison. \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} Positron Emission Tomography (PET) is a widely used imaging technique of nuclear medicine, used in particular for the management of patients in oncology. It is an inherently quantitative tool, very attractive for revealing molecular processes. However, the quantitative precision of PET suffers from several problems when the number of coincidences recorded is low. In particular, micro-spheres of $^{90}$Y are indicated for the treatment of primary and secondary liver cancers, but $^{90}$Y is a $\beta^-$ emitter that emits positrons with a branching ratio of only \num{3.2e-5} \cite{thomas:selwyn2007}, and $^{90}$Y-PET imaging therefore suffers from quantification artifacts. Among those, low-activity regions (\emph{i.e.}, cold areas) suffer from systematic positive bias. Reconstruction algorithms allowing negative activity values have been proposed to overcome this issue \cite{mael:byrne1998,mael:nuyts2002_NEGML,mael:vanslambrouck2015,mael:lim2018}. However, such algorithms increase the variance and introduce non-physical negative values. Here, we propose a non-negativity enforcement post-processing step (NNEPPS) to reduce both the bias and the variance while producing a non-negative image. Let us start by observing that voxel-wise intensities are not really meaningful. It is rather the mean intensity of small homogeneous regions that conveys physical information. Thus, we propose to transfer negative voxel values to neighboring voxels. A first idea would be to use the formalism of optimal transport \cite{mael:monge1781,mael:kantorovitch1942}, to ``transport'' the negative activities towards positive voxels with a minimal cost. However, a major drawback of optimal transport, in this case, is that the transfer may be asymmetric, leading to a spurious spread of information. The approach followed here is to pre-define a symmetric \emph{voxel spread function} describing the proportion of the value coming from a voxel to be distributed to each neighbor. The solution is the one corresponding to the minimal transfer following such a symmetric rule. The contribution of this work is firstly to formalize this post-processing strategy, and secondly to implement and test the method. We cast the problem in a linear programming framework \cite{mael:nocedal1999}, and we propose an adapted version of the dual simplex algorithm to solve the optimization problem. This document is an abridged version of the published paper~\cite{millardet2020local}, to which the reader is referred for more details, as well as for the source code. \section{Problem statement} \label{sec:problem_statement} The goal of the proposed post-processing step is to obtain a non-negative image by a minimal spread of the negative values over the positive voxels. Every voxel value is allowed to increase (we call this increase the \emph{transfer coefficient} assigned to this voxel) while its neighboring values decrease such that the local mean is preserved. This can be formalized in the following way: \begin{equation} \label{eq:formulation} \underset{\vect{\alpha}}{\min} \, ||\vect{\alpha}|| \ \text{ such that } \left\{ \begin{array}{l} \vect{\alpha} \geq 0 \\ \vect{y} = \vect{x} + \mat{H} \vect{\alpha} \geq 0 \\ \end{array} \right. \end{equation} where $\vect{x}$ represents the initial image and $\vect{y}$ the final one. $\vect{\alpha}$ represents a map of the transfer coefficients, and $\mat{H}$ the operator giving the influence of the transfers on the image. The non-negativity of $\vect{\alpha}$ is imposed because there is no reason for positive voxels to spread a potentially ``too large'' value. Numerical tests have confirmed the benefit of this constraint. An interesting property is that the final image is the same if the norm to be minimized is the L1 norm, the L2 norm, or, in fact, any norm that is strictly increasing in each coordinate. The global mean is automatically preserved in this formalism because it involves only transfers, \emph{i.e.}, each column of $\mat{H}$ sums to zero. Although not mandatory, a simple choice for $\mat{H}$ is that it results from a spatially invariant elementary mask which is convolved with the transfer map $\vect{\alpha}$, for example, $(-0.5, 1, -0.5)$ for a 1-D image.
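Anticipating the linear-programming formulation discussed below, the following Python sketch solves a 1-D toy instance of (\ref{eq:formulation}) with the elementary mask $(-0.5,1,-0.5)$ and the L1 norm. It uses a generic LP solver (SciPy's \texttt{linprog}) rather than the tailored dual-simplex algorithm developed in this work, and the input signal is an illustrative assumption.
\begin{verbatim}
# Toy 1-D NNEPPS as a linear program (illustrative; generic LP solver,
# not the paper's tailored dual simplex).
import numpy as np
from scipy.optimize import linprog

x = np.array([3.0, -1.0, 2.0, 0.5, -0.5, 4.0])  # toy image with negatives
n = x.size

# H: column j spreads alpha_j with the mask (-0.5, 1, -0.5)
# (borders are simply truncated in this sketch).
H = np.eye(n)
for j in range(n):
    if j > 0:     H[j-1, j] = -0.5
    if j < n-1:   H[j+1, j] = -0.5

# min sum(alpha)  s.t.  alpha >= 0  and  x + H @ alpha >= 0
res = linprog(c=np.ones(n), A_ub=-H, b_ub=x,
              bounds=[(0, None)] * n, method="highs")
y = x + H @ res.x
print(np.round(y, 3))   # non-negative output image
\end{verbatim}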
We now state four propositions. For detailed proofs, we refer the reader to the published article~\cite{millardet2020local}. \begin{proposition} \label{theo:existence} A solution to (\ref{eq:formulation}) exists if and only if the mean value of the initial image is non-negative. \end{proposition} \begin{proposition} \label{theo:uniqueness} If a solution to (\ref{eq:formulation}) exists, it is unique. \end{proposition} Fig.~\ref{fig:exemple-2D} illustrates the effect of the NNEPPS on a simple 2-D example. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{figures/exemple-2D-iter.png} \caption{In the initial $4\times4$ image (left), three pixels are negative. Their intensities are transferred to their neighbors at iteration 1. A new negative pixel appears (middle). It is added to the set of pixels that need to be subsequently set to 0. After iteration 2 (right), the algorithm stops.} \label{fig:exemple-2D} \end{figure} This problem can be formulated as a linear programming problem. Although 3D-PET images typically contain around \num{e7} voxels, this problem may be solved efficiently by a dual-simplex algorithm, which has very interesting properties in this case and makes the post-processing achievable in around 2 minutes for whole PET images. \begin{proposition} \label{theo:prop_convergnce_simplex} If the post-processing problem has a solution, the dual-simplex algorithm reaches this solution. \end{proposition} \begin{proposition} \label{theo:prop_dual_simplex} Starting from $\vect{y}^{(0)} = \vect{x}$, in every dual-simplex iteration, all the possible entering indexes (all $i$ for which $y^{(k)}_i < 0$) are in the inactive set of the solution, \emph{i.e.}, all possible choices are correct choices. Moreover, the leaving index associated with an entering one is known. \end{proposition} \section{Results} Fig.~\ref{fig:image_ss_neg_non_saturee} shows the image produced by the unconstrained algorithm AML\footnote{AML stands for maximum likelihood with a lower bound $A$; this lower bound can be made negative to allow negative values.}~\cite{rahmim2012direct} before and after the NNEPPS. This phantom (NEMA PET IEC) is composed of a warm background (mostly green in the images), in which an activity of \SI{177}{\kilo\becquerel\per\milli\liter} has been introduced. Six spheres (in red) were filled with an activity of \SI{1.33}{\mega\becquerel\per\milli\liter}. A cylinder was also kept free of any activity. Visually, the NNEPPS cleans all the noise present in the areas free from activity, while leaving other areas unchanged.
\begin{figure}[htb] \centering \includegraphics[width=0.46\columnwidth]{figures/AML} \quad \includegraphics[width=0.46\columnwidth]{figures/PPAML} \caption{Reconstruction of a phantom with AML before and after the NNEPPS (PP-AML, for Post-Processed AML). In PP-AML, the cold area at the center is much clearer.} \label{fig:image_ss_neg_non_saturee} \end{figure} Figure \ref{fig:image} shows that the RMSE in the cold cylinder is roughly divided by 2. The positive bias remains approximately unchanged by the NNEPPS until it reaches a limit (of approximately \SI{2.5e4}{\becquerel\per\milli\liter} in this example). \begin{figure}[htb] \centering \includegraphics[width=.93\columnwidth]{figures/Cold-Cyl_ssneg3D_IEC1.eps} \caption{Mean activity of the central cylinder (theoretical value: 0) as a function of the RMSE in the same area (theoretical value: 0), for different reconstruction algorithms before and after the NNEPPS. Each point on a given curve represents a different iteration number (from 1 to 3).} \label{fig:image} \end{figure} Figure \ref{fig:cylindre-autour} shows that the NNEPPS not only achieves the desired goal but also creates a small negative bias in the part of the warm area in the neighborhood of the cold cylinder. However, this negative bias is very limited and the activity remains higher than that of the constrained algorithm Ordered Subsets Expectation Maximization (OSEM)~\cite{mael:hudson1994}, which can be taken as a reference due to its popularity and its routine clinical use. \begin{figure}[htb] \centering \includegraphics[width=.93\columnwidth]{figures/IEC1-cylindre-autour.eps} \caption{Graphs displaying the mean activity measured as a function of the distance from the center of the cold cylinder. The theoretical value and OSEM are given for reference.} \label{fig:cylindre-autour} \end{figure} \section{Conclusion} In this work, a post-processing step removing negative intensities is introduced. It aims to retain the bias reduction of unconstrained algorithms while reaching a low variance. The proposed algorithm is a condensed simplex on the dual problem, which is justified by the particular nature of the problem. Compared to unconstrained algorithms that produce unbiased outcomes, the NNEPPS, by spreading negative values into/out of a specific region, generally creates a small bias, especially at the boundary between regions. However, this bias remains lower than that of constrained algorithms. In cold areas, the RMSE is greatly improved by the NNEPPS. From a qualitative point of view, images are also visually easier to interpret after the NNEPPS. In particular, cold areas are much more visible. \section*{Acknowledgment} This work has been supported in part by the European Regional Development Fund, the Pays de la Loire region on the Connect Talent scheme MILCOM (Multi-modal Imaging and Learning for Computational-based Medicine), Nantes M\'etropole (Convention 2017-10470), the French National Agency for Research called "Investissements d'Avenir" IRON Labex n\textsuperscript{o} ANR-11-LABX-0018-01 and the French program Infrastructure d'avenir en Biologie Santé ANR-11-INBS-0006 (France Life Imaging). {\small \bibliographystyle{ieeeji}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} One of the oldest and most successful theories in physics is hydrodynamics. While hydrodynamics was first understood as a phenomenological set of equations that govern liquids and gases \cite{}, over the past century we have instead recognized that hydrodynamics is best understood as the universal effective field theory that governs thermalization in a chaotic many-body system \cite{}. In the simplest scenarios, the degrees of freedom of a hydrodynamic theory correspond to locally conserved quantities; the way that these modes interact with each other and decay is constrained only by basic symmetries of the theory. Using modern effective field theory methods, sophisticated nonlinear theories of fluctuating hydrodynamics have been developed and applied to increasingly complex systems. One family of novel phases of matter which has interesting dynamics arises when the microscopic degrees of freedom are \emph{fractons} -- excitations which are individually immobile, and can only move in tandem \cite{}. As a simple example, we can consider a phase of matter in which charge/mass is conserved together with dipole moment/center of mass -- in this case, a single particle cannot move without violating the dipole conservation law! The past few years have seen an intense study of the fracton phases of matter that can result from combining many of these interacting fractons. And over the past year, it has been understood that when such fracton phases thermalize \cite{}, the resulting hydrodynamics is non-trivial \cite{}: Fick's law of diffusion, for example, is replaced by \emph{subdiffusive} equations, with the dynamical critical exponent dependent on how many multipole moments are conserved. In this paper, we present a qualitatively new universality class of hydrodynamics that emerges when fracton-like multipole conservation laws are combined with canonical energy and momentum conservation. We focus on the case where dipole moment is the only additional conserved quantity, and where the theory has parity and time-reversal symmetry. Without dipole conservation, such a theory is essentially described by textbook Navier-Stokes equations with incoherent conductivities \cite{}. With dipole conservation, the Navier-Stokes equations are completely changed. At finite charge density, the conventional propagating sound modes are replaced by magnon-like propagating modes. The decay rates of these magnon-like modes are diffusive if energy is conserved, but subdiffusive if energy is not conserved. And at zero density, the character of the hydrodynamic modes completely changes; the naive derivative expansion of hydrodynamics at finite density becomes singular as zero density is approached. The subtle nature of this emergent hydrodynamics is intricately related to the fact that (in quantum mechanics) the dipole moment operator $D$, and net momentum operator $P$, do not commute: \cite{} \begin{equation} [D,P] = \mathrm{i}Q, \label{eq:DPQ} \end{equation} where $Q$ represents total charge. An analogous classical statement holds for Poisson brackets. One might expect that such a commutation relation is similar to angular momentum commutation relations in an isotropic fluid -- such commutation relations lead not to new propagating degrees of freedom, but rather to constraints on the currents of other modes (the stress tensor, in this case). However, at finite density, (\ref{eq:DPQ}) implies that the momentum susceptibility (the generalization of mass density) is singular!
This means that a naive hydrodynamic degree of freedom -- the fluid velocity -- is non-local. One of the main results of this paper is that we can nevertheless construct a \emph{local} hydrodynamic theory, using unconventional degrees of freedom. Due to the surprisingly complex nature of hydrodynamics with dipole conservation, we will discuss it from three complementary perspectives. In Section \ref{sec:landau}, we generalize Landau's phenomenological arguments, based on the existence of an entropy current, to derive hydrodynamics with charge, momentum and dipole conservation. In Section \ref{sec:MM}, we show how basic considerations using the memory matrix formalism \cite{} reproduce linearized hydrodynamics. Finally, in Section \ref{sec:EFT}, we show how to generalize the methods of \cite{} to construct a nonlinear effective field theory for fluctuating (stochastic) hydrodynamics, both with and without energy conservation. This construction serves as a highly non-trivial check on our earlier arguments. In Section \ref{sec:instability}, we show that these hydrodynamic theories can be \emph{unstable below four dimensions}. This is true both without energy conservation, and with energy conservation at infinite temperature (under mild assumptions). To the best of our knowledge, such an instability of an \emph{equilibrium fluid} in three dimensions has never before been found. This result generalizes the well-known Kardar-Parisi-Zhang instability of an equilibrium fluid (without dipole conservation) in one dimension \cite{}, and implies the existence of a non-equilibrium fixed point in three dimensions, in an undriven system. In Section \ref{sec:numerics}, we present explicit microscopic models with simultaneous dipole, momentum, and energy conservation. The simplest of these models are classical Hamiltonian systems of $N$ interacting particles, with \begin{equation} H = \sum_{i=1}^{N-1} \left[\frac{(p_i-p_{i+1})^2}{2} + V(x_i - x_{i+1})\right], \end{equation} where $V(x)$ is typically taken to be an even function. Each particle is implicitly assumed to carry charge 1. It is straightforward to see that both the dipole moment and momentum are conserved. We numerically simulate such models, and observe consistency with our hydrodynamic predictions. [WHAT ABOUT INSTABILITY?] Finally, in Section \ref{sec:expt}, we briefly suggest a few ideas for how these exotic fluids might be realized in various future experiments. \section{Landau route to hydrodynamics}\label{sec:landau} In this section, we use the canonical arguments based on the second law of thermodynamics to derive the hydrodynamics of conserved momentum, charge, and dipole. The fundamental assumption of hydrodynamics is that the late-time physics is governed by the conserved quantities of the system, which we shall write as \begin{equation} P^i = \int d^d x\, \pi^i,\quad Q=\int d^d x\, n \ \end{equation} where $\pi^i$ and $n$ are the momentum and charge density, respectively. We have not included the dipole density as a separate degree of freedom, as the dipole charge is determined by the charge density itself: $D^i=\int d^d x\, x^i n$. Since the dipole density is simply $x^i n$, dipole conservation does not introduce an independent hydrodynamic field; rather, as we will see shortly, it constrains the form of the charge current. $n$ and $\pi^i$ are subject to local conservation laws: \begin{equation} \label{eom1}\p_t \pi^i + \p_j T^{ji}=0,\qquad \p_t n+\p_i J^i=0\ ,\end{equation} where $T^{ij}$ and $J^i$ are the stress tensor and charge flux, and are assumed to be local in $\pi^i,n$.
Crucially, we also need to demand \begin{equation}\label{dipfl} J^i = \p_j J^{ji}\ ,\end{equation} which comes from dipole conservation: \begin{equation} \p_t \int d^d x\, x^i n= -\int d^d x\, x^i\p_j J^j=\int d^d x\, J^i\ ,\end{equation} where the right-hand side vanishes only if $J^i$ satisfies (\ref{dipfl}). We will also demand that $J^{ij}$ be local in the densities. Eqs. (\ref{eom1}),(\ref{dipfl}) completely specify the time evolution of $\pi^i$ and $n$. \subsection{Ideal hydrodynamics} We now impose that the dynamics of these densities be consistent with the local second law of thermodynamics. This amounts to finding a vector $S^\mu$ such that $\p_\mu S^\mu\geq 0$ when evaluated on solutions to hydrodynamics, where the time component $S^t$ coincides with the thermodynamic entropy density, $S^t=s$, if fields are restricted to be homogeneous. This basic constraint will uniquely determine the concrete expressions of $T^{ij},J^i$ in terms of $\pi^i,n$, order by order in derivatives, up to phenomenological coefficients that are determined by the specific underlying system. We shall implement this procedure order by order in derivatives. Contrary to all cases known to us, we will see that this hydrodynamics is special in that, first, the homogeneous part of momentum density decouples from the dynamics, and second, there is an emergent scale determined by the background charge density of the system. We begin by assuming that the entropy density is a function of momentum and charge densities, $s=s(\pi^i,n)$, and we will see that this is incompatible with relation (\ref{dipfl}). Recall the thermodynamic relation $Tds= -V^i d\pi^i-\mu dn$, where $V^i$ and $\mu$ are the velocity and chemical potential of the system. The temperature $T$ is set to be a constant, since we are neglecting energy conservation. The generalization of this relation to ideal hydrodynamics reads \begin{equation} \label{2ndla0}T\p_\mu S^\mu =-V^i\p_\mu T^{\mu i} - \mu\p_\alpha J^\alpha. \end{equation} The most general expressions for the fluxes are $S^i = s_1 V^i$, $T^{ij} = p \delta^{ij} + h_1 V^i V^j$, $J^i = h_2 V^i$. Plugging these expressions into (\ref{2ndla0}) gives $s_1=s$, $p=Ts+\mu n+ V^i \pi^i$, $h_1 V^i =\frac{\p s}{\p V^i}=\pi^i$ and $h_2=n$, which are the current constitutive relations of standard charged hydrodynamics, and indeed nowhere have we used the fact that we are dealing with a dipole-conserving fluid. We see that (\ref{dipfl}) implies the relation $n V^i = \p_j J^{ji}$, which expresses that $J^{ij}$ is non-local in the hydrodynamic variables, contrary to our assumptions. The only way for (\ref{dipfl}) to be consistent with locality is to demand \begin{equation} s=s(\p_i v_j,n),\qquad v_i=\frac{\pi^i}{n}\ .\end{equation} We repeat the analysis above, this time taking \begin{equation} S^i = s_1 V^i+\Delta S^i,\quad T^{ij} = p \delta^{ij} + h_1 V^i V^j+\Delta T^{ij},\quad J^i = h_2 V^i +\Delta J^i\ ,\end{equation} where $\Delta S^i,\Delta T^{ij},\Delta J^i$ are higher-derivative expressions of $v_i$ and $n$ to be determined. Plugging into (\ref{2ndla0}) gives \begin{gather} T^{ij}=p \delta^{ij}+ V^i \pi^j-\psi_{ik}\p_j v_k,\qquad J^i= n V^i\\ \beta^i=\frac 1n \p_j \psi_{ji},\qquad \mu=-\p_n s-V^i v^i\\ p=s-n\p_n s,\qquad S^i=n\p_n sV^i-\psi_{ij}\p_t v_j \ \text{ \PG{ check this!}}\ , \end{gather} where we defined $\psi_{ij}=\left.\frac{\p s}{\p(\p_i v_j)}\right|_n$.
Note that the entropy density $s$ as well as equations (\ref{eom1}) are invariant under the shift \begin{equation}\label{dipsh} \pi^i\to \pi^i + n c^i,\qquad T^{ij}\to T^{ij} + J^j c^i\ ,\end{equation} where $c^i$ is a constant vector. This invariance is a manifestation of the dipole algebra (\ref{eq:DPQ}). Indeed, eq. (\ref{eq:DPQ}) implies, using locality, $[D,\pi^i]=i n$, thus leading to (\ref{dipsh}). \subsection{Dissipative hydrodynamics} [dissipative hydro] \subsection{The charge neutral limit} Note that the equation determining $\beta^i=\frac 1n \p_j \psi_{ji}$ appears to become singular as we approach charge neutrality $n\to 0$. To be more concrete, let us expand in linear perturbations around finite density, $n=n_0+\delta n$, and treat $\delta n,\pi^i,\beta^i$ as infinitesimal. Then $s=\frac 1{2n_0^2} a_4^{ijkl}\p_i \pi_j \p_k \pi_l+\cdots$, where the dots denote terms depending on $\delta n$, and $\beta^i=\frac 1{n_0^2} a_4^{jikl}\p_j\p_k \pi_l$. As $n_0\to 0$, and assuming the underlying system is charge-conjugation invariant, we expect charge and momentum dynamics to decouple, where in particular the momentum dynamics, being insensitive to the presence of charge conservation, is expected to be that of a standard fluid, so that momentum displays diffusive behavior. This intuition can be reconciled with our results above by noticing a subtle change in the derivative expansion at charge neutrality. To see this, we simply need to inspect higher-derivative terms: $s=\frac 1{2n_0^2} a_4(\p_i \pi_j)^2+\frac{a_5}{n_0^4} (\p_j^2 \pi_i)^2+\cdots$, where we are neglecting possible tensor structures $a_4^{ijkl},a_5^{ijklpq}$ for simplicity, and where the factor of $n_0^{-4}$ multiplying $a_5$ will be justified below. Repeating the analysis around (\ref{2ndla0}) one finds $\beta^i = \frac 1{n_0^2} a_4 \p^2_j \pi_i-\frac {a_5}{n_0^4}\p_j^4 \pi_i$. Solving for momentum, we get $\p_j^2\pi_i=\frac {1}{a_4}\left(n_0^2\beta^i+\frac{a_5 }{a_4}\p_j^2\beta^i\right)$. Now, as $n_0\to 0$, keeping $\pi^i$ fixed we then see that $\pi^i\to \frac {a_5}{a_4^2}\beta^i$, precisely leading to decoupled momentum conservation with diffusive behavior, according to [dissipative eq.]. Note that, had we chosen a different scaling for $a_5$, say $a_5\sim n_0^2$, the charge neutrality limit would require a divergent $\beta^i$, leading to a singular limit for the momentum conservation equation. We will see how these conclusions are reached in a straightforward way using the effective field theory approach of sec. \ref{sec:EFT}. \section{Memory matrix methods} \label{sec:MM} Next, we use the memory matrix formalism \cite{} to derive the linearized hydrodynamics of the previous section, both near and away from charge neutrality. This approach provides an independent check on many of the non-trivial properties of hydrodynamics that we found above. The memory matrix formalism is an old set of formal manipulations, used to isolate the contributions to linear response theory (two-point functions) which arise from parametrically slow dynamics. Since long-wavelength hydrodynamic modes are arbitrarily long-lived, this method can be well-suited for calculations of their properties. We now tersely summarize the main results of this method: for details see \cite{}. Consider a many-body system with Hamiltonian $H$, at temperature $T$. One can construct a vector space consisting of all operators $A$ acting on this system: to emphasize the vector nature, we can write $|A)$.
An inner product on this space is \begin{equation} (A|B) := T\int\limits_0^\beta \mathrm{d}\lambda \langle A^\dagger(\mathrm{i}\lambda)B\rangle_T \end{equation} with $T=1/\beta$ and $\langle \cdots \rangle_T = \frac{1}{\mathrm{tr}(\mathrm{e}^{-\beta H})} \mathrm{tr}(\mathrm{e}^{-\beta H}\cdots )$ the thermal expectation value. Note that the susceptibility matrix is \begin{equation} (A|B) = T\chi_{AB}. \end{equation} Suppose that we have a designated set of ``slow'' operators $|\mathcal{O}_A)$. For us, these are naturally taken to be $n(\mathbf{k})$ and $\pi^i(\mathbf{k})$ (the Fourier wave number is $\mathbf{k}$, and is held fixed). We may define the projectors \begin{equation} \mathfrak{p} = \sum_{\text{slow }A,B} |A) (T\chi)^{-1}_{AB} (B|, \;\;\;\;\; \mathfrak{q}=1-\mathfrak{p}, \end{equation} which project degrees of freedom onto slow ($\mathfrak{p}$) and fast ($\mathfrak{q}$) modes. By noticing that $(A| (z-\mathcal{L})^{-1}|B)$ is linearly related to the retarded Green's function $G^{\mathrm{R}}_{AB}(z)$, one can show that there are hydrodynamic quasinormal modes whenever \cite{} \begin{equation} \det (M + N - \mathrm{i}\omega \chi ) = 0. \end{equation} Here $M$ (the memory matrix) and $N$ are given by \begin{subequations} \begin{align} M_{AB} &= (\dot{A}| \mathfrak{q} \mathrm{i}(z-\mathfrak{q}\mathcal{L}\mathfrak{q})^{-1}\mathfrak{q}|\dot{B}), \\ N_{AB} &= -N_{BA} = \chi_{\dot{A}B}. \end{align} \end{subequations} Here $\mathcal{L} = \mathrm{i}[H,\cdot]$ denotes the Liouvillian, and $\dot{A} = \mathrm{i}[H,A]$, with $H$ the overall Hamiltonian. In this paper, we aim to use this framework to gain further insight into (and justification of) the non-trivial hydrodynamics discovered in Section \ref{sec:landau}. Strictly speaking, one can object to this on the grounds that energy conservation is explicit in any theory satisfying the above postulates. Ultimately, we will use this approach to discern what happens when energy is conserved along with dipole and momentum; however, we believe that this approach remains instructive even if we ``ignore'' energy conservation as a simplifying (if unjustified) assumption. As we will see, some of the confusing features of this fluid are consequences of very general, and even semi-microscopic, arguments. \subsection{Momentum susceptibility} Let us begin by determining the momentum susceptibility; in the memory matrix language, this is $(\pi|\pi) = T \chi_{\pi \pi}$ (we'll leave the Fourier index implicit for the remainder of this section). While a microscopic computation is not possible (nor important for hydrodynamic considerations), we can easily \emph{bound} the susceptibility using the Cauchy-Schwarz inequality: \begin{equation} (\pi_x|\pi_x) \ge \frac{(\pi_x|J_x)^2}{(J_x|J_x)}. \end{equation} Here $J_x$ is the $x$-component of the charge current operator; for simplicity, we'll also take $\mathbf{k}=k\hat{\mathbf{x}}$. Now, observe two key properties. Firstly, in a generic many-body system, \begin{equation} (\pi_x|J_x) = T n_0, \end{equation} with $n_0$ the equilibrium charge density: $n_0=\langle n\rangle_T$. Secondly, using (\ref{dipfl}), \begin{equation} (J_x|J_x) = k^2 (J_{xx}|J_{xx}). \end{equation} Since $J_{xx}$ is the local current operator which is well-defined with local dipole conservation, we conclude that $(J_{xx}|J_{xx})$ is $k$-independent as $k\rightarrow 0$, and should remain finite as $n_0\rightarrow 0$.
Combining these three equations, $T\chi_{\pi\pi}=(\pi_x|\pi_x)\ge (Tn_0)^2/[k^2 (J_{xx}|J_{xx})]$, we find that for some constant $c>0$, which does not vanish as $n_0\rightarrow 0$, \begin{equation} \chi_{\pi\pi} = c \frac{n_0^2}{k^2}. \end{equation} \subsection{Dynamics without energy} \subsection{Dynamics with energy} \section{Effective field theory of hydrodynamics} \label{sec:EFT} One main result of this paper is that hydrodynamics with dipole conservation possesses anomalous scaling, which is due to the interplay between the nonlinear hydrodynamic interactions and hydrodynamic fluctuations. To derive this we shall use a recently formulated effective field theory (EFT) of hydrodynamics, which systematically describes fluctuations by encoding hydrodynamics into an effective action [ref]. Besides capturing fluctuations, this approach allows us to rigorously derive the ``mean field'' hydrodynamics, including both linear and nonlinear terms which were obtained from the phenomenological approaches of secs. \ref{sec:landau} and \ref{sec:MM}. As a by-product, this will furnish an approach that treats the charge-neutral regime as a smooth limit of the charged regime, without the tensions with locality that arose in sec. \ref{sec:landau}. \subsection{General setup} The aim of the EFT approach is to systematically encode the correlation functions of hydrodynamic densities and currents. Such correlation functions have the general form \begin{equation} \label{pathin}\mathrm{Tr}(\mathcal T(J_1J_2\cdots)\rho_0\tilde {\mathcal T}(J_3 J_4\cdots))=\int_{\rho_0} D\psi_1 D\psi_2 \,e^{iS_0[\psi_1]-iS_0[\psi_2]}\,J_1J_2J_3J_4\cdots\ ,\end{equation} where, in the first expression, $\mathcal T$ and $\tilde{\mathcal T}$ denote time- and anti time-ordering, $\rho_0$ is the initial state, which we take to be thermal $\rho_0=e^{-\beta H}/\mathop{\rm tr}(e^{-\beta H})$, with $H$ the microscopic Hamiltonian of the system, and $J_1,J_2,\dots$ are operators inserted at $(t_1,\vec x_1),(t_2,\vec x_2),\dots$. On the right-hand side, we formally rewrote the correlator as a path-integral, where $S_0$ is the action of the microscopic dynamics, and $\psi_1,\psi_2$ are a doubled copy of the degrees of freedom of the system. Since on the left-hand side we have a forward (backward) time evolution given by the time-ordered (anti-time ordered) product, the path integral contains two exponentials of the action $S_0$, with a relative minus sign, as the first one corresponds to forward evolution, while the second one to backward evolution. In other words, the doubling of degrees of freedom comes from the fact that the evolution of the density matrix $\rho_0 \to U(t)\rho_0 U^\dag(t)$ contains two factors of the evolution, one forward and one backward. Computing hydrodynamic correlation functions from the microscopic dynamics is very hard. We thus want to introduce an EFT approach that substitutes the right-hand side of (\ref{pathin}) with a simpler action: \begin{equation} \label{pathin1}\mathrm{Tr}(\mathcal T(J_1J_2\cdots)\rho_0\tilde {\mathcal T}(J_3 J_4\cdots))=\int D\chi_1 D\chi_2 \,e^{iS[\chi_1,\chi_2]}\,J_1J_2J_3J_4\cdots\ ,\end{equation} where $S$ is the effective action for hydrodynamics, and $\chi_1,\chi_2$ denote the doubled hydrodynamic degrees of freedom. The action $S$ will encode the effects of fluctuations and dissipation and, in particular, will allow us to predict the existence of anomalous scaling. We shall now introduce the degrees of freedom of this EFT.
These should be fields that nonlinearly realize the symmetries associated with the conservation of charge, dipole and momentum. For momentum conservation, we introduce a set of coordinate fields $X^i=X^i(\sigma^0,\sigma^I)$ which nonlinearly realize translations $P^i$, i.e. \begin{equation} \label{tr1} X^i(\sigma^0,\sigma^I)\to X^i(\sigma^0,\sigma^I)+\xi^i\ ,\end{equation} where $\xi^i$ is a constant vector. We are using $\sigma^I$ to denote an auxiliary coordinate system which can be thought of as labeling the fluid parcels at a fixed value of time $\sigma^0$.\footnote{In older literature, this is the so-called ``Lagrangian specification'' of the fluid [ref].} The coordinates $X^i(\sigma^0,\sigma^I)$ describe the trajectory of the fluid parcel labeled by $\sigma^I$ as a function of time $\sigma^0$. The coordinates $(\sigma^0,X^i)$ are the ``physical'' ones, in the sense that they label the time and space in the lab reference frame.\footnote{It is convenient to denote time by $\sigma^0$ as, in what follows, we will often need to take derivatives with respect to time at fixed $\sigma^I$, not at fixed $X^i$.} Next, we also have a vector degree of freedom $\varphi_i(\sigma^0,\sigma^I)$ that nonlinearly realizes the dipole shift symmetry $D^i$: \begin{equation} \label{tr2} \varphi^i(\sigma^0,\sigma^I)\to \varphi^i(\sigma^0,\sigma^I)+c^i\ ,\end{equation} where $c^i$ is a constant vector. Finally, for charge $Q$, the associated degree of freedom is a scalar $\varphi(\sigma^0,\sigma^I)$, and transforms as \begin{equation} \label{tr3} \varphi(\sigma^0,\sigma^I)\to \varphi(\sigma^0,\sigma^I)+a-c^i X^i\ ,\end{equation} where $a$ is a constant denoting the parameter of transformations associated to $Q$. The fields $\varphi$ and $\varphi^i$ can be heuristically viewed as describing the ``local phase'' of the fluid $e^{i(\varphi+X^i\varphi^i)}$, where this particular form is motivated by the fact that, for dipole-conserving field theories, $U(1)$ global transformations can have a linear dependence on spatial coordinates [ref]. Note that $\varphi$ also transforms under dipole shifts. This particular transformation rule is implied by the commutator (\ref{eq:DPQ}). Indeed, writing infinitesimal translations and dipole shifts as $\delta_\xi \varphi=\xi^i\p_i\varphi$, $\delta_{c} \varphi=-c^iX^i$, we have \begin{equation} (\delta_c\delta_\xi-\delta_\xi\delta_c) \varphi=c^i\xi^i\ ,\end{equation} i.e. the commutator is an infinitesimal shift of $\varphi$, as required by (\ref{eq:DPQ}). It can also be verified that the last term in (\ref{tr3}) is the most general transformation consistent with (\ref{eq:DPQ}). The effective action will be invariant under transformations (\ref{tr1}),(\ref{tr2}),(\ref{tr3}) which, as a consequence of Noether's theorem, correspond to the statement of conservation of momentum, dipole and charge, respectively. Now recall from above that all the degrees of freedom have to be doubled, so we will have $X_1^i,X_2^i,\varphi_1^i,\varphi_2^i,\varphi_1$ and $\varphi_2$. The symmetries (\ref{tr1})-(\ref{tr3}) will also be doubled, which in turn correspond to the conservation of the corresponding hydrodynamic currents defined on the forward and backward time contours \PG{(figure that illustrates this as well as (\ref{pathin}) and (\ref{pathin1})?)}. Unlike in the path integral (\ref{pathin}), the effective action appearing in (\ref{pathin1}) does not have a factorized form.
This is because, as a result of the coarse-graining, where the fast-moving degrees of freedom have been integrated out, new couplings that are local in the ``folded'' time have been generated. These cross-couplings are responsible for dissipation and fluctuations. While the effective action loses factorization, it still satisfies several properties that come from the unitarity of the underlying microscopic evolution [ref]: \begin{equation} S[\chi,\chi]=0,\qquad S[\chi_2,\chi_1]=-S^*[\chi_1,\chi_2],\qquad \text{Im}\,S[\chi_1,\chi_2]\geq 0\ ,\end{equation} where $\chi_1,\chi_2$ collectively denote the two copies of $X^i,\varphi^i,\varphi$. Note in particular that the action can (and will) be complex-valued; as we will see this is a basic consequence of having fluctuations. Additionally, since the initial state $\rho_0$ is thermal, and assuming that the microscopic Hamiltonian $H$ is invariant under time-reversal, the effective action satisfies a discrete $\mathbb Z_2$ symmetry called ``dynamical KMS symmetry'': \begin{equation}\label{kms} S[\chi_1,\chi_2]=S[\tilde\chi_1,\tilde\chi_2],\qquad \tilde \chi_1(\sigma^0,\sigma^I)=(-1)^\eta \chi_1(-\sigma^0,\sigma^I),\quad \tilde \chi_2(\sigma^0,\sigma^I)=(-1)^\eta \chi_2(-\sigma^0-i\beta,\sigma^I)\ ,\end{equation} where $(-1)^\eta=\pm 1$ denotes the time-reversal eigenvalue of $\chi$. This symmetry is equivalent to the Euclidean time periodicity of correlation functions on a thermal state with inverse temperature $\beta$. In our effective action, it will relate couplings responsible for dissipation to those describing fluctuations, and it will ensure consistency with the second law of thermodynamics, the Onsager relations, and the existence of equilibrium. Eq. (\ref{kms}) can be extended to situations where the microscopic Hamiltonian is invariant under a more general discrete symmetry, so long as such a symmetry contains time-reversal. A proof of (\ref{kms}) is given in \PG{Appendix (??)}. To complete our effective field theory, we need an additional set of symmetries that characterize the fact that the late-time behavior of the system is that of a fluid. Recall that $\sigma^I$ should be interpreted as labels of fluid elements at a fixed value of $\sigma^0$. Adiabatically reshuffling fluid elements has a vanishing cost in energy, since, in contrast to a solid, fluid parcels are not pinned to a particular spatial location. This means that a specific way to label fluid elements at a given time is not physical, and thus the effective action should be invariant under time-independent redefinitions of $\sigma^I$: \begin{equation} \sigma^I\to \sigma^{'I}(\sigma^J)\ .\end{equation} Had we not considered this symmetry, the action could depend on arbitrary derivatives $\p_IX^i$, and we would describe a solid instead of a liquid. Analogously, in the charge sector, we have the freedom to relabel the local phase $e^{i(\varphi+X^i\varphi^i)}$ at a fixed time. This amounts to requiring the symmetry \begin{equation}\label{diags} \varphi_1(\sigma^0,\sigma^I)\to \varphi_1(\sigma^0,\sigma^I)+\lambda(\sigma^I),\qquad \varphi_2(\sigma^0,\sigma^I)\to \varphi_2(\sigma^0,\sigma^I)+\lambda(\sigma^I)\ ,\end{equation} where $\lambda(\sigma^I)$ is a time-independent redefinition of the phase and can be arbitrarily assigned to each fluid element $\sigma^I$. The symmetry (\ref{diags}) states the absence of spontaneous symmetry breaking of the global $U(1)$ symmetry.
Indeed, in the occurrence of spontaneous symmetry breaking, the full information about the phase would be physical (of course, up to constant shifts of the phase), which would give rise to a superfluid. Instead, in the present context, we are merely interested in the conservation of charge (and dipole) in the absence of spontaneous symmetry breaking. \PG{(check time relabelings, in which case it might be better to use $\sigma^0\to t$)} The formalism we have introduced above is based on quantum mechanics. In the present paper, however, we are interested in the emergent classical, high-temperature hydrodynamic behavior of many-body systems. There is a simple way to take the classical limit of this framework which retains the physics we are interested in and has the benefit of considerably simplifying various technical aspects. To this aim, we restore factors of $\hbar$ and write $\chi_1=\chi+\frac 12 \hbar \chi_a$, $\chi_2=\chi-\frac 12 \hbar \chi_a$, where again $\chi$ collectively denotes the hydrodynamic fields, for example: $X_1^i=X^i+\frac 12 \hbar X_a^i$, etc. The fact that $\chi_1-\chi_2$ is linear in $\hbar$ can be heuristically understood from the fact that the forward and backward time evolutions are located a distance $\hbar\beta$ from each other as shown in Fig. [ref], and thus, as $\hbar\to 0$, $\chi_1-\chi_2$ should vanish linearly in $\hbar$. In this limit, the dynamical KMS symmetry becomes \begin{equation} \tilde \chi(\sigma^0,\sigma^I)=(-1)^\eta\chi(-\sigma^0,\sigma^I),\qquad \tilde \chi_a(\sigma^0,\sigma^I)=(-1)^\eta\{\chi_a(-\sigma^0,\sigma^I)+i\beta \p_0\chi(-\sigma^0,\sigma^I)\}\ ,\end{equation} where the dependence on $\hbar$ has factorized out, and the nonlocal time shift in (\ref{kms}) has reduced to an \emph{exact} time derivative, allowing for a more straightforward implementation. We now proceed to writing down the invariant blocks that will be used to build the effective action. \subsection{Including energy conservation} \subsection{Momentum relaxation} \section{Hydrodynamic instabilities} \label{sec:instability} in low dimensions...novel KPZ??? The simplest argument is probably to look at $\langle T_{xx}T_{xx}\rangle$ and use the ``incoherent'' $k^4$-decaying density/temperature mode coming from quadratic corrections to pressure?? \section{Hamiltonian dynamics} \label{sec:numerics} We begin this section by presenting a simple model of classical Hamiltonian dynamics which should fall into the hydrodynamic universality class summarized above: an explicit model of classical many-body dynamics with charge, momentum, energy and dipole conservation. Consider the following many-body classical Hamiltonian: \begin{equation} H = \sum_{i=1}^{N-1} \left[\frac{1}{2}(p_i-p_{i+1})^2 + V(x_i- x_{i+1}) \right]. \end{equation} Clearly, $H$ (energy) and $N$ (the number of particles) are constants of motion, as are the total momentum $P$ and dipole moment $D$: \begin{equation} P = \sum_{i=1}^N p_i, \;\;\; D = \sum_{i=1}^N x_i. \end{equation} In our numerics, we take \begin{equation} V(x) = \frac{1}{2} x^2 + gx^4. \end{equation} Let us first set $g=0$. In this case, the model is exactly solvable; the equations of motion (for $i\ne 1,N$) are \begin{subequations}\begin{align} \dot{x}_i &= 2p_i - p_{i+1}-p_{i-1}, \\ \dot{p}_i &= x_{i+1}+x_{i-1}-2x_i. \end{align}\end{subequations} The above linear equations are easily solved by switching to a Fourier basis, and one finds that the modes $x_k \pm \mathrm{i} p_k$ oscillate at frequency $\omega_k = 2(1-\cos k)$, which vanishes quadratically as $k\to 0$.
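For the interacting case $g\neq 0$, the conservation laws are easily verified numerically. The following self-contained Python sketch integrates the dynamics with a leapfrog scheme; all parameter values (including the quartic coupling) are illustrative assumptions, and the check is that $P=\sum_i p_i$ and $D=\sum_i x_i$ are conserved to machine precision.
\begin{verbatim}
# Leapfrog simulation of the dipole-conserving chain (illustrative
# parameters). Both sums below are conserved exactly by the splitting.
import numpy as np

def grad_kinetic(p):
    # dK/dp_i for K = sum (p_i - p_{i+1})^2 / 2, open boundaries
    g = np.zeros_like(p)
    d = p[:-1] - p[1:]
    g[:-1] += d
    g[1:]  -= d
    return g

def grad_potential(x, g4=0.3):
    # dV/dx_i for V(r) = r^2/2 + g4*r^4, r = x_i - x_{i+1}
    g = np.zeros_like(x)
    r = x[:-1] - x[1:]
    f = r + 4 * g4 * r**3
    g[:-1] += f
    g[1:]  -= f
    return g

rng = np.random.default_rng(1)
N, dt, steps = 256, 0.02, 20000
x, p = rng.normal(size=N), rng.normal(size=N)

P0, D0 = p.sum(), x.sum()
for _ in range(steps):
    p -= 0.5 * dt * grad_potential(x)   # half kick
    x += dt * grad_kinetic(p)           # drift with dipole kinetic term
    p -= 0.5 * dt * grad_potential(x)   # half kick

print(abs(p.sum() - P0), abs(x.sum() - D0))  # both ~ machine precision
\end{verbatim}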
\end{align}\end{subequations} The above linear equations are easily solved by switching to a Fourier basis, and one finds that the modes $(x_j,p_j)\propto \mathrm{e}^{\mathrm{i}kj-\mathrm{i}\omega_k t}$ have dispersion relation $\omega_k = \pm(2-2\cos k)\approx \pm k^2$ for $k\ll 1$.

\section{Possible realizations}
\label{sec:expt}

\section{Conclusions}

Looking forward, we anticipate that our methods can be generalized to discover infinitely many new hydrodynamic universality classes that arise in fracton-like classical or quantum matter. It will be fascinating to look for experimental realizations of the dipole-Navier-Stokes hydrodynamics developed in this letter, perhaps in high-quality solid-state devices in very large electric fields, or in low-density interacting ultracold atoms in tilted optical lattices \cite{}.

\section*{Acknowledgements}

We acknowledge useful discussions with Kristan Jensen and Rahul Nandkishore. JG and AL were partially supported by Grant ???? from the Gordon and Betty Moore Foundation's EPiQS Initiative. AL was partially supported by a Research Fellowship from the Alfred P. Sloan Foundation.

\onecolumngrid
\begin{appendix}

\section{Hydrodynamic modes with higher multipole moment conservation}

\section{Details of the effective theory}

\end{appendix}

\section{Introduction}

One of the oldest and most applicable theories in physics is hydrodynamics. While hydrodynamics was first understood as a phenomenological set of equations that govern liquids and gases \cite{landau}, over the past century we have instead come to recognize that hydrodynamics is best understood as the universal effective field theory that governs thermalization in a chaotic many-body system \cite{Crossley:2015evo,Haehl:2015foa,Jensen:2017kzi}. Due to this universality, the same theories of hydrodynamics can describe diverse phases of classical or quantum matter, including ultracold atoms \cite{Cao_2010}, quark-gluon plasma \cite{Shuryak_2009}, and electrons and phonons in high-purity solids \cite{Crossno_2016,Bandurin_2016,de_Jong_1995,Krishna_Kumar_2017}.

Novel phases of matter arise when the microscopic degrees of freedom are \emph{fractons} -- excitations which are individually immobile, and can only move in tandem \cite{chamon2005quantum,vijay2015new,vijay2016fracton, pretko2017subdimensional,pretko2017emergent,pretko2017generalized,slagle2017fracton,slagle2018symmetric, schmitz2018recoverable,ma2017fracton,ma2018topological,shirley2019foliated,Pretko2018,radzihovsky2020fractons,Seiberg_2020,Gorantla_2020,rudelius2020fractons,doshi}. As a simple example, we can consider a phase of matter in which the global charge (or mass) is conserved, together with the global dipole moment (or momentum). In this case, a single particle cannot move without violating the dipole conservation law. If such a phase of matter, realized on a lattice, can thermalize \cite{pai2019localization,2020PollmannFragmentation,shattering}, it is described by a novel hydrodynamics in which Fick's law of charge diffusion is replaced by slower subdiffusion \cite{fractonhydro,knap2020,zhang2020,morningstar,iaconis2021,Ganesan:2020wvm}. The emergence of subdiffusion is not special to peculiar microscopic details of particular lattice models; it is guaranteed by the symmetries of the dynamics. This robustness of hydrodynamics to microscopic peculiarities makes it an experimentally ideal probe of constrained dynamics \cite{Guardado_Sanchez_2020}.

Here, we study such a dipole-conserving theory which is also translation-invariant. In this case, charge, dipole and momentum are all conserved quantities.
We show that these fluids exhibit a highly unusual hydrodynamics, with magnon-like propagating modes whose decay rates are subdiffusive. More importantly, below four spatial dimensions, these fluids are violently unstable to thermal fluctuations. Hydrodynamics will thus not exist in any experimentally realizable spatial dimension; rather, it will be replaced by a fractonic generalization of the Kardar-Parisi-Zhang (KPZ) universality class \cite{kpz}. In one spatial dimension, the KPZ fixed point is remarkably generic: it arises in the study of growing surfaces \cite{kpz} and quantum Hall edge states \cite{gloriosoprl}, and -- most importantly for us -- it is the endpoint of a fundamental instability of the Navier-Stokes equations in one spatial dimension \cite{Spohn_2014}. Among the many non-equilibrium universality classes that have been discovered in statistical mechanics, ranging from flocking \cite{flocking} in active matter, to driven-dissipative condensates \cite{PhysRevResearch.2.033018}, to fluctuating smectic liquid crystals \cite{toner}, the KPZ fixed point is unique in that it describes the instability of an ordinary undriven fluid at rest, without any spontaneous symmetry breaking. By studying the curious hydrodynamics arising in matter with fractonic conservation laws, we have discovered that such instabilities of fluids in equilibrium can exist in three dimensions as well.

Just as hydrodynamics is robust against microscopic details, so too is the universality class of non-equilibrium dynamics that emerges out of a hydrodynamic instability. The universality class of a dipole-conserving fluid can be realized in any medium with exact or emergent dipole and momentum conservation. While we are not aware of a current experimental platform exhibiting these conservation laws, we outline possible strategies for building them in what follows. Independently of experiment, it is possible to realize these conservation laws in classical or quantum dynamics which can be simulated numerically. We have numerically simulated one-dimensional chaotic classical dynamics with dipole and momentum conservation, and find evidence for the advertised fractonic generalization of the KPZ fixed point. Our work thus establishes an unexpected and profound connection between non-equilibrium statistical mechanics and unconventional fracton phases of matter.

\section{Microscopic Models}Given the seemingly abstract nature of a fluid with simultaneous dipole and momentum conservation, before diving into the theoretical framework of fluid dynamics, it is instructive to first describe a microscopic model which would lie in such a universality class. For simplicity in the discussion which follows, we focus on one-dimensional systems. Let us begin by considering $N$ particles with momenta $p_i$ and displacements $x_i$ ($i=1,\ldots,N$), coupled together by the following Hamiltonian: \begin{equation} H = \sum_{n=1}^{N-1} \frac{(p_n-p_{n+1})^2}{2} + V(x_n - x_{n+1}). \label{eq:Hdip} \end{equation} Here $V(x) = V_2 x^2 + V_3 x^3 + \cdots$ is a generic polynomial. This is qualitatively similar to a simple model of one-dimensional solids with anharmonicity, except for the kinetic energy, which depends only on differences of momenta. Somewhat similar models have arisen in the ``dipole fermion" picture of fractional quantum Hall states \cite{simon,fliss2021entanglement}. This curious kinetic energy is all that we need to obtain an emergent dipole conservation; a minimal numerical check is sketched below.
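To make this concrete, the following minimal sketch (our illustration only, not the simulation code used for the figures in this paper; the chain length, time step, couplings and initial conditions are arbitrary choices) integrates the equations of motion of (\ref{eq:Hdip}) with a leapfrog scheme -- which applies since $H = T(p) + V(x)$ is separable -- and checks that $D = \sum_n x_n$ and $P = \sum_n p_n$ are conserved. Conservation here is exact up to round-off, because each update of $x$ (respectively $p$) adds a vector whose components telescope to zero, mirroring the telescoping structure of $H$.

\begin{verbatim}
import numpy as np

# Illustrative parameters (not the values used for the figures in this paper)
N, dt, steps = 256, 1e-3, 20000
k3, k4 = 0.3, 0.2      # anharmonic couplings in V(s) = s^2/2 + k3 s^3 + k4 s^4
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(N)
p = 0.01 * rng.standard_normal(N)

def dT_dp(p):
    # T = sum_n (p_n - p_{n+1})^2 / 2, so (dT/dp)_n = d_n - d_{n-1}
    d = p[:-1] - p[1:]
    g = np.zeros_like(p)
    g[:-1] += d
    g[1:] -= d
    return g

def force(x):
    # F_n = -dV/dx_n with V = sum_n V(x_n - x_{n+1})
    s = x[:-1] - x[1:]
    vp = s + 3*k3*s**2 + 4*k4*s**3   # V'(s)
    f = np.zeros_like(x)
    f[:-1] -= vp
    f[1:] += vp
    return f

D0, P0 = x.sum(), p.sum()            # dipole moment and momentum at t = 0
for _ in range(steps):               # leapfrog (kick-drift-kick)
    p += 0.5*dt*force(x)
    x += dt*dT_dp(p)
    p += 0.5*dt*force(x)
print("dipole drift:  ", abs(x.sum() - D0))   # zero up to round-off
print("momentum drift:", abs(p.sum() - P0))   # zero up to round-off
\end{verbatim}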
Using the Poisson brackets $\lbrace x_n, p_m\rbrace = \delta_{nm}$, we find that the dipole moment $D$, ``charge" $Q$ and momentum $P$, given by \begin{equation} Q = \sum_{n=1}^N 1, \;\;\; D = \sum_{n=1}^N x_n, \;\;\; P = \sum_{n=1}^N p_n, \end{equation} obey the classical multipole algebra \cite{gromov2018towards} \begin{equation} \label{eq:multipole} \lbrace Q,D\rbrace = \lbrace Q, P\rbrace = 0, \;\;\; \lbrace D,P\rbrace = Q. \end{equation} Since $Q,D$ and $P$ Poisson-commute with the Hamiltonian, we have conservation of charge, dipole and momentum all at once, in a spatially local theory. Note that energy is also conserved if the dynamics is Hamiltonian, but it is straightforward to modify the dynamics to no longer conserve energy.

If we assume the dynamics is close to equilibrium (i.e. displacements are small), then we can safely set (without loss of generality) $V(x) = \frac{1}{2}x^2$. A simple calculation reveals that the normal modes of this quadratic and integrable system are of the form $(x_n,p_n) \propto \mathrm{e}^{\mathrm{i}kn - \mathrm{i}\omega_k t}$, and that when $k \ll 1$, $\omega_k \approx \pm k^2$. As promised in the introduction, we have uncovered a magnon-like dissipationless dispersion relation, which we will show later is universal and follows entirely from simultaneous dipole and momentum conservation, even when we include higher-order terms (which destroy integrability) in $V(x)$.

As we have found a magnon-like dispersion relation, it is tempting to push the analogy between dipole-conserving fluids and isotropic ferromagnets a little further. Consider the isotropic Heisenberg ferromagnet \begin{equation} H = -\sum_{n=1}^{N-1}\mathbf{S}_n \cdot \mathbf{S}_{n+1}, \end{equation} where $\mathbf{S}_n = (S^x_n, S^y_n, S^z_n)$ and $\lbrace S^i_n,S^j_m\rbrace = \epsilon^{ijk}S^k_n \delta_{nm}$. Imposing the constraint $S^x_n S^x_n+S^y_n S^y_n+S^z_n S^z_n=1$, and perturbing around the minimal energy configuration $S^z_n=1$, we observe that if we identify charge $Q$, dipole $D$ and momentum $P$ with the total $x$, $y$ and $z$ components of spin, the spin algebra is equivalent to (\ref{eq:multipole}). Moreover, Taylor expanding $H$ to leading order in small $S^{x,y}_n$, we observe that up to a global constant, $H$ is approximately given by (\ref{eq:Hdip}) with $V(x)=\frac{1}{2}x^2$. Remarkably, we see that the isotropic ferromagnet has an \emph{approximate} dipole and momentum conservation close to equilibrium. However, nonlinearities in the ferromagnet do not preserve $S^z_n=1$, and so the nonlinear theories differ. Despite this nonlinear discrepancy, we note that nonlinearities are known to be strongly relevant below six dimensions in the Heisenberg ferromagnet \cite{hohenberg}. In the exact dipole-conserving theory, we will show that the upper critical dimension is four. And while the Mermin-Wagner theorem destroys order in the ferromagnet in one dimension, there is no destruction of dipole conservation in the model (\ref{eq:Hdip}). It would be interesting to investigate whether a ferromagnet can be modified by realizable interactions to better stabilize the dipole-conserving hydrodynamics.

\section{Hydrodynamics}We now use canonical arguments, based on the second law of thermodynamics, to derive the hydrodynamics of conserved momentum, charge, and dipole.
The fundamental assumption of hydrodynamics is that the late-time physics is governed locally by the conserved quantities of the system, which we write as \begin{equation} P^i = \int \mathrm{d}^d x\, \pi^i,\quad Q=\int \mathrm{d}^d x\, n \ \end{equation} where $\pi^i$ and $n$ are the momentum and charge density, respectively. Indices $i,j$ in what follows run over the spatial dimensions $1,\ldots, d$, and repeated indices are summed over. The dynamics of the densities $n$ and $\pi^i$ is given by the local conservation laws: \begin{equation} \label{eom1}\p_t \pi^i + \p_j T^{ji}=0,\qquad \p_t n+\p_i J^i=0\ ,\end{equation} where $T^{ij}$ and $J^i$ are the stress tensor and charge flux, and are assumed to be local expressions in $\pi^i,n$. Crucially, we also need to demand \begin{equation}\label{dipfl} J^i = \p_j J^{ji}\ ,\end{equation} which comes from dipole conservation: \begin{equation} \label{dipfl1}\p_t \int \mathrm{d}^d x\, x^i n= -\int \mathrm{d}^d x\, x^i\p_j J^j=\int \mathrm{d}^d x\, J^i\ ,\end{equation} where the second equality follows from integrating by parts (discarding boundary terms), and the right-hand side vanishes when $J^i$ takes the total-derivative form (\ref{dipfl}); demanding locality, this form is also necessary. We will also demand that $J^{ij}$ be local in the densities.

We have not included the dipole density as a separate degree of freedom. Indeed, let us decompose the dipole charge as $D^i=D_0^i+S^i$, where $D_0^i=\int \mathrm{d}^d x\, x^i n$ is the ``orbital'' component and $S^i$ a remainder, corresponding to a density of microscopic dipoles. In general, we only expect the sum $D^i$ to be conserved, and not each component separately. Therefore, the $S^i$ charge will typically relax into the local density $x^in$, and dipole density is not a separate hydrodynamic degree of freedom \cite{Glorioso_2021}. This is analogous to the reason why a fluid with angular momentum conservation does not have a new hydrodynamic mode associated to angular momentum density.

Upon specifying the explicit dependence of $T^{ij}$ and $J^i$ on $\pi^i$ and $n$, Eqs. (\ref{eom1}),(\ref{dipfl}) will completely specify the time evolution of $\pi^i$ and $n$. To find such explicit dependence, we shall write down the most general expressions of $T^{ij}$ and $J^i$ in terms of $\pi^i$ and $n$ following a derivative expansion, and then impose that the dynamics be consistent with the local second law of thermodynamics. This amounts to finding a vector $J_S^i$ such that \begin{equation} \label{entp}\p_t s+\p_i J_S^i\geq 0\end{equation} when evaluated on solutions to hydrodynamics, where $s$ is the thermodynamic entropy density. This basic constraint will uniquely determine the concrete expressions of $T^{ij},J^i$ in terms of $\pi^i,n$, order by order in derivatives, up to phenomenological coefficients that are determined by the specific underlying system.

The thermodynamics and hydrodynamics of dipole-conserving systems are special: contrary to all cases known to us, the homogeneous part of the momentum density decouples from the dynamics. Indeed, we begin by first assuming that the entropy density is a function of momentum and charge densities, $s=s(\pi^i,n)$, as in conventional thermodynamics. We now show that this is inconsistent with (\ref{dipfl}). Recall the thermodynamic relation \begin{equation} T\mathrm{d}s= -V^i \mathrm{d}\pi^i-\mu \mathrm{d}n, \end{equation} where $V^i$ and $\mu$ are the velocity and chemical potential of the system. We recall that we are assuming absence of energy conservation, so we will take $T$ to be a constant set by noise in this discussion, and will study hydrodynamics with energy conservation in a technical companion paper \cite{future}.
Combining this thermodynamic relation with the fact that, in non-dissipative hydrodynamics, entropy is locally conserved, we arrive at \begin{equation} \label{2ndla0}T(\p_ts+\p_i J_S^i) =-V^i(\p_t\pi^i+\p_j T^{ji}) - \mu(\p_tn+\p_i J^i). \end{equation} The most general expressions for the currents are $J_S^i = s_1 V^i$, $T^{ij} = p \delta^{ij} + h_1 \pi^i V^j$, $J^i = h_2 V^i$, where $s_1,p,h_1,h_2$ are functions of $\pi^i$ and $n$. Plugging these expressions into (\ref{2ndla0}) gives $s_1=s$, $p=Ts+\mu n+ V^i \pi^i$, $h_1 V^i =\pi^i$ and $h_2=n$, which are just the standard constitutive relations of the hydrodynamics of a charged fluid: indeed, we have not used anywhere the fact that we are dealing with a dipole-conserving fluid. In particular, these results together with (\ref{dipfl}) lead to the relation \begin{equation}\label{nV1} n V^i = \p_j J^{ji},\end{equation} which would naively imply that $J^{ij}$ is non-local in the hydrodynamic variables, thus violating our basic assumptions. The only way for (\ref{nV1}) to be consistent with locality, therefore, is to demand that the velocity $V^i$ is itself a total derivative (divided by $n$). Since $V^i$ is defined as the thermodynamic conjugate of $\pi^i$, such a requirement is only possible if the entropy density has the following dependence on $\pi^i$: \begin{equation} s=s(\p_i v_j,n),\qquad v_i\equiv\frac{\pi^i}{n}\label{td2}\ .\end{equation} Again demanding (\ref{2ndla0}), we find \begin{subequations}\begin{align} T^{ij} &= p \delta^{ij} + V^i \pi^j - \psi_{ik}\partial_j v_k, \label{const1} \\ J^{ij} &= \psi_{ij},\label{const1a} \end{align}\end{subequations} where the velocity and the thermodynamic pressure are \begin{equation} V^i=\frac 1n \p_j \psi_{ji},\qquad p=Ts-nT\frac{\partial s}{\partial n} \label{const2}, \end{equation} and we have defined the quantity \begin{equation} \psi_{ij}\equiv\left.T\frac{\p s}{\p(\p_i v_j)}\right|_n\,.\label{const4} \end{equation} Unlike in ordinary fluids, the velocity is a higher-derivative expression in the momentum density. This is the only way to reconcile (\ref{nV1}) with locality. In a rotationally invariant theory, we find that $T_{ij}$ is symmetric up to total derivatives, consistent with conservation of angular momentum. An explicit derivation of these facts is provided in Appendix \ref{app:zero}.

Note that the entropy density $s$ as well as the equations of motion are invariant under the shift \begin{equation}\label{dipsh} \pi^i\to \pi^i + n c^i,\qquad T^{ij}\to T^{ij} + J^j c^i\ ,\end{equation} where $c^i$ is a constant vector. This invariance is a manifestation of the dipole algebra (\ref{eq:multipole}). Indeed, (\ref{eq:multipole}) implies, using locality, $\lbrace D^i,\pi^j \rbrace= n\delta^{ij}$, which is equivalent to the symmetry (\ref{dipsh}). In fact, using (\ref{dipsh}) as the only input, one immediately infers (\ref{td2}), which in turn implies (\ref{const1})-(\ref{const4}) and in particular (\ref{dipfl}). Such a symmetry-based approach confirms that the hydrodynamics (\ref{const1})-(\ref{const4}) is valid for arbitrary strongly-coupled systems and is thus universal.

It is straightforward to derive first-order dissipative corrections to hydrodynamics. We do this calculation in Appendix \ref{app:const}, and now summarize the results.
We find that $J^{ij} = J^{ij}_{(0)} + J^{ij}_{(1)}$ and $T^{ij} = T^{ij}_{(0)} + T^{ij}_{(1)}$, where $J^{ij}_{(0)}$ and $T^{ij}_{(0)}$ correspond to the ideal hydrodynamic results derived above, and \begin{subequations}\label{con1}\begin{align} -T^{ij}_{(1)}&=\eta^{ijkl}\p_k V_l+\alpha^{ijkl}\p_k\p_l \mu,\\ J^{ij}_{(1)}&=\kappa^{ijkl}\p_k V_l+C^{ijkl}\p_k\p_l \mu.\end{align}\end{subequations} The tensor structures are detailed in Appendix \ref{app:const}, and are analogous to the shear and bulk viscosities of an isotropic fluid. To complete our hydrodynamic description, we finally add the effect of thermal fluctuations. Generalizing the standard fluctuation-dissipation theorem \cite{landau}, we add noise to the currents, $T^{ij}\to T^{ij}+\tau^{ij}$ and $J^{ij}\to J^{ij}+\xi^{ij}$, where the variance is determined by the dissipative coefficients of (\ref{con1}): \begin{subequations}\label{noise}\begin{align} \langle \tau^{ij} \tau^{kl}\rangle&=2T \eta^{ijkl}\delta(t)\delta^{(d)}(x),\\ \langle \xi^{ij} \xi^{kl}\rangle&=2T C^{ijkl}\delta(t)\delta^{(d)}(x),\\ \langle \tau^{ij} \xi^{kl}\rangle&=T(\alpha^{ijkl}+\kappa^{klij})\delta(t)\delta^{(d)}(x). \end{align}\end{subequations} In Appendix \ref{app:linearized}, we derive the propagating hydrodynamic modes of this theory: namely, we look for solutions to the hydrodynamic equations in which $n,v_i \propto \mathrm{e}^{\mathrm{i}kx-\mathrm{i}\omega t}$. We find a magnon-like ``sound'' mode with dispersion relation \cite{forster,hohenberg} $\omega = \pm ck^2 - \mathrm{i}\gamma k^4$, and $d-1$ subdiffusive modes for transverse momentum with dispersion relation $\omega = -\mathrm{i}\gamma^\prime k^4$. Explicit expressions for $c,\gamma,\gamma^\prime$ are not illuminating and are provided in the appendix. Note that the qualitative structure of these quasinormal modes matches that of an ordinary fluid, except that each power of the wave number $k$ is doubled.

\section{Instability of Hydrodynamics} In fact, the true dispersion relations differ from those we found from linear response above. Relevant nonlinearities couple to thermal fluctuations and lead to anomalous scaling, severely affecting the long-time behavior of general dipole-conserving hydrodynamics. For simplicity, we shall present the explicit nonlinearities only in one dimension; the higher-dimensional counterpart is qualitatively similar and can be found in Appendix \ref{app:expl}. We consider perturbations of an equilibrium fluid at rest -- $v_x=0$ and $n=n_0$ ($\delta n = n -n_0$) -- and find \begin{subequations}\begin{align} &\p_t v_x+\frac 1\chi \p_x\delta n+\lambda\delta n\p_x\delta n+\lambda^\prime\delta n^2 \p_x\delta n\nonumber\\ &+\frac{aT}{n_0^2}\Gamma\p_x^4v_x-\frac A{n_0\chi}\p_x^3 \delta n+\frac 1{n_0}\p_x\tau^{xx}+\cdots=0,\\ &\p_t \delta n-A\p_x^3 v_x+\frac C\chi\p_x^4 \delta n+\p_x^2\xi^{xx}+\cdots=0, \end{align}\end{subequations} where $\cdots$ denote higher-derivative/nonlinear terms which are not important for what follows, and we have included stochastic fluctuations in the equations. The values of the constants $\lambda$, $\lambda^\prime$, $\chi$, $a$, $A$ and $C$ do not depend on $v_x$ or $\delta n$.

Neglecting fluctuations, the dissipative scaling is $\omega\sim k^4$. From (\ref{noise}), the noise scales as $\tau,\xi\sim k^{\frac{d+4}2}$. Starting from the linearized theory and assuming that $a$, $A$ etc. are scale ($k$) independent, we find $\pi^i\sim k^{\frac{d-2}2}$ and $n\sim k^{\frac d2}$.
As per the usual renormalization group analysis, the coupling $\lambda$ must scale as $k^{\frac{d-4}2}$, making it \emph{relevant} when $d<4$; $\lambda^\prime$ scales as $k^{d-2}$ and is relevant when $d<2$. As a consequence, we can anticipate anomalous dissipative scaling: the magnon-like sound mode will have dispersion relation $\omega\sim k^2-ik^z$, with $z<4$. We crudely estimate $z$ by assuming that, even after fluctuations are accounted for, the scaling of the densities does not renormalize. Assuming $\lambda\ne 0$ and balancing the time derivative with the nonlinearities, $\p_t \pi^i\sim \nabla (\delta n)^2,\nabla(\nabla\pi)^2$, leads to $k^{z+\frac{d-2}2}\sim k^{d+1}$, or $z\sim d/2+2$. In one dimension, this gives $z\sim 2.5$.

\section{Numerical Simulations} Having predicted both the exotic dissipative hydrodynamics of a dipole-conserving theory and its breakdown due to thermal fluctuations, we now describe two models which we have used to numerically test our predictions.

\textbf{Model A:} Our first model starts with Hamiltonian (\ref{eq:Hdip}), with \begin{align} V(x) &= \frac{1}{2}x^2 + k_3 x^3 + k_4 x^4. \label{eq:Vx} \end{align} Note that Model A conserves energy, while our hydrodynamic derivation above, strictly speaking, does not. Energy conservation changes the universality class and critical exponents of hydrodynamics \cite{future}, and so model A does not lie in the universality class predicted above. In Appendix \ref{app:modelA}, we study a stochastic version of model A with noise and dissipation, which is not energy conserving and is predicted to lie in our universality class. Further simulations, and discussion of the role of energy conservation, are also described there.

\textbf{Model B:} Our second model corresponds to \begin{equation} H = \sum_{i=1}^N \left(-\cos p_i - F x_i\right) + \sum_{i=1}^{N-1} V(x_i - x_{i+1}), \end{equation} with $V$ given in (\ref{eq:Vx}). Note that this model does \emph{not} have explicit dipole conservation, nor momentum conservation. However, analogous to the emergence of dipole conservation out of energy conservation in systems placed in strong tilt fields \cite{fractonhydro,zhang2020universal,Guardado_Sanchez_2020}, we predict the emergence of dipole \emph{and} momentum conserving hydrodynamics in this model. Indeed, in Appendix \ref{app:modelB}, we explicitly show that the linearized equations in model B exhibit fast Bloch oscillations superimposed on top of magnon-like hydrodynamic modes. Model B is inspired by one possible experimental realization of dipole-conserving hydrodynamics, in which ultracold fermionic atoms are placed in a tilted optical lattice \cite{Guardado_Sanchez_2020}. The $\cos p_i$ kinetic energy arises from the finite bandwidth of a lattice model, and the $Fx_i$ force field comes from the tilt. In order for this realization to be appropriate, it is important for umklapp scattering to be suppressed and for momentum to be approximately conserved.

We now present large-scale simulations for each model. A ``thermodynamic" check for dipole-conserving hydrodynamics is to look at the equilibrium fluctuations of the momentum density, which are proportional to the momentum susceptibility: using (\ref{nV1}) and (\ref{const2}), \begin{equation} \chi_{PP} = \frac{\pi_i}{V_i} \propto k^{-2}.
\end{equation} As $k\rightarrow 0$, we predict a clear divergence in the equal-time correlation functions $\langle p_k(t)p_{-k}(t)\rangle$, with $\langle\cdots\rangle$ an average over times and/or initial conditions; here $p_k$ denotes the discrete Fourier transform of $p_i$. Figure \ref{fig:k2} demonstrates that this divergence is present in both models. \begin{figure} \includegraphics[scale = 1.0]{./figures/Pk.pdf} \caption{The equal-time correlation function $\langle p_{-k}(t)p_k(t)\rangle$ at late times for (a) model A and (b) model B. The dashed line is the theoretical $k^{-2}$ prediction.} \label{fig:k2} \end{figure}

Next, we study the thermalization time scale of each model. Choosing random initial conditions, higher-momentum modes will thermalize first, so that $\langle p_{-k}(t)p_k(t)\rangle$ will develop a maximum at some momentum $k_*$. For a dispersion relation with $\text{Im}\,\omega\propto k^z$, the thermalization time of a mode with momentum $k$ scales as $k^{-z}$, which will yield \begin{equation} k_*(t)\propto t^{-1/z};\end{equation} note that $z=4$ in linearized hydrodynamics. Figure \ref{fig:z4} shows numerical simulations in both models, which demonstrate that $z\approx 4$ when $k_3=0$, but that $z\approx 2.5$ when $k_3\ne 0$. Crucially, we observe both (\emph{i}) a large-scale deviation from $z=4$, which is compatible with our crude estimate for $z$ at the non-equilibrium fixed point, and (\emph{ii}) a much weaker instability (in fact, we did not numerically detect one unambiguously) when $k_3=0$. Moreover, our data exhibits the strongest scaling collapse when $z\approx 2.5$ (when $k_3\ne 0$). This constitutes strong numerical evidence for the existence of \emph{the same} hydrodynamic fixed point arising in both Models A and B. Remarkably, the observed value of $z$ is extremely close to our simplistic estimate of 2.5. \begin{figure} \includegraphics[scale = 1.0]{./figures/scalings_v2.pdf} \includegraphics[scale = 1.0]{./figures/self-similar.pdf} \caption{Temporal dependence of $1/k_*$ showing anomalous scaling in model A (\textbf{top}) and model B (\textbf{middle}). Shown in blue (orange) is the case $k_3\ne 0$ ($k_3=0$). We estimate $1/z = 0.40\pm 0.03$ when $k_3\ne 0$ and $1/z = 0.26\pm 0.01$ when $k_3=0$ in model A; $1/z = 0.41 \pm 0.02$ when $k_3\ne 0$ and $1/z = 0.27\pm 0.02$ when $k_3=0$ in model B. In model A, $k_3=3\cdot 10^3$ when nonzero, $k_4=2\cdot 10^3$, $N=10^3$, averaging over 50 different initial conditions. In model B, $F=50$, $k_3=3\cdot 10^3$ when nonzero, $k_4=1.8\cdot 10^6$, $N=10^4$, with one random realization shown. The characteristic timescale is $\tau_* = 2000/\varepsilon_*$ in the upper panel and $\tau_* = 1/\varepsilon_*$ in the lower panel, with $\varepsilon_*$ the energy density. \textbf{Bottom:} Scaling collapse of the equal-time momentum correlation function in model A for $1/z \approx 0.4$. Shown in the inset is the raw equal-time momentum correlation function without rescaling. } \label{fig:z4} \end{figure}

In Model A, we have also studied the fate of the magnon-like sound mode via the Fourier transform of the unequal-time correlation functions $\langle p_{-k}(0)p_k(t)\rangle$. Figure \ref{fig:magnon} shows that this correlation function is sharply peaked near $\omega = c k^2$, with a magnon decay rate consistent with $z\approx 2.5$.
This suggests that the real part of the dispersion relation remains quadratic at the dipole-conserving KPZ fixed point, and is consistent with our assumption that the densities do not pick up anomalous exponents at this new fixed point. \begin{figure} \includegraphics[scale = 1.0]{./figures/spectrum.pdf} \caption{(a) Absolute value of the temporal Fourier transform of $\langle P_{-k}(0)P_{k}(t)\rangle$ showing the quadratic dispersion of a propagating mode in model A. (b) Linewidth of the quadratic excitations in panel (a) as a function of momentum. Shown with dashed-dotted lines is the $k^{2.5}$ fit.} \label{fig:magnon} \end{figure}

We remark that the numerical detection of anomalous transport scaling can sometimes be quite subtle. For example, it has long been known that energy transport in one-dimensional ``standard'' (non-fractonic) hydrodynamics is anomalous \cite{PhysRevLett.78.1896}; however, this statement has been subject to relatively recent debate, in which certain models were observed to display ordinary diffusion \cite{PhysRevE.82.061118,PhysRevE.85.060102}. While the general consensus is now that energy transport in these systems is always anomalous \cite{PhysRevE.90.012124}, these works instruct us that unambiguous determinations of non-equilibrium critical exponents may be quite non-trivial. The relatively weak mixing of energy density with momentum density within these one-dimensional models is consistent with the fact that our model A appears to access the dipole-KPZ fixed point at numerically accessible time scales, even without any noise.

\section{Outlook}We have discovered a new phase of matter which is undriven, yet out of equilibrium. Our construction was inspired by the physics of fractons, which were originally devised \cite{chamon2005quantum} to protect quantum information against thermalization, but have since revealed deep connections between quantum information, condensed matter physics, quantum field theory, and (due to this work) non-equilibrium statistical mechanics. Although we have focused on the hydrodynamics of a dipole-conserving fluid here, we anticipate infinitely many additional non-equilibrium universality classes arising from the consideration of higher multipole conservation laws \cite{fractonhydro}, subsystem symmetries \cite{fractonhydro,IaconisVijayNandkishore,knap2021}, or explicit/spontaneous symmetry breaking. We look forward to the systematic classification of fracton-inspired non-equilibrium universality classes, and hope for their ultimate discovery in experiment.

\section*{Acknowledgements}We acknowledge useful discussions with Luca Delacr\'etaz, Bertrand Halperin, Kristan Jensen, Rahul Nandkishore, Leo Radzihovsky and Dam Thanh Son. This work was supported by the Department of Energy through Award DE-SC0019380 (PG), the Simons Foundation through Award No. 620869 (PG), the Gordon and Betty Moore Foundation's EPiQS Initiative via Grants GBMF4302 and GBMF8686 (JFRN), and by Research Fellowships from the Alfred P. Sloan Foundation under Grants FG-2020-13615 (PG) and FG-2020-13795 (AL).

\emph{Note Added.}---While preparing this work, the preprint \cite{grosvenor2021hydrodynamics}, which discusses non-dissipative response in a fluid with conserved dipole and momentum, was posted.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Torsors under Bruhat-Tits group schemes} We show that a Bruhat-Tits group scheme is the limit of all corresponding parahoric group schemes and use this observation to show that the induced map on the level of $\Bun_\mathcal{G}$ is an open immersion. We first discuss (pseudo-)torsors for limits of groups.

\subsection{Pseudo-torsors for limits of groups} We use the following result on pseudo-torsors under limits of groups. For a sheaf of groups $\underline{G}$ on a site $\mathcal{C}$ we denote by $\mr{PTor}_{\underline G}$ the category of \emph{$\underline G$-pseudo-torsors} with $\underline G$-equivariant maps. In other words, an object of $\mr{PTor}_{\underline G}$ is given by a sheaf $E$ on $\mathcal{C}$ together with a (right) action $E \times \underline G \to E$ of $\underline G$ such that the induced map $E \times \underline G \to E \times E$ given by $(e,g) \mapsto (e, eg)$ is an isomorphism. A map $f \colon \underline G \to \underline G'$ of sheaves of groups on $\mathcal{C}$ induces a functor $f_\ast \colon \mr{PTor}_{\underline G} \to \mr{PTor}_{\underline G'}$ given by $E \mapsto E\times^{\underline G} \underline G'$, where the action of $\underline G'$ is by right multiplication in the second factor. Moreover, the canonical map $(\id_E, \mathbf{1}_{\underline G'}) \colon E \to E \times^{\underline G} \underline G'$ is $\underline G$-equivariant for the $\underline G$-action on $E \times^{\underline G} \underline G'$ via $f$ on the second factor. A $\underline{G}$-pseudo-torsor $E$ is a \emph{$\underline G$-torsor} if for every object $U$ of $\mathcal{C}$ there is a cover $\{U_i \to U \colon i \in I\}$ of $U$ in $\mathcal{C}$ such that $\Gamma(U_i, E) \neq \emptyset$. We denote by $\mathfrak{B}(\underline G)$ the full subcategory of $\mr{PTor}_{\underline G}$ of $\underline G$-torsors on $\mathcal{C}$. The functor $f_\ast$ for a map of sheaves of groups $f \colon \underline G \to\underline G'$ restricts to a functor $f_\ast \colon \mathfrak{B}(\underline G) \to \mathfrak{B}(\underline G')$.

\begin{lem} \label{lemPTor} Let $I$ be a finite partially ordered set and let $(\underline G_i)_{i \in I}$ be a diagram of sheaves of groups over $I$. Let $\underline G = \varprojlim_{i \in I} \underline G_i$. Then $\underline G$ is a sheaf of groups on $\mathcal{C}$ together with a compatible system of projection maps $f_i \colon \underline G \to \underline G_i$. The functor $$ \varprojlim_{i \in I} f_{i, \ast} \colon \mr{PTor}_{\underline G} \to \varprojlim_{i \in I} \mr{PTor}_{\underline G_i}, \qquad E \mapsto (E \times^{\underline G} \underline G_i)_{i \in I}$$ has a right adjoint given by $$ \lim \colon \left(\varprojlim_{i \in I} \mr{PTor}_{\underline G_i}\right) \to \mr{PTor}_{\underline G}, \qquad (E_i)_{i \in I} \mapsto \varprojlim_{i \in I} E_i.$$ Moreover, the restriction $ \varprojlim_{i \in I} f_{i, \ast}\colon \mathfrak{B}(\underline G) \to \varprojlim_{i \in I} \mathfrak{B}(\underline{G}_i)$ to the full subcategory of torsors is fully faithful. \end{lem} \begin{proof} As a first step, we show that $\varprojlim_{i \in I} E_i$ is indeed a pseudo-torsor for $\underline G$. The sheaf of groups $\underline G$ acts on each $E_i$ by the action induced by $f_i$, and all these actions are compatible by the observation above that the reduction maps are equivariant. Hence, $\varprojlim_{i \in I} E_i$ carries a canonical $\underline G$-action.
As all the $E_i$ are pseudo-torsors under $\underline G_i$, the induced map \begin{align*} \left( \varprojlim_{i \in I} E_i \right) \times \underline G & \to \left( \varprojlim_{i \in I} E_i \right) \times \left( \varprojlim_{i \in I} E_i \right) \\ ((e_i)_{i \in I}, g) & \mapsto ((e_i)_{i \in I}, (e_i f_i(g))_{i \in I}) \end{align*} is an isomorphism, so $\varprojlim_{i \in I} E_i$ is a $\underline G$-pseudo-torsor.

As a next step, we show that the limit is right adjoint to the family of projections. Let $(F_i)_{i \in I} \in \varprojlim_{i \in I} \mr{PTor}_{\underline G_i}$. A $\underline G$-equivariant map $E \to F_i$ factors as $E \to E \times^{\underline G} \underline G_i \to F_i$ for a unique $\underline G_i$-equivariant map $E \times^{\underline G} \underline G_i \to F_i$. Hence, we get $$ \Hom_{\mr{PTor}_{\underline G}}(E, \varprojlim_{i \in I} F_i) = \Hom_{\varprojlim_{i \in I} \mr{PTor}_{\underline G_i}}((E \times^{\underline G} \underline G_i )_{i \in I}, ( F_i)_{i \in I}).$$ In order to see that the restriction to $\mathfrak{B}(\underline G)$ is fully faithful, we check that the unit of the adjunction $E \to \varprojlim_{i \in I} E \times^{\underline G} \underline G_i$ is an isomorphism for $E \in \mathfrak{B}(\underline G)$. We can do so locally, so we may assume that $E$ is trivial. As all maps $E \to E \times^{\underline G} \underline G_i$ are $\underline G$-equivariant, choosing a trivialisation of $E$ induces a compatible choice of trivialisations of all $E \times^{\underline G} \underline G_i$. Hence, the map $E \to \varprojlim_{i \in I} E \times^{\underline G} \underline G_i$ is given by $\underline G \to \varprojlim_{i \in I} \underline G_i$, which is an isomorphism by construction. \end{proof}

\begin{remark} \label{remCounterLimTors} Note that given a compatible family of $\underline{G}_i$-torsors $(E_i)_{i \in I} \in \varprojlim_{i \in I} \mathfrak{B}(\underline G_i)$, their limit will in general not be a $\underline G$-torsor, as it might not be possible to produce a compatible system of sections for $(E_i)_{i \in I}$. For example, consider the diagram $\underline G_1 \to \underline G_3 \leftarrow \underline G_2$ with $\underline G_1 = \underline G_2 = \{e\}$ the trivial group and $\underline G_3 = \mathbb{Z}/2$. Then $\underline G_1 \times_{\underline G_3} \underline G_2 = \{e\}$ is again the trivial group. Let us moreover consider the sets $E_1 = E_2 = \{ \ast \}$ and $E_3 = \{a_1,a_2\}$. Then $E_i$ is a trivial $\underline G_i$-torsor for all $i = 1,2,3$. However, under the maps $f_i \colon E_i \to E_3, \ast \mapsto a_i$ for $i = 1,2$, the fibre product $E_1 \times_{E_3} E_2$ is empty, hence in particular not a torsor under the trivial group. \end{remark}

\subsection{Deep Bruhat-Tits group schemes are limits of parahoric group schemes} Let us briefly recall some facts from Bruhat-Tits theory \cite{Bruhat1972, Bruhat1984}. In this section, let $k$ be a discretely valued henselian field with ring of integers $\mathcal{O}$. We denote by $\mathfrak{m} \subseteq \mathcal{O}$ its maximal ideal and by $\mathbb{F} = \mathcal{O}/\mathfrak{m}$ its residue field. Moreover, we denote by $k^{\mr{ur}}$ the maximal unramified extension inside some fixed algebraic closure of $k$, by $\mathcal{O}^{\mr{ur}}$ its ring of integers and by $\breve{k}$ (respectively $\breve{\mathcal{O}}$) the completion of $k^{\mr{ur}}$ (respectively $\mathcal{O}^{\mr{ur}}$). Let $G$ be a (connected) reductive group over $k$ such that $G$ is quasi-split over $k^{\mr{ur}}$. Note that $G$ is automatically quasi-split over $k^{\mr{ur}}$ when the cohomological dimension of $k^{\mr{ur}}$ is at most 1 by a theorem of Steinberg.
This includes in particular the case $k = \mathbb{F}\dbr{\varpi}$ for a finite field $\mathbb{F}$ that we are interested in later. Let us fix a maximal $k$-split torus $S \subseteq G$. We denote by $\mathcal{B}(G, k)$ the corresponding (reduced) Bruhat-Tits building and by $\mathcal{A} = \mathcal{A}(G,S, k) \subseteq \mathcal{B}(G, k)$ the apartment corresponding to $S$. Let $\Phi = \Phi(G,S)$ be the set of roots of $G$ with respect to $S$ and let $\Phi^+ \subseteq \Phi$ be a system of positive roots. We denote by $\Phi^- = - \Phi^+$ and by $\Phi_{\mr{nd}}^{+} \subseteq \Phi^+$ (respectively by $\Phi_{\mr{nd}}^{-} \subseteq \Phi^-$) the subset of non-divisible positive (respectively negative) roots. We consider the space of affine functionals $\mathcal{A}^\ast$ on $\mathcal{A}$ and the set of affine roots $\Psi = \Psi(G,S) \subseteq \mathcal{A}^\ast$ of $G$ with respect to $S$. For an affine functional $\psi \in \mathcal{A}^\ast$, let $\mathcal{H}_{\psi} \subseteq \mathcal{A}$ be the vanishing hyperplane of $\psi$ and let $\mathcal{H}_{\psi \geq 0} = \{x \in \mathcal{A}\colon \psi(x) \geq 0 \}$ (respectively $\mathcal{H}_{\psi \leq 0} = \{x \in \mathcal{A}\colon \psi(x) \leq 0 \}$) be the corresponding half-spaces. Moreover, we denote by $\dot{\psi}$ the gradient of $\psi$. By construction, for $\psi \in \Psi$ we have $\dot{\psi} \in \Phi$.

For a non-empty bounded subset $\Omega \subseteq \mathcal{A}$, we consider the corresponding (local) Bruhat-Tits group scheme\footnote{In the literature it is often additionally required that $\Omega$ is contained in a facet. We explicitly allow $\Omega$ to not be contained in the closure of a facet (this will be the interesting case later) and call $\mathcal{G}_\Omega$ with $\Omega$ contained in the closure of a facet a \emph{parahoric} (Bruhat-Tits) group scheme.} $\mathcal{G}_\Omega$ constructed in \cite[§ 5.1.9 (resp. § 4.6.26)]{Bruhat1984}. It is the unique smooth affine $\mathcal{O}$-group scheme with generic fibre $G$, connected special fibre and $\mathcal{G}_\Omega(\mathcal{O}^{\mr{ur}}) = G(k^{\mr{ur}})^0_\Omega$, where $G(k^{\mr{ur}})^0_\Omega$ is the ``connected'' (pointwise) stabiliser of $\Omega$. For a bounded subset $\Omega \subseteq \mathcal{A}$, we denote by $\mr{cl}(\Omega) = \bigcap_{\psi \in \Psi, \Omega \subseteq \mathcal{H}_{\psi \geq 0}} \mathcal{H}_{\psi \geq 0}$ the intersection of all half-spaces containing $\Omega$. The corresponding Bruhat-Tits group scheme does not change when replacing $\Omega$ by $\mr{cl}(\Omega)$, compare \cite[§ 4.6.27]{Bruhat1984}. Hence, we may always assume $\Omega = \mr{cl}(\Omega)$ in the following. By construction, $\mr{cl}(\Omega)$ is convex. For two bounded subsets $\Omega, \Omega'$ of $\mathcal{A}(G, S, k)$ with $\Omega = \mr{cl}(\Omega)$, we write $\Omega' \prec \Omega$ if $\Omega'$ is contained in $\Omega$. In this case, we obtain an induced homomorphism of $\mathcal{O}$-group schemes $\rho_{\Omega', \Omega} \colon \mathcal{G}_{\Omega} \to \mathcal{G}_{\Omega'}$ whose restriction to the generic fibre is given by the identity on $G$. Below, we often take limits over the partially ordered set $\{\mathfrak{f} \prec \Omega\}$ of facets contained in $\Omega$, ordered by inclusion. This poset is connected as $\Omega = \mr{cl}(\Omega)$ is convex and hence connected.
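For orientation, here is a minimal worked example; it is our illustration, phrased in the conventions of the $\GL_2$ example further below, where the apartment $\mathcal{A}(\GL_2, T) \cong \mathbb{R}$ has its affine root hyperplanes at the integers. Take $\Omega = [0,2] = \mr{cl}(\Omega)$. The facets $\mathfrak{f} \prec \Omega$ are the vertices $\{0\}, \{1\}, \{2\}$ and the open edges $(0,1)$ and $(1,2)$. In a compatible family $(g_\mathfrak{f})_{\mathfrak{f} \prec \Omega}$, the values at the vertices are determined by the values at the adjacent edges via the transition maps, so the limit collapses to a fibre product over the middle vertex,
\begin{equation*}
\varprojlim_{\mathfrak{f} \prec [0,2]} \mathcal{G}_\mathfrak{f} \cong \mathcal{G}_{[0,1]} \times_{\mathcal{G}_{\{1\}}} \mathcal{G}_{[1,2]},
\end{equation*}
using that $\mathcal{G}_\mathfrak{f}$ only depends on $\mr{cl}(\mathfrak{f})$. Theorem \ref{thmBTGS} below then identifies this fibre product with $\mathcal{G}_{[0,2]}$.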
For a root $a \in \Phi$ and $\Omega$ as above, we denote by $U_{a, \Omega} \subseteq G(k)$ the corresponding root subgroup and by $\mathcal{U}_{a, \Omega}$ its integral model, which is a smooth affine $\mathcal{O}$-group scheme. As for the $\mathcal{G}_\Omega$, the group scheme $\mathcal{U}_{a, \Omega}$ only depends on $\mr{cl}(\Omega)$, and for $\Omega' \prec \Omega$ there is a natural map $\mathcal{U}_{a, \Omega} \to \mathcal{U}_{a, \Omega'}$. These integral models are used to construct the \emph{big open cell} $$ \prod_{a \in \Phi_{\mr{nd}}^{-}} \mathcal{U}_{a, \Omega} \times \mathcal{Z} \times \prod_{a \in \Phi_{\mr{nd}}^{+}} \mathcal{U}_{a, \Omega} \hookrightarrow \mathcal{G}_\Omega,$$ which is an open immersion by \cite[§ 4.6.2]{Bruhat1984}, where $\mathcal{Z}$ is an integral model of the centraliser $Z$ of $S$. Note that when $G$ is quasi-split, $Z = T$ is a maximal torus in $G$.

The main result of this section is the following theorem.

\begin{thm} \label{thmBTGS} Let $G$ be a reductive group over $k$ such that $G$ is quasi-split over the maximal unramified extension $k^{\mr{ur}}$ of $k$. Let $\Omega \subseteq \mathcal{A}(G, S, k)$ be a bounded subset with $\Omega = \mr{cl}(\Omega)$. The map $$\rho = \varprojlim_{\mathfrak{f} \prec \Omega} \rho_{\mathfrak{f}, \Omega} \colon \mathcal{G}_{\Omega} \to \varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$$ induced by the $\rho_{\mathfrak{f}, \Omega}$ for facets $\mathfrak{f} \prec \Omega$ is an isomorphism of $\mathcal{O}$-group schemes. \end{thm}

We need some results on the deformation theory of torsors under (limits of) Bruhat-Tits group schemes. For us, torsors are always taken with respect to the fppf-topology. However, torsors for smooth affine group schemes are always representable by a (necessarily smooth affine) scheme and thus have sections \'etale locally. The deformation theory of such sections of torsors can be controlled by (the dual of) the invariant differentials $\omega_{\mathcal{G}/\mathcal{O}} = e^{\ast} \Omega_{\mathcal{G}/\mathcal{O}}$, where $e \colon \Spec(\mathcal{O}) \to \mathcal{G}$ is the identity section, due to the following result.

\begin{lem} \label{lemDefoLifts} Let $\mathcal{G}$ be a smooth affine $\mathcal{O}$-group scheme and let $R$ be an $\mathcal{O}$-algebra with an ideal $I$ of square zero, $I^2 = 0$. We denote by $\overline{R} = R/I$ and by $r \colon \mathcal{O} \to \overline{R}$ the induced map. Let $\mathcal{E}$ be a $\mathcal{G}$-torsor over $R$. Let $\gamma \in \mathcal{E}(\overline R)$ be a section of $\mathcal{E}$. Then the set of all lifts of $\gamma$ to $R$ is a torsor under $\mathfrak{g}_{(R,I)} = r^\ast \omega_{\mathcal{G}/\mathcal{O}}^\vee \otimes_{\overline R} I$. \end{lem} \begin{proof} This is essentially a special case of \cite[Expos\'e III, Corollaire 5.2]{SGA1}. Recall that $\mathcal{E}$ is representable by a smooth affine $\mathcal{O}$-scheme. In particular, there exist lifts of $\gamma$ to $R$, so $\mathcal{E}$ is trivial. So let us fix a lift $\gamma'$ of $\gamma$ and a trivialisation of $\mathcal{E}$ that identifies the section $\gamma'$ with the unit in $\mathcal{G}_R$.
By \cite[Expos\'e III, Corollaire 5.2]{SGA1}, the set of lifts of $\gamma$ is then a torsor under $$ \gamma^{\ast} \Omega^\vee_{\mathcal{E}/R} \otimes_{\overline R} I \cong r^\ast e^{\ast} \Omega^\vee_{\mathcal{G}/\mathcal{O}} \otimes_{\overline R} I = r^\ast \omega_{\mathcal{G}/\mathcal{O}}^\vee \otimes_{\overline R} I.$$ \end{proof}

We use the following lemma to relate the deformation theory problem to the combinatorics of the Bruhat-Tits building.

\begin{lem}[{compare \cite[§ 4.6.41]{Bruhat1984}}] \label{lemLieAlgHyp} Assume that $G$ is quasi-split. Let $\psi \in \mathcal{A}^\ast$ be an affine functional with gradient $a = \dot{\psi}$. Let $\Omega \subseteq \mathcal{A}$ be a bounded subset such that $\Omega \subseteq \mathcal{H}_{\psi \leq 0}$. Let moreover $\Omega' \prec \Omega$ such that $\Omega' \subseteq \mathcal{H}_{\psi}$. Then the natural map $\omega^{\vee}_{\mathcal{U}_{a, \Omega}/\mathcal{O}} \to \omega^\vee_{\mathcal{U}_{a, \Omega'}/\mathcal{O}}$ is an isomorphism. \end{lem} \begin{proof} By assumption, we have $U_{a, \Omega} = U_{a, \Omega'}$ as subgroups of $G(k)$. Hence, the induced maps on integral models and consequently on invariant differentials are isomorphisms. \end{proof}

Note that in the situation of the lemma, when $\Omega \cap \mathcal{H}_{\psi < 0} \neq \emptyset$, the induced map on Lie algebras for the negative root groups $$\mr{Lie}(\mathcal{U}_{-a, \Omega, \mathbb{F}}) = \omega^{\vee}_{\mathcal{U}_{-a, \Omega}/\mathcal{O}} \otimes_\mathcal{O} \mathbb{F} \to \mr{Lie}(\mathcal{U}_{-a, \Omega', \mathbb{F}}) = \omega^{\vee}_{\mathcal{U}_{-a, \Omega'}/\mathcal{O}} \otimes_\mathcal{O} \mathbb{F}$$ in the special fibre of $\Spec(\mathcal{O})$ typically (in particular when $a$ is non-divisible and $2a$ is not a root) is the zero map by {\cite[§ 4.6.41]{Bruhat1984}}.

Let $(\mathcal{E}_\mathfrak{f})_{\mathfrak{f} \prec \Omega} \in \varprojlim_{\mathfrak{f} \prec \Omega} \mathfrak{B}(\mathcal{G}_\mathfrak{f})(R)$ be a compatible system of $\mathcal{G}_\mathfrak{f}$-torsors. We use the previous two lemmas to construct compatible lifts of sections of $\mathcal{E}_\Omega = \varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{E}_\mathfrak{f}$. This serves two purposes: on the one hand, we use this result for the trivial torsors $\mathcal{E}_\mathfrak{f} = \mathcal{G}_\mathfrak{f}$ to show that we can lift sections from the special fibre of $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$ in the proof of Theorem \ref{thmBTGS}, and on the other hand, we use it in the proof of Proposition \ref{propCritTors}, which gives a criterion for when $\mathcal{E}_\Omega$ is actually a $\mathcal{G}_\Omega$-torsor. For a subset $\Omega' \prec \Omega$ we denote by $\mathcal{E}_{\Omega'} = \varprojlim_{\mathfrak{f} \prec \Omega'} \mathcal{E}_\mathfrak{f}$.

\begin{lem} \label{lemTorsDefo} Assume that $G$ is quasi-split. Let $R$ be an $\mathcal{O}$-algebra with an ideal $I$ of square zero. We denote by $\overline{R} = R/I$. \begin{enumerate} \item \label{lemTorsDefoPair} Let $\Omega_1, \Omega_2 \prec \Omega$ be two bounded subsets such that $\Omega_1 = \mr{cl}(\Omega_1)$, $\Omega_2 = \mr{cl}(\Omega_2)$ and such that $\Omega_1 \cap \Omega_2$ is contained in an affine root hyperplane $\mathcal{H}_\psi$ for some $\psi \in \Psi$. Assume moreover that $\Omega_1 \cup \Omega_2$ is convex and that $\Omega_1 \subseteq \mathcal{H}_{\psi \geq 0}$ and $\Omega_2 \subseteq \mathcal{H}_{\psi \leq 0}$ lie in different half-spaces.
Assume that the assertion of Theorem \ref{thmBTGS} holds for $\mathcal{G}_{\Omega_1}$ and $\mathcal{G}_{\Omega_2}$. Assume that there is a section $\gamma \in \mathcal{E}_{\Omega_1 \cup \Omega_2}(\overline{R})$ and deformations $\gamma_{\Omega_1} \in \mathcal{E}_{\Omega_1}(R)$ and $\gamma_{\Omega_2} \in \mathcal{E}_{\Omega_2}(R)$ of the images of $\gamma$ in $\mathcal{E}_{\Omega_1}$ and $\mathcal{E}_{\Omega_2}$, respectively. Then there exists a deformation $\gamma_{\Omega_1 \cup \Omega_2} \in \mathcal{E}_{\Omega_1 \cup \Omega_2}(R)$ of $\gamma$. \item \label{lemTorsDefoSlice} Let now $\Omega' = \mr{cl}(\Omega') \prec \Omega$, let $a \in \Phi^+_{\mr{nd}}$ and let $\psi_1 < \psi_2 < \ldots < \psi_m$ be the affine roots with gradient $\dot{\psi_i} = a$ such that $\Omega' \cap \mathcal{H}_{\psi_i} \neq \emptyset$. We denote by $\Omega_i = \overline{(\Omega' \cap \mathcal{H}_{\psi_i \leq 0} )\setminus \Omega_{i-1}}$ for $i = 1, \ldots, m$ with $\Omega_0 = \emptyset$ and $\Omega_{m+1} = \Omega' \setminus (\Omega_m\setminus \mathcal{H}_{\psi_m})$. Assume that the assertion of Theorem \ref{thmBTGS} holds for $\mathcal{G}_{\Omega_i}$ for $i = 1, \ldots, m+1$. Assume that there is a section $\gamma \in \mathcal{E}_{\Omega'}(\overline{R})$ and deformations $\gamma_{\Omega_i} \in \mathcal{E}_{\Omega_i}(R)$ of the image of $\gamma$ in $\mathcal{E}_{\Omega_i}$ for all $1 \leq i \leq m+1$. Then there exists a deformation $\gamma_{\Omega'} \in \mathcal{E}_{\Omega'}(R)$ of $\gamma$. \end{enumerate} \end{lem}

We will prove Theorem \ref{thmBTGS} by induction on $\Omega$ and use this lemma in the inductive step. Hence, it is legitimate to assume the validity of Theorem \ref{thmBTGS} for subsets of $\Omega$ here. Once we have established Theorem \ref{thmBTGS} in full (in particular for the application of the lemma in the proof of Proposition \ref{propCritTors}), these conditions are of course vacuous. Before we give the proof of the lemma, let us briefly discuss an example that nicely illustrates the main idea.

\begin{ex} We consider $G = \GL_2$ over $k = {\F_q} \dbr{\varpi}$ with $T$ the split diagonal maximal torus. Then $X^\ast(T) \cong \mathbb{Z}^2$ with roots $\Phi = \{\pm (1,-1) \} \subseteq X^\ast(T)$, where the choice of the positive root $a = (1,-1)$ corresponds to the choice of the Borel subgroup given by upper triangular matrices. Let us consider the interval $\Omega = [0,2] \subseteq \mathbb{R} \cong \mathcal{A}(\GL_2, T)$ with $\Omega_1 = [0,1]$ and $\Omega_2 = [1,2]$. \begin{center} \begin{tikzpicture} \draw (-2,0)-- (6,0); \foreach \x in {0,1,2} { \draw (2*\x,0.125) -- (2*\x,-0.125) node[below] {\x}; } \draw (1,0.25) node {$\Omega_1$}; \draw (3,0.25) node {$\Omega_2$}; \end{tikzpicture} \end{center} Let us consider the case $R = {\F_q} \dsq{\varpi}/(\varpi^2)$ and $\overline{R} = R/(\varpi) = {\F_q}$. In this case, for a smooth affine group scheme $\mathcal{G}$ over $\mathcal{O}$, the module $\mathfrak{g} = (\omega^\vee_{\mathcal{G}/\mathcal{O}} \otimes_{\mathcal{O}} {\F_q}) \otimes_{\F_q} (\varpi)/(\varpi^2)$ is given by the tangent space of $\mathcal{G}$ at the identity section in its special fibre. Let us assume we are in the situation of Lemma \ref{lemTorsDefo} (\ref{lemTorsDefoPair}). We are given a section $\gamma \in \mathcal{E}_{[0,2]}({\F_q})$ and sections $\gamma_{[0,1]} \in \mathcal{E}_{[0,1]}({\F_q}\dsq{\varpi}/(\varpi^2))$ and $\gamma_{[1,2]} \in \mathcal{E}_{[1,2]}({\F_q}\dsq{\varpi}/(\varpi^2))$ that lift $\gamma$.
Recall that by Lemma \ref{lemDefoLifts}, for $\Omega' \prec \Omega$ the set of all lifts of $\gamma$ in $\mathcal{E}_{\Omega'}$ is a torsor under $\mathfrak{g}_{\Omega'}$. Hence, after fixing a trivialisation of $\mathcal{E}_{\{1\}}$, the images of the lifts $\gamma_{[0,1]}, \gamma_{[1,2]}$ in $\mathcal{E}_{\{1\}}$ induce points in $\mathfrak{g}_{\{1\}}$. Thus, the question becomes whether the intersection of the orbits $\mathfrak{g}_{[0,1]}. \gamma_{[0,1]} \cap \mathfrak{g}_{[1,2]}. \gamma_{[1,2]}$ in $\mathfrak{g}_{\{1\}}$ is non-empty, where $\mathfrak{g}_{[0,1]}$ acts via the natural map $\mathfrak{g}_{[0,1]} \to \mathfrak{g}_{\{1\}}$, and similarly for $\mathfrak{g}_{[1,2]}$.

For $\Omega' \prec \Omega$, we decompose the Lie algebra $\mathfrak{g}_{\Omega'}$ into its root spaces $\mathfrak{g}_{\Omega'} = \mathfrak{u}_{a, \Omega'} \oplus \mathfrak{h} \oplus \mathfrak{u}_{-a, \Omega'}$, where $a =(1,-1)$ is the positive root. In this situation, the root spaces $\mathfrak{u}_{\pm a, \Omega'}$ are one-dimensional while the Cartan $\mathfrak{h}$ is two-dimensional. Then the induced map $\mathfrak{g}_{[0,1]} \to \mathfrak{g}_{\{1\}}$ is the identity on the Cartan algebra $\mathfrak{h}$ as well as on the positive root space $\mathfrak{u}_{a, [0,1]} = \mathfrak{u}_{a, \{1\}}$ by Lemma \ref{lemLieAlgHyp}, while it is the zero map $\mathfrak{u}_{-a, [0,1]} \to \mathfrak{u}_{-a, \{1\}}$ on the negative root space. By a similar argument, for the second facet $\Omega_2 = [1,2]$ the map $\mathfrak{g}_{[1,2]} \to \mathfrak{g}_{\{1\}}$ is the identity on the Cartan and the negative root space, while it is the zero map on the positive root space. Decomposing the lifts $\gamma_{[0,1]}$ and $\gamma_{[1,2]}$ into their components, this shows that by the action of $\mathfrak{g}_{[0,1]}$ we can guarantee that the $\mathfrak{u}_{a}$-components agree, and by the action of $\mathfrak{g}_{[1,2]}$ we can match the $\mathfrak{u}_{-a}$-components. This shows the non-emptiness of the intersection of the orbits and hence the existence of a compatible set of lifts.

In order to guarantee the correct mapping properties in the other directions, the convexity assumption is necessary. This can be seen in the following example in the $\GL_3$-case: \begin{center} \begin{tikzpicture} \draw (-0.5,0)-- (4.5,0); \draw (-0.5,1.7321)-- (4.5,1.7321); \draw (-0.1443, -0.25) -- (1.1443, 1.9821); \draw (1.8557, -0.25) -- (3.1443, 1.9821); \draw (2.1443, -0.25) -- (0.8557, 1.9821); \draw (4.1443, -0.25) -- (2.8557, 1.9821); \draw (1,0.6) node {$\Omega_1$}; \draw (3,0.6) node {$\Omega_2$}; \draw (5,0) node {$\mathcal{H}_\psi$}; \draw [-stealth](6,0.25) -- (6,1.25); \draw (6.25,0.75) node {$a$}; \draw [gray] (2,1.1) node {$\Omega_3$}; \end{tikzpicture} \end{center} We are given two chambers $\Omega_1$ and $\Omega_2$ in the standard apartment in the Bruhat-Tits building of $\GL_3$ that intersect in a single vertex. In particular, $\Omega_1 \cup \Omega_2$ is not convex. The base of each of the two triangles lies in the affine root hyperplane $\mathcal{H}_\psi$ with $\dot{\psi} = a$, while both $\Omega_1$ and $\Omega_2$ are contained in the positive half-space $\mathcal{H}_{\psi \geq 0}$. But this means that both $\mathfrak{u}_{a, \Omega_1} \to \mathfrak{u}_{a, \Omega_1 \cap \Omega_2}$ and $\mathfrak{u}_{a, \Omega_2} \to \mathfrak{u}_{a, \Omega_1 \cap \Omega_2}$ are the zero maps. Hence, it is in general not possible to lift sections in this situation. The difference to the convex case is the following.
We have $\mr{cl}(\Omega_1 \cup \Omega_2) = \Omega_1 \cup \Omega_2 \cup \Omega_3$, where $\Omega_3$ is the triangle ``between'' $\Omega_1$ and $\Omega_2$. For a pair of $\mathcal{G}_{\Omega_1}$- (respectively $\mathcal{G}_{\Omega_2}$-) torsors $\mathcal{E}_{\Omega_1}$ and $\mathcal{E}_{\Omega_2}$, the existence of a compatible $\mathcal{G}_{\Omega_3}$-torsor $\mathcal{E}_{\Omega_3}$ (such a torsor does not exist in general!) can be interpreted as a compatibility condition on the $a$-root spaces, as it will guarantee by Lemma \ref{lemTorsDefo} (\ref{lemTorsDefoPair}) that for two given lifts $\gamma_{\Omega_1} \in \mathcal{E}_{\Omega_1}({\F_q} \dsq{\varpi}/(\varpi^2))$ and $\gamma_{\Omega_2} \in \mathcal{E}_{\Omega_2}({\F_q} \dsq{\varpi}/(\varpi^2))$ their images in $\mathfrak{u}_{a, \Omega_1 \cap \Omega_2}$ agree. \end{ex}

\begin{proof}[Proof of Lemma \ref{lemTorsDefo}] \begin{enumerate} \item Given some $\Omega' \prec \Omega$ (for which Theorem \ref{thmBTGS} holds), the set of all lifts of $\gamma \in \mathcal{E}_{\Omega'}(\overline{R})$ to $\mathcal{E}_{\Omega'}(R)$ is a torsor under $\mathfrak{g}_{\Omega'} = \mathfrak{g}_{\Omega', (R,I)}$ (if such lifts exist at all) by Lemma \ref{lemDefoLifts}. Using the decomposition of the big open cell in $\mathcal{G}_{\Omega'}$, we can decompose $\mathfrak{g}_{\Omega'}$ into the root spaces as $$\mathfrak{g}_{\Omega'} = \bigoplus_{a \in \Phi^-_{\mr{nd}}} \mathfrak{u}_{a, \Omega'} \oplus \mathfrak{h} \oplus \bigoplus_{a \in \Phi^+_{\mr{nd}}} \mathfrak{u}_{a, \Omega'}.$$ After fixing a trivialisation of $\mathcal{E}_{\Omega_1 \cap \Omega_2}$, the images of the lifts $\gamma_{\Omega_1}$ and $\gamma_{\Omega_2}$ in $\mathcal{E}_{\Omega_1 \cap \Omega_2}$ thus define elements of $\mathfrak{g}_{\Omega_1 \cap \Omega_2}$. The question whether there exists a lift $\gamma_{\Omega_1 \cup \Omega_2} \in \mathcal{E}_{\Omega_1 \cup \Omega_2}(R)$ of $\gamma$, or in other words, a compatible pair of lifts $\gamma'_{\Omega_1}$ and $\gamma'_{\Omega_2}$ in $\mathcal{E}_{\Omega_1}$ (respectively in $\mathcal{E}_{\Omega_2}$), is thus the question whether the orbits in $\mathfrak{g}_{\Omega_1 \cap \Omega_2}$ have a non-empty intersection $$\mathfrak{g}_{\Omega_1}. \gamma_{\Omega_1} \cap \mathfrak{g}_{\Omega_2}. \gamma_{\Omega_2} \neq \emptyset.$$ We treat this question componentwise with respect to the decomposition into root spaces. On the torus part this is clear, as the maps $\mathfrak{g}_{\Omega_i} \to \mathfrak{g}_{\Omega_1 \cap \Omega_2}$ restrict to isomorphisms on $\mathfrak{h}$ by construction for $i = 1,2$. It thus suffices to show that for every root $a \in \Phi_{\mr{nd}}$ at least one of the maps $\mathfrak{g}_{\Omega_i} \to \mathfrak{g}_{\Omega_1 \cap \Omega_2}$ restricts to an isomorphism $\mathfrak{u}_{a,\Omega_i} \to \mathfrak{u}_{a,\Omega_1 \cap \Omega_2}$. For $a = \pm \dot{\psi}$ this directly follows from Lemma \ref{lemLieAlgHyp}. Let now $a \in \Phi \setminus \{ \pm \dot \psi\}$, and let $\psi' \in \mathcal{A}^\ast$ be the minimal affine functional with gradient $\dot{ \psi}' = a$ such that $\Omega_1 \cap \Omega_2 \subseteq \mathcal{H}_{\psi' \leq 0}$. By the convexity assumption, at least one of the $\Omega_i$, $i = 1,2$, is contained in $\mathcal{H}_{\psi' \leq 0}$. But then $\mathfrak{u}_{a, \Omega_i} \xrightarrow{\cong} \mathfrak{u}_{a, \Omega_1 \cap \Omega_2}$ is an isomorphism by Lemma \ref{lemLieAlgHyp}.
\item For each $i = 1, \ldots, m$, the pair of subsets $\bigcup_{1 \leq j \leq i} \Omega_j, \Omega_{i+1}$ of $\Omega'$ satisfies the assumptions of (\ref{lemTorsDefoPair}) by construction (in particular, their intersection is contained in $\mathcal{H}_{\psi_i}$). By induction on $i$, we construct lifts of $\gamma$ for all $\mathcal{E}_{\bigcup_{1 \leq j \leq i} \Omega_j}$ using (\ref{lemTorsDefoPair}), and hence in particular for $\mathcal{E}_{\Omega'}$. \end{enumerate} \end{proof} \begin{proof}[Proof of Theorem \ref{thmBTGS}] We first remark that the limit $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$ is a finite limit of affine $\mathcal{O}$-group schemes of finite type, hence is again an affine $\mathcal{O}$-group scheme of finite type. Moreover, as all transition maps are identities on the generic fibres, the generic fibre of the limit is isomorphic to $G$ and $\rho$ induces an isomorphism on the generic fibre. By \'etale descent it suffices to work over $\breve k$, the completion of the maximal unramified extension of $k$. We may thus assume that $k = \breve k$, in which case $G$ is quasi-split by assumption. Moreover, we have $$(\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f})(\mathcal{O}) = \varprojlim_{\mathfrak{f} \prec \Omega} (\mathcal{G}_\mathfrak{f}(\mathcal{O})) = \bigcap_{\mathfrak{f} \prec \Omega} G(k)^0_\mathfrak{f} = G(k)^0_\Omega.$$ It remains to show that $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$ is smooth, as smoothness implies by \cite[§ 1.7.3]{Bruhat1984} that $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$ is \'etoff\'e in the sense of \cite[D\'efinition 1.7.1]{Bruhat1984}. But this means that $\rho$ is an isomorphism by the previous observations. We use induction on $\Omega$ to show that $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$ is smooth. Let us fix some enumeration of the set of non-divisible positive roots $\Phi^+_{\mr{nd}} = \{ a_1, \ldots, a_m \}$. We inductively cut down $\Omega$ into slices by hyperplanes with gradient $a_i$ and in each step use Lemma \ref{lemTorsDefo} (\ref{lemTorsDefoSlice}) to construct lifts of the section in the special fibre. For the start of the induction, note that the theorem clearly holds when $\Omega$ is (the closure of) a facet. More concretely, in the last step of the induction we write $\Omega = \bigcup_{1 \leq i \leq m+1} \Omega_i$ using the notation from Lemma \ref{lemTorsDefo} (\ref{lemTorsDefoSlice}) with $a = a_1$. By induction, we assume that the theorem holds for each $\Omega_i$ (each of which is in turn cut into slices by hyperplanes with gradient $a_2$). We check that $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$ is formally smooth. Let $R$ be an $\mathcal{O}$-algebra and let $I \subseteq R$ be an ideal of square zero. We write $\overline{R} = R/I$. Let us fix a section $\overline g \in \varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_{\mathfrak{f}}(\overline{R})$. Using the inductive hypothesis, there exist sections $g_i \in \varprojlim_{\mathfrak{f} \prec \Omega_i} \mathcal{G}_{\mathfrak{f}}(R) = \mathcal{G}_{\Omega_i}(R)$ lifting the restriction of $\overline{g}$. By Lemma \ref{lemTorsDefo} (\ref{lemTorsDefoSlice}), we then obtain a lift $g \in \varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_{\mathfrak{f}}(R)$. As $ \varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_{\mathfrak{f}}$ is an affine scheme of finite presentation over $\mathcal{O}$, this shows that $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$ is smooth.
This finishes the proof of the theorem. \end{proof} \begin{cor} The Bruhat-Tits group scheme $\mathcal{G}_\Omega$ is isomorphic to the schematic closure of the image of the diagonal embedding of the generic fibre $$G \xrightarrow{\Delta} \prod_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}.$$ \end{cor} \begin{proof} The inclusion $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f} \to \prod_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$ is a closed immersion since all $\mathcal{G}_\mathfrak{f}$ are affine and thus separated over $\mathcal{O}$. Since $\mathcal{G}_\Omega$ is in particular flat over $\mathcal{O}$, it is the closure of its generic fibre. The claim then follows from Theorem \ref{thmBTGS}. \end{proof} \begin{remark} Let $\Omega\subseteq \mathcal{B}(G, k)$ be a bounded subset that is not necessarily contained in a single apartment. Theorem \ref{thmBTGS} suggests a way to associate an $\mathcal{O}$-group scheme to $\Omega$, namely to define $$ \mathcal{G}_{\Omega} = \varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}.$$ It is however neither clear whether $\mathcal{G}_\Omega$ is smooth nor whether it has a connected special fibre. \end{remark} \subsection{Torsors for deep Bruhat-Tits group schemes} We consider torsors for the Bruhat-Tits group schemes above. Recall that a limit of $\mathcal{G}_\mathfrak{f}$-torsors for facets $\mathfrak{f} \prec \Omega$ is a $\mathcal{G}_\Omega$-pseudo torsor by Lemma \ref{lemPTor}, but may fail to be a $\mathcal{G}_\Omega$-torsor in general. We give a criterion for when a limit of $\mathcal{G}_\mathfrak{f}$-torsors is already a $\mathcal{G}_\Omega$-torsor. \begin{prop} \label{propCritTors} Let $\Omega \subseteq \mathcal{A}$ be a bounded subset with $\Omega = \mr{cl}(\Omega)$ and let $R$ be an $\mathcal{O}$-algebra. Let $(\mathcal{E}_\mathfrak{f})_{\mathfrak{f} \prec \Omega} \in \varprojlim_{\mathfrak{f} \prec \Omega} \mathfrak{B}(\mathcal{G}_{\mathfrak{f}})(R)$. Then $$ \mathcal{E}_\Omega = \varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{E}_\mathfrak{f} $$ is a smooth affine $R$-scheme. In particular, $\mathcal{E}_{\Omega}$ is a $\mathcal{G}_\Omega$-torsor if and only if $\mathcal{E}_{\Omega} \to \Spec(R)$ is surjective. \end{prop} \begin{proof} The second assertion follows from the first one using Lemma \ref{lemPTor}, Theorem \ref{thmBTGS} and \cite[Expos\'e XI, Proposition 4.2]{SGA1}. The first assertion is \'etale-local on $\Spec(R)$, so we may assume that $G$ is quasi-split. It suffices to show that $\mathcal{E}_\Omega \to \Spec(R)$ is formally smooth, as $\mathcal{E}_\Omega$ is clearly representable by an affine $R$-scheme of finite presentation. But this follows from Lemma \ref{lemTorsDefo} (\ref{lemTorsDefoSlice}) by induction on $\Omega$ as in the proof of Theorem \ref{thmBTGS}. \end{proof} The goal of this section is to show that the isomorphism of Bruhat-Tits group schemes of Theorem \ref{thmBTGS} induces an immersion on the level of the corresponding moduli stacks of $\mathcal{G}$-bundles on $X$. Therefore, let us now change perspective and consider (global) Bruhat-Tits group schemes in the following sense.
\begin{defn} \label{defnBTgrpschm} A smooth affine group scheme $\mathcal{G} \to X$ is called a \emph{(global) Bruhat-Tits group scheme} if it has geometrically connected fibres, its generic fibre $\mathcal{G}_K = G$ is a reductive group over $K$, and for all closed points $x$ of $X$ the pullback $\mathcal{G}_{\mathcal{O}_x} = \mathcal{G} \times_X \Spec(\mathcal{O}_x)$ is of the form $\mathcal{G}_\Omega$ for some bounded subset $\Omega$ contained in an apartment of the Bruhat-Tits building $\mathcal{B}(G_{K_x}, K_x)$. The group scheme $\mathcal{G}$ is called a \emph{parahoric (Bruhat-Tits) group scheme} if moreover all $\mathcal{G}_{\mathcal{O}_x}$ are parahoric group schemes. \end{defn} Let $G$ be a (connected) reductive group over the function field $K$ of $X$. Bruhat-Tits group schemes with generic fibre $G$ can be constructed as follows. \begin{const} \label{consBTgrpschm} \begin{enumerate} \item There exists a reductive model $G \to U$ of $G$ over some dense open subset $U \subseteq X$. For each of the finitely many points $x \in X \setminus U$ we choose a parahoric group scheme $\mathcal{G}^{(x)} \to \Spec(\mathcal{O}_x)$ with generic fibre $\mathcal{G}^{(x)}_{K_x} = G_{K_x}$. As $U \amalg \coprod_{x \in X \setminus U} \Spec(\mathcal{O}_x) \to X$ is an fpqc-cover, we can glue $G \to U$ with all $\mathcal{G}^{(x)}$ using fpqc-descent to obtain a smooth affine group scheme $\mathcal{G} \to X$, which is a parahoric group scheme by construction. \item \label{consBTgrpschmOmega} Let us now fix a parahoric model $\mathcal{G} \to X$ and a closed point $x_0$ of $X$. For a connected bounded subset $\Omega$ in an apartment of the Bruhat-Tits building of $G_{K_{x_0}}$ as above, we denote by $\mathcal{G}_\Omega \to \Spec(\mathcal{O}_{x_0})$ the corresponding (local) Bruhat-Tits group scheme. We glue $\mathcal{G}_{\Omega}$ with $\mathcal{G}$ along the identity over $K_{x_0}$ and denote the resulting smooth affine group scheme over $X$ by a slight abuse of notation again by $\mathcal{G}_{\Omega}$. Then $\mathcal{G}_\Omega$ is a Bruhat-Tits group scheme in the sense of the previous definition, and it is parahoric if and only if $\Omega$ is contained in the closure of a facet. The local homomorphisms $\rho_{\Omega', \Omega} \colon \mathcal{G}_\Omega \to \mathcal{G}_{\Omega'}$ over $\Spec(\mathcal{O}_{x_0})$ for $\Omega' \prec \Omega$ glue with the identity away from $x_0$ to morphisms of group schemes $\rho_{\Omega', \Omega} \colon \mathcal{G}_\Omega \to \mathcal{G}_{\Omega'}$ on $X$. \end{enumerate} \end{const} In particular, the isomorphism of Theorem \ref{thmBTGS} extends to an isomorphism $$ \mathcal{G}_\Omega \xrightarrow{\cong} \varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f}$$ of the corresponding global Bruhat-Tits group schemes. For any smooth affine group scheme $\mathcal{H}$ on $X$, we denote by $\Bun_\mathcal{H}$ the moduli stack of $\mathcal{H}$-bundles on $X$. By the functoriality of $\Bun$, the maps $\rho_{\mathfrak{f}, \Omega}$ induce maps $\rho_{\mathfrak{f}, \Omega, \ast} \colon \Bun_{\mathcal{G}_\Omega} \to \Bun_{\mathcal{G}_\mathfrak{f}}$ for all facets $\mathfrak{f} \prec \Omega$. \begin{thm} \label{thmBunGImm} Let $G$ be a reductive group over $K$, let $x_0$ be a closed point of $X$ and let $\Omega = \mr{cl}(\Omega)$ be a bounded subset of an apartment in the Bruhat-Tits building $\mathcal{B}(G_{K_{x_0}}, K_{x_0})$.
Let $\mathcal{G}_{\Omega} \to X$ be the corresponding Bruhat-Tits group scheme from Construction \ref{consBTgrpschm} (\ref{consBTgrpschmOmega}). The map $$ \rho_{\Omega, \ast} := \varprojlim_{\mathfrak{f} \prec \Omega} \rho_{\mathfrak{f}, \Omega, \ast} \colon \Bun_{\mathcal{G}_\Omega} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Bun_{\mathcal{G}_\mathfrak{f}} $$ induced by the $\rho_{\mathfrak{f}, \Omega, \ast}$ for facets $\mathfrak{f} \prec \Omega$ is schematic and a quasi-compact open immersion. \end{thm} \begin{proof} By \cite[Proposition 3.19]{Breutmann2019}, the maps $\rho_{\mathfrak{f}, \Omega, \ast}$ are schematic and quasi-projective for all facets $\mathfrak{f} \prec\Omega$. By Lemma \ref{lemLim}, the map $ \rho_{\Omega, \ast} $ is schematic, separated and of finite type. Moreover, all $\Bun_{\mathcal{G}_\mathfrak{f}}$ are locally of finite type over ${\F_q}$ by \cite[Proposition 1]{Heinloth2010}. By Lemma \ref{lemPTor}, the map $\rho_{\Omega, \ast}$ is a monomorphism. We show that $\rho_{\Omega, \ast}$ is formally \'etale. Let $R$ be a local artinian ${\F_q}$-algebra with maximal ideal $I \subseteq R$ of square zero. Let moreover $(\mathcal{E}_\mathfrak{f})_{\mathfrak{f} \prec \Omega} \in \varprojlim_{\mathfrak{f} \prec \Omega} \Bun_{\mathcal{G}_\mathfrak{f}} (R)$ be such that the restriction of $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{E}_{\mathfrak{f}}$ to $X_{\overline{R}}$ is a $\mathcal{G}_\Omega$-torsor, where $\overline{R} = R/I$. We claim that $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{E}_{\mathfrak{f}}$ is already a $\mathcal{G}_\Omega$-torsor over $X_R$. The map $\widehat{(X_R)_{x_0}} \amalg (X \setminus \{x_0\})_R \to X_R$ is an fpqc-cover, where $\widehat{(X_R)_{x_0}} = \Spec(\mathcal{O}_{x_0} \widehat{\otimes}_{\F_q} R)$, with $\mathcal{O}_{x_0} \widehat{\otimes}_{\F_q} R$ being the underlying ${\F_q}$-algebra of the completion of $X_R$ along $x_0$. As all maps $\mathcal{G}_\Omega \to \mathcal{G}_\mathfrak{f}$ for $\mathfrak{f} \prec \Omega$ are the identity away from $x_0$, all transition maps $\mathcal{E}_{\mathfrak{f}', R} \times^{\mathcal{G}_{\mathfrak{f}'}} \mathcal{G}_\mathfrak{f} \to \mathcal{E}_{\mathfrak{f}, R}$ are isomorphisms away from $x_0$. Using Proposition \ref{propCritTors}, it remains to check that the pullback of $\varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{E}_{\mathfrak{f}}$ to $\widehat{(X_R)_{x_0}}$ is surjective; but this holds as the underlying topological spaces of $\widehat{(X_R)_{x_0}}$ and $\widehat{(X_{\overline{R}})_{x_0}}$ agree. Hence, $\rho_{\Omega, \ast}$ is formally \'etale; being a flat monomorphism of finite presentation, it is thus a quasi-compact open immersion. \end{proof} \section{Bounds for shtukas} Global shtukas for $\GL_n$ were first introduced in \cite{Drinfeld1987a} and were generalised to split reductive groups by \cite{Varshavsky2004} and to flat affine group schemes of finite type by \cite{Rad2019a}. In this section, we recall the definition and basic properties of moduli spaces of (iterated, global) shtukas. We use global bounds following \cite{Rad2017} and introduce a new notion of local bounds in the style of \cite{Rad2015} compatible with global bounds. For Bruhat-Tits group schemes we construct (global and local) bounds given by cocharacters that recover the bounds from \cite{Lafforgue2018} in the constant split reductive case. Let $\mathcal{G} \to X$ be a smooth affine group scheme. Let $I$ be a finite set and let $I = I_1 \cup \ldots \cup I_m$ be a partition of $I$. We write $I_\bullet = (I_1, \ldots, I_m)$.
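To fix the notation with a small example of our own (not taken from the references above): for $I = \{1,2\}$ there are exactly three such ordered partitions, $$ I_\bullet = (\{1,2\}), \qquad I_\bullet = (\{1\},\{2\}), \qquad I_\bullet = (\{2\},\{1\}),$$ which, in the definition below, correspond to modifying a $\mathcal{G}$-bundle at both legs simultaneously or at one leg after the other in either order; the partition $(\{1\},\{2\})$ is, for instance, the natural choice for Drinfeld shtukas.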
\begin{defn}[{\cite[Definition 3.3]{Rad2019a}}] We denote by $\Sht_{\mathcal{G},X^I, I_\bullet}$ the stack fibered in groupoids over ${\F_q}$ whose $S$-valued points are given by tuples $$ ((x_i)_{i \in I}, (\mathcal{E}_j)_{j = 0, \ldots, m}, (\varphi_j)_{j = 1, \ldots, m}, \theta),$$ where \begin{itemize} \item $x_i \in X(S)$ are points on $X$ called the \emph{characteristic sections} (or \emph{legs}) for $i \in I$, \item $\mathcal{E}_j \in \Bun_\mathcal{G}(S)$ are $\mathcal{G}$-bundles on $X_S$ for $0 \leq j \leq m$, \item $\varphi_j \colon \mathcal{E}_{j - 1}|_{X_S \setminus \bigcup_{i \in I_j} \Gamma_{x_i}} \xrightarrow{ \cong } \mathcal{E}_{j}|_{X_S \setminus \bigcup_{i \in I_j} \Gamma_{x_i}}$ are isomorphisms of $\mathcal{G}$-bundles away from the graphs $\Gamma_{x_i} \subseteq X_S$ of the sections $x_i$, and \item $\theta \colon \sigma^\ast \mathcal{E}_m \xrightarrow{\cong} \mathcal{E}_0$ is an isomorphism of $\mathcal{G}$-bundles on $X_S$. \end{itemize} \end{defn} The projection to the characteristic sections defines a map $\Sht_{\mathcal{G},X^I, I_\bullet} \to X^I$. By \cite[Theorem 3.15]{Rad2019a}, $\Sht_{\mathcal{G},X^I, I_\bullet}$ is an ind-Deligne-Mumford stack that is separated and locally of ind-finite type over $X^I$. Let $I_\bullet'$ be a second partition of $I$ that is finer than $I_\bullet$. The forgetful map $$ \Sht_{\mathcal{G},X^I, I'_\bullet} \to \Sht_{\mathcal{G},X^I, I_\bullet}$$ is an isomorphism over $$U = \{\underline{x} = (x_i)_{i \in I} \in X^I \colon x_{i_1} \neq x_{i_2} \text{ for all } i_1 \neq i_2 \in I_j \text{ and } 1 \leq j \leq m \} \subseteq X^I$$ by the argument in \cite[Lemma A.8 a)]{Varshavsky2004}. When $I_\bullet = (I)$ is the trivial partition, we write $\Sht_{\mathcal{G},X^I} = \Sht_{\mathcal{G},X^I, (I)}$. Let us fix pairwise distinct closed points $y_i \in X$ for all $i \in I$. We denote by $$ \Sht^{\underline{y}}_{\mathcal{G},X^I} = \Sht_{\mathcal{G},X^I, I_\bullet} \times_{X^I} \Spf(\mathcal{O}_{\underline{y}}) = \Sht_{\mathcal{G},X^I} \times_{X^I} \Spf(\mathcal{O}_{\underline{y}})$$ the restriction of the moduli space of shtukas to the formal neighbourhood $\Spf(\mathcal{O}_{\underline{y}})$ of $\underline{y}$. By the previous observation, this stack does not depend on the choice of the partition $I_\bullet$ of $I$. \begin{ass} \label{ass} In the following, we consider moduli spaces of shtukas in essentially three different situations. \begin{enumerate} \item $\mathcal{G} \to X$ is a smooth affine group scheme. (The \emph{smooth affine case}) \label{assSa} \item $G$ is a reductive group over $K$ and $\mathcal{G} \to X$ is a smooth affine group scheme with generic fibre $G$. (The \emph{generically reductive case}) \label{assRed} \item $G$ is a reductive group over $K$ and $\mathcal{G}_\Omega \to X$ is a Bruhat-Tits group scheme for a bounded subset $\Omega = \mr{cl}(\Omega)$ of an apartment in the Bruhat-Tits building for $G_{K_{x_0}}$ for some fixed closed point $x_0$ of $X$ as in Construction \ref{consBTgrpschm}. (The \emph{Bruhat-Tits case}) \label{assBT} \end{enumerate} \end{ass} \subsection{Global bounds} We recall the notion of (global) bounds for shtukas following \cite[Definition 3.1.3]{Rad2017}. In the case where $\mathcal{G}$ is a Bruhat-Tits group scheme, we construct boundedness conditions given by cocharacters in the style of \cite{Lafforgue2018}. We need the following iterated version of Beilinson-Drinfeld affine Grassmannians, first introduced by \cite{Beilinson1996} in the case of constant group schemes.
\begin{defn} We denote by $\Gr_{\mathcal{G},X^I, I_\bullet}$ the functor on ${\F_q}$-schemes whose $S$-valued points are given by tuples $$ ((x_i)_{i \in I}, (\mathcal{E}_j)_{j = 0, \ldots, m}, (\varphi_j)_{j = 1, \ldots, m}, \varepsilon),$$ where \begin{itemize} \item $x_i \in X(S)$ are points on $X$ called the \emph{characteristic sections} (or \emph{legs}) for $i \in I$, \item $\mathcal{E}_j \in \Bun_\mathcal{G}(S)$ are $\mathcal{G}$-bundles on $X_S$ for $0 \leq j \leq m$, \item $\varphi_j \colon \mathcal{E}_{j - 1}|_{X_S \setminus \bigcup_{i \in I_j} \Gamma_{x_i}} \xrightarrow{ \cong } \mathcal{E}_{j}|_{X_S \setminus \bigcup_{i \in I_j} \Gamma_{x_i}}$ are isomorphisms of $\mathcal{G}$-bundles, and \item $\varepsilon \colon \mathcal{E}_m \xrightarrow{\cong} \mathcal{G} \times_X X_S$ is a trivialisation of $\mathcal{E}_m$. \end{itemize} \end{defn} Then $\Gr_{\mathcal{G},X^I,I_\bullet}$ is representable by an ind-scheme over $X^I$ by \cite{Heinloth2010}. Let $R$ be an ${\F_q}$-algebra. For a relative effective Cartier divisor $D \subseteq X_{R}$, the formal completion of $X_R$ along $D$ is a formal affine scheme. We denote by $\hat \mathcal{O}_D$ the underlying $R$-algebra and by $\hat{D} = \Spec(\hat \mathcal{O}_D)$ the corresponding affine scheme. Then $D$ is a closed subscheme of $\hat D$ and we set $\hat{D}^0 = \hat{D} \setminus D$. We apply this construction in particular when $D = \Gamma_{\underline{x}} = \bigcup_{i \in I} \Gamma_{x_i}$ is the union of graphs of points $\underline{x} = (x_i)_{i \in I} \in X^I(R)$. In this case we write $\hat{\Gamma}_{\underline{x}} = \hat{D}$ and $\hat{\Gamma}_{\underline{x}}^0 = \hat D^0$. \begin{remark} \label{remBLdescent} Using Beauville-Laszlo descent \cite{Beauville1995} (compare also \cite[Remark 2.3.7 and Theorem 2.12.1]{Beilinson1996} and \cite{Laszlo1997}), the affine Grassmannian has the following alternative description, compare \cite[Construction 1.8]{Lafforgue2018}. Let $R$ be an ${\F_q}$-algebra. Then an $R$-point of $\Gr_{\mathcal{G}, X^I, I_\bullet}$ is given by a tuple $$((x_i)_{i \in I}, (\mathcal{E}_j)_{j = 0, \ldots, m}, (\varphi_j)_{j = 1, \ldots, m}, \varepsilon),$$ where the $\mathcal{E}_j$ are now $\mathcal{G}$-torsors on $\hat{\Gamma}_{\underline{x}}$ and the $\varphi_j$ are isomorphisms over $\hat{\Gamma}_{\underline{x}} \setminus \Gamma_{\underline{x}_j}$, where $\underline{x}_j = (x_i)_{i \in I_j}$. \end{remark} Let $U \subseteq X^I$ be the complement of all diagonals. Using this description of the affine Grassmannian, we find that $\Gr_{\mathcal{G},X^I, I_\bullet}|_U = \left(\prod_{i \in I} \Gr_{\mathcal{G}, X}\right)|_U$. We also make use of a global version of the (positive) loop group. \begin{defn} The global loop group $\mathcal{L}_{X^I} \mathcal{G}$ is the functor on the category of ${\F_q}$-algebras $$ \mathcal{L}_{X^I} \mathcal{G} \colon R \mapsto \left\{ (\underline{x}, g)\colon \underline{x} \in X^I(R), g \in \mathcal{G}({\hat{\Gamma}_{\underline{x}}}^0) \right\}.$$ The positive global loop group $\mathcal{L}_{X^I}^+ \mathcal{G}$ is the functor on the category of ${\F_q}$-algebras $$ \mathcal{L}^+_{X^I} \mathcal{G} \colon R \mapsto \left\{ (\underline{x}, g)\colon \underline{x} \in X^I(R), g \in \mathcal{G}({\hat{\Gamma}_{\underline{x}}}) \right\}.$$ \end{defn} By \cite[Proposition 2]{Heinloth2010}, $\mathcal{L}_{X^I} \mathcal{G}$ is representable by an ind-group scheme over $X^I$ and $\mathcal{L}_{X^I}^+\mathcal{G}$ is representable by an affine group scheme over $X^I$ with geometrically connected fibres.
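For orientation, we spell out these functors in the simplest case; the following identification is standard and is stated here under the simplifying assumptions $X = \mathbb{A}^1_{{\F_q}} = \Spec({\F_q}[t])$ and $I = \{1\}$. A leg $x \in \mathbb{A}^1(R)$ corresponds to an element $\zeta \in R$, its graph $\Gamma_x \subseteq X_R$ is cut out by $t - \zeta$, and the completion of $X_R$ along $\Gamma_x$ has underlying $R$-algebra $\hat{\mathcal{O}}_{\Gamma_x} = R\dsq{t-\zeta}$, so that $$ \mathcal{L}^+_{X} \mathcal{G}(R) = \mathcal{G}\left(R\dsq{t-\zeta}\right) \qquad \text{and} \qquad \mathcal{L}_{X} \mathcal{G}(R) = \mathcal{G}\left(R\dbr{t-\zeta}\right), \qquad \text{where } R\dbr{t-\zeta} = R\dsq{t-\zeta}[\tfrac{1}{t-\zeta}].$$ This matches the local loop groups and the local Beilinson-Drinfeld affine Grassmannian used to define local bounds below.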
The projection $\mathcal{L}_{X^I} \mathcal{G} \to \Gr_{\mathcal{G}, X^I}$ induces an isomorphism of fpqc-sheaves $\mathcal{L}_{X^I} \mathcal{G} / \mathcal{L}_{X^I}^+ \mathcal{G} \to \Gr_{\mathcal{G}, X^I}$. There is a natural left $\mathcal{L}^+_{X^I}\mathcal{G}$-action on $\Gr_{\mathcal{G}, X^I, I_\bullet}$ given by changing the trivialisation $\varepsilon$. \begin{remark} It is well-known that there is a formally smooth map $$\Sht_{\mathcal{G},X^I, I_\bullet} \to [\mathcal{L}^+_{X^I} \mathcal{G} \backslash \Gr_{\mathcal{G}, X^I, I_\bullet}],$$ compare for example \cite[Theorem 3.2.1]{Rad2017} and \cite[Proposition 2.8]{Lafforgue2018}. In this sense, the affine Grassmannian is a local model for the moduli stack of shtukas. \end{remark} We define (global) bounds for shtukas as certain subschemes of the affine Grassmannian following \cite[Definition 3.1.3]{Rad2017}. \begin{defn} \label{defnGlobBounds} We fix an algebraic closure $K^{\alg}$ of the function field $K= K(X)$ of $X$. For a finite extension $K'$ of $K$ in $K^{\alg}$ we denote by $\widetilde{X}_{K'}$ the normalisation of $X$ in $K'$. It is a smooth projective curve over ${\F_q}$ together with a finite morphism $ \widetilde{X}_{K'} \to X$. \begin{enumerate} \item Let $K_1$ and $K_2$ be two finite extensions of $K$. Two locally closed subschemes $Z_1\subseteq \Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{K_1}^I$ and $Z_2\subseteq \Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{K_2}^I$ are called \emph{equivalent} if there is a finite extension $K_1.K_2\subseteq K'\subseteq K^{\alg}$ of the composite $K_1.K_2$ of $K_1$ and $K_2$, such that $Z_1 \times_{\widetilde X_{K_1}^I} \widetilde X_{K'}^I=Z_2 \times_{\widetilde X_{K_2}^I} \widetilde X_{K'}^I$ in $\Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{K'}^I$. Let $\mathcal{Z}$ be an equivalence class of locally closed subschemes $Z_{K'} \subseteq \Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{K'}^I$ and let $G_\mathcal{Z}:= \{g \in \Aut(K^{\alg}/K): g^\ast(\mathcal{Z})=\mathcal{Z}\}$. The \emph{field of definition} $K_\mathcal{Z}$ of $\mathcal{Z}$ is the intersection of the fixed field of $G_\mathcal{Z}$ in $K^{\alg}$ with all the finite extensions of $K$ over which a representative of $\mathcal{Z}$ exists. \item A \emph{bound} is an equivalence class $\mathcal{Z}$ of quasi-compact locally closed subschemes $Z_{K'}\subset \Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{K'}^I$ that admits a representative $Z_{K_\mathcal{Z}}$ over its field of definition $K_\mathcal{Z}$ that is moreover stable under the left $\mathcal{L}_{X^I}^+\mathcal{G} \times_{X^I} \widetilde X_{ K_\mathcal{Z} }^I$-action on $\Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{ K_\mathcal{Z}}^I$. The field of definition $K_\mathcal{Z}$ of $\mathcal{Z}$ is called the \emph{reflex field} of $\mathcal{Z}$, and the corresponding curve $X_\mathcal{Z}:=\widetilde X_{K_\mathcal{Z}}$ is called the \emph{reflex curve} of $\mathcal{Z}$. By abuse of notation we usually identify $\mathcal{Z}$ with its representative over the reflex curve. Such a representative is unique by Lemma \ref{lemBoundEq} below.
\item Let $\mathcal{Z}$ be a bound in the above sense and let $$ \underline{\mathcal{E}} = ((x_i)_{i \in I}, (\mathcal{E}_j)_{j = 0, \ldots, m}, (\varphi_j)_{j = 1, \ldots, m}, \theta) \in (\Sht_{\mathcal{G},X^I, I_\bullet} \times_{X^I} X_\mathcal{Z}^I)(S).$$ By \cite[Lemma 3.4]{Haines2020a}, there exists an \'etale cover $T \to S$ such that $\mathcal{E}_m|_{\hat{\Gamma}_{\underline x}}$ becomes trivial after pullback along $\hat{\Gamma}_{\underline x_T}\to\hat{\Gamma}_{\underline x}$. Fixing a trivialisation $\alpha \colon \mathcal{E}_m|_{\hat\Gamma_{\underline x_T}} \xrightarrow{\cong} \mathcal{G}|_{\hat \Gamma_{\underline x_T}}$ defines a point in $(\Gr_{\mathcal{G},X^I, I_\bullet} \times_{X^I} X_\mathcal{Z}^I)(T)$, compare Remark \ref{remBLdescent}. We say that $\underline{\mathcal{E}}$ is \emph{bounded by} $\mathcal{Z}$ if this point factors through $\mathcal{Z}$. As $\mathcal{Z}$ is invariant under the left $\mathcal{L}_{X^I}^+\mathcal{G}$-action, the definition is independent of the choice of the trivialisation $\alpha$. \end{enumerate} We denote by $\Sht^\mathcal{Z}_{\mathcal{G},X^I, I_\bullet} \to X_\mathcal{Z}^I$ the moduli stack of $\mathcal{G}$-shtukas bounded by $\mathcal{Z}$ in this sense. As in the unbounded case, for a tuple $(y_i)_{i \in I}$ of pairwise distinct closed points of $X_\mathcal{Z}$ we write $$\Sht^{\mathcal{Z}, \underline{y}}_{\mathcal{G},X^I} = \Sht^{\mathcal{Z}}_{\mathcal{G}, X^I} \times_{X_\mathcal{Z}^I} \Spf(\mathcal{O}_{\underline{y}}).$$ \end{defn} Let us recall some properties of this stack of bounded global $\mathcal{G}$-shtukas. \begin{remark} \label{remRepSht} By \cite[Theorem 3.1.6]{Rad2017}, the moduli stack of bounded $\mathcal{G}$-shtukas $\Sht^\mathcal{Z}_{\mathcal{G},X^I, I_\bullet}$ is a Deligne-Mumford stack locally of finite type and separated over $X_\mathcal{Z}^I$, and a locally closed substack of $\Sht_{\mathcal{G},X^I, I_\bullet} \times_{X^I} X_\mathcal{Z}^I$. The diagonal of $\Sht^\mathcal{Z}_{\mathcal{G}, X^I, I_\bullet}$ is schematic, finite and unramified by \cite[Corollary 3.16]{Rad2019a}. \end{remark} \begin{remark} \label{remLocMod} There is also a version of the local model theorem for the moduli space of bounded shtukas. Let $\mathcal{Z}$ be a bound. By \cite[Theorem 3.2.1]{Rad2017}, its representative $\mathcal{Z}$ inside the affine Grassmannian $$\Gr_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} X_\mathcal{Z}^I$$ is an \'etale local model for $\Sht^\mathcal{Z}_{\mathcal{G}, X^I, I_\bullet}$. Moreover, the $\mathcal{L}^+_{X^I} \mathcal{G}$-action on $\mathcal{Z}$ factors through a finite-dimensional quotient $\mathcal{H}$ of $\mathcal{L}^+_{X^I} \mathcal{G}$ and we have a smooth map $\Sht^\mathcal{Z}_{\mathcal{G}, X^I, I_\bullet} \to [ \mathcal{H} \backslash \mathcal{Z}]$, compare \cite[Proposition 2.8]{Lafforgue2018}. \end{remark} The following lemma is a global analogue of \cite[Remark 4.6]{Rad2015} and shows, in particular, that the representative of a bound $\mathcal{Z}$ over the reflex field is unique. \begin{lem} \label{lemBoundEq} Let $Z_{1, K_1}$ and $Z_{2, K_2}$ be two closed subschemes of $\Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{K_1}^I$ and $\Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{K_2}^I$, respectively. Then $Z_{1,K_1}$ and $Z_{2, K_2}$ are equivalent if and only if $Z_{1, K'} = Z_{2, K'}$ for all finite extensions $K'$ of $K$ containing both $K_1$ and $K_2$. \end{lem} \begin{proof} Let $Z_{1, K_1}$ and $Z_{2, K_2}$ be equivalent and let $K''$ be a common (finite) extension of $K_1$ and $K_2$ such that $Z_{1, K''} = Z_{2, K''}$.
Let moreover $K'/K$ be another finite extension of $K$ containing both $K_1$ and $K_2$. Whether $Z_{1, K'} = Z_{2, K'}$ in $\Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{K'}^I$ can be checked fpqc-locally, and it holds after the fpqc base change along $\widetilde{X}^I_{K'.K''} \to \widetilde{X}^I_{K'}$ by assumption. Note that the flatness of this map follows from the flatness of the normalisation map $\widetilde{X}_{K'.K''} \to \widetilde{X}_{K'}$, which holds since any finite dominant morphism of smooth curves is flat. Hence, $Z_{1, K'} = Z_{2, K'}$. The other direction is clear. \end{proof} \begin{remark} \label{remDefBoundGl} Our definition has a couple of subtle differences compared with \cite[Definition 3.1.3]{Rad2017}. We do not require our bounds to be closed but only locally closed subschemes of the affine Grassmannian. This allows us to also consider, for example, Schubert cells as bounds. On the other hand, we require the bounds to have a representative over the reflex field. We do not know if such a representative always exists in this generality, as noted in \cite[Remark 3.1.4]{Rad2017}. However, this condition is certainly satisfied for bounds given by Schubert varieties, in which case the reflex field of the bound is the reflex field of the corresponding cocharacter. Moreover, we use the existence of a representative over the reflex field for example in the proof of Lemma \ref{lemChangeBound}. By Lemma \ref{lemBoundEq}, a point $\underline{\mathcal{E}} \in (\Sht_{\mathcal{G},X^I, I_\bullet} \times_{X^I} X_\mathcal{Z}^I)(S)$ is bounded by $\mathcal{Z}$ if and only if, after the choice of some trivialisation of $\underline{\mathcal{E}}$ over some fppf-cover $T \to S$, the induced point $T \times_{X^I_\mathcal{Z}} \widetilde{X}^I_{K'} \to \Gr_{\mathcal{G},X^I, I_\bullet}\times_{X^I} \widetilde X_{K'}^I$ factors through $Z_{K'}$ for some (equivalently, for every) representative $Z_{K'}$ of $\mathcal{Z}$. In particular, the notion of bounded shtukas above agrees in this aspect with the definition of \cite{Rad2017}. \end{remark} In our setting, the notion of a shtuka datum (respectively a map of shtuka data) in the sense of \cite[Definitions 3.1 and 3.9]{Breutmann2019} restricts to the following. \begin{defn} \label{defnShtDat} A \emph{shtuka datum} $(\mathcal{G}, \mathcal{Z})$ is a pair of a smooth affine group scheme $\mathcal{G} \to X$ and a bound $\mathcal{Z}$ in $\Gr_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} X_\mathcal{Z}^I$, where $X_\mathcal{Z}$ is the reflex curve of $\mathcal{Z}$. A \emph{map of shtuka data} $f \colon (\mathcal{G}, \mathcal{Z}) \to (\mathcal{G}', \mathcal{Z}')$ is a map of group schemes $f \colon \mathcal{G} \to \mathcal{G}'$ such that the map $$ \mathcal{Z} \times_{X^I_\mathcal{Z}} X^I_{\mathcal{Z}.\mathcal{Z}'} \hookrightarrow \Gr_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} X^I_{\mathcal{Z}.\mathcal{Z}'} \xrightarrow{f_\ast} \Gr_{\mathcal{G}',X^I, I_\bullet} \times_{X^I} X^I_{\mathcal{Z}.\mathcal{Z}'}$$ factors through $\mathcal{Z}' \times_{X^I_{\mathcal{Z}'}} X^I_{\mathcal{Z}.\mathcal{Z}'}$, where $X_{\mathcal{Z}.\mathcal{Z}'} = \widetilde{X}_{K_\mathcal{Z}.K_{\mathcal{Z}'}}$ is the normalisation of $X$ in the compositum $K_\mathcal{Z}.K_{\mathcal{Z}'}$ of the reflex fields of $\mathcal{Z}$ and $\mathcal{Z}'$.
\end{defn} A map of shtuka data $f \colon (\mathcal{G}, \mathcal{Z}) \to (\mathcal{G}', \mathcal{Z}')$ induces a map on the corresponding moduli stacks of shtukas $$ f_\ast \colon \Sht^\mathcal{Z}_{\mathcal{G},X^I, I_\bullet} \times_{X^I_\mathcal{Z}} X^I_{\mathcal{Z}.\mathcal{Z}'} \to \Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet} \times_{X^I_{\mathcal{Z}'}} X^I_{\mathcal{Z}.\mathcal{Z}'}$$ by the following lemma, which is an analogue of \cite[Lemma 3.15]{Breutmann2019}. \begin{lem} \label{lemChangeBound} Let $f \colon (\mathcal{G}, \mathcal{Z}) \to (\mathcal{G}', \mathcal{Z}')$ be a map of shtuka data. Let $$\underline{\mathcal{E}} \in (\Sht^\mathcal{Z}_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} X^I_{\mathcal{Z}. \mathcal{Z}'})(S).$$ Then $f_\ast \underline{\mathcal{E}} \in (\Sht_{\mathcal{G}', X^I, I_\bullet} \times_{X^I} X^I_{\mathcal{Z}. \mathcal{Z}'})(S)$ is bounded by $\mathcal{Z}'$. \end{lem} \begin{proof} Let $\underline{\mathcal{E}} = ((x_i)_{i \in I}, (\mathcal{E}_j)_{j=0, \ldots,m}, (\varphi_j)_{j = 1, \ldots, m}, \theta) \in (\Sht^{\mathcal{Z}}_{\mathcal{G},X^I, I_\bullet} \times_{X^I} X_{\mathcal{Z}.\mathcal{Z}'}^I)(S)$. Let $T \to S$ be an fppf-cover that trivialises $\mathcal{E}_m|_{\hat{\Gamma}_{\underline{x}}}$ and choose a trivialisation $\alpha \colon \mathcal{E}_m|_{\hat{\Gamma}_{\underline{x}_T}} \xrightarrow{\cong} \mathcal{G}|_{\hat{\Gamma}_{\underline{x}_T}}$. Then $(\underline{\mathcal{E}}_T, \alpha)$ defines a $T$-valued point of $\Gr_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} X_{\mathcal{Z}.\mathcal{Z}'}^I$. As $\underline{\mathcal{E}}$ is bounded by $\mathcal{Z}$, the induced point $T \times_{X_\mathcal{Z}^I} {X_{\mathcal{Z}.\mathcal{Z}'}^I} \to \Gr_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} {X_{\mathcal{Z}.\mathcal{Z}'}^I} $ factors through $\mathcal{Z} \times_{X_{\mathcal{Z}}^I} {X_{\mathcal{Z}.\mathcal{Z}'}^I}$. Then the map $$ T \hookrightarrow T \times_{X_{\mathcal{Z}}^I} X_{\mathcal{Z}.\mathcal{Z}'}^I \to \Gr_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} X_{\mathcal{Z}.\mathcal{Z}'}^I$$ factors through $\mathcal{Z} \times_{X_{\mathcal{Z}}^I} X_{\mathcal{Z}.\mathcal{Z}'}^I$, hence its image under $f_\ast$ lies in $\mathcal{Z}' \times_{X_{\mathcal{Z}'}^I} X_{\mathcal{Z}.\mathcal{Z}'}^I$ by assumption. Thus, the map $ T \times_{X_{\mathcal{Z}'}^I} X_{\mathcal{Z}.\mathcal{Z}'}^I \to \Gr_{\mathcal{G}', X^I, I_\bullet} \times_{X^I} X_{\mathcal{Z}.\mathcal{Z}'}^I$ factors through $\mathcal{Z}' \times_{X_{\mathcal{Z}'}^I} X_{\mathcal{Z}.\mathcal{Z}'}^I $, too. \end{proof} Note that we used the existence of a representative of the bounds over their respective reflex fields. We do not know how to prove the lemma without this assumption. \begin{const}[Bounds from cocharacters in the generically reductive case] \label{constBound} Let us now construct bounds given by cocharacters in the generically reductive case (compare Assumption \ref{ass} (\ref{assRed})). Let $G$ be a reductive group over $K$ and let $\mu$ be a conjugacy class of geometric cocharacters of $G$ with reflex field $K_\mu$. Let $K'/K$ be a finite separable extension that splits $G$. We denote by $\Gr_{G_{K'}}^{\leq \mu} \subseteq \Gr_{G_{K'}} = \Gr_{G} \times_K K'$ the Schubert variety inside the (classical) affine Grassmannian for $G_{K'}$. The Schubert variety is already defined over the reflex field of $\mu$ and hence descends to a closed subscheme $\Gr_{G}^{\leq \mu} \subseteq \Gr_{G} \times_K K_\mu$. Let now $\mathcal{G} \to X$ be a smooth affine group scheme with generic fibre $\mathcal{G}_K = G$.
By \cite[Section 0.2]{Richarz2021a}, the generic fibre of the Beilinson-Drinfeld Grassmannian for $\mathcal{G}$ can be identified (non-canonically) with the affine Grassmannian for $G$, $\Gr_{\mathcal{G}, X} \times_X \Spec(K) \cong \Gr_G$. We use this observation to define $\Gr^{\leq \mu}_{\mathcal{G}, X}$ as the scheme-theoretic image $$ \Gr^{\leq \mu}_{\mathcal{G}, X} = \mr{image} \left(\Gr_{G}^{\leq \mu} \hookrightarrow \Gr_{\mathcal{G}, X} \times_X X_\mu \right),$$ where we denote by $X_\mu = \widetilde{X}_{K_\mu}$ the reflex curve of $\mu$. Note that this definition is independent of the choice of the identification of the generic fibre. Let $\underline{\mu} = (\mu_i)_{i \in I}$ be a tuple of conjugacy classes of cocharacters $\mu_i$ of $G$. We denote by $K_{\underline{\mu}}$ the compositum of all reflex fields of the $\mu_i$ and set $X_{\underline{\mu}} = \widetilde{X}_{K_{\underline{\mu}}}$. We denote by $\Gr^{\leq \underline{\mu}}_{\mathcal{G}, X^I, I_\bullet} \subseteq \Gr_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} X_{\underline{\mu}}^I$ the Zariski closure of the preimage of $\prod_{i \in I} \left( \Gr_{\mathcal{G}, X}^{\leq \mu_i} \times_{X_{\mu_i}} X_{\underline{\mu}} \right)$ under the isomorphism $\Gr_{\mathcal{G}, X^I, I_\bullet}|_{U} \xrightarrow{\cong} \left(\prod_{i \in I} \Gr_{\mathcal{G}, X} \right)|_{U}$, where $U \subseteq X^I$ is the complement of all diagonals in $X^I$. By construction, the equivalence class of $\Gr^{\leq \underline{\mu}}_{\mathcal{G}, X^I, I_\bullet}$ defines a bound for $\mathcal{G}$ with reflex curve $X_{\underline{\mu}}$, and $\Gr^{\leq \underline{\mu}}_{\mathcal{G}, X^I, I_\bullet}$ is a representative of this bound over $X_{\underline{\mu}}$. We say that a global $\mathcal{G}$-shtuka is \emph{bounded by $\underline{\mu}$} if it is bounded by $\Gr^{\leq \underline{\mu}}_{\mathcal{G}, X^I, I_\bullet}$ and denote by $\Sht^{ \leq \underline{\mu}}_{\mathcal{G}, X^I, I_\bullet} \subseteq \Sht_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} X_{\underline{\mu}}^I$ the corresponding moduli stack of global $\mathcal{G}$-shtukas bounded by $\Gr^{\leq \underline{\mu}}_{\mathcal{G}, X^I, I_\bullet}$. \end{const} \begin{lem} \label{lemCocharShtData} Let $G$ be a reductive group and let $f \colon \mathcal{G} \to \mathcal{G}'$ be a map of smooth affine group schemes with generic fibre $G$ such that $f$ is an isomorphism over a dense open subset $U$ of $X$. Let $\underline{\mu} = (\mu_i)_{i \in I}$ be a tuple of conjugacy classes of geometric cocharacters of $G$. Then $f$ induces a map $f_\ast \colon \Gr_{\mathcal{G}, X^I, I_\bullet}^{\leq \underline{\mu}} \to \Gr_{\mathcal{G}', X^I, I_\bullet}^{\leq \underline{\mu}}$ that is an isomorphism over $U^I$. \end{lem} \begin{proof} That $f_\ast$ is defined and an isomorphism over $U^I$ is clear. That $f_\ast$ extends to a map over $X^I$ follows from the construction of $ \Gr_{\mathcal{G}, X^I, I_\bullet}^{\leq \underline{\mu}}$ as a schematic closure. \end{proof} \begin{remark} Let us comment on how the bounds constructed above compare to other notions of bounds given by cocharacters in the literature. \begin{enumerate} \item When $\mathcal{G}$ is constant split reductive, our bounds agree with the bounds of \cite[D\'efinition 1.12]{Lafforgue2018}. This in particular includes the case of Drinfeld shtukas in \cite{Drinfeld1987a}, that is, shtukas for $\mathcal{G} = \GL_n$ and $\underline{\mu} = ((1,0,\ldots,0), (0,\ldots,0,-1))$.
In a similar fashion, the bounds used in the unitary case in \cite{Feng2021, Feng2021a} can be realised in this way. \item Already in the split case, there are several other ways to define bounds given by cocharacters, compare \cite{Varshavsky2004} and \cite{Rad2019a}. In general, these definitions do not agree, see for example \cite[Remarque 1.8]{Lafforgue2018}. The proof of our main Theorem \ref{thmLvlMapGeneral} does not rely on the concrete construction of the bounds, but only on the fact that the bounds constructed above satisfy Lemma \ref{lemCocharShtData} and the conditions of Theorem \ref{thmLvlMapGen}. \item In the non-split case, \cite[§ 12.3.1]{Lafforgue2018} constructs bounds for parahoric group schemes $\mathcal{G}$ that are given by representations of the $L$-group of $G$. Starting from a cocharacter $\mu$ of a split maximal torus $T$ of $G$ (defined over some finite extension of $K$), we can take the direct sum $W$ of all Galois translates of $\mu$. We can then (at least in the generic fibre) recover $\Gr_{\mathcal{G}, X^I, I_\bullet}^{\leq \underline{\mu}}$ as a component in the base change $\Gr_{\mathcal{G}, X^I, I_\bullet}^{W} \times_{X^I} X_{\underline{\mu}}^I$, where $\Gr_{\mathcal{G}, X^I, I_\bullet}^{W}$ denotes the bound given by $W$ from \cite{Lafforgue2018}. However, in order to study the geometry of the special fibre of our moduli spaces of shtukas it seems to be necessary to use the finer bounds. \end{enumerate} \end{remark} \subsection{Local bounds} We define similar bounds for local shtukas. We note that \cite{Rad2015} defines bounds for local shtukas, which are, however, in general incompatible with the bounds for global shtukas; compare Remark \ref{remLocalBoundsComp} below. We introduce a variant of their notion of local bounds that is naturally compatible with the global bounds defined above. We start by giving the definition of local shtukas. We continue to use the notation in the local setting from above. Let $k = \mathbb{F} \dbr{t}$ be a local field of characteristic $p$ with ring of integers $\mathcal{O} = \mathbb{F} \dsq{t}$ and finite residue field $\mathbb{F}$. Let $\mathcal{G} \to \mathcal{O}$ be a smooth affine group scheme. We denote by $L\mathcal{G}$ (respectively $L^+\mathcal{G}$) the (positive) loop group of $\mathcal{G}$ defined as functors on the category of $\mathbb{F}$-algebras as $$ R \mapsto L\mathcal{G}(R) = \mathcal{G}(R\dbr{t}) \qquad \text{and} \qquad R \mapsto L^+\mathcal{G}(R) = \mathcal{G}(R \dsq{t}),$$ respectively. The loop group $L\mathcal{G}$ is representable by an ind-group scheme of ind-finite type over $\mathbb{F}$, and the positive loop group is representable by an affine (infinite-dimensional) group scheme over $\mathbb{F}$. Recall that the (classical) affine Grassmannian $\Gr_\mathcal{G}$ for $\mathcal{G}$ is given by the fpqc-sheafification of the quotient $\Gr_\mathcal{G} = (L\mathcal{G}/L^+\mathcal{G})_{\mr{fpqc}}$. Moreover, using the inclusion $L^+ \mathcal{G} \to L \mathcal{G}$, there is a natural way to associate to an $L^+\mathcal{G}$-torsor $\mathcal{E}^+$ its corresponding $L\mathcal{G}$-torsor $\mathcal{E}$. For an $\mathbb{F} \dsq{t}$-algebra $R$ we denote by $\zeta \in R$ the image of $t$. We denote by $\mathcal{N}ilp_{\mathbb{F} \dsq{\zeta}}$ the category of $\mathbb{F} \dsq{t}$-algebras where $\zeta$ is nilpotent. \begin{defn} Let $R \in \mathcal{N}ilp_{\mathbb{F} \dsq{\zeta}}$.
A \emph{local $\mathcal{G}$-shtuka} over $R$ is a pair $\underline{\mathcal{E}} = (\mathcal{E}^+, \varphi)$ consisting of an $L^+\mathcal{G}$-torsor $\mathcal{E}^+$ over $R$ and an isomorphism of $L \mathcal{G}$-torsors $\varphi \colon \sigma^\ast \mathcal{E} \to \mathcal{E}$. \end{defn} Instead of defining bounds as certain subschemes in $\Gr_\mathcal{G} \widehat{\times} \Spf(\mathbb{F} \dsq{t})$ as in \cite{Rad2015}, we use the following local variant of the Beilinson-Drinfeld affine Grassmannian, following \cite{Richarz2021a}, to define local bounds. \begin{defn} The Beilinson-Drinfeld affine Grassmannian $\Gr_{\mathcal{G}, \mathcal{O}}$ for $\mathcal{G}$ is the functor on $\mathcal{O}$-algebras defined by $$ R \mapsto \left\{ (\mathcal{E}, \alpha) \colon \begin{array}{l} \mathcal{E} \text{ a $\mathcal{G}$-torsor on $\Spec(R\dsq{t- \zeta})$,} \\ \alpha \colon \mathcal{E}|_{R\dbr{t- \zeta}} \xrightarrow{\cong} \mathcal{G}_{R\dbr{t- \zeta}} \text{ a trivialisation over } R \dbr{t-\zeta} \end{array}\right\}.$$ \end{defn} By \cite{Richarz2021a}, $\Gr_{\mathcal{G}, \mathcal{O}}$ is representable by an ind-scheme over $\mathcal{O}$. Moreover, for a smooth affine group scheme $\mathcal{G} \to X$ and a closed point $x \in X$ we have a canonical isomorphism $\Gr_{\mathcal{G}_{\mathcal{O}_x}, \mathcal{O}_x} = \Gr_{\mathcal{G}, X} \times_X \Spec(\mathcal{O}_x)$. The affine Grassmannian $\Gr_{\mathcal{G}, \mathcal{O}}$ carries an action of the positive loop group $\mathcal{L}^+_{\mathcal{O}} \mathcal{G}$ defined as the functor on $\mathcal{O}$-algebras by $$ R \mapsto (\mathcal{L}^+_{\mathcal{O}} \mathcal{G} )(R) = \mathcal{G}(R\dsq{t- \zeta}).$$ Note that the special fibre of $\Gr_{\mathcal{G},\mathcal{O}}$ is the classical affine Grassmannian for $\mathcal{G}$, while its generic fibre is the $B_{\mr{dR}}$-affine Grassmannian for $G = \mathcal{G}_k$. In order to define bounded local shtukas, we need to construct points in (the formal completion of) $\Gr_{\mathcal{G}, \mathcal{O}}$ from a local shtuka. This is done as follows. Let $ \underline{\mathcal{E}} = (\mathcal{E}, \varphi)$ be a local shtuka over $R \in \mathcal{N}ilp_{\mathbb{F} \dsq{\zeta}}$. Let $R \to R'$ be an fppf-cover that trivialises $\mathcal{E}$. As $\zeta \in R$ is nilpotent by assumption, we have $R \dsq{t- \zeta} = R \dsq{t}$. Using the equivalence of $L^+\mathcal{G}$-torsors over $R$ with formal $\hat \mathcal{G} = \mathcal{G} \times_{\mathbb{F} \dsq{t}} \Spf(\mathbb{F} \dsq{t})$-torsors over $\Spf(R \dsq{t}) = \Spf(R \dsq{t - \zeta})$ from \cite[Proposition 2.4]{Rad2015}, a trivialisation $\alpha \colon \mathcal{E}_{R'} \xrightarrow{\cong} \hat \mathcal{G}_{\Spf(R' \dsq{t - \zeta})}$ defines an $R'$-valued point in $\widehat{\Gr}_{\mathcal{G}, \mathbb{F} \dsq{t}} := \Gr_{\mathcal{G}, \mathbb{F} \dsq{t}} \times_{\Spec(\mathbb{F}\dsq{t})} \Spf(\mathbb{F} \dsq{t})$ given by $(\sigma^\ast \mathcal{E}, \alpha \circ \varphi)$. Using this version of affine Grassmannians, we define local bounds in the style of \cite[Definitions 4.5 and 4.8]{Rad2015}. \begin{defn} \label{defnLocBd} Let us fix an algebraic closure $k^\mr{alg}$ of $k$. \begin{enumerate} \item Let $\mathcal{O} \subseteq \mathcal{O}_1, \mathcal{O}_2$ be two finite extensions of discrete valuation rings in $k^\mr{alg}$.
We call two locally closed subschemes $$Z_1 \subseteq \Gr_{\mathcal{G}, \mathcal{O}} \times_{\Spec(\mathcal{O})} \Spec(\mathcal{O}_1) \qquad \text{and} \qquad Z_2 \subseteq \Gr_{\mathcal{G}, \mathcal{O}} \times_{\Spec(\mathcal{O})} \Spec(\mathcal{O}_2)$$ \emph{equivalent} if there is a common finite extension $\mathcal{O}_1, \mathcal{O}_2 \subseteq \mathcal{O}'$ of discrete valuation rings in $k^\mr{alg}$ such that $Z_1 \times_{\Spec(\mathcal{O}_1)} \Spec(\mathcal{O}') = Z_2 \times_{\Spec(\mathcal{O}_2)} \Spec(\mathcal{O}') $ in $\Gr_{\mathcal{G}, \mathcal{O}} \times_{\Spec(\mathcal{O})} \Spec(\mathcal{O}')$. \item A \emph{local bound} is an equivalence class $\mathcal{Z}$ of quasi-compact locally closed subschemes of $\Gr_{\mathcal{G}, \mathcal{O}}$ such that all representatives are stable under the $\mathcal{L}^+_\mathcal{O} \mathcal{G}$-action and such that $\mathcal{Z}$ admits a representative over its field of definition (also called its \emph{reflex field}) as defined in \cite[Definition 4.5]{Rad2015}. \item Let $\mathcal{Z}$ be a bound in the above sense and let $ \underline{\mathcal{E}} = (\mathcal{E}, \varphi)$ be a local shtuka over $R \in \mathcal{N}ilp_{\mathbb{F} \dsq{\zeta}}$. Let $R \to R'$ be an fppf-cover that trivialises $\mathcal{E}$ and choose a trivialisation $\alpha$ of $\mathcal{E}$ over $R'$. We say that $\underline{\mathcal{E}}$ \emph{is bounded} by $\mathcal{Z}$ if for all representatives $Z_{\mathcal{O}'}$ of $\mathcal{Z}$ over $\mathcal{O}'$, the point in $\widehat{\Gr}_{\mathcal{G},\mathcal{O}}(R')$ induced by $\alpha$ factors through $Z_{\mathcal{O}'}$. As $Z_{\mathcal{O}'}$ is invariant under the left $\mathcal{L}_{\mathcal{O}'}^+\mathcal{G}$-action, the definition is independent of the choice of the trivialisation $\alpha$. \end{enumerate} \end{defn} \begin{remark} The discussion of \cite[Remarks 4.6, 4.7 and 4.9]{Rad2015} (respectively of their global analogues in Lemma \ref{lemBoundEq} and Remark \ref{remDefBoundGl}) also applies in this setting. In particular, the representative of a bound over its reflex field is unique, and it suffices to check boundedness of a local shtuka for a single representative. By a slight abuse of notation we may thus identify a bound with its representative over its reflex field. Note that it is not known whether an equivalence class of $\mathcal{L}^+_\mathcal{O} \mathcal{G}$-stable subschemes in $\Gr_{\mathcal{G}, \mathcal{O}}$ always admits a representative over its reflex field. \end{remark} As in the global case (compare Construction \ref{constBound}), we define bounds given by cocharacters when the generic fibre of $\mathcal{G}$ is reductive. When $\mathcal{G}$ is parahoric, these bounds coincide with the global Schubert varieties defined in \cite[Definition 2.3]{Richarz2016}. \begin{defn} Assume that the generic fibre $G = \mathcal{G}_k$ of $\mathcal{G}$ is reductive. Let $\mu$ be a conjugacy class of geometric cocharacters of $G$ with reflex field $k_\mu$. Let $\mathcal{O}_\mu$ be the ring of integers in $k_\mu$. Then $\Gr_{\mathcal{G}, \mathcal{O}}^{\leq \mu}$ is defined to be the scheme-theoretic closure of $\Gr_G^{\leq \mu}$ inside $\Gr_{\mathcal{G}, \mathcal{O}} \times_{\Spec(\mathcal{O})} \Spec(\mathcal{O}_\mu)$. \end{defn} Clearly, $\Gr_{\mathcal{G}, \mathcal{O}}^{\leq \mu}$ defines a local bound with reflex ring $\mathcal{O}_\mu$.
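To make this concrete, consider the following standard minuscule example (our illustration; conventions for the sign of $\mu$ vary in the literature): let $\mathcal{G} = \GL_{n, \mathcal{O}}$ and $\mu = (1, 0, \ldots, 0)$. For an $\mathcal{O}$-algebra $R$ with $\zeta$ the image of $t$, write $\Lambda_0 = R\dsq{t-\zeta}^n$. The $R$-points of $\Gr^{\leq \mu}_{\mathcal{G}, \mathcal{O}}$ are then the $R\dsq{t-\zeta}$-lattices $L$ with $$ (t - \zeta)\Lambda_0 \subseteq L \subseteq \Lambda_0, \qquad \Lambda_0/L \text{ an invertible } R\text{-module},$$ and sending $L$ to the quotient line $\Lambda_0/L$ identifies $\Gr^{\leq \mu}_{\mathcal{G}, \mathcal{O}}$ with $\mathbb{P}^{n-1}_{\mathcal{O}}$. In particular, for minuscule $\mu$ the bound is smooth and projective over $\mathcal{O}$, and its reflex ring is $\mathcal{O}_\mu = \mathcal{O}$ itself as $\mu$ is already defined over $k$.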
Note that when $\mathcal{G}$ is constant split reductive, the bounds defined here may differ from the bound given by $\mu$ in \cite[Definition 3.5]{Hartl2011}, compare \cite[Remark 2.1.7]{Zhu2016} and \cite[Remark 1.18]{Lafforgue2018}. However, they do agree when $\mu$ is minuscule and $G^{\mr{der}}$ is simply connected. \subsection{Local-global compatibility} We explain how to construct local bounds from global ones. We recall the global-to-local functor for shtukas from \cite[Section 5]{Rad2015} and show that our notions of global and local bounds are compatible in the sense that a global shtuka is bounded if and only if its corresponding local shtukas are bounded by the associated local bounds. This observation gives rise to a bounded version of the Serre-Tate Theorem \cite[Theorem 5.13]{Rad2015}. We use the notation following \cite[Remark 5.2]{Rad2015}. Let $y \in X$ be a closed point. We denote by $\mathcal{O}_y$ the completed local ring at $y$, by $\mathfrak{m}_y \subseteq \mathcal{O}_y$ its maximal ideal with uniformiser $\varpi_y$, and by $\mathbb{F}_y = \mathcal{O}_y/\mathfrak{m}_y$ its residue field. Let $x \in X(R)$ be a section of $X$ such that $x$ factors through $\Spf(\mathcal{O}_y)$, in other words, such that the image of the uniformiser $\varpi_y$ in $R$ is nilpotent. Then the $\mathfrak{m}_y$-adic completion of $\mathcal{O}_y \otimes_{\F_q} R$ factors as $$\mathcal{O}_y {\widehat{\otimes}}_{\F_q} R = (\mathbb{F}_y \otimes_{\F_q} R)\dsq{\varpi_y} = \prod_{1 \leq \ell \leq [\mathbb{F}_y \colon {\F_q}]} \mathcal{O}_y {\widehat \otimes}_{\mathbb{F}_y} R = \prod_{1 \leq \ell \leq [\mathbb{F}_y \colon {\F_q}]} R\dsq{\varpi_y} .$$ The $\ell$-th factor is defined by the ideal $\mathfrak{a}_\ell = \langle a \otimes 1 - 1 \otimes x(a)^{q^\ell} \colon a \in \mathbb{F}_y \rangle$ in $\mathcal{O}_y {\widehat \otimes}_{\F_q} R $, and the Frobenius $\sigma$ cyclically permutes the factors. \begin{remark} \label{remGlobalLocalBounds} We explain how global bounds give rise to local bounds following \cite[Proposition 4.3.3]{Rad2017}. Let $\mathcal{G} \to X$ be a smooth affine group scheme and let $\mathcal{Z}$ be a global bound for $\mathcal{G}$. Let us fix a tuple $\underline{y} = (y_i)_{i \in I} \in X^I$ of pairwise distinct closed points in $X$. Using the isomorphism $\Gr_{\mathcal{G},X^I, I_\bullet}|_U = \left( \prod_{i \in I} \Gr_{\mathcal{G}, X} \right)|_U$ over the complement $U$ of all diagonals in $X^I$, we denote by $\mathcal{Z}_i$ the image of $\mathcal{Z}$ under the projection to the $i$-th component. Then $\mathcal{Z}_i \subseteq \Gr_{\mathcal{G}, X} \times_X X_\mathcal{Z}$ is a quasi-compact locally closed subscheme stable under the action of $\mathcal{L}_{X}^+\mathcal{G}$. Let $y_i'$ be a closed point of $X_{\mathcal{Z}}$ lying over $y_i$. We denote by $\mathcal{Z}_{y_i'} = \mathcal{Z}_i \times_{X_{\mathcal{Z}}} \Spec(\mathcal{O}_{y_i'})$. Then $\mathcal{Z}_{y_i'} \subseteq \Gr_{\mathcal{G}, \mathcal{O}_{y_i}} \times_{\Spec(\mathcal{O}_{y_i})} \Spec (\mathcal{O}_{y_i'})$ is a locally closed subscheme stable under the loop group action. In particular, for a tuple of points $\underline{y}' = (y_i')_{i \in I}$ of $X_\mathcal{Z}^I$ lying over $\underline{y}$, we can associate to a global bound $\mathcal{Z}$ an $I$-tuple of equivalence classes of $\mathcal{L}^+_\mathcal{O} \mathcal{G}$-stable subschemes $(\mathcal{Z}_{y'_i})_{i \in I}$.
Note that it is not clear in general that the equivalence class of subschemes defined by $\mathcal{Z}_{y_i'}$ does indeed admit a representative over its reflex ring (which will in general be different from $\mathcal{O}_{y_i'}$). However, in the generically reductive case with $\mathcal{Z} = \Gr_{\mathcal{G}, X^I, I_\bullet}^{\leq \underline{\mu}}$ for an $I$-tuple of conjugacy classes of geometric cocharacters of $G = \mathcal{G}_K$ we get $\mathcal{Z}_{y_i'} = \Gr^{\leq \mu_i}_{\mathcal{G}_{\mathcal{O}_{y_i}}, \mathcal{O}_{y_i}} \times_{\Spec(\mathcal{O}_{\mu_i})} \Spec(\mathcal{O}_{y_i'})$ by construction, so in this case the $\mathcal{Z}_{y_i'}$ do indeed define local bounds. \end{remark} \begin{remark} \label{remLocalBoundsComp} More precisely, \cite[Proposition 4.3.3]{Rad2017} constructs local bounds in the sense of \cite{Rad2015} by further pulling back the global bound to a subscheme in $\Gr_\mathcal{G} \hat{\times}_{\F_q} \Spf(\mathcal{O})$. In particular, the local bounds associated to $\Gr_{G, X^I, I_\bullet}^{\leq \underline{\mu}}$ in the split reductive case are $\Gr_{G}^{\leq \mu_i} \hat{\times}_{\F_q} \Spf(\mathcal{O})$ rather than $\Gr^{\leq \mu_i}_{\mathcal{G}_{\mathcal{O}_{y_i}}, \mathcal{O}_{y_i}}$. As noted in \cite[Example 4.13]{Rad2015}, the latter kind of bounds seems to be the more natural one to consider. \end{remark} \subsubsection*{Global-to-local functor} We explain how to associate local shtukas to global shtukas following \cite[Section 5]{Rad2015}. Let us fix a tuple $\underline{y} = (y_i)_{i \in I}$ of pairwise distinct closed points of $X$. Let $\underline{\mathcal{E}} = ((x_i)_{i \in I}, \mathcal{E}, \varphi) \in \Sht^{\underline{y}}_{\mathcal{G}, X^I}(R)$. By the observation above, the $y_i$-adic completion of $\mathcal{E}$ decomposes as $$ \mathcal{E} {\widehat{\times}}_{X_R} \Spf(\mathcal{O}_{y_i} {\widehat{\otimes}}_{{\F_q}} R) = \coprod_{1 \leq \ell \leq [\mathbb{F}_{y_i} \colon {\F_q}]} \mathcal{E} {\widehat{\times}}_{X_R} \Spf(R\dsq{\varpi_{y_i}}),$$ and each component is a formal $\hat \mathcal{G}_{y_i} = \mathcal{G} \times_X \Spf(\mathcal{O}_{y_i})$-torsor over $R$. Hence, $\widehat{\underline{\mathcal{E}}_{y_i}} = \left(\mathcal{E} {\widehat{\times}}_{X_R} V(\mathfrak{a}_0), \varphi^{\deg(y_i)}\right)$ is a local $\mathcal{G}_{\mathcal{O}_{y_i}}$-shtuka over $R$. \begin{defn} The \emph{global-to-local functor} associates to a global shtuka $\underline{\mathcal{E}} \in \Sht_{\mathcal{G}, X^I}^{\underline{y}}(R)$ a tuple of local $\mathcal{G}_{y_i}$-shtukas for $i \in I$ given by $$ \widehat{\underline\mathcal{E}_{\underline{y}}} = (\widehat{\underline\mathcal{E}_{y_i}})_{i \in I}.$$ Then, $\widehat{\underline\mathcal{E}_{y_i}}$ is called the \emph{local shtuka} of $\underline{\mathcal{E}}$ at $y_i$. \end{defn} \begin{remark} \label{remEtLocSht} In a similar fashion, for a closed point $y$ of $X$ we can associate to a global shtuka $\underline{\mathcal{E}} = ((x_i),(\mathcal{E}_j), (\varphi_j), \theta) \in \Sht_{\mathcal{G}, X^I, I_\bullet}|_{(X\setminus\{y\})^I}(R)$ with characteristic sections away from $y$ an \'etale local shtuka at $y$ by \cite[Remark 5.6]{Rad2015} as follows. We denote by $\widetilde{\mathcal{G}}_{y} = \mr{Res}_{\mathcal{O}_y/\mathbb{F}_q\dsq{\varpi_y}} \mathcal{G}_{\mathcal{O}_y}$. Then $\widetilde{\mathcal{G}}_{y}$ is a smooth affine group scheme over ${\F_q} \dsq{\varpi_y}$.
The \emph{\'etale local $\widetilde{\mathcal{G}}_{y}$-shtuka} associated to $\mathcal{E}$ is then given by $\underline{\widetilde{\mathcal{E}}}_y = (\widetilde{\mathcal{E}}_y, \varphi)$ with $\widetilde{\mathcal{E}}_y = \mr{Res}_{(\mathcal{O}_y \widehat{\otimes}_{\F_q} R)/R\dsq{\varpi_y}}(\mathcal{E}_m \widehat{\times}_{X_R} \Spf(\mathcal{O}_y \widehat{\otimes}_{\F_q} R))$ and $\varphi = \varphi_m \circ \ldots \circ \varphi_1 \circ \theta$. Note that $\underline{\widetilde{\mathcal{E}}}_y$ is called \'etale as $\varphi$ is an isomorphism by assumption. \end{remark} The global-to-local functor is compatible with our notion of bounds in the following sense. Let us fix a global bound $\mathcal{Z}$ for $\mathcal{G}$ and a tuple of closed points $\underline{y}' = (y_i')_{i \in I} \in X_\mathcal{Z}^I$ such that $y_i'$ lies over $y_i$. We denote by $\Sht^{\underline{y}'}_{\mathcal{G}, X^I} = \Sht_{\mathcal{G}, X^I} \times_{X^I} \Spf(\mathcal{O}_{\underline{y'}})$. \begin{prop} \label{propBoundLocGlobComp} Let $\mathcal{Z}$ be a global bound such that its associated local equivalence classes $\mathcal{Z}_{y_i'}$ constructed in Remark \ref{remGlobalLocalBounds} admit representatives over their respective reflex rings (and are thus local bounds in the sense of Definition \ref{defnLocBd}). A global shtuka $\underline{\mathcal{E}} \in \Sht^{\underline{y}'}_{\mathcal{G}, X^I} (R) $ is bounded by $\mathcal{Z}$ if and only if for all $i \in I$ its associated local shtuka $\widehat{\underline{\mathcal{E}}_{y_i}}$ at $y_i$ is bounded by $\mathcal{Z}_{y'_i}$. \end{prop} Note that the condition on $\mathcal{Z}$ is satisfied for $\Gr_{\mathcal{G}, X^I, I_\bullet}^{\leq \underline{\mu}}$. \begin{proof} Let us fix an fppf-cover $R' \to R$ and a trivialisation $\alpha \colon \mathcal{E}|_{\hat\Gamma_{\underline x_{R'}}} \xrightarrow{\cong} \mathcal{G}|_{\hat \Gamma_{\underline x_{R'}}}$. As the $(y_i)_{i \in I}$ were assumed to be pairwise distinct, we have $\hat\Gamma_{\underline x_{R'}} = \coprod_{i \in I} \hat\Gamma_{x_{i, R'}}$. Moreover, by \cite[Lemma 5.3]{Rad2015} we have $ \hat\Gamma_{x_{i, R'}} = V(\mathfrak{a}_0)$. By construction, the induced point $(\underline{\mathcal{E}}_{R'}, \alpha) \in \Gr_{\mathcal{G}, X^I}(R')$ factors through $\mathcal{Z}$ if and only if the point induced by the restriction of $\alpha$ to $\hat\Gamma_{x_{i, R'}} $ factors through $\mathcal{Z}_{y_i'}$ for all $i \in I$, or, equivalently, the corresponding point $R' \times_{\mathcal{O}_{\mathcal{Z}_{y_i'}}} \mathcal{O}_{y_i'} \to \Gr_{\mathcal{G}, \mathcal{O}_{y_i}} \times_{\Spec(\mathcal{O}_{y_i})} \Spec(\mathcal{O}_{y_i'})$ factors through $\mathcal{Z}_{y_i'}$. But this is the case if and only if the local shtuka $\widehat{\underline{\mathcal{E}}_{y_i}}$ at $y_i$ is bounded by $\mathcal{Z}_{y'_i}$ by definition. \end{proof} \begin{remark} \label{remShtLocBoundsSpaces} Let $\underline{y} = (y_i)_{i \in I}$ be a tuple of pairwise distinct closed points of $X$. Let $(\mathcal{Z}_i)_{i \in I}$ be a tuple of local bounds at $\underline{y}$. We denote by $\mathcal{O}_{(\mathcal{Z}_i)_{i \in I}} = \widehat{\bigotimes}_{i \in I} \mathcal{O}_{\mathcal{Z}_i}$. As in \cite[Definition 4.3.2]{Rad2017}, we say a global shtuka $\underline{\mathcal{E}} \in \Sht^{\underline{y}}_{\mathcal{G}, X^I} \times_{\Spf(\mathcal{O}_{\underline{y}})} \Spf(\mathcal{O}_{(\mathcal{Z}_i)_{i \in I}})$ is bounded by $(\mathcal{Z}_i)_{i \in I}$ if its associated local shtuka at $y_i$ is bounded by $\mathcal{Z}_i$ for all $i \in I$.
When the local bounds come from a global bound, the previous proposition shows that this notion of local boundedness agrees with the global one. We do not explore these local boundedness conditions for global shtukas further here as the bounds we are later interested in, namely the ones given by cocharacters, arise from global bounds. \end{remark} The global-to-local functor also gives rise to a Serre-Tate theorem relating the deformation theory of global shtukas with the deformation theory of their associated local shtukas, compare {\cite[Theorem 5.10]{Rad2015}}. Let $S = \Spec(R) \in \mathcal{N}ilp_{\mathcal{O}_{\underline{y}}}$ and let $i \colon \overline{S} = \Spec(R/J) \hookrightarrow S$ be a closed subscheme defined by a nilpotent ideal $J$. Let $\underline{\bar \mathcal{E}} \in \Sht_{\mathcal{G}, X^I}^{\mathcal{Z}, \underline y}(\overline{S})$. We denote by $\Def^{\mathcal{Z}}_{\underline{\bar\mathcal{E}}}(S)$ the category of bounded deformations of $\underline{\bar\mathcal{E}}$ to $S$, in other words, the category of pairs $(\underline{\mathcal{E}}, \beta \colon i^*\underline{\mathcal{E}} \tilde \rightarrow \underline{\bar{\mathcal{E}}} )$ where $\underline\mathcal{E} \in \Sht_{\mathcal{G}, X^I}^{\mathcal{Z}, \underline{y}}(S)$ and $\beta$ is an isomorphism of $\mathcal{G}$-shtukas over $\overline{S}$. Similarly, for a local $\mathcal{G}_{y_i}$-shtuka $\underline{\bar \mathcal{E}}$ bounded by $\mathcal{Z}_{y_i}$ we define $\Def^{\mathcal{Z}_{y_i}}_{\underline{\bar\mathcal{E}}}(S)$ as the category of bounded deformations of $\underline{\bar \mathcal{E}}$ to $S$, that is, the category of pairs $(\underline{\mathcal{E}}, \beta \colon i^*\underline{\mathcal{E}} \tilde \rightarrow \underline{\bar{\mathcal{E}}} )$ where $\underline \mathcal{E}$ is a local $\mathcal{G}_{y_i}$-shtuka on $S$ bounded by $\mathcal{Z}_{y_i}$ and $\beta$ is an isomorphism of local $\mathcal{G}_{y_i}$-shtukas over $\overline{S}$. \begin{cor}[Bounded Serre-Tate Theorem for shtukas] \label{corBoundedSerreTate} Let $\underline{\bar \mathcal{E}} \in \Sht^{\mathcal{Z}, \underline{y}}_{\mathcal{G}, X^I}(\overline{S})$. The restriction of the global-to-local functor $$\widehat{(-)_{\underline{y}}} \colon \Def^{\mathcal{Z}}_{\underline{\bar\mathcal{E}}}(S) \to \prod_{i \in I} \Def^{\mathcal{Z}_{y_i}}_{\widehat{\underline{\overline{\mathcal{E}}}_{y_i}}}(S), \qquad (\underline{\mathcal{E}}, \beta) \mapsto (\widehat{\underline\mathcal{E}_{y_i}}, \widehat{\beta_{y_i}})_{i \in I} $$ is an equivalence of categories. \end{cor} \begin{proof} This follows directly from the unbounded case in \cite[Theorem 5.10]{Rad2015} together with Proposition \ref{propBoundLocGlobComp}. \end{proof} \section{Level maps and integral models with deep Bruhat-Tits level} We construct integral models for moduli spaces of shtukas with deep Bruhat-Tits level structures and show that these integral models admit proper, surjective and generically \'etale level maps. In order to do so, we first study the morphism on shtuka spaces induced by a generic isomorphism of group schemes, extending a result of \cite{Breutmann2019}. \subsection{Functoriality of shtuka spaces under generic isomorphisms} We study functoriality of shtuka spaces under homomorphisms of group schemes that are generic isomorphisms. We prove an analogue of \cite[Theorem 3.20]{Breutmann2019} in our setting of shtukas with global bounds.
In particular, we get the result on the whole curve and need not restrict the legs to a formal neighbourhood of a fixed point as in \cite{Breutmann2019}. Moreover, we show that the level maps in our setting are generically finite \'etale, which is not part of \cite{Breutmann2019}. Note that the second main functoriality result \cite[Theorem 3.26]{Breutmann2019} for closed immersions of group schemes has also since been improved in the revised version of \cite{Breutmann2019} and in \cite{Yun2022} to work over $X^I$ rather than only over the completion at a fixed point. \begin{remark} Let us first note the following functoriality properties of the affine Grassmannian in this setting. \begin{enumerate} \item Let $f \colon \mathcal{G} \to \mathcal{G}'$ be a homomorphism of group schemes over $X$ such that $f$ is an isomorphism over a dense open subset $U \subseteq X$. The induced map $$ f_\ast \colon \Gr_{\mathcal{G}, X^I, I_\bullet} \to \Gr_{\mathcal{G}', X^I, I_\bullet} $$ is then an isomorphism over $U^I$ using the moduli description from Remark \ref{remBLdescent}. \item In the Bruhat-Tits case (compare Assumption \ref{ass} (\ref{assBT})) it follows that the map $$ \rho_{\Omega, \ast} \colon \Gr_{\mathcal{G}_\Omega, X^I, I_\bullet} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Gr_{\mathcal{G}_\mathfrak{f}, X^I, I_\bullet}$$ is an open immersion by Theorem \ref{thmBunGImm} and an isomorphism over $(X \setminus \{x_0\})^I$ using the previous observation. \item Moreover, using Lemma \ref{lemCocharShtData} we obtain a map $$ \rho_{\Omega, \ast} \colon \Gr^{\leq \underline \mu}_{\mathcal{G}_\Omega, X^I, I_\bullet} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Gr^{\leq \underline \mu}_{\mathcal{G}_\mathfrak{f}, X^I, I_\bullet}$$ that factors as a closed immersion followed by an open immersion $$\Gr^{\leq \underline \mu}_{\mathcal{G}_\Omega, X^I, I_\bullet} \to \Gr_{\mathcal{G}_\Omega, X^I, I_\bullet} \times_{\varprojlim_{\mathfrak{f} \prec \Omega} \Gr_{\mathcal{G}_\mathfrak{f}, X^I, I_\bullet}} \varprojlim_{\mathfrak{f} \prec \Omega} \Gr^{\leq \underline \mu}_{\mathcal{G}_\mathfrak{f}, X^I, I_\bullet} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Gr^{\leq \underline \mu}_{\mathcal{G}_\mathfrak{f}, X^I, I_\bullet}$$ and is hence a locally closed immersion and an isomorphism over $(X \setminus \{x_0\})^I$. \end{enumerate} \end{remark} We need the following lemma on twisted flag varieties in the local setting. \begin{lem} \label{lemTwistedFlagVariety} Let $k = \mathbb{F} \dbr{t}$ be the field of formal Laurent series over an arbitrary field $\mathbb{F}$ and let $\mathfrak{o} = \mathbb{F} \dsq{t}$ be the subring of formal power series. Let $G$ be a smooth affine group scheme over $k$ and let $\mathcal{G}$ and $\mathcal{G}'$ be two smooth integral models of $G$ with geometrically connected fibres. Let $f \colon \mathcal{G} \to \mathcal{G}'$ be a homomorphism of $\mathfrak{o}$-group schemes that is the identity on $G$ over $k$. \begin{enumerate} \item The corresponding twisted flag variety $L^+\mathcal{G}'/L^+\mathcal{G}$ is representable by a smooth and separated scheme of finite type over $\mathbb{F}$. If $\mathbb{F}$ is finite or separably closed, then $$\left( L^+\mathcal{G}'/L^+\mathcal{G} \right)(\mathbb{F}) = \mathcal{G}'(\mathfrak{o})/\mathcal{G}(\mathfrak{o}).$$ \item Assume that $\mathbb{F}$ is finite. We equip $G(k)$ with the analytic topology induced by the natural topology on $k$ (note that $k$ is locally compact in this case).
Then $\mathcal{G}(\mathfrak{o}) \subseteq \mathcal{G}'(\mathfrak{o})$ are compact open subgroups of $G(k)$. In particular, the quotient $\mathcal{G}'(\mathfrak{o})/\mathcal{G}(\mathfrak{o})$ is discrete and finite. \item Let $S$ be an $\mathbb{F}$-scheme. Giving an $\mathcal{L}^+ \mathcal{G}$-torsor over $S$ is equivalent to giving an $\mathcal{L}^+ \mathcal{G}'$-torsor $\mathcal{E}'$ over $S$ together with an isomorphism $\mathcal{E}'/\mathcal{L}^+\mathcal{G} \xrightarrow{\cong} \mathcal{L}^+ \mathcal{G}'/\mathcal{L}^+ \mathcal{G}$. \end{enumerate} \end{lem} Note that giving an isomorphism $\mathcal{E}'/\mathcal{L}^+\mathcal{G} \xrightarrow{\cong} \mathcal{L}^+ \mathcal{G}'/\mathcal{L}^+ \mathcal{G}$ in (3) is also clearly equivalent to giving a section in $\left(\mathcal{E}'/\mathcal{L}^+\mathcal{G} \right)(S)$. \begin{proof} \begin{enumerate} \item By the argument in the proof of \cite[Lemma 3.17]{Breutmann2019}, the quotient stack $L^+\mathcal{G}'/L^+\mathcal{G}$ is representable by a separated scheme of finite type over $\mathbb{F}$ that is moreover a closed subscheme of the affine Grassmannian $\Gr_\mathcal{G}$. As both $\mathcal{L}^+ \mathcal{G}$ and $\mathcal{L}^+ \mathcal{G}'$ are formally smooth over $\mathbb{F}$, the quotient $L^+\mathcal{G}'/L^+\mathcal{G}$ is hence formally smooth as well; being moreover of finite type over $\mathbb{F}$, it is smooth. For the second claim, it suffices to show that $H^{1}(\mathbb{F}, L^+\mathcal{G})$ is trivial by the moduli description of the quotient stack. But this is shown in the proof of \cite[Corollary 3.22]{Richarz2019}. \item Clearly, both $\mathcal{G}(\mathfrak{o})$ and $\mathcal{G}'(\mathfrak{o})$ are compact open subgroups of $G(k)$ by construction. The existence of the map $f$ implies that $\mathcal{G}(\mathfrak{o})$ is a subgroup of $\mathcal{G}'(\mathfrak{o})$. The assertion on the quotient then follows directly from basic facts in topology: the quotient of a compact group by an open subgroup is finite and discrete. \item Given an $\mathcal{L}^+ \mathcal{G}$-torsor $\mathcal{E}$ on $S$, its associated $\mathcal{L}^+ \mathcal{G}'$-torsor is given by $\mathcal{E} \times^{\mathcal{L}^+ \mathcal{G}} \mathcal{L}^+\mathcal{G}'$. The map on sections given by $(e,g) \mapsto g$ then induces an isomorphism $$\mathcal{E}'/ \mathcal{L}^+ \mathcal{G} \xrightarrow{\cong} \mathcal{L}^+ \mathcal{G}'/ \mathcal{L}^+ \mathcal{G}.$$ This construction is an equivalence. \end{enumerate} \end{proof} \begin{thm} \label{thmLvlMapGen} Let $\mathcal{G}$ and $\mathcal{G}'$ be two smooth affine group schemes over $X$ with geometrically connected fibres. Let $f \colon (\mathcal{G}, \mathcal{Z}) \to (\mathcal{G}', \mathcal{Z}')$ be a map of shtuka data such that the map $f \colon \mathcal{G} \to \mathcal{G}'$ is an isomorphism over $U = X \setminus \left\{y_1, \ldots, y_n\right\}$ for a finite set of closed points $ \left\{y_1, \ldots, y_n\right\}$ of $X$. \begin{enumerate} \item The induced map $$f_\ast \colon \Sht^{\mathcal{Z}}_{\mathcal{G},X^I, I_\bullet} \times_{X^I_\mathcal{Z}} X^I_{\mathcal{Z}.\mathcal{Z}'} \to \Sht^{\mathcal{Z}'}_{\mathcal{G}',X^I, I_\bullet} \times_{X^I_{\mathcal{Z}'}} X^I_{\mathcal{Z}.\mathcal{Z}'}$$ is schematic, separated and of finite type. \item Assume that $\mathcal{G}$ is a parahoric Bruhat-Tits group scheme and that $\mathcal{Z} \subseteq \Gr_{\mathcal{G}, X^I, I_\bullet} \times_{X^I} X_\mathcal{Z}^I$ is a closed subscheme. Then the map $f_\ast$ is moreover proper.
\item Assume that $\mathcal{Z} \times_{X_\mathcal{Z}^I} X^I_{\mathcal{Z}.\mathcal{Z}'} \to \mathcal{Z}' \times_{X_{\mathcal{Z}'}^I} X^I_{\mathcal{Z}.\mathcal{Z}'} $ is an isomorphism over $\left( U \times_{X} X_{\mathcal{Z}.\mathcal{Z}'} \right)^I$. Then the map $f_\ast$ is \'etale locally representable by the constant scheme $$ \prod_{\ell = 1}^n \underline{\mathcal{G}'(\mathcal{O}_{y_\ell})/\mathcal{G}(\mathcal{O}_{y_\ell})}.$$ In particular, $f_\ast$ is finite \'etale and surjective over $\left( U \times_{X} X_{\mathcal{Z}.\mathcal{Z}'} \right)^I$. \item Under the assumptions of (2) and (3), assume additionally that $\mathcal{Z}'$ is the schematic closure of $\mathcal{Z}'|_{\left( U \times_{X} X_{\mathcal{Z}'} \right)^I}$ in $\Gr_{\mathcal{G}', X_{\mathcal{Z}'}^I, I_\bullet}$. Then $f_\ast$ is surjective. \end{enumerate} \end{thm} \begin{remark} The first two statements are direct analogues of the corresponding statements in \cite[Theorem 3.20]{Breutmann2019}, while there is no analogue of the third assertion in \cite[Theorem 3.20]{Breutmann2019}. In order to get surjectivity of the map $f_\ast$, in \cite{Breutmann2019} it is assumed that the bound $\mathcal{Z}$ arises as the base change of $\mathcal{Z}'$ under the map $f_\ast$ on affine Grassmannians. This assumption does not seem adequate in our setting; in particular, it is not satisfied for the bounds given by cocharacters in the Bruhat-Tits case. We thus replace the assumption by the condition that the map on bounds is a generic isomorphism and that the bounds arise as schematic closures from their generic part, both of which are satisfied in our setting. Note that when $\mathcal{Z}$ arises as a base change, the map $\mathcal{Z} \to \mathcal{Z}'$ is clearly an isomorphism over $U^I$. Note moreover that a similar statement also holds for moduli spaces of shtukas with local boundedness conditions as in Remark \ref{remShtLocBoundsSpaces}. In fact, the proof of \cite{Breutmann2019} for (1) and (2) translates directly to this setting. \end{remark} \begin{proof} \begin{enumerate} \item We proceed as in the proof of \cite[Theorem 3.20]{Breutmann2019}. We consider the projection $\Sht^\mathcal{Z}_{\mathcal{G}, X^I, I_\bullet} \to \prod_{j=1, \ldots, m} \Bun_\mathcal{G}$ given by $\underline{\mathcal{E}} \mapsto (\mathcal{E}_j)_{j=1, \ldots, m}$. Let us fix $$\underline{\mathcal{E}}' = ((x_i)_{i \in I}, (\mathcal{E}'_j)_{j = 0, \ldots, m}, (\varphi'_j)_{j = 1, \ldots, m}, \theta) \in \left( \Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet} \times_{X^I_{\mathcal{Z}'}} X^I_{\mathcal{Z}. \mathcal{Z}'}\right) (S).$$ We claim that the induced map $$ S \times_{\Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet} \times_{X^I_{\mathcal{Z}'}} X^I_{\mathcal{Z}. \mathcal{Z}'}} \left( \Sht^{\mathcal{Z}}_{\mathcal{G}, X^I, I_\bullet} \times_{X^I_{\mathcal{Z}}} X^I_{\mathcal{Z}. \mathcal{Z}'} \right)\to S \times_{\prod_{j = 1}^m \Bun_{\mathcal{G}'}} \prod_{j = 1}^m \Bun_{\mathcal{G}} $$ is a quasi-compact locally closed immersion. This shows assertion (1) using that $\Bun_\mathcal{G} \to \Bun_{\mathcal{G}'}$ is schematic and quasi-projective by \cite[Proposition 3.18]{Breutmann2019}.
In order to show the claim, let us fix a point $$(s, (\mathcal{E}_j)_{j = 1, \ldots, m}, (\psi_j)_{j = 1, \ldots, m}) \in (S \times_{\prod_{j = 1}^m \Bun_{\mathcal{G}'}} \prod_{j = 1}^m \Bun_{\mathcal{G}})(T),$$ where $s \colon T \to S$ is a map of schemes, the $\mathcal{E}_j$ are $\mathcal{G}$-bundles and the $\psi_j \colon s^\ast \mathcal{E}'_j \xrightarrow{\cong} f_\ast \mathcal{E}_j$ are isomorphisms of $\mathcal{G}'$-bundles over $X_T$. As in the proof of \cite[Theorem 3.20]{Breutmann2019}, there is at most one $T$-valued point $(s, \underline{\mathcal{E}}, \psi)$ of $S \times_{\Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet}} \Sht^{\mathcal{Z}}_{\mathcal{G}, X^I, I_\bullet}$ mapping to $(s, (\mathcal{E}_j)_{j = 1, \ldots, m}, (\psi_j)_{j = 1, \ldots, m})$ as the maps $\varphi_j$ of $\underline{\mathcal{E}}$ are already uniquely determined over an open dense subset by the $\varphi_j'$. It remains to check that the locus where such an extension exists is closed in $T$. Let $D = X \setminus U$ be the effective Cartier divisor in $X$ given by $\underline{y}$. Let $1 \leq j \leq m$. The map $\varphi_{j, T}' \colon \mathcal{E}'_{j-1}|_{X_T \setminus \bigcup_{i \in I_j} \Gamma_{x_i}} \to {\mathcal{E}'_j}|_{X_T \setminus \bigcup_{i \in I_j} \Gamma_{x_i}} $ defines a map $\varphi_{j} \colon \mathcal{E}_{j-1}|_{X_T \setminus (D \cup \bigcup_{i \in I_j} \Gamma_{x_i})} \to {\mathcal{E}_j}|_{X_T \setminus (D \cup \bigcup_{i \in I_j} \Gamma_{x_i})}$. Trivialising both $\mathcal{E}_{j-1}$ and $\mathcal{E}_j$ over $\hat D \cup \hat\Gamma_{\underline{x}_j}$ defines an element $\varphi_j \in \mathcal{G}(\hat{D}^0 \setminus \Gamma_{\underline{x}_j})$. By the argument that the positive loop group is a closed subscheme of the loop group, the locus where $\varphi_j$ can be extended to $\hat{D} \setminus \Gamma_{\underline{x}_j}$ is closed. Finally, the locus where the resulting map is bounded by $\mathcal{Z}$ is representable by a quasi-compact immersion. \item This follows from the argument in (1) as in the parahoric case the map $\Bun_\mathcal{G} \to \Bun_{\mathcal{G}'}$ is projective by \cite[Proposition 3.18]{Breutmann2019}. \item It suffices to show the first claim, namely that the map $f_\ast$ is \'etale locally representable by the constant scheme $ \prod_{\ell = 1}^n \underline{\mathcal{G}'(\mathcal{O}_{y_\ell})/\mathcal{G}(\mathcal{O}_{y_\ell})}.$ We follow the proof of \cite[Proposition 2.16]{Varshavsky2004}. Let $$\underline{\mathcal{E}'} = ((x_i), (\mathcal{E}'_j), (\varphi'_j), \theta) \in \Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet}|_{U_{\mathcal{Z}, \mathcal{Z}'}^I}(S).$$ For $\ell = 1, \ldots, n$, we denote by $\widetilde{\underline{\mathcal{E}'}_{y_\ell}} = (\widetilde{\mathcal{E}'}_{y_\ell}, \varphi)$ the associated \'etale local shtuka of $\underline{\mathcal{E}'}$ at $y_\ell$ as defined in Remark \ref{remEtLocSht}. The fibre product $$S' = S \times_{\underline{\mathcal{E}}', \Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet}|_{U_{\mathcal{Z}, \mathcal{Z}'}^I}, f_\ast} \Sht^{\mathcal{Z}}_{\mathcal{G}, X^I, I_\bullet}|_{U_{\mathcal{Z}, \mathcal{Z}'}^I}$$ is then given by the set of tuples $(\widetilde{\underline{\mathcal{E}}_{y_\ell}})_{\ell = 1, \ldots, n}$ of \'etale local $\widetilde{\mathcal{G}_{\mathcal{O}_{y_\ell}}}$-shtukas such that $f_\ast \widetilde{\underline{\mathcal{E}}_{y_\ell}} = \widetilde{\underline{\mathcal{E}'}_{y_\ell}}$.
As the claim is \'etale-local on $S$, we may assume that all $\widetilde{\mathcal{E}'}_{y_\ell}$ are trivial $\mathcal{L}^+ \widetilde{\mathcal{G}'_{\mathcal{O}_{y_\ell}}}$-torsors. By Lemma \ref{lemTwistedFlagVariety} (3), the fibre product $S'$ is then representable by the scheme of Frobenius fixed points of $\prod_{\ell = 1}^n \mathcal{L}^+ \widetilde{\mathcal{G}'_{\mathcal{O}_{y_\ell}}} / \mathcal{L}^+ \widetilde{\mathcal{G}_{\mathcal{O}_{y_\ell}}}$, which is given by the constant scheme $\prod_{\ell = 1}^n \underline{ \left(\mathcal{L}^+ \widetilde{\mathcal{G}'_{\mathcal{O}_{y_\ell}}} / \mathcal{L}^+ \widetilde{\mathcal{G}_{\mathcal{O}_{y_\ell}}} \right)({\F_q})} $ by \cite[Lemma 3.3]{Varshavsky2004}. By Lemma \ref{lemTwistedFlagVariety} (1), this scheme can be identified with $\prod_{\ell = 1}^n \underline{\mathcal{G}'(\mathcal{O}_{y_\ell})/\mathcal{G}(\mathcal{O}_{y_\ell})}$, and by Lemma \ref{lemTwistedFlagVariety} (2) it is finite over ${\F_q}$. \item Let us fix a point $s \in \Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet}$. If $s$ lies over $U^I$, it is in the image of $f_\ast$ by (3). Let us thus assume that $s$ maps to $X^I \setminus U^I$. By the local model theorem (compare Remark \ref{remLocMod}), we have a smooth map $\Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet} \to [\mathcal{H} \backslash \mathcal{Z}']$, where $\mathcal{H}$ is a finite-dimensional quotient of $\mathcal{L}_{X^I}^+ \mathcal{G}'$. By assumption on $\mathcal{Z}'$, the image of $s$ in $[\mathcal{H} \backslash \mathcal{Z}']$ has a generalisation $s'$ over $U^I$. As the local model map is smooth, $s'$ lifts to a generalisation $s''$ of $s$ in $\Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet}$. As $f_\ast$ is generically surjective by (3), there is a point $t \in \Sht^\mathcal{Z}_{\mathcal{G}, X^I, I_\bullet}$ mapping to $s''$. As $f_\ast$ is proper by (2), specialisations lift along $f_\ast$. Hence, $s$ is in the image of $f_\ast$. \end{enumerate} \end{proof} Let us also state the result in the generically reductive case with bounds given by cocharacters. \begin{cor} \label{corLvlMapParahoric} Let $G$ be a reductive group over $K$ and let $f \colon \mathcal{G} \to \mathcal{G}'$ be a map of two smooth affine models of $G$ that is an isomorphism over some dense open subset $U$ of $X$. Let $\underline{\mu} = (\mu_i)_{i \in I}$ be an $I$-tuple of conjugacy classes of geometric cocharacters of $G$. The induced map $$ f_\ast \colon \Sht^{\leq \underline{\mu}}_{\mathcal{G}, X^I,I_\bullet} \to \Sht^{\leq \underline{\mu}}_{\mathcal{G}', X^I,I_\bullet}$$ is schematic, separated and of finite type. Moreover, it is finite \'etale and surjective over $(U \times_X X_{\underline{\mu}})^I$. When $\mathcal{G}$ is a parahoric Bruhat-Tits group scheme, $f_\ast$ is proper and surjective. \end{cor} \begin{proof} The bounds given by $\underline \mu$ for $\mathcal{G}$ and $\mathcal{G}'$ clearly satisfy the conditions of Theorem \ref{thmLvlMapGen}. \end{proof} \subsection{Moduli spaces of shtukas with deep Bruhat-Tits level structure} In this section, we define the integral model of the moduli space of shtukas with deep Bruhat-Tits level structure as the schematic image of the moduli space of shtukas for the Bruhat-Tits group scheme inside the limit of all the corresponding spaces for parahoric level.
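To fix ideas, we include the following standard example illustrating the generic degree of the parahoric level maps; it is only meant as an illustration (in particular, the specific choice of groups is ours) and is not used in the sequel. \begin{ex} Let $G = \GL_2$ and let $\mathcal{G}'$ be a smooth affine model of $G$ that is hyperspecial at $x_0$ with $\mathcal{G}'(\mathcal{O}_{x_0}) = \GL_2(\mathcal{O}_{x_0})$. Let $\mathcal{G}$ be the Iwahori model that agrees with $\mathcal{G}'$ away from $x_0$ and satisfies $$ \mathcal{G}(\mathcal{O}_{x_0}) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \GL_2(\mathcal{O}_{x_0}) \;\middle|\; c \equiv 0 \bmod \mathfrak{m}_{x_0} \right\}. $$ In this case, the twisted flag variety of Lemma \ref{lemTwistedFlagVariety} is the flag variety of the special fibre, $L^+\mathcal{G}'/L^+\mathcal{G} \cong \mathbb{P}^1_{\mathbb{F}_{x_0}}$, and the level map $f_\ast$ of Corollary \ref{corLvlMapParahoric} is, away from $x_0$, finite \'etale of degree $$ [\GL_2(\mathcal{O}_{x_0}) \colon \mathcal{G}(\mathcal{O}_{x_0})] = |\mathbb{P}^1(\mathbb{F}_{x_0})| = q_{x_0} + 1, $$ where $q_{x_0}$ denotes the cardinality of the residue field $\mathbb{F}_{x_0}$. \end{ex}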
\begin{prop} \label{propImmersion} In the situation of Assumption \ref{ass} (\ref{assBT}), the map $$ \rho_{\Omega, \ast} \colon \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I,I_\bullet} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet}$$ is schematic and representable by a quasi-compact locally closed immersion. Moreover, $\rho_{\Omega, \ast}$ is an isomorphism over $(X \setminus \{x_0\})^I$. When $\Omega$ is (the closure of) a facet, $\rho_{\Omega, \ast}$ is an isomorphism. \end{prop} \begin{proof} The assertion in the case that $\Omega$ is a facet is clear because the index set $\{\mathfrak{f} \prec \Omega\}$ has the final object $\Omega$. By Corollary \ref{corLvlMapParahoric} and Lemma \ref{lemLim}, the map is schematic, separated and of finite type. By Theorem \ref{thmBunGImm}, the corresponding map on the unbounded moduli stacks of shtukas is an open immersion. Hence, $\rho_{\Omega, \ast}$ is a locally closed immersion, as being bounded by $\underline{\mu}$ is a closed condition. Over $(X \setminus \{x_0\})^I$, an element of $\Sht_{\mathcal{G}_\Omega, X^I,I_\bullet}$ is bounded by $\underline{\mu}$ if and only if its image under $\rho_{\mathfrak{f}, \Omega, \ast}$ for one (or equivalently all) facet $\mathfrak{f} \prec \Omega$ is bounded by $\underline{\mu}$ by Lemma \ref{lemCocharShtData}. Thus, $\rho_{\Omega, \ast}$ is an open immersion over $(X \setminus \{x_0\})^I$. Moreover, the map $\rho_{\Omega, \ast}$ is finite away from $x_0$ by Lemma \ref{lemLim}, hence also a closed immersion. In order to see that $\rho_{\Omega, \ast}$ is an isomorphism over $(X \setminus \{x_0\})^I$, we first show that the projection $\varprojlim_{\mathfrak{f} \prec \Omega} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet} \to \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet}$ is finite \'etale of degree $[\mathcal{G}_\mathfrak{f}(\mathcal{O}_{x_0}) \colon \mathcal{G}_\Omega(\mathcal{O}_{x_0})]$ over $(X \setminus \{x_0\})^I$ for all facets $\mathfrak{f} \prec \Omega$. Note that this does not follow directly from Lemma \ref{lemLimProj}. Instead, we use a similar induction on $\Omega$ as in the proof of Theorem 2.3 (and Lemma 2.6). Note that in the parahoric case the claim is part of Theorem \ref{thmLvlMapGen}.
For the inductive step, we note that for $\Omega = \Omega_1 \cup \Omega_2$ with $\Omega_i = \mr{cl}(\Omega_i)$ we have $$ \varprojlim_{\mathfrak{f} \prec \Omega} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet} \xrightarrow{\cong} \varprojlim_{\mathfrak{f} \prec \Omega_1} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet} \times_{\varprojlim_{\mathfrak{f} \prec \Omega_1 \cap \Omega_2} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet}} \varprojlim_{\mathfrak{f} \prec \Omega_2} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet}.$$ Hence, for any $\Omega' \prec \Omega_1$ we have that the projection $\varprojlim_{\mathfrak{f} \prec \Omega} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet} \to \varprojlim_{\mathfrak{f} \prec \Omega'} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet}$ is finite \'etale of degree \begin{align*} [\mathcal{G}_{\Omega'}(\mathcal{O}_{x_0}) \colon \mathcal{G}_{\Omega_1}(\mathcal{O}_{x_0})] \cdot [\mathcal{G}_{\Omega_1 \cap \Omega_2}(\mathcal{O}_{x_0}) \colon \mathcal{G}_{\Omega_2}(\mathcal{O}_{x_0})] & = [\mathcal{G}_{\Omega'}(\mathcal{O}_{x_0}) \colon \mathcal{G}_{\Omega_1}(\mathcal{O}_{x_0})] \cdot [\mathcal{G}_{\Omega_1}(\mathcal{O}_{x_0}) \colon \mathcal{G}_{\Omega}(\mathcal{O}_{x_0})] \\ & = [\mathcal{G}_{\Omega'}(\mathcal{O}_{x_0}) \colon \mathcal{G}_{\Omega}(\mathcal{O}_{x_0})]. \end{align*} Thus, over $(X \setminus \{x_0\})^I$, the map $\rho_{\Omega, \ast}$ is a closed immersion of stacks that are finite \'etale of the same degree over $\Sht_{\mathcal{G}_\mathfrak{f}, X^I, I_\bullet}^{\leq \underline{\mu}}$ for any facet $\mathfrak{f} \prec \Omega$, and hence an isomorphism there. \end{proof} \begin{defn} \label{defnIntMod} The integral model $\overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I,I_\bullet}$ of the moduli space of shtukas with $\mathcal{G}_\Omega$-level is defined to be the schematic image in the sense of \cite{Emerton2021} of the map $$ \rho_{\Omega, \ast} \colon \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I,I_\bullet} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f},X^I,I_\bullet}.$$ \end{defn} By Proposition \ref{propImmersion}, we have $\overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f},X^I,I_\bullet} = \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f},X^I,I_\bullet}$ in the parahoric case. Moreover, the inclusion $\Sht^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I,I_\bullet} \to \overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I,I_\bullet} $ is an isomorphism away from $x_0$ by Proposition \ref{propImmersion} together with the fact that the schematic closure commutes with flat base change. \begin{remark} As the map $\rho_{\Omega, \ast}$ is schematic and the schematic image commutes with flat base change and is fpqc-local on the target (compare \cite[Remark 3.1.2]{Emerton2021}), the schematic image of $\rho_{\Omega, \ast}$ is given by descending the schematic image of the map of schemes $\rho_{\Omega, \ast, S} \colon S \times_{\varprojlim_{\mathfrak{f} \prec \Omega} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f},X^I,I_\bullet}} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I,I_\bullet} \to S$ for any smooth atlas $S \to \varprojlim_{\mathfrak{f} \prec \Omega} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f},X^I,I_\bullet}$.
\end{remark} By construction, we have level maps $\overline{\rho}_{\mathfrak{f}, \Omega}\colon \overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I,I_\bullet} \to \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f},X^I,I_\bullet}$ for all facets $\mathfrak{f} \prec \Omega$. In particular, for $\Omega' \prec \Omega$ we obtain a map $\overline{\rho}_{\Omega', \Omega} \colon\overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I,I_\bullet}\to \varprojlim_{\mathfrak{f} \prec \Omega'} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f},X^I,I_\bullet}$ that factors through $\overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega'}, X^I,I_\bullet}$ by construction. \begin{thm} \label{thmLvlMapGeneral} Let $\Omega, \Omega'$ be two bounded connected subsets of an apartment in the Bruhat-Tits building of $G_{K_{x_0}}$ such that $\Omega' \prec \Omega$. Then the level map $$\overline{\rho}_{\Omega', \Omega} \colon \overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega}, X^I, I_\bullet} \to \overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega'}, X^I, I_\bullet}$$ is schematic, proper, surjective and finite \'etale away from $x_0$. \end{thm} \begin{proof} As a first step, we show that $\overline{\rho}_{\Omega', \Omega}$ is schematic. By Lemmas \ref{lemLimProj} and \ref{lemLim}, the map $$ \varprojlim_{\mathfrak{f} \prec \Omega} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I,I_\bullet} \to \varprojlim_{\mathfrak{f}' \prec \Omega'} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_{\mathfrak{f}'}, X^I,I_\bullet} $$ is schematic. The claim for $\overline{\rho}_{\Omega', \Omega}$ then follows from Lemma \ref{lemSubSchm}. That the map is finite \'etale away from $x_0$ follows from the fact that the map $\Sht^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega}, X^I, I_\bullet} \to \overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega}, X^I, I_\bullet}$ is an isomorphism away from $x_0$ by the observation above together with Corollary \ref{corLvlMapParahoric}. Moreover, the map $\overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega'}, X^I, I_\bullet} \to \varprojlim_{\mathfrak{f} \prec \Omega'} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_{\mathfrak{f}}, X^I, I_\bullet}$ is a closed immersion by definition. Thus, by Lemma \ref{lemLim}, it suffices to consider the level maps $$ \overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega}, X^I, I_\bullet} \to \Sht^{\leq \underline{\mu}}_{\mathcal{G}_{\mathfrak{f}}, X^I, I_\bullet} $$ for facets $\mathfrak{f} \prec \Omega$ to show the properness. Similarly, by construction of $\overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega}}$, it suffices to show the claim for the projections $$ \varprojlim_{\mathfrak{f} \prec \Omega} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_{\mathfrak{f}}, X^I, I_\bullet} \to \Sht^{\leq \underline{\mu}}_{\mathcal{G}_{\mathfrak{f}}, X^I, I_\bullet}.$$ But for the projections the claim follows from Lemma \ref{lemLimProj}. The surjectivity follows as in the parahoric case in the proof of Theorem \ref{thmLvlMapGen}. \end{proof} \begin{remark} In a similar fashion we define an integral model of the moduli space of bounded $\mathcal{G}$-shtukas for \emph{arbitrary} Bruhat-Tits group schemes $\mathcal{G} \to X$. More precisely, let $x_1, \ldots, x_n$ be closed points of $X$ such that $\mathcal{G}|_{X \setminus \{x_1, \ldots, x_n\}}$ is parahoric. We set $U = X \setminus \{x_1, \ldots, x_n\}$.
For each $1 \leq m \leq n$ let $\Omega_m \subseteq \mathcal{B}(G_{K_{x_m}}, K_{x_m})$ be a bounded subset contained in an apartment with $\Omega_m = \mr{cl}(\Omega_m)$ such that $\mathcal{G}|_{\mathcal{O}_{x_m}} = \mathcal{G}_{\Omega_m}$. In this case, we write $\mathcal{G}= \mathcal{G}_{(\Omega_m)_{1 \leq m \leq n}}$ and define the corresponding integral model $$ \overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}, X^I, I_\bullet} = \mr{image}\left( \Sht^{\leq \underline{\mu}}_{\mathcal{G}, X^I, I_\bullet} \hookrightarrow \varprojlim_{(\mathfrak{f}_m)_{m=1, \ldots, n} \prec (\Omega_m)_{m = 1, \ldots, n}} \Sht^{\leq \underline{\mu}}_{\mathcal{G}_{(\mathfrak{f}_m)_{m=1, \ldots, n}}, X^I, I_\bullet} \right) $$ as above as the schematic image in the sense of \cite{Emerton2021} of the embedding of the stack of bounded $\mathcal{G}$-shtukas into the limit of the corresponding stacks with parahoric level. By the same arguments as above, the analogous assertions from Proposition \ref{propImmersion} and Theorem \ref{thmLvlMapGeneral} hold in this setting as well. In particular, the natural map $\Sht_{\mathcal{G}, X^I, I_\bullet}^{\leq \underline{\mu}} \to \overline{\Sht}_{\mathcal{G}, X^I, I_\bullet}^{\leq \underline{\mu}}$ is an open immersion and an isomorphism over $U$, and for any map of Bruhat-Tits group schemes $f \colon \mathcal{G} \to \mathcal{G}'$ that is an isomorphism generically we obtain a proper, surjective and generically \'etale level map $$f_\ast \colon \overline{\Sht}_{\mathcal{G}, X^I, I_\bullet}^{\leq \underline{\mu}} \to \overline{\Sht}_{\mathcal{G}', X^I, I_\bullet}^{\leq \underline{\mu}}.$$ \end{remark} \begin{ex} Let us consider the Drinfeld case, that is, $G = \GL_r$ and $\underline{\mu} = ((1,0,\ldots,0),(0, \ldots,0,-1))$. In this case, \cite{Bieker2022} defines \emph{Drinfeld $\Gamma_0(\mathfrak{p}^n)$-level structures} for shtukas, adapting the notion of Drinfeld $\Gamma_0(p^n)$-level structures for elliptic curves of \cite{Katz1985}. Moreover, the moduli space of Drinfeld shtukas with Drinfeld $\Gamma_0(\mathfrak{p}^n)$-level structure identifies with $\overline{\Sht}^{\leq \underline \mu}_{\GL_{r,\Omega}, X^2, (1,2)}$ by \cite[Theorem 6.7]{Bieker2022} for the standard simplex $\Omega$ of side length $n$ in the Bruhat-Tits building of $\GL_r$. \end{ex} \section{Some lemmata on algebraic stacks} We collect some results on finite connected limits of algebraic stacks that we use in this paper and for which we could not find a reference in the literature. In this section, $I$ will always denote a connected index category and $(\mathcal{X}_i)_{i \in I}$ denotes a diagram over $I$ of (fppf-) Artin stacks over some base scheme $S$. \begin{lem} \label{lemLimProj} Assume that all algebraic stacks $\mathcal{X}_i$ have a diagonal that is schematic. Let all transition maps in $(\mathcal{X}_i)_{i \in I}$ be schematic. Then the projections $\varprojlim_{i \in I} \mathcal{X}_i \to \mathcal{X}_j$ are schematic for all $j \in I$. Moreover, assume that all $\mathcal{X}_i$ are separated over $S$ and that all transition maps have a property $\mathbf{P}$ of morphisms of schemes that is stable under base change and composition and is smooth local on the target such that all proper maps have $\mathbf{P}$. Then the projections $\varprojlim_{i \in I} \mathcal{X}_i \to \mathcal{X}_j$ have property $\mathbf{P}$ for all $j \in I$. \end{lem} \begin{proof} It suffices to show the claim for fibre products and equalisers. For fibre products this is clear.
Let us thus consider the equaliser diagram \begin{center} \begin{tikzcd} \mathcal{X}_1 \arrow[shift left=.75ex]{r}{f} \arrow[shift right=.75ex, swap]{r}{g} & \mathcal{X}_2. \end{tikzcd} \end{center} The equaliser of this diagram is given by the fibre product $\mathcal{X} = \mathcal{X}_2 \times_{\Delta, \mathcal{X}_2 \times_S \mathcal{X}_2, (f,g)} \mathcal{X}_1$. Thus, the projection $\mathcal{X} \to \mathcal{X}_1$ arises as the base change of the diagonal of $\mathcal{X}_2$ and is thus schematic in the first case and moreover proper in the second case (as we assumed $\mathcal{X}_2$ to be separated). The projection $\mathcal{X} \to \mathcal{X}_2$ has the required properties as it is the composition $\mathcal{X} \to \mathcal{X}_1 \to \mathcal{X}_2$. \end{proof} \begin{lem} \label{lemLim} Let $(f_i \colon \mathcal{X} \to \mathcal{X}_i)_{i \in I}$ be a cone over the diagram $(\mathcal{X}_i)_{i \in I}$ such that all maps $f_i$ are schematic. Then the limit $f \colon \mathcal{X} \to \varprojlim_{i \in I} \mathcal{X}_i$ is schematic as well. Assume moreover that all $f_i$ are separated and have a property $\mathbf{P}$ of morphisms of schemes that is stable under base change and composition and is smooth local on the target such that all closed immersions have $\mathbf{P}$. Then $f$ has $\mathbf{P}$. \end{lem} \begin{proof} Let $T$ be an $S$-scheme and let us fix a map $T \to \varprojlim_{i \in I} \mathcal{X}_i$. As different limits commute, we get that $$ T \times_{\varprojlim_{i \in I} \mathcal{X}_i} \mathcal{X} = \varprojlim_{i \in I} (T \times_{\mathcal{X}_i} \mathcal{X}),$$ which is representable by a scheme by assumption. For the second part, let us denote by $T_i = T \times_{\mathcal{X}_i} \mathcal{X}$. Then $T_i$ is a separated $T$-scheme by assumption. As $I$ is connected, we may take the limit on the right hand side in the category of $T$-schemes (as opposed to the category of $S$-schemes). We represent the limit as an equaliser between products \begin{center} \begin{tikzcd} \varprojlim_{i \in I} T_i = \mr{eq}\left( \prod_{i \in I} T_i \right. \arrow[r, shift left=.75ex] \arrow[r, shift right=.75ex, swap] & \left. \prod_{(\alpha \colon i \to j) \in \mr{Mor}(I)} T_j \right), \end{tikzcd} \end{center} where the products are taken in the category of $T$-schemes. As all $T_i$ are separated over $T$, the inclusion of $\varprojlim_{i \in I} T_i \hookrightarrow \prod_{i \in I} T_i$ is a closed immersion. Moreover, as all $T_i \to T$ have property $\mathbf{P}$, so does their product. Hence, $\varprojlim_{i \in I} T_i \to T$ has property $\mathbf{P}$. \end{proof} \begin{lem} \label{lemSubSchm} Let $f \colon \mathcal{X} \to \mathcal{X}'$ be a schematic map of algebraic stacks and let $\mathcal{Y} \subseteq \mathcal{X}$ and $\mathcal{Y}' \subseteq \mathcal{X}'$ be two closed substacks such that $f|_{\mathcal{Y}}$ factors through $\mathcal{Y}'$. Then $f|_{\mathcal{Y}} \colon \mathcal{Y} \to \mathcal{Y}'$ is schematic. \end{lem} \begin{proof} Let $S$ be a scheme and let us fix a map $y' \colon S \to \mathcal{Y}'$. As $f$ is schematic, the fibre product $T = S \times_{y', \mathcal{X}', f} \mathcal{Y}$ is representable by a scheme. Then $T = S \times_{\mathcal{Y}'} \mathcal{Y}$, as $\mathcal{Y}' \hookrightarrow \mathcal{X}'$ is a monomorphism. \end{proof} \section{Newton stratification} We recall the Newton stratification on stacks of global shtukas and define a Newton stratification on our integral models with deep level. We show that the expected closure relations of Newton strata are satisfied in the hyperspecial case.
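Before recalling the definitions, let us describe the basic example of $\GL_r$, which the reader may keep in mind throughout this section; this is a standard computation (compare \cite{Rapoport1996}) that we include only for illustration, and it is not used in the arguments below. \begin{ex} Let $G = \GL_r$ over the local field $k$ introduced below. By the Dieudonn\'e-Manin classification, an element $b \in B(\GL_r)$ is uniquely determined by its Newton point $\nu(b) = (\nu_1 \geq \ldots \geq \nu_r) \in \mathbb{Q}^r$, and its Kottwitz point is $\kappa(b) = \nu_1 + \ldots + \nu_r \in \mathbb{Z} \cong \pi_1(\GL_r)$. Interpreting $\nu(b)$ as the Newton polygon whose slopes are the $\nu_j$ in decreasing order, the partial order on $B(\GL_r)$ recalled below becomes: $b' \leq b$ if and only if the polygons of $b'$ and $b$ have the same endpoint and the polygon of $b'$ lies on or below that of $b$. For instance, for $r = 2$ the basic class $b'$ with $\nu(b') = (\tfrac{1}{2}, \tfrac{1}{2})$ satisfies $b' \leq b$ for the class $b$ with $\nu(b) = (1,0)$, so the stratum of $b'$ may appear in the closure of the stratum of $b$. \end{ex}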
Let $k \cong \mathbb{F} \dbr{t}$ be a local field in characteristic $p$ with ring of integers $\mathcal{O} \cong \mathbb{F} \dsq{t}$ and finite residue field $\mathbb{F}$. We denote by $\bar{k} = k^{\mr{sep}}$ a fixed separable closure and by $\breve{k} \cong \mathbb{F}^{\mr{alg}} \dbr{t}$ the completion of the maximal unramified extension of $k$. Let $G/k$ be a reductive group and let us fix a maximal torus $T \subseteq G$ defined over $k$. As $G_{\breve{k}} = G \times_k \breve{k}$ is quasi-split by a theorem of Steinberg, we can choose a Borel $B \subseteq G_{\breve{k}}$ containing $T_{\breve{k}}$. We denote by $X_\ast(T)$ the group of geometric cocharacters of $T$ and by $\pi_1(G)$ the algebraic fundamental group of $G$ given by the quotient of the cocharacter lattice by the coroot lattice. We denote by $B(G)$ the set of $\sigma$-conjugacy classes in $G(\breve{k}) = LG(\mathbb{F}^{\mr{alg}})$. By \cite{Kottwitz1985, Kottwitz1997, Rapoport1996}, the elements of $B(G)$ are classified by two invariants: the \emph{Kottwitz map} denoted by $$ \kappa \colon B(G) \to \pi_1(G)_{\Gal({\bar k}/k)} $$ and the \emph{Newton map} denoted by $$ \nu \colon B(G) \to (\Hom(\mathbb{D}_{\bar k}, G_{\bar k})/G({\bar k}))^{\Gal({\bar k}/k)},$$ where $\mathbb{D}$ denotes the pro-torus with character group $\mathbb{Q}$ and $G({\bar k})$ acts by conjugation. Note that we can identify $$ (\Hom(\mathbb{D}_{\bar k}, G_{\bar k})/G({\bar k}))^{\Gal({\bar k}/k)} = X_\ast(T)_{\mathbb{Q}}^{+, \Gal({\bar k}/k)} = X_\ast(T)_{\mathbb{Q}, \Gal({\bar k}/k)}^{+}$$ with the set of rational dominant (with respect to the choice of $B$) Galois-invariant cocharacters, and that $\kappa(b)$ and $\nu(b)$ have the same image in $\pi_1(G)_{\mathbb{Q}, {\Gal({\bar k}/k)}}$. The choice of Borel determines a set of simple positive roots and consequently defines the dominance order on $X_\ast(T)_{\mathbb{Q}}$ by $\mu_1 \leq \mu_2$ if $\mu_2 - \mu_1$ is a linear combination of the corresponding simple coroots with non-negative rational coefficients. Via $\kappa$ and $\nu$ the dominance order induces a partial order on $B(G)$ by $b_1 \leq b_2$ if and only if $\kappa(b_1) = \kappa(b_2)$ and $\nu(b_1) \leq \nu(b_2)$. Let $\mathcal{G} \to \Spec(\mathcal{O})$ be a smooth affine group scheme such that $\mathcal{G}_k = G$. Note that for an algebraically closed extension $\ell$ of $\mathbb{F}$ the set of $\sigma$-conjugacy classes in $LG(\ell)$ does not depend on the choice of $\ell$ by \cite[Lemma 1.3]{Rapoport1996}. It classifies quasi-isogeny classes of local $\mathcal{G}$-shtukas by associating to $(L^+\mathcal{G},b)$ the class $[b] \in B(G)$. For a local $\mathcal{G}$-shtuka $\underline{\mathcal{E}}$ over $S = \Spec(R)$ and a point $s \in S$ we denote by $[\underline{\mathcal{E}}_s] \in B(G)$ the corresponding element. This does not depend on the choice of an algebraic closure of the residue field at $s$. Let us now shift perspective back to the global setting and consider a smooth affine group scheme $\mathcal{G} \to X$ with generic fibre $\mathcal{G}_K = G$ a reductive group. Let us moreover fix a tuple $\underline{y} = (y_i)_{i \in I}$ of pairwise distinct closed points of $X$. Let us fix a bound $\mathcal{Z}$ and points $\underline{y}' = (y'_i)_{i \in I} \in X_\mathcal{Z}^I$ lying over $\underline{y}$.
We denote by $\Sht^{\mathcal{Z}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}} = \Sht^{\mathcal{Z}, \underline{y}'}_{\mathcal{G}, X^I} \times_{\Spf(\mathcal{O}_{\underline{y}'})} \Spec(\mathbb{F}_{\underline{y}'})$ the special fibre of the moduli space of shtukas at $\underline{y}'$. \begin{defn}[{\cite[Definition 4.12]{Breutmann2019}}] Let $\ell$ be an algebraically closed extension of $\mathbb{F}_{\underline{y}'}$. The global-to-local functor induces maps \begin{align*} \delta_{\mathcal{G}, y_i, \ell} \colon \Sht^{\mathcal{Z}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}}(\ell) & \to B(G_{y_i}) \\ {\underline{\mathcal{E}}} & \mapsto [\widehat{\underline{\mathcal{E}}_{y_i}}] \end{align*} for all $i \in I$ and $$ \delta_{\mathcal{G}, \underline{y}, \ell} = \prod_{i \in I} \delta_{\mathcal{G}, y_i, \ell}\colon \Sht^{\mathcal{Z}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}}(\ell) \to \prod_{i \in I} B(G_{y_i}).$$ Let $\underline{b} = (b_i)_{i \in I} \in \prod_{i \in I} B(G_{y_i})$. The locus in $\Sht^{\mathcal{Z}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}}$ where $\delta_{\mathcal{G}, \underline{y}}$ maps to $\underline{b}$ is locally closed by \cite[Theorem 7.11]{Hartl2011}, compare also \cite{Rapoport1996}. The reduced substack on this locally closed subset is denoted by $\Sht^{\mathcal{Z}, \underline{b}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}}$ and called the \emph{Newton stratum} associated to $\underline{b}$. \end{defn} The Newton map is compatible with changing the group scheme in the following sense. \begin{lem}[compare {\cite[Section 5.2]{Breutmann2019}}] \label{lemNewtonStratComp} Let $G/K$ be a reductive group and let $\mathcal{G}$ and $\mathcal{G}'$ be two smooth affine models of $G$ over $X$. Let $f \colon (\mathcal{G}, \mathcal{Z}) \to (\mathcal{G}', \mathcal{Z}')$ be a map of shtuka data such that $f \colon \mathcal{G} \to \mathcal{G}'$ is given by the identity on $G$ in the generic fibre. Recall that $f$ induces a map $$ f_\ast \colon \Sht^{\mathcal{Z}}_{\mathcal{G}, X^I, I_\bullet} \times_{X^I_\mathcal{Z}} X^I_{\mathcal{Z}. \mathcal{Z}'} \to \Sht^{\mathcal{Z}'}_{\mathcal{G}', X^I, I_\bullet} \times_{X^I_{\mathcal{Z}'}} X^I_{\mathcal{Z}. \mathcal{Z}'}. $$ Then $$ \delta_{\mathcal{G}', \underline{y}} \circ f_\ast = \delta_{\mathcal{G}, \underline{y}}.$$ \end{lem} \begin{proof} The proof of {\cite[Section 5.2]{Breutmann2019}} carries over to this situation. \end{proof} Let us now consider the Bruhat-Tits case, compare Assumption \ref{ass} (\ref{assBT}). Thus, let $\Omega = \mr{cl}(\Omega)$ be a subset of an apartment of the Bruhat-Tits building of $G_{K_{x_0}}$ for a fixed closed point $x_0$ of $X$. Let $\mathcal{G}_\Omega$ be the corresponding Bruhat-Tits group scheme. Let $\underline{\mu} = (\mu_i)_{i \in I}$ be an $I$-tuple of conjugacy classes of geometric cocharacters of $G$. Let moreover $\underline{y}' = (y'_i)_{i \in I}$ be a tuple of closed points of $X_{\underline{\mu}}$ lying over $\underline{y}$.
In order to define a Newton stratification on $\overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}'}}$, we note that by construction and by the previous lemma, we have that the map $$ \delta_{\mathcal{G}_\mathfrak{f}, \underline{y}} \circ \overline{\rho}_{\mathfrak{f}, \Omega} \colon \overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}'}} \to \Sht^{\leq \underline{\mu}}_{\mathcal{G}_\mathfrak{f}, X^I, \mathbb{F}_{\underline{y}'}} \to \prod_{i \in I} B(G_{y_i}) $$ does not depend on the choice of the facet $\mathfrak{f} \prec \Omega$. Hence, we obtain a well-defined map $$ \bar \delta_{\mathcal{G}_\Omega, \underline{y}} \colon \overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}'}} \to \prod_{i \in I} B(G_{y_i}).$$ Let $\underline{b} = (b_i)_{i \in I} \in \prod_{i \in I} B(G_{y_i})$. The locus in $\overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}'}} $ where $\bar \delta_{\mathcal{G}_\Omega, \underline{y}}$ maps to $\underline{b}$ is again locally closed by the result in the parahoric case together with Lemma \ref{lemNewtonStratComp}. \begin{defn} \label{defnNewtonDeep} Let $\underline{b} = (b_i)_{i \in I} \in \prod_{i \in I} B(G_{y_i})$. The \emph{Newton stratum} in $\overline{\Sht}^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}'}} $ associated to $\underline{b}$ is the reduced locally closed substack on the set of points where $\bar \delta_{\mathcal{G}_\Omega, \underline{y}}$ maps to $\underline{b}$. It is denoted by $\overline{\Sht}^{\leq \underline{\mu}, \underline{b}}_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}'}}$. \end{defn} We have the obvious analogue of Lemma \ref{lemNewtonStratComp} for deep level; in other words, the Newton stratification for deep level is still compatible with the level maps. \begin{cor} \label{corNewtonLevel} Let $\Omega' \prec \Omega$ be two connected bounded subsets of the Bruhat-Tits building. Then $$ \bar{\delta}_{\mathcal{G}_{\Omega'}, \underline{y}} \circ \bar{\rho}_{\Omega', \Omega} = \bar{\delta}_{\mathcal{G}_{\Omega}, \underline{y}}.$$ In particular, $\overline{\Sht}^{\leq \underline{\mu}, \underline{b}'}_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}'}} \cap \overline{\overline{\Sht}^{\leq \underline{\mu}, \underline{b}}_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}'}}} \neq \emptyset$ only if $\underline{b}' \leq \underline{b}$. \end{cor} \begin{proof} This follows from the construction and Lemma \ref{lemNewtonStratComp}. The second statement then follows directly from the parahoric case in \cite[Proposition 4.11, Section 5]{Breutmann2019}, compare also \cite[Theorem 7.11]{Hartl2011}. \end{proof} We conclude by showing the strong stratification property of the Newton stratification in the hyperspecial case. \begin{thm} \label{thmNewtonHyp} Let $\mathcal{G} \to X$ be a parahoric group scheme that is hyperspecial at $y_i$ for all $i \in I$. Let $\underline{\mu} = (\mu_i)_{i \in I}$ be an $I$-tuple of conjugacy classes of geometric cocharacters of $G$. Then the Newton stratification at $\underline{y}'$ satisfies the strong stratification property in the sense that $$\overline{\Sht^{\leq \underline{\mu}, \underline{b}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}}} = \bigcup_{\underline{b}' \leq \underline{b}} \Sht^{\leq \underline{\mu}, \underline{b'}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}}$$ for all $ \underline{b} \in \prod_{i \in I} B(G_{y_i})$.
\end{thm} \begin{proof} Let $\underline{b}, \underline{b}' \in \prod_{i \in I} B(G_{y_i})$ with $\underline{b}' \leq \underline{b}$. It suffices to show that every closed point $ \bar{s} = \underline{\mathcal{E}} \in \Sht^{\leq \underline{\mu}, \underline{b'}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}} (\mathbb{F}^{\mr{alg}}_{\underline{y}'})$ lies in the closure of $ \Sht^{\leq \underline{\mu}, \underline{b}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}}$. Let $R$ be the $\mathcal{O}_{\underline{y}'}$-algebra pro-representing the deformation functor of $\bar{s}$. Then $\bar{s}$ lies in the closure of $\Sht^{\leq \underline{\mu}, \underline{b}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}}$ if and only if the same is true in the Newton stratification on $\Spec R$. By the bounded Serre-Tate theorem (Corollary \ref{corBoundedSerreTate}), the universal deformation space decomposes as $\Spec R \cong \prod_{i \in I} \Spec R_i$, where $R_i$ is the universal deformation ring of the corresponding bounded local shtuka at $y_i$. Under this isomorphism we have $\Spec(R)_{\underline{b} } = \prod_{i \in I} \Spec(R_i)_{b_i}$, where we denote by $\Spec(R_i)_{{b}_{i}}$ the corresponding Newton stratum in $\Spec R_i$ for $i \in I$. On $\Spec R_i$ the closure properties hold by \cite[Theorem 2, Lemma 21 (2)]{Viehmann2011}, and thus they hold on $\Spec R$. This proves the assertion. \end{proof} \begin{remark} As in the case of Shimura varieties, one should only expect the Newton stratum $\Sht^{\leq \underline{\mu}, \underline{b}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}'}}$ to be non-empty if $b_i \in B(G_{K_{y_i}}, \mu_i)$ for all $i \in I$. The non-emptiness of these Newton strata seems to remain an open question, compare \cite[Section 5]{Breutmann2019}. \end{remark} \section{Introduction} Moduli spaces of (global) shtukas serve as function field analogues of Shimura varieties. They were first introduced in \cite{Drinfeld1987a} for $\GL_n$ and later generalised to arbitrary split reductive groups in \cite{Varshavsky2004} and further to flat affine group schemes of finite type in \cite{Rad2019a}. They have been used to great success in establishing a Langlands correspondence over function fields in \cite{Drinfeld1987} for $\GL_2$, \cite{Lafforgue2002} for $\GL_n$ and \cite{Lafforgue2018} for arbitrary reductive groups. Recently, a lot of progress has been made in studying the geometry of moduli spaces of shtukas with parahoric level, compare for example \cite{Rad2015}, \cite{Rad2017}, \cite{Breutmann2019}, \cite{Yun2019} and \cite{Zhu2014}. However, little is known for deeper level structures. A first result beyond the parahoric case is obtained in \cite{Bieker2022}, where Drinfeld $\Gamma_0(\mathfrak{p}^n)$-level structures for Drinfeld shtukas are defined and it is shown that their moduli spaces admit well-behaved (that is, finite flat and generically \'etale) level maps. The goal of this work is to construct integral models of moduli spaces of shtukas with deep Bruhat-Tits level structures for general reductive groups that generalise both the parahoric case and, in the Drinfeld case, the moduli space of shtukas with Drinfeld $\Gamma_0(\mathfrak{p}^n)$-level structures of \cite{Bieker2022}, and to study properties of these integral models. Let $X$ be a smooth, projective and geometrically connected curve over a finite field ${\F_q}$. Let $G$ be a (connected) reductive group over the function field $K$ of $X$ and let us fix a parahoric model $\mathcal{G} \to X$ of $G$.
In other words, $\mathcal{G}$ is a smooth affine group scheme over $X$ with generic fibre $G$ such that for all closed points $x$ of $X$ the pullback $\mathcal{G}_{\Spec(\mathcal{O}_x)}$ to the spectrum of the completed local ring $\mathcal{O}_x$ at $x$ is a parahoric group scheme in the sense of \cite{Bruhat1984}. Let us fix a closed point $x_0$ of $X$. Let $\Omega$ be a bounded subset of an apartment in the Bruhat-Tits building of $G_{K_{x_0}}$, where $K_{x_0}$ is the completion of $K$ at $x_0$. By Bruhat-Tits theory, we get a smooth affine $\mathcal{O}_{x_0}$-group scheme $\mathcal{G}_{\Omega}$ with connected fibres that we glue with $\mathcal{G}$ outside of $x_0$ to obtain a (global) Bruhat-Tits group scheme $\mathcal{G}_{\Omega} \to X$ which is smooth and affine with connected fibres by construction. Without changing $\mathcal{G}_\Omega$, we may assume that $\Omega$ is convex, closed and a union of facets. Let $I$ be a finite set and let $\underline{\mu} = (\mu_i)_{i \in I}$ be a tuple of conjugacy classes of geometric cocharacters of $G$. For simplicity, we assume in this introduction that $\underline{\mu}$ is defined over the function field $K$ of $X$ (in general it will only be defined over a finite separable extension of $K$). A global $\mathcal{G}_\Omega$-shtuka over a scheme $S$ is a $\mathcal{G}_\Omega$-bundle $\mathcal{E}$ on $X_S$ together with an isomorphism $\varphi \colon \sigma^\ast \mathcal{E}|_{X_S \setminus \Gamma_{\underline{x}}} \xrightarrow{\cong} \mathcal{E}|_{X_S \setminus \Gamma_{\underline{x}}}$ away from the graph $\Gamma_{\underline{x}}$ of an $I$-tuple $\underline{x} \in X^I(S)$ of points of $X$. We denote by $\Sht^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I}$ the moduli space of global $\mathcal{G}_\Omega$-shtukas bounded by $\underline{\mu}$ (compare Definition \ref{defnGlobBounds} and Construction \ref{constBound} for the precise definition of boundedness conditions). While for a subset $\Omega'$ of $\Omega$ there is still a natural map $\Sht^{\leq \underline{\mu}}_{\mathcal{G}_\Omega, X^I} \to \Sht^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega'}, X^I}$ by \cite[Theorem 3.20]{Breutmann2019} (compare also Theorem \ref{thmLvlMapGen}), already in the Drinfeld case $G = \GL_2$, the level map $\Sht_{\GL_{2, [0,n]}, X^2}^{\leq ((0,-1), (1,0))} \to \Sht_{\GL_2, X^2}^{\leq ((0,-1), (1,0))}$ is neither proper nor surjective for $n \geq 2$, compare \cite[Remark 2.20]{Bieker2022}. We propose the following construction to relatively compactify $\Sht^{\leq \underline{\mu}}_{\mathcal{G}_{\Omega}, X^I}$. \begin{defn}[compare Definition \ref{defnIntMod}] \label{defnIntroIntMod} In the situation above, that is, for a reductive group $G$ over $K$, and a Bruhat-Tits group scheme $\mathcal{G}_\Omega \to X$ for a subset $\Omega$ (assumed to be convex, closed and a union of facets) of the Bruhat-Tits building for $G_{K_{x_0}}$ at the fixed point $x_0$ of $X$ as above, the \emph{integral model of the moduli space of shtukas with $\mathcal{G}_\Omega$-level structure} $\overline{\Sht}_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}}$ is defined to be the schematic image in the sense of \cite{Emerton2021} of the map $$\Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Sht_{\mathcal{G}_\mathfrak{f}, X^I}^{\leq \underline{\mu}},$$ where the limit is taken over all facets $\mathfrak{f}$ contained in $\Omega$. 
\end{defn} Clearly, in the parahoric case (that is, when $\Omega$ is a facet) we have $$\Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}} = \overline \Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}} = \varprojlim_{\mathfrak{f} \prec \Omega} \Sht_{\mathcal{G}_\mathfrak{f}, X^I}^{\leq \underline{\mu}},$$ so the construction above generalises the parahoric case. Moreover, by \cite[Theorem 6.7]{Bieker2022}, this general notion of integral models in the Drinfeld case recovers for example the moduli space of shtukas with Drinfeld $\Gamma_0(\mathfrak{p}^n)$-level structure at $x_0$. The main result of this work is to show that this construction of integral models admits proper, surjective and generically finite \'etale level maps: \begin{thm}[compare Proposition \ref{propImmersion} and Theorem \ref{thmLvlMapGeneral}] \label{thmIntroSht} In the situation of Definition \ref{defnIntroIntMod}, the map $$\Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Sht_{\mathcal{G}_\mathfrak{f}, X^I}^{\leq \underline{\mu}} $$ is schematic and a quasi-compact locally closed immersion. It factors into an open immersion $\Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}} \to \overline\Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}}$ followed by the closed immersion $\overline\Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Sht_{\mathcal{G}_\mathfrak{f}, X^I}^{\leq \underline{\mu}}$. The inclusion restricts to an isomorphism $$ \Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}}|_{(X \setminus \{x_0\})^I} \xrightarrow{\cong} \overline{\Sht}_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}}|_{(X \setminus \{x_0\})^I}$$ away from $x_0$. Moreover, for a subset $\Omega' \prec \Omega$, there is a natural level map $$\bar{\rho}_{\Omega', \Omega} \colon \overline \Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}} \to \overline \Sht_{\mathcal{G}_{\Omega'}, X^I}^{\leq \underline{\mu}} $$ that is schematic, proper, surjective and finite \'etale over $(X \setminus \{x_0\})^I$. \end{thm} Note that in the Drinfeld case, this is \cite[Theorem 6.7, Proposition 5.8]{Bieker2022}. In the parahoric case, the level maps on moduli spaces of shtukas are also studied in \cite[Theorem 3.20]{Breutmann2019}. However, the notion of bounds used there does not quite capture the situation we are interested in here. We discuss the notion of global bounds for global shtukas following \cite{Rad2017} and give a definition of local bounds that is compatible with the global notion. We generalise the result of \cite[Theorem 3.20]{Breutmann2019} to include bounds in this sense (compare Theorem \ref{thmLvlMapGen}). Using the assertion in the parahoric case, we are able to deduce the result also for deep level structures. The first part of Theorem \ref{thmIntroSht} is based on a corresponding result on the moduli space of $\mathcal{G}_\Omega$-bundles. \begin{thm}[compare Theorem \ref{thmBunGImm}] \label{thmIntroBunBT} In the situation of Definition \ref{defnIntroIntMod}, the natural map $$ \Bun_{\mathcal{G}_\Omega} \to \varprojlim_{\mathfrak{f} \prec \Omega} \Bun_{\mathcal{G}_\mathfrak{f}}$$ is a quasi-compact open immersion.
\end{thm} As a first step in the proof of this theorem, we show in the local case (and hence also for the corresponding global Bruhat-Tits group schemes) that the not necessarily parahoric Bruhat-Tits group scheme $\mathcal{G}_\Omega$ is the limit of all its associated parahoric group schemes $$ \mathcal{G}_\Omega \xrightarrow{\cong} \varprojlim_{\mathfrak{f} \prec \Omega} \mathcal{G}_\mathfrak{f},$$ compare Theorem \ref{thmBTGS}. Note that given a compatible system of $\mathcal{G}_\mathfrak{f}$-torsors for all facets $\mathfrak{f} \prec \Omega$, it is in general not true that their limit is a torsor for $\mathcal{G}_\Omega$, as it might be impossible to construct a compatible system of sections. By controlling the deformation theory of torsors for the $\mathcal{G}_\mathfrak{f}$, we are able to show that the locus where the limit of a compatible system of $\mathcal{G}_\mathfrak{f}$-bundles on $X$ is already a $\mathcal{G}_\Omega$-bundle on $X$ is open. In addition to the existence of well-behaved level maps, we show that the Newton stratification on the special fibre of the moduli space of shtukas in the parahoric case induces a well-defined Newton stratification on the special fibre in the case of deeper level. For a reductive group $H$ over a local field $k$ we denote by $B(H)$ the set of $\sigma$-conjugacy classes in $H(\breve{k})$, where $\breve{k}$ is the completion of the maximal unramified extension of $k$. Then $B(H)$ classifies quasi-isogeny classes of local shtukas for (an integral model of) $H$. We fix a tuple of pairwise distinct closed points $\underline{y} = (y_i)_{i \in I}$ in $X$ and denote by $\overline \Sht_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}}}^{\leq \underline{\mu}} = \overline \Sht_{\mathcal{G}_\Omega, X^I}^{\leq \underline{\mu}} \times_{X^I} \mathbb{F}_{\underline{y}}$ the special fibre over $\underline{y}$, where $\mathbb{F}_{\underline{y}}$ is the compositum of the residue fields of the points $y_i$ of $X$. Combining our compactification with the results of \cite[Theorem 7.11]{Hartl2011} and \cite[Section 5]{Breutmann2019} in the parahoric case, we get the following result on the Newton stratification for deep level. \begin{thm}[compare Definition \ref{defnNewtonDeep} and Corollary \ref{corNewtonLevel}] Let $\ell$ be an algebraically closed extension of $\mathbb{F}_{\underline{y}}$. There is a well-defined map $$ \bar\delta_{\mathcal{G}_\Omega} \colon \overline \Sht_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}}}^{\leq \underline{\mu}}(\ell) \to \prod_{i \in I} B(G_{K_{y_i}}) $$ that is compatible with the level maps in the sense that for $\Omega' \prec \Omega$ we have $$ \bar\delta_{\mathcal{G}_\Omega} = \bar\delta_{\mathcal{G}_{\Omega'}} \circ \bar \rho_{\Omega', \Omega}.$$ Moreover, for $\underline{b} = (b_i)_{i \in I} \in \prod_{i \in I} B(G_{K_{y_i}})$ the preimage of $\underline{b}$ under $\bar\delta_{\mathcal{G}_\Omega}$ is the set of $\ell$-valued points of a locally closed substack $\overline\Sht_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}}}^{\leq \underline{\mu}, \underline{b}}$ of $\overline\Sht_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}}}^{\leq \underline{\mu}}$ called the \emph{Newton stratum} of $\overline\Sht_{\mathcal{G}_\Omega, X^I, \mathbb{F}_{\underline{y}}}^{\leq \underline{\mu}}$ for $\underline{b}$. \end{thm} In the parahoric case, the map $\bar{\delta}$ is given by associating to a point in the special fibre over $\underline{y}$ the quasi-isogeny classes of its local shtukas at the points $y_i$.
We use the compatibility of the Newton stratification with the level maps in the parahoric case to extend this result to the case of deep level. Moreover, we show that in the hyperspecial case the Newton stratification satisfies the strong stratification property (as for Shimura varieties). Recall that there is a natural order on $B(H)$ induced by the dominance order on cocharacters. It is well-known in the parahoric case that the closure of a Newton stratum is contained in a union of Newton strata, $$\overline{\Sht^{\leq \underline{\mu}, \underline{b}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}}}} \subseteq \bigcup_{\underline{b'} \leq \underline{b}} \Sht^{\leq \underline{\mu}, \underline{b'}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}}},$$ and this also generalises to deeper level. In general the inclusion is strict; we say that the Newton stratification satisfies the strong stratification property when equality holds. For local shtukas for split reductive groups, the strong stratification property is due to \cite{Viehmann2011}. We use the Serre-Tate theorem for shtukas of \cite{Rad2015} to deduce the corresponding result in the global case. \begin{thm}[compare Theorem \ref{thmNewtonHyp}] Let $\mathcal{G} \to X$ be a parahoric group scheme that is hyperspecial at $y_i$ for all $i \in I$. Then the Newton stratification at $\underline{y}$ satisfies the strong stratification property in the sense that $$\overline{\Sht^{\leq \underline{\mu}, \underline{b}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}}}} = \bigcup_{\underline{b'} \leq \underline{b}} \Sht^{\leq \underline{\mu}, \underline{b'}}_{\mathcal{G}, X^I, \mathbb{F}_{\underline{y}}}$$ for all $ \underline{b} \in \prod_{i \in I} B(G_{K_{y_i}})$. \end{thm} \subsection*{Organisation} This paper is organised as follows. In Section 2, we study (torsors under) Bruhat-Tits group schemes and show Theorem \ref{thmIntroBunBT}. In Section 3, we introduce moduli spaces of shtukas and discuss how to define (global) boundedness conditions. In particular, we give a new definition of local bounds that is compatible in a natural way with usual notions of global bounds. In Section 4, we first prove a variant of the functoriality result for moduli spaces of shtukas of \cite[Theorem 3.20]{Breutmann2019}, showing in particular that the level maps in the parahoric case are well-behaved in our setting. We use this result to define our integral models with deep level structure and show these models admit well-behaved level maps as well, proving Theorem \ref{thmIntroSht}. In Section 5, we construct a Newton stratification on the integral models with deep level. \subsection*{Acknowledgements.} First of all I thank my advisor Timo Richarz for introducing me to this topic, his steady encouragement and his interest in my work. I thank Gebhard B\"ockle, Paul Hamacher, Jo\~{a}o Louren\c{c}o, Eva Viehmann and Torsten Wedhorn for helpful conversations surrounding this work. I thank Catrin Mair and Thibaud van den Hove for their comments on a preliminary version of this paper. I thank Urs Hartl for sharing the revised version of \cite{Breutmann2019} and Tasho Kaletha for sharing a preliminary version of \cite{Kaletha}. This work received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 326 \textit{Geometry and Arithmetic of Uniformized Structures}, project number 444845124. \subsection*{Notation} We fix the following notation.
Let ${\F_q}$ be a finite field with $q$ elements, and let $p$ be its characteristic. All schemes will be ${\F_q}$-schemes unless otherwise specified. Let $X$ be a smooth projective and geometrically connected curve over ${\F_q}$ with function field $K$. For a closed point $x$ of $X$ we denote by $\mathcal{O}_{X,x}$ the local ring at $x$ and by $\mathcal{O}_x$ its completion, by $\mathfrak{m}_{x} \subseteq \mathcal{O}_{x}$ the maximal ideal with uniformiser $\varpi_{x}$ and by $\mathbb{F}_{x}$ the residue field. Moreover, we denote by $K_x$ the completion of $K$ at $x$. We denote by $\sigma$ the (absolute) $q$-Frobenius endomorphism $\Frob_S$ of some ${\F_q}$-scheme $S$, and also the map $\sigma = \mathrm{id}_X \times \Frob_S \colon X_S \to X_S$. It is always clear from context which map $\sigma$ is meant. \input{BTschemes} \input{Bounds} \input{LevelMaps} \input{NewtonStrat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In the last three years, there has been a significant development in understanding four-dimensional supersymmetric gauge theories \cite{Se},\cite{SeWi},\cite{Se2}. In particular new light is shed on the mechanism for confinement in view of the condensation of massless solitons in the vacua of the Coulomb phase in $N=2$ supersymmetric gauge theories. It is important that these massless solitons appear at the singularities of the moduli space of vacua, and that the superpotential which breaks $N=2$ supersymmetry to $N=1$ causes their condensation. Conversely, from the analysis of the confining phase of $N=1$ theories with a suitable superpotential, we can identify the singular points in the Coulomb phase of $N=2$ theories. Thus it becomes possible to construct the $N=2$ Seiberg-Witten curves since the $N=2$ curves are determined almost completely by the singularity conditions. This idea has been successfully applied to $N=2$ supersymmetric Yang-Mills theories with the classical groups as well as $G_2$, and $N=2$ supersymmetric $SU(N_c)$ QCD \cite{InSe}-\cite{Ki}. In this paper, we extend our analysis to describe the Coulomb phase of $N=1$ supersymmetric gauge theories with an adjoint matter field $\Phi$ and fundamental flavors $Q, \tilde Q$. A tree-level superpotential consists of the Yukawa-like term $\tilde{Q} \Phi^{l} Q$ in addition to the Casimir terms built out of $\Phi$, and we shall consider arbitrary classical gauge groups. In the appropriate limit the theory is reduced to $N=2$ supersymmetric QCD. We derive the low-energy effective superpotential for the phase with a single confined photon and obtain the hyperelliptic curves which describe the Coulomb phase of the theory. In the $N=2$ limit we will show that these curves agree with the results of \cite{HO}-\cite{ArSh}, and hence our results are also viewed as a non-trivial test of the $N=2$ curves proposed previously. We start by discussing $N=1$ $SU(N_c)$ supersymmetric gauge theory with an adjoint matter field $\Phi$, $N_f$ flavors of fundamentals $Q$ and anti-fundamentals $\tilde{Q}$ to explain our method in this paper. We take a tree-level superpotential \begin{equation} W=\sum_{n=1}^{N_c} g_n u_n +\sum_{l=0}^{r} {\rm Tr}_{N_f} \, \lambda_l \, \tilde{Q} \Phi^l Q, \hskip10mm u_n=\frac{1}{n} {\rm Tr}\, \Phi^n , \label{r1} \end{equation} where ${\rm Tr}_{N_f} \, \lambda_l \, \tilde{Q} \Phi^l Q = \sum_{i,j=1}^{N_f} (\lambda_l)^i_j \tilde{Q}_i \Phi^l Q^j$ and $r \leq N_c-1$. If we set $(\lambda_0)^i_j=m^i_j$ with $[m, m^{\dagger}]=0$, $(\lambda_1)^i_j=\D^i_j , \, (\lambda_l)^i_j=0 $ for $l>1$ and all $g_i=0$, eq.(\ref{r1}) recovers the superpotential in $N=2$ $SU(N_c)$ supersymmetric QCD with quark mass $m$. The second term in (\ref{r1}) was considered in a recent work \cite{Ka}. Let us focus on the classical vacua with $ Q=\tilde{Q}=0$ and an unbroken $SU(2) \times U(1)^{N_c-2}$ symmetry which means $\Phi ={\rm diag} (a_1, a_1, a_2, a_3, \cdots , a_{N_c-1})$ up to gauge transformations. (Note that the superpotential (\ref{r1}) has no classical vacua with unbroken $U(1)^{N_c-1}$.) In this vacuum, we will evaluate the low-energy effective superpotential semiclassically. Our procedure is slightly different from that adopted in \cite{ElFoGiInRa} in its treatment of $Q$ and $\tilde{Q}$. We investigate the tree-level parameter region where the Higgs mechanism occurs at very high energies and the adjoint matter field $\Phi$ is quite heavy.
Then the massive particles are integrated out and the scale matching relation becomes \begin{equation} {\Lambda_L}^{6-N_f} = g_{N_c}^2 \Lambda^{2 N_c-N_f}, \label{sumatch} \end{equation} where $\Lambda$ is the dynamical scale of high-energy $SU(N_c)$ theory with $N_f$ flavors and $\Lambda_L$ is the scale of low-energy $SU(2)$ theory with $N_f$ flavors. Eq.(\ref{sumatch}) is derived by following the $SU(N_c)$ Yang-Mills case \cite{ElFoGiInRa} while taking into account the existence of fundamental flavors at low energies \cite{KSS}. The semiclassical superpotential in low-energy $SU(2)$ theory with $N_f$ flavors reads \begin{equation} W=\sum_{n=1}^{N_c} g_n u_n^{cl} + \sum_{l=0}^{r} a_1^l \, {\rm Tr}_{N_f} \, \lambda_l \, \tilde{Q} Q \label{w1} \end{equation} which is obtained by substituting the classical values of $\Phi$ and integrating out all the fields except for those coupled to the $SU(2)$ gauge boson. Here, the constraint ${\rm Tr} \Phi^{cl}=a_1+\sum_{i=1}^{N_c-1} a_i=0$ and the classical equation of motion $\sum_{i=1}^{N_c-1} a_i =-g_{N_c-1}/g_{N_c}$ yield \cite{Ki} \begin{equation} a_1= \frac{g_{N_c-1}}{g_{N_c}}. \end{equation} Below the flavor masses, which can be read off from the superpotential (\ref{w1}), the low-energy theory becomes $N=1$ $SU(2)$ Yang-Mills theory with the superpotential \begin{equation} W=\sum_{n=1}^{N_c} g_n u_n^{cl}. \label{w2} \end{equation} This low-energy theory has the dynamical scale $\Lambda_{YM}$ which is related to $\Lambda$ through \begin{equation} {\Lambda_{YM}}^{6} = {\rm det} \left ( \sum_{l=0}^{r} \lambda_l a_1^l \right ) \, g_{N_c}^2 \Lambda^{2 N_c-N_f}. \label{sc1} \end{equation} As in the previous literature \cite{ElFoGiInRa},\cite{TeYa} we simply assume here that the superpotential (\ref{w2}) and the scale matching relation (\ref{sc1}) are exact for any values of the tree-level parameters. Now we add to (\ref{w2}) a dynamically generated piece which arises from gaugino condensation in $SU(2)$ Yang-Mills theory. The resulting effective superpotential $W_L$ where all the matter fields have been integrated out is thus given by \begin{eqnarray} W_L & = & \sum_{n=1}^{N_c} g_n u_n^{cl} \pm 2 \Lambda_{YM}^3 \nonumber \\ & = & \sum_{n=1}^{N_c} g_n u_n^{cl} \pm 2 g_{N_c} \sqrt{A(a_1)} \label{w3} \end{eqnarray} with $A$ being defined as $A(x) \equiv \Lambda^{2 N_c-N_f} \, {\rm det} \left ( \sum_{l=0}^{r} \lambda_l x^l \right ) $. From $\langle u_n \rangle = \partial W_L/\partial g_n$ we find \begin{equation} \langle u_n \rangle = u_n^{cl} (g) \pm \D_{n,N_c-1} \frac{A'(a_1)}{\sqrt{A(a_1)}} \pm \D_{n,N_c} \frac{1}{\sqrt{A(a_1)}} \left ( 2 A(a_1) -a_1 A'(a_1) \right). \label{v1} \end{equation} If we define a hyperelliptic curve \begin{equation} y^2= P(x)^2 -4 A(x), \label{c1} \end{equation} where $P(x)=\left \langle {\rm det} \left ( x- \Phi \right ) \right \rangle$ is the characteristic polynomial of $\Phi$, the curve is quadratically degenerate at the vacuum expectation values (\ref{v1}). This can be seen by plugging (\ref{v1}) into $P(x)$ \begin{equation} P(x)=P_{cl} (x) \mp x \frac{A'(a_1)}{\sqrt{A(a_1)}} \mp \frac{1}{\sqrt{A(a_1)}} \left ( 2 A(a_1) -a_1 A'(a_1) \right), \end{equation} where $P_{cl}(x)={\rm det} \left ( x- \Phi_{cl} \right )$, and hence \begin{equation} P(a_1)= \mp 2 {\sqrt{A(a_1)}} \, , \hskip10mm P'(a_1)= \mp \frac{A'(a_1)}{\sqrt{A(a_1)}}. \end{equation} Then the degeneracy of the curve is confirmed by checking $ y^2|_{x=a_1}=0$ and $ \frac{\partial}{\partial x} y^2 |_{x=a_1} = 0$.
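Explicitly, using these two relations the double vanishing is immediate: \begin{eqnarray*} y^2\big|_{x=a_1} &=& P(a_1)^2-4A(a_1) = 4A(a_1)-4A(a_1)=0, \nonumber \\ \frac{\partial}{\partial x} y^2 \Big|_{x=a_1} &=& 2P(a_1)P'(a_1)-4A'(a_1) = 2\left(\mp2\sqrt{A(a_1)}\right)\left(\mp\frac{A'(a_1)}{\sqrt{A(a_1)}}\right)-4A'(a_1)=0, \end{eqnarray*} so that $x=a_1$ is (at least) a double root of $y^2$, as required for the phase with a confined photon.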
The transition points from the confining to the Coulomb phase are reached by taking the limit $g_{i} \rightarrow 0$ while keeping the ratio $g_i/g_{j}$ fixed \cite{ElFoGiInRa}. These points correspond to the singularities in the moduli space. Therefore the curve (\ref{c1}) is regarded as the curve relevant to describe the Coulomb phase of the theory with the tree-level superpotential $W=\sum_{l=0}^{r} {\rm Tr}_{N_f} \, \lambda_l \, \tilde{Q} \Phi^l Q$. Indeed, the curve (\ref{c1}) agrees with the one obtained in \cite{Ka}. In particular, in the parameter region with $N=2$ supersymmetry we find agreement with the curves for $N=2$ $SU(N_c)$ QCD with $N_f<2 N_c-1$ \cite{HO},\cite{ArPlSh},\cite{ArSh}.\footnote[2]{For $N_f=2 N_c-1$ an instanton may generate a mass term and shift the bare quark mass in $A(x)$. If we include this effect the curve (\ref{c1}) completely agrees with the result in \cite{ArSh}.} The procedure discussed above can also be applied to the other classical gauge groups. Let us consider $N=1$ $SO(2 N_c)$ supersymmetric gauge theory with an adjoint matter field $\Phi$ which is an antisymmetric $2 N_c \times 2 N_c$ tensor, and $2 N_f$ flavors of fundamentals $Q$. We assume a tree-level superpotential \begin{equation} W=\sum_{n=1}^{N_c-2} g_{2 n} u_{2 n} + g_{2 (N_c-1)} s_{N_c-1}+\lambda v +{1\over 2}\sum_{l=0}^{r} {\rm Tr}_{2 N_f} \, \lambda_l \, Q \Phi^l Q, \label{w4} \end{equation} where $r \leq 2 N_c-1$, \begin{equation} u_{2 n} =\frac{1}{2 n} {\rm Tr}\, \Phi^{2 n}, \hskip10mm v ={\rm Pf}\, \Phi=\frac{1}{2^{N_c} N_c !} \epsilon_{i_1 i_2 j_1 j_2 \cdots} \Phi^{i_1 i_2} \Phi^{j_1 j_2} \cdots \end{equation} and \begin{equation} ks_k+\sum_{i=1}^k i s_{k-i} u_{2i}=0, \hskip10mm s_0=-1, \hskip10mm k=1,2,\cdots . \end{equation} Here, ${}^t \lambda_l=(-1)^l \lambda_l$ and the $N=2$ supersymmetry is present when we set $(\lambda_0)^i_j=m^i_j$ where $[m, m^{\dagger}]=0$, $(\lambda_1)^i_j={\rm diag} (i \sigma_2,i \sigma_2, \cdots )$ with $\sigma_{2} = \pmatrix{0 & -i \cr i & 0}, \, (\lambda_l)^i_j=0 $ for $l>1$ and all $g_i=0$. As in the case of $SU(N_c)$, we concentrate on the unbroken $SU(2) \times U(1)^{N_c-1}$ vacua with $\Phi ={\rm diag} (a_1 \sigma_2, a_1 \sigma_2 , a_2 \sigma_2, a_3 \sigma_2, \cdots , a_{N_c-1} \sigma_2)$ and $Q=0$. By virtue of using $s_{N_c-1}$ instead of $u_{2 (N_c-1)}$ in (\ref{w4}) the degenerate eigenvalue of $\Phi_{cl}$ is expressed in terms of the $g_i$ \begin{equation} a_1^2=\frac{g_{2(N_c-2)}}{g_{2(N_c-1)}} \end{equation} as found for the $SO(2 N_c+1)$ case \cite{TeYa}. Note that the superpotential (\ref{w4}) has no classical vacua with unbroken $SO(4) \times U(1)^{N_c-1}$ when $g_{2 (N_c-2)} \neq 0$. We also note that the fundamental representation of $SO(2 N_c)$ is decomposed into two fundamental representations of $SU(2)$ under the above embedding. It is then observed that the scale matching relation between the high-energy $SO(2 N_c)$ scale $\Lambda$ and the scale $\Lambda_L$ of low-energy $SU(2)$ theory with $2 N_f$ fundamental flavors is given by \begin{equation} {\Lambda_L}^{6-2 N_f} = g_{2(N_c-1)}^2 \Lambda^{4( N_c-1) -2 N_f}. \end{equation} The superpotential for low-energy $N=1$ $SU(2)$ QCD with $2 N_f$ flavors can be obtained in a similar way to the $SU(N_c)$ case, once the duplication of the fundamental flavors is taken into account.
After some manipulations it turns out that the superpotential for low-energy $N=1$ $SU(2)$ QCD with $2 N_f$ flavors is written as \begin{equation} W=\sum_{n=1}^{N_c-2} g_{2 n} u_{2 n}^{cl} + g_{2 (N_c-1)} s_{N_c-1}^{cl}+\lambda v^{cl} +\sum_{l=0}^{r} a_1^l {\rm Tr}_{2 N_f} \, \lambda_l \, \widetilde{{\bf Q}} {\bf Q}, \label{w5} \end{equation} where \begin{equation} {\bf Q}^j = {1\over \sqrt{2}} \pmatrix{Q^j_1-iQ^j_2 \cr Q^j_3-iQ^j_4}, \hskip10mm \widetilde{\bf Q}_j= {1\over \sqrt{2}}\pmatrix{Q^j_1+iQ^j_2 \cr Q^j_3+iQ^j_4}. \end{equation} Upon integrating out the $SU(2)$ flavors we have the scale matching between $\Lambda$ and $\Lambda_{YM}$ for $N=1$ $SU(2)$ Yang-Mills theory \begin{equation} {\Lambda_{YM}}^{6} = {\rm det} \left ( \sum_{l=0}^{r} \lambda_l a_1^l \right ) \, g_{2(N_c-1)}^2 \Lambda^{4( N_c-1)-2 N_f}, \label{sc2} \end{equation} and we get the effective superpotential \begin{eqnarray} W_L & = & \sum_{n=1}^{N_c-2} g_{2 n} u_{2 n}^{cl} + g_{2 (N_c-1)} s_{N_c-1}^{cl}+\lambda v^{cl} \pm 2 \Lambda_{YM}^3 \nonumber \\ & = & \sum_{n=1}^{N_c-2} g_{2 n} u_{2 n}^{cl} + g_{2 (N_c-1)} s_{N_c-1}^{cl}+\lambda v^{cl} \pm 2 g_{2(N_c-1)} \sqrt{A(a_1)}, \label{wso} \end{eqnarray} where $A$ is defined by $A(x) \equiv \Lambda^{4( N_c-1)-2 N_f} \, {\rm det} \left ( \sum_{l=0}^{r} \lambda_l x^l \right ) =A(-x)$. The vacuum expectation values of gauge invariants are obtained from $W_L$ as \begin{eqnarray} \langle s_{ n} \rangle & =& s_{ n}^{cl} (g) \pm \D_{n,N_c-2} \frac{A'(a_1)}{\sqrt{A(a_1)}} \pm \D_{n,N_c-1} \frac{1}{\sqrt{A(a_1)}} \left ( 2 A(a_1) -a_1^2 A'(a_1) \right), \nonumber \\ \langle v \rangle & =& v^{cl} (g), \label{vso} \end{eqnarray} where $A'(x)=\frac{\partial}{\partial x^2} A(x)$. It is now easy to see that a curve \begin{equation} y^2=P(x)^2-4 x^4 A(x) \end{equation} with $P(x)=\left \langle {\rm det} \left ( x-\Phi \right ) \right \rangle$ is degenerate at these values of $\langle s_n \rangle, \, \langle v \rangle$, and reproduces the known $N=2$ curve \cite{H}, \cite{ArSh}. The only difference between $SO(2N_c)$ and $SO(2 N_c+1)$ is that the gauge invariant ${\rm Pf}\, \Phi$ vanishes for $SO(2 N_c+1)$. Thus, taking a tree-level superpotential \begin{equation} W=\sum_{n=1}^{N_c-1} g_{2 n} u_{2 n} + g_{2 N_c} s_{N_c} +{1\over 2} \sum_{l=0}^{r} {\rm Tr}_{2 N_f} \, \lambda_l \, Q \Phi^l Q , \hskip10mm r \leq 2N_c , \label{wsoo} \end{equation} we focus on the unbroken $SU(2) \times U(1)^{N_c-1}$ vacuum which has the classical expectation values $\Phi ={\rm diag} (a_1 \sigma_2, a_1 \sigma_2 , a_2 \sigma_2, \cdots , a_{N_c-1} \sigma_2,0)$ and $Q=0$ \cite{TeYa}. As in the $SO(2 N_c)$ case we make use of the scale matching relation between the high-energy scale $\Lambda$ and the low-energy $N=1$ $SU(2)$ Yang-Mills scale $\Lambda_{YM}$ \begin{equation} {\Lambda_{YM}}^{6} = {\rm det} \left ( \sum_{l=0}^{r} \lambda_l a_1^l \right ) \, g_{2 N_c} g_{2(N_c-1)} \Lambda^{2(2 N_c-1- N_f)}. \label{sc3} \end{equation} As a result we find the effective superpotential \begin{eqnarray} W_L & = & \sum_{n=1}^{N_c-1} g_{2n} u_{2n}^{cl} + g_{2 N_c} s_{N_c}^{cl} \pm 2 \Lambda_{YM}^3 \nonumber \\ & = & \sum_{n=1}^{N_c-1} g_{2n} u_{2n}^{cl} + g_{2 N_c} s_{N_c}^{cl} \pm 2 \sqrt{g_{2 N_c} g_{2(N_c-1)} A(a_1)}, \label{wsoo2} \end{eqnarray} where $A$ is defined as $A(x) \equiv \Lambda^{2( 2 N_c-1- N_f)}\, {\rm det} \left ( \sum_{l=0}^{r} \lambda_l x^l \right )$.
Noting the relation $a_1^2=g_{2(N_c-1)}/g_{2 N_c}$ \cite{TeYa} we calculate the vacuum expectation values of gauge invariants \begin{eqnarray} \langle s_{ n} \rangle = s_{ n}^{cl} (g) & \pm & \D_{n,N_c-1} \frac{1}{\sqrt{A(a_1)}} \left ( \frac{A(a_1)}{a_1}+a_1 A'(a_1) \right) \nonumber \\ & \pm & \D_{n,N_c} \frac{1}{\sqrt{A(a_1)}} \left ( a_1 A(a_1) -a_1^3 A'(a_1) \right). \label{vsoo} \end{eqnarray} For these $\langle s_n \rangle$ we observe the quadratic degeneracy of the curve \begin{equation} y^2=\left( \frac{1}{x} P(x) \right )^2-4 x^2 A(x) , \end{equation} where $P(x)=\left \langle {\rm det} \left ( x-\Phi \right ) \right \rangle$. In the $N=2$ limit we see agreement with the curve constructed in \cite{H},\cite{ArSh}. The confining phase superpotential for the $SO(5)$ gauge group was also obtained in \cite{LaPiGi}. Let us now turn to $Sp(2 N_c)$ gauge theory. We take for matter content an adjoint field $\Phi$ and $2 N_f$ fundamental fields $Q$. The $2N_c \times 2N_c$ tensor $\Phi$ is subject to ${}^t \Phi = J \Phi J $ with $J={\rm diag}(i\sigma_2, \cdots, i\sigma_2)$. Our tree-level superpotential reads \begin{equation} W=\sum_{n=1}^{N_c-1} g_{2 n} u_{2 n} + g_{2 N_c} s_{N_c} +{1\over 2} \sum_{l=0}^{r} {\rm Tr}_{2 N_f} \, \lambda_l \, Q J \Phi^l Q, \label{wsp} \end{equation} where ${}^t \lambda_l=(-1)^{l+1} \lambda_l$ and $r \leq 2N_c-1$. The classical vacuum with the unbroken $SU(2) \times U(1)^{N_c-1}$ gauge group corresponds to \begin{equation} J \Phi= {\rm diag}(\sigma_{1}a_1,\; \sigma_{1}a_1,\; \sigma_{1}a_2,\cdots,\sigma_{1}a_{N_c-1}) , \hskip10mm \sigma_{1} =\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right ), \end{equation} where $a_1^2=g_{2(N_c-1)}/g_{2N_c}$. The scale $\Lambda_L$ for low-energy $SU(2)$ theory with $2 N_f$ flavors is expressed as \cite{TeYa} \begin{equation} {\Lambda_L}^{6-2 N_f} = \left( \frac{g_{2 N_c}^2}{g_{2 (N_c-1)}} \right)^2 \Lambda^{2(2N_c+2- N_f)}. \label{embed} \end{equation} There is a subtle point in the analysis of $Sp(2 N_c)$ theory. When $Sp(2 N_c)$ is broken to $SU(2) \times U(1)^{N_c-1}$ the instantons in the broken part of the gauge group play a role since the index of the embedding of the unbroken $SU(2)$ in $Sp(2 N_c)$ is larger than one (see eq.(\ref{embed})) \cite{InSe2},\cite{Aff}. The possible instanton contribution to $W_L$ will be of the same order in $\Lambda$ as low-energy $SU(2)$ gaugino condensation. Therefore even in the lowest quantum corrections the instanton term must be added to $W_L$. For clarity we begin by discussing $Sp(4)$ Yang-Mills theory. In this theory by the symmetry and holomorphy the effective superpotential is determined to take the form $W_L=f \left( \frac{g_4}{g_2} \Lambda^2 \right) \frac{g_4^2}{g_2} \Lambda^6$ with $f$ a certain holomorphic function. If we assume that there is only a one-instanton effect, the precise form of $W_L$ including the low-energy gaugino condensation effect may be given by \begin{equation} W_L= 2 \frac{g_4^2}{g_2} \Lambda^6 \pm 2 \frac{g_4^2}{g_2} \Lambda^6, \end{equation} as in the case of $SO(4) \simeq SU(2) \times SU(2)$ breaking to the diagonal $SU(2)$. This is due to the fact that $Sp(4) \simeq SO(5)$ and the natural embedding of $SO(4)$ in $SO(5)$. Our low-energy $SU(2)$ gauge group is identified with the one diagonally embedded in $SO(4) \simeq SU(2) \times SU(2)$ \cite{InSe2},\cite{ILS}. Accordingly, in $Sp(2 N_c)$ Yang-Mills theory, we first make the matching at the scale of $Sp(2 N_c)/Sp(4)$ $W$ bosons by taking all the $a_1-a_i$ large.
Then the low-energy superpotential is found to be \begin{equation} W_L=W_{cl}+2 \frac{g_{2 N_c}}{a_1^2} \Lambda^{2(N_c+1)} \pm 2 \frac{g_{2 N_c}}{a_1^2} \Lambda^{2(N_c+1)}. \end{equation} Let us turn on the coupling to fundamental flavors $Q$ and evaluate the instanton contribution. When flavor masses vanish there is a global $O(2N_f) \simeq SO(2N_f)\times {\bf Z}_2$ symmetry. The couplings $\lambda_l$ and instantons break a ``parity'' symmetry ${\bf Z}_2$. We treat this ${\bf Z}_2$ as unbroken by assigning odd parity to the instanton factor $\Lambda^{2 N_c+2- N_f}$ and $O(2N_f)$ charges to $\lambda_l$. Symmetry consideration then leads to the one-instanton factor proportional to $B(a_1)$ where \begin{equation} B(x)=\Lambda^{2 N_c+2- N_f} {\rm Pf} \left(\sum_{l\, {\rm even}} \lambda_{l}x^{l}\right). \end{equation} Note that $B(x)$ is parity invariant since the Pfaffian, like the instanton factor, has odd parity. Thus the superpotential for low-energy $N=1$ $SU(2)$ QCD with $2 N_f$ flavors including the instanton effect turns out to be \begin{equation} W=\sum_{n=1}^{N_c-1} g_{2 n} u_{2 n}^{cl} + g_{2 N_c} s_{N_c}^{cl} +\sum_{l=0}^{r} a_1^l {\rm Tr}_{2 N_f} \, \lambda_l \, \widetilde{{\bf Q}} {\bf Q} +2 \frac{g_{2 N_c}^2}{g_{2 (N_c-1)}} B(a_1), \label{wsp2} \end{equation} where \begin{equation} {\bf Q}^j = \pmatrix{Q^j_1 \cr Q^j_3}, \hskip10mm \widetilde{\bf Q}_j= \pmatrix{Q^j_2 \cr Q^j_4}. \end{equation} When integrating out the $SU(2)$ flavors, the scale matching relation between $\Lambda$ and the scale $\Lambda_{YM}$ of $N=1$ $SU(2)$ Yang-Mills theory becomes \begin{equation} {\Lambda_{YM}}^{6} = {\rm det} \left ( \sum_{l=0}^{r} \lambda_l a_1^l \right ) \, \left( \frac{g_{2 N_c}^2}{g_{2 (N_c-1)}} \right)^2 \Lambda^{2(2N_c+2- N_f)}, \label{sc4} \end{equation} and we finally obtain the effective superpotential \begin{eqnarray} W_L & = & \sum_{n=1}^{N_c-1} g_{2n} u_{2n}^{cl} + g_{2 N_c} s_{N_c}^{cl} \pm 2 \Lambda_{YM}^3 +2 \frac{g_{2 N_c}^2}{g_{2 (N_c-1)}} B(a_1) \nonumber \\ & = & \sum_{n=1}^{N_c-1} g_{2n} u_{2n}^{cl} + g_{2 N_c} s_{N_c}^{cl} +2 \frac{g_{2 N_c}^2}{g_{2 (N_c-1)}} \left(B(a_1) \pm \sqrt{A(a_1)} \right), \label{wsp3} \end{eqnarray} where $A(x) \equiv \Lambda^{2( 2 N_c+2- N_f) }\, {\rm det} \left ( \sum_{l=0}^{r} \lambda_l x^l \right )$. The gauge invariant expectation values $\langle s_n \rangle$ are \begin{eqnarray} \langle s_{ n} \rangle = s_{ n}^{cl} (g) & +& \D_{n,N_c-1} \frac{1}{a_1^4} \left( -2 B(a_1)+2a_1^2B'(a_1) \pm \frac{1}{\sqrt{A(a_1)}} \left( -2 A(a_1) + a_1^2 A'(a_1) \right) \right) \nonumber \\ & + & \D_{n,N_c} \frac{1}{a_1^2} \left( 4 B(a_1)-2a_1^2B'(a_1) \pm \frac{1}{\sqrt{A(a_1)}} \left( 4 A(a_1) - a_1^2 A'(a_1) \right) \right). \label{vsp} \end{eqnarray} Substituting these into a curve \begin{equation} x^2 y^2 = \left( x^2 P(x) +2 B(x) \right)^2-4 A(x), \label{spcurve} \end{equation} we see that the curve is degenerate at (\ref{vsp}). In this case too our result (\ref{spcurve}) agrees with the $N=2$ curve obtained in \cite{ArSh}. Using the technique of the confining phase superpotential we have determined the curves describing the Coulomb phase of $N=1$ supersymmetric gauge theories with adjoint and fundamental matter for the classical gauge groups. In the $N=2$ limit our results recover the curves for the Coulomb phase in $N=2$ QCD. For the gauge group $Sp(2N_c)$, in particular, we have observed that taking into account the instanton effect in addition to $SU(2)$ gaugino condensation is crucial to obtain the effective superpotential for the phase with a confined photon.
This explains, in terms of the $N=1$ theory, a peculiar feature of the $N=2$ $Sp(2N_c)$ curve as compared to the $SU(N_c)$ and $SO(N_c)$ cases. \vskip10mm We thank K. Ito for helpful discussions on the instanton calculus. TK would like to thank T. Hotta for useful discussions. The work of ST is supported by a JSPS Research Fellowship for Young Scientists. The work of SKY is supported in part by the Grant-in-Aid for Scientific Research on Priority Area 213 ``Infinite Analysis'', the Ministry of Education, Science and Culture, Japan. \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} For many years there has been considerable interest in magnetic monopoles, the topological soliton solutions of Yang-Mills-Higgs gauge theories in three space dimensions with particle-like properties. In particular BPS monopoles have been the focus of much research (see \cite{ms04} for a recent review). These monopoles satisfy a rather ubiquitous first order Bogomolny equation $$B_i=\frac{1}{2}\sum_{j,k=1}\sp3\epsilon_{ijk}F\sp{jk}=D_i\Phi$$ (together with certain boundary conditions, the remnant of a limit in which the Higgs potential is removed). Here $F_{ij}$ is the field strength associated to a gauge field $A$, and $\Phi$ is the Higgs field. The Bogomolny equation may be viewed as a dimensional reduction of the four dimensional self-dual equations upon setting all functions independent of $x_4$ and identifying $\Phi=A_4$; it is also encountered in supersymmetric theories when requiring certain field configurations to preserve some fraction of supersymmetry. The study of BPS monopoles is intimately connected with integrable systems. Nahm gave a transform of the ADHM instanton construction to produce BPS monopoles \cite{nahm82} and the resulting Nahm equations have Lax form with corresponding spectral curve $\hat{\mathcal{C}}$. Hitchin gave a twistorial description of this curve \cite{hitchin82}: just as Ward's twistor transform relates instanton solutions in $\mathbb{R}\sp4$ to certain holomorphic vector bundles over the twistor space $\mathbb{CP}\sp3$, Hitchin showed that the dimensional reduction leading to BPS monopoles could be made at the twistor level as well and also obtained the same curve lying in mini-twistor space, $\hat{\mathcal{C}}\subset$ T$\mathbb{P}\sp1$. Subject to certain nonsingularity conditions on the curve $\hat{\mathcal{C}}$ Hitchin was able to prove that all monopoles could be obtained by this approach \cite{hitchin83}. Bringing methods from integrable systems to bear upon the construction of solutions to Nahm's equations for the gauge group $SU(2)$, Ercolani and Sinha \cite{es89} later showed how one could solve (a gauge transform of) the Nahm equations in terms of a Baker-Akhiezer function for the curve $\hat{\mathcal{C}}$. Despite the many striking results now obtained, disappointingly few explicit solutions are known. This paper is part of a longer reappraisal of this work, seeking to understand where the difficulties in implementation lie and developing new techniques to address them. Throughout we shall focus on $SU(2)$ BPS monopoles and this paper will deal with curves $\hat{\mathcal{C}}$ of the form \begin{equation} \eta^3+\chi(\zeta^6+b \zeta^3-1)=0\label{bren03} \end{equation} where $\chi$, $b$ are certain real parameters. Such a curve is of genus $4$ and could describe a charge three monopole. Two types of problem arise (that will be made more precise below). The first is that the curve (\ref{bren03}) is subject to a set of constraints whereby the periods of a meromorphic differential on the curve are specified. This type of constraint arises in many other settings as well, for example when specifying the filling fractions of a curve in the AdS/CFT correspondence. Such constraints are transcendental in nature and the number of cases where they have been solved explicitly is rather limited. This is certainly an area that needs to be studied more. For the curve (\ref{bren03}) a countable number of solutions to this constraint have been obtained.
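The simplest analogue of such a period constraint arises already in genus one: prescribing the value of the ratio of periods $\tau=\oint_{\mathfrak{b}}\frac{dx}{y}\big/\oint_{\mathfrak{a}}\frac{dx}{y}$ of the holomorphic differential on $y^2=4x^3-g_2x-g_3$ is a transcendental condition on the moduli $(g_2,g_3)$, solvable in closed form only at special points such as the lemniscatic ($g_3=0$) and equianharmonic ($g_2=0$) cases.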
The second type of problem arises in implementing a constraint that may be expressed as the vanishing of a real one-parameter family of cohomologies of certain line bundles, $H^0(\hat{\mathcal{C}},L^{\lambda}(n-2))=0$ for $\lambda\in(0,2)$. Viewing the line bundles as points on the Jacobian this is equivalent to a real line segment not intersecting the theta divisor $\Theta$ of the curve. Indeed there are sections for $\lambda=0,2$ and the flow is periodic (mod $2$) in $\lambda$ and so we are interested in the number of times a real line intersects $\Theta$. While techniques exist that count the number of intersections of a complex line with the theta divisor we are unaware of anything comparable in the real setting. Among the countable number of solutions to the first constraint we establish here the following. \begin{theorem}\label{mainthm} The only curves of the family (\ref{bren03}) that yield BPS monopoles correspond to tetrahedrally symmetric monopoles. These have $b=\pm5\sqrt{2}$, $\chi^{\frac{1}{3}} =-\frac16\, \frac{\Gamma(\frac16)\Gamma(\frac13)}{2\sp\frac16\, \pi\sp{\frac12}}$. \end{theorem} An outline of the paper is as follows. In section 2 we will recall the constraints on the curve $\hat{\mathcal{C}}$ that are equivalent to the existence of a monopole. We shall then describe the curve (\ref{bren03}) in more detail and make concrete these constraints for this curve. This will entail a description of the homology and period matrix for the curve. At this stage we will have reduced the problem to properties of the theta function for the genus $4$ curve $\hat{\mathcal{C}}$. (Our theta function conventions are given in Appendix A.) Now the curve (\ref{bren03}) (and indeed the more general curve $\eta^3+\alpha\eta\zeta^2+\chi(\zeta^6+b \zeta^3-1)=0$ that will be explored elsewhere) is invariant under the cyclic group $\texttt{C}_3$ and we have a covering $\pi:\hat{\mathcal{C}}\rightarrow\mathcal{C}$ of a genus $2$ curve $\mathcal{C}$. In section 3 we will describe a theorem of Fay and Accola that allows us to express the genus 4 theta functions in terms of genus 2 theta functions, thus reducing the problem to one of genus 2 theta functions. Then in section 4 we will use the reduction theory of Humbert to further reduce the problem to that of elliptic functions. At this stage we have reduced the initial problem of the existence of a BPS monopole to a question about the number of zeros of an elliptic function on an interval. Section 5 describes this in more detail. Although we have a stronger conjecture than we can prove, we are able to establish the theorem given above. We end with some final observations in section 6. \section{The Spectral Curve and its Constraints} \subsection{Hitchin Data} If $\zeta$ is the inhomogeneous coordinate on the Riemann sphere, and $(\zeta,\eta)$ are the standard local coordinates on $T{\mathbb P}\sp1$ (defined by $(\zeta,\eta)\rightarrow\eta\frac{d}{d\zeta}$), the spectral curve of a charge $n$ monopole $\hat{\mathcal{C}}\subset$ T$\mathbb{P}\sp1$ may be expressed in the form \begin{equation} P(\eta,\zeta)=\eta^n+\eta^{n-1} a_1(\zeta)+\ldots+\eta^r a_{n-r}(\zeta)+ \ldots+\eta\, a_{n-1}(\zeta)+a_n(\zeta)=0,\label{spectcurve} \end{equation} where $a_r(\zeta)$ (for $1\leq r\leq n$) is a polynomial in $\zeta$ of maximum degree $2r$. We may view $\hat{\mathcal{C}}$ as an $n$-fold branched cover of $\mathbb{P}\sp{1}$ and (by a rotation if necessary) we may assume $n$ distinct preimages $\{\infty_k\}_{k=1}\sp{n}$ of the point $\zeta=\infty$.
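We recall where the degree bound on the $a_r$ comes from: since $T{\mathbb P}\sp1\cong\mathcal{O}(2)$ the coordinate $\eta$ transforms as a section of $\mathcal{O}(2)$, and homogeneity of (\ref{spectcurve}) then requires $a_r(\zeta)$, the coefficient of $\eta^{n-r}$, to be (the local form of) a section of $\mathcal{O}(2r)$, that is a polynomial of degree at most $2r$. For the curve (\ref{bren03}) we have $n=3$ with $a_1=a_2=0$ and $a_3(\zeta)=\chi(\zeta^6+b\zeta^3-1)$, of the maximal degree $2r=6$.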
The form of the curve means that $\eta/\zeta\sim \rho_k \zeta$ at $\infty_k$. For a generic $n$-monopole the spectral curve is irreducible and has genus $g_{\hat{\mathcal{C}}}=(n-1)^2$. We will denote by $\{\hat{\mathfrak{a}}_k,\hat{\mathfrak{b}}_k\}_{k=1}\sp{g_{\hat{\mathcal{C}}}}$ a canonical homology basis of $\hat{\mathcal{C}}$. The Hitchin data constrains the curve $\hat{\mathcal{C}}$ explicitly in terms of the polynomial $P(\eta,\zeta)$ and implicitly in terms of the behaviour of various line bundles on $\hat{\mathcal{C}}$. If the homogeneous coordinates of $ {\mathbb P}\sp1$ are $[\zeta_0,\zeta_1]$ we consider the standard covering of this by the open sets $U_0=\{[\zeta_0,\zeta_1]\,|\,\zeta_0\ne0\}$ and $U_1=\{[\zeta_0,\zeta_1]\,|\,\zeta_1\ne0\}$, with $\zeta=\zeta_1/\zeta_0$ the usual coordinate on $U_0$. We will denote by $\hat U_{0,1}$ the pre-images of these sets under the projection map $\pi:T{\mathbb P}\sp1\rightarrow{\mathbb P}\sp1$. Let $L^{\lambda}$ denote the holomorphic line bundle on $T{\mathbb P}\sp1$ defined by the transition function $g_{01}=\rm{exp}(-\lambda\eta/\zeta)$ on $\hat U_{0}\cap \hat U_{1}$, and let $L^{\lambda}(m)\equiv L^{\lambda}\otimes\pi\sp*\mathcal{O}(m)$ be similarly defined in terms of the transition function $g_{01}=\zeta^m\exp{(-\lambda\eta/\zeta)}$. A holomorphic section of such line bundles is given in terms of holomorphic functions $f_\alpha$ on $\hat U_\alpha$ satisfying $f_\alpha=g_{\alpha\beta}f_\beta$. We denote the restrictions of these line bundles to $\hat{\mathcal{C}}$ in the same way, where now we have holomorphic functions $f_\alpha$ defined on $\hat{\mathcal{C}}\cap\hat U_\alpha$. The Hitchin data constrains the curve to satisfy:\\ \begin{description} \item[H1] The curve $\hat{\mathcal{C}}$ is real with respect to the standard real structure on $T{\mathbb P}\sp1$ (the anti-holomorphic involution defined by reversing the orientation of the lines in ${\mathbb R}\sp3$), \begin{equation} \tau:(\zeta,\eta)\mapsto(-\frac{1}{\bar{\zeta}}, -\frac{\bar{\eta}}{\bar{\zeta}^2}). \end{equation} \item[H2] $L^2$ is trivial on $\hat{\mathcal{C}}$ and $L(n-1)$ is real. \item[H3] $H^0(\hat{\mathcal{C}},L^{\lambda}(n-2))=0$ for $\lambda\in(0,2)$.\\ \end{description} Only the first of these constraints is straightforward to implement. The reality of the curve means the coefficients of (\ref{spectcurve}) satisfy \begin{equation}\label{spectcurvereal} a_r(\zeta)=(-1)^r\zeta^{2r}\overline{a_r(-\frac{1}{\overline{\zeta}})} .\end{equation} For the curve (\ref{bren03}) this is why $\chi$ and $b$ are real, as we verify explicitly below. Ercolani and Sinha show the reality of $L(n-1)$ within the Baker-Akhiezer function setting and \cite{bren06} implements this in terms of theta functions on the curve. The triviality of $L^2$ means that there exists a nowhere-vanishing holomorphic section. In terms of the open sets $\hat U_{0,1}$ we will have two, nowhere-vanishing holomorphic functions, $f_0$ on $\hat U_0\cap\hat{\mathcal{C}}$ and $f_1$ on $\hat U_1\cap\hat{\mathcal{C}}$, such that on the intersection \begin{equation} f_{0}(\eta,\zeta)=\mathrm{exp} \left\{ -2\frac{\eta}{\zeta} \right\} f_1(\eta,\zeta). \label{triv3} \end{equation} Consideration of the logarithmic derivative of (\ref{triv3}) shows that, in order to avoid essential singularities, we must have \begin{align} \mathrm{d}\mathrm{log}\,f_{0}(P) &=\left(\frac{2\rho_k}{t^2} +O(1)\right){d}t,\quad \text{at}\quad P\rightarrow \infty_k,\label{foex} \end{align} where $t=1/\zeta$ is a local parameter.
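To verify the reality claim made above: applying (\ref{spectcurvereal}) to $a_3(\zeta)=\chi(\zeta^6+b\zeta^3-1)$ gives $$(-1)^3\zeta^{6}\,\overline{a_3\!\left(-\frac{1}{\bar{\zeta}}\right)}=-\zeta^{6}\,\bar{\chi}\left(\frac{1}{\zeta^{6}}-\frac{\bar{b}}{\zeta^{3}}-1\right)=\bar{\chi}\left(\zeta^{6}+\bar{b}\,\zeta^{3}-1\right),$$ which coincides with $a_3(\zeta)$ precisely when $\bar{\chi}=\chi$ and $\bar{b}=b$.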
Ercolani and Sinha introduced the normalized meromorphic differential $\gamma_\infty$ whose pole behaviour is that of $\frac12\mathrm{d}\mathrm{log}\,f_{0}(P)$ and whose $\mathfrak{a}$-periods vanish. In terms of the vector of $\mathfrak{b}$-periods we have \begin{lemma}[Ercolani-Sinha Constraints]\label{EScond} The following are equivalent: \begin{enumerate} \item $L\sp2$ is trivial on $\hat{\mathcal{C}}$. \item $2\widehat{\boldsymbol{U}}\in \Lambda\Longleftrightarrow \widehat{\boldsymbol{U}}=\frac{1}{2\pi\imath}\left(\oint_{\hat{\mathfrak{b}}_1}\gamma_{\infty}, \ldots,\oint_{\hat{\mathfrak{b}}_{g_{\hat{\mathcal{C}}}}}\gamma_{\infty}\right)= \frac12 \boldsymbol{n}+\frac12\hat\tau\boldsymbol{m} $ where $\boldsymbol{n}$, $\boldsymbol{m}\in\mathbb{Z}\sp{\hat g}$. \item There exists a 1-cycle $\mathfrak{c}=\boldsymbol{n}\cdot{\hat{\mathfrak{a}}}+ \boldsymbol{m}\cdot{\hat{\mathfrak{b}}}$ such that $\oint\limits_{\mathfrak{c}}\Omega=-2\beta_0$ for every holomorphic differential $\Omega=\dfrac{\beta_0\eta^{n-2}+\beta_1(\zeta)\eta^{n -3}+\ldots+\beta_{n-2}(\zeta)}{\frac{\partial{P}}{\partial \eta}}\,d\zeta$, where $\beta_j(\zeta)$ is a polynomial of degree at most $2j$ in $\zeta$. \end{enumerate} \end{lemma} Here $\Lambda$ is the period lattice of $\hat{\mathcal{C}}$ and $\hat\tau$ the $\mathfrak{a}$-normalized period matrix. Ercolani and Sinha established the equivalence of (1) and (2) while the dual form of the Ercolani-Sinha constraints (3) was given by Houghton, Manton and Rom\~ao \cite{hmr99}. If the anti-holomorphic involution $\tau$ induces an action $M_\tau$ on the homology, $\begin{pmatrix}\tau_\ast \hat{\mathfrak{a}}\\ \tau_\ast \hat{\mathfrak{b}}\end{pmatrix}=M_\tau \begin{pmatrix} \hat{\mathfrak{a}}\\ \hat{\mathfrak{b}}\end{pmatrix}$, then we have $M_\tau JM_\tau=-J$, where $J$ is the standard symplectic structure, and \begin{corollary}\label{HMRinvc} $\tau_*\mathfrak{c}=-\mathfrak{c}$ or $2\widehat{\mathbf{U}}M_\tau= \begin{pmatrix} \mathbf{n} & \mathbf{m} \end{pmatrix} M_\tau=- \begin{pmatrix} \mathbf{n} & \mathbf{m} \end{pmatrix} $. \end{corollary} The Ercolani-Sinha constraints impose $g_{\hat {\mathcal{C}}}$ transcendental conditions on the spectral curve $ \hat{\mathcal{C}}$ and, as noted in the introduction, solving these is a major difficulty in implementing this theory. We have yet to discuss \textbf{H3}, the implementation of which is the second type of problem mentioned in the introduction. Here $L^{\lambda}(n-2)$ is a degree $g_{\hat {\mathcal{C}}} -1$ line bundle so, using Riemann's vanishing theorem for line bundles $\mathcal{L}$ of this degree, ${\rm multiplicity}_\mathcal{L}\, \theta =\mathop{\rm dim}\nolimits H\sp0(\mathcal{C},\mathcal{O}(\mathcal{L}))$, we see that \textbf{H3} is precisely the condition that $L^{\lambda}(n-2)$ does not lie in the theta divisor for $\lambda\in(0,2)$. In \cite{bren06} we establish that \begin{lemma} Let $\widetilde{\boldsymbol{K}}= \boldsymbol{K}+\boldsymbol{\phi}\left((n-2) \sum_{k=1}\sp{n}\infty_k\right)$ where $\boldsymbol{K}$ is the vector of Riemann constants and $\boldsymbol{\phi}$ the Abel-Jacobi map. Then \begin{equation}\label{beh3} H^0(\hat{\mathcal{C}},L^{\lambda}(n-2))\ne0\Longleftrightarrow \theta(\lambda\widehat{\boldsymbol{U}}- \widetilde{\boldsymbol{K}}\,|\,\hat\tau)=0 \end{equation} for $\lambda\in(0,2)$, where $\theta$ is Riemann's theta function for the curve $\hat{\mathcal{C}}$. \end{lemma} Thus the second problem is to determine when the (real) line $\lambda\widehat{\boldsymbol{U}}- \widetilde{\boldsymbol{K}}$ intersects the theta divisor $\Theta$.
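Although we shall proceed analytically, the criterion (\ref{beh3}) is amenable to direct numerical exploration. The following minimal sketch (in Python; the function names and truncation level are ours, and the period matrix $\hat\tau$ together with the vectors $\widehat{\boldsymbol{U}}$, $\widetilde{\boldsymbol{K}}$ must first be computed for the curve at hand) evaluates $\theta$ by a truncated lattice sum, which converges rapidly when $\mathrm{Im}\,\hat\tau$ is positive definite:
\begin{verbatim}
import numpy as np
from itertools import product

def riemann_theta(z, tau, N=6):
    # theta(z|tau) = sum over n in Z^g of exp(i*pi*n.tau.n + 2*pi*i*n.z),
    # truncated to |n_i| <= N; requires Im(tau) positive definite.
    g = len(z)
    total = 0j
    for n in product(range(-N, N + 1), repeat=g):
        n = np.asarray(n, dtype=float)
        total += np.exp(1j * np.pi * (n @ tau @ n) + 2j * np.pi * (n @ z))
    return total

def scan_line(U, K, tau, samples=200):
    # Sample |theta(lambda*U - K | tau)| for lambda in the open interval
    # (0,2); values near zero signal an intersection with the theta
    # divisor, that is a failure of the Hitchin condition H3.
    lams = np.linspace(0.0, 2.0, samples + 2)[1:-1]
    return [(lam, abs(riemann_theta(lam * U - K, tau))) for lam in lams]
\end{verbatim}
Of course such a scan is only indicative and cannot replace the analysis that follows.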
We note the following properties of the vector $\widetilde{\boldsymbol{K}}$, \begin{lemma} \hfill \begin{itemize} \item $\widetilde{\boldsymbol{K}}$ is independent of the choice of base point of the Abel map. \item $2\widetilde{\boldsymbol{K}}\in\Lambda$. \item For $n\ge3$, $\widetilde{\boldsymbol{K}}\in \Theta_{\rm singular}$, the singular locus of the theta divisor. \item $\widehat{\boldsymbol{U}}\pm \widetilde{\boldsymbol{K}}$ is a non-singular even theta characteristic. \end{itemize} \end{lemma} It is straightforward to see that $2\widehat{\boldsymbol{U}}\ne0$ and is a primitive vector in the period lattice. We also remark that because $\widetilde{\boldsymbol{K}}$ is a half-period then \begin{equation}\label{beh3b} \theta(\lambda\widehat{\boldsymbol{U}}- \widetilde{\boldsymbol{K}}\,|\,\hat\tau)=0 \Longleftrightarrow \theta(\lambda\widehat{\boldsymbol{U}}+ \widetilde{\boldsymbol{K}}\,|\,\hat\tau)=0. \end{equation} \subsection{The curve and its properties} We will write the curve (\ref{bren03}) in the form \begin{equation} w^3=z^6+bz^3-1=(z^3-\alpha\sp3)(z^3+\frac{1}{\alpha\sp3})\label{cyccurve} \end{equation} to avoid various factors, where $(z,w)=(\zeta,-\eta/\chi^{1/3})$ and $1/\alpha\sp3=(b+\sqrt{b^2+4})/{2}$. With $\rho=e\sp{2\imath\pi /3}$ this curve has symmetries: $$\mathrm{R}:\ (z,w)\rightarrow (z,\rho{ w}),\qquad \sigma:\ (z,w)\rightarrow (\rho z, {\rho w}),\qquad \mathrm{c}:\ (z,w)\rightarrow (-1/z,-w/z^2). $$ These yield the group $C_3\times S_3$, with $C_3=<\mathrm{R}|\,\mathrm{R}^3=1>$ and $S_3=<\sigma,\mathrm{c}|\,\sigma^3=1,\mathrm{c}^2=1,\mathrm{c}\sigma \mathrm{c}=\sigma^2>$. When $b=\pm5\sqrt{2}$, the dihedral symmetry $S_3$ is enlarged to tetrahedral symmetry by $$ \mathrm{t}:\ (z,w)\rightarrow \left(\pm\,\frac{\sqrt{2}\mp z}{1\pm\sqrt{2}z}, -\frac{3 w}{(1\pm\sqrt{2}z)^2}\right),\qquad \mathrm{t}^2=1,$$ with $A_4$ being generated by $\sigma$ and $\mathrm{t}$. \begin{figure}[htb] \setlength{\unitlength}{\textwidth} \begin{center} \begin{picture}(1,1) \put(0.5,0.6){\includegraphics[width=0.5\unitlength]{cycles0_1_cyclic_a.ps}} \put(0.5,0.14){\includegraphics[width=0.5\unitlength]{cycles0_0_cyclic_a.ps}} \put(0,0.6){\includegraphics[width=0.5\unitlength]{BE06_123_a.ps}} \put(0,0.14){\includegraphics[width=0.5\unitlength]{BE06_0_a.ps}} \put(0.1,0.1){(a) The symmetric basis $\hat{\mathfrak{a}}\sp{s}_\ast, \hat{\mathfrak{b}}\sp{s}_\ast$} \put(0.6,0.1){(b) The cyclic basis $\hat{\mathfrak{a}}\sp{c}_\ast, \hat{\mathfrak{b}}\sp{c}_\ast$} \put(0.37,0.71){$\hat{\mathfrak{a}}_1\sp{s}$} \put(0.3,0.81){$\hat{\mathfrak{b}}_1\sp{s}$} \put(0.2,0.79){$\hat{\mathfrak{a}}_2\sp{s}$} \put(0.1,0.71){$\hat{\mathfrak{b}}_2\sp{s}$} \put(0.17,0.61){$\hat{\mathfrak{a}}_3\sp{s}$} \put(0.36,0.67){$\hat{\mathfrak{b}}_3\sp{s}$} \put(0.87,0.71){$\hat{\mathfrak{a}}_1\sp{c}$} \put(0.8,0.81){$\hat{\mathfrak{b}}_1\sp{c}$} \put(0.7,0.79){$\hat{\mathfrak{a}}_2\sp{c}$} \put(0.6,0.71){$\hat{\mathfrak{b}}_2\sp{c}$} \put(0.67,0.61){$\hat{\mathfrak{a}}_3\sp{c}$} \put(0.86,0.67){$\hat{\mathfrak{b}}_3\sp{c}$} \put(0.235,0.225){$\hat{\mathfrak{a}}_4\sp{s}$} \put(0.11,0.225){$\hat{\mathfrak{b}}_4\sp{s}$} \put(0.735,0.225){$\hat{\mathfrak{a}}_0\sp{c}$} \put(0.61,0.225){$\hat{\mathfrak{b}}_0\sp{c}$} \end{picture} \end{center} \caption{The homology bases.
Sheet 1 is denoted by a solid line; sheet 2 by a dashed line; and sheet 3 by a dotted line.} \label{fighom} \end{figure} The paper \cite{bren06} chose a homology basis\footnote{The conventions of the paper \cite{bren06} were such that the $\mathfrak{b}$-normalized period matrix was positive definite. We must change the relative orientation of the $\mathfrak{a}$-cycles and $\mathfrak{b}$-cycles to obtain the positive definite $\mathfrak{a}$-normalized period matrix used in this paper.} $\{ \hat{\mathfrak{a}}_i\sp{s},\hat{\mathfrak{b}}_i\sp{s}\}$ reflecting the symmetry $\mathrm{R}$: $\mathrm{R}(\hat{\mathfrak{b}}_i\sp{s})=-\hat{\mathfrak{a}}_i\sp{s}$ ($i=1,2,3$) and $\mathrm{R}(\hat{\mathfrak{b}}_4\sp{s})=\hat{\mathfrak{a}}_4\sp{s}$. If we order the branch points $\{\lambda_1,\ldots,\lambda_6\}$ in terms of increasing argument and denote by $\gamma_k(z_i,z_j)$ the oriented path going from $P_i=(z_i,w_i)$ to $P_j=(z_j,w_j)$ in the $k$-th sheet we may express these cycles as \begin{align}\begin{split} \hat{\mathfrak{a}}\sp{s}_1&=\gamma_1(\lambda_2,\lambda_1)+\gamma_2(\lambda_1,\lambda_2), \qquad \hat{\mathfrak{b}}\sp{s}_1=\gamma_1(\lambda_2,\lambda_1)+\gamma_3(\lambda_1,\lambda_2),\\ \hat{\mathfrak{a}}\sp{s}_2&=\gamma_1(\lambda_4,\lambda_3)+\gamma_2(\lambda_3,\lambda_4) ,\qquad \hat{\mathfrak{b}}\sp{s}_2=\gamma_1(\lambda_4,\lambda_3)+\gamma_3(\lambda_3,\lambda_4),\\ \hat{\mathfrak{a}}\sp{s}_3&=\gamma_1(\lambda_6,\lambda_5) +\gamma_2(\lambda_5,\lambda_6),\qquad \hat{\mathfrak{b}}\sp{s}_3=\gamma_1(\lambda_6,\lambda_5) +\gamma_3(\lambda_5,\lambda_6),\\ \hat{\mathfrak{a}}\sp{s}_4&=\gamma_3(\lambda_2,\lambda_1) +\gamma_2(\lambda_1,\lambda_5)+\gamma_3(\lambda_5,\lambda_6) +\gamma_1(\lambda_6,\lambda_2), \\ \hat{\mathfrak{b}}\sp{s}_4& =\gamma_2(\lambda_2,\lambda_1)+\gamma_3(\lambda_6,\lambda_2) +\gamma_2(\lambda_5,\lambda_6)+\gamma_1(\lambda_1,\lambda_5). \end{split}\label{homology_s} \end{align} This is shown in Figure 1(a). In the next section we shall choose a homology basis reflecting the symmetry $\sigma$ in order to use the results of Fay and Accola. Let us fix the following lexicographical ordering of independent canonical holomorphic differentials of $\hat{\mathcal{C}}$, \begin{equation} {d}u_1= \frac{{d} z}{w},\quad {d}u_2= \frac{{d} z}{w^2},\quad {d}u_3= \frac{z{d} z}{w^2},\quad {d}u_4= \frac{z^2{d} z}{w^2}. \label{diffbasis} \end{equation} Then the symmetry $\mathrm{R}$ together with the Riemann bilinear relations shows that the period matrix for $\hat{\mathcal{C}}$ may be expressed in terms of just the four periods $$\boldsymbol{x}=(x_1,x_2,x_3,x_4)^T =\left( \oint_{\hat{\mathfrak{a}}\sp{s}_1}{d}u_1,\ldots,\oint_{\hat{\mathfrak{a}}\sp{s}_4}{d}u_1 \right)^T.$$ Following Wellstein \cite{wel99} and Matsumoto \cite{matsu00} we find \begin{proposition} \label{matsumoto1} Let $\hat{\mathcal{C}}$ be the triple covering of $\mathbb{P}^1$ with six distinct branch points $\lambda_1, \ldots,\lambda_6$, \begin{equation} w^3=\prod_{i=1}^6(z-\lambda_i) .\label{curvegen} \end{equation} Then the $\mathfrak{a}$-normalized Riemann period matrix is of the form \begin{align} \hat\tau\sp{s}&=\rho^2\left( H+(\rho\sp2-1)\frac{\boldsymbol{x}\boldsymbol{x}^T} {\boldsymbol{x}^T H\boldsymbol{x}} \right),\label{taumats} \end{align} where $H=\mathrm{diag}(1,1,1,-1)$. Then $\hat\tau\sp{s}$ is positive definite if and only if \begin{align} \bar{\boldsymbol{x}}^T H \boldsymbol{x} <0.
\label{condition1} \end{align} \end{proposition} In fact the symmetry of (\ref{cyccurve}) means that $x_2=\rho x_1$, $ x_3=\rho^2 x_1$, and only two periods need be computed. Choosing the first sheet so that $w=\sqrt[3]{(z^3-\alpha\sp3)(z^3+{1}/{\alpha\sp3})}$ is negative and real on the real $z$-axis between the branch points $(-1/\alpha,\alpha)$, these may be expressed in terms of the integrals computed on the first sheet \begin{align} \mathcal{I}_1(\alpha)&=\int\limits_{0}^{\alpha}\frac{{d}z}{w} =-\frac{2\pi\sqrt{3}\alpha}{9} {_2F_1}\left(\frac13,\frac13;1;-\alpha^6\right),\\ \mathcal{J}_1(\alpha)&=\int\limits_{0}^{-1/\alpha}\frac{{d}z}{w} = \frac{2\pi\sqrt{3}}{9\alpha} {_2F_1}\left(\frac13,\frac13;1; -\alpha^{-6}\right).\label{integralsij}\end{align} Here $_2F_1(a,b;c;z)$ is the standard Gauss hypergeometric function and we may express the periods as $x_{1}=-(2\mathcal{J}_1+\mathcal{I}_1)\rho -2\mathcal{I}_1 -\mathcal{J}_1$ and $x_{4}=3(\mathcal{J}_1-\mathcal{I}_1)\rho+3\mathcal{J}_1$. Thus the period matrix may be expressed via (\ref{taumats}) as rational expressions in terms of $x_1$ and $x_4$, or equivalently in terms of $\mathcal{I}_1$ and $\mathcal{J}_1$. The Ercolani-Sinha conditions now constrain the periods; these constraints were solved in \cite{bren06} to give \begin{proposition}\label{propesnt} To each pair of relatively prime integers $(m,n)=1$ for which $$(m + n)(m -2n)<0$$ we obtain a solution\footnote{Note that $\boldsymbol{m}$ differs by a sign from that of \cite{bren06} because we are working with $\mathfrak{a}$-normalized quantities and their attendant orientations (see the previous footnote).} $\boldsymbol{n}=\begin{pmatrix} n& m-n&-m&2n-m \end{pmatrix}$, $\boldsymbol{m}=\begin{pmatrix} -m& n&m-n&3n \end{pmatrix}$ to the Ercolani-Sinha constraints for the curve (\ref{cyccurve}) as follows. First we solve for $t$, where \begin{equation}\label{esct} \dfrac{2n-m}{m + n}=\frac{{_2F_1}(\frac{1}{3}, \frac{2}{3}; 1,t)}{{_2F_1}(\frac{1}{3}, \frac{2}{3}; 1,1-t)}. \end{equation} Then \begin{equation} b=\frac{1-2t}{\sqrt{t(1-t)}},\qquad t= \frac{-b+\sqrt{b^2+4}}{2\sqrt{b^2+4}}, \end{equation} and we obtain $\chi$ from \begin{equation}\label{esevch} \chi^{\frac{1}{3}} = -(n + m )\, \frac{2 \pi}{3 \sqrt{3}}\ \frac{\alpha}{(1+\alpha\sp6)\sp\frac13}\ {_2F_1}(\frac{1}{3}, \frac{2}{3}; 1, t) \end{equation} with $\alpha\sp6=t/(1-t)$. \end{proposition} Remarkably one may solve the transcendental equation (\ref{esct}) using a theory developed to explain various formulae of Ramanujan \cite{bbg95,bv98}. We shall not need any examples beyond those of \cite{bren06}. At this stage one finds that the period matrix for a curve (\ref{cyccurve}) satisfying \textbf{H1} and \textbf{H2} may be expressed in terms of the quantity \begin{equation}\label{esR}\mathcal{R}= \frac{\mathcal{I}_1(\alpha)}{\mathcal{J}_1(\alpha)}= \frac{m-2n}{m+n}.\end{equation} We note the following symmetries that preserve the constraints on $(m,n)$, \begin{equation}\label{symnmR} (m,n)\mapsto (-m,-n),\quad \mathcal{R}\mapsto \mathcal{R};\qquad (m,n)\mapsto (n-m,n),\quad \mathcal{R}\mapsto \frac1{\mathcal{R}}. \end{equation} The remaining problem is to determine those allowed $(n,m)$ which also satisfy \textbf{H3}. To make use of our alternative characterisation (\ref{beh3}) we record \begin{lemma}[\cite{bren06}] For the curve (\ref{cyccurve}) $\widetilde{\boldsymbol{K}}= \Theta_{{\rm singular}}$.
Expressed as a characteristic, $\widetilde{\boldsymbol{K}} =\dfrac12\left[\begin{matrix}1&1&1&1\\1&1&1&1\end{matrix}\right]$. \end{lemma} \noindent{\bf Remark:} Because, in the case under consideration, $\widetilde{\boldsymbol{K}}$ is an even half-period, the function $\theta(\lambda\widehat{\boldsymbol{U}}\pm \widetilde{\boldsymbol{K}},\hat\tau)$ vanishes to at least second order at the points $\lambda=0$ and $\lambda=2$ \begin{align*} \left.\theta(\lambda\widehat{\boldsymbol{U}}\pm \widetilde{\boldsymbol{K}},\hat\tau)\right|_{\lambda\sim0}&= \partial^2_{\widehat{\boldsymbol{U}}} \theta(\widetilde{\boldsymbol{K}};\widehat{\tau}) \lambda^2+O(\lambda^4),\\ \left.\theta(\lambda\widehat{\boldsymbol{U}}\pm \widetilde{\boldsymbol{K}},\hat\tau)\right|_{\lambda\sim2} &=\partial^2_{\widehat{\boldsymbol{U}}} \theta(\widetilde{\boldsymbol{K}};\widehat{\tau}) (\lambda-2)^2+O((\lambda-2)^4). \end{align*} We shall see that in fact it vanishes to order 4. Finally we note some of the coverings associated with the curve $\hat{\mathcal{C}}$. \begin{lemma}\label{covering} The action of $\sigma$ on the curve (\ref{cyccurve}) yields the unramified covering $\pi:\widehat{\mathcal{C}}\rightarrow\mathcal{C}:=\widehat{\mathcal{C}}/\texttt{C}_3$, where $\mathcal{C}$ is the genus $2$ hyperelliptic curve \begin{equation} \mathcal{C}= \{ (\mu,\nu)| \nu^2=(\mu^3+b)^2+4 \}, \label{hcurve} \end{equation} with $\nu=z^3+1/z^3$ and $\mu=-w/z$. Further, $\mathcal{C}$ is a double cover of each of the two elliptic curves $\mathcal{E}_{\pm}$, \begin{equation} \mathcal{E}_{\pm}= \{ (z_{\pm},w_{\pm})| w_{\pm}^2=z_{\pm}(1-z_{\pm}) (1-k_{\pm}^2z_{\pm}) \},\label{curvepm}\\ \end{equation} where \begin{align*} z_{\pm}&=\frac{K^2-L^2}{K^2-\rho L^2}\, \frac{K-\mu}{\rho K-\mu}\,\frac{L^2-K\mu}{L^2-K\rho \mu},\\ w_{\pm}&=-\imath\sqrt{2+\rho}\sqrt{\frac{L\pm K}{L\mp K}} \frac{K^2}{L} \frac{L^2-\rho K^2}{\rho L^2-K^2}\frac{\nu(L\mp \mu)} {(\mu-\rho K)^2(L^2-\rho K \mu)^2 }.\label{cover2a} \end{align*} With $M={K}/{L}$, $K=(2\imath -b)^{\frac13}$ and $L=(b^2+4)^{\frac16}$ the Jacobi moduli $k_{\pm}$ are given by \begin{equation}\label{jacobimodpm} k_{\pm}^2=-\frac{\rho(\rho M \pm 1)(\rho M \mp 1)^3 } {(M\pm 1)(M\mp 1 )^3}. \end{equation} \end{lemma} \section{Fay-Accola factorization} Thus far we have recalled earlier works: a countable putative family of spectral curves for $SU(2)$ BPS monopoles has been produced together with their period matrices and the vector $\widetilde{\boldsymbol{K}}$, but it remains to discuss the Hitchin constraint \textbf{H3}. The formulation of this constraint (\ref{beh3}) is in terms of genus 4 theta functions and in this section we wish to reduce this to questions about genus 2 theta functions, making use of a remarkable factorization theorem due to Accola and Fay \cite{acc71, fay73} and also observed by Mumford. Let $\pi:\hat{\mathcal{C}}\rightarrow\mathcal{C}$ be a cyclic unramified covering. The map $\pi$ leads to a map $\pi\sp\ast: \text{Jac}({\mathcal{C}})\rightarrow\text{Jac}(\hat{\mathcal{C}})$ which may be lifted to $\pi\sp\ast:{\mathbb C}\sp{g}\rightarrow {\mathbb C}\sp{\hat g}$. When $\hat z=\pi\sp\ast z$ the theta functions on $\hat{\mathcal{C}}$ and ${\mathcal{C}}$ are related by this factorization theorem. We shall now describe this theorem in the monopole setting. Previously we have considered the symmetry $\mathrm{R}$ of the spectral curve.
Now we shall focus on the cyclic symmetry $\sigma$, $\texttt{C}_3=<\sigma\,|\,\sigma^3=1>$ and the unramified covering $\pi:\widehat{\mathcal{C}}\rightarrow\mathcal{C}:=\widehat{\mathcal{C}}/\texttt{C}_3$ described in Lemma \ref{covering}. (The same considerations apply to the curve $\eta^3+\alpha\eta\zeta^2+\chi(\zeta^6+b \zeta^3-1)=0$ and more generally cyclically symmetric monopoles.) To implement this theory we need a choice of homology basis different from that described earlier, one which reflects this symmetry. We seek a homology basis $\{\hat{\mathfrak{a}}_0\sp{c},\ldots,\hat{\mathfrak{a}}_3\sp{c}; \hat{\mathfrak{b}}_0\sp{c},\ldots, \hat{\mathfrak{b}}_3\sp{c}\}$ on $\widehat{\mathcal{C}}$ and $\{\mathfrak{a}_0,\mathfrak{a}_1, \mathfrak{b}_0,\mathfrak{b}_1\}$ on $\mathcal{C}$ satisfying (for $i=1,2,3$) \[ \sigma^k(\hat{\mathfrak{a}}_i\sp{c})=\hat{\mathfrak{a}}_{i+k}\sp{c},\; \sigma^k(\hat{\mathfrak{b}}_i\sp{c})=\hat{\mathfrak{b}}_{i+k}\sp{c},\; \sigma^k(\hat{\mathfrak{a}}_0\sp{c})\sim\hat{\mathfrak{a}}_{0}\sp{c},\; \sigma^k(\hat{\mathfrak{b}}_0\sp{c})=\hat{\mathfrak{b}}_{0}\sp{c},\quad k=1,2,3, \] (that is $\sigma^k(\hat{\mathfrak{a}}_0\sp{c})$ is homologous to $\hat{\mathfrak{a}}_0\sp{c}$) and such that $$ \pi(\hat{\mathfrak{a}}_i\sp{c} )= {\mathfrak{a}}_1,\; \pi(\hat{\mathfrak{b}}_i \sp{c})= {\mathfrak{b}}_1,\; \pi(\hat{\mathfrak{a}}_0\sp{c} )= {\mathfrak{a}}_0,\; \pi(\hat{\mathfrak{b}}_0\sp{c} )= 3{\mathfrak{b}}_0.$$ We may construct such a basis as follows. We take $\hat{\mathfrak{a}}_1\sp{c}=\hat{\mathfrak{a}}_1\sp{s}$, $\hat{\mathfrak{b}}_1\sp{c}=\hat{\mathfrak{b}}_1\sp{s}=-\mathrm{R}\sp2 \hat{\mathfrak{a}}_1\sp{s}$ and $\hat{\mathfrak{a}}_0\sp{c}=\hat{\mathfrak{a}}_4\sp{s}$ and extend these by $\hat{\mathfrak{a}}_{i+k}\sp{c}=\sigma_\ast\sp{k}(\hat{\mathfrak{a}}_i\sp{c})$ and $\hat{\mathfrak{b}}_{i+k}\sp{c}=\sigma_\ast\sp{k}(\hat{\mathfrak{b}}_i\sp{c})$. Thus $\hat{\mathfrak{a}}_{i+k}\sp{s}=\mathrm{R}\sp{2(i+k-1)}\sigma_\ast\sp{k}(\hat{\mathfrak{a}}_i\sp{c})$, and $\hat{\mathfrak{b}}_{i+k}\sp{s}=\mathrm{R}\sp{2(i+k-1)}\sigma_\ast\sp{k}(\hat{\mathfrak{b}}_i\sp{c})$. At this stage we have defined the cyclic cycles $\hat{\mathfrak{a}}_{i}\sp{c}$, $\hat{\mathfrak{b}}_{i}\sp{c}$ ($i=1,2,3$) together with $\hat{\mathfrak{a}}_{0}\sp{c}$. We complete the homology basis by seeking an invariant cycle $\hat{\mathfrak{b}}_0\sp{c}$ and define the cycles on $\mathcal{C}$ in terms of these. Such a basis is exhibited in Figure 1(b). If we take as ordered bases $\{\hat\gamma_i\sp{s}\}=\{\hat{\mathfrak{a}}_1\sp{s},\ldots,\hat{\mathfrak{a}}_4\sp{s};$ $ \hat{\mathfrak{b}}_1\sp{s},\ldots, \hat{\mathfrak{b}}_4\sp{s}\}$ and $\{\hat\gamma_i\sp{c}\}=\{\hat{\mathfrak{a}}_1\sp{c},\ldots,\hat{\mathfrak{a}}_3\sp{c}, \hat{\mathfrak{a}}_0\sp{c}; \hat{\mathfrak{b}}_1\sp{c},\ldots, \hat{\mathfrak{b}}_3\sp{c},\hat{\mathfrak{b}}_0\sp{c}\}$ then $\hat\gamma_i\sp{c}=\mathfrak{S} \hat\gamma_i\sp{s}$ where $\mathfrak{S}$ is the symplectic matrix \begin{equation}\label{sympA} \mathfrak{S}=\left(\begin {array}{cccccccc} 1&0&0&0&0&0&0&0\\\noalign{\medskip}0& -1&0&0&0&1&0&0\\\noalign{\medskip}0&0&0&0&0&0&-1&0\\\noalign{\medskip}0 &0&0&1&0&0&0&0\\\noalign{\medskip}0&0&0&0&1&0&0&0\\\noalign{\medskip}0 &-1&0&0&0&0&0&0\\\noalign{\medskip}0&0&1&0&0&0&-1&0 \\\noalign{\medskip}0&0&0&-1&0&0&0&1\end {array} \right):=\left( \begin{array}{cc} A&B\\C&D \end{array}\right) \in\mathrm{Sp}(8,\mathbb{Z}).
\end{equation} For example $$\hat{\mathfrak{a}}\sp{c}_2=\sigma\hat{\mathfrak{a}}\sp{c}_1= \sigma\hat{\mathfrak{a}}\sp{s}_1=\mathrm{R}\hat{\mathfrak{a}}\sp{s}_2= -\mathrm{R}\sp2\hat{\mathfrak{b}}\sp{s}_2=(1+\mathrm{R})\hat{\mathfrak{b}}\sp{s}_2 =-\hat{\mathfrak{a}}\sp{s}_2+\hat{\mathfrak{b}}\sp{s}_2.$$ Fay works with the ordered basis $\{\hat\gamma_i\sp{c}\}=\{\hat{\mathfrak{a}}_0\sp{c}, \hat{\mathfrak{a}}_1\sp{c},\ldots,\hat{\mathfrak{a}}_3\sp{c} ; \hat{\mathfrak{b}}_0\sp{c},\hat{\mathfrak{b}}_1\sp{c},\ldots, \hat{\mathfrak{b}}_3\sp{c}\}$, this reordering being achieved (on both the $\mathfrak{a}$ and $\mathfrak{b}$-cycles) by $$\mathrm{S}:= \left( \begin {array}{cccc} 0&0&0&1\\ \noalign{\medskip}1&0&0&0\\ \noalign{\medskip}0&1&0&0\\ \noalign{\medskip}0&0&1&0\end {array} \right). $$ We may again represent these cycles as integrals between branch points: \begin{align}\begin{split} \hat{\mathfrak{a}}\sp{c}_1&=\gamma_1(\lambda_2,\lambda_1)+\gamma_2(\lambda_1,\lambda_2), \qquad \hat{\mathfrak{b}}\sp{c}_1=\gamma_1(\lambda_2,\lambda_1)+\gamma_3(\lambda_1,\lambda_2),\\ \hat{\mathfrak{a}}\sp{c}_2&=\gamma_2(\lambda_4,\lambda_3)+\gamma_3(\lambda_3,\lambda_4) ,\qquad \hat{\mathfrak{b}}\sp{c}_2=\gamma_2(\lambda_4,\lambda_3)+\gamma_1(\lambda_3,\lambda_4),\\ \hat{\mathfrak{a}}\sp{c}_3&=\gamma_3(\lambda_6,\lambda_5)+\gamma_1(\lambda_5,\lambda_6) ,\qquad \hat{\mathfrak{b}}\sp{c}_3=\gamma_3(\lambda_6,\lambda_5) +\gamma_2(\lambda_5,\lambda_6),\\ \hat{\mathfrak{a}}\sp{c}_0&=\gamma_3(\lambda_2,\lambda_1) +\gamma_2(\lambda_1,\lambda_5)+\gamma_3(\lambda_5,\lambda_6) +\gamma_1(\lambda_6,\lambda_2), \\ \hat{\mathfrak{b}}\sp{c}_0& =\gamma_3(\lambda_1,\lambda_2)+\gamma_1(\lambda_2,\lambda_5) +\gamma_2(\lambda_5,\lambda_6) +\gamma_3(\lambda_6,\lambda_3)+\gamma_1(\lambda_3,\lambda_4) +\gamma_2(\lambda_4,\lambda_1). \end{split}\label{homology_c} \end{align} With the cyclic homology basis just described we have \begin{theorem}[\bf Fay-Accola] \label{fayaccola} With respect to the ordered canonical homology bases $\{\hat\gamma_i\sp{c}\}=\{\hat{\mathfrak{a}}_0\sp{c}, \hat{\mathfrak{a}}_1\sp{c},\ldots,\hat{\mathfrak{a}}_3\sp{c} ; \hat{\mathfrak{b}}_0\sp{c},\hat{\mathfrak{b}}_1\sp{c},\ldots, \hat{\mathfrak{b}}_3\sp{c}\}$ and $\{\mathfrak{a}_0,\mathfrak{a}_1, \mathfrak{b}_0,\mathfrak{b}_1\}$ specified above then the $\mathfrak{a}$-normalized Riemann period matrices of $\hat{\mathcal{C}}$ and $\mathcal{C}$ take the respective forms \begin{equation}\hat{\tau}\sp{c}=\left( \begin{array}{cccc} a&b&b&b\\ b&c&d&d\\ b&d&c&d\\ b&d&d&c \end{array}\right),\qquad \tau\sp{c}=\left( \begin{array}{cc} \frac13 a&b\\ b&c+2d \end{array}\right) .\end{equation} Moreover, for arbitrary $\boldsymbol{ z}=(z_0,z_1)\in \mathbb{C}^2$ we have $\pi\sp\ast\boldsymbol{ z}=\boldsymbol{\hat z}=(3\,z_0,z_1,z_1,z_1)$ and \begin{equation} \frac{\theta(3\,z_0,z_1,z_1,z_1;\hat{\tau}\sp{c})} {\prod_{k=0}^{2}\theta\left[\begin{matrix}0&0 \\ \frac{k}{3}&0 \end{matrix}\right]\left(z_0,z_1;\tau\sp{c}\right)} =c_0(\widehat{\tau}\sp{c}) \label{fafactora} \end{equation} is a non-zero modular constant $c_0(\hat{\tau}\sp{c})$ independent of $\boldsymbol{ z}$.
\end{theorem} In our setting we obtain \begin{proposition} The quantities $a,b,c,d$ appearing in the Fay-Accola theorem are expressible in terms of the holomorphic integrals $x_1,x_4$ (with $\rho^3=1$) as \begin{align} a&=-\frac{6x_1^2-x_4^2+\rho(3x_1^2+x_4^2)}{3x_1^2-x_4^2},& b&=\frac{(1+2\rho)x_1x_4}{3x_1^2-x_4^2},\\ c&=\frac{2x_1^2-x_4^2+\rho(x_1^2-x_4^2)}{3x_1^2-x_4^2},& d&=-\frac{(1+2\rho)x_1^2}{3x_1^2-x_4^2}. \label{abcd} \end{align} \end{proposition} \begin{proof} If we transform the period matrix (\ref{taumats}) by the symplectic transformation (\ref{sympA}), $\hat\tau\sp{s}\rightarrow (C+D\hat\tau\sp{s})(A+B\hat\tau\sp{s})^{-1}$, we obtain a period matrix of the form $$\left( \begin{array}{cccc} c&d&d&b\\ d&c&d&b\\ d&d&c&b\\ b&b&b&a \end{array}\right)= \mathrm{S}\sp{-1}{\hat\tau}\sp{c} \mathrm{S}, $$ the final result coming after conjugation by $\mathrm{S}$ to change the order of the homology basis to match that of Fay. \end{proof} Again we see that the period matrix depends only on the ratio $x_1/x_4$, or equivalently on $\mathcal{R}$. Now to make use of the Fay-Accola theorem we must show that the vectors $\widehat{\boldsymbol{U}}$ and $\widetilde{\boldsymbol K}$ may be obtained by pullback from $\mathop{\rm Jac}\nolimits(\mathcal{C})$. To this end we have \begin{proposition} In the cyclic homology basis $\{\hat{\mathfrak{a}}_0\sp{c},\ldots,\hat{\mathfrak{a}}_3\sp{c}; \hat{\mathfrak{b}}_0\sp{c},\ldots, \hat{\mathfrak{b}}_3\sp{c}\}$ the winding vector $\widehat{\boldsymbol{U}}$ and vector $\widetilde{\boldsymbol{K}}$ take the form \begin{align} \widehat{\boldsymbol{U}}&=(\widehat{U}_0,\widehat{U}_1,\widehat{U}_1, \widehat{U}_1),\quad \widehat{U}_0=-\frac{C_0x_4}{3x_1^2-x_4^2},\quad \widehat{U}_1 =\frac{C_0 x_1}{3x_1^2-x_4^2},\label{wind}\\ \widetilde{\boldsymbol{K}} &= \left( \frac12,\frac12,\frac12,\frac12\right)+ \left( \frac12,\frac12,\frac12,\frac12\right)\hat{\tau}\sp{c} =(\widetilde{K}_0,\widetilde{K}_1, \widetilde{K}_1,\widetilde{K}_1),\label{ksymm} \end{align} where $C_0=-3(2n-m)$. The winding vector is a half-period and the Ercolani-Sinha vector may be written $2\widehat{\boldsymbol{U}}= \boldsymbol{\widehat n}+\boldsymbol{\widehat m}\,\hat{\tau}\sp{c}$ where $$(\boldsymbol{\widehat n},\boldsymbol{\widehat m})=(5n-m,n,n,n,3n,-m,-m,-m).$$ \end{proposition} \begin{proof} Using the explicit expression for the matrix $\mathcal{A}$ from \cite{bren06} one has the vector $\widehat{\boldsymbol{U}}\sp{s}=\nu(1,0,0,0)\mathcal{A}^{-1} $ and \begin{equation} \widehat{\boldsymbol{U}}\sp{s}=-C_0\left( \frac{x_1}{x_4^2},\frac{\rho x_1}{x_4^2}, \frac{\rho^2 x_1}{x_4^2}, -\frac{1}{x_4} \right). \end{equation} Then, with the symplectic transformation (\ref{sympA}), $\widehat{\boldsymbol{U}}=\widehat{\boldsymbol{U}}\sp{s} (A+B\hat\tau\sp{s})^{-1}$. The value of the constant $C_0$ is found from the condition $\textbf{H2}$. Performing the symplectic transformation on the vector $(\boldsymbol{n},\boldsymbol{ m})$ given in Proposition 7 yields $$(\boldsymbol{\widehat n}\mathrm{S},\boldsymbol{\widehat m}\mathrm{S})=(\boldsymbol{n},\boldsymbol{ m}) \mathfrak{S}\sp{-1}=(n,n,n,5n-m,-m,-m,-m,3n) $$ and the result follows. The only point to note is in the transformation of the vector of Riemann constants.
This has two parts: the vector $\boldsymbol K$ transforms as a vector, ${\boldsymbol K}\rightarrow{\boldsymbol K}(A+B\hat\tau\sp{s})^{-1}$; but in transforming the argument of a theta function by a symplectic transformation the characteristics of the theta function also transform (see Appendix A), and a theta function with no characteristics may acquire characteristics. When dealing with Riemann's theta function (with vanishing characteristic) the acquired characteristics are typically placed in the transformed vector of Riemann constants. We do this here and find $$\boldsymbol{\widetilde K}=\left(\tfrac12,\ldots,\tfrac12\right)\mathfrak{S}\sp{-1} \begin{pmatrix}1\\ \widehat\tau\sp{c} \end{pmatrix}+ \frac12\left((CD^T)_0,(AB^T)_0\right) \begin{pmatrix}1\\ \widehat\tau\sp{c}\end{pmatrix} $$ yielding the stated result. \end{proof} The form of the vectors $\widehat{\boldsymbol{U}}$ and $\widetilde{\boldsymbol K}$ given in the proposition establishes \begin{corollary} $\widehat{\boldsymbol{U}}=\pi\sp\ast(\boldsymbol{U}\sp\ast)$ and $\widetilde{\boldsymbol K}=\pi\sp\ast(\boldsymbol{K}\sp\ast)$ where $$ \boldsymbol{U}^{\ast}=\left(\frac13 \widehat{U}_0,\widehat{U}_1\right),\; \boldsymbol{K}^{\ast}=\left(\frac13 \widetilde{K}_0,\widetilde{K}_1\right).$$ \end{corollary} Therefore we may employ the Fay-Accola result. Introduce the three functions \begin{equation*} f_{k}(\lambda)=\theta(\lambda\,\boldsymbol{U}^{\ast}+\boldsymbol{K}^{\ast} +k\,\boldsymbol{l}^{\ast} \,|\, \tau^c),\quad k=0,+1,-1, \quad\boldsymbol{l}^{\ast}=\left(\frac13,0\right). \end{equation*} Up to exponential factors these correspond to the three genus 2 theta functions with characteristics in the denominator of (\ref{fafactora}). Making use of (\ref{beh3}, \ref{beh3b}) we arrive at \begin{theorem}\label{beH3} If $\lambda\in[0,2]$ then \begin{equation}\label{beh3c} H^0(\hat{\mathcal{C}},L^{\lambda}(n-2))\ne0\Longleftrightarrow \theta(\lambda\widehat{\boldsymbol{U}}\pm \widetilde{\boldsymbol{K}}\,|\,\hat\tau)=0\Longleftrightarrow f_{k}(\lambda)=0 \end{equation} for at least one $k\in\{0,\pm1\}$. \end{theorem} At this stage we have reduced the question \textbf{H3} to questions about various genus 2 theta functions and we shall look at these in the next section. \noindent{\bf{Remark:}} We note that the symplectic transformation $$ \left( \begin {array}{cccccccc} 0&0&0&0&0&1&1&1\\ \noalign{\medskip}0&0&0&0&1&0&0&0\\ \noalign{\medskip}0&2&-1&-1&0&0&0&0\\ \noalign{\medskip}0&0&0&0&0&0&1&-1\\ \noalign{\medskip}0&-1&0&0&0&0&0&0\\ \noalign{\medskip}-1&0&0&0&0&0&0&0\\ \noalign{\medskip}0&0&0&0&0&0&0&-1\\ \noalign{\medskip}0&1&-1&0&0&0&0&0\end {array} \right) $$ brings $\widehat\tau\sp{c}$ to the form $$ \left( \begin {array}{cccc} -\dfrac13\,{\dfrac {a}{-3\,{b}^{2}+ac+2\,ad}}&{\dfrac {b}{-3\,{b}^{2}+ac+2\,ad}}&-\dfrac13&0\\ \noalign{\medskip}{\dfrac {b}{ -3\,{b}^{2}+ac+2\,ad}}&-{\dfrac {c+2\,d}{-3\,{b}^{2}+ac+2\,ad}}&0&0\\ \noalign{\medskip}-\dfrac13&0&\dfrac{c-d}6&\dfrac12\\ \noalign{\medskip}0&0&\dfrac12&-\dfrac12\, \dfrac1{ c-d } \end {array} \right). $$ One may identify the top $2\times2$ block as conjugate to $-(\tau\sp{c})^{-1}/3$. Using this, Weierstrass reduction can be applied to rewrite the genus 4 theta functions as in \cite{bren06}.
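Such symplectic matrices are easy to sanity-check numerically. The following NumPy sketch (ours, purely illustrative) verifies that the matrix $\mathfrak{S}$ of (\ref{sympA}) satisfies the defining relation $\mathfrak{S}J\mathfrak{S}^{T}=J$, where $J$ is built from the $4\times4$ identity and zero blocks:
\begin{verbatim}
import numpy as np

# The matrix of equation (sympA), transcribed row by row
S = np.array([[ 1, 0, 0, 0, 0, 0, 0, 0],
              [ 0,-1, 0, 0, 0, 1, 0, 0],
              [ 0, 0, 0, 0, 0, 0,-1, 0],
              [ 0, 0, 0, 1, 0, 0, 0, 0],
              [ 0, 0, 0, 0, 1, 0, 0, 0],
              [ 0,-1, 0, 0, 0, 0, 0, 0],
              [ 0, 0, 1, 0, 0, 0,-1, 0],
              [ 0, 0, 0,-1, 0, 0, 0, 1]])
# Standard symplectic form J = [[0, I], [-I, 0]]
J = np.block([[np.zeros((4, 4)), np.eye(4)],
              [-np.eye(4), np.zeros((4, 4))]])
assert np.array_equal(S @ J @ S.T, J)   # S J S^T = J: S is symplectic
\end{verbatim}
The same check applies verbatim to the $8\times8$ transformation displayed in the remark above.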
\section{The Humbert variety} At this stage, by using the Fay-Accola theorem, we have reduced the question of the vanishing of the genus 4 $\theta$-function $\theta(\lambda\,\widehat{\boldsymbol{U}}+ \widetilde{\boldsymbol{K}};\widehat{\tau}\sp{c} )$ to that of the vanishing of the genus 2 $\theta$-functions $f_k(\lambda)$, $k\in\{0,\pm1\}$. We can in fact do better. In \cite{bren06} it was observed that $\mathcal{C}$ covered two elliptic curves, and we shall now exploit this geometry, making use of ideas of Humbert expounded in Krazer \cite{kr03}, which we now recall. \begin{definition} The period matrix $\tau$ of a genus two algebraic curve $\mathcal{C}$ belongs to the Humbert variety $\mathcal{H}_{\Delta}$ associated with the symplectic invariant $\Delta$ if there exist integers $q_i\in\mathbb{Z}$ satisfying \begin{equation} q_1+q_2\tau_{11}+q_3\tau_{12}+q_4\tau_{22}+ q_5 (\tau_{12}^2-\tau_{11}\tau_{22})=0 \label{humbert} \end{equation} and \begin{equation} \label{hopf} q_3^2-4(q_1q_5+q_2q_4)=\Delta. \end{equation} The curve $\mathcal{C}$ covers elliptic curves $\mathcal{E}_{\pm}$ if and only if $\Delta$ is a perfect square, $\Delta=h^2\geq 1$, $h\in\mathbb{N}$. Then the integer $h$ is the degree of the cover. \end{definition} \begin{theorem}[\bf Bierman-Humbert] Let $\tau\in \mathcal{H}_{\Delta}$ and $\Delta=h^2$. Then there exists a symplectic transformation $\mathfrak{S}\in \mathrm{Sp}(4,\mathbb{Z})$, such that \begin{equation} \mathfrak{S}\circ \tau =\widetilde{\tau}=\left( \begin{array}{cc} \widetilde{\tau}_{11}&\frac{1}{h}\\ \frac{1}{h}&\widetilde{\tau}_{22} \end{array} \right). \label{beir-humb} \end{equation} The transformation $\mathfrak{S}$ is given constructively and may be realized in a finite number of steps. \end{theorem} A modern proof of the theorem is given in \cite{murabayashi94}, revising that of Krazer \cite{kr03}. When a $2\times2$ period matrix $\widetilde{\tau}$ has the structure (\ref{beir-humb}) we may decompose the associated $\theta$-function as \begin{align} \theta(z_1,z_2\,|\,\widetilde{\tau})= \sum_{k=0}^{h-1} \vartheta_3\left(z_1+\frac{k}{h}\,|\,\widetilde{\tau}_{1,1}\right)\theta \left[\begin{array}{c}\frac{k}{h}\\0 \end{array}\right] \left( hz_2 \,|\,h^2\widetilde{\tau}_{2,2}\right), \quad h^2=\Delta.\label{decomposition} \end{align} Here and below $\vartheta_k(z|\tau)$, $k=1,2,3,4$, denote the Jacobi theta functions \cite{ba55}. Our case involves the Humbert variety $\mathcal{H}_{4}$, which has received the most study. The following holds. \begin{proposition} Let $\tau\sp{c}$ be the period matrix of the curve $\mathcal{C}$.
Then $\tau\sp{c}\in \mathcal{H}_{4}$ with \begin{equation} \widetilde{\tau}_{11}= \frac12\,{\frac { \left( \rho-1 \right) \left( -3\,{x}_{{1}}+2\,{x}_{{4}}+\rho\,{x}_{{4}} \right) }{3\,{x}_{{1}}-{x}_{{4}}+\rho\,{x}_{{4}}}} , \quad \widetilde{\tau}_{22}= \frac16\,{\frac { \left( 2+\rho \right) \left( -3 \,{x}_{{1}}-{x}_{{4}}+\rho\,{x}_{{4}} \right) }{3\,{ x}_{{1}}+2\,{x}_{{4}}+\rho\,{x}_{{4}}}} .\label{tau1122a} \end{equation} \end{proposition} \begin{proof} Substituting the expressions we have for $a$, $b$, $c$ and $d$ in terms of $x_1$ and $x_4$ in the matrix equality \[ \tau\sp{c}=\left(\begin{array}{cc} \tau_{11}&\tau_{12}\\ \tau_{12}&\tau_{22} \end{array} \right)=\left(\begin{array}{cc}\frac13 a&b\\b&c+2d \end{array} \right) \] we may eliminate $x_1$ and $x_4$ to obtain the two relations: \begin{align} 0&=-2-\tau_{22}+3\tau_{11}\label{relation1a},\\ 0&=1-3\tau_{11}-\tau_{12}^2+3\tau_{11}^2.\label{relation2a} \end{align} Using the first of these we may write $3\tau_{11}^2= \tau_{11}(\tau_{22}+2)$, which leads to the second taking the form \[1-\tau_{11}+\tau_{11} \tau_{22}-{\tau_{12}}^2=0.\] This is (\ref{humbert}) with $q_1 = 1$, $q_2 = -1$, $q_3 = 0$, $q_4 =0$, $q_5 = -1$ and the value of the invariant $\Delta=4$. (We remark that other possibilities may arise in the elimination process, but we present only the one resulting in $\Delta=4$.) Standard procedure \cite{kr03,bbeim94} yields the symplectic transformation \begin{equation} \mathfrak{S}= \left(\begin{array}{cccc} 0&1&1&0\\ \noalign{\medskip}1&1&0&1\\ \noalign{\medskip}0&1&0&1\\ \noalign{\medskip}0&0&1&0\end {array} \right)=\left( \begin{array}{cc} \alpha&\beta\\ \gamma&\delta\end{array} \right) \in \mathrm{Sp}(4,\mathbb{Z})\label{transf2a} \end{equation} which reduces $\tau\sp{c}$ to the form (\ref{beir-humb}) with $h=2$ and the stated identifications (\ref{tau1122a}). \end{proof} This proposition enables us to write the genus two theta functions, and so the functions $f_k$, in terms of Jacobi $\theta$-functions, \begin{align}\label{tfa} \theta(z_1,z_2\,|\,\widetilde{\tau})= \vartheta_3\left(z_1\left.\right|\widetilde{\tau}_{1,1}\right)\vartheta_3 \left( 2z_2\,|\,4\widetilde{\tau}_{2,2}\right)+ \vartheta_3\left(z_1+1/2\left.\right|\widetilde{\tau}_{1,1}\right)\vartheta_2 \left( 2z_2\,|\,4\widetilde{\tau}_{2,2}\right). \end{align} We will need to transform the argument ${\boldsymbol z}=\lambda\,\boldsymbol{U}^{\ast}+\boldsymbol{K}^{\ast} +k\,\boldsymbol{l}^{\ast}$ using the transformation (\ref{transf2a}), but before doing so it is helpful to consider the moduli $\widetilde{\tau}_{11}$ and $\widetilde{\tau}_{22}$ and an additional link between them. As explained earlier, the periods $x_{1,4}$ are expressible in terms of the integrals $\mathcal{I}_1(\alpha)$ and $\mathcal{J}_1(\alpha)$ whose ratio (\ref{esR}) is constrained to be $\mathcal{R}$. Only the ratios of $x_{1,4}$ appear in (\ref{tau1122a}) and we may replace these by $\mathcal{R}$, $$\widetilde{\tau}_{11}=\frac12\,{\frac {2+4\,\rho+3\,\mathcal{R}}{\mathcal{R}+1+2\,\rho}} =\frac{2i\sqrt{3}+3\,\mathcal{R}}{2(\mathcal{R}+i\sqrt{3})} ,\qquad \widetilde{\tau}_{22}=-{\frac {1+2\,\rho+3\,\mathcal{R}}{6\,\mathcal{R}}}= -\frac{i\sqrt{3}+3\,\mathcal{R}}{6\,\mathcal{R}}.
$$ It is convenient to introduce the purely imaginary quantity (with positive imaginary part) \begin{equation} \mathcal{T}=-\frac{2\imath\sqrt{3}}{\mathcal{R}}=2\imath\sqrt{3}\, \frac{n+m}{2n-m}.\label{finmoda} \end{equation} In terms of this we have \begin{equation}\label{modTa} \widetilde{\tau}_{11}=1- \frac1{\mathcal{T} -2},\qquad \widetilde{\tau}_{22}=\frac{\mathcal{T}}{12}-\frac12 \end{equation} and the transformed arguments take the form \begin{align*} \boldsymbol{U}'&= {\boldsymbol{U}}^{\ast} (\alpha+\beta\tau)^{-1}=\left( \frac { \left( -1+i\sqrt {3} \right) C_0\,\mathcal{T} }{36(\mathcal{T}-2)}, -\frac{\left( 3+i\sqrt {3} \right) C_0\,\mathcal{T} }{216} \right),\\ \boldsymbol{l}'&=\boldsymbol{l}^{\ast}(\alpha+\beta\tau)^{-1}=\left( -\frac { \left( \mathcal{T}-3 \right)}{3(\mathcal{T}-2)}, \frac16 \right), \\ \boldsymbol{K}'&={\boldsymbol{K}}^{\ast}(\alpha+\beta\tau)^{-1}+ \frac12\left((\gamma\delta^T)_0,(\alpha\beta^T)_0\right) \begin{pmatrix}1\\ \tau_{\mathfrak{a}}\end{pmatrix} =\left(\frac43-\frac1{3(\mathcal{T}-2)},\frac{\mathcal{T}}{12}-\frac16\right). \end{align*} Using (\ref{modTa}, \ref{tfa}) and various substitutions the following proposition is established in Appendix B. \begin{proposition}\label{humred} For each pair of relatively prime integers $(m,n)=1$ for which $(2n-m)(n+m)>0$ let $\widehat{\boldsymbol{U}}$ be the Ercolani-Sinha vector and $\widehat{\tau}$ the period matrix of the genus 4 curve described above. Then the function $\theta(\lambda\,\widehat{\boldsymbol{U}}+\widetilde{\boldsymbol{K}}\,|\, \widehat{\tau} )$ vanishes for $\lambda\in[0,2]$ if and only if at least one of the three functions (with $k\in\mathbb{Z}$) \begin{align} h_k(y):=\frac{\vartheta_3}{\vartheta_2} \left(i\sqrt{3}\,y+\frac{k\,\mathcal{T}}{3}\,\Big|\,\mathcal{T}\right) +(-1)^{k}\, \frac{\vartheta_2}{\vartheta_3}\left( y+ \frac{k}{3} \,|\,\frac{\mathcal{T}}{3}\right),\quad k=-1,0,1 \pmod 3,\label{vanishing11a} \end{align} also vanishes. Here $y:=y(\lambda)=\lambda\,(n+m)\rho/3$, $\mathcal{T}=2i\sqrt{3}(n+m)/(2n-m)$ and $\frac{\vartheta_3}{\vartheta_2} \left(z|\mathcal{T}\right)$ is shorthand for $\frac{\vartheta_3\left(z|\mathcal{T}\right)}{ \vartheta_2\left(z|\mathcal{T}\right)}$. Further, the functions $h_k$ satisfy \begin{align}\label{hper} h_{k+3}(y)&=h_k(y), \qquad h_{k}\left(y+\frac{2(n+m)}3\right)=h_{k-[n+m]}(y),\nonumber \\ h_{k}\left(y+\frac{2(n+m)}3\rho\right) &=\begin{cases} (-1)\sp{n+m}\,h_{k-[n+m]}(y)&\text{if }m\text{ even},\\ (-1)\sp{k}h_{k-[n+m]}(y+\mathcal{T}/2) &\text{if }m\text{ odd}, \end{cases}\\ h_k(y(\lambda+2))&=0 \Longleftrightarrow h_{k-[n+m]}(y(\lambda))=0.\nonumber \end{align} \end{proposition} Therefore the $h_k$ are elliptic functions with periods $2(n+m)$ and $4(n+m)\rho$. We also note that the zero divisors of $h_k(y)$ and $h_k(y+\mathcal{T}/2)$ are the same. Thus we have reduced the question of \textbf{H3} to that of the zeros of the elliptic functions $h_k$. This theta function question is much simpler than the corresponding theta function expressions (of much greater degree) of \cite{bren06}, which arose from the use of Weierstrass-Poincar\'e reduction. Exploiting the geometry has greatly simplified the problem. We shall turn to the theta function question in the next section. \noindent{\bf{Remark:}} We have an action of $\Gamma(2)\times \Gamma(2)$ on period matrices of the form $\begin{pmatrix}\lambda_1&1/2\\ 1/2&\lambda_2\end{pmatrix}$ which may be associated to any genus 2 curve with an extra involution distinct from the hyperelliptic involution.
Here $\Gamma(2)=\left\{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in PSL(2,\mathbb{Z})\,\Big|\, a\equiv d\equiv1\pmod2,\ b\equiv c\equiv0\pmod2 \right\}$ has generators $\tau\mapsto \tau+2$ and $\tau\mapsto \frac{\tau}{1-2\tau}$, and we have the exact sequence $$1\rightarrow\Gamma(2)\rightarrow PSL(2,\mathbb{Z})\rightarrow S_3\rightarrow1.$$ To see this we observe that with the action $\begin{pmatrix}A&B\\C&D\end{pmatrix}$: $\tau\mapsto(C+D\,\tau)(A+B\,\tau)\sp{-1}$ we have \begin{align*} s&&\begin{pmatrix} 1&2&-4&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&1&-2&1 \end{pmatrix}&&\begin{pmatrix}\lambda_1&1/2\\ 1/2&\lambda_2\end{pmatrix} \mapsto \begin{pmatrix}\frac{\lambda_1}{1-4\lambda_1}&1/2\\ 1/2&\lambda_2\end{pmatrix}\\ t&&\begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 1&0&1&0\\ 0&0&0&1 \end{pmatrix}&&\begin{pmatrix}\lambda_1&1/2\\ 1/2&\lambda_2\end{pmatrix} \mapsto \begin{pmatrix}{\lambda_1}+1&1/2\\ 1/2&\lambda_2\end{pmatrix}\\ &&\begin{pmatrix} 0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{pmatrix}&&\begin{pmatrix}\lambda_1&1/2\\ 1/2&\lambda_2\end{pmatrix} \mapsto\begin{pmatrix}\lambda_2&1/2\\ 1/2&\lambda_1\end{pmatrix} \end{align*} If we set $\mu=2\lambda_1$ then $s$ and $t$ give the actions $\mu\mapsto \mu+2$, $\mu\mapsto \frac{\mu}{1-2\mu}$, and so generate $\Gamma(2)$; conjugation by the final matrix then extends this to $\Gamma(2)\times \Gamma(2)$. Then with \begin{equation}\label{bolzaform} \widetilde{\tau}_{11}'=s^2t^{-1}\left(\widetilde{\tau}_{11}\right) =- \frac1{\mathcal{T} +6}, \qquad \widetilde{\tau}_{22}'=t\left(\widetilde{\tau}_{22}\right)=\frac{\mathcal{T} +6}{12} \end{equation} we have $12\widetilde{\tau}_{11}'\widetilde{\tau}_{22}'+1=0$, the relation Bolza claimed for period matrices of such curves \cite{b88}. We may now give an alternate characterization of those curves (\ref{bren03}) satisfying Hitchin's conditions $\textbf{H1}$ and $\textbf{H2}$. \begin{proposition}\label{formbmn} The family of curves $\eta^3+\chi(\zeta^6+b\zeta^3-1)=0$ satisfies the constraints $\mathbf{H1}$ and $\mathbf{H2}$ when \begin{equation} b(m,n)=-\frac{\sqrt{3}(p(m,n)^6-45p(m,n)^4+135p(m,n)^2-27) } {9p(m,n)(p(m,n)^4-10p(m,n)^2+9)}\label{bformula} \end{equation} and $\chi=\chi(m,n)$ may be expressed in terms of $m$, $n$ and $b(m,n)$ by Proposition \ref{propesnt}. Here $m$ and $n$ are relatively prime integers $(m,n)=1$ for which $(m+n)(m-2n)<0$ and \begin{equation} p(m,n)=\frac{ 3\vartheta_3^2 \left(0\vert \frac{\mathcal{T}(m,n)}{2}\right) } {\vartheta_3^2\left(0\vert \frac{\mathcal{T}(m,n)}{6}\right) },\qquad \mathcal{T}(m,n)=2\imath\sqrt{3}\frac{n+m}{2n-m}.\end{equation} \end{proposition} Indeed we may relate the elliptic curves $\mathcal{E}_\pm$ of Lemma \ref{covering} to the period matrix (\ref{tau1122a}), or the symplectically equivalent (\ref{bolzaform}), via \begin{corollary}\label{modulikpm} The genus two curve $\mathcal{C}$ two-sheetedly covers the elliptic curves $\mathcal{E}_{\pm}$ whose Jacobian moduli may be written \[ k_+^2=\frac{\vartheta_2^4\left(0\vert \frac{\mathcal{T}}{6}\right)} {\vartheta_3^4\left(0\vert \frac{\mathcal{T}}{6}\right)},\quad k_-^2=\frac{\vartheta_2^4\left(0\vert \frac{\mathcal{T}}{2}\right)} {\vartheta_3^4\left(0\vert \frac{\mathcal{T}}{2}\right)}. \] \end{corollary} The proof of both the proposition and corollary is presented in Appendix C.
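Proposition \ref{formbmn} is straightforward to evaluate numerically. The sketch below (ours; it assumes the theta convention $\vartheta_3(z|\tau)=\sum_{n\in\mathbb{Z}}e^{i\pi\tau n^2+2\pi i n z}$, and the truncation level $N$ is our choice) computes $p(m,n)$ and then $b(m,n)$ from (\ref{bformula}):
\begin{verbatim}
import numpy as np

def theta3(z, tau, N=60):
    """Truncated series theta_3(z|tau) = sum_n exp(i pi tau n^2 + 2 pi i n z)."""
    n = np.arange(-N, N + 1)
    return np.sum(np.exp(1j*np.pi*tau*n**2 + 2j*np.pi*n*z))

def b_of_mn(m, n):
    """b(m,n) from the theta-constant formula; (m,n)=1 with (m+n)(m-2n)<0."""
    T = 2j*np.sqrt(3.0)*(n + m)/(2.0*n - m)
    p = (3*theta3(0, T/2)**2 / theta3(0, T/6)**2).real   # p(m,n) is real here
    return -np.sqrt(3.0)*(p**6 - 45*p**4 + 135*p**2 - 27) \
           / (9*p*(p**4 - 10*p**2 + 9))

print(b_of_mn(0, 1), b_of_mn(1, 1))
\end{verbatim}
Numerically this returns $b\approx\pm7.071\approx\pm5\sqrt{2}$ for $(m,n)=(0,1)$ and $(1,1)$, consistent with these being the tetrahedrally symmetric monopole curves identified below.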
\section{The theta function question} The final step in establishing the existence of monopoles with spectral curve (\ref{bren03}) is then to understand the vanishing properties of the function \begin{equation}\label{defH} H(y)=h_{-1}(y) h_0(y) h_1(y) \end{equation} with the definitions introduced in Proposition \ref{humred}. $H(y)$ is also an elliptic function, with periods $2(n+m)/3$ and $4(n+m)\rho/3$. Given the periodicity in $k$ of $h_k$ proven in this proposition we have \begin{lemma} $H(y(\lambda))=0\Leftrightarrow H(y(\lambda+2))=0$ and these functions have the same vanishing properties. \end{lemma} Numerical calculations in \cite{bren07} suggested the conjecture \begin{conjecture}\label{conjecture} For each pair of relatively prime integers $(m,n)=1$ for which $(2n-m)(n+m)>0$ let $y=y(\lambda)=\lambda(n+m)\rho/3$ and $\mathcal{T}=2i\sqrt{3}(n+m)/(2n-m)$. Then $H(y)$ vanishes $2(|n|-1)$ times on the interval $\lambda\in(0,2)$. \end{conjecture} To prove the uniqueness of the tetrahedral monopole within the class of symmetric monopole curves it will suffice to show that only for $(m,n)=(1,1)$ and $(0,1)$ does $H(y)$ have no zeros within this range. At present we do not know how to prove the more general conjecture. We expect vanishing at $\lambda=0$ and $2$. This follows from \begin{lemma}\label{fundv} We have the following identities for all $\tau$ in the upper half-plane: \begin{align} \frac{\vartheta_3\left(\frac{\tau}{3}\,\big|\,\tau \right)} {\vartheta_2\left(\frac{\tau}{3}\,\big|\,\tau \right)}&= \frac{\vartheta_2\left(\frac{1}{3}\,\big|\,\frac{\tau}{3} \right)} {\vartheta_3\left(\frac{1}{3}\,\big|\,\frac{\tau}{3} \right)},\label{relation}\\ i\sqrt{3}\,\vartheta_4^2(0\,|\,\tau)\, \frac{ \vartheta_1\left(\frac{\tau}{3}\,\big|\,\tau \right) \vartheta_4\left(\frac{\tau}{3}\,\big|\,\tau \right) } {\vartheta_2^2\left(\frac{\tau}{3}\,\big|\,\tau \right)}&+ \vartheta_4^2\left(0\,\big|\,\frac{\tau}{3}\right) \frac{ \vartheta_1\left(\frac{1}{3}\,\big|\,\frac{\tau}{3} \right) \vartheta_4\left(\frac{1}{3}\,\big|\,\frac{\tau}{3} \right) } {\vartheta_3^2\left(\frac{1}{3}\,\big|\,\frac{\tau}{3} \right)}=0.\label{derrel} \end{align} Similar identities may be obtained by cyclic interchange of the $\theta$-subscripts $i,j,k\in \{2,3,4\}$. \end{lemma} As a consequence we obtain \begin{align*} \frac{\vartheta_3\left(\frac{\tau}{3}\,\big|\,\tau \right)} {\vartheta_2\left(\frac{\tau}{3}\,\big|\,\tau \right)}= \frac{\vartheta_3\left(\pm\frac{\tau}{3}\,\big|\,\tau \right)} {\vartheta_2\left(\pm\frac{\tau}{3}\,\big|\,\tau \right)}= \frac{\vartheta_3\left(\frac{2\tau}{3}\,\big|\,\tau \right)} {\vartheta_2\left(\frac{2\tau}{3}\,\big|\,\tau \right)}&= \frac{\vartheta_2\left(\frac{1}{3}\,\big|\,\frac{\tau}{3} \right)} {\vartheta_3\left(\frac{1}{3}\,\big|\,\frac{\tau}{3} \right)}=- \frac{\vartheta_2\left(\frac{2}{3}\,\big|\,\frac{\tau}{3} \right)} {\vartheta_3\left(\frac{2}{3}\,\big|\,\frac{\tau}{3} \right)}. \end{align*} Although we have not seen these identities in the standard texts known to us, they may be established by standard techniques. We then have \begin{lemma}\label{vanishing02} At $\lambda=0$ we have \begin{equation} h_{\pm1}(0)=0, \; h_0(0)\neq 0,\label{vanishing111} \end{equation} and each of the functions $h_{\pm1}(y(\lambda))$ vanishes to second order in $\lambda$. \end{lemma} \begin{proof} At the point $y=0$ we have $h_{\pm1}(0)=0$ on account of (\ref{relation}).
Further, the derivatives \begin{align*} \frac{d}{dz}\left(\frac{\vartheta_3}{\vartheta_2}\left(z\,\Big|\,\mathcal{T}\right)\right)&=\pi \vartheta_4^2(0\vert\mathcal{T})\, \frac{\vartheta_1}{\vartheta_2}\left(z\,\Big|\,\mathcal{T}\right) \frac{\vartheta_4}{\vartheta_2}\left(z\,\Big|\,\mathcal{T}\right),\\ \frac{d}{dz}\left(\frac{\vartheta_2}{\vartheta_3}\left(z\,\Big|\,\mathcal{T}\right)\right)&=-\pi \vartheta_4^2(0\vert\mathcal{T})\, \frac{\vartheta_1}{\vartheta_3}\left(z\,\Big|\,\mathcal{T}\right) \frac{\vartheta_4}{\vartheta_3}\left(z\,\Big|\,\mathcal{T}\right) \end{align*} show that $h_{\pm1}'(0)$ also vanishes, on account of (\ref{derrel}), and so both functions vanish to second order here. Standard $\theta$-function expansions show that $h_0(0)$ is nonvanishing. \end{proof} A consequence of this lemma is that $H(y(\lambda))$ vanishes to fourth order in $\lambda$ at $\lambda=0$ and $2$; this is equivalent to the higher-order vanishing of $\theta(\lambda\widehat{\boldsymbol{U}}\pm \widetilde{\boldsymbol{K}},\hat\tau)$ remarked upon earlier. We may now establish the theorem stated in the introduction. \begin{proof}[Proof of Theorem 1] The dependence (\ref{finmoda}) of the modulus $\mathcal{T}=\mathcal{T}(\mathcal{R})$ on the ratio $\mathcal{R}$ means that the vanishing conditions $h_k(y)=0$ and $H(y)=0$ define (respectively) implicit functions $y=X_k(\mathcal{R})$ and $y=X(\mathcal{R})$ of the real variable $\mathcal{R}$. Although our problem has $\mathcal{R}\in\mathbb{Q}$ we may extend its domain to the whole real half-line, $|\mathcal{R}|\in\mathbb{R}^+$ (recall our conventions are such that $\mathcal{R}<0$). The functions $X_\ast(\mathcal{R})$ are clearly multi-valued, reflecting the periodicities of $h_k(y)$ and $H(y)$. We may determine many points on $X(\mathcal{R})$ using the fundamental Lemma \ref{fundv}. For example, consider $y=2\rho/3$ and solutions to $h_{-1}(2\rho/3)=0$. Substitution and some simplification lead to solving \begin{equation}\label{mudots}\frac{\vartheta_3}{\vartheta_2} \left(\frac{|\mathcal{R}|+2}6\,\mathcal{T}\,\Big|\,\mathcal{T}\right) =\frac{\vartheta_2}{\vartheta_3}\left( \frac{|\mathcal{R}|}6\,\mathcal{T}+ \frac{1}{3} \,|\,\frac{\mathcal{T}}{3}\right). \end{equation} Using Lemma \ref{fundv} we find solutions when \begin{itemize} \item $|\mathcal{R}|$ is even, giving $|\mathcal{R}|=6k$ or $6k+2$, i.e., $2,6,8,12,14,\ldots$; \item $|\mathcal{R}|$ is odd, giving $|\mathcal{R}|=3,5,9,11,15,17,\ldots$. \end{itemize} Similar arguments give solutions $$\begin{array}{lcl} y=2\rho/3&h_{-1}&|\mathcal{R}|=2,3,5,6,8,9,11,12,\ldots\\ y=2\rho/3&h_{0}&|\mathcal{R}|=1,2,4,5,7,8,10,11,\ldots\\ y=4\rho/3&h_{0}&|\mathcal{R}|=1,2,4,5,7,8,10,11\ldots\\ y=4\rho/3&h_{0}&|\mathcal{R}|=3k+1/2,\\ y=4\rho/3&h_{1}&|\mathcal{R}|=2,3,5,6,8,9,11,12,\ldots\\ y=4\rho/3&h_{1}&|\mathcal{R}|=3k+1/2,\ldots\\ y=2\rho&h_{1}&|\mathcal{R}|=1,2,3,4,5,6,7,8,9,\ldots\\ \end{array}$$ and so on. At several of these points the tangent to $X_\ast(\mathcal{R})$ becomes vertical; these points may be obtained by solving the analogous formulae for the tangent. A graph of some of the components of $X(\mathcal{R})$ is given in Figure \ref{figXR3}. \begin{figure}[ht] \caption{The plot shows (some) branches of the multi-valued function $y=X(\mathcal{R})$ given implicitly by the equation $H(y)=0$. Circles on the plot show points at which the tangent lines are vertical. The bold lines correspond to $|\mathcal{R}|=2$ and $1/2$.
The different colours correspond to branches of $y=X_k(\mathcal{R})$ (blue=$X_{-1}$, green=$X_0$, red=$X_1$).} \label{figXR3} \vskip1cm \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(11,8) \put(-1,4){$y/\rho$} \put(6,-1){$|\mathcal{R}|$} \put(0,0){\includegraphics[width=11cm,height=8cm]{test.eps}} \end{picture} \end{center} \end{figure} Now using the symmetry (\ref{symnmR}) we may assume that $n+m\ge1$, and so for $\lambda\in(0,2)$ we have $y/\rho\in(0,2(n+m)/3)\supseteq(0,2/3)$. We see that $X(\mathcal{R})$ always has a zero in $(0,2/3)$ for all $|\mathcal{R}|=(2n-m)/(n+m)\in(1,2)\cup(2,\infty)$, and so these values cannot yield a monopole. Similarly, if $n+m\ge2$ there is always a zero of $X(\mathcal{R})$ in $(0,4/3)$ for any $|\mathcal{R}|\ne1/2$. Thus we must have either $n+m=1$ and $|\mathcal{R}|\in(0,1]\cup\{2\}$ or $n+m=2$ and $|\mathcal{R}|=1/2$. The only solutions to these constraints are $(m,n)=(0,1)$ with $|\mathcal{R}|=2$ and $(m,n)=(1,1)$ with $|\mathcal{R}|=1/2$. For all other $(m,n)$ there are solutions to $H(y)=0$ for $\lambda\in(0,2)$, and thus by Proposition \ref{humred} they do not yield monopoles. The two cases $(m,n)=(0,1)$, $(1,1)$ were shown to yield the tetrahedrally symmetric monopoles in \cite{bren06}. Thus we have established Theorem \ref{mainthm}. \end{proof} \noindent{\bf{Remark:}} We numerically observe that $h_k(y)=0\Longleftrightarrow h_k(\rho y)=0 \Longleftrightarrow h_k(\rho^2 y)=0$. The modulus of the elliptic functions $h_k$ is $2\rho$, so this is not simply complex multiplication. This observation remains unexplained. \section{Discussion} This paper has been devoted to the study of certain charge three (centred) $SU(2)$ BPS monopoles. The spectral curve of the general monopole of this class may be put (by a rotation) in the form $$\eta^3+\eta(\alpha_0\zeta\sp4 +\alpha_1\zeta\sp3+\alpha\zeta\sp2-\bar\alpha_1\zeta+\bar\alpha_0) +\beta\zeta\sp6 +\beta_1\zeta\sp5+\beta_2\zeta\sp4+\gamma\zeta\sp3 -\bar\beta_2\zeta\sp2 +\bar\beta_1\zeta -\beta=0 $$ with $\alpha$, $\beta$ and $\gamma$ real. Hitchin's constraints on the spectral curve mean there are transcendental relations amongst these coefficients, and the outstanding problem is to realise these. Two sorts of problem arise. The first is implementing Hitchin's constraint \textbf{H2} coming from the triviality of the line bundle $L\sp2$ on the spectral curve. This leads, via the equivalent Ercolani-Sinha constraints (Lemma 2), to requiring the vector of $\mathfrak{b}$-periods of the meromorphic differential $\gamma_\infty$ to be a half-period in the period lattice. In \cite{bren06} it was shown for the trigonal curve $$\eta^3 +\beta\zeta\sp6 +\beta_1\zeta\sp5+\beta_2\zeta\sp4+\gamma\zeta\sp3 -\bar\beta_2\zeta\sp2 +\bar\beta_1\zeta -\beta=0 $$ how these might be reexpressed in terms of the four $\mathfrak{a}$-periods of a specified holomorphic differential. These constraints were then solved for the symmetric curves $$\eta^3 +\beta\zeta\sp6 +\gamma\zeta\sp3 -\beta=0, $$ and a countable family of curves satisfying this constraint of Hitchin ensued. The problem of requiring a curve with specified periods of a given meromorphic differential arises in many settings within finite-gap integration theory. The bijective correspondence between harmonic maps $T^2\rightarrow S^3$ and algebro-geometric data, the specification of curves with given filling fractions in the AdS/CFT correspondence, and the search for closed real geodesics on an ellipsoid all lead to this problem.
Finding ways to solve such problems will be an important area for future research. The second type of problem, and the focus of this paper, has been in satisfying Hitchin's constraint \textbf{H3}, the vanishing of a real one-parameter family of cohomologies of certain line bundles, $H^0(\hat{\mathcal{C}},L^{\lambda}(n-2))=0$ for $\lambda\in(0,2)$. We reexpressed this in terms of the intersection of a real line with the theta divisor $\Theta$ of the curve, and the problem is to count the number of intersection points. Again a more general theory is called for. We made progress here by utilizing two features of the geometry of our situation. The first was that our curve (\ref{bren03}) has extra symmetry: it falls within a class studied by Hitchin, Manton and Murray \cite{hmm95} when looking at spectral curves of monopoles with spatial symmetries. Curves of the form \begin{equation}\label{cychmm} \eta^3+\alpha\eta\zeta^2+\beta\zeta^6+\gamma \zeta^3-\beta=0 \end{equation} have a cyclic $\texttt{C}_3$ symmetry $(\zeta,\eta)\rightarrow( \rho\zeta,\rho\eta)$ where $\rho^3=1$. If $\gamma=0$ this is enlarged to a dihedral symmetry $\texttt{D}_3$ with $(\zeta,\eta)\rightarrow( 1/\zeta,-\eta/\zeta^2)$. Actually the spectral curves themselves have larger symmetry (for any $\gamma$ the curve (\ref{cychmm}) has the symmetry $(\zeta,\eta)\rightarrow( -1/\zeta,-\eta/\zeta^2)$), but the nomenclature is based on those symmetries that may be realized as spatial symmetries. This cyclic symmetry means there exists a quotient spectral curve; here it was of genus 2. We were then able to show that a theorem of Fay and Accola applied, and so the problem reduced to one about the theta divisor of the quotient curve (Theorem \ref{beH3}). For a cyclically invariant charge $n$ monopole exactly the same considerations apply, and we find the genus $(n-1)^2$ monopole curve is an $n$-fold unbranched cover of a genus $(n-1)$ hyperelliptic curve. This is the affine Toda curve of Seiberg-Witten theory observed by Sutcliffe \cite{sut96}. Thus symmetry, together with the Fay-Accola theorem, reduces the problem significantly. The second simplifying feature reduced the problem to one of elliptic functions: Humbert theory tells us that our curve covers an elliptic curve. This feature is a consequence of the great symmetry of our curve and will not persist for the general curves (\ref{cychmm}). Notwithstanding this reduction to questions of elliptic functions, we have not proven the general Conjecture \ref{conjecture} counting the number of intersections of the line with the theta divisor. We have, however, established the uniqueness of the tetrahedrally symmetric monopole within spectral curves of the form (\ref{bren03}). An obvious line for further study is to seek those monopoles within the class (\ref{cychmm}). Hitchin, Manton and Murray argued that there were five loci of monopoles within this class. These loci are totally geodesic submanifolds of the full moduli space and may be viewed as the orbits of geodesic monopole scattering. Of these loci, one corresponded to $\texttt{D}_3$ symmetric monopoles: asymptotically we have $\alpha^3=27\beta^2$ (with $\beta$ large and positive at one end and negative at the other), and half-way along this locus there is the axisymmetric monopole. The other four loci were isomorphic: at one end asymptotically one has $\alpha^3=27\beta^2$ (with $\beta$ of either sign) and $\gamma=0$, while at the other end $\alpha=\pi^2/4-3b^2$, $\beta=0$ and $\gamma=2b(b^2+\pi^2/4)$ (with $b$ of either sign).
Half-way along this is the tetrahedrally symmetric monopole, the four loci corresponding to four distinct orientations of the tetrahedron. Extending our work to this broader class encounters new difficulties. Although we may use the cyclic symmetry to reduce the curve and the Ercolani-Sinha constraints to ones for the quotient curve, the period integrals arising are not simply expressible in terms of hypergeometric functions, and the curve does not cover an elliptic curve. These are significant complications and we hope to pursue them elsewhere. \section*{Acknowledgements} Both authors are grateful to MISGAM for funding a research visit of VZE to Edinburgh in 2009.
\section{INTRODUCTION} The jet of 3C 273 is a nearby (z = 0.158) large-scale jet (with a possible deprojected length of $\sim$ 0.654 Mpc; Harris \& Krawczynski 2006). This ideal laboratory for large-scale jet physics has been studied through multi-band observations (e.g., Jester et al. 2001, 2005, 2006, 2007; Uchiyama et al. 2006; Meyer \& Georganopoulos, 2014). The SED of the 3C 273 jet implies a two-component nature (Sambruna et al. 2001; Jester et al. 2006; Uchiyama et al. 2006): a synchrotron low-energy component from radio to optical, and a high-energy component including the X-rays, whose mechanism is still puzzling. Many researchers have discussed the merits and demerits of the possible mechanisms responsible for the X-ray jet (e.g., Sambruna et al. 2001; Uchiyama et al. 2006; see Harris \& Krawczynski 2006 for a review). Here, we only discuss the IC/CMB model for the X-ray jet emission of 3C 273. The Achilles' heel of the IC/CMB model for the X-ray jet emission is that it leads to a high, even super-Eddington, kinetic power (Dermer \& Atoyan 2004; Uchiyama et al. 2006; Meyer \& Georganopoulos, 2014). We think this problem may be due to the symmetric assumption of an isotropic pitch-angle distribution; an anisotropic pitch-angle distribution may be closer to the real situation. For simplicity we consider only the symmetric assumption. Recently, Meyer \& Georganopoulos (2014) presented $\emph{Fermi}$ observations that rule out the IC/CMB X-ray model entirely for knot A in the 3C 273 jet. We, however, reach a different conclusion. In $\S$ 2, we show that the IC/CMB model can well explain the large-scale jet X-ray radiation of 3C 273, and we apply the IC/CMB model to the individual knots in the 3C 273 jet without violating the new $\emph{Fermi}$ observations. A conclusion is given in $\S$ 3. \section{THE MODEL, FITTING RESULTS AND DISCUSSION} We consider a standard synchrotron scenario (Kardashev 1962) in which the source particles have a power-law distribution, but we allow the particle spectral index to be a piecewise function, reflecting complex and nonlinear acceleration (e.g., Liu \& Shen, 2007; Sahayanathan, 2008). For isotropic distributions, the synchrotron flux expression is (Kardashev 1962): \begin{equation} I_{\nu,s}\propto\left\{ \begin{array}{ll} \nu^{-(p-1)/2},&\mbox{~$\nu_{min,s} \ll \nu \ll \nu_{s}$;}\\ \nu^{-p/2},&\mbox{~$\nu_{s} \ll \nu \ll \nu_{max,s}$,}\\ \end{array} \right. \end{equation} where $p$ is the particle spectral index, and $\nu_{max,s}$ and $\nu_{min,s}$ are the maximum and minimum cutoff frequencies of the synchrotron spectrum. $\nu_{s}$ is the synchrotron break frequency, i.e., the peak frequency in the synchrotron SED. The IC/CMB spectrum is an exact copy of the synchrotron one (Tavecchio et al. 2000; Celotti et al. 2001; Georganopoulos et al. 2006): \begin{equation} I_{\nu,c}\propto\left\{ \begin{array}{ll} \nu^{-(p-1)/2},&\mbox{~$\nu_{min,c} \ll \nu \ll \nu_{c}$;}\\ \nu^{-p/2},&\mbox{~$\nu_{c} \ll \nu \ll \nu_{max,c}$,}\\ \end{array} \right. \end{equation} where $\nu_{max,c}$ and $\nu_{min,c}$ are the maximum and minimum cutoff frequencies of the IC/CMB spectrum. $\nu_{c}$ is the IC/CMB break frequency, i.e., the peak frequency in the IC/CMB SED. For the 3C 273 jet with equipartition conditions $B\delta\approx 10^{-4}$ G (Jester et al. 2005), $\nu_c=6.6 \times 10^8 \delta^2 \nu_s$ and $L_c=2.5 \times 10^{-3} \delta^4 L_s$ (Georganopoulos et al. 2006; Meyer \& Georganopoulos 2014), where $L_c$ and $L_s$ are the peak IC/CMB and synchrotron luminosities.
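To make the shape of this two-component model concrete, the following schematic NumPy sketch (ours; the parameter values are placeholders rather than the fitted values of Table 1, and the normalization is schematic) implements Eqs. (1) and (2) together with the equipartition scalings just quoted:
\begin{verbatim}
import numpy as np

# Placeholder parameters (illustrative only, not the fitted values)
p      = 2.6                 # particle spectral index
nu_s   = 1.6e13              # synchrotron break frequency [Hz]
nu_cut = 1.4e15              # common cutoff nu_max,s ~ nu_min,c [Hz]
delta  = 5.09                # Doppler factor
nu_c   = 6.6e8 * delta**2 * nu_s   # IC/CMB break (B*delta ~ 1e-4 G scaling)

def component(nu, nu_b, K):
    """Broken power law of Eqs. (1)-(2): index (p-1)/2 below nu_b, p/2 above."""
    nu = np.asarray(nu, dtype=float)
    return K * np.where(nu < nu_b, (nu/nu_b)**(-(p - 1)/2),
                        (nu/nu_b)**(-p/2))

def model_flux(nu, K_s=1.0):
    """Three-section flux form: synchrotron below nu_cut, IC/CMB copy above."""
    K_c = 2.5e-3 * delta**4 * K_s   # schematic use of L_c = 2.5e-3 delta^4 L_s
    nu = np.asarray(nu, dtype=float)
    return np.where(nu < nu_cut, component(nu, nu_s, K_s),
                    component(nu, nu_c, K_c))
\end{verbatim}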
The data (from radio to X-ray) for the knots and the large-scale jet are compiled from Jester et al. (2005, 2007) and Uchiyama et al. (2006) (see Liu \& Shen (2009) for details). For simplicity, we choose $\nu_{max,s} \sim \nu_{min,c} \sim \nu_{cut}$ and assume $\nu_{cut}$ lies between two data points. Thus, the flux formula of the 3C 273 large-scale jet from radio to X-ray has a three-section form. Liu \& Shen (2009) performed independent fits to the two components; here we fit the two components simultaneously. We use the fitting method of Liu \& Shen (2007, 2009) and Liu et al. (2013) to obtain the best fit. We divide the data points from radio to X-ray into three groups, where the first two groups constitute the synchrotron component and the third group belongs to the IC/CMB component. Then, we calculate the corresponding $\chi^2_{\nu}$ for all the possible divisions, and choose the division with the minimal $\chi^2_{\nu}$ as the best fit. The best-fitting results are shown in Table 1. The data points and model fits are plotted in Fig. 1, including the $\emph{Fermi}$ measurements and $95\%$, $99\%$, $99.9\%$ upper limits (Meyer $\&$ Georganopoulos 2014) and the TeV flux upper limits from shallow HESS observations (Aharonian et al. 2005; Georganopoulos et al. 2006). Our fit interprets the emission of the 3C 273 jet well, and does not violate the $99\%$ and $99.9\%$ 3-10 GeV band $\emph{Fermi}$ upper limits or the other $\gamma$-ray observations; i.e., the $\emph{Fermi}$ observations do not rule out the IC/CMB X-ray model. $\nu_{cut}$ lies between $1.00\times10^{15}$ and $1.86\times10^{15}$ Hz. In the case of equipartition conditions, we derive $\nu_{c}\sim2.82\times10^{23}$ Hz, $\delta\sim5.09$ and $B\sim19.64$ $\mu$G. Based on the IC/CMB model fits, the $\emph{Fermi}$ flux may be contributed mainly by the small-scale jet of 3C 273 (i.e., the core). If we assume that the Doppler factor, the magnetic field and $\nu_{cut}$ are similar along the jet, we can predict the synchrotron radio spectrum below 10 GHz and the IC/CMB $\gamma$-ray spectrum of the knots in the 3C 273 jet. The results are shown in Fig. 2 (note that we do not consider possible absorption effects). For the individual knots, the synchrotron spectrum of the low-energy electrons responsible for the IC/CMB X-ray emission may differ from the extrapolation of the 10 GHz radio spectrum (i.e., $p$ may be a piecewise function), which may be due to more complex acceleration, or even multiple populations of electrons. The IC/CMB model for the knots in the 3C 273 jet still does not violate the new $\emph{Fermi}$ observations. If the low-energy radio spectrum of the 3C 273 large-scale jet were similar to that of knot A in Fig. 2, the $95\%$ 3-10 GeV band $\emph{Fermi}$ upper limit of 3C 273 would even lie above the flux expected from the IC/CMB model with a lower Doppler factor. Future high-resolution observations could test our interpretation of the SED of the knots and the large-scale jet in 3C 273. \section{CONCLUSION} We fit the IC/CMB model to the X-ray radiation of the large-scale jet and knots of 3C 273; for the individual knots $p$ may not be a constant, and the model fits do not violate the new $\emph{Fermi}$ observations. Based on our model fit, the $\emph{Fermi}$ flux may come mainly from the small-scale jet of 3C 273 (i.e., the core). Our model fits could be tested by future observations.
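\noindent{\bf Note on the fitting procedure.} As a concrete illustration of the grouping search of Sect. 2, the sketch below (ours, and deliberately simplified: it assumes frequency-ordered contiguous groups and fits each group with an independent power law, whereas our actual fits tie the groups to the joint synchrotron plus IC/CMB model) scans all three-group divisions and keeps the one with the smallest total $\chi^{2}$:
\begin{verbatim}
import numpy as np

def fit_group(nu, f, e):
    """Chi^2 of a least-squares power-law fit in log-log space."""
    a, b = np.polyfit(np.log10(nu), np.log10(f), 1)
    model = 10.0**np.polyval([a, b], np.log10(nu))
    return float(np.sum(((f - model) / e)**2))

def best_partition(nu, f, e):
    """All contiguous three-group divisions of frequency-ordered data."""
    n, best = len(nu), (np.inf, None)
    for i in range(2, n - 3):              # group 1 = points [0, i)
        for j in range(i + 2, n - 1):      # group 2 = [i, j), group 3 = [j, n)
            chi2 = (fit_group(nu[:i], f[:i], e[:i]) +
                    fit_group(nu[i:j], f[i:j], e[i:j]) +
                    fit_group(nu[j:], f[j:], e[j:]))
            if chi2 < best[0]:
                best = (chi2, (i, j))
    return best   # (total chi^2, breakpoint indices)
\end{verbatim}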
\section{ACKNOWLEDGMENT} We acknowledge support from the National Natural Science Foundation of China (NSFC) through grant U1231106 and from the Open Research Program (14010203) of the Key Laboratory for Research in Galaxies and Cosmology, Chinese Academy of Sciences. We also acknowledge the Scientific Research Foundation of the Civil Aviation University of China (09QD15X). \clearpage
\section{Generalized Column Subset Selection}\label{Sec:CSS} The Column Subset Selection (CSS) problem can be generally defined as the selection of a few columns from a data matrix that best approximate its span \cite{Frieze98-Rnd, Drineas06-Cols, Boutsidis08-Clust, Boutsidis09a-CSS, Boutsidis11a-NearOpt, Civril12-CSS-Sparse}. We extend this definition to the generalized problem of selecting a few columns from a source matrix to approximate the span of a target matrix. The generalized CSS problem can be formally defined as follows: \begin{problem} {\bf (Generalized Column Subset Selection)} \label{Pr:GenCSSNew} \label{Pr:FS} Given a source matrix $A\in\mathbb{R}^{m\times n}$, a target matrix $B\in\mathbb{R}^{m\times r}$ and an integer $l$, find a subset of columns $\mc{L}$ from $A$ such that $|\mc{L}| =l$ and \begin{displaymath} \mc{L}={\arg\min}_{\mc{S}}\:\|B-\proj{S}B\|_{F}^{2}, \end{displaymath} where $\mc{S}$ is the set of the indices of the candidate columns from $A$, $\proj{S}\in\mathbb{R}^{m\times m}$ is a projection matrix which projects the columns of $B$ onto the span of the set $\mc{S}$ of columns, and $\mc{L}$ is the set of the indices of the selected columns from $A$. \end{problem} The CSS criterion $\mathbf{F}\left(\mc{S}\right)=\|B-\proj{S}B\|_{F}^{2}$ represents the sum of squared errors between the target matrix $B$ and its rank-$l$ approximation $\proj{S}B$. In other words, it calculates the squared Frobenius norm of the residual matrix $F=B-\proj{S}B$. Other types of matrix norms can also be used to quantify the reconstruction error \cite{Boutsidis09a-CSS, Boutsidis11a-NearOpt}. The present work, however, focuses on developing algorithms that minimize the Frobenius norm of the residual matrix. The projection matrix $\proj{S}$ can be calculated as $\proj{S}=A_{:\mc{S}} \inv{A_{:\mc{S}}^{T}A_{:\mc{S}}} A_{:\mc{S}}^{T} \:,$ where $A_{:\mc{S}}$ is the sub-matrix of $A$ which consists of the columns corresponding to $\mc{S}$. It should be noted that if $\mc{S}$ is known, the term $\inv{A_{:\mc{S}}^{T}A_{:\mc{S}}} A_{:\mc{S}}^{T}B$ is the closed-form solution of the least-squares problem $T^{*}={\arg\min}_T\left\Vert B-A_{:\mc S}T\right\Vert _{F}^{2}$. \section{A Fast Greedy Algorithm for Generalized CSS} Problem \ref{Pr:GenCSSNew} is a combinatorial optimization problem whose optimal solution can be obtained in $O\pth{\max\pth{n^lmrl, n^lml^2}}$ time. In order to approximate this optimal solution, we propose a fast greedy algorithm that selects one column from $A$ at a time. The greedy algorithm is based on a recursive formula for the projection matrix $P^{(\mc{S})}$, which can be derived as follows. \begin{lemma} \label{Lm:Proj} Given a set of columns $\mc{S}$. For any $\mc{P} \subset \mc{S}$, $\proj{S}=P^{\left(\mc{P}\right)}+R^{\left(\mc{R}\right)}\:, $ where $R^{\left(\mc{R}\right)} = E_{:\mc{R}} \inv{E_{:\mc{R}}^{T}E_{:\mc{R}}} E_{:\mc{R}}^{T}$ is a projection matrix which projects the columns of $E = A - P^{\left(\mc{P}\right)} A$ onto the span of the subset $\mc{R} = \mc{S}\setminus \mc{P}$ of columns. \end{lemma} \proof Define $D=A_{:\mc{S}}^{T}A_{:\mc{S}}$. The projection matrix $\proj{S}$ can be written as $\proj{S}=A_{:\mc{S}}D^{-1}A_{:\mc{S}}^{T}$. Without loss of generality, the columns and rows of $A_{:\mc{S}}$ and $D$ can be rearranged such that the first sets of rows and columns correspond to $\mc{P}$.
Let $S=D_{\mc{R}\mc{R}}-D_{\mc{P}\mc{R}}^{T}D_{\mc{P}\mc{P}}^{-1}D_{\mc{P}\mc{R}}$ be the Schur complement \cite{lutkepohl1996handbook} of $D_{\mc{P}\mc{P}}$ in $D$, where $D_{\mc{P}\mc{P}}=A_{:\mc{P}}^{T}A_{:\mc{P}}$, $D_{\mc{P}\mc{R}}=A_{:\mc{P}}^{T}A_{:\mc{R}}$ and $D_{\mc{R}\mc{R}}=A_{:\mc{R}}^{T}A_{:\mc{R}}$. Using the block-wise inversion formula \cite{lutkepohl1996handbook}, $D^{-1}$ can be calculated as \begin{displaymath} D^{-1}= \left[\begin{array}{cc} D_{\mc{P}\mc{P}}^{-1}+D_{\mc{P}\mc{P}}^{-1}D_{\mc{P}\mc{R}}S^{-1}D_{\mc{P}\mc{R}}^{T}D_{\mc{P}\mc{P}}^{-1} & -D_{\mc{P}\mc{P}}^{-1}D_{\mc{P}\mc{R}}S^{-1}\\ -S^{-1}D_{\mc{P}\mc{R}}^{T}D_{\mc{P}\mc{P}}^{-1} & S^{-1}\end{array}\right] \end{displaymath} Substituting $A_{:\mc{S}}$ and $D^{-1}$ into $\proj{S}=A_{:\mc{S}}D^{-1}A_{:\mc{S}}^{T}$, the projection matrix simplifies to \begin{equation} \label{Eq:ProjP2} \begin{split} \proj{S}=A_{:\mc{P}}D_{\mc{P}\mc{P}}^{-1}A_{:\mc{P}}^{T} +\left(A_{:\mc{R}}-A_{:\mc{P}}D_{\mc{P}\mc{P}}^{-1}D_{\mc{P}\mc{R}}\right)S^{-1}\left(A_{:\mc{R}}^{T}-D_{\mc{P}\mc{R}}^{T}D_{\mc{P}\mc{P}}^{-1}A_{:\mc{P}}^{T}\right) \:. \end{split} \end{equation} The first term on the right-hand side is the projection matrix $P^{\left(\mc{P}\right)}$ which projects vectors onto the span of the subset $\mc{P}$ of columns. The second term can be simplified as follows. Let $E$ be an $m \times n$ residual matrix which is calculated as: $E=A-P^{\left(\mc{P}\right)}A$. The sub-matrix $E_{:\mc{R}}$ can be expressed as \begin{displaymath} E_{:\mc{R}}=A_{:\mc{R}}-P^{\left(\mc{P}\right)}A_{:\mc{R}} = A_{:\mc{R}}-A_{:\mc{P}}\pth{A_{:\mc{P}}^{T}A_{:\mc{P}}}^{-1}A_{:\mc{P}}^{T}A_{:\mc{R}}=A_{:\mc{R}}-A_{:\mc{P}}D_{\mc{P}\mc{P}}^{-1}D_{\mc{P}\mc{R}} \:. \end{displaymath} Since projection matrices are idempotent, $P^{\left(\mc{P}\right)}P^{\left(\mc{P}\right)}=P^{\left(\mc{P}\right)}$ and \begin{displaymath} E_{:\mc{R}}^{T}E_{:\mc{R}} = \pth{A_{:\mc{R}}-P^{\left(\mc{P}\right)}A_{:\mc{R}}}^T \pth{A_{:\mc{R}}-P^{\left(\mc{P}\right)}A_{:\mc{R}}} =A_{:\mc{R}}^TA_{:\mc{R}}-A_{:\mc{R}}^TP^{\left(\mc{P}\right)}A_{:\mc{R}}\:. \end{displaymath} Substituting $P^{\left(\mc{P}\right)}=A_{:\mc{P}}\pth{A_{:\mc{P}}^{T}A_{:\mc{P}}}^{-1}A_{:\mc{P}}^T$ gives \begin{displaymath} \begin{split} E_{:\mc{R}}^{T}E_{:\mc{R}} =A_{:\mc{R}}^TA_{:\mc{R}}-A_{:\mc{R}}^TA_{:\mc{P}}\pth{A_{:\mc{P}}^{T}A_{:\mc{P}}}^{-1}A_{:\mc{P}}^TA_{:\mc{R}}= D_{\mc{R}\mc{R}}-D_{\mc{P}\mc{R}}^{T}D_{\mc{P}\mc{P}}^{-1}D_{\mc{P}\mc{R}} = S \:. \end{split} \end{displaymath} Replacing $\pth{A_{:\mc{P}}D_{\mc{P}\mc{P}}^{-1}A_{:\mc{P}}^{T}}$, $\pth{A_{:\mc{R}}-A_{:\mc{P}}D_{\mc{P}\mc{P}}^{-1}D_{\mc{P}\mc{R}}}$ and $S$ by $P^{\left(\mc{P}\right)}$, $E_{:\mc{R}}$ and $E_{:\mc{R}}^{T}E_{:\mc{R}}$ respectively, Equation (\ref{Eq:ProjP2}) can be expressed as \begin{displaymath} \begin{split} \proj{S}=\proj{P} + E_{:\mc{R}}\pth{E_{:\mc{R}}^{T}E_{:\mc{R}}}^{-1}E_{:\mc{R}}^T\:. \end{split} \end{displaymath} The second term is the projection matrix $R^{\left(\mc{R}\right)}$ which projects vectors onto the span of $E_{:\mc{R}}$. This proves that $\proj{S}$ can be written in terms of $P^{\left(\mc{P}\right)}$ and $R^{\left(\mc{R}\right)}$ as $\proj{S}=P^{\left(\mc{P}\right)}+R^{\left(\mc{R}\right)}$. \hfill\BlackBox Given the recursive formula for $\proj{S}$, the following theorem derives a recursive formula for $\mathbf{F}\left(\mathcal{S}\right)$. \begin{theorem}\label{Th:RecF} Given a set of columns $\mc{S}$.
For any $\mc{P} \subset \mc{S}$, $ \mathbf{F}\left(\mathcal{S}\right)=\mathbf{F}\left(\mathcal{P}\right)-\left\Vert R^{\left(\mathcal{R}\right)}F\right\Vert _{F}^{2} \:,$ where $F = B - P^{\left(\mc{P}\right)}B$ and $R^{\left(\mc{R}\right)}$ is a projection matrix which projects the columns of $F$ onto the span of the subset $\mc{R} = \mc{S}\setminus \mc{P}$ of columns of $E=A - P^{\left(\mc{P}\right)}A$. \end{theorem} \proof By definition, $\mathbf{F}\left(\mathcal{S}\right)=\left\Vert B-P^{\left(\mathcal{S}\right)}B\right\Vert _{F}^{2}$. Using Lemma \ref{Lm:Proj}, $P^{\left(\mathcal{S}\right)}B=P^{\left(\mathcal{P}\right)}B+R^{\left(\mathcal{R}\right)}B$. The term $R^{\left(\mathcal{R}\right)}B$ is equal to $R^{\left(\mathcal{R}\right)}F$ since $E_{:\mc{R}}^{T}B = E_{:\mc{R}}^{T}F$. To prove this, multiplying $E_{:\mc{R}}^{T}$ by $F = B - P^{\left(\mc{P}\right)}B$ gives $ E_{:\mc{R}}^{T}F=E_{:\mc{R}}^{T}B-E_{:\mc{R}}^{T}P^{\left(\mc{P}\right)}B$. Using $E_{:\mc{R}}=A_{:\mc{R}}-P^{\left(\mc{P}\right)}A_{:\mc{R}}$, the expression $E_{:\mc{R}}^{T}P^{\left(\mc{P}\right)}$ can be written as $ E_{:\mc{R}}^{T}P^{\left(\mc{P}\right)}=A_{:\mc{R}}^{T}P^{\left(\mc{P}\right)}-A_{:\mc{R}}^{T}P^{\left(\mc{P}\right)}P^{\left(\mc{P}\right)}$. This is equal to $0$ since $P^{\left(\mc{P}\right)}$ is idempotent, $P^{\left(\mc{P}\right)}P^{\left(\mc{P}\right)}=P^{\left(\mc{P}\right)}$. Substituting in $\mathbf{F}\left(\mathcal{S}\right)$ and using $F=B-P^{\left(\mathcal{P}\right)}B$ gives \begin{equation*} \mathbf{F}\left(\mathcal{S}\right)=\left\Vert B-P^{\left(\mathcal{P}\right)}B-R^{\left(\mathcal{R}\right)}F\right\Vert _{F}^{2} = \left\Vert F-R^{\left(\mathcal{R}\right)}F\right\Vert _{F}^{2} \end{equation*} Using the relation between the Frobenius norm and the trace, $\mathbf{F}\left(\mathcal{S}\right)$ can be simplified to \begin{displaymath} \mathbf{F}\left(\mathcal{S}\right)=\text{tr}\left(\left(F-R^{\left(\mathcal{R}\right)}F\right)^{T}\left(F-R^{\left(\mathcal{R}\right)}F\right)\right) =\text{tr}\left(F^{T}F-F^{T}R^{\left(\mathcal{R}\right)}F\right)=\left\Vert F\right\Vert _{F}^{2}-\left\Vert R^{\left(\mathcal{R}\right)}F\right\Vert _{F}^{2} \end{displaymath} Using $\mathbf{F}\left(\mathcal{P}\right)=\left\Vert F\right\Vert _{F}^{2}$ proves the theorem. \hfill\BlackBox Using the recursive formula for $\mathbf{F}\left(\mathcal{S}\cup\{i\}\right)$ allows the development of a greedy algorithm which at iteration $t$ selects the column $p$ such that \begin{equation*} p={\arg\min}_i\:\mathbf{F}\left(\mathcal{S}\cup\{i\}\right)={\arg\max}_i\left\Vert P^{\pth{\left\{ i\right\}}}F\right\Vert _{F}^{2}\:. \end{equation*} Let $G=E^TE$ and $H=F^TE$; the objective function $\left\Vert P^{\pth{\left\{ i\right\}}}F\right\Vert _{F}^{2}$ can be simplified to \begin{equation*} \left\Vert E_{:i}\left(E_{:i}^{T}E_{:i}\right)^{-1}E_{:i}^{T}F\right\Vert _{F}^{2}=\text{tr}\left(F^TE_{:i}\left(E_{:i}^{T}E_{:i}\right)^{-1}E_{:i}^{T}F\right)=\frac{\left\Vert F^TE_{:i}\right\Vert ^{2}}{E_{:i}^{T}E_{:i}}=\frac{\left\Vert H_{:i}\right\Vert ^{2}}{G_{ii}}\:. \end{equation*} This allows the definition of the following greedy generalized CSS problem. \begin{problem} {\bf (Greedy Generalized CSS)} \label{Pr:GreedyCSS} At iteration $t$, find column $p$ such that \begin{equation*} p={\arg\max}_i\hspace{1em}\frac{\left\Vert H_{:i}\right\Vert ^{2}}{G_{ii}} \end{equation*} where $H=F^TE$, $G = E^TE$, $F=B-P^{\left(\mathcal{S}\right)}B$, $E=A-P^{\left(\mathcal{S}\right)}A$ and $\mc{S}$ is the set of columns selected during the first $t-1$ iterations.
\end{problem} For iteration $t$, define $\bs{\delta} = G_{:p}$, $\bs{\gamma} = H_{:p}$, $\bs{\omega} = G_{:p}/\sqrt{G_{pp}} = \bs{\delta}/\sqrt{\bs{\delta}_{p}}$ and $\bs{\upsilon} = H_{:p}/\sqrt{G_{pp}} = \bs{\gamma}/\sqrt{\bs{\delta}_{p}}$ \:. The vectors $\bs{\delta}^{(t)}$ and $\bs{\gamma}^{(t)}$ can be calculated in terms of $A$, $B$ and previous $\bs{\omega}$'s and $\bs{\upsilon}$'s as \begin{equation} \label{eq:delta_omega} \bs{\delta}^{(t)}=A^{T}A_{:p}-\sum_{r=1}^{t-1}\bs{\omega}_{p}^{(r)}\bs{\omega}^{(r)}, \:\:\:\:\:\: \bs{\gamma}^{(t)}=B^{T}A_{:p}-\sum_{r=1}^{t-1}\bs{\omega}_{p}^{(r)}\bs{\upsilon}^{(r)}\:. \end{equation} The numerator and denominator of the selection criterion at each iteration can be calculated in an efficient manner without explicitly calculating $H$ or $G$ using the following theorem. \begin{theorem} \label{Th:Rec_fg2} Let $\bs{f}_{i}=\left\Vert H_{:i}\right\Vert ^{2}$ and $\bs{g}_{i}=G_{ii}$ be the numerator and denominator of the greedy criterion function for column $i$ respectively, $\bs{f}=\left[\bs{f}_{i}\right]_{i=1..n}$, and $\bs{g}=\left[\bs{g}_{i}\right]_{i=1..n}$. Then, \begin{displaymath} \begin{split}\bs{f}^{(t)}&=\Big(\bs{f}-2\left(\bs{\omega}\circ\left(A^{T}B\bs{\upsilon}-\Sigma_{r=1}^{t-2}\left(\bs{\upsilon}^{\left(r\right)T}\bs{\upsilon}\right)\bs{\omega}^{^{\left(r\right)}}\right)\right) +\|\bs{\upsilon}\|^{2}\left(\bs{\omega}\circ\bs{\omega}\right)\Big)^{(t-1)},\\ \bs{g}^{(t)} &=\Big(\bs{g}-\left(\bs{\omega}\circ\bs{\omega}\right)\Big)^{(t-1)}\:,\end{split} \end{displaymath} where $\circ$ represents the Hadamard product operator. \end{theorem} In the update formulas of Theorem \ref{Th:Rec_fg2}, $A^TB$ can be calculated once and then used in different iterations. This makes the computational complexity of these formulas $\bigOO(nr)$ per iteration. The computational complexity of the algorithm is dominated by that of calculating $A^TA_{:p}$ in (\ref{eq:delta_omega}), which is $\bigOO(mn)$ per iteration. The other computationally expensive step is that of calculating the initial $\bs{f}$, which is $\bigOO(mnr)$. However, these steps can be implemented in an efficient way if the data matrix is sparse. The total computational complexity of the algorithm is $\bigOO(\max(mnl, mnr))$, where $l$ is the number of selected columns. Algorithm \ref{Alg:GenGCSS} in Appendix A shows the complete greedy algorithm, and a short illustrative sketch of it is included below. \section{Generalized CSS Problems} We describe a variety of problems that can be formulated as instances of the generalized column subset selection problem (see Table \ref{tab:GCSS}). It should be noted that for some of these problems, the use of greedy algorithms has been explored in the literature. However, identifying the connection between these problems and the problem presented in this paper gives more insight into them, and allows the efficient greedy algorithm presented here to be explored in other interesting domains.
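Before turning to these instances, we give the promised sketch. The following self-contained NumPy implementation (ours, not the reference implementation of Algorithm \ref{Alg:GenGCSS}) realizes the greedy selection of the previous section; it maintains only $\bs{f}$, $\bs{g}$ and the previously computed vectors $\bs{\omega}^{(r)}$, $\bs{\upsilon}^{(r)}$, as in Theorem \ref{Th:Rec_fg2} and Equation (\ref{eq:delta_omega}):
\begin{verbatim}
import numpy as np

def greedy_generalized_css(A, B, l, eps=1e-12):
    """Select l columns of A whose span best approximates the columns of B."""
    AtB = A.T @ B                     # A^T B: computed once, reused throughout
    f = np.sum(AtB**2, axis=1)        # f_i = ||H_{:i}||^2, initially H = B^T A
    g = np.sum(A**2, axis=0)          # g_i = G_{ii},       initially G = A^T A
    omegas, upsilons, selected = [], [], []
    for _ in range(l):
        # greedy criterion ||H_{:i}||^2 / G_{ii}; exhausted columns get -inf
        crit = np.where(g > eps, f / np.maximum(g, eps), -np.inf)
        p = int(np.argmax(crit))
        selected.append(p)
        # delta = (E^T E)_{:p} and gamma = (F^T E)_{:p} via the recursion
        # in terms of A, B and the previous omega's and upsilon's
        delta = A.T @ A[:, p]
        gamma = AtB[p].copy()
        for om, up in zip(omegas, upsilons):
            delta -= om[p] * om
            gamma -= om[p] * up
        omega = delta / np.sqrt(delta[p])
        upsilon = gamma / np.sqrt(delta[p])
        # rank-one updates of f and g (the recursions of the theorem)
        s = AtB @ upsilon
        for om, up in zip(omegas, upsilons):
            s -= (up @ upsilon) * om
        f = f - 2.0 * omega * s + (upsilon @ upsilon) * omega**2
        g = g - omega**2
        omegas.append(omega)
        upsilons.append(upsilon)
    return selected
\end{verbatim}
For the basic CSS instance described next, one simply calls \texttt{greedy\_generalized\_css(A, A, l)}.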
\begin{table*}[t] \caption{\label{tab:GCSS}Different problems as instances of the generalized column subset selection problem.} \begin{center} {\scriptsize \begin{tabular}{ccc} \hline \textbf{\small Method} & \textbf{\small Source}{\small{} } & \textbf{\small Target }\tabularnewline \hline {\small Generalized CSS} & {\small $A$} & {\small $B$}\tabularnewline \hline {\small Column Subset Selection} & {\small Data matrix $A$} & {\small Data matrix $A$}\tabularnewline {\small Distributed CSS} & {\small Data matrix $A$} & {\small Random subspace $A\Omega$ }\tabularnewline {\small SVD-based CSS} & {\small Data matrix $A$} & {\small SVD-based subspace $U_{k}\Sigma_{k}$}\tabularnewline {\small Sparse Approximation} & {\small Atoms $D$} & {\small Target vector $\bf{y}$}\tabularnewline {\small Simultaneous Sparse Approximation} & {\small Atoms $D$} & {\small Target vectors $\left[\bf{y}_{\pth{1}}, \bf{y}_{\pth{2}}, ... \bf{y}_{\pth{r}}\right]$}\tabularnewline \hline \end{tabular}{\scriptsize \par} \end{center} \end{table*} \textbf{Column Subset Selection.} The basic column subset selection problem \cite{Frieze98-Rnd, Drineas06-Cols, Boutsidis08-Clust, Boutsidis09a-CSS, Boutsidis11a-NearOpt} is clearly an instance of the generalized CSS problem. In this instance, the target matrix is the same as the source matrix, $B=A$, and the goal is to select a subset of columns from a data matrix that best represent the other columns. The greedy algorithm presented in this paper can be directly used for solving the basic CSS problem. A detailed comparison of the greedy CSS algorithm and the state-of-the-art CSS methods can be found in \cite{Farahat12tt}. In our previous work \cite{farahat11-icdm, farahat12}, we successfully used the proposed greedy algorithm for unsupervised feature selection, which is an instance of the CSS problem. We used the greedy algorithm to solve two instances of the generalized CSS problem: one is based on selecting features that approximate the original matrix, $B=A$, and the other is based on selecting features that approximate a random partitioning of the features, $B_{:c}=\sum_{j\in\mc{P}_{c}}A_{:j}$. The proposed greedy algorithms achieved superior clustering performance in comparison to state-of-the-art methods for unsupervised feature selection. \textbf{Distributed Column Subset Selection.} The generalized CSS problem can be used to define distributed variants of the basic column subset selection problem. In this case, the matrix $B$ is defined to encode a concise representation of the span of the original matrix $A$. This concise representation can be obtained using an efficient method like random projection. In our recent work \cite{Farahat13css}, we defined a distributed CSS problem based on this idea and used the proposed greedy algorithm to select columns from big data matrices that are massively distributed across different machines. \textbf{SVD-based Column Subset Selection.} {\c{C}}ivril and Magdon-Ismail \cite{Civril12-CSS-Sparse} proposed a CSS method which first calculates the Singular Value Decomposition (SVD) of the data matrix, and then selects the subset of columns which best approximates the leading singular values of the data matrix. The formulation of this CSS method is an instance of the generalized CSS problem, in which the target matrix is calculated from the leading singular vectors of the data matrix.
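Concretely, constructing this target amounts to a couple of lines; a hypothetical snippet reusing the \texttt{greedy\_generalized\_css} sketch above (with $k$ and $l$ chosen by the user):
\begin{verbatim}
import numpy as np

# SVD-based CSS: the target matrix is B = U_k Sigma_k
U, s, Vt = np.linalg.svd(A, full_matrices=False)
B = U[:, :k] * s[:k]            # scales column i of U_k by sigma_i
selected = greedy_generalized_css(A, B, l)
\end{verbatim}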
The greedy algorithm presented in \cite{Civril12-CSS-Sparse} can be implemented using Algorithm \ref{Alg:GenGCSS} by setting $B=U_{k}\Sigma_{k}$, where $U_{k}$ is a matrix whose columns represent the leading left singular vectors of the data matrix, and $\Sigma_{k}$ is a matrix whose diagonal elements represent the corresponding singular values. Our greedy algorithm is, however, more efficient than the greedy algorithm of \cite{Civril12-CSS-Sparse}.

\textbf{Sparse Approximation.} Given a target vector and a set of basis vectors, also called atoms, the goal of sparse approximation is to represent the target vector as a linear combination of a few atoms \cite{tropp2004greed}. Different instances of this problem have been studied in the literature under different names, such as variable selection for linear regression \cite{Das2008}, sparse coding \cite{olshausen1997sparse, lee2007efficient}, and dictionary selection \cite{CevherK11, DasK11}. If the goal is to minimize the discrepancy between the target vector and its projection onto the subspace of selected atoms, sparse approximation can be considered an instance of the generalized CSS problem in which the target matrix is a vector and the columns of the source matrix are the atoms. Several greedy algorithms have been proposed for sparse approximation, such as basic matching pursuit \cite{mallat1993matching}, orthogonal matching pursuit \cite{tropp2007signal}, and orthogonal least squares \cite{chen1989orthogonal}. The greedy algorithm for generalized CSS is equivalent to the orthogonal least squares algorithm (as defined in \cite{blumensath2007difference}), because at each iteration it selects the new column such that the reconstruction error after adding this column is minimized. Algorithm \ref{Alg:GenGCSS} can be used to efficiently implement the orthogonal least squares algorithm by setting $B=\bf{y}$, where $\bf{y}$ is the target vector. However, an additional step is needed to calculate the weights of the selected atoms as $\inv{A_{:\mc{S}}^{T}A_{:\mc{S}}} A_{:\mc{S}}^{T}\bf{y}$.

\textbf{Simultaneous Sparse Approximation.} A more general sparse approximation problem is the selection of atoms which represent a group of target vectors. This problem is referred to as simultaneous sparse approximation \cite{tropp2006algorithms}. Different greedy algorithms have been proposed for simultaneous sparse approximation with different constraints \cite{tropp2006algorithms, CevherK11}. If the goal is to select a subset of atoms to represent different target vectors without imposing sparsity constraints on each representation, simultaneous sparse approximation becomes an instance of the generalized CSS problem, where the source columns are the atoms and the target columns are the input signals.

\section{Conclusions}
We define a generalized variant of the column subset selection problem and present a fast greedy algorithm for solving it. The proposed greedy algorithm can be effectively used to solve a variety of problems that are instances of the generalized column subset selection problem.
\newpage \bibliographystyle{abbrv}
\section{Introduction} \label{sec1}
In the past decade, much effort has been devoted to the on-demand single-electron source, within which electron wave packets carrying single or few electric charges can be injected coherently into a quantum conductor \cite{dubois-2013-minim-excit, dubois-2013-integ-fract, keeling-2006-minim-excit, bocquillon-2013-coher-indis, hofer-2013-emiss-time, moskalets-2013-singl-elect-sourc, gabelli-2013-shapin-time, jullien-2014-quant-tomog-elect, feve-2007-deman-coher, keeling-2008-coher-partic, mahe-2010-curren-correl, albert-2010-accur-quant, grenier-2011-singl-elect, haack-2011-coher-singl, sherkunov-2012-optim-pumpin, bocquillon-2012-elect-quant-optic, fletcher-2013-clock-contr, ubbelohde-2014-partit-deman, ryu-2016-ultraf-emiss, splettstoesser-2017-singl-elect, misiorny-2018-shapin-charg, dashti-2019-minim-excit}. In a simple way, such injection can be realized by applying a nanosecond pulse on the Ohmic contact of the conductor, as illustrated in Fig.~\ref{fig1}. The injected charge $Q$ of the wave packet is determined by the flux $\varphi$ of the pulse, while the detailed quantum states of the wave packet can be controlled via fine-tuning the profile of the pulse. This offers a simple but feasible approach to achieve time-resolved quantum control of propagating electron wave packets in solid-state circuits \cite{grenier-2013-fract-minim, wahl-2014-inter-charg, ferraro-2014-real-time, kamata-2014-fract-wave, freulon-2015-hong-ou, dasenbrook-2015-dynam-gener, belzig-2016-elemen-andreev, vannucci-2017-minim-excit, rech-2017-minim-excit, yin-2018-deman-elect, ronetti-2018-cryst-levit, vanevic-2016-elect-elect, vanevic-2012-contr-elect, bisognin-2019-quant-tomog}.

\begin{figure}[H] \includegraphics[width=7.0cm]{fig1a} \includegraphics[width=7.0cm]{fig1b} \caption{ (a) Schematic of the on-demand electron injection via the voltage pulse $V(t)$. By applying $V(t)$ on the contact of the quantum conductor, electrons (holes) or $e$h pairs from the reservoir (region I) can be injected into the quantum conductor (region III). The voltage drop is assumed to occur across a short interval at the interface (region II). (b) Schematic of the applied voltage pulse train. The pulse train is composed of identical Lorentzian pulses, which can be characterized by the half width at half maximum $W$ and the Faraday flux $\varphi$. These pulses are separated by a time interval $T$.} \label{fig1} \end{figure}

Generally speaking, the wave packet is composed of charged quasiparticles in the Fermi sea ($| \mathbf{F} \rangle$) of the conductor, which are usually accompanied by a neutral cloud of electron-hole ($e$h) pairs \cite{landau-1957-theor-fermi-liquid, pines-2018-theor-quant-liquid}. Remarkably, it is possible to inject a ``clean'' wave packet without $e$h pairs, which can be done by tuning the pulse to be a Lorentzian with integer quantum flux \cite{ivanov-1997-coher-states, keeling-2006-minim-excit}. In doing so, one obtains soliton-like quasiparticles propagating on top of the Fermi sea, which have been named ``levitons'' \cite{dubois-2013-integ-fract, jullien-2014-quant-tomog-elect}. Each leviton carries a unit electric charge and has a well-defined wave function. A sequence of well-separated levitons can be injected by using a train of Lorentzian pulses, emerging as promising candidates for flying qubits in solid-state circuits \cite{bocquillon-2014-elect-quant, dasenbrook-2016-singl-elect, glattli-2016-levit-elect, baeuerle-2018-coher-contr, olofsson-2020-quant-telep}.
By using a Lorentzian pulse with fractional quantum flux, one can inject a wave packet carrying fractional charges, which has a quite different structure. On the one hand, it contains a large amount of $e$h pairs, which is closely related to the dynamical orthogonality catastrophe \cite{levitov-1996-elect-count, glattli-2018-pseud-binar}. On the other hand, it can sustain charged quasiparticles carrying fractional charges. The structure of the wave packet has been demonstrated for the Lorentzian pulse with a half-quantum flux. In this case, the quantum state of the wave packet can be decomposed into two mixed states: one represents the neutral cloud of $e$h pairs, while the other one can be regarded as a zero-energy quasiparticle carrying an effective $e/2$ charge \cite{moskalets-2016-fract-charg}. This makes the wave packet show distinctly different features from the wave packet built from levitons \cite{hofer-2014-mach-zehnd, gaury-2014-dynam-contr, belzig-2016-elemen-andreev}.

Intuitively, one expects that fractional-charged quasiparticles can be injected in a similar way as levitons, providing an alternative approach to realize flying qubits. However, the nature of these quasiparticles has not been fully understood yet. In particular, it is not clear how a leviton can evolve into a fractional-charged quasiparticle as the flux of the pulses changes. To answer this question, one needs to describe the quantum states of both integer- and fractional-charged wave packets in a unified manner, which has not been given yet.

In this paper, we attack this problem by examining the case when a Lorentzian pulse train with repetition period $T$ is applied on the Ohmic contact, as illustrated in Fig.~\ref{fig1}(b). In this case, we show that the injected charges are carried by a train of wave packets, whose quantum state can be given as
\begin{equation} | \mathbf{\Psi_{\rm train}} \rangle = \prod_{l=0, \pm 1, \pm 2, ...} | \mathbf{\Psi}_l \rangle, \label{s1:eq1} \end{equation}
with $| \mathbf{\Psi}_l \rangle$ representing the quantum state of the $l$-th wave packet. Each wave packet is composed of charged quasiparticles and neutral $e$h pairs, which can be described by a set of single-body wave functions $\psi^\alpha_{kl}(t)$, with $\alpha=c$ for the quasiparticles and $\alpha=e$/$h$ for the electron/hole component of the $e$h pairs. This allows one to introduce the corresponding creation operators as
\begin{align} C^{\dagger}_{kl} & = \int^{+\infty}_{-\infty} dt \psi^c_{kl}(t) \hat{a}^{\dagger}(t), \nonumber\\ (B^e_{kl})^{\dagger} & = \int^{+\infty}_{-\infty} dt \psi^e_{kl}(t) \hat{a}^{\dagger}(t), \nonumber\\ (B^h_{kl})^{\dagger} & = \int^{+\infty}_{-\infty} dt \psi^h_{kl}(t) \hat{a}(t), \label{s1:eq2} \end{align}
with $\hat{a}(t)$ [$\hat{a}^{\dagger}(t)$] being the electron annihilation [creation] operator in the time domain. In doing so, the quantum state of the $l$-th wave packet can be described by the Slater determinant as
\begin{equation} | \mathbf{\Psi}_l \rangle = \Big[\prod_k C^{\dagger}_{kl}\Big] \prod_k \Big[\sqrt{1 - p_k} + i \sqrt{p_k}(B^e_{kl})^{\dagger} (B^h_{kl})^{\dagger} \Big] | \mathbf{F}\rangle, \label{s1:eq3} \end{equation}
with $p_k$ representing the excitation probabilities of the $e$h pairs. Both the excitation probabilities $p_k$ and the single-body wave functions $\psi^\alpha_{kl}(t)$ can be extracted from the time-dependent scattering matrix, providing a general way to study the quantum state of both the integer- and fractional-charged wave packets.
As the charge $Q$ is injected with the period $T$, one may expect that the charged quasiparticles are also injected with the same period. Indeed, this picture holds when $Q/e$ takes integer values. This is illustrated in the inset of Fig.~\ref{fig2}, corresponding to $Q/e=1$. In this case, all the single-body wave functions of the charged quasiparticles exhibit the same profile, and they are separated from each other by the time interval $T$. They essentially correspond to a periodic train of levitons. The structure of the leviton train can be understood intuitively from the corresponding waiting time distribution (WTD) $W(\tau)$ \cite{dasenbrook16_quant_theor_elect_waitin_time_clock}, which is characterized by a strong peak around $\tau=T$ [see the green dashed curve in the main panel of Fig.~\ref{fig2}]. This indicates that the voltage pulse tends to inject exactly one electron per period into the quantum conductor.

\begin{figure} \includegraphics[width=8.0cm]{fig2} \caption{ (Color online) The waiting time distribution $W(\tau)$ between electrons above the Fermi sea (main panel) and the corresponding wave function $\psi^c_{kl}(t)$ (inset), for the pulse width $W/T=0.1$. The green dashed curve in the inset (a) represents the wave functions of levitons, corresponding to $Q/e=1$. The red solid curves in the inset (b) represent the wave functions of the charged quasiparticles, corresponding to $Q/e=2/3$. Note that there are two types of quasiparticles here, which are represented by the thick and thin curves.} \label{fig2} \end{figure}

In contrast, the above picture is inapplicable when $Q/e$ takes fractional values. In this case, the charged quasiparticles are essentially injected with two different periods. Due to the mismatch between these two periods, the wave functions of the quasiparticles can exhibit different profiles, which are injected with an extended period longer than $T$. This is illustrated in the inset (b) of Fig.~\ref{fig2}, corresponding to $Q/e=2/3$. One can see that the wave functions can exhibit two types of profiles, which are plotted with thick and thin curves. They are separated from each other by the time interval $3T/2$. On average, each quasiparticle can carry only $2e/3$ charge into the quantum conductor within a single period $T$. This makes them behave {\em effectively} like quasiparticles carrying fractional charges. These quasiparticles can have a pronounced impact on the charge injection. In particular, they lead to cycle-missing events, in which the voltage pulse fails to inject an electron within a single period $T$. Such events can be seen from the corresponding WTD $W(\tau)$, which exhibits a series of peaks at multiples of the period $T$ [see the red solid curve in Fig.~\ref{fig2}].

The wave function $\psi^c_{kl}(t)$ can hence provide a unified description of the charged quasiparticles, which is applicable to both the integer- and fractional-charged wave packets. This allows us to elucidate in detail how a leviton can evolve into a fractional-charged quasiparticle as the flux $\varphi$ of the pulse changes. In the meantime, our approach can also provide information on the $e$h pairs. This allows us to clarify how additional $e$h pairs can be excited during the evolution of the levitons.

The paper is organized as follows: In Sec.~\ref{sec2}, we present the model of the system and introduce a general expression for the quantum state of the wave packets.
We discuss the typical behaviors of the wave functions of quasiparticles in Secs.~\ref{sec3} and~\ref{sec4}. The corresponding waiting time distribution is also discussed in these two sections. The evolution of levitons and $e$h pairs is discussed in Secs.~\ref{sec5} and~\ref{sec6}, respectively. We summarize in Sec.~\ref{sec7}.

\section{Bloch-Messiah reduction in the framework of the scattering matrix formalism} \label{sec2}
The electron source can be modeled as a single-mode quantum conductor, as illustrated in Fig.~\ref{fig1}(a). We choose the driving voltage $V(t)$ to be of the form
\begin{equation} \frac{e}{\hbar}V(t) = \sum_{l=0, \pm 1, \pm 2, ...} \frac{2 \varphi W}{W^2 + (t-lT)^2}, \label{s2:eq0} \end{equation}
which corresponds to a periodic train of Lorentzian pulses with width $W$ [see Fig.~\ref{fig1}(b)]. The voltage drop $V(t)$ between the contact and the conductor is assumed to occur across a short interval, so that the corresponding dwell time $\tau_D$ satisfies $ k_B T_e \ll \hbar/T < \hbar/W \ll \hbar/\tau_D \ll E_F$, with $E_F$ representing the Fermi energy and $T_e$ representing the electron temperature. In this paper, we choose $E_F=0$ and concentrate on the zero-temperature limit. The scattering matrix of the system is solely determined by the driving voltage $V(t)$ as
\begin{equation} S(t, t') = \delta(t-t') \exp[- i \frac{e}{\hbar} \int^t_0 d\tau V(\tau)]. \label{s2:eq1} \end{equation}
Given the scattering matrix, the electrons in the contact and the conductor can be related via the equation
\begin{eqnarray} \hat{b}(t) = \int dt' S(t, t') \hat{a}(t'), \label{s2:eq2} \end{eqnarray}
where $\hat{a}(t)$ and $\hat{b}(t)$ represent the electron annihilation operators in the Ohmic contact and the quantum conductor, respectively. In this setup, the injected current can be simply given as $I(t) = (e^2/h) V(t)$. The charge $Q$ injected within a single period is determined solely by the flux $\varphi$ as
\begin{eqnarray} Q = \int^{+T/2}_{-T/2} dt I(t) = e \varphi. \label{s2:eq3} \end{eqnarray}
For simplicity, here we assume $Q/e>0$ so that the wave packets carry negative charges.

The quantum state of the injected wave packets can be obtained from the Bloch-Messiah reduction, which extracts the many-body quantum state from the decomposition of the first-order correlation function $G(t, t')$ \cite{yin-2019-quasip-states, yue-2019-normal-anomal}. In the zero-temperature limit, $G(t, t')$ can be given as
\begin{equation} i G(t, t') = \langle \mathbf{F} | \hat{b}^{\dagger}(t') \hat{b}(t) | \mathbf{F} \rangle, \label{s2:eq4} \end{equation}
with $| \mathbf{F} \rangle$ representing the Fermi sea. To find the many-body state corresponding to $G(t, t')$, the Bloch-Messiah reduction essentially seeks out the quantum state $| \mathbf{\Psi} \rangle$ which satisfies
\begin{equation} \langle \mathbf{\Psi} | \hat{a}^{\dagger}(t') \hat{a}(t) | \mathbf{\Psi} \rangle = \langle \mathbf{F} | \hat{b}^{\dagger}(t') \hat{b}(t) | \mathbf{F} \rangle. \label{s2:eq5} \end{equation}
This can be done by a proper decomposition of $G(t, t')$. Here we only present the outline and leave the details to Appendix~\ref{app1}.
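As a quick numerical illustration (our own sketch, not part of the original derivation), one can check Eq.~\eqref{s2:eq3} by integrating the pulse train of Eq.~\eqref{s2:eq0} over one period, using dimensionless units with $e=\hbar=1$ (so that $h=2\pi$):

\begin{verbatim}
import numpy as np

# Numerical check of Q = e*phi for the Lorentzian pulse train
T, W, phi = 1.0, 0.1, 2.0 / 3.0      # period, half width, Faraday flux
t = np.linspace(-T / 2, T / 2, 20001)

# (e/hbar) V(t): sum enough pulses for the Lorentzian tails to converge
eV = sum(2 * phi * W / (W**2 + (t - l * T)**2) for l in range(-200, 201))

# I(t) = (e^2/h) V(t), so Q/e = (1/2pi) * integral of (e/hbar) V over a period
print(np.trapz(eV, t) / (2 * np.pi))  # ~ 0.6667 = phi
\end{verbatim}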
\subsection{Decomposition in Floquet space}
For a system under periodic driving, it is straightforward to perform such a decomposition in Floquet space \cite{moskalets-2011-scatt-matrix, pedersen-1998-scatt-theor}, which can be generally written as
\begin{align} \label{s2:eq6} i G(t, t') &= \sum_k \int^{\Omega}_0 \frac{d\omega}{\Omega} e^{-i\omega(t-t')} u^c_k(\omega, t) [ u^c_k(\omega, t') ]^{\ast} \\ & \mbox{}+ \sum_k \int^{\Omega}_0 \frac{d\omega}{\Omega} e^{-i\omega(t-t')} \left[ \begin{tabular}{cc} $u^e_k(\omega, t)$, & $u^h_k(\omega, t)$\\ \end{tabular}\right] \nonumber\\ & \hspace{-1.5cm}\times \left[\begin{tabular}{cc} $p_k(\omega)$ & $i \sqrt{p_k(\omega)[1 - p_k(\omega)]}$\\ $-i \sqrt{p_k(\omega)[1 - p_k(\omega)]}$ & $1 - p_k(\omega)$\\ \end{tabular}\right] \left[\begin{tabular}{c} $u^e_k(\omega, t')$\\ $u^h_k(\omega, t')$\\ \end{tabular}\right]^{\ast}, \nonumber \end{align}
with the asterisk denoting complex conjugation. In the above expression, the quantity $p_k(\omega)$ is real and satisfies $p_k(\omega) \in [0, 1]$. The functions $u^\alpha_k(\omega, t)$ are complex and periodic in the time domain, $u^\alpha_k(\omega, t) = u^\alpha_k(\omega, t+T)$, with $\alpha = c, e$ and $h$. These functions form an orthonormal basis within a single period, \textit{i}.\textit{e}.,
\begin{equation} \label{s2:eq7} \int^{T/2}_{-T/2} dt [ u^{\alpha'}_{k'}(\omega, t) ]^\ast u^\alpha_k(\omega, t) = \delta_{\alpha, \alpha'} \delta_{k, k'}. \end{equation}
All these functions can be characterized by two indices $\omega$ and $k$. Here $k$ is a discrete index, which can be described by (dimensionless) integer numbers. In contrast, the index $\omega$ has the unit of frequency and satisfies $\omega \in [0, \Omega)$, with $\Omega = 2\pi/T$ being the repetition rate of the pulses.

The function $u^\alpha_k(\omega, t)$ is closely related to the single-body wave function of the charged quasiparticle ($\alpha=c$) and the neutral $e$h pair ($\alpha=e, h$), while $p_k(\omega)$ represents the excitation probability of the $e$h pair. Both $u^\alpha_k(\omega, t)$ and $p_k(\omega)$ can be obtained from the polar decomposition of the scattering matrix. In general, they can exhibit a complicated dependence on $\omega$. For the scattering matrix given in Eq.~\eqref{s2:eq1}, we find that the $\omega$-dependence is much simpler: First, the probabilities $p_k(\omega)$ are independent of $\omega$ and can hence be written as $p_k$ for short. Second, $u^\alpha_k(\omega, t)$ can be written in the form of separation of variables as
\begin{equation} u^\alpha_k(\omega, t) = U^\alpha_k(t)F^{Q}_k(\omega), \label{s2:eq8} \end{equation}
where $F^{Q}_k(\omega)$ is a real function defined in the region $\omega \in [0, \Omega)$, while $U^\alpha_k(t)$ is a complex periodic function defined in the whole time domain $t \in (-\infty, +\infty)$, which satisfies $U^\alpha_k(t) = U^\alpha_k(t+T)$. The function $U^\alpha_k(t)$ usually has to be obtained numerically and is sensitive to the details of the scattering matrix. In contrast, the function $F^{Q}_k(\omega)$ can be given analytically. To do this, it is convenient to describe the discrete index $k$ by two non-negative integers $n$ and $m$ [\textit{i}.\textit{e}., $n, m = 0, 1, 2, ...$].
In doing so, we find that $F^{Q}_k(\omega)$ can be written as
\begin{eqnarray} F^{Q}_k(\omega) & = & \begin{cases} H[ (Q/e - n + 1)\Omega - \omega], & \text{for $Q/e \in [n-1, n]$,}\\ H[\omega - (Q/e - n)\Omega], & \text{for $Q/e \in (n, n+1]$,}\\ 0, & \text{otherwise,} \end{cases}\nonumber\\ \label{s2:eq9} \end{eqnarray}
with $H(\omega)$ representing the Heaviside step function \footnote{Here we choose $H(0)=1$.}. Note that $F^{Q}_k(\omega)$ is independent of the details of the scattering matrix and is solely determined by the charge $Q$ of the wave packet.

\begin{table}[h] \caption{\label{tab:ex} Parameter space for $k=[n, m]$ for the charged quasiparticles ($\alpha=c$) and $e$h pairs ($\alpha=e, h$), corresponding to $n, m \le 3$. The parameters for the charged quasiparticles are marked in gray shadow.} \begin{ruledtabular} \begin{tabular}{|p{2cm}|p{2cm}|p{2cm}|p{2cm}|} $[0, 0]$ & \cellcolor{lightgray}$[1, 0]$ & \cellcolor{lightgray}$[2, 0]$ & \cellcolor{lightgray}$[3, 0]$ \\ \hline $[0, 1]$ & $[1, 1]$ & \cellcolor{lightgray}$[2, 1]$ & \cellcolor{lightgray}$[3,1]$ \\ \hline $[0, 2]$ & $[1, 2]$ & $[2, 2]$ &\cellcolor{lightgray}$[3, 2]$ \\ \hline $[0, 3]$ & $[1, 3]$ & $[2, 3]$ &$[3, 3]$ \\ \end{tabular} \end{ruledtabular} \end{table}

It is worth noting that the available parameter space of the index $k=[n, m]$ is different for the charged quasiparticles ($\alpha=c$) and the $e$h pairs ($\alpha=e, h$): one has $m < n$ for the charged quasiparticles, while $m \ge n$ for the $e$h pairs. This can be seen more intuitively in Table~\ref{tab:ex}.

\subsection{Decomposition in wave-packet representation}
Given the decomposition of $G(t, t')$ in Eq.~\eqref{s2:eq6}, one can construct a set of single-body wave functions corresponding to the injected quasiparticles. The many-body state of the wave packets can then be described by the Slater determinant built from them. However, one can construct different sets of single-body wave functions, which are related to each other via unitary transformations. Hence the detailed expression of the Slater determinant is not uniquely defined. As the driving voltage $V(t)$ corresponds to a train of pulses [see Eq.~\eqref{s2:eq0}], it is favorable to express the single-body wave functions in a similar form. This can be done by defining a set of wave functions $\psi^\alpha_{kl}(t)$ from $u^\alpha_k(\omega, t)$ as
\begin{eqnarray} \hspace{-0.5cm}\psi^\alpha_{kl}(t) & = & \frac{1}{\sqrt{q_k}}\int^\Omega_0 \frac{d\omega}{\Omega} e^{-i \omega (t-lT/q_k)} u^\alpha_k(\omega, t), \nonumber\\ & = & U^\alpha_k(t) \int^\Omega_0 \frac{d\omega}{\Omega} \frac{F^{Q}_k(\omega)}{\sqrt{q_k}} e^{-i \omega (t-lT/q_k)}, \label{s2:eq10} \end{eqnarray}
with $l=0, \pm 1, \pm 2, ...$. Note that we have introduced a normalization factor $q_k$ so that $\psi^\alpha_{kl}(t)$ can form an orthonormal basis set in the whole time domain $t \in (-\infty, +\infty)$, which satisfies
\begin{equation} \int^{+\infty}_{-\infty}dt [\psi^{\alpha'}_{k'l'}(t)]^{\ast} \psi^\alpha_{kl}(t) = \delta_{\alpha, \alpha'} \delta_{k, k'} \delta_{l, l'}.
\label{s2:eq11} \end{equation}
By substituting Eqs.~\eqref{s2:eq8},~\eqref{s2:eq9} and~\eqref{s2:eq10} into~\eqref{s2:eq11}, it is straightforward to show that $q_k$ can be given analytically as
\begin{eqnarray} q_k & = q_{[n,m]} = & \begin{cases} Q/e - n + 1, & \text{for $Q/e \in [n-1, n]$,}\\ n + 1 - Q/e, & \text{for $Q/e \in (n, n+1]$,}\\ 0, & \text{otherwise.} \end{cases}\nonumber\\ \label{s1:eq5} \end{eqnarray}
The wave functions $\psi^\alpha_{kl}(t)$ can be regarded as Martin-Landauer-like wave packets \cite{martin92_wave_packet_approac_to_noise}, which offer an intuitive way to interpret the time-resolved behavior of the charged quasiparticles ($\alpha = c$) and $e$h pairs ($\alpha = e, h$). The decomposition of $G(t, t')$ can then be given as
\begin{eqnarray} \label{s2:eq12} && \hspace{0cm}i G(t, t') = \sum_{k,l} \psi^c_{kl}(t) [ \psi^c_{kl}(t') ]^{\ast} \\ && \hspace{1.1cm}\mbox{}+ \sum_{k,l} \left[ \begin{tabular}{cc} $\psi^e_{kl}(t)$, & $\psi^h_{kl}(t)$\\ \end{tabular}\right] \nonumber\\ && \times \left[\begin{tabular}{cc} $p_k$ & $i \sqrt{p_k[1 - p_k]}$\\ $-i \sqrt{p_k[1 - p_k]}$ & $1 - p_k$\\ \end{tabular}\right] \left[\begin{tabular}{c} $\psi^e_{kl}(t')$\\ $\psi^h_{kl}(t')$\\ \end{tabular}\right]^{\ast}.\nonumber \end{eqnarray}
For wave packets carrying integer and fractional charges, both the charged quasiparticles and the $e$h pairs can show different natures, leading to wave functions with different features. To better demonstrate these differences, we shall first concentrate on two concrete examples: wave packets carrying a unit ($Q=e$) and two-thirds ($Q/e=2/3$) electric charge.

\section{Wave packet with unit charge} \label{sec3}
Let us start our discussion with the wave packet carrying a unit electric charge ($Q=e$). In this case, the decomposition of $G(t, t')$ takes a simple form:
\begin{equation} i G(t, t') = \sum_l \psi^c_{[1,0]l}(t) [ \psi^c_{[1,0]l}(t') ]^{\ast}. \label{s3:eq1} \end{equation}
This indicates that each wave packet contains only one charged quasiparticle, associated with the index $k=[1,0]$. By introducing the creation operator
\begin{equation} C^{\dagger}_{[1, 0]l} = \int^{+\infty}_{-\infty} dt \psi^c_{[1, 0]l}(t) \hat{a}^{\dagger}(t), \label{s3:eq2} \end{equation}
the corresponding many-body state of the whole wave packet train can be expressed as
\begin{equation} | \mathbf{\Psi_{\rm train}} \rangle = \prod_{l=0, \pm 1, \pm 2, ...} C^{\dagger}_{[1, 0]l} | \mathbf{F}\rangle. \label{s3:eq3} \end{equation}
Equation~\eqref{s3:eq3} essentially corresponds to a periodic train of levitons. Accordingly, the wave functions $\psi^c_{[1,0]l}(t)$ can be regarded as Martin-Landauer-like wave packets built from levitons. This can be seen more clearly by carrying out the integration in Eq.~\eqref{s2:eq10} \footnote{Note that in this case, we have $q_{[1,0]}=1.0$.}:
\begin{equation} \psi^c_{[1, 0]l}(t) = U^c_{[1, 0]}(t) e^{-i \Omega (t-lT)/2} \sinc[\frac{\Omega (t-lT)}{2\pi}], \label{s3:eq5} \end{equation}
where the periodic function
\begin{equation} U^c_{[1, 0]}(t) = \frac{\sqrt{ \cosh(\pi W/T) \sinh(\pi W/T)/T }}{\sin[\pi (t/T - i W/T)]}, \label{s3:eq5-1} \end{equation}
represents the leviton train \cite{glattli-2016-hanbur-brown}. Each wave function $\psi^c_{[1,0]l}(t)$ exhibits a strong peak around $t=lT$, corresponding to a leviton injected in the $l$-th period. Wave functions with different $l$ form a periodic sequence, providing an intuitive way to understand the structure of the wave packet train. This is illustrated in the inset of Fig.~\ref{fig3}.
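Since Eqs.~\eqref{s3:eq5} and~\eqref{s3:eq5-1} are fully analytic, the leviton train is easy to evaluate numerically. The following is a small sketch of ours (dimensionless units with $T=1$; the function name is ours), which also checks the normalization required by Eq.~\eqref{s2:eq11}:

\begin{verbatim}
import numpy as np

def leviton_wavepacket(t, l, T=1.0, W=0.1):
    # Eqs. (s3:eq5)-(s3:eq5-1) with Omega = 2*pi/T; note that
    # np.sinc(x) = sin(pi*x)/(pi*x), matching the sinc convention used here
    Omega = 2 * np.pi / T
    U = (np.sqrt(np.cosh(np.pi * W / T) * np.sinh(np.pi * W / T) / T)
         / np.sin(np.pi * (t / T - 1j * W / T)))
    phase = np.exp(-1j * Omega * (t - l * T) / 2)
    return U * phase * np.sinc(Omega * (t - l * T) / (2 * np.pi))

# |psi|^2 peaks at t = l*T; the norm is 1 up to truncation of the sinc tails
t = np.linspace(-50, 50, 400001)
psi = leviton_wavepacket(t, l=0)
print(np.trapz(np.abs(psi)**2, t))   # ~ 1
\end{verbatim}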
The wave functions $\psi^c_{[1, 0]l}(t)$ provide an orthonormal basis set in the time domain, within which various physical quantities can be expressed in a neat way. In particular, the current carried by the train of levitons can be written as [see Appendix~\ref{app2} for details]
\begin{equation} I(t) = \sum_{l=0, \pm 1, \pm 2, ...} e| \psi^c_{[1, 0]l}(t) |^2. \label{s3:eq6} \end{equation}
One notices that in Eq.~\eqref{s3:eq6}, the current $I(t)$ is expressed as an incoherent summation over all the wave functions $\psi^c_{[1, 0]l}(t)$, even though these functions can overlap with each other [see the inset of Fig.~\ref{fig3}]. However, this does not mean that levitons contribute incoherently to the charge transport process. In fact, the overlap between the wave functions can enhance the fluctuations of the waiting time between successive electron injections. This effect can be seen more intuitively from the waiting time distribution (WTD) between electrons above the Fermi sea \cite{brandes08_waitin_times_noise_singl_partic_trans, albert11_distr_waitin_times_dynam_singl_elect_emitt, albert12_elect_waitin_times_mesos_conduc}.

\begin{figure} \includegraphics[width=7.5cm]{fig3} \caption{ (Color online) The WTD between electrons above the Fermi sea, corresponding to the width $W/T=0.1$. The red solid curve represents the exact WTD, while the black dashed curve represents the semi-classical approximation from Eq.~\eqref{s3:eq10}. The corresponding train of levitons is illustrated by the wave functions $|\psi^c_{[1, 0]l}(t)|^2T$ in the inset. The red solid curves correspond to $l=-2, 0$ and $2$, while the green dashed curves correspond to $l=-1$ and $1$.} \label{fig3} \end{figure}

The WTD can be calculated from the corresponding idle time probability $\Pi(t_s, t_e)$ \cite{dasenbrook16_quant_theor_elect_waitin_time_clock}. It can be expressed as the determinant [see Appendix~\ref{app3} for details]:
\begin{equation} \Pi(t_s, t_e) = \det[ \hat{1} - \hat{Q}_{se} ], \label{s3:eq7} \end{equation}
where $\hat{1}$ denotes the unit operator and the operator $\hat{Q}_{se}$ counts the number of electrons injected in the time interval $[t_s, t_e]$ whose energy is larger than the Fermi energy $E_F$. By introducing the Dirac notation $\langle t | 1,0;l \rangle = \psi^c_{[1,0]l}(t)$, the matrix elements of the operator $\hat{Q}_{se}$ can be given as
\begin{equation} \langle 1,0;l | \hat{Q}_{se} | 1,0;l' \rangle = \int^{t_e}_{t_s} dt [\psi^c_{[1,0]l}(t)]^{\ast} \psi^c_{[1,0]l'}(t). \label{s3:eq7-1} \end{equation}
For a system under periodic driving, it is usually convenient to average the idle time probability over a single period:
\begin{equation} \Pi(\tau) = \int^{T/2}_{-T/2} dt_s \Pi(t_s, t_s + \tau). \label{s3:eq8} \end{equation}
In doing so, one obtains the time-averaged idle time probability $\Pi(\tau)$, which only depends on the length of the time interval. The corresponding WTD can be given as
\begin{equation} W(\tau) = \langle \tau \rangle \partial^2_{\tau} \Pi(\tau), \label{s3:eq9} \end{equation}
with $\langle \tau \rangle$ being the mean waiting time. The above equations offer a direct relation between the wave functions and the WTD, where the overlap between levitons manifests itself as the off-diagonal elements in Eq.~\eqref{s3:eq7-1}. When the overlap vanishes, the idle time probability $\Pi(t_s, t_e)$ reduces to
\begin{equation} \Pi_c(t_s, t_e) = \prod_{l=0, \pm 1, \pm 2, ...} \Big[ 1 - \int^{t_e}_{t_s}dt |\psi^c_{[1,0]l}(t)|^2 \Big].
\label{s3:eq10} \end{equation}
A very similar result has been obtained for the ideal single-electron source built from the mesoscopic capacitor \cite{hofer16_elect_waitin_times_mesos_capac}. The corresponding WTD $W_c(\tau)$ calculated from $\Pi_c(t_s, t_e)$ exhibits a strong peak around the point $\tau=T$ and drops rapidly to zero when $\tau > 2T$, as illustrated by the black dashed curve in Fig.~\ref{fig3}. This indicates that one injects exactly one electron per period, corresponding to the case of ideal single-electron injection. Under realistic conditions, Eq.~\eqref{s3:eq10} can be regarded as a semi-classical approximation. The presence of the overlap between levitons can lead to a deviation between the exact WTD $W(\tau)$ and the semi-classical approximation $W_c(\tau)$. This can be seen by comparing the red solid curve [$W(\tau)$] to the black dashed one [$W_c(\tau)$] in Fig.~\ref{fig3}, which are calculated for $W/T=0.1$. One can see that the peak in the WTD is slightly broadened due to the overlap, indicating an enhancement of the fluctuations of the waiting time.

\begin{figure} \includegraphics[width=7.5cm]{fig4} \caption{ (Color online) The WTDs between electrons above the Fermi sea, corresponding to $Q=e$ and the widths $W/T=0.05$ (green), $0.1$ (red) and $0.2$ (blue). The solid curves represent the exact WTD, while the dashed curves represent the semi-classical approximation from Eq.~\eqref{s3:eq10}. Curves corresponding to different $W/T$ are shifted vertically for better visibility.} \label{fig4} \end{figure}

In fact, the enhancement is not significant for $W/T=0.1$. Moreover, it can be suppressed by decreasing $W/T$. This is illustrated in Fig.~\ref{fig4}, where we compare the WTDs for the widths $W/T=0.05$, $0.1$ and $0.2$. This indicates that ideal single-electron injection can be approached in the limit $W/T \to 0$. Accordingly, the wave functions $\psi^c_{[1,0]l}(t)$ are well-separated and can be treated as individual levitons in this limit.

The above results show that levitons can be well described by the single-body wave function $\psi^c_{kl}(t)$. In the following section, we shall further demonstrate that the wave function $\psi^c_{kl}(t)$ can also be used to describe the charged quasiparticles in the fractional-charged wave packet.

\section{Wave packet with two-thirds charges} \label{sec4}
Now we turn to the wave packet carrying two-thirds of an electric charge ($Q/e=2/3$). In this case, each wave packet still contains only one charged quasiparticle, associated with the index $k=[1,0]$. Due to the dynamical orthogonality catastrophe, one expects that the wave packet can also contain a large number of neutral $e$h pairs when the pulse width $W/T$ is small enough. However, as it is difficult to generate well-behaved voltage pulses with very small widths ($W/T < 0.1$) \cite{dubois-2013-minim-excit, dubois-2013-integ-fract}, there exists only a rather limited number of $e$h pairs under typical experimental conditions. In fact, even for the width $W/T = 0.1$, we find that the excitation probabilities $p_k$ of the $e$h pairs are all smaller than $0.15$.

\subsection{Charged quasiparticles} \label{sec4a}
As a first step toward exploring the quantum state of the wave packets, let us omit the contribution of the $e$h pairs, which is valid for large widths $W/T$. In doing so, the correlation function $G(t,t')$ can be decomposed into the same form as that of levitons [see Eq.~\eqref{s3:eq1}].
However, the wave function $\psi^c_{[1,0]l}(t)$ takes a different form, which can be written as
\begin{align} \psi^c_{[1, 0]l}(t) &= U^c_{[1, 0]}(t) \nonumber\\ & \times \frac{e^{-i q_{[1,0]}\Omega (t-lT/q_{[1,0]})/2}}{\sqrt{q_{[1,0]}}} \sinc[\frac{q_{[1,0]}\Omega (t-lT/q_{[1,0]})}{2\pi}], \label{s4:eq1-1} \end{align}
with the factor $q_{[1,0]} = 2/3$. By comparing with the wave function of levitons in Eq.~\eqref{s3:eq5}, one finds two differences between the two cases: 1) the periodic function $U^c_{[1, 0]}(t)$ has to be obtained numerically in this case; 2) while the function $U^c_{[1, 0]}(t)$ has the period $T$, the sinc function in this case represents a wave packet localized around $t=l(3T/2)$. This indicates that the wave functions $\psi^c_{[1,0]l}(t)$ correspond to quasiparticles which are injected with two different periods: $T$ and $3T/2$. It is this double periodicity that makes the quasiparticles exhibit qualitatively different features from those of levitons.

The period $3T/2$ determines the charge carried by the quasiparticles. In fact, as the wave functions $\psi^c_{[1, 0]l}(t)$ with different $l$ are still orthogonal to each other [see Eq.~\eqref{s2:eq11}], one can still express the current as an incoherent summation over them, which has the same form as that of levitons [see Eq.~\eqref{s3:eq6}]. However, as these wave functions are separated from each other by the time interval $3T/2$ [see the inset of Fig.~\ref{fig5}], on average each quasiparticle can carry only $2e/3$ charge within a single period $T$, making them behave {\em effectively} like quasiparticles carrying fractional charges.

Note that in this case, the wave functions can exhibit two different profiles. For $l=-2, 0$ and $2$ (red solid curves), the wave functions $\psi^c_{[1, 0]l}(t)$ exhibit a strong peak, which is accompanied by two small shoulder peaks. In contrast, for $l=-1$ and $1$ (green dashed curves), the wave functions $\psi^c_{[1, 0]l}(t)$ exhibit double-peak structures. This is a direct consequence of the double periodicity of the wave functions. In fact, from Eq.~\eqref{s4:eq1-1}, one can see that when the periods corresponding to $U^c_{[1, 0]}(t)$ (with the period $T$) and the sinc function (with the period $T/q_{[1,0]}$) do not match, for $q_{[1,0]} = A/B$ (with $A$ and $B$ being coprime integers), the wave functions can exhibit $A$ different profiles, which are separated from each other by the extended period $BT/A$.

\begin{figure} \includegraphics[width=7.5cm]{fig5} \caption{ (Color online) The WTD between electrons above the Fermi sea, corresponding to the width $W/T=0.1$. The corresponding train of charged quasiparticles is illustrated by the wave functions $|\psi^c_{[1, 0]l}(t)|^2T$ in the inset. The red solid curves correspond to $l=-2, 0$ and $2$, while the green dashed curves correspond to $l=-1$ and $1$.} \label{fig5} \end{figure}

Due to the mismatch between the two periods, the wave functions are strongly overlapped with each other. This can be seen intuitively from the inset of Fig.~\ref{fig5}. The overlap can induce a large fluctuation of the waiting time, which can be seen from the corresponding WTD \footnote{Generally speaking, the electron component of the $e$h pairs can also contribute to the WTD. However, the contribution remains negligible due to the small excitation probability for the width $W/T > 0.05$.}. This is illustrated by the red solid curves in the main panel of Fig.~\ref{fig5}.
One can see that the WTD exhibits a series of peaks at multiples of the repetition period $T$. This indicates the presence of the cycle-missing event, in which the voltage pulse fails to inject an electron within a single period $T$ \cite{potanina17_elect_waitin_times_period_driven, hofer16_elect_waitin_times_mesos_capac}. As the overlap between the wave functions is rather large, the semi-classical approximation $W_c(\tau)$ of the WTD [Eq.~\eqref{s3:eq10}] is inapplicable. One can see that $W_c(\tau)$ largely overestimates the WTD around the point $\tau=0$, as illustrated by the black dashed curve in Fig.~\ref{fig5}. In fact, $W_c(\tau)$ gives an unphysical value around this point: the WTD should be zero at $\tau=0$ due to the Pauli principle. Unlike the case of levitons, the overlap between the wave functions cannot be eliminated by just decreasing the width $W/T$. As a consequence, the multiple-peak structure of the WTD persists as $W/T$ decreases. This is illustrated in Fig.~\ref{fig6}, corresponding to $W/T=0.2$, $0.1$ and $0.05$.

\begin{figure} \includegraphics[width=7.5cm]{fig6} \caption{ (Color online) The WTD between electrons above the Fermi sea, corresponding to $Q/e=2/3$ and the widths $W/T=0.05$ (green), $0.1$ (red) and $0.2$ (blue). Curves corresponding to different $W/T$ are shifted vertically for better visibility.} \label{fig6} \end{figure}

The above results explain the nature of the fractional-charged quasiparticles: they are just quasiparticles injected with an extended period $T/q_k$, which is longer than the period $T$ of the driving pulses. The wave functions of these quasiparticles are always strongly overlapped with each other, manifesting themselves as multiple peaks in the corresponding WTD. The features of these quasiparticles can be characterized by the factor $q_k$, indicating that each quasiparticle can carry $e q_k$ charge per period $T$, making them behave {\em effectively} as fractional-charge quasiparticles.

\subsection{Electron-hole pairs} \label{sec4c}
Now let us briefly discuss the $e$h pairs in the wave packet. For $W/T=0.1$, we find that each wave packet contains only one $e$h pair, which is associated with the index $k=[0, 0]$. The corresponding excitation probability $p_{[0,0]}$ is only $0.138$. The other $e$h pairs are negligible due to their small excitation probabilities \footnote{They are all smaller than $0.002$.}.

The $e$h pairs can be described in a similar way as the charged quasiparticles. In fact, the wave functions of the electron and hole components can be expressed in a similar form as in Eq.~\eqref{s4:eq1-1}:
\begin{align} \psi^{e/h}_{[0, 0]l}(t) &= U^{e/h}_{[0, 0]}(t) \nonumber\\ & \times \frac{e^{-i q_{[0,0]}\Omega (t-lT/q_{[0,0]})/2}}{\sqrt{q_{[0,0]}}} \sinc[\frac{q_{[0,0]}\Omega (t-lT/q_{[0,0]})}{2\pi}], \label{s4:eq9} \end{align}
with the factor $q_{[0,0]}=1/3$. The corresponding wave functions $\psi^{e/h}_{[0,0]l}(t)$ are plotted as the green/blue curves in Fig.~\ref{fig7}, where the wave functions $\psi^c_{[1,0]l}(t)$ of the charged quasiparticles are also plotted as the red curves for comparison. One can see that in this case, the wave functions for the electron (hole) component exhibit only one type of profile. They are separated from each other by the time interval $3T$, making them behave as quasiparticles carrying $e/3$ charges. Note that the electron and hole components carry the same amount of charge but with opposite signs, so they do not contribute to the total charge $Q$ of the wave packet.
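For completeness, the common wave-packet form of Eqs.~\eqref{s4:eq1-1} and~\eqref{s4:eq9} can be sketched as follows (our own illustration; the periodic factor $U^\alpha_k(t)$ must be supplied as a function, since for fractional charges it is only available numerically):

\begin{verbatim}
import numpy as np

def fractional_wavepacket(t, l, q, U, T=1.0):
    # Envelope structure of Eqs. (s4:eq1-1)/(s4:eq9): the sinc factor
    # localizes the packet at t = l*T/q, while U(t) keeps the period T;
    # the mismatch of these two periods produces the multiple profiles.
    Omega = 2 * np.pi / T
    s = t - l * T / q
    return (U(t) * np.exp(-1j * q * Omega * s / 2) / np.sqrt(q)
            * np.sinc(q * Omega * s / (2 * np.pi)))
\end{verbatim}

For $Q=e$ one has $q=1$ and $U$ is given analytically by Eq.~\eqref{s3:eq5-1}, recovering the leviton sketch above.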
\begin{figure} \includegraphics[width=7.5cm]{fig7} \caption{ (Color online) Wave functions of the charged quasiparticle ($k = [1, 0]$) and the $e$h pair ($k = [0, 0]$). The red curves represent the wave functions of the charged quasiparticle. The green and blue curves represent the wave functions for the electron and hole components of the $e$h pair, respectively. All the wave functions are calculated with $W/T=0.1$. The solid (dashed) curves from left to right correspond to $l=-2$, $0$ and $2$ ($l=-1$ and $1$), respectively.} \label{fig7} \end{figure}

By combining the information of both the charged quasiparticles and the $e$h pairs, the quantum state of the whole train of wave packets can be written as
\begin{align} \label{s4:eq10} | \mathbf{\Psi_{\rm train}} \rangle &= \prod_{l=0, \pm 1, \pm 2, ...} C^{\dagger}_{[1, 0]l} \Big[\sqrt{1 - p_{[0, 0]}} \nonumber\\ &\hspace{1cm}+ i \sqrt{p_{[0, 0]}}(B^e_{[0, 0]l})^{\dagger} (B^h_{[0, 0]l})^{\dagger} \Big] | \mathbf{F}\rangle, \end{align}
with
\begin{align} C^{\dagger}_{[1,0]l} & = \int^{+\infty}_{-\infty} dt \psi^c_{[1,0]l}(t) \hat{a}^{\dagger}(t), \nonumber\\ (B^e_{[0,0]l})^{\dagger} & = \int^{+\infty}_{-\infty} dt \psi^e_{[0,0]l}(t) \hat{a}^{\dagger}(t), \nonumber\\ (B^h_{[0,0]l})^{\dagger} & = \int^{+\infty}_{-\infty} dt \psi^h_{[0,0]l}(t) \hat{a}(t). \label{s4:eq10-1} \end{align}
This provides full information on the injected wave packet. It allows us to elucidate how the quantum state of the wave packets evolves as the flux of the pulses changes. In the following section, we shall concentrate on the evolution of the charged quasiparticles. We shall show how levitons emerge as the flux approaches an integer value.

\section{Evolution of charged quasiparticle} \label{sec5}
The evolution of the charged quasiparticles can be fully described by the single-body wave function $\psi^c_{kl}(t)$. This is illustrated in Fig.~\ref{fig8}, corresponding to the index $k=[1,0]$. In the figure, we choose $W/T=0.1$ and $Q/e \in (0.0, 2.0)$. Curves with different colors and line types correspond to wave functions $\psi^c_{[1,0]l}(t)$ with different $l$. As the factor $q_{[1,0]}$ plays an important role, we also show the corresponding $q_{[1,0]}$ alongside the wave functions.

\begin{figure} \includegraphics[width=8.5cm]{fig8} \caption{ (Color online) Wave functions of the charged quasiparticle $|\psi^c_{[1,0]l}(t)|^2T$, corresponding to $W/T=0.1$ and $Q/e \in (0.0, 2.0)$. For each value of $Q/e$, we plot the wave functions for $l=-3 \sim 3$. Curves with different colors and line types correspond to $|\psi^c_{[1,0]l}(t)|^2T$ with different $l$.} \label{fig8} \end{figure}

From the figure, one first notices that one has $q_{[1,0]} = Q/e$ when $Q/e \in (0.0, 1.0)$. For $q_{[1,0]} = 1/4$, all the wave functions of the quasiparticles exhibit the same profile. These quasiparticles are injected with the extended period $4T$, indicating that they can carry $e/4$ charge within each period $T$. As $q_{[1,0]}$ increases from $1/4$ to $1/2$, the extended period is reduced to $2T$, indicating that the quasiparticles evolve into $e/2$-charged quasiparticles. As $q_{[1,0]}$ further increases from $1/2$ to $3/4$, there can exist three types of quasiparticles, which are injected with the extended period $4T/3$, leading to $3e/4$ charges per period. As $q_{[1,0]}$ reaches $1.0$, all the quasiparticles evolve into levitons, which are injected with the period $T$.
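The injection factors themselves follow directly from Eq.~\eqref{s1:eq5}. A short sketch of ours reproduces the curves of Fig.~\ref{fig9} and the sum rule $Q = e\sum_k q_k$ (the sum runs over the charged quasiparticles, i.e., the $n$ modes $[n,0],\dots,[n,n-1]$ for each $n$; cf.\ Table~\ref{tab:ex}):

\begin{verbatim}
def q_factor(n, Q):
    # q_{[n,m]} from Eq. (s1:eq5); it depends only on n, not on m
    if n - 1 <= Q <= n:
        return Q - n + 1
    if n < Q <= n + 1:
        return n + 1 - Q
    return 0.0

Q = 2.0 / 3.0     # injected charge per period, in units of e
total = sum(n * q_factor(n, Q) for n in range(1, 10))
print(total)      # = Q, i.e. Q = e * sum_k q_k over the charged modes
\end{verbatim}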
\begin{figure} \includegraphics[width=7.0cm]{fig9} \caption{ (Color online) Factor $q_k$ as a function of $Q/e$. The red solid curve represents $q_{[1,0]}$. The green dashed curves represent $q_{[2,0]}$ and $q_{[2,1]}$. Note that one has $q_{[2,0]} = q_{[2,1]}$, so the two curves overlap. Similarly, the blue dotted curves represent $q_{[3,0]}$, $q_{[3,1]}$ and $q_{[3,2]}$. The black solid curve represents the charge of the wave packet $Q/e$, which satisfies $Q = e\sum_k q_k$.} \label{fig9} \end{figure}

For $Q/e \in (1.0, 2.0)$, one has $q_{[1,0]} = 2 - Q/e$. As $Q/e$ increases in this region, $q_{[1,0]}$ drops linearly to zero. Accordingly, the levitons evolve back into fractional-charged quasiparticles, which are injected with the extended period $T/q_{[1,0]}$. Note that one has $T/q_{[1,0]} \to +\infty$ for $q_{[1,0]} \to 0$. This implies that the corresponding quasiparticles cannot be injected in this limit, since the time interval between successive quasiparticle injections tends to infinity. Generally speaking, quasiparticles associated with the index $k=[n,m]$ can only be injected when $Q/e \in [n-1, n+1]$, as shown in Eq.~\eqref{s1:eq5}. This can be seen more clearly from Fig.~\ref{fig9}.

\begin{figure} \includegraphics[width=7.5cm]{fig10} \caption{ (Color online) The WTDs between electrons above the Fermi sea, corresponding to $Q/e \in (0.0, 2.0)$ and the width $W/T=0.1$. Curves corresponding to different $Q/e$ are shifted vertically for better visibility. } \label{fig10} \end{figure}

The evolution of the quasiparticles can also be seen from the corresponding WTD, as illustrated in Fig.~\ref{fig10}. One can see that for $Q/e = 1/4$, the waiting time has a rather wide distribution. This is because the corresponding wave functions of the quasiparticles are strongly overlapped, as shown in Fig.~\ref{fig8}. As $Q/e$ approaches $1.0$, the WTD $W(\tau)$ tends to exhibit a strong peak around $\tau = T$, indicating the emergence of levitons. Hence the evolution of the wave functions for $Q/e < 1.0$ can also be tracked by using the corresponding WTD.

As $Q/e$ goes above $1.0$, additional charged quasiparticles can be injected. From Fig.~\ref{fig9}, one can see that two additional quasiparticles $k=[2,0]$ and $[2,1]$ emerge. The evolution of these two quasiparticles is demonstrated in Fig.~\ref{fig11}. By comparing to Fig.~\ref{fig8}, one can see that they evolve in a similar way to the quasiparticle $k=[1,0]$. As these two quasiparticles also contribute to the WTD, it is difficult to read off the evolution of a single charged quasiparticle from the WTD when $Q/e > 1.0$.

\begin{figure} \includegraphics[width=8.5cm]{fig11a} \includegraphics[width=8.5cm]{fig11b} \caption{ (Color online) Wave functions of the charged quasiparticles $|\psi^c_{[2,0]l}(t)|^2T$ (a) and $|\psi^c_{[2,1]l}(t)|^2T$ (b), corresponding to $W/T=0.1$ and $Q/e \in (1.0, 3.0)$. For each value of $Q/e$, we plot the wave functions for $l=-3 \sim 3$. Curves with different colors and line types correspond to the wave functions with different $l$. } \label{fig11} \end{figure}

\section{Evolution of electron-hole pairs and shot noise} \label{sec6}
As levitons evolve into fractional-charged quasiparticles, additional $e$h pairs can be excited.
Due to their small excitation probabilities, the $e$h pairs contribute little to the WTD between electrons above the Fermi sea \footnote{Generally speaking, the electron component of the $e$h pair can also contribute to the WTD between electrons above the Fermi sea. However, the contribution remains negligible for $W/T =0.05$.}. In contrast, they can have a pronounced impact on the shot noise, which has been extensively studied in previous works \cite{dubois-2013-integ-fract, bocquillon-2014-elect-quant, vanevic-2012-contr-elect, vanevic-2016-elect-elect}. When the wave packet is partitioned at a localized scatterer with transmission probability $D$, both the charged quasiparticles and the $e$h pairs can contribute to the shot noise $S_N$. It can be decomposed into two parts [see Appendix~\ref{app2} for details]: $S_N = S_c + S_{ex}$, where
\begin{align} S_c &= S_0 \sum_k q_k, \nonumber\\ S_{ex} &= 2 S_0 \sum_k q_k p_k, \label{s6:eq1} \end{align}
with $S_0 = 2 \frac{e^2}{h} D(1-D)\hbar\Omega$ being the typical scale of the shot noise. The first part corresponds to the contribution of the charged quasiparticles. It is solely determined by the charge $Q$ of the wave packet, since one has $\sum_k q_k = Q/e$ from Eq.~\eqref{s1:eq5}. The second part is the excess shot noise, which has been used extensively to characterize the features of $e$h pairs \cite{dubois-2013-integ-fract, vanevic-2007-elemen-event}.

By using the information of the excitation probability $p_k$ and the factor $q_k$, one can decompose the excess shot noise $S_{ex}$ into the contributions of individual $e$h pairs. This is illustrated in Fig.~\ref{fig12}. From the figure, one can identify the contributions of three $e$h pairs, corresponding to $k=[0,0]$, $k=[1,1]$ and $k=[2,2]$. These $e$h pairs dominate the excess shot noise $S_{ex}$ in different regions. Such a decomposition makes it possible to extract the information of individual $e$h pairs from the excess shot noise. By combining the WTD with the shot noise, one can hence obtain the full information on the evolution of the quantum state of the wave packet.

\begin{figure} \includegraphics[width=7.0cm]{fig12a} \includegraphics[width=7.0cm]{fig12b} \includegraphics[width=7.0cm]{fig12c} \caption{ (Color online) (a) The excitation probabilities $p_k$ for the $e$h pairs. The inset shows a zoom-in of the figure. The $p_k$ of the other $e$h pairs are too small to be seen in the figure. (b) The injection probabilities $q_k$ of the electron/hole component of the $e$h pairs. (c) Excess shot noise $S_{ex}$ as a function of the charge $Q/e$ of the wave packet. The red solid, green dashed and blue dotted curves represent the contribution from the $e$h pairs $k=[0, 0]$, $k=[1, 1]$ and $k=[2, 2]$, respectively. The excess shot noise is normalized to $S_0 = 2 \frac{e^2}{h} D(1-D)\hbar\Omega$.} \label{fig12} \end{figure}

\section{Summary and Outlook} \label{sec7}
In summary, we have presented a general approach to extracting the quantum state of wave packets injected by a Lorentzian pulse train with arbitrary flux. We show that the charged quasiparticles can be described by a set of single-body wave functions $\psi^c_{kl}(t)$. These wave functions can be regarded as Martin-Landauer-like wave packets, offering an intuitive way to interpret their time-resolved behaviors. In integer-charged wave packets, the charged quasiparticles are levitons, which are injected with the same period as the pulse train. No $e$h pairs can be injected in this case.
In fractional-charged wave packets, the charged quasiparticles can be injected with two different periods. Due to this double periodicity, their wave functions can exhibit different profiles. They can form a periodic train, whose period is longer than the period of the pulse train. This makes them behave effectively as quasiparticles carrying fractional charges. We show that the evolution of the charged quasiparticles can be seen from the WTD between electrons above the Fermi sea. Our approach can also be used to describe the evolution of $e$h pairs, which can be tracked by using the shot noise. Note that although our approach is demonstrated for Lorentzian pulses, it is rather general and can be applied to pulses with arbitrary profiles. We expect our work to be helpful in exploring the full potential of the voltage-driven electron source.

\begin{acknowledgments} The authors would like to thank Professor D. C. Glattli and Professor M. V. Moskalets for helpful comments and discussions. This work was supported by the National Key R\&D Program of China under Grant No. 2016YFF0200403, the Key Program of National Natural Science Foundation of China under Grant No. 11234009, Young Scientists Fund of National Natural Science Foundation of China under Grant No. 11504248 and SCU Innovation Fund under Grant No. 2020SCUNL209. \end{acknowledgments}
\section{Introduction} \input{intro.tex}
\section{Related Work} \input{related_work}
\section{Dataset Construction \& Analysis} \input{data.tex}
\section{Baseline Experiments} \input{auto-edit-as-ai-task}
\section{Discussion} \input{discussion}
\section{Conclusion} \input{conclusion}

\subsection{Methods}
Two neural approaches, long short-term memory (LSTM) and Transformer, are used as baselines, where we experiment using {\em (i)} text only (T) and {\em (ii)} both text and images (T+I) as inputs.

\paragraph{LSTM} An LSTM seq2seq model is used~\cite{sutskever2014sequence}. For the text-only setting, the original stories and the human-edited stories are treated as source-target pairs. For the text-image setting, we first extract the image features using the pre-trained ResNet-152 model~\cite{he2016deep} and represent each image as a 2048-dimensional vector. We then apply a dense layer to the image features in order to both match their dimension to that of the word embeddings and learn an adjusting transformation. By placing the image features in front of the sequence of text embeddings, the input sequence becomes a matrix $\in \mathbb{R}^{(5+len) \times dim}$, where $len$ is the text sequence length, $5$ corresponds to the five photos, and $dim$ is the dimension of the word embedding. The input sequence containing both image and text information is then encoded by the LSTM, exactly as in the text-only setting.

\paragraph{Transformer (TF)} We also use the Transformer architecture~\cite{vaswani2017attention} as a baseline. The text-only setup and the image feature extraction are identical to those of the LSTM. For the Transformer, the image features are appended to the end of the sequence of text embeddings to form an image-enriched embedding. It is noteworthy that the positional encoding is applied only to the text embeddings. The input matrix $\in \mathbb{R}^{(len+5) \times dim}$ is then passed into the Transformer as in the text-only setting.
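To make the input construction concrete, here is a minimal PyTorch sketch (our own illustration; the module and variable names are ours) of how the projected image features are combined with the token embeddings:

\begin{verbatim}
import torch
import torch.nn as nn

class ImageTextInput(nn.Module):
    """Builds the (5 + len) x dim input sequence described above."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.img_proj = nn.Linear(2048, dim)  # learned adjusting transformation

    def forward(self, tokens, img_feats):
        # tokens: (batch, len) word ids; img_feats: (batch, 5, 2048) ResNet-152
        txt = self.embed(tokens)              # (batch, len, dim)
        img = self.img_proj(img_feats)        # (batch, 5, dim)
        # prepend for the LSTM encoder; for the Transformer, the projected
        # features are appended instead and positional encoding is added
        # to the text part only
        return torch.cat([img, txt], dim=1)   # (batch, 5 + len, dim)
\end{verbatim}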
\begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/representative_example_stories_full_width.png} \caption{Example stories generated by baselines.} \label{fig:representative-story} \end{figure*}

\subsection{Experimental Setup and Evaluation}

\paragraph{Data Augmentation} In order to obtain sufficient training samples for the neural models, we pair \textit{less}-edited stories with \textit{more}-edited stories of the same photo sequence to augment the data. In \textit{VIST-Edit}\xspace, five human-edited stories are collected for each photo sequence. We use the human-edited stories that are less edited -- measured by their normalized Damerau-Levenshtein distance \cite{levenshtein1966binary,damerau1964technique} to the original story -- as the sources and pair them with the stories that are more edited (as the targets). This data augmentation strategy gives us in total fifteen ($\left ( ^5_2 \right )+5=15$) training samples given five human-edited stories (see the sketch below).

\paragraph{Human Evaluation} Following the evaluation procedure of the first VIST Challenge~\cite{mitchell2018proceedings}, for each visual story, we recruit five human judges on MTurk to rate it on six aspects (at \$0.1/HIT). We take the average of the five judgments as the final scores for the story. Table~\ref{tab:human-eval} shows the results. The LSTM using text-only input outperforms all other baselines. It improves all six aspects for stories by AREL, and improves the ``Focus'' and ``Human-like'' aspects for stories by GLAC. These results demonstrate that a relatively small set of human edits can be used to boost the story quality of an existing large VIST model. Table~\ref{tab:human-eval} also suggests that the quality of a post-edited story is largely determined by its pre-edited version. Even after editing by human editors, AREL's stories still do not achieve the quality of pre-edited stories by GLAC. The inefficacy of the image features and the Transformer model might be caused by the small size of \textit{VIST-Edit}\xspace. Further research is also required to develop a post-editing model in a multimodal context.
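The pairing scheme for the data augmentation can be sketched as follows (our own reading of the $\left ( ^5_2 \right )+5$ count; \texttt{dist} stands for the normalized Damerau-Levenshtein distance, and the function names are hypothetical):

\begin{verbatim}
from itertools import combinations

def augment(original, edited, dist):
    # Pair the original story with each of the 5 edited stories, and, for
    # each of the C(5,2) = 10 pairs of edited stories, use the one closer
    # to the original as the source and the farther one as the target:
    # 5 + 10 = 15 source -> target training pairs per photo sequence.
    pairs = [(original, e) for e in edited]
    for a, b in combinations(edited, 2):
        src, tgt = (a, b) if dist(original, a) <= dist(original, b) else (b, a)
        pairs.append((src, tgt))
    return pairs
\end{verbatim}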
\begin{table*}[t]
\small
\begin{tabular}{@{}lrrrrrr|rrrrrr@{}}
\toprule
\multicolumn{1}{c}{\textit{}} & \multicolumn{6}{c|}{\textit{\textbf{AREL}}} & \multicolumn{6}{c}{\textit{\textbf{GLAC}}} \\ \midrule
Edited By & Focused & Coherent & Share & Human & Grounded & Detailed & Focused & Coherent & Share & Human & Grounded & Detailed \\ \midrule
\textbf{N/A} & 3.487 & 3.751 & 3.763 & 3.746 & 3.602 & 3.761 & 3.878 & 3.908 & 3.930 & 3.817 & 3.864 & 3.938 \\ \midrule
\textbf{TF (T)} & 3.433 & 3.705 & 3.641 & 3.656 & \textbf{3.619} & 3.631 & 3.717 & 3.773 & 3.863 & 3.672 & 3.765 & 3.795 \\
\textbf{TF (T+I)} & \textbf{3.542} & 3.693 & 3.676 & 3.643 & 3.548 & 3.672 & 3.734 & 3.759 & 3.786 & 3.622 & 3.758 & 3.744 \\
\textbf{LSTM (T)} & \textbf{3.551} & \textbf{3.800} & \textbf{3.771} & \textbf{3.751} & \textbf{3.631} & \textbf{3.810} & \textbf{3.894} & 3.896 & 3.864 & \textbf{3.848} & 3.751 & 3.897 \\
\textbf{LSTM (T+I)} & \textbf{3.497} & 3.734 & 3.746 & 3.742 & 3.573 & 3.755 & 3.815 & 3.872 & 3.847 & 3.813 & 3.750 & 3.869 \\ \midrule
\textbf{Human} & 3.592 & 3.870 & 3.856 & 3.885 & 3.779 & 3.878 & 4.003 & 4.057 & 4.072 & 3.976 & 3.994 & 4.068 \\
\bottomrule
\end{tabular}
\caption{Human evaluation results for visual story post-editing models. Scores are averaged over five MTurk judges; for the model rows, boldface marks scores that improve over the corresponding pre-edited (N/A) stories.}
\label{tab:human-eval}
\end{table*}

\subsection{Stylized Visual Captioning and Storytelling}
One approach is to use computer vision and deep learning to generate attractive ``stylized'' captions directly for images or videos. This approach is end-to-end, which differs from template matching~\cite{kulkarni2013babytalk} and image-text alignment~\cite{farhadi2010every,zhu2015aligning,karpathy2014deep}. It also differs from generating ``factual'' captions that describe the actual situation in images, such as~\cite{vinyals2015show,venugopalan2015sequence,xu2015show,mnih2014recurrent}. One can consider captioning as a special type of machine translation problem that converts source images (instead of sentences in a source language) into sentences in a target language. Therefore, the encoder-decoder framework that integrates two deep neural networks, which has been adopted in neural machine translation~\cite{cho2014learning,sutskever2014sequence,bahdanau2014neural}, can naturally be applied to this task. One example is \textit{StyleNet}, an encoder-decoder based model for learning romantic and humorous captions~\cite{gan2017styleNet}. \textit{StyleNet} consists of a convolutional neural network (CNN) acting as the encoder and several recurrent neural networks (RNNs) acting as decoders. The CNN encodes images into a fixed-length vector, which represents a data point in a high-dimensional space. The RNNs decode the vector into captions with various lengths and styles. The training data of \textit{StyleNet} includes paired images and factual captions, together with unlabeled romantic-style and humorous-style text corpora. The decoders are trained using multi-task learning (MTL), where the training phase alternates among several tasks~\cite{luong2015multi}. Training the main task (e.g., language-to-language machine translation) jointly with several additional tasks (e.g., image captioning or unsupervised learning) can increase the performance of the main task~\cite{luong2015multi}. During training, the parameters of both the encoder and the decoders can be shared, depending on the type of the main task. MTL trains tasks in parallel, which differs from the sequential approach in which a model is first pretrained and then fine-tuned, such as~\cite{dai2015semi}.
Another example of generating stylized captions is the \textit{SemStyle} system~\cite{mathews2018semstyle}, which also adopted the encoder-decoder network. The encoder and the decoder were trained separately, on image caption datasets and on a large text corpus, respectively. The encoder mapped images to semantic terms, and the decoder converted the semantic terms into sentences. Unlike \textit{StyleNet}, which took only the latest output of the encoder as input for the decoder, \textit{SemStyle} added an attention mechanism. This idea is analogous to the attention mechanism in machine translation, which allows the decoder to access multiple outputs from the encoder instead of using only the output from the final layer of the encoder~\cite{bahdanau2014neural,luong2015effective}. Attention mitigates the information loss caused by the intermediate step of the typical encoder-decoder model, which encodes the input into a single fixed-length vector.

Beyond captioning, visual storytelling focuses on creating engaging narratives by making sense of a series of images~\cite{huang2016visual}. One recent work developed a model to generate coherent stories from photo streams by using adversarial training, which involves a generator and several discriminators~\cite{wang2018show,acl2018wang}. Similar to Generative Adversarial Nets~\cite{goodfellow2014generative}, the generator tries to create human-indistinguishable stories, while the discriminators learn to determine whether the created stories are relevant to the input images and resemble the human writing style. The adversarial training strategy applies deep reinforcement learning, where the generator is improved by using the discriminator. The generator is analogous to an agent's policy function, which maps states to actions: each input image represents the current state, and each output sentence represents the best action. The discriminator is analogous to an environment, a function that provides the ground-truth reward for correcting the estimated reward computed by the generator. The training process iterates until the discriminator is unable to determine whether a story was created by a human or by the generator. In the context of visual storytelling, the generator is a hierarchical model that concatenates multiple encoder-decoder networks in parallel, an architecture that has also been adopted for generating a paragraph from every image~\cite{krause2016para}. The hierarchical structure simplifies the visual storytelling challenge to a one-to-one mapping between photo streams and sentences. The discriminators are CNN-based (or RNN-based) classifiers (or regressors), which provide scores indicating whether the generated story resembles human-level style.

\subsection{Neural Automatic Post-Editing}
Another approach is neural automatic post-editing (APE), which uses deep learning to automatically revise the text generated by a machine translation (MT) system, given both the source sentences (or images, in our case) and the translated sentences. The MT system is treated as a black box that is fixed and not modifiable. The goal of APE is to correct systematic MT errors, thereby reducing translators' workload and eventually increasing translation productivity~\cite{astudillo2018proceedings}. APE is different from computer-assisted post-editing, which aims to develop interactive systems that suggest modifications based on human guidance, such as~\cite{grangier2018quickedit,peris2017interactive}.
Additionally, neural APE is sentence-to-sentence, which differs from previous state-of-the-art phrase-based models that translate and reorder phrase segments for each sentence, such as~\cite{simard2007statistical,bechara2011statistical}. One can also view APE as a multimodal translation problem, where MT-generated sentences are translated into post-edited sentences with the source images as additional information~\cite{specia2016shared}. The encoder-decoder model has been applied to automatic post-editing. For instance, Libovicky et al. developed a system with multiple encoders and a decoder~\cite{libovicky2016cuni}. The inputs to the encoders are pairs of original sentences and machine-translated sentences; the output of the decoder is a post-edited sentence. To prevent the model from paraphrasing the original sentences instead of editing them, the post-edited sentences are represented as a sequence of edit operations: \textit{keep}, \textit{delete}, and \textit{insert}. Another system, proposed by Junczys-Dowmunt and Grundkiewicz, adopted an ensemble-based approach to combine multiple deep neural networks with a log-linear model~\cite{junczys2016log}. This approach enabled each component in the ensemble to take different inputs, including the post-edited and machine-translated sentences. The system overcame the problem of limited APE training data by generating a large amount of artificial training data with back-translation~\cite{sennrich2015improving}: two phrase-based MT systems translate German to English and then back to German, and the original German, the translated English, and the back-translated German are treated as the post-edited, source, and machine-translated sentences, respectively. More sophisticated sequence-to-sequence learning models with attention mechanisms, extending these two works, were proposed later~\cite{junczys2017exploration,libovicky2017attention}. Previous work has also shown that APE can be reduced to the Quality Estimation (QE) task, because the two tasks share the same inputs but produce different outputs~\cite{martins2017pushing}. QE aims to predict the quality of a machine-translated sentence at the word or sentence level. APE produces post-edited sentences, which can be viewed as performing one of three operations for each word: \textit{keep}, \textit{insert}, or \textit{delete}. QE produces binary labels for each word, indicating whether the word requires post-editing. In this mapping, positive QE labels correspond to the \textit{insert} and \textit{delete} operations, and negative QE labels correspond to the \textit{keep} operation. Combining APE and QE into an ensemble model has been shown to have the potential to improve the performance of both tasks~\cite{hokamp2017ensembling}.

\subsection{What do people edit?}
We analyze the human edits for GLAC and AREL. First, crowd workers systematically \textbf{increase lexical diversity}. We use the type-token ratio (TTR), the ratio between the number of word types and the number of tokens, to estimate the lexical diversity of a story~\cite{1af8cfb45ecf4823b5e14c69b80d4d5a}. Figure~\ref{fig:ttr} shows significant (p\textless.001, paired t-test) positive shifts of TTR for both AREL and GLAC, which confirms the findings of Hsu {\em et al.}~\shortcite{hsu2019users}. Figure~\ref{fig:ttr} also indicates that GLAC generates stories with higher lexical diversity than AREL does.
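The TTR computation itself is simple; the following is a minimal sketch (with a naive whitespace tokenizer and toy stories standing in for the actual data):

\begin{verbatim}
from statistics import mean
from scipy import stats

def ttr(story):
    """Type-token ratio: distinct word types over total tokens."""
    tokens = story.lower().split()
    return len(set(tokens)) / len(tokens)

# Toy stand-ins for machine-generated stories and their edited versions.
pre_edit = ["the man went to the park . the man had a good time .",
            "we went to the beach . we had a great time at the beach ."]
post_edit = ["the man went to the park . he had a good time .",
             "we went to the beach and had a great time there ."]

pre = [ttr(s) for s in pre_edit]
post = [ttr(s) for s in post_edit]
t, p = stats.ttest_rel(pre, post)  # paired t-test, as in the text above
print(f"mean TTR {mean(pre):.2f} -> {mean(post):.2f} (p = {p:.3f})")
\end{verbatim}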
\begin{figure}[thbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig/ttr_shift.png}
\caption{KDE plot of the type-token ratio (TTR) for pre-/post-edited stories. People increase lexical diversity in machine-generated stories for both AREL and GLAC.}
\label{fig:ttr}
\end{figure}

Second, people \textbf{shorten AREL's stories but lengthen GLAC's stories.} We calculate the average number of tokens with each part-of-speech (POS) tag per story using the Python NLTK package~\cite{bird2009natural}, as shown in Table~\ref{tab:pos-stats}. We also find that the average number of tokens in an AREL story (43.0, SD=5.0) decreases (to 41.9, SD=5.6) after human editing, while that of GLAC (35.0, SD=4.5) increases (to 36.7, SD=5.9). Hsu {\em et al.}~\shortcite{hsu2019users} observed that people often replace ``determiner/article + noun'' phrases ({\em e.g.,} ``a boy'') with pronouns ({\em e.g.,} ``he'') in AREL stories. However, this observation cannot explain the story lengthening in GLAC, where each story gains on average 0.9 nouns after editing. Given that the average per-story edit distances~\cite{levenshtein1966binary,damerau1964technique} for AREL (16.84, SD=5.64) and GLAC (17.99, SD=5.56) are similar, this difference is unlikely to be caused by differences in the amount of editing.

\begin{table}[h]
\scriptsize
\centering
\addtolength{\tabcolsep}{-0.165cm}
\begin{tabular}{lrrrrrrrrrrr}
\toprule
\resizebox{0.55cm}{!}{\textbf{AREL}} & \textbf{.} & \textbf{ADJ} & \textbf{ADP} & \textbf{ADV} & \textbf{CONJ} & \textbf{DET} & \textbf{NOUN} & \textbf{PRON} & \textbf{PRT} & \textbf{VERB} & \textbf{Total} \\ \hline
\textbf{Pre} & 5.2 & 3.1 & 3.5 & 1.9 & 0.5 & 8.1 & 10.1 & 2.1 & 1.6 & 6.9 & 43.0 \\
\textbf{Post} & 4.7 & 3.1 & 3.4 & 1.9 & 0.8 & 7.1 & 9.9 & 2.3 & 1.6 & 7.0 & 41.9 \\
\textbf{$\Delta$} & -0.5 & 0.0 & -0.1 & -0.1 & 0.4 & -1.0 & -0.2 & 0.2 & 0.0 & 0.1 & -1.2 \\
\bottomrule
\toprule
\resizebox{0.55cm}{!}{\textbf{GLAC}} & \textbf{.} & \textbf{ADJ} & \textbf{ADP} & \textbf{ADV} & \textbf{CONJ} & \textbf{DET} & \textbf{NOUN} & \textbf{PRON} & \textbf{PRT} & \textbf{VERB} & \textbf{Total} \\ \hline
\textbf{Pre} & 5.0 & 3.3 & 1.7 & 1.9 & 0.2 & 6.5 & 7.4 & 1.2 & 0.8 & 6.9 & 35.0 \\
\textbf{Post} & 4.5 & 3.2 & 2.4 & 1.8 & 0.8 & 6.1 & 8.3 & 1.5 & 1.0 & 7.0 & 36.7 \\
\textbf{$\Delta$} & -0.5 & -0.1 & 0.7 & -0.1 & 0.6 & -0.3 & 0.9 & 0.3 & 0.2 & 0.1 & 1.7 \\
\bottomrule
\end{tabular}
\addtolength{\tabcolsep}{0.165cm}
\caption{Average number of tokens with each POS tag per story, using NLTK's universal tagset. ($\Delta$: the difference between post- and pre-edit stories. NUM is omitted because it is nearly 0. Numbers are rounded to one decimal place.)}
\label{tab:pos-stats}
\end{table}

\textit{Deleting} extra words requires much less time than other editing operations~\cite{popovic2014relations}. Per Figure~\ref{fig:ttr}, AREL's stories are much more repetitive. We further analyze the type-token ratio restricted to nouns (${TTR}_{noun}$) and find that AREL generates duplicated nouns: the average ${TTR}_{noun}$ of an AREL story is 0.76, while that of GLAC is 0.90. For reference, the average ${TTR}_{noun}$ of a human-written story (over the entire VIST dataset) is 0.86. Thus, we hypothesize that workers prioritized deleting repetitive words in AREL stories, resulting in the reduction of story length.
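A sketch of the per-story POS profile and the ${TTR}_{noun}$ statistic, using NLTK's universal tagset as in Table~\ref{tab:pos-stats} (resource names follow common NLTK releases; the toy story is ours):

\begin{verbatim}
from collections import Counter
import nltk

for res in ("punkt", "averaged_perceptron_tagger", "universal_tagset"):
    nltk.download(res, quiet=True)

def pos_counts(story):
    """Count tokens per universal POS tag in one story."""
    tags = nltk.pos_tag(nltk.word_tokenize(story), tagset="universal")
    return Counter(tag for _, tag in tags)

def noun_ttr(story):
    """Type-token ratio restricted to nouns (TTR_noun in the text)."""
    tags = nltk.pos_tag(nltk.word_tokenize(story), tagset="universal")
    nouns = [w.lower() for w, t in tags if t == "NOUN"]
    return len(set(nouns)) / len(nouns) if nouns else 0.0

story = "the man went to the park . the man had a good time ."
print(pos_counts(story))  # e.g. Counter({'DET': 4, 'NOUN': 4, ...})
print(noun_ttr(story))    # 0.75, since 'man' is repeated
\end{verbatim}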
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
Observational predictions in multiverse models depend on one's choice of the probability measure. Different measure prescriptions can give vastly different answers. This is the so-called measure problem of eternal inflation.

Perhaps the simplest way to regulate the infinities of eternal inflation is to impose a cutoff on a hypersurface of constant global time. One starts with a patch of a spacelike hypersurface $\Sigma$ somewhere in the inflating region of spacetime and follows its evolution along the congruence of geodesics orthogonal to $\Sigma$. The cutoff is imposed at a hypersurface of constant time $t$ measured along the geodesics. The resulting measure, however, depends on the choice of the time variable $t$. An attractive choice is to use the proper time $\tau$ along the geodesics~\cite{Linde:1993xx,GarciaBellido:1993wn,Vilenkin:1994ua}. One finds, however, that this proper time measure suffers from the youngness paradox, predicting that the universe should be much hotter than observed~\cite{Tegmark:2004qd}. Another popular choice is the scale factor time, $t=\ln a$, where $a$ is the expansion factor along the geodesics~\cite{Linde:1993xx,GarciaBellido:1993wn,DeSimone:2008bq, Bousso:2008hz, DeSimone:2008if}. The problem with this choice is that the scale factor evolution is not monotonic. For example, in regions with a negative cosmological constant, $\Lambda<0$, expansion is followed by contraction, so $a$ starts to decrease along the geodesics. The scale factor measure then requires that the entire contracting region to the future of the turnaround point be included under the cutoff. This gives a higher weight to regions of negative $\Lambda$, so the scale factor measure tends to predict that we should expect to measure $\Lambda<0$ (unless this is strongly suppressed by anthropic factors). Some other measure proposals have even more severe problems with negative $\Lambda$. For example, the lightcone time cutoff~\cite{Bousso:2009dm} gives an overwhelming preference for $\Lambda < 0$~\cite{Salem:2009eh}.\footnote{Local measure proposals, which sample spacetime regions around individual geodesics with subsequent averaging over an ensemble of geodesics, yield probability distributions that sensitively depend on the choice of the ensemble. This choice is largely arbitrary, and thus these proposals are incomplete as they now stand. The ``watcher measure'' of Ref.~\cite{Garriga:2012bc} follows a single ``eternal'' geodesic, but makes the assumption that the big crunch singularities in AdS bubbles lead to bounces, where contraction is followed by expansion, so that geodesics can be continued through the crunch regions. We do not adopt this assumption in the present paper. }

In this paper, we introduce a new global time measure which does not suffer from these problems. We divide the initial hypersurface $\Sigma$ into infinitesimally small segments of equal 3-volume $\epsilon$ (with $\epsilon \to 0$) and follow the evolution of these segments along the orthogonal congruence of geodesics. The time coordinate $\Omega$ is defined as the 4-volume spanned by the segment,
\begin{eqnarray}
\Omega(\tau) = \frac{1}{\epsilon} \int_{(0,\tau) \times \epsilon {\cal V}^{(3)} (\tau)} \sqrt{-g} \, d^4 x = \int_0^\tau d\tau' {\cal V}^{(3)}(\tau'),
\label{Omega0}
\end{eqnarray}
where $\epsilon {\cal V}^{(3)}(\tau)$ is the 3-volume of the evolved segment at proper time $\tau$, $\tau$ is set equal to zero at $\Sigma$, and ${\cal V}^{(3)}(0)=1$.
$\Omega$ has a clear geometric meaning and grows monotonically along the geodesics. The measure is defined by imposing a cutoff at $\Omega_c={\rm const}$. If the universe can locally be approximated as homogeneous and isotropic, we can write ${\cal V}^{(3)}(\tau) = a^3(\tau)$, where $a(\tau)$ is the scale factor with $a(0)=1$. Then
\begin{eqnarray}
\Omega(\tau) = \int_0^\tau d\tau' a^3(\tau').
\label{Omega}
\end{eqnarray}
We can think of the geodesics in the congruence as representing an ensemble of inertial observers spread uniformly over the initial surface $\Sigma$. The measure prescription is then that each observer samples an equal 4-volume $\propto\Omega_c$.

The distribution of ``observers'' may become rather irregular in regions of structure formation. The scale factor (or the 3-volume ${\cal V}^{(3)}$ in Eq.~(\ref{Omega0})) comes to a halt in collapsed regions which have decoupled from the Hubble flow and continues to evolve between these regions. Furthermore, the geodesic congruence may develop caustics where geodesics cross. One can adopt the rule that geodesics are terminated as they cross at a caustic. As noted in Ref.~\cite{Bousso:2012tp}, this does not create any gaps in the congruence. But the resulting cutoff surface would still be rather irregular. Such dependence of the measure on details of structure formation appears unsatisfactory and calls for some sort of coarse graining, with averaging over the characteristic length scale of structure formation. This issue was emphasized in Ref.~\cite{Bousso:2008hz} in the case of the scale factor measure and was further discussed in Ref.~\cite{DeSimone:2008if}.

A somewhat related problem is that even though $\Omega$ grows monotonically along geodesics of the congruence, the surfaces of constant $\Omega$ are not necessarily spacelike, so $\Omega$ is not a good global time coordinate. As a result, an event may be included under the cutoff, while some events in its causal past are not included. A possible way to cure this problem is to modify the cutoff surface $\Omega=\Omega_c$ by excluding future light cones of all points on that surface.\footnote{This prescription was suggested in Ref.~\cite{DeSimone:2008if} to address a similar problem for the scale factor measure.} Then all events under the cutoff are included together with their causal past. This prescription also alleviates the problem of sensitivity of the measure to structure formation. If the characteristic scale of structure formation is much smaller than the horizon, the modified cutoff surface would roughly coincide with a constant $\Omega$ surface in the background FRW geometry.

The implementation of the 4-volume measure is somewhat more complicated than in the cases of proper time and scale factor measures, but it becomes tractable in a number of interesting special cases. In the next section we use this measure to estimate the volume fraction occupied by different vacua in the eternally inflating part of spacetime, assuming low transition rates between the vacua. In Sections 3 and 4 we find respectively the probability distributions for the cosmological constant and for the density parameter (or spatial curvature) under assumptions similar to those that were used in Refs.~\cite{DeSimone:2008bq,DeSimone:2009dq} to calculate these distributions in the scale factor measure. A formalism that can be used to determine the distributions in more general landscapes is outlined in Section 5. Finally, our results are briefly summarized and discussed in Section 6.
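As a simple illustration of the definition (\ref{Omega}), consider a region of pure de Sitter expansion with constant Hubble rate $H$, so that $a(\tau)=e^{H\tau}$. Then
\begin{eqnarray}
\Omega(\tau)=\int_0^\tau e^{3H\tau'}\,d\tau'=\frac{e^{3H\tau}-1}{3H},
\end{eqnarray}
which grows monotonically and approaches $a^3(\tau)/3H$ after a few Hubble times. This simple limit underlies the approximate relation to the scale factor cutoff derived in the next section.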
\section{Volume distribution of vacua}
Consider a multiverse consisting of bubbles of de Sitter (dS) and terminal (Anti-de Sitter and Minkowski) vacua, labeled by index $j$. The expansion rate of dS vacuum $j$ is $H_j$ and the nucleation rate of bubbles of vacuum $i$ in parent vacuum $j$ per Hubble volume per Hubble time is $\kappa_{ij}$. We shall assume that $\kappa_{ij}\ll 1$ -- which is expected, since nucleation occurs by quantum tunneling. In this section we shall calculate the 3-volume occupied by each dS vacuum on a surface of constant $\Omega$ in the inflating part of spacetime and use the result to find the abundances of Boltzmann brains in dS vacua. We shall not be interested in volumes occupied by terminal vacua in this section.

\subsection{Relation to scale factor cutoff}
An approximate relation between the 4-volume and scale factor cutoffs can be found if we note that the scale factor grows exponentially in the inflating regions, and therefore the integral in Eq.~(\ref{Omega}) is dominated by the upper limit. In a region occupied by vacuum $j$, the scale factor is $a_j(\tau)=Ce^{H_j \tau}$ with $C={\rm const}$, so we can write approximately
\begin{eqnarray}
\Omega_j (\tau)\approx \int^\tau a_j^3(\tau')d\tau' \approx \frac{a_j^3(\tau)}{3H_j}.
\label{Omegaa}
\end{eqnarray}
The cutoff surface at $\Omega=\Omega_c = {\rm const}$ can then be approximated as
\begin{eqnarray}
\frac{a^3(\tau)}{3H_j} = \Omega_c,
\label{cutoff}
\end{eqnarray}
so the 4-volume cutoff at $\Omega=\Omega_c$ is approximately equivalent to the scale factor cutoff at
\begin{eqnarray}
t_c = \frac{1}{3}\ln (3 H_j \Omega_c),
\label{relation}
\end{eqnarray}
where the scale factor time is defined as $t=\ln a$.

The approximations (\ref{Omegaa}), (\ref{relation}) are accurate as long as the cutoff surface does not pass within a few Hubble times of a transition from one vacuum to another (on the daughter vacuum side). The correction to Eq.~(\ref{Omegaa}) is $\sim a_i^3 /3H_i$, where $a_i$ is the scale factor at the time when the vacuum region $j$ being considered was created from a parent vacuum $i$. If $H_j\lesssim H_i$, which is usually the case, this correction is negligible already at one Hubble time after the transition $i\to j$, when $a/a_i=e$ and the correction is $\lesssim e^{-3} \approx 1/20$. The correction is more significant for large upward jumps with $H_j\gg H_i$. In this case the condition for Eq.~(\ref{relation}) to be accurate is $a/a_i \gtrsim (H_j/H_i)^{1/3}\gg 1$. This would happen on some segment of the cutoff surface if it lies within a scale factor time $t_{ji}\sim (1/3)\ln(H_j/H_i)$ of the transition from $i$ to $j$ (on the side of $j$). We expect such segments to be rare -- both because large upward jumps are strongly suppressed and because the interval $t_{ji}$ is much shorter than the scale factor time that geodesics typically spend in vacuum $j$. Thus we expect the approximations (\ref{Omegaa}), (\ref{relation}) to hold for a generic cutoff surface.

Similar approximations should apply in spacetime regions where the Hubble parameter $H$ is not constant, but varies on a timescale much longer than $H^{-1}$ (e.g., in quantum diffusion or slow-roll regions). In this case Eq.~(\ref{Omegaa}) is replaced by
\begin{eqnarray}
\Omega (\tau) \approx \frac{a^3(\tau)}{3H(\tau)}.
\label{Omegaa2}
\end{eqnarray}

\subsection{Volume distribution and Boltzmann brains}
We can now find the volume distribution of different vacua.
We start with the volume distribution on constant scale factor surfaces and then rewrite the result on a constant 4-volume surface by using \eq{relation}. The former distribution can be found from the rate equation (see, e.g., \cite{DeSimone:2008if}) \begin{eqnarray} \frac{dV_i}{dt}= 3 V_i + \sum_j M_{ij}V_j , \end{eqnarray} where $V_i(t)$ is the volume occupied by vacuum $i$ on a constant scale factor surface $t={\rm const}$ within a region of a fixed comoving size, $t$ is the scale factor time, \begin{eqnarray} M_{ij}=\kappa_{ij} -\delta_{ij}\kappa_{i} \label{Mij} \end{eqnarray} is the transition matrix, and \begin{eqnarray} \kappa_i=\sum_r \kappa_{ri} \end{eqnarray} is the total decay rate of vacuum $i$ per Hubble volume per Hubble time. The late-time asymptotic solution of this equation for dS vacua $i$ is \begin{eqnarray} V_i(t) = s_i e^{(3-q)t}, \label{Vj} \end{eqnarray} where $q>0$ is the smallest solution of the eigenvalue equation \begin{eqnarray} (\kappa_i-q)s_i=\sum_j \kappa_{ij} s_j \end{eqnarray} and $s_i$ is the corresponding eigenvector. Substituting \eq{relation} in Eq.~(\ref{Vj}) we find \begin{eqnarray} V_i (\Omega_c) = s_i (3 H_i \Omega_c)^{1-q/3}. \label{VolumeDistribution} \end{eqnarray} $q$ is an exponentially small number, so to a good approximation we can write \begin{eqnarray} V_i (\Omega_c) \propto s_i H_i . \label{VolumeDistribution1} \end{eqnarray} This is the (approximate) asymptotic volume distribution in the 4-volume cutoff measure. Compared to the scale factor measure, the volume of faster expanding vacua is enhanced by a factor $H_i$. The distribution (\ref{VolumeDistribution1}) can be used to find the abundance of Boltzmann brains (BBs) in different dS vacua. Suppose BBs are produced in vacuum $i$ at a rate $\Gamma_i^{BB}$ per unit spacetime volume. The number of BBs $N_i^{BB}$ is then proportional to the total 4-volume in that vacuum. With a scale factor cutoff at $t=t_c$ this volume is \begin{eqnarray} V_i^{(4)}(t_c)=\int^{t_c}V_i(t)d\tau=H_i^{-1}\int^{t_c}V_i(t)dt =\frac{1}{3-q}H_i^{-1} s_i e^{(3-q)t_c}, \label{V4tc} \end{eqnarray} where we have used Eq.~(\ref{Vj}). Now, using Eq.~(\ref{relation}) to express $t_c$ in terms of $\Omega_c$, we find \begin{eqnarray} V_i^{(4)}(\Omega_c)\propto s_i H_i^{-q/3} \end{eqnarray} and \begin{eqnarray} N_i^{BB}\propto \Gamma_i^{BB} s_i, \end{eqnarray} where we have approximated $H_i^{-q/3}\approx 1$. The difference from the scale-factor cutoff measure, which gives \cite{DeSimone:2008if,Bousso:2008hz} $N_i^{BB}\propto \Gamma_i^{BB} H_i^{-1} s_i$, is only by a factor of $H_i$, which is not exponentially large. Thus the analysis of the Boltzmann brain problem in the 4-volume cutoff measure is (almost) the same as that in the scale-factor measure. Since the problem can be evaded in the latter measure~\cite{Bousso:2008hz, DeSimone:2008if}, we conclude that the 4-volume cutoff measure may also be free from the Boltzmann brain problem, depending on the properties of the landscape. We expect the conditions for avoidance of the BB problem to be very similar to those in the scale factor measure. \section{Probability distribution for cosmological constant} In this section we calculate the probability distribution for the cosmological constant $\Lambda$ under the same assumptions that were used in Ref.~\cite{DeSimone:2008bq} for the scale factor measure. Specifically, we focus on a subset of bubbles that have (nearly) the same physical properties as our bubble, apart from the value of $\Lambda$.
We shall assume that the number of such bubble types in the landscape is very large, so the distribution of $\Lambda$ is nearly continuous. After nucleation, each bubble goes through a period of slow-roll inflation, followed by periods of radiation and matter domination, until $\Lambda$ eventually starts to dominate. We will be interested in the values of $\Lambda$ for which this happens late in the matter era. Let ${\tilde a}_\Lambda(\tau)$ be the scale factor in a region with a given value of $\Lambda$, where the proper time $\tau$ is measured from the moment of thermalization (end of inflation) and ${\tilde a}$ is normalized so that ${\tilde a}(0)=1$. We can define a reference time $\tau_m$ such that $\tau_{eq}\ll\tau_m\ll\tau_\Lambda$, where $\tau_{eq}$ is the time of equal matter and radiation densities and $\tau_\Lambda$ is the time of $\Lambda$ domination. Then the evolution before $\tau_m$ is the same in all regions, while after $\tau_m$ the scale factor is given by \begin{eqnarray} {\tilde a}_\Lambda(\tau)= \left\{ \begin{array}{ll} \displaystyle{ {\tilde a}_m \left( \frac{3}{2} H_\Lambda \tau_m \right)^{-2/3}\sinh^{2/3}\left(\frac{3}{2} H_\Lambda\tau\right) }&~~~~{\rm for}~~ \Lambda > 0 \vspace{0.3cm} \\ \displaystyle{ {\tilde a}_m \left( \frac{3}{2} H_\Lambda\tau_m \right)^{-2/3}\sin^{2/3}\left(\frac{3}{2} H_\Lambda\tau\right) }&~~~~{\rm for}~~ \Lambda < 0, \end{array} \right. \label{a1} \end{eqnarray} where $H_\Lambda=\sqrt{|\Lambda|/3}$. Here, ${\tilde a}_m = {\tilde a}(\tau_m)$; it depends on the evolution prior to $\tau_m$, but the quantity ${\tilde a}_m \tau_m^{-2/3}$ is independent of $\tau_m$ (and of $\Lambda$). A cutoff at $\Omega=\Omega_c$ in a bubble thermalized at $\Omega_*$ with a scale factor $a_*$ corresponds to a cutoff at proper time $\tau_c$, which can be found from \begin{eqnarray} \Omega_c=\Omega_*+ a_*^3 \int_0^{\tau_c} {\tilde a}_\Lambda^3(\tau)d\tau. \label{Omegac} \end{eqnarray} From Eq.~(\ref{Omegaa2}) we can write \begin{eqnarray} \Omega_*\approx \frac{1}{3H_*} a_*^3, \end{eqnarray} where $H_*$ is the expansion rate at the end of slow-roll inflation in the bubble. Hence we can rewrite Eq.~(\ref{Omegac}) as \begin{eqnarray} \Omega_c\approx \Omega_*\left[1+ 3H_* \int_0^{\tau_c} {\tilde a}_\Lambda^3(\tau)d\tau\right]. \label{Omegac2} \end{eqnarray} The rest of the analysis closely follows Ref.~\cite{DeSimone:2008bq}, where references to earlier literature can also be found. The physical volume thermalizing in a scale factor time interval $dt_*$ in the spacetime region defined by the geodesic congruence is \begin{eqnarray} d{\cal V}_*\propto e^{\gamma t_*}dt_* , \end{eqnarray} where $t_*=\ln a_*$ and $\gamma=3-q\approx 3$. Expressing $t_*$ in terms of $\Omega_*$, we have \begin{eqnarray} d{\cal V}_*\propto \Omega_*^{(\gamma-3)/3} d\Omega_* \approx d\Omega_*, \end{eqnarray} which says that thermalized volume is produced at an approximately constant rate per unit 4-volume. After thermalization, density perturbations grow, some fraction of matter clusters into galaxies, and observers evolve in some of these galaxies. The probability distribution for $\Lambda$ is proportional to the number of observers in regions with that value of $\Lambda$. We assume that the number of observers is proportional to the number of large galaxies with mass $M \gtrsim M_G$ ($\sim 10^{12} M_\odot$).
Then the probability distribution can be expressed as \begin{eqnarray} P(\Lambda)\propto \int_0^{\Omega_c} F(\tau_c-\Delta\tau) {d\Omega_*}, \label{P} \end{eqnarray} where $F(\tau)$ is the fraction of matter that clusters into large galaxies at proper time $\tau$ after thermalization, $\Delta\tau$ is the time required for observers to evolve, and $\tau_c$ is expressed in terms of $\Omega_c/\Omega_*$ from Eq.~(\ref{Omegac2}). Introducing a new variable $X=\Omega_*/\Omega_c$, we can write \begin{eqnarray} P(\Lambda) = N \int_0^{1} F(\tau_c(X)-\Delta\tau) {dX}, \label{P2} \end{eqnarray} where $N$ is a normalization constant determined by $\int P(\Lambda) d \Lambda / \Lambda_{\rm obs} = 1$, with $\Lambda_{\rm obs}$ being the observed value of the cosmological constant. In Eqs.~(\ref{P}) and (\ref{P2}), we implicitly assumed that $\Lambda >0$. When the landscape includes AdS vacua with $\Lambda <0$, some of the AdS regions will crunch prior to the cutoff, and such regions should be treated separately. The probability distribution for $\Lambda <0$ should be calculated from \begin{eqnarray} P (\Lambda) &=& N \left[ \int_0^{X_{\rm crunch}} F(\tau_c(X_{\rm crunch})-\Delta\tau) dX + \int_{X_{\rm crunch}}^1 F(\tau_c(X)-\Delta\tau) dX \right] \\ &=& N \left[ X_{\rm crunch} F(\tau_{\rm crunch} -\Delta\tau) + \int_{X_{\rm crunch}}^1 F(\tau_c(X)-\Delta\tau) dX \right], \label{P3} \end{eqnarray} where $X_{\rm crunch} \equiv X(\tau_{\rm crunch})$ and $\tau_{\rm crunch} \equiv 2 \pi / 3 H_\Lambda$. We will be interested in regions where $\tau_c\gg\tau_m$; then the integral in (\ref{Omegac2}) is dominated by the range $\tau\gg\tau_m$, so we can use Eq.~(\ref{a1}) for ${\tilde a}_\Lambda(\tau)$. This gives \begin{eqnarray} X^{-1}\approx \left\{ \begin{array}{ll} \displaystyle{ \frac{2H_* {\tilde a}_m^3}{9H_\Lambda^3 \tau_m^2} \left[\sinh(3H_\Lambda\tau_c)-3H_\Lambda\tau_c\right] }&~~~~{\rm for}~~ \Lambda > 0 \vspace{0.3cm} \\ \displaystyle{ \frac{2H_* {\tilde a}_m^3}{9H_\Lambda^3 \tau_m^2} \left[-\sin(3H_\Lambda\tau_c)+3H_\Lambda\tau_c\right] }&~~~~{\rm for}~~ \Lambda < 0. \end{array} \right. \label{X1} \end{eqnarray} Note that $\tau_c$ is assumed to be smaller than $\tau_{\rm crunch} \equiv 2 \pi / 3 H_\Lambda$ for $\Lambda < 0$. We use the Press-Schechter form \cite{Press:1973iz,Bardeen:1985tr} with linear perturbation theory for the collapsed fraction $F(\tau)$. The distribution $P(\Lambda)$ can then be found numerically from Eqs.~(\ref{P2}) and (\ref{X1}), as was done in Ref.~\cite{DeSimone:2008bq}. We use the same parameters as those used in that paper (e.g., $\Delta \tau = 5 \times 10^9$ years and the present root-mean-square fractional density contrast averaged over a comoving scale enclosing mass $10^{12} M_\odot$, $\sigma (10^{12} M_\odot) \approx 2.03$), but with updated cosmological parameters from the Planck data, such as $\Omega_\Lambda^{(\rm obs)} = 0.69$ and $\Omega_m^{(\rm obs)} = 0.31$~\cite{Aghanim:2018eyx}. We plot the resulting probability distributions in Fig.~\ref{fig1}, with solid blue and dashed red curves corresponding to the 4-volume and scale factor cutoffs, respectively. The left panel shows the full distributions, while the right panel shows the (normalized) distributions for positive $\Lambda$ on a logarithmic scale. The lighter (darker) blue-shaded regions represent the $1\sigma$ ($2\sigma$) ranges for the probability distribution in the 4-volume cutoff measure.
To plot the distribution for $\Lambda<0$ in the scale factor measure, we set $\tau_c = \tau_{\rm crunch}$ for $\tau_c > \tau_{\rm turn}$, where $\tau_{\rm turn} \equiv \pi / 3 H_\Lambda$ is the turnaround time when the contracting phase begins and $\tau_{\rm crunch} \equiv 2\pi / 3 H_\Lambda$ is the time of the big crunch. Since $\tau_{\rm crunch}$ is twice as large as $\tau_{\rm turn}$, this results in a discontinuous jump of $\tau_c$ and in a larger probability for $\Lambda < 0$ in the scale-factor cutoff measure. We see however that the difference between the distributions in the two measures is not dramatic. The total probability for $\Lambda$ to be positive is $3\%$ for the scale factor and $8\%$ for the 4-volume cutoff measure. We note that in either measure the probability of negative $\Lambda$ is expected to be significantly reduced due to anthropic effects that have not been taken into account here. After the turnaround, galaxies begin to accrete matter at a rate that increases with time, and galactic mergers become more frequent. This may prevent galaxies from settling into stable configurations, which in turn would cause planetary systems to undergo more frequent close encounters with passing stars. Life extinctions due to nearby supernova explosions and to gamma-ray bursts would also become more frequent. Some of these effects have been discussed in Refs.~\cite{Piran:2015yga,Totani:2018zkp}. With all relevant anthropic effects taken into account, both distributions for $\Lambda$ are likely to be in good agreement with observation. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{fig1.pdf} \qquad \includegraphics[width=0.45\linewidth]{fig2.pdf} \caption{Distribution of the cosmological constant in the 4-volume cutoff measure (solid blue curve) and the scale-factor cutoff measure (dashed red curve). The right panel shows the probability distribution $\Lambda \times P(\Lambda)$ for $\Lambda>0$ on a logarithmic scale. All distributions are normalized as $\int P(\Lambda)d\Lambda / \Lambda_{\rm obs}=1$. The lighter (darker) blue-shaded regions represent the $1\sigma$ ($2\sigma$) ranges for the probability distribution in the 4-volume cutoff measure. } \label{fig1} \end{figure} \section{Probability distribution for spatial curvature} In this section we use the 4-volume cutoff measure to calculate the probability distribution for the spatial curvature with the cosmological constant fixed at the observed value. Again, we focus on a subset of bubbles that have the same physical properties as our bubble, apart from the e-folding number of slow-roll inflation inside the bubble, $N_e$. The spacetime inside a nucleated bubble has negative spatial curvature. After a short period of curvature domination, the curvature rapidly decreases due to inflationary expansion and becomes completely negligible by the end of inflation. However, it may become significant again in the late universe and may influence structure formation. The density parameter for the spatial curvature at present (i.e., at the time when the CMB temperature is the same as in our universe at present), $\Omega_k = 1 - \rho / \rho_{\rm cr}$, where $\rho_{\rm cr}$ is the critical density, is related to the e-folding number $N_e$ as $\Omega_k \propto e^{-2N_e}$. The proportionality constant depends on the detailed history of the universe after inflation.
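The origin of this scaling can be sketched as follows (a standard argument, added here for completeness). For an open bubble the curvature density parameter is
\begin{eqnarray}
|\Omega_k| = \frac{1}{(aH)^2}.
\end{eqnarray}
During slow-roll inflation $H$ is nearly constant while $a$ grows by a factor $e^{N_e}$, so $|\Omega_k|$ at the end of inflation is suppressed by $e^{-2N_e}$. The subsequent evolution down to the reference CMB temperature is the same for all bubbles in the subset considered, so it only contributes the common proportionality constant.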
Since the spatial curvature density parameter depends on the reference time, and its notation may be confused with the 4-volume time $\Omega$, we use the time-independent variable $k \equiv (\abs{\Omega_k}^3 / \Omega_\Lambda \Omega_m^2)^{1/3}$ in the following calculation. For inflation at the GUT scale and assuming instantaneous reheating, $k \sim e^{124-2N_e}$~\cite{DeSimone:2009dq}. Let us define $\Omega_{\rm nuc}$ as the 4-volume time at bubble nucleation. It is related to the time of thermalization $\Omega_*$ as \begin{eqnarray} \Omega_* = \Omega_{\rm nuc} + \int_{\tau_{\rm nuc}}^{\tau_*} a^3 d \tau = \Omega_{\rm nuc} (1+ C e^{3 N_e}), \end{eqnarray} where $C$ is a constant that is universal for all bubbles. We can neglect the factor of $1$ in parentheses and obtain $d \Omega_* \propto e^{3N_e} d \Omega_{\rm nuc}$. As we discussed in the previous section, the physical volume nucleating in a 4-volume interval $d \Omega_{\rm nuc}$ is proportional to $d \Omega_{\rm nuc}$. After thermalization, the number of observers is proportional to $e^{3 N_e} F (\tau_c - \Delta \tau)$, and hence the distribution is given by \begin{eqnarray} P (k) dk \propto P_{\rm prior} (N_e (k)) d N_e \int_0^{\Omega_c} e^{3 N_e} F (\tau_c - \Delta \tau) d \Omega_{\rm nuc}, \label{Pk0} \end{eqnarray} where the prior distribution $P_{\rm prior} (N_e)$ is determined by the landscape. Generally we expect that long inflation requires fine-tuning, so $P_{\rm prior} (N_e)$ is a decreasing function of $N_e$. For a random Gaussian landscape one finds \cite{Masoumi:2016eag, Masoumi:2017gmh} \begin{eqnarray} P_{\rm prior} (N_e) \propto N_e^{-3}. \label{PNe} \end{eqnarray} Noting that $F = 0$ for $\Omega_{\rm nuc} \in (\Omega_c / C e^{3 N_e} , \Omega_c)$ and $d N_e / d k \propto 1/k$, we rewrite Eq.~(\ref{Pk0}) as \begin{eqnarray} P(k) \propto k^{-1} P_{\rm prior} (N_e (k)) \int_0^{\Omega_c} F (\tau_c - \Delta \tau) d \Omega_*. \label{Pk} \end{eqnarray} The proportionality constant is determined by the normalization condition $\int P(k) dk = 1$. Although the integral in \eq{Pk} has the same form as \eq{P}, the collapsed fraction $F(\tau)$ is different because of the effect of the spatial curvature. Again, we use the Press-Schechter form \cite{Press:1973iz,Bardeen:1985tr} with linear perturbation theory for the collapsed fraction $F(\tau)$, following Ref.~\cite{DeSimone:2009dq}. In that paper, the collapsed fraction is expressed in terms of $x \equiv \rho_\Lambda / \rho_m \propto \tilde{a}^3$. It is then convenient to rewrite $X \equiv \Omega_* / \Omega_c$ as \begin{eqnarray} X^{-1} \propto \int_0^{\tau_c} \tilde{a}^3 d \tau \propto \int_0^{x_c} \frac{dz}{\sqrt{1 + z^{-1} + k z^{-2/3}}}, \end{eqnarray} where we use $H^2 = H_\Lambda^2 ( 1 + x^{-1} + k x^{-2/3})$ and define $x_c$ as the value of $x$ at $\Omega = \Omega_c$. We can calculate \eq{Pk} by rewriting the integral in terms of $x$ and using the collapsed fraction given in Ref.~\cite{DeSimone:2009dq}. We calculated $P(k)$ numerically with the prior distribution given by Eq.~(\ref{PNe}). We neglect $\Delta \tau$ in \eq{Pk} for simplicity because it has been argued in Ref.~\cite{DeSimone:2009dq} that it does not significantly affect the collapsed fraction. The result is shown as the solid curve in Fig.~\ref{fig3}. This distribution is almost indistinguishable from that in the scale factor cutoff measure~\cite{DeSimone:2009dq}, which is shown by the dashed curve.
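For illustration, the integral above relating $X$ to $x_c$ is straightforward to evaluate numerically. The following R sketch is ours and is purely illustrative (it is not the code used to produce Fig.~\ref{fig3}; the overall normalization is omitted):
\begin{verbatim}
# X^{-1} is proportional to this integral for a given
# curvature parameter k (normalization omitted).
Xinv <- function(xc, k) {
  integrand <- function(z) 1 / sqrt(1 + 1/z + k * z^(-2/3))
  integrate(integrand, lower = 0, upper = xc)$value
}

Xinv(1, 0)     # flat case, k = 0
Xinv(1, 0.03)  # k near the observational bound
\end{verbatim}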
The Planck data favors a slightly negative value of $\Omega_k$~\cite{DiValentino:2019qzk} but is consistent with a spatially flat universe within $2\sigma$~\cite{Aghanim:2018eyx}. The observationally allowed range within $3\sigma$ is about $\abs{\Omega_k} \lesssim 0.01$, or $k \lesssim 3 \times 10^{-2}$, which is indicated by shading in the figure. The probability for the curvature to be in this range is about $94\%$. A detection of curvature will probably be possible in the future if $k \gtrsim 3 \times 10^{-4}$. The range of $k$ where the curvature satisfies the observational bound and is still detectable is shown by the blue-shaded region in the figure. The probability for $k$ to be in this range is about $7\%$ \cite{Freivogel:2005vv,DeSimone:2009dq}. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig3.pdf} \caption{Distribution of spatial curvature in the 4-volume cutoff measure (solid blue curve) and the scale-factor cutoff measure (dashed red curve). The two distributions are essentially the same. The shaded regions are allowed by the Planck constraint. In the blue-shaded region, the spatial curvature may be detected in the future. } \label{fig3} \end{figure} \section{General formalism} So far we have calculated probability distributions in the 4-volume cutoff measure using the approximate relation (\ref{Omegaa2}) between the scale factor and 4-volume cutoffs. If a more accurate description is needed, the analysis becomes more complicated. The reason is that in order to evolve the distribution to larger values of $\Omega$ using $d\Omega=a^3 d\tau$, we need to know the scale factor $a$, which generally takes different values on different parts of the constant $\Omega$ surface. In this section we shall introduce a formalism that can in principle be used to address this issue. We first consider models where eternal inflation is driven by quantum diffusion of a scalar field $\phi$. Let us introduce the distribution function $f(\Omega,\phi,V)$, defined as the fraction of comoving volume occupied by regions with given values of $\phi$ and $V=a^3$ on hypersurfaces of constant $\Omega$. The evolution of the multiverse can then be described by the Fokker-Planck equation \cite{Winitzki:2005ya} \begin{eqnarray} \frac{\partial f}{\partial\Omega}+\frac{\partial j_\phi}{\partial\phi} +\frac{\partial j_V}{\partial V}= 0, \label{FP} \end{eqnarray} where the fluxes $j_\phi$ and $j_V$ are given by \begin{eqnarray} j_\phi=-\frac{\partial}{\partial\phi}\left(Df\right)+ \frac{d\phi}{d\Omega} f, \label{jphi} \end{eqnarray} \begin{eqnarray} j_V = \frac{dV}{d\Omega}f. \end{eqnarray} With $d\Omega=V d\tau$ we can express the drift velocity of $\phi$ as \begin{eqnarray} \frac{d\phi}{d\Omega}=\frac{1}{V}\frac{d\phi}{d\tau}=-\frac{1}{4\pi V}\frac{dH}{d\phi}, \end{eqnarray} where $H(\phi)=[(8\pi/3) U(\phi)]^{1/2}$ is the inflationary expansion rate and $U(\phi)$ is the scalar field potential. Similarly, we find \begin{eqnarray} \frac{dV}{d\Omega}=3{H}, \end{eqnarray} where we have used $H=\frac{1}{a}\frac{da}{d\tau}$. The diffusion coefficient $D$ in Eq.~(\ref{jphi}) can be found from the dispersion of quantum fluctuations of $\phi$ over a proper time interval $d\tau$: \begin{eqnarray} \langle (\delta\phi)^2\rangle = \frac{H^3}{4\pi^2} d\tau = \frac{H^3}{4\pi^2 V} d\Omega = 2D d\Omega, \end{eqnarray} which gives $D=H^3/8\pi^2 V$.
Combining all this, we obtain the following equation for $f(\Omega,\phi,V)$: \begin{eqnarray} V\left(\frac{\partial}{\partial\Omega}+3H\frac{\partial}{\partial V}\right)f-\frac{1}{8\pi^2}\frac{\partial^2}{\partial\phi^2}(H^3 f) -\frac{1}{4\pi} \frac{\partial}{\partial\phi} \left( \frac{d H}{d \phi} f\right)=0. \label{diffusion} \end{eqnarray} Once the function $f(\Omega,\phi,V)$ is found, the comoving and physical volume distributions of $\phi$ on surfaces of constant $\Omega$ can respectively be found from \begin{eqnarray} F(\Omega,\phi)=\int_0^\infty dV f(\Omega,\phi,V) \label{F} \end{eqnarray} and \begin{eqnarray} F_V(\Omega,\phi)=\int_0^\infty dV V f(\Omega,\phi,V). \label{FV} \end{eqnarray} In models with bubble nucleation we can define the distribution $f_j(\Omega,V)$ as the fraction of comoving volume occupied by vacuum of type $j$ with a given value of $V$ on surfaces of constant $\Omega$. It satisfies the equation \begin{eqnarray} V\left(\frac{\partial}{\partial\Omega}+3H_i\frac{\partial}{\partial V}\right)f_i=\sum_j {\tilde M}_{ij} f_j =\sum_j M_{ij}H_j f_j, \label{bubble} \end{eqnarray} where ${\tilde M}_{ij}=M_{ij}H_j$ is the proper time transition matrix and $M_{ij}$ is the scale factor time transition matrix given by Eq.~(\ref{Mij}). The reason we have a proper time transition matrix on the right-hand side of (\ref{bubble}) is that the differential operator $V\partial/\partial\Omega$ on the left-hand side is a derivative with respect to $\tau$. Once again, the comoving and physical volume distributions of different vacua on surfaces of constant $\Omega$ can be found as \begin{eqnarray} F_i(\Omega)=\int_0^\infty dV f_i(\Omega,V) \label{Fi} \end{eqnarray} and \begin{eqnarray} F_{iV}(\Omega)=\int_0^\infty dV V f_i(\Omega,V). \label{FiV} \end{eqnarray} Equations (\ref{diffusion}) and (\ref{bubble}) are difficult to solve analytically, but they may be useful for a numerical analysis in specific models. \section{Summary and discussion} We have proposed a new probability measure for eternally inflating universes, which regulates infinite numbers of events by a cutoff at a constant 4-volume time $\Omega$, defined by Eqs.~(\ref{Omega0}) and (\ref{Omega}). The main advantage of this measure is that it avoids the problems with contracting AdS regions that plagued earlier measure proposals. Otherwise, its properties are similar to those of the scale factor cutoff measure. With suitable assumptions about the landscape, it does not suffer from the Boltzmann brain problem. The predicted distribution for the cosmological constant $\Lambda$ is similar to that in the scale factor measure, but with a higher probability for positive values of $\Lambda$: $P(\Lambda>0)=8\%$ and $3\%$ in the 4-volume and scale factor measures, respectively. The probability of negative $\Lambda$ is likely to be greatly reduced when anthropic effects in contracting regions are properly taken into account, and one expects the resulting distribution to be in good agreement with observation. The probability distribution for the curvature parameter $\Omega_k$ in the new measure is essentially the same as in the scale factor measure, assuming that the cosmological constant is fixed at the observed value. This distribution depends on the prior distribution $P(N_e)$ for the number of e-foldings of slow-roll inflation.
With $P(N_e)\propto N_e^{-3}$, as suggested by random Gaussian models of the landscape, one finds that the probability for $\Omega_k$ to be below the observational upper bound ($\Omega_k \lesssim 0.01$) and still be detectable (that is, $\Omega_k \gtrsim 10^{-4}$) is rather small, $P \sim 7\%$. We note finally that one could introduce a family of measure proposals with properties similar to the 4-volume cutoff. For example, instead of $\Omega$ one could use the ``time'' coordinate \begin{eqnarray} t_p (\tau)=\int_0^\tau d\tau' [{\cal V}^{(3)}]^p \end{eqnarray} with $p>0$. The 4-volume cutoff corresponds to $p=1$. This choice may be preferred because it has a clear geometric meaning. One hopes, however, that the probability measure will eventually be determined by the fundamental theory. \section*{Acknowledgments} We are grateful to Alan Guth and Ken Olum for useful discussions. A.~V. was supported in part by the National Science Foundation under grant PHY-1820872. M.~Y. was supported by JSPS Overseas Research Fellowships and the Department of Physics at MIT. M.~Y. was also supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under grant contract number DE-SC0012567.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} As the number of elderly people and patients grows, providing healthcare services becomes more difficult. Accordingly, an intelligent system can facilitate these services. However, such a system should provide healthcare services without interruption [1]. Consequently, an appropriate hardware and software platform must be applied to design and implement healthcare systems. To this end, wireless sensors as a hardware platform can collect data of the elderly or patients continuously, and the system can provide the right response when necessary. On the one hand, the diversity of the data obtained from sensors in a healthcare system makes these data incorrect, unclear, and sometimes contradictory. On the other hand, sensors attached to an environment and to patients cannot aggregate and make decisions based on the received data. To overcome this problem, a multiagent system is a good software model to manage these data [1].\\ \hphantom{th}Another main issue of healthcare systems is to secure personal data in the wireless sensor networks of healthcare systems against attacks. Since healthcare systems are remotely manageable and based on the Internet, they are vulnerable to common attacks on healthcare systems, such as Denial of Service (DoS) and User to Root (U2R) attacks. Consequently, attackers can make healthcare networks unavailable by DoS attacks, or a user who has access to the data of a healthcare system can gain a higher access level by committing a U2R attack. Therefore, in this research, we propose a framework that contains two steps. In the first step, we design a layered multiagent-based architecture. In the second step, we introduce a layered architecture that assigns intrusion detection systems (IDS) [2] to our agent-based healthcare system. Consequently, we classify agents according to their energy consumption and the security of their data. Then, we allocate an intrusion detection system to each group of agents.\\ \hphantom{th}In multiagent systems (MAS), agents interact with each other autonomously. They do their jobs based on data gathered from sensors without interrupting other processes [1]. Consequently, multiagent systems are a proper method to design our healthcare system.\\ \hphantom{th}In the second step of our framework, we assign intrusion detection systems to the agent-based healthcare system. Consequently, we introduce intrusion detection systems briefly in the following paragraph. Intrusion detection systems monitor and analyze activities in computer systems or computer networks to find suspicious events that intrude on them. Different kinds of intrusion detection systems can be categorized based on the type of events that they monitor and their implementation [2]. In this paper, we use anomaly and misuse (signature-based) host-based intrusion detection systems.\\ \hphantom{th}Therefore, the purpose of this paper is to propose a framework based on multiagent technology for collecting data from an environment and patients using a secured wireless sensor network at a hospital or a health facility. In this paper, we design an agent-based healthcare system to collect sensor network data, provide healthcare services, and secure healthcare sensor networks against possible attacks.\\ \hphantom{th}After this introduction, the rest of this paper is as follows: In Section 2, we briefly discuss related work, which forms the background of our framework. In Section 3, we explain our proposed framework.
The simulation results, which show and evaluate the performance of the framework, are presented in Section 4. In Section 5, we first discuss our contribution to the improvement of healthcare systems and then compare our framework with some similar works from the past. Finally, in Section 6, we end this paper with a conclusion and mention some future works. \section{Related works} \label{sec:2} This section presents some of the related works about remote healthcare systems, secured healthcare systems, and SVM-based intrusion detection systems. The basic concepts of this section are prerequisites for our framework. \subsection{Remote healthcare systems} \label{sec:2-1} Remote healthcare systems, which monitor the status of patients remotely and widely, consist of three main parts: 1) wireless sensors, 2) sensor and signal analysis to detect problems, and 3) alerts to healthcare staff. Aingeru [3, 4, 5], PANGEA [1, 6], MADIP [7], JTH [8], GerAmi [9], Koutkias [10], Kaluza [11], and Cervantes [12, 13] are examples of this kind of healthcare system. Among these healthcare systems, PANGEA and MADIP are more recent than the others, and the Aingeru system is similar to our proposed framework. Consequently, we introduce Aingeru, PANGEA, and MADIP briefly in the following paragraphs.\\ \hphantom{th}The Aingeru healthcare system is a Java-based system. It uses the Jade tool, which is a Java-based tool, as an agent platform. In this system, sensors are connected to PDA devices by a Bluetooth 1.1 wireless connection, and PDA devices are connected to the control center by a GPRS connection. This healthcare system gives intelligent, comprehensive, and constant monitoring of the elderly and patients. Every person under care has a sensor-connected PDA, which is responsible for measuring vital parameters (such as heart rate and the amount of oxygen in the blood). The system evaluates data locally (on a PDA) and delivers results to a relevant specialist in an emergency. The control center of a hospital collects all data from patient-connected sensors and stores them in the central database. Physicians can access all relevant data of patients through a web-based program. The main components of this system are 1) the PDA of a user, which supervises users, 2) the control center, which provides remote services, 3) the health center, which gives medical services, and 4) the technical center, which is responsible for the maintenance of all infrastructure.\\ \hphantom{th}The PANGEA multiagent healthcare system was launched and tested at the El Residencial La Vega healthcare facility in Salamanca, Spain, for 8 months in 2014. In this system, relevant agents respond to the condition of an environment based on sensor data automatically. The following paragraphs give a brief description of the agents in the PANGEA system.\\ \hphantom{th}\textbf{Information Provider Organization:} This agent is the interface of the system to communicate with users.\\ \hphantom{th}\textbf{Home Automation Organization:} This part includes all agents that are responsible for controlling sensors in the environment. These sensors are a smoke detector, a temperature sensor, and a presence detection sensor.
These agents send all data gathered from these sensors to the ZigBee Supervisor.\\ \hphantom{th}\textbf{ZigBee Supervisor Agent:} This agent is responsible for sending data to the database of the system.\\ \hphantom{th}\textbf{Information Agent:} This agent is the interface of the database.\\ \hphantom{th}\textbf{Locating Organization:} Incorporates all agents that are responsible for controlling sensors of patients to locate them.\\ \hphantom{th}\textbf{Caregiver Organization:} Every nurse and physician has one agent that aggregates data related to their activities.\\ \hphantom{th}MADIP is a healthcare system for heterogeneous and wide-range networks such as the Internet or metropolitan and national networks. This system consists of two types of agents: static agents and mobile agents. Static agents offer resources and facilities for mobile agents. Mobile agents operate autonomously and communicate with static agents and host systems by using the network infrastructure. The main parts of the MADIP system are as follows: the user agent, diagnostic agent, and resource agent as static agents; the physician agent as a mobile agent; a knowledge-based data server; and external services.\\ \subsection{Security of agent-based healthcare systems} \label{sec:2-2} In the following, we study earlier works about the security of agent-based healthcare systems. The primary concern of healthcare systems is protecting sensitive data from unauthorized users. The simple approach in [14] uses the Access Control (AC) mechanism to address this problem. However, traditional AC models are inflexible and difficult to apply in healthcare systems, so that paper proposes two data filtering layers (policies) before returning results to the user, including an access policy (public cloud) and a privacy policy (private cloud). It introduces an access control model focusing on privacy protection for healthcare systems. These two sets of policies protect patient records of healthcare systems, so they are accessible securely.\\ \hphantom{th}Fog computing provides storage, computing, and networking between end devices and the cloud [15]. Portable devices can be fog servers, where they can perform many tasks such as processing, visualization, and transferring data to the cloud. The main idea of fog computing is to migrate some tasks of cloud data centers to fog servers on the network edge. Fog computing has also proven to be extremely useful in time-sensitive applications such as healthcare applications [16–18]. Consequently, the use of fog computing improves security because of its proximity to the devices [19] and accelerates real-time data processing of applications [19]. Since medical information is considered sensitive data, the privacy issue in e-health systems is vital. This issue has been taken care of in [20] by prohibiting unauthorized access and encrypting information before sending it to storage.\\ \hphantom{th}The biometric features acquired from biosignals are distinctive and random. Consequently, they can be used to develop security mechanisms for smart healthcare systems. In [21], the Inter-Pulse Intervals (IPIs) extracted from heartbeats are used to generate secret keys to secure wireless body sensor networks (WBSNs).
These keys are distributed to nodes within and outside the WBSNs to secure data transmission in remote healthcare platforms.\\ \hphantom{th}In [22], a three-factor authentication protocol, which is based on Shamir's threshold scheme, increases the security and privacy of data in healthcare systems and minimizes the communication cost. \\ \hphantom{th}The architecture of the healthcare system proposed in [23] is composed of three layers. The first layer, which is the lowest layer and is called the sensor network tier, includes two types of sensors. The first type is the wearable sensor system that pairs with biological sensors (a belt attached to caregivers). The second type is the sensors that are located in a building and are responsible for transmitting data of an environment using a wireless or wired network with the ZigBee protocol [24]. The second layer, called the mobile computing network tier, includes mobile devices such as PDA devices and laptops that connect to a fixed station using a network infrastructure or a multi-hop network. The third layer, the back-end network tier, includes stationary stations and servers that provide application-layer services for the lower layers and is responsible for processing and saving the data received from mobile devices.\\ \hphantom{th}The purpose of this architecture is to keep the confidentiality of the data stored in each layer using a digital signature or proper encryption. For example, in the network layer of this sensor network, AES encryption secures the communication between each pair of nodes. Access of devices in the second layer to the data of the first layer is limited, so access is available only after authentication. Furthermore, each device in the second layer has a public and private key pair that is assigned by a third-party entity from the third layer (a local server). Hosts in the second layer use this key pair to encrypt access to the first layer.\\ \hphantom{th}In [25], the approach is to balance the performance and the security of a sensor-based healthcare system. The sensor network consists of three parts as follows: the Medical Sensor Network (MSN), the Patient Area Network (PAN), and a service center. The proposed security architecture consists of three separate and complementary layers, so there is a security structure for each part of the network.\\ \hphantom{th}The MSN layer includes environment sensors, such as temperature and humidity sensors, and mobile devices, such as PDA devices. Symmetric encryption secures the communication between each pair of nodes in this layer.\\ The security layer of the PAN, which includes patient-connected sensors, secures medical data associated with MSN-layer nodes (such as physicians and nurses) and a service center. This security layer authenticates devices connected to patients by using a private key. The corresponding security layer of the service center consists of a public key and a certificate. Therefore, components of the PAN and the MSN can receive a pair of private and public keys and a unique identification number (ID) by having the public key of the service center certificate.\\ \hphantom{th}To conclude, encryption algorithms are proposed to secure remote healthcare systems in these pieces of research. However, cryptographic algorithms cannot confront attacks other than eavesdropping and data modification, which means these systems are vulnerable to other common attacks on remote healthcare systems, such as buffer overflow.
Therefore, in this research, we intend to improve this security weakness of earlier remote healthcare systems by considering the energy consumption of sensors and the sensitivity of their data.\\ \subsection{SVM-based IDS} \label{sec:2-3} In our proposed framework, we intend to improve the security weakness of remote healthcare systems mentioned in the previous paragraph. For this purpose, we use intrusion detection systems in our proposed framework. We explain the mechanism of these intrusion detection systems in the next section. They use the support vector machine to classify network traffic. Therefore, in this section, we review earlier works about intrusion detection systems to show the advantages of SVM in comparison to other network traffic classifiers.\\ \hphantom{th}In [26], machine-learning methods such as the Optimum-Path Forest (OPF), SVM, and K-Nearest Neighbors (KNN) collaborate to find abnormal traffic in computer networks. The network traffic contains information such as the source and destination IP addresses, packet count, service type, protocol type, source port, destination port, and start and end time of a session between two hosts in a network. The focus of this research is on DoS, brute force, and port scanning attacks, and the IDS in this research detects all three types.\\ \hphantom{th}In [27], the IDS consists of the principal component analysis (PCA) method and the SVM. The approach of this research is to select a suitable set of features. The NSL-KDD standard dataset is used to evaluate the performance of the proposed IDS. \\ \hphantom{th}Research [28] describes SVM as one of the best methods to detect abnormal behaviors in networks and proposes an SVM-based IDS. It mentions that SVM has a high computational cost, although the SVM will perform better if the number of features of the training set decreases. \\ \hphantom{th}Research [29] proposes an SVM-based IDS to identify attacks in enterprise networks. The features are selected by information techniques to train multiple SVMs. According to [29], this method has a higher detection rate than an artificial neural network. Furthermore, some other works in this field show that SVM performs better than other network traffic classifiers [30, 31].\\ \hphantom{th}The earlier research reviewed in this section provides the background for our proposed framework. Therefore, in the next section, we describe our framework.\\ \section{System overview} \label{sec:3} In this section, we introduce our framework for multiagent-based healthcare systems, which facilitates the provision of healthcare services and simultaneously secures the wireless sensor network of healthcare systems against attacks.\\ \hphantom{th}Multiagent systems help us to integrate data in remote healthcare systems due to the distribution of the data. Each agent has a specific responsibility and interacts with other agents to manage and analyze the data to improve services. Therefore, we design our architecture based on multiagent systems. Consequently, as shown in Figure 1, the first step is to define and plan the agents of our system. The second step is to classify agents based on their data and energy capacity. Then, for each group of agents, we consider a proper IDS to secure the sensor network of our healthcare system against attacks.\\ \hphantom{th}Therefore, in the first part of this section, we design our multiagent framework and plan the agents, their capabilities, and their goals.
In the second part of this section, we classify agents according to their data and energy capacity. Then we add a suitable IDS to each group of agents.\\ \begin{figure} \includegraphics{Fig1.eps} \caption{The guideline of our proposed framework} \label{fig:1} \end{figure} \subsection{Multiagent design} \label{sec:3-1} In this section, we present the multiagent design of our healthcare system. Figure 2, which represents all agents and the interactions between them in the Tropos methodology, shows the multiagent architecture of our proposed framework. Tropos [32–34] is a software engineering methodology for designing multiagent systems. It covers all agent concepts, such as goals, plans, and tasks, at the development stage, and it focuses on the analysis of early software requirements. The actor diagram is used at the stage of determining the basic requirements of the software. In the Tropos methodology, a circle represents an actor, an ellipse represents a goal, a cloud represents a soft goal, and a dashed line implies that there are several sub-systems of the same kind. A soft goal is an ambiguous goal of a system for which there is no clear criterion of achievement. Actors in multiagent systems are agents that play a role in the system. Therefore, according to Figure 2, there are five actors in our proposed framework. In the following paragraphs, we introduce these agents (actors).\\ \hphantom{th}\textbf{Patient agent} manages all data related to patients. These data include the profile of patients, heart rate, blood pressure, body temperature, and the location of patients. The database center saves these data at the start of admission to a health facility. Also, real-time sensors attached to patients collect other necessary data such as blood pressure, heart rate, and body temperature. Presence sensors attached to patients also find the location of patients and detect the presence of patients in their beds. This agent detects the condition of patients after integrating the health monitoring data. After reviewing the collected data, if the patient agent diagnoses the health condition of a patient as unsuitable, it will alert the nurse agent. The patient agent can also send a patient's request to the nurse agent.\\ \hphantom{th}\textbf{Nurse agent} manages the activities of nurses according to their priorities. This agent reports the status of patients to a specific nurse and warns in an emergency. It also requests nurses to do activities and collects reports from them.\\ \hphantom{th}\textbf{Physician agent} helps physicians do their jobs on portable devices such as tablets, laptops, and smartphones. This agent helps them monitor the status of patients and receive the reports of nurses. Physicians check the condition of patients and the activities of nurses virtually with the help of the physician agent. They can also check the health condition of patients and send out all activities that nurses should do for patients with the help of this agent.\\ \hphantom{th}\textbf{Ambient agent} controls sensors in an environment, including smoke detection sensors and temperature sensors. This agent identifies the condition of an environment after collecting the data of these sensors. If this agent detects an emergency in an environment, such as a fire near patients, it will alert nurses.\\ \hphantom{th}\textbf{Database agent} is the database interface. It maintains and manages the profile of patients.
This agent also keeps alert messages together with the feedback of nurses.\\ \begin{figure*} \includegraphics[width=1\textwidth]{Fig2.eps} \caption{The actor diagram of the proposed framework in the Tropos methodology} \label{fig:2} \end{figure*} \hphantom{th}According to Figure 3, after the admission of patients to a healthcare center, the interactions between the agents begin as follows:\\ \hphantom{th}1. After collecting data from patients, the patient agent sends the identity information of patients, such as name, age, and date of entrance, to the database agent. The database agent saves them in the database.\\ \hphantom{th}2. The patient agent analyzes data, including blood pressure, heart rate, and body temperature, to determine the health condition of patients. After analyzing the obtained data, if the patient agent detects that the health condition of a patient is unsuitable, it will alert the nurse agent.\\ \hphantom{th}3. The patient agent sends the request of the patient to the nurse agent.\\ \hphantom{th}4. The ambient agent collects the data of ambient sensors, including smoke detector sensors and temperature sensors, to detect the status of an environment. In an emergency, this agent alerts the nurse agent.\\ \hphantom{th}5. The nurse agent sends the activity reports of nurses to the physician agent after nurses react to received alerts.\\ \hphantom{th}6. The nurse agent sends activity reports to the database agent to save the alerts and the reactions to them.\\ \hphantom{th}7. The physician agent can request the nurse agent to do treating activities.\\ \hphantom{th}8. The physician agent can ask the patient agent to send the health condition of patients.\\ \hphantom{th}9. The patient agent responds to the request of the physician agent by sending blood pressure, heart rate, body temperature, etc. \\ \begin{figure*} \includegraphics[width=1\textwidth]{Fig3.eps} \caption{The association of the agents in the proposed framework} \label{fig:3} \end{figure*} \subsection{Layered architecture of intrusion detection systems} \label{sec:3-2} As we mentioned before, the second step is securing the healthcare system. We use a clustered wireless sensor network [35] (one of the most common network topologies in sensor networks) in our healthcare system. Because the activities of each sensor differ, the probability of an attack is different for each one. Therefore, in the following paragraph, we discuss the mechanism of the IDS for clustered wireless sensor networks in the research [36].\\ \hphantom{th}The capabilities of sensors in clustered sensor networks are heterogeneous [37, 38]. Sinks need to collect data from all clusters in these networks, so they have more processing power and use more energy than sensors. Cluster heads also perform more processing than ordinary sensors, so they use more energy. Other sensors do not have the same capacity as cluster heads because they only collect ambient data within their scope. Consequently, the capabilities of sinks exceed those of cluster heads, and the capabilities of cluster heads exceed those of other sensors. As a result, sinks and cluster heads are more affected by attacks than ordinary sensors, and they should have better security measures.\\ \hphantom{th}We arrange the agents of our framework and assign an IDS suitable for each group of agents for similar reasons. As shown in Figure 4, first, we classify the agents based on the sensitivity of their data and the energy level of their corresponding sensors.
In this case, the database agent and other agents that have data from aggregated sensors have better security measures than those with restricted data. \\ \hphantom{th}The patient agent collects the health data of patients, and the ambient agent collects the data of ambient sensors. Therefore, these two agents have restricted data, which are not valuable for attackers. Consequently, attackers rarely attack these agents. Furthermore, the wearable sensors of patients have limited energy, so we consider an anomaly IDS for these agents. Anomaly intrusion detection systems, which have a high detection rate, create a model of normal traffic and detect network attacks by comparing network traffic with the model. However, anomaly intrusion detection systems wrongly identify normal traffic whose pattern differs from the rest as an attack, so they have a high false-positive rate and low accuracy [36]. They also have a low computational cost and low power consumption because of their simple structure, which makes them suitable for both the patient and the ambient agents.\\ \hphantom{th}The nurse agent collects the health status of patients and the status of the ambient agents. Then, in an emergency, the nurse agent alerts nurses, and after recording the feedback of nurses, the nurse agent sends a record of the alerts to the database agent. Besides, the physician agent can ask the nurse agent to do a specific action, such as sending the health status of patients. Consequently, the nurse agent and the physician agent collect data from patients and environments, which makes them more likely to be attacked than the patient agent and the ambient agent. The physician agent and the nurse agent do not collect data directly from sensors, and they run on mobile devices such as mobile phones, so they have a larger energy supply than the ambient agent and the patient agent. As a result, we consider a misuse intrusion detection system for the physician agent and the nurse agent. Misuse intrusion detection systems detect the type of an attack in addition to distinguishing normal from attack traffic, so they have a more complex structure, a higher processing cost, and more energy consumption than anomaly intrusion detection systems.\\ \hphantom{th}The database agent saves the profiles of patients, the alerts of the patient agent and the ambient agent to the nurse agent, and the reports of nurses' reactions. This agent collects and saves all sensor data in our healthcare system. Consequently, the database agent is more likely to be attacked than the rest of the agents and requires a better intrusion detection system than the two lower layers in Figure 4. As a result, we consider a hybrid intrusion detection system for this agent. In hybrid intrusion detection systems, an anomaly intrusion detection system and a misuse intrusion detection system complement each other. The anomaly intrusion detection system, which has a high detection rate, detects anomalous traffic. Then the misuse intrusion detection system corrects the traffic that the anomaly intrusion detection system, due to its low accuracy, wrongly flags as abnormal [39]. Finally, the decision-making unit determines the type of traffic, as sketched below.
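\hphantom{th}For illustration, the decision logic of this unit (listed as rules in Table 1 below) can be written as a small function. The following R sketch is illustrative only and is not the code of the framework; the argument names \texttt{anomaly} and \texttt{misuse} for the two verdicts are ours.
\begin{verbatim}
# Decision-making unit of the hybrid IDS (sketch).
# 'anomaly' is the anomaly-IDS verdict: "normal" or "attack".
# 'misuse' is the misuse-IDS verdict: "normal" or an attack
# class such as "neptune".
decide <- function(anomaly, misuse) {
  # Rule 1: trust the anomaly IDS when it reports normal traffic.
  if (anomaly == "normal") return("normal")
  # Rule 2: the anomaly IDS raised an alarm, but the misuse IDS
  # found no attack; treat the alarm as a misclassification.
  if (misuse == "normal") return("normal")
  # Rule 3: both detectors agree; the misuse IDS gives the class.
  misuse
}

decide("attack", "neptune")  # returns "neptune"
decide("attack", "normal")   # returns "normal"
\end{verbatim}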
Therefore, hybrid intrusion detection systems combine the high detection rate of anomaly intrusion detection systems with the high accuracy of misuse intrusion detection systems.\\ \begin{figure} \includegraphics{Fig4.eps} \caption{Layered architecture of the intrusion detection systems in the proposed framework} \label{fig:4} \end{figure} \hphantom{th}Table 1 shows the rules of the decision-making unit of the hybrid intrusion detection system. As we mentioned before, anomaly intrusion detection systems have a high detection rate in detecting normal traffic. In the first rule, this feature helps to distinguish normal traffic. We also mentioned that anomaly intrusion detection systems have low accuracy in detecting attacks. The high accuracy of misuse intrusion detection systems compensates for this low accuracy in the second and third rules. Consequently, if the misuse IDS does not find an attack that the anomaly IDS detects, the traffic is normal. If the misuse IDS detects the class of an attack that the anomaly IDS finds, then it is an attack. \begin{table} \centering \caption{Rules of the hybrid IDS in our framework} \label{tab:1} \resizebox{\linewidth}{!}{ \begin{tabular}{>{\hspace{0pt}}p{1\linewidth}} \hline \textbf{Rules} \\ \hline If the anomaly IDS detects traffic as normal, the traffic is normal. \\ \hline If the anomaly detection detects an attack and the misuse detection does not detect any attack, then it is not an attack and the anomaly classification is incorrect. \\ \hline If the anomaly detection detects an attack and the misuse detection similarly detects an attack, then it is an attack, and the decision-making unit determines the class of the attack. \\ \hline \end{tabular} } \end{table} \hphantom{th}Hybrid intrusion detection systems have a higher computational cost and more energy usage than anomaly and misuse intrusion detection systems because they use two different types of intrusion detection systems at the same time. The database agent has an unlimited energy supply (the database server runs on a mains power supply), so a hybrid intrusion detection system fits the database agent perfectly. \\ \hphantom{th}So far, we have introduced the details of our proposed framework. In the next section, we discuss how to implement the agents in this framework. Additionally, we evaluate the intrusion detection system in each layer of the framework to verify the correctness of the agents' classification and of assigning an IDS to each layer. \section{Experiment} \label{sec:4} After the analysis and design phase, we implement and test our framework. In this section, we focus on the evaluation and implementation results of our healthcare system. We also examine our solution by using a dataset and then present the runtime and the memory consumption of the intrusion detection systems. Finally, we compare previous works with our proposed method in terms of the security of healthcare systems. \subsection{Implementation of our multiagent-based healthcare system} \label{sec:4-1} We implement the agents of our proposed framework with the JADE tool (version 4.4) in Java (JDK 8) using NetBeans 8.2. First, we create the required agents from the classes that we have implemented with JADE in Java. As we mentioned previously, we use the following agents in this study: the database agent, physician agent, nurse agent, ambient agent, and patient agent.
The database agent saves and manages the profile of patients under care and keeps alert messages and the feedback of nurses. In this study, we create a dataset consisting of 1000 patients. These patient samples include blood pressure, body temperature, heart rate, location of patients, ambient emergency, and ambient temperature samples. The patient agent and the ambient agent can access these data. According to the rules of the patient agent and the ambient agent, if the health data of patients and the situation of an environment become abnormal, these agents will alert the nurse agent. We also implement the other interactions between the agents, which we described previously. \subsection{The data collection of the intrusion detection systems} \label{sec:4-2} As we discussed in Section 2, earlier research proposes proper encryption algorithms in three layers to secure healthcare systems. However, these algorithms cannot confront common attacks on healthcare systems other than eavesdropping and data modification. Therefore, our focus in this study is to propose a healthcare system framework that has tight security against common healthcare system attacks. Consequently, we first study the usual attacks on healthcare systems, which previous research mentions. Then we determine their types according to the general classification of network attacks to get an appropriate sample of network traffic to evaluate the intrusion detection systems of our framework.\\ \hphantom{th}Network attacks are classified into four groups [40, 41] as follows: Denial of Service (DoS), Remote to Local (R2L), User to Root (U2R), and Probe. Also, according to [42-44], spoofing, impersonation, elevation of privileges, DoS attacks, eavesdropping, and data modification are the usual attacks on healthcare networks. These attacks, except eavesdropping and data modification, are mainly U2R and DoS attacks.\\ \hphantom{th}To implement and evaluate the intrusion detection systems of our proposed framework, we use the NSL-KDD [45] dataset. This dataset solves some of the inherent problems of the KDD'99 dataset and is a newer version of it. Therefore, the NSL-KDD has some benefits over the KDD datasets, as follows: 1) It does not include redundant or duplicate records in the train set, so the classifiers are not biased towards the more frequent data. 2) There are no duplicate records in the proposed test sets. Therefore, the performance of the learners is not biased by methods that have better detection rates on frequent data. 3) The number of selected records from each difficulty group is proportional to the percentage of the related data in the original KDD dataset. 4) The number of records in the train and test sets is reasonable, which makes it affordable to run experiments on this dataset.\\ \hphantom{th}To cover more attack types of healthcare systems, we extract the U2R and DoS attacks from the attack samples of the NSL-KDD dataset by writing a query in Excel for each attack (an equivalent programmatic extraction is sketched below). Consequently, as Table 2 shows, we have ten datasets of the attacks and a dataset of normal traffic from the NSL-KDD.\\ \hphantom{th}Then, we implement a program in the Java environment to create our dataset consisting of the ten attacks. This dataset contains 10,000 samples, consisting of 4,000 samples of normal traffic and 300 samples of each of the ten attacks.
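\hphantom{th}For illustration, the same extraction can be done programmatically. The following R sketch is ours and only illustrates this step; it assumes the common comma-separated distribution of the NSL-KDD training file (\texttt{KDDTrain+.txt}), in which the class label (e.g., \texttt{normal}, \texttt{neptune}, \texttt{buffer\_overflow}) is the 42nd column.
\begin{verbatim}
# Illustrative sketch: select DoS and U2R records from NSL-KDD.
dos <- c("back", "land", "neptune", "pod", "smurf", "teardrop")
u2r <- c("buffer_overflow", "loadmodule", "perl", "rootkit")

kdd   <- read.csv("KDDTrain+.txt", header = FALSE)
label <- kdd[[42]]                   # class label column (assumed)

normal_set <- kdd[label == "normal", ]       # normal traffic
attack_set <- kdd[label %in% c(dos, u2r), ]  # DoS + U2R attacks
\end{verbatim}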
We use three datasets because our proposed framework uses intrusion detection systems in three different layers.\\ \hphantom{th}Table 3 and Table 4 show the traffic sample types and the number of each in the train and test sets for the anomaly IDS and for the misuse and hybrid IDSs, respectively. We obtain this information by analyzing the datasets with the Weka tool (version 3.9) [46, 47]. In the next part of this section, we implement the intrusion detection systems by using SVM and evaluate them with these datasets.\\ \subsection{The implementation results of the intrusion detection systems} \label{sec:4-3} We implement the intrusion detection systems of our proposed framework with the R tool (version 3.5.0) [48]. These intrusion detection systems classify the network traffic by using a support vector machine (SVM) [30]. SVM is one of the most efficient machine learning algorithms for pattern recognition problems, such as classifying network traffic and image processing.\\ \hphantom{th}In the IDS of each layer of our framework, SVM separates the pattern of abnormal traffic from normal traffic, which is a non-linear pattern. Unlike linear patterns, which are easily distinguishable, non-linear patterns cannot be easily distinguished and separated, so they need further processing to become separable. Consequently, a kernel function maps the original data to a higher-dimensional space in which non-linear patterns can be classified [30].\\ \hphantom{th}In the R tool, functions for pattern recognition problems are available in the caret package, and the kernel functions are in the kernlab package. Therefore, we use these two packages in the implementation of the intrusion detection systems of our framework. We use the ksvm function from the kernlab package to use SVM as a network traffic classifier. The first argument of the ksvm function is the classification attribute; in this study, it is the traffic type, which can be normal or the name of an attack (abnormal). The second argument declares the dataset that SVM classifies. The third argument selects the type of the ksvm model; we use the C-svc type, which solves data classification problems. Finally, we specify the kernel function; the kernel we use to classify our network traffic is the radial basis function (rbfdot in kernlab). After we implement the intrusion detection system of each layer, we evaluate the parameters of each layer's IDS.
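\hphantom{th}For readers more familiar with Python than R, the classifier just described corresponds to an SVM with an RBF kernel; the following scikit-learn sketch is an analogue of the kernlab call and is not part of our implementation (\texttt{X\_train}, \texttt{y\_train}, and \texttt{X\_test} are assumed to hold the numeric features and the traffic-type labels):
\begin{verbatim}
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)        # traffic type: 'normal' or attack name
y_pred = clf.predict(X_test)
\end{verbatim}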
In the following, we discuss these parameters and draw conclusions.\\ \begin{table} \centering \caption{DoS and U2R attacks of the NSL-KDD dataset} \label{tab:2} \resizebox{\linewidth}{!}{ \begin{tabular}{ll} \hline \textbf{Attack name} & \textbf{Attack type} \\ \hline back & DoS \\ buffer overflow & U2R \\ land & DoS \\ loadmodule & U2R \\ neptune & DoS \\ perl & U2R \\ pod & DoS \\ rootkit & U2R \\ smurf & DoS \\ teardrop & DoS \\ \hline \end{tabular} } \end{table} \begin{table} \centering \caption{The number of traffic samples of each type in the dataset of the anomaly IDS} \label{tab:3} \resizebox{\linewidth}{!}{ \begin{tabular}{lrr} \hline \textbf{Traffic type} & \multicolumn{2}{c}{\textbf{Dataset}} \\ \hline & Train set & Test set \\ \hline Attack & 1471 & 997 \\ Normal & 4000 & 4000 \\ Total & 5471 & 4997 \\ \hline \end{tabular} } \end{table} \begin{table} \centering \caption{The number of traffic samples of each type in the dataset of the misuse IDS and hybrid IDS} \label{tab:4} \resizebox{\linewidth}{!}{ \begin{tabular}{llrr} \hline \multicolumn{2}{l}{\textbf{Traffic type}} & \multicolumn{2}{c}{\textbf{Dataset}} \\ \hline & & Train set & Test set \\ \hline \multirow{10}{*}{Attack} & neptune & 300 & 300 \\ & back & 300 & 300 \\ & smurf & 300 & 300 \\ & buffer overflow & 30 & 20 \\ & pod & 201 & 41 \\ & loadmodule & 9 & 2 \\ & perl & 3 & 2 \\ & land & 18 & 7 \\ & rootkit & 10 & 13 \\ & teardrop & 300 & 12 \\ \hline \multicolumn{2}{l}{Normal} & 4000 & 4000 \\ \hline \multicolumn{2}{l}{Total} & 5471 & 4997 \\ \hline \end{tabular} } \end{table} \hphantom{th}Table 5 shows the average runtime and memory consumption of the intrusion detection systems of the proposed framework over ten runs. The information in this table supports the assignment of an intrusion detection system to each layer of the proposed framework. As we mentioned earlier, the structure of the anomaly IDS is simpler than those of the misuse and hybrid IDSs. Therefore, it has less runtime and memory consumption than the misuse and hybrid IDSs in Table 5, which makes it suitable for the patient and ambient agents, which have limited energy. The misuse (signature-based) IDS, which we assign to the nurse and physician agents, has more runtime and memory consumption than the anomaly IDS. The hybrid IDS, which we assign to the database agent, runs two intrusion detection systems at the same time and therefore has the highest runtime and memory consumption.\\ \hphantom{th}Table 6 shows the detection rate and false-positive rate of the intrusion detection systems of the proposed framework.
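\hphantom{th}Throughout, we assume the standard definitions of these two measures, where TP, FP, TN, and FN denote true positives, false positives, true negatives, and false negatives, respectively: \[ \mathrm{DR} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}, \qquad \mathrm{FPR} = \frac{\mathrm{FP}}{\mathrm{FP}+\mathrm{TN}}. \]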
This information further supports the assignment of an intrusion detection system to each layer of the proposed framework.\\ \hphantom{th}According to Table 6, the anomaly IDS has a higher detection rate but also a higher false-positive rate (lower accuracy) than the misuse IDS. This makes the anomaly IDS suitable for the patient and ambient agents, because they hold restricted network data, so attackers rarely attack these agents. The misuse IDS has a lower false-positive rate (higher accuracy) than the anomaly IDS, which makes it suitable for the nurse agent and the physician agent; these hold more network data than the patient and ambient agents and are therefore more likely to be attacked. In the hybrid IDS, the anomaly IDS and the misuse IDS complement each other. In other words, as Table 6 shows, it benefits from the high detection rate of the anomaly IDS and the high accuracy of the misuse IDS. Consequently, it is suitable for the database agent, which is the most likely target of attacks.\\ \begin{table} \centering \caption{Average runtime and memory usage of the intrusion detection systems used in the framework} \label{tab:5} \resizebox{\linewidth}{!}{ \begin{tabular}{lrr} \hline \textbf{IDS} & \textbf{Runtime (s)} & \textbf{Memory usage (MB)} \\ \hline Anomaly & 3.001 & 275.088 \\ Misuse & 7.784 & 285.486 \\ Hybrid & 11.474 & 286.138 \\ \hline \end{tabular} } \end{table} \section{Our Contributions} \label{sec:5} As we mentioned in Section 2, spoofing, impersonation, elevation of privileges, DoS attacks, eavesdropping, and data modification are the usual attacks on healthcare networks. According to the earlier research discussed before, proper encryption algorithms were proposed in three layers to secure healthcare systems; these algorithms cannot confront any of these attacks except eavesdropping and data modification. Therefore, the main advantages of our proposed framework in comparison with earlier works are as follows: 1) Designing a healthcare system by using multiagent technology to collect the data of the healthcare network, i.e., one agent with specific rules for each patient and for the environment. 2) Considering security measures to confront possible attacks other than eavesdropping, besides the provision of healthcare services. 3) Allocating SVM-based intrusion detection systems appropriate to each group of agents to save energy and computational cost in the network. \begin{table} \centering \caption{Detection rate and false positive rate of the intrusion detection systems used in the framework} \label{tab:6} \resizebox{\linewidth}{!}{ \begin{tabular}{lrr} \hline \textbf{IDS} & \textbf{DR (\%)} & \textbf{FPR (\%)} \\ \hline Anomaly & 95.01 & 9.97 \\ Misuse & 50.04 & 0.9 \\ Hybrid & 97.2 & 0.88 \\ \hline \end{tabular} } \end{table} \section{Conclusion and future works} \label{sec:6} Our goal in this paper is to design a healthcare system based on wireless sensor networks and to secure it against unauthorized access and network attacks. Consequently, we create a healthcare system with new capabilities. In this system, we consider security measures appropriate to wireless sensor networks besides providing medical services.
For this purpose, after defining the agents and their interactions, intrusion detection systems are used to confront healthcare network attacks. These intrusion detection systems are matched to the energy level and the data sensitivity of each layer in our framework. Also, the implementation and evaluation of the IDS of each layer support the correctness of our framework design. We also discuss SVM as an efficient network traffic classifier in our framework. Our proposed framework consists of two steps, so each step can be implemented with other methods and tools and the results compared. Consequently, we suggest the following items as future work: 1) Using a standard medical dataset in the multiagent design phase. 2) Using other learning algorithms to set the rules of the patient agent and the ambient agent, so that they learn the usual behavior of patients and ambient sensors. This might enhance the flexibility of the agents.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The problem of estimating the covariance/scatter matrix from a set of observation vectors is a crucial step in many signal processing and machine learning applications, such as signal detection, clustering and distance learning. Among all the possible non-Gaussian and heavy-tailed statistical data models, the family of Complex Elliptically Symmetric (CES) distributions has been recognized to provide a general and reliable characterization of random observations in a wide range of real-world scenarios \cite{Esa}. In short, a set of $L$ i.i.d. CES distributed vectors, say $\mathbb{C}^N \ni \mb{z}_l \sim CES_N(\bs{\mu}_0, \bs{\Sigma}_0, h_0),\; l=1,\ldots,L$, is fully characterized by a location parameter $\bs{\mu}_0$, a scatter matrix $\bs{\Sigma}_0$ and a density generator $h_0:\mathbb{R}_0^+ \rightarrow \mathbb{R}^+$ that generally plays the role of a \textit{nuisance} function. In fact, inference procedures for CES distributed data usually involve the joint estimation of $\bs{\mu}_0$ and $\bs{\Sigma}_0$ in the presence of an unknown density generator $h_0$. Remarkably, this additional, \textit{infinite-dimensional}, nuisance parameter puts the CES model in the framework of \textit{semiparametric} models. Note that, due to the well-known scale ambiguity between $\bs{\Sigma}_0$ and $h_0$, only a scaled version of the scatter matrix, called \textit{shape matrix}, is identifiable. For this reason, from now on, we only consider the shape matrix $\mb{V}_{1,0} \triangleq \bs{\Sigma}_0/[\bs{\Sigma}_0]_{1,1}$ as the parameter of interest instead of the unconstrained scatter matrix $\bs{\Sigma}_0$. As recently pointed out in both the statistical \cite{Hallin_P_Annals, Hallin_Annals_Stat_2, Hallin_P_2006, PAINDAVEINE} and signal processing literature \cite{For_EUSIPCO,For_SCRB, For_SCRB_complex, Sem_eff_est_TSP}, the \textit{semiparametric} nature of the CES model allows for the derivation of semiparametric information bounds and robust inference procedures able to cope with the lack of \textit{a priori} knowledge of the actual density generator $h_0$. Common inference algorithms for CES distributed data are based on the celebrated class of robust $M$-estimators that encompasses Huber's and Tyler's estimators as special cases (see e.g. \cite{Esa}). The two main advantages of $M$-estimators of shape matrices are: \textit{i}) their performance does not drop dramatically in severe heavy-tailed scenarios and \textit{ii}) they are $\sqrt{L}$-consistent under any, possibly unknown, density generator $h_0$. On the other hand, their major drawback is their lack of (semiparametric) efficiency, as shown in \cite{For_SCRB, For_SCRB_complex}. In their seminal paper \cite{Hallin_Annals_Stat_2}, building upon Le Cam's theory of Local Asymptotic Normality and on the invariance properties of rank statistics\footnote{The order statistics $Q_{L(1)}<Q_{L(2)}<\ldots<Q_{L(L)}$ of a set of (continuous) real-valued random variables $Q_1,\ldots,Q_L$ are the values of such random variables ordered in an ascending way. The rank $r_l$ of $Q_l$ is its position index in the order statistics.}, Hallin, Oja and Paindaveine have shown that it is possible to derive a shape matrix estimator able to reconcile the two dichotomic properties of \textit{robustness} and \textit{semiparametric efficiency}. This estimator, derived in \cite{Hallin_Annals_Stat_2} for Real Elliptically Symmetric (RES) distributed data, belongs to the class of rank-based $R$-estimators.
In our recent work \cite{Sem_eff_est_TSP}, a tutorial derivation of such an $R$-estimator has been proposed together with its extension to CES data. The aim of the present paper is then to validate the theoretical results about the complex $R$-estimator provided in \cite{Sem_eff_est_TSP} with a comprehensive investigation of its statistical properties. Specifically, its finite-sample performance will be analyzed in various scenarios and its semiparametric efficiency assessed by comparing its Mean Squared Error with the Constrained Semiparametric Cram\'er-Rao Bound (CSCRB) derived in \cite{For_SCRB, For_SCRB_complex}. Moreover, its robustness to \textit{outliers}, i.e. random vectors characterized by a different distribution with respect to that of the observations, will be investigated through numerical simulations. \textit{Notation}: The notation used in this paper follows the one introduced in \cite{Sem_eff_est_TSP} and is not reported here for brevity. However, for the sake of clarity, we recall the definition of some operators and special matrices that will be extensively used ahead. Specifically, $\mathrm{vec}$ indicates the standard vectorization operator that maps column-wise the entries of an $N \times N$ matrix $\mathbf{A}$ into an $N^2$-dimensional column vector $\cvec{\mb{A}}$. The operator $\ovec{\mb{A}}$ defines the $(N^2-1)$-dimensional vector obtained from $\cvec{\mb{A}}$ by deleting its first element, i.e. $\cvec{\mb{A}} \triangleq [a_{11},\ovec{\mb{A}}^T]^T$. A matrix $\mb{A}$ such that $[\mb{A}]_{1,1} \triangleq 1$ is indicated as $\mb{A}_1$. Let us define the following two matrices: \begin{equation} \Pi^{\perp}_{\cvec{\mb{I}_N}}=\mb{I}_{N^2} - N^{-1}\mathrm{vec}(\mb{I}_N)\mathrm{vec}(\mb{I}_N)^T, \end{equation} \begin{equation} \mb{P} \triangleq \quadre{\mb{e}_2|\mb{e}_3|\cdots| \mb{e}_{N^2}}^T, \end{equation} where $\mb{e}_i$ is the $i$-th vector of the canonical basis of $\mathbb{R}^{N^2}$. For the sake of interpretability and of consistency with the existing literature, in all our plots we show the results related to a re-normalized version of each considered estimator as: \begin{equation}\label{re_norm} \widehat{\mb{V}}_\gamma^\varphi \triangleq N \widehat{\mb{V}}_{1,\gamma}^\varphi/\trace{\widehat{\mb{V}}_{1,\gamma}^\varphi}, \end{equation} where $\gamma$ and $\varphi$ indicate the estimator at hand. As performance index for the shape matrix estimators, we use: \begin{equation} \varsigma_\gamma^\varphi \triangleq \norm{E\{\mathrm{vec}( \widehat{\mb{V}}_\gamma^\varphi-\mb{V}_0)\mathrm{vec}(\widehat{\mb{V}}_\gamma^\varphi-\mb{V}_0)^H\}}_F, \end{equation} while as performance bound we adopt the index \cite{For_SCRB,For_SCRB_complex}: \begin{equation} \varepsilon_{CSCRB} \triangleq \norm{[\mathrm{CSCRB}(\bs{\Sigma}_0,h_0)]}_F. \end{equation} Note that $\mb{V}_0 = N\bs{\Sigma}_0/\trace{\bs{\Sigma}_0}$, while $\bs{\Sigma}_0$ and $h_0$ represent the true scatter matrix and the true density generator. In all the simulations presented in this paper, we use the following common setting: \begin{itemize} \item $\bs{\Sigma}_0$ is a Toeplitz Hermitian matrix whose first column is given by $[1,\rho, \ldots,\rho^{N-1}]^T$; $\rho = 0.8e^{j2\pi/5}$ and $N=8$.
\item The zero-mean data are generated according to a complex $t$-distribution $P_{Z}(\mb{z}|\bs{\Sigma}_0, h_0)$ whose pdf is given by: \begin{equation} \label{true_CES} p_{Z}(\mb{z}|\bs{\Sigma}_0, h_0) =|\bs{\Sigma}_0|^{-1} h_0 \left( \mb{z}^{H}\bs{\Sigma}_0^{-1}\mb{z} \right), \; \mathrm{and} \end{equation} \begin{equation}\label{h_t} h_0(t) =\frac{\Gamma(\lambda+N)}{\pi^{N}\Gamma({\lambda} )}\tonde{\frac{\lambda}{\eta}}^{\lambda}\tonde{\frac{\lambda}{\eta} + t}^{-(\lambda+N)}, \end{equation} where $\lambda \in (1,+\infty)$ is a shape parameter that controls the tail of the distribution, while $\eta$ is a scale parameter that accounts for the data statistical power $\sigma^2$. Specifically, under the assumption of finite second order moments, we have that $\sigma^2 = \lambda/(\eta(\lambda-1))$. In our simulation, we choose $\sigma^2=4$. \item The number of Monte Carlo runs is $10^6$. \end{itemize} It is worth underlining here that the particular choice of the complex $t$-distribution as nominal CES distribution for the observations does not represent a limitation, since, due to the semiparametric nature of the considered $R$-estimator, the findings obtained for $t$-distributed data hold true for any other CES distribution. \section{A robust semiparametric efficient $R$-estimator} The aim of this Section is to recall, from an algorithmic standpoint, the $R$-estimator introduced in \cite{Hallin_Annals_Stat_2} for the RES case and in \cite{Sem_eff_est_TSP} for the CES case. We refer the reader to \cite{Hallin_Annals_Stat_2} and \cite{Sem_eff_est_TSP} for a deep theoretical investigation of its derivation and its asymptotic properties. Moreover, for ease of exposition, in the following we assume a \textit{zero-mean} dataset. A procedure to handle non-zero mean data is discussed in \cite{Sem_eff_est_TSP}. For the interested reader, our Matlab and Python implementation of the algorithm can be found online at \cite{Code_R}. Let $\mb{z}_l \sim CES_N(\mb{0}, \mb{V}_{1,0}, h_0),\; l=1,\ldots,L$ be a set of i.i.d. CES distributed observations. The robust semiparametric efficient $R$-estimator of $\mb{V}_{1,0}$ is given by: \begin{equation} \label{R_est} \ovec{\widehat{\mb{V}}_{1,R}} = \ovec{\widehat{\mb{V}}_1^\star} + L^{-1/2}\widehat{\bs{\Upsilon}}^{-1}\widetilde{\bs{\Delta}}_{\widehat{\mb{V}}_1^\star}, \end{equation} where $\widehat{\mb{V}}_1^\star$ is a preliminary $\sqrt{L}$-consistent estimator of $\mb{V}_{1,0}$. The matrix $\widehat{\bs{\Upsilon}}$ and the vector $\widetilde{\bs{\Delta}}_{\widehat{\mb{V}}_1^\star}$ are defined respectively as \begin{equation} \widehat{\bs{\Upsilon}} \triangleq \hat{\alpha}\mb{L}_{\widehat{\mb{V}}_1^\star} \mb{L}_{\widehat{\mb{V}}_1^\star}^H, \end{equation} \begin{equation} \widetilde{\bs{\Delta}}_{\widehat{\mb{V}}_1^\star} \triangleq L^{-1/2}\mb{L}_{{\mb{V}}_1^\star}\sum_{l=1}^{L}K\tonde{\frac{r^\star_l}{L+1}} \mathrm{vec}(\hat{\mb{u}}^\star_l(\hat{\mb{u}}^\star_l)^H) \end{equation} and the scalar $\hat{\alpha}$ can be obtained as: \begin{equation}\label{com_alpha_hat} \hat{\alpha} = \nicefrac{\norm{\widetilde{\bs{\Delta}}_{\widehat{\mb{V}}_1^\star + L^{-1/2}\mb{H}^0} - \widetilde{\bs{\Delta}}_{\widehat{\mb{V}}_1^\star}}}{ \norm{ \mb{L}_{\widehat{\mb{V}}_1^\star} \mb{L}_{\widehat{\mb{V}}_1^\star}^H\ovec{\mb{H}^0}} }, \end{equation} where $\mb{H}^0$ is a \virg{small perturbation} Hermitian matrix such that $[\mb{H}^0]_{1,1}=0$.
Following \cite{Sem_eff_est_TSP}, we set $\mb{H}^0 = (\mb{G}+\mb{G}^H)/2$ where $[\mb{G}]_{i,j} \sim \mathcal{CN}(0,\upsilon^2)$, $[\mb{G}]_{1,1}=0$ and $\upsilon = 0.01$. The function $K:(0,1)\rightarrow \mathbb{R}^+$ is generally indicated as the \textit{score function} and belongs to the set $\mathcal{K}$ of continuous, square integrable functions that can be expressed as the difference of two monotone increasing functions. All the other terms involved in the definition of the $R$-estimator in \eqref{R_est} are summarized as follows \cite{Sem_eff_est_TSP}: \begin{itemize} \item $\hat{Q}^\star_l \triangleq \mb{z}_l^H[\widehat{\mb{V}}^\star_1]^{-1}\mb{z}_l$, \item $\hat{\mb{u}}^\star_l \triangleq (\hat{Q}^\star_l)^{-1/2}[\widehat{\mb{V}}^\star_1]^{-1/2}\mb{z}_l$, \item $r^\star_1,\ldots,r^\star_L$ are the ranks of the (continuous) real random variables $\hat{Q}^\star_1,\ldots,\hat{Q}^\star_L$, \item $\mb{L}_{\widehat{\mb{V}}^\star_1} \triangleq \mb{P} \tonde{[\widehat{\mb{V}}^\star_1]^{-T/2}\otimes[\widehat{\mb{V}}^\star_1]^{-1/2}} \Pi^{\perp}_{\cvec{\mb{I}_N}}$. \end{itemize} It can be noted that, for a practical implementation of the $R$-estimator in \eqref{R_est}, we just need to specify two terms: the preliminary estimator $\widehat{\mb{V}}_1^\star$ and the score function $K \in \mathcal{K}$. In the following, a discussion on the most suitable choice for these two terms is provided. \section{On the choice of the preliminary estimator $\widehat{\mb{V}}^\star_1$} This Section is devoted to the study of the impact of the preliminary estimator $\widehat{\mb{V}}^\star_1$ on the \virg{finite-sample} performance of the $R$-estimator in \eqref{R_est}. In fact, if, on the one hand, any preliminary $\sqrt{L}$-consistent estimator leads to an \textit{asymptotically} semiparametric efficient $R$-estimator, on the other hand, the choice of $\widehat{\mb{V}}^\star_1$ may have a significant impact on the \virg{finite-sample} performance of $\widehat{\mb{V}}_{1,R}$. Here, we analyze two preliminary estimators: the Sample Covariance Matrix (SCM) and Tyler's estimator. \subsection{The SCM as preliminary estimator} Let $\{\mb{z}_l\}_{l=1}^L \sim CES_N(\mb{0}, \mb{V}_{1,0}, h_0)$ be a set of i.i.d. CES distributed random vectors with unknown density generator $h_0$. The SCM preliminary estimator $\widehat{\mb{V}}^\star_{1,SCM}$ is given by: \begin{equation} \label{SCM_1} \widehat{\mb{V}}^\star_{1,SCM} = \frac{\hat{\bs{\Sigma}}_{SCM}}{[\hat{\bs{\Sigma}}_{SCM}]_{1,1}}, \quad \hat{\bs{\Sigma}}_{SCM} \triangleq \frac{1}{L} \sum\nolimits_{l=1}^{L} \mb{z}_l\mb{z}_l^H. \end{equation} Under the assumption of finite second order moments, $\widehat{\mb{V}}^\star_{1,SCM}$ is a $\sqrt{L}$-consistent estimator of the shape matrix $\mb{V}_{1,0}$ under any density generator, and consequently it can be used as preliminary estimator. The SCM is a very popular estimator of the covariance/shape matrix due to its low computational complexity, which makes it a suitable estimator in real-time applications. However, its main drawback is the fact that its performance rapidly degrades for non-Gaussian data. In Fig. \ref{fig:Fig1}, we report the MSE indices as functions of the number $L$ of observations for $\widehat{\mb{V}}^\star_{SCM}$ in \eqref{SCM_1} and for the $R$-estimator in \eqref{R_est} that exploits $\widehat{\mb{V}}^\star_{SCM}$ as preliminary estimator. Note that both estimators are re-normalized according to \eqref{re_norm}. Before discussing the score function used in this figure, we pause to summarize the full one-step construction in a short numerical sketch.
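The following NumPy sketch assembles the one-step estimator \eqref{R_est} from the quantities listed above, using the SCM \eqref{SCM_1} as preliminary estimator; the score $K$ is passed as a parameter. This is our own compact illustration, with our own variable names: the reference Matlab and Python implementations are available at \cite{Code_R}.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm
from scipy.stats import gamma, rankdata

def ovec(M):                      # vec(M) without its first entry
    return M.flatten(order='F')[1:]

def L_matrix(V):                  # L_V = P (V^{-T/2} kron V^{-1/2}) Pi
    N = V.shape[0]
    iVs = np.linalg.inv(sqrtm(V))
    vI = np.eye(N).flatten()
    Pi = np.eye(N * N) - np.outer(vI, vI) / N
    return (np.kron(iVs.T, iVs) @ Pi)[1:, :]

def Delta(V, Z, K):               # rank-based statistic at V
    L = Z.shape[0]
    Q = np.real(np.einsum('li,ij,lj->l', Z.conj(), np.linalg.inv(V), Z))
    U = (Z @ np.linalg.inv(sqrtm(V)).T) / np.sqrt(Q)[:, None]
    w = K(rankdata(Q) / (L + 1))
    S = sum(wl * np.outer(u, u.conj()).flatten(order='F')
            for wl, u in zip(w, U))
    return L_matrix(V) @ S / np.sqrt(L)

def r_estimator(Z, K, upsilon=0.01, seed=0):
    rng = np.random.default_rng(seed)
    L, N = Z.shape
    Scm = Z.T @ Z.conj() / L      # SCM; rows of Z are the observations
    V = Scm / Scm[0, 0]           # preliminary shape, [V]_{11} = 1
    LV = L_matrix(V)
    D = Delta(V, Z, K)
    G = (rng.standard_normal((N, N))
         + 1j * rng.standard_normal((N, N))) * upsilon / np.sqrt(2)
    G[0, 0] = 0.0
    H0 = (G + G.conj().T) / 2     # small Hermitian perturbation
    alpha = (np.linalg.norm(Delta(V + H0 / np.sqrt(L), Z, K) - D)
             / np.linalg.norm(LV @ LV.conj().T @ ovec(H0)))
    ov = ovec(V) + np.linalg.solve(alpha * (LV @ LV.conj().T),
                                   D) / np.sqrt(L)
    VR = np.concatenate(([1.0], ov)).reshape(N, N, order='F')
    return (VR + VR.conj().T) / 2 # Hermitize against round-off

# example: Z of shape (L, N); van der Waerden score for that N
# V_hat = r_estimator(Z, gamma(Z.shape[1]).ppf)
\end{verbatim}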
As score function, we used the \textit{van der Waerden} score\footnote{The choice of the score function will be discussed in the next Section.} \cite{Sem_eff_est_TSP}: \begin{equation}\label{K_CG} K_{vdW}(u) \triangleq \Phi_G^{-1}(u), \end{equation} where $\Phi_G^{-1}$ indicates the inverse function of the cdf of a Gamma-distributed random variable with parameters $(N,1)$. The resulting $R$-estimator will be indicated as $\widehat{\mb{V}}_{R,vdW}^{SCM}$. As we can see from Fig. \ref{fig:Fig1}, the linear \virg{one-step} correction term $L^{-1/2}\widehat{\bs{\Upsilon}}^{-1}\widetilde{\bs{\Delta}}_{\widehat{\mb{V}}_{1,SCM}^\star}$ can significantly improve the efficiency of the SCM at the price of a very small increase in computational load. Indeed, the linear \virg{one-step} correction can be evaluated in closed form, without the need of any fixed point iteration scheme required, for example, to implement an $M$-estimator. \subsection{Tyler's estimator as preliminary estimator} The result in Fig. \ref{fig:Fig1} has been obtained by setting the shape parameter $\lambda$ of the $t$-distributed data equal to 2. It is interesting to check the semiparametric efficiency of the $R$-estimator in \eqref{R_est} as a function of $\lambda$, i.e. as a function of the data \virg{heavy-tailedness}, for a given number $L$ of observations. Since, as said before, the SCM suffers in non-Gaussian scenarios, we may expect that the use of a robust $M$-estimator, e.g. Tyler's one, as preliminary estimator can lead to better finite-sample performance. Let us start by introducing the constrained Tyler estimator as a preliminary estimator. Tyler's estimator $\widehat{\mb{V}}_{Ty}$ can be obtained as the convergence point of the following fixed point iterative procedure: \begin{equation} \label{C_Tyler} \widehat{\mb{V}}^{(k+1)}_{Ty} = \frac{N}{L}\sum_{l=1}^{L}\frac{\mb{z}_l\mb{z}_l^H}{\mb{z}_l^H[\widehat{\mb{V}}^{(k)}_{Ty}]^{-1}\mb{z}_l}, \end{equation} where the starting point is, e.g., $\mb{V}^{(0)}_{Ty} = \mb{I}_N$. In order to obtain a proper preliminary estimator for the $R$-estimator in \eqref{R_est}, the usual constraint on the top-left element has to be imposed: $\widehat{\mb{V}}^{\star}_{1,Ty} = \nicefrac{\widehat{\mb{V}}_{Ty}}{[\widehat{\mb{V}}_{Ty}]_{1,1}}$. After the re-normalization in \eqref{re_norm}, in Fig. \ref{fig:Fig2} we report the MSE indices of the SCM and Tyler's preliminary estimators, $\widehat{\mb{V}}^\star_{SCM}$ and $\widehat{\mb{V}}^{\star}_{Ty}$, together with those of the corresponding $R$-estimators built upon them, i.e. $\widehat{\mb{V}}_{R,vdW}^{SCM}$ and $\widehat{\mb{V}}_{R,vdW}^{Ty}$. As before, the \textit{van der Waerden} score $K_{vdW}$ in \eqref{K_CG} is used. The number of observations exploited here is equal to $L=5N$, so we are in a finite-sample regime. As expected, $\widehat{\mb{V}}_{R,vdW}^{Ty}$ outperforms $\widehat{\mb{V}}_{R,vdW}^{SCM}$ in the presence of heavy-tailed data (small $\lambda$). This is due to the robustness property of Tyler's estimator. Clearly, the price to pay is the computational cost needed to implement the fixed point iterations required to obtain $\widehat{\mb{V}}_{Ty}$ in \eqref{C_Tyler}. Moreover, $\widehat{\mb{V}}_{R,vdW}^{Ty}$ is an (almost) semiparametric efficient estimator for every value of $\lambda$, even in the finite-sample regime. As can be noted from Fig. \ref{fig:Fig2}, the MSE index of $\widehat{\mb{V}}_{R,vdW}^{Ty}$ achieves the CSCRB for $\lambda >6$. Of course, in the asymptotic regime, i.e.
for $L \rightarrow \infty$, this semiparametric efficiency property can be achieved for smaller values of $\lambda$ as well. Another interesting point to note in Fig. \ref{fig:Fig2} is the fact that, while possessing the same distributional robustness as Tyler's estimator, the $R$-estimator $\widehat{\mb{V}}_{R,vdW}^{Ty}$ outperforms it for almost all values of $\lambda$. Remarkably, this augmented efficiency can be obtained at the negligible cost of evaluating the linear \virg{one-step} correction term $L^{-1/2}\widehat{\bs{\Upsilon}}^{-1}\widetilde{\bs{\Delta}}_{\widehat{\mb{V}}_{1,Ty}^\star}$. After having analyzed the impact of the choice of the preliminary estimator on the finite-sample performance of the $R$-estimator in \eqref{R_est}, in the next section we will focus on the selection of the score function $K \in \mathcal{K}$. \begin{figure}[h] \centering \includegraphics[height=5cm]{Preliminary_SCM.pdf} \caption{MSE indices and CSCRB vs $L$ ($\lambda = 2$).} \label{fig:Fig1} \end{figure} \begin{figure}[h] \centering \includegraphics[height=5cm]{Preliminary_lambda.pdf} \caption{MSE indices and CSCRB vs $\lambda$ ($L = 5N$).} \label{fig:Fig2} \end{figure} \section{On the choice of the score function $K$} In the context of rank statistics, the term \textit{score function} indicates a continuous scalar function $K:(0,1)\rightarrow \mathbb{R}^+$ satisfying the following two properties: \textit{i}) $K$ is square integrable and \textit{ii}) $K$ can be expressed as the difference of two monotone increasing functions \cite{Hallin_P_Annals}. A classical example of scores is the set of \textit{power functions} defined as $K_a(u)=N(a+1)u^a$, where $u\in (0,1)$ and $a \geq 0$ is a tuning parameter \cite{Hallin_PCA}. The celebrated \textit{Wilcoxon} ($a=1$) and \textit{Spearman} ($a=2$) scores belong to this set. Another way to build a score function is the one described in \cite{Hallin_P_Annals}, \cite{Hallin_Annals_Stat_2} and \cite{Sem_eff_est_TSP}, where a \virg{misspecified} approach \cite{SPM} is used. We refer to \cite{Sem_eff_est_TSP} for a theoretical description of this set of scores and for a discussion on how to build them. Here we limit ourselves to citing two examples: the \textit{van der Waerden} score already introduced in \eqref{K_CG} and the $t_{\nu}$-score given by: \begin{equation}\label{CK_t} K_{t_\nu}(u) = \frac{N(2N+\nu)F^{-1}_{2N,\nu}(u)}{\nu + 2NF^{-1}_{2N,\nu}(u)},\quad u \in (0,1), \end{equation} where $F_{2N,\nu}(u)$ stands for the Fisher cdf with $2N$ and $\nu \in (0,\infty)$ degrees of freedom. Note that, from the properties of the Fisher distribution, we have: $\lim_{\nu \rightarrow \infty} K_{t_\nu}(u) = K_{vdW}(u)$. In Fig. \ref{fig:Fig3}, after the usual re-normalization in \eqref{re_norm}, the MSE indices of four $R$-estimators exploiting as score functions the \textit{Wilcoxon} ($\widehat{\mb{V}}_{R,Wi}^{Ty}$), the \textit{Spearman} ($\widehat{\mb{V}}_{R,Sp}^{Ty}$), the $t_{\nu}$- ($\widehat{\mb{V}}_{R,\mathbb{C}t_{\nu}}^{Ty}$) and the \textit{van der Waerden} ($\widehat{\mb{V}}_{R,vdW}^{Ty}$) scores are reported and compared with the CSCRB. For the $t_{\nu}$-score, a tuning parameter $\nu=5$ has been chosen. Moreover, in all $R$-estimators, Tyler's estimator has been used as preliminary estimator. A visual inspection of Fig. \ref{fig:Fig3} tells us that, for $\lambda > 6$, the \textit{van der Waerden} score leads to the lowest MSE index among the considered scores.
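All the scores considered in this section are elementary to evaluate from standard quantile functions; before commenting further on Fig. \ref{fig:Fig3}, we give a small SciPy sketch of \eqref{K_CG}, \eqref{CK_t} and of the power scores (the helper names are ours):
\begin{verbatim}
from scipy.stats import f, gamma

def K_vdW(u, N):                 # van der Waerden score
    return gamma(N).ppf(u)

def K_t(u, N, nu):               # t_nu-score, via the Fisher quantile
    q = f(2 * N, nu).ppf(u)
    return N * (2 * N + nu) * q / (nu + 2 * N * q)

def K_power(u, N, a):            # power scores: a=1 Wilcoxon, a=2 Spearman
    return N * (a + 1) * u**a
\end{verbatim}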
Moreover, it can be noted that, for power scores such as the \textit{Wilcoxon} and \textit{Spearman} ones, the resulting MSE increases as the tuning parameter $a$ increases. However, this claim should be validated by further theoretical and numerical analyses. The $t_{\nu}$-score has the best performance for highly heavy-tailed data ($1<\lambda<5$), while for larger $\lambda$ it provides an MSE that lies between the (good) one of the \textit{van der Waerden} score and the (bad) one of the power scores. Again, the $t_{\nu}$-score depends on an additional tuning parameter $\nu$ that should be carefully chosen. To conclude, we can say that the \textit{van der Waerden} score is a suitable score function, since it leads to an (almost) semiparametric efficient $R$-estimator and does not depend on additional tuning parameters whose setting may be impractical in real-world applications. \begin{figure}[h] \centering \includegraphics[height=5cm]{t_dist_vs_score_functions2.pdf} \caption{MSE indices and CSCRB vs $\lambda$ ($L = 5N$).} \label{fig:Fig3} \end{figure} \section{Robustness with respect to outliers} This last section is dedicated to the analysis of the robustness to \textit{outliers} of the $R$-estimator in \eqref{R_est}. An outlier is a random vector that presents a different statistical characterization with respect to the main body of the observations. In robust statistics, two main tools to quantify the robustness to outliers of an estimator are its \textit{breakdown point} and its \textit{influence function} \cite[Ch. 11 and 12]{huber_book}. The evaluation of these two fundamental tools is left to future work. Anyway, in this Section we provide a numerical study of the robustness to \textit{outliers} of the proposed $R$-estimator by considering Tyler's estimator as a benchmark. More specifically, we compare the MSE indices of $\widehat{\mb{V}}_{R,vdW}^{Ty}$ and of $\widehat{\mb{V}}_{Ty}$ in two different scenarios: \begin{enumerate} \item The outliers are generated as random vectors uniformly distributed on the complex unit sphere, i.e. $\mb{u} \sim \mathcal{U}(\mathbb{C}S^{N-1})$, where $\mathbb{C}S^{N-1} \triangleq \{\mb{u}\in \mathbb{C}^N|\norm{\mb{u}}=1\}$. \item The data are generated according to Huber's $\varepsilon$-contamination model (see e.g. \cite[Ch. 4]{huber_book}). \end{enumerate} \subsection{Case 1: $\mathbb{C}S^{N-1}$-uniformly distributed outliers} In this scenario, we assume to have $L = L_p + L_o$ observations, where $L_p$ is the number of \virg{proper} observations while $L_o$ is the number of outliers. Specifically, let us assume to have a dataset \begin{equation} D_u \triangleq \graffe{\{\mb{z}_l\}_{l=1}^{L_p},\{\mb{u}_l\}_{l=1}^{L_o}}, \end{equation} where $\mb{z}_l \sim p_{Z}(\mb{z}|\bs{\Sigma}_0, h_0)$ and $h_0$ is the density generator of a $t$-distribution given in \eqref{h_t}, while $\mb{u}_l \sim \mathcal{U}(\mathbb{C}S^{N-1})$. In our numerical analysis, we use the dataset $D_u$ to estimate the shape matrix $\mb{V}_0 = N\bs{\Sigma}_0/\trace{\bs{\Sigma}_0}$ by means of the $R$-estimator in \eqref{R_est}, $\widehat{\mb{V}}_{R,vdW}^{Ty}$, and of Tyler's one in \eqref{C_Tyler}, $\widehat{\mb{V}}_{Ty}$, both re-normalized according to \eqref{re_norm}. Fig. \ref{fig:Fig4} shows the MSE indices of the two estimators as functions of the percentage of outliers. As we can see, the proposed $R$-estimator presents slightly better performance than Tyler's one.
Anyway, the important fact to be noted here is that the MSE of $\widehat{\mb{V}}_{R,vdW}^{Ty}$ remains stable for small percentages of outliers. \begin{figure}[h] \centering \includegraphics[height=5cm]{Robusteness_vs_outliers.pdf} \caption{MSE indices vs \% of outliers ($L = 100N$, $\lambda=2$).} \label{fig:Fig4} \end{figure} \subsection{Case 2: $\varepsilon$-contamination model} The $\varepsilon$-contamination model was first introduced by Huber in the context of robust hypothesis testing \cite{huber_LR}. Since then, it has been widely adopted as a suitable device to assess the robustness of both testing and estimation procedures. Let $P_{Z}(\mb{z}|\bs{\Sigma}_0, h_0)$ be the nominal CES data distribution parameterized by the scatter matrix $\bs{\Sigma}_0$ and the density generator $h_0$, and let $Q_Z(\mb{z}|\bs{\Xi}_0, l_0)$ be a \virg{contaminating} CES distribution whose scatter matrix $\bs{\Xi}_0$ and density generator $l_0$ may be different from the nominal ones. Then, the $\varepsilon$-contamination model can be expressed as the following set of distributions: \begin{multline}\label{cont_model} \mathcal{F} \triangleq \left\lbrace F_Z|F_Z(\mb{z})=(1-\varepsilon) P_{Z}(\mb{z}|\bs{\Sigma}_0, h_0)+\right. \\ \left. + \varepsilon Q_Z(\mb{z}|\bs{\Xi}_0, l_0), \varepsilon \in [0,1] \right\rbrace. \end{multline} Suppose now that we have a dataset $D_c$ of $L$ i.i.d. observations sampled from a pdf $f_Z(\mb{z})$ whose related distribution $F_Z(\mb{z})$ belongs to $\mathcal{F}$ in \eqref{cont_model}, i.e.: \begin{equation}\label{Z_cont} D_c=\{\mb{z}_l\}_{l=1}^L, \; \mb{z}_l \sim f_Z(\mb{z}). \end{equation} This implies that, with probability $(1-\varepsilon)$, the $l^\mathrm{th}$ observation vector $\mb{z}_l$ has distribution $P_{Z}(\mb{z}|\bs{\Sigma}_0, h_0)$ (i.e. it is a proper observation) while, with probability $\varepsilon$, $\mb{z}_l$ has distribution $Q_Z(\mb{z}|\bs{\Xi}_0, l_0)$ (i.e. it is an outlier). In our simulations, we assume as nominal distribution $P_{Z}(\mb{z}|\bs{\Sigma}_0, h_0)$ the complex $t$-distribution whose relevant pdf is given in \eqref{true_CES}. As contaminating CES distribution $Q_Z(\mb{z}|\bs{\Xi}_0, l_0)$ we adopt a Generalized Gaussian (GG) distribution whose density generator can be expressed as \cite{Esa}: \begin{equation}\label{l_t} l_0(t) =\frac{s\Gamma(N)b^{-N/s}}{\pi^{N}\Gamma(N/s )}\exp\tonde{-t^s/b}. \end{equation} As parameters characterizing the GG distribution, we choose $\bs{\Xi}_0 \triangleq \sigma^2\mb{I}_N$ as scatter matrix and $s=0.1$ as shape parameter, while $b$ is set in order to provide for the outliers the same statistical power $\sigma^2$ as that of the $t$-distributed data. As for Case 1 discussed previously, we estimate the shape matrix $\mb{V}_0 = N\bs{\Sigma}_0/\trace{\bs{\Sigma}_0}$ from the contaminated dataset $D_c$ in \eqref{Z_cont} by means of $\widehat{\mb{V}}_{R,vdW}^{Ty}$ and of $\widehat{\mb{V}}_{Ty}$, both re-normalized according to \eqref{re_norm}. The resulting MSE indices are reported in Fig. \ref{fig:Fig5} as functions of the $\varepsilon$-contamination parameter. It can be noted that, even if Tyler's estimator has slightly better performance than the $R$-estimator, the MSE of $\widehat{\mb{V}}_{R,vdW}^{Ty}$ does not increase dramatically as $\varepsilon$ increases and remains close to that of $\widehat{\mb{V}}_{Ty}$.
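A contaminated dataset as in \eqref{Z_cont} is straightforward to simulate. The sketch below relies on two standard stochastic representations that we state here as assumptions (they are not derived in the text): the complex $t$-distribution \eqref{true_CES}-\eqref{h_t} is generated as a complex Gaussian scale mixture with a Gamma-distributed mixing variable, and the GG distribution with generator \eqref{l_t} via its modulus distribution. The parameter $b$ is left explicit, while in our simulations it is fixed by the power constraint discussed above.
\begin{verbatim}
import numpy as np

def sample_t(L, Sigma, lam, eta, rng):
    """Complex t samples: z = x / sqrt(w), x ~ CN(0, Sigma),
    w ~ Gamma(shape=lam, scale=eta/lam)."""
    N = Sigma.shape[0]
    C = np.linalg.cholesky(Sigma)
    x = (rng.standard_normal((L, N))
         + 1j * rng.standard_normal((L, N))) / np.sqrt(2)
    w = rng.gamma(lam, eta / lam, size=L)
    return (x @ C.T) / np.sqrt(w)[:, None]

def sample_gg(L, N, s, b, sigma2, rng):
    """GG samples with Xi_0 = sigma2*I: z = sqrt(sigma2*Q) * u, with
    u uniform on the complex unit sphere, Q = (b*Gamma(N/s,1))^(1/s)."""
    Q = (b * rng.gamma(N / s, 1.0, size=L)) ** (1.0 / s)
    x = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))
    u = x / np.linalg.norm(x, axis=1, keepdims=True)
    return np.sqrt(sigma2 * Q)[:, None] * u

def sample_contaminated(L, eps, Sigma, lam, eta, s, b, rng):
    """Dataset D_c: nominal t samples, replaced by GG outliers w.p. eps."""
    sigma2 = lam / (eta * (lam - 1))    # nominal statistical power
    Z = sample_t(L, Sigma, lam, eta, rng)
    mask = rng.random(L) < eps
    Z[mask] = sample_gg(int(mask.sum()), Sigma.shape[0], s, b, sigma2, rng)
    return Z
\end{verbatim}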
\begin{figure}[h] \centering \includegraphics[height=5cm]{Robusteness_vs_contamination.pdf} \caption{MSE indices vs $\varepsilon$ ($L = 100N$, $\lambda = 2$).} \label{fig:Fig5} \end{figure} \section{Concluding remarks} In the first part of this paper, the \virg{finite-sample} performance of the proposed $R$-estimator has been analyzed in different configurations and compared with the relevant CSCRB. The proposed numerical investigation showed that the setting involving Tyler's estimator as preliminary estimator and the \textit{van der Waerden} score as score function leads to the lowest MSE index among all considered configurations for almost all values of $\lambda$. Furthermore, the robustness to outliers of the $R$-estimator has been assessed by using Tyler's estimator as a benchmark. The proposed numerical study revealed that the $R$-estimator is approximately as robust to outliers as Tyler's one. Needless to say, the numerical analysis provided in this paper is just a first step; a theoretical characterization of the robustness in terms of breakdown point and influence function is still necessary and is left to future work. \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Acknowledgement} We thank P.~Heinzner for his ongoing support over the last years. We further thank A.~Abbondandolo and M.~Kalus for helpful remarks and comments, and L.~Ryvkin for directing our attention to the question. \bibliographystyle{alpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Eugene Wigner introduced random matrix models about fifty years ago into nuclear physics \cite{wigner}. The scope of applications has increased over the years \cite{AT,NT}, including fields such as molecular and atomic physics, mesoscopics and field theory. More recently, random matrix theory has started to be used in quantum information theory \cite{GS-JQO,GPS-NJP-fidelity,FFS,PS-PRA, GPS-NJP}. For an introduction to such applications see \cite{PS-AIP}. There the concept of individual qubits and their interactions becomes important. This implies that we enter the field of two-body random ensembles (TBRE) \cite{BW-review,FMS-PRE}, i.e. ensembles of Hamiltonians of $n$-body systems interacting by two-body forces. While such ensembles have received considerable attention, this attention was first focussed on fermions and later also included bosons. Yet in quantum information theory the qubits are taken to be distinguishable, and indeed the same holds for spintronics. Interest in both fields has sharply increased recently \cite{NC,spintronix}. It is thus very pertinent to formulate and investigate TBRE's for distinguishable qubits. As random matrix ensembles are mainly determined by their symmetry properties, this ensemble will be very different from other TBRE's. In particular, as the particles are distinguishable, their interaction can vary from particle pair to particle pair and can indeed be randomly distributed, thus introducing an entirely new aspect. This has the consequence that the topology according to which spins or qubits are distributed or interact will be important. Thus chains, trees and crystals of particles with nearest, second-nearest and up to $k$th-order interactions can be represented. As mentioned above, random matrix ensembles are usually basically defined by the invariance group of their measure and, if that is not enough, some minimal information conditions \cite{balian,PS-AIP} or an independence condition \cite{Kramers}. Note that we deal with a symmetry of the ensemble, rather than with a symmetry of individual systems. The two concepts are to some degree complementary, and the former has also been called {\em structural invariance}. We propose an adequate definition for such ensembles in a very general framework in terms of independent Gaussian distributed variables. We then give an alternate representation in terms of the invariance group and variables that determine the orbits of the Hamiltonian on the ensemble under the action of the group. In order to show the relevance of the new ensemble we address the simplest possible topology, namely the chain with nearest-neighbour interactions. For this system we focus on the ensemble averaged structure of the ground state and demonstrate the existence of an unusual quantum phase transition \cite{Sachdev}, which is triggered by breaking of {\em time-reversal invariance} (TRI). Entanglement, a key resource of quantum many-body systems in terms of quantum information, is to a large extent related to quantum correlations, localization properties and quantum chaos. Entanglement has also been used as a property, alternative to long-range order in spatial correlation functions, to describe systems undergoing a quantum phase transition \cite{Osterloh:02}. In one-dimensional systems such as quantum spin chains, it was shown \cite{Kitaev} that the entanglement entropy of the ground state typically saturates or diverges logarithmically with size when approaching the thermodynamic limit.
Furthermore, it has been shown that logarithmic divergence implies quantum criticality. Interesting results emerge when a spatially homogeneous spin model is replaced by its disordered counterpart, where the spin interactions are taken at random. In this case there is often no physical justification why random interactions should still obey specific restricted forms such as Ising or Heisenberg interactions. In this context, we argue, it is more natural to use two-spin random ensembles (TSRE) for distinguishable particles, specifically choosing quantum spins $1/2$, though these ensembles can readily be generalized to arbitrary spin. By construction these ensembles, as given in section \ref{sec:def}, are invariant with respect to arbitrary local rotations, which we may view as gauge transformations. Another physical motivation for the definition of such ensembles is the coupling among arbitrary and perhaps mutually independent two-level quantum systems, which may come from completely different physical contexts such as two-level atoms, Josephson junctions and photons. In section \ref{sec:results} we concentrate on one-dimensional systems or spin chains and present results of numerical calculations, mainly based on the density matrix renormalization group (DMRG) \cite{White}, in which we investigate entanglement and correlation properties of the ground state, averaged over an ensemble, and the average spectral gap to the first excited state as well as its fluctuations. If we include the interaction with an external random magnetic field, and hence TRI is broken, we find fast decay of correlations, saturation of the entanglement entropy, and power-law decay $g\approx N^{-0.4}$ of the spectral gap $g$ with the system size $N$, while its distribution displays Wigner-type level repulsion. When the strength of the external field goes to zero, and time-reversal invariance is restored, we find long-range order, logarithmically divergent entanglement entropy, and {\em exponential} decay of the spectral gap, while the level repulsion disappears. We argue that this quantum phase transition is non-conventional from the point of view of established models, since in what we shall call the non-critical case we still find slow power-law closing of the spectral gap. \section{The embedded ensemble of spin Hamiltonians with random two-body interactions} \label{sec:def} In this section we define the two-spin random ensembles of Hamiltonians for systems with $N$ distinguishable spins or qubits with at most two-body interactions and describe their basic (invariance) properties. If we do not allow all spins to interact, we have to define which ones do. The simplest case will be a chain with nearest-neighbour interactions, but in general we need a graph, whose vertices correspond to spins and whose edges correspond to two-body interactions. We proceed to formalize this. Let ${\cal G}=({\cal V},{\cal E})$ be an {\em undirected graph} with a finite set of $N$ {\em vertices} ${\cal V}$ and a set of $M$ {\em edges} ${\cal E}\subset {\cal V}\times{\cal V}$. In addition, let $\lambda : {\cal V}\to \mathbb{R}^+$ and $\mu : {\cal E}\to \mathbb{R}^+$ be two non-negative functions defined on the sets of vertices and edges, respectively.
To such a graph we assign a $2^N$ dimensional Hilbert space of $N$ spins or qubits, placed at its vertices ${\cal H}_{\cal G} = \otimes_{j\in{\cal V}} \mathbb{C}^2 \equiv \mathbb{C}^{2^N}$, and a set of $N$ Pauli operators $\sigma^\alpha_j : {\cal H}_{\cal G}\to {\cal H}_{\cal G}, \alpha\in\{1,2,3\},j\in{\cal V}$ satisfying ${\cal SO}(3)$ commutation relations $[\sigma^\alpha_j,\sigma^\beta_k]= {\rm i} \varepsilon_{\alpha\beta\gamma}\sigma^\gamma_j \delta_{j,k}$. We also use the notation $\vec{\sigma}_j = (\sigma^1_j,\sigma^2_j,\sigma^3_j)$. Let $A^{(j,k)} \in \mathbb{R}^{3\times 3}$, $(j,k)\in {\cal E}$, be a set of $M$ {\em random} real $3\times 3$ matrices, and $\vec{b}^{(j)}\in\mathbb{R}^3$, $j \in {\cal V}$, be a set of $N$ {\em random} $3$ dimensional real vectors. The TSRE then consists of the {\em random} Hamiltonians \begin{equation} H = \sum_{(j,k)\in{\cal E}} \mu(j,k)\, \vec{\sigma}_j \cdot A^{(j,k)} \vec{\sigma}_k + \sum_{j\in{\cal V}}\lambda(j)\, \vec{b}^{(j)} \cdot\vec{\sigma}_j. \label{eq:H} \end{equation} The above defined functions on the edges and vertices of the graph are used to determine the average strength of the corresponding terms in the Hamiltonian. The distribution of random two-body interaction matrices (for short also {\em bond matrices}) $A^{(j,k)}$ and the random external field vectors $\vec{b}^{(j)}$ shall be uniquely determined by requiring the following two conditions: {\em maximum local invariance} and {\em maximum independence} expressed formally as: \begin{enumerate} \item An ensemble of Hamiltonians (\ref{eq:H}) should be invariant with respect to an arbitrary {\em local} ${\cal SO}(3)$ transformation, namely \begin{equation} \vec{\sigma}'_j = O_j \vec{\sigma}_j, \label{eq:can} \end{equation} where $O_j\in {\cal SO}(3)_j$, $ j\in {\cal V}$, meaning that the choice of local coordinate system is arbitrary for each spin/qubit. Obviously, (\ref{eq:can}) preserves the canonical commutation relations for the Pauli operators. Then it follows immediately that the {\em joint probability distributions} of $\{ A^{(j,k)}, \vec{b}^{(j)}\}$ should be invariant with respect to transformations \begin{equation} A^{(j,k)'} = O A^{(j,k)} O', \qquad \vec{b}^{(j)'} = O \vec{b}^{(j)}, \end{equation} where $O,O'$ are arbitrary independent ${\cal SO}(3)$ rotations for each $j,k$. \item The matrix elements of the tensors $A^{(j,k)}_{\alpha,\beta}$ and of the vectors $\vec{b}^{(j)}_{\alpha}$ should be {\em independent} random variables. \end{enumerate} Following arguments similar to those presented in \cite{Kramers} it is straightforward to show that, in order to satisfy conditions (i) and (ii) above for pre-determined but general strengths of bonds $\mu(j,k)$ and external fields $\lambda(j)$, $A^{(j,k)}_{\alpha,\beta}$ and $\vec{b}^{(j)}_{\alpha}$ should be {\em Gaussian independent random variables} of zero mean and equal variance, which are uniquely specified in terms of the correlators \begin{eqnarray*} \ave{A^{(j,k)}_{\alpha, \beta} A^{(j',k')}_{\alpha',\beta'}} &=& \delta_{jj'}\delta_{kk'}\delta_{\alpha\alpha'}\delta_{\beta\beta'},\\ \ave{b^{(j)}_\alpha b^{(j')}_{\alpha'}} &=& \delta_{jj'}\delta_{\alpha\alpha'},\\ \ave{A^{(j,k)}_{\alpha,\beta} b^{(j')}_{\alpha'}} &=& 0, \end{eqnarray*} where $\ave{\bullet}$ denotes an ensemble average. We abbreviate the ensemble defined in this way by ${\rm TSRE}({\cal G},\mu,\lambda)$ noting that it depends on the graph and the strength functions $\mu$ and $\lambda$. Let us now describe some other elementary properties of TSRE. 
We have seen that each member $H$ of ${\rm TSRE}({\cal G},\mu,\lambda)$ can be parametrized (\ref{eq:H}) by $9 M + 3 N$ independent random parameters. However, we know from the classical ensembles that a parametrization in terms of the structural invariance group, i.e. the invariance group of the ensemble, and the remaining parameters is very useful. Similarly, in the present case, for an arbitrary set of local ${\cal SO}(3)$ rotations $O_j$, the transformation of the parameters \begin{equation} A^{(j,k)'} = O_j^T A^{(j,k)} O_k, \qquad \vec{b}^{(j)'} = O^T_j \vec{b}^{(j)}, \label{eq:gauge} \end{equation} preserves the spectrum and all entanglement properties of $H$. In fact, the transformation (\ref{eq:gauge}) can be considered as {\em a gauge transformation} since, composed with the local canonical transformation (\ref{eq:can}), it preserves the Hamiltonian $H$ (\ref{eq:H}) exactly. Two Hamiltonians, specified by $\{A^{(j,k)},b^{(j)}\}$ and $\{A^{(j,k)'},b^{(j)'}\}$, can thus be considered equivalent, and the gauge transformation (\ref{eq:gauge}) defines a natural equivalence relation in ${\rm TSRE}({\cal G},\mu,\lambda)$. Therefore, it may be of interest to consider the simplest parametrization of the set of equivalence classes, or in other words the orbits of a Hamiltonian on the ensemble under the structural invariance transformations. We thus ask: what is a general {\em canonical form} to which each element $H$ can be brought by gauge transformations (\ref{eq:gauge}), and how can it be parametrized? Let the integer $K$ denote the number of such parameters. The equivalent question for the classical ensembles leads to the eigenvalues as canonical parameters, since there the structural invariance group is much bigger. Let us first consider the simplest connected graph, namely an open one-dimensional (1D) chain of $N$ vertices $\{ 1,\ldots N\}$ and $M=N-1$ bonds. In this case it turns out that the matrices $A^{(j,j+1)}$ can be simultaneously {\em symmetrized}, namely all $A^{(j,j+1)'}=[A^{(j,j+1)'}]^T$, by choosing the following gauge transformation \begin{equation} O_{j+1} = R_j O_j,\quad \textrm{where}\quad R_j:=V^{(j,j+1)} [U^{(j,j+1)}]^T \end{equation} and where \begin{equation} A^{(j,j+1)}=: U^{(j,j+1)} D^{(j,j+1)} [V^{(j,j+1)}]^T, \label{eq:SVD} \end{equation} is a standard canonical singular value decomposition (SVD) of the original bond matrix $A^{(j,j+1)}$, with $U^{(j,j+1)},V^{(j,j+1)}\in {\cal SO}(3)$ and $D^{(j,j+1)}$ diagonal matrices of singular values. Since the initial transformation $O_1$ is still free, we can choose it, $O_1:= U^{(1,2)}$, such that the first bond matrix is even {\em diagonalized}, $A^{(1,2)'} = D^{(1,2)}$. This symmetrization is unique provided that the singular values of all SVDs (\ref{eq:SVD}) are non-degenerate, which is the case for a generic member $H$. Thus the number of parameters specifying the bond matrices is $3+6(M-1) = 6M-3$, and in addition to the $3N$ external field parameters this gives $K=6M-3+3N$ independent parameters. We recover the original set of parameters if we add the $3N$ parameters of the group of local rotations. Second, we consider the case of a ring graph with $N$ vertices and $M=N$ bonds, which is obtained from the previous case by imposing the periodic boundary condition $N+1\equiv 1$. We see that a general $H$ as given in (\ref{eq:H}) can now be symmetrized only if the additional topological condition $R:=R_N \cdots R_2 R_1 = \mathbbm{1}$ is satisfied.
Now all but one bond matrix can be symmetrized; for example, the last one may in addition be multiplied by a topological rotation, $A^{(N,1)'} = A^{(N,1)'}_{\rm symmetric} R$, leading to three additional parameters. Therefore in the case of a ring graph we have $K=6M+3N$ independent parameters; again adding the $3N=3M$ parameters of the local rotations we obtain the full set of parameters. We can now consider the case of a general (connected) graph. From the previous examples it is evident that the only crucial additional parameter is the number $L$ of {\em primitive} cycles, i.e. such cycles which cannot be decomposed into other primitive cycles. It is clear that each primitive cycle adds $3$ additional topological parameters (or one ${\cal SO}(3)$ topological $R$ matrix) to the $6M-3+3N$ parameters which we would have for the case of a {\em tree} graph. Therefore we have \begin{equation} K = 6M + 3N + 3L - 3. \end{equation} Counting the cycle contributions and taking the primitivity criterion into account, we again obtain the total number of parameters by adding those of the local rotations. The above considerations hold for the case of general bond and vertex strength functions $\mu$ and $\lambda$. If, however, these functions are degenerate, or even constant, i.e. the average interaction strength and field strength do not depend on the edges/vertices of the graph, then the structural invariance group of the TSRE may be even larger. In the latter case this group is obtained as a semi-direct product of a {\em discrete symmetry group} of the graph ${\cal G}$ and the gauge group of local rotations, the latter being the normal subgroup. We hope that the considerable invariance properties of the TSRE will prove useful in a future analytical treatment of its properties. \section{Properties of ground states of the TSRE on a 1D chain} \label{sec:results} In this section we shall only consider the simplest case of a TSRE on a 1D chain of $N$ vertices. In addition, we consider the most symmetric case of constant strength functions, say $\mu(j,j+1) \equiv 1$ and $\lambda(j)\equiv \lambda={\rm const}$. Such an ensemble of random spin chain Hamiltonians \begin{equation} H = \sum_{j=1}^{N-1} \vec{\sigma}_j \cdot A^{(j,j+1)} \vec{\sigma}_{j+1} + \lambda \sum_{j=1}^N \vec{b}^{(j)} \cdot\vec{\sigma}_j \label{eq:H1} \end{equation} shall be designated as ${\rm TSRE}(N,\lambda)$, where we explicitly assume open boundary conditions; we shall, however, also consider a ring graph with periodic boundary conditions, in which case we shall stress this separately. In particular we shall be interested in the zero-temperature (ground state) properties of ${\rm TSRE}(N,\lambda)$. We note that due to the large co-dimension of the bond strength space the standard perturbative renormalization group of decimating the strongest bonds \cite{Fisher} would not work, and we have at this point to rely on brute-force numerical investigation. Still, it turns out that most zero-temperature properties of ${\rm TSRE}(N,\lambda)$ can be efficiently simulated using White's density matrix renormalization group (DMRG) finite-size algorithm \cite{White}, with which spin chains of sizes up to $N=80$ can at present be reached.
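While DMRG is required to reach sizes of order $N=80$, for small $N$ a member of ${\rm TSRE}(N,\lambda)$ in (\ref{eq:H1}) can be drawn and diagonalized exactly. The following NumPy sketch is our own illustration (not the DMRG code used for the results below); it assumes spin operators $\vec{\sigma}/2$ built from the Pauli matrices, consistent with the commutation relations of section \ref{sec:def}, and returns the spectral gap and the half-chain entanglement entropy of the ground state:
\begin{verbatim}
import numpy as np

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
spin = [p / 2 for p in pauli]        # [s^a, s^b] = i eps_{abc} s^c

def site_op(op, j, N):
    """Embed a single-site operator at site j of an N-site chain."""
    out = np.eye(1, dtype=complex)
    for k in range(N):
        out = np.kron(out, op if k == j else np.eye(2))
    return out

def tsre_chain(N, lam, rng):
    """One member of TSRE(N, lambda) with open boundaries."""
    S = [[site_op(s, j, N) for s in spin] for j in range(N)]
    H = np.zeros((2**N, 2**N), dtype=complex)
    for j in range(N - 1):
        A = rng.standard_normal((3, 3))     # i.i.d. N(0,1) bond matrix
        for a in range(3):
            for b in range(3):
                H += A[a, b] * S[j][a] @ S[j + 1][b]
    for j in range(N):
        b = rng.standard_normal(3)          # random local field
        for a in range(3):
            H += lam * b[a] * S[j][a]
    return H

def gap_and_entropy(H, N):
    """Spectral gap and half-chain entanglement entropy (base 2)."""
    w, v = np.linalg.eigh(H)
    psi = v[:, 0].reshape(2**(N // 2), -1)  # Schmidt decomposition
    p = np.linalg.svd(psi, compute_uv=False)**2
    p = p[p > 1e-12]
    return w[1] - w[0], -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
g, S_half = gap_and_entropy(tsre_chain(8, 1.0, rng), 8)
\end{verbatim}
Averaging such draws over many realizations yields estimates of the average gap and entanglement entropy for small $N$; the data shown below were obtained with DMRG for larger chains.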
One should not forget that all numerical estimates of the ensemble average or expectation value of some physical quantity $A$, which will be designated as $\ave{A}$, require averaging over many, say ${\cal N}_{\rm r}$, realizations from ${\rm TSRE}(N,\lambda)$, such that the statistical error, estimated as $\sigma_A \sim\sqrt{(\ave{A^2}-\ave{A}^2)/{\cal N}_{\rm r}}$, is sufficiently small. Due to the lack of translational invariance the implementation of the DMRG is non-trivial and was only done for a chain with open boundaries. We note that for $\lambda=0$, any $H$ as defined in (\ref{eq:H1}), or even more generally in (\ref{eq:H}), commutes with the following anti-unitary {\em time-reversal} operation \begin{equation} \hat{T} : \vec{\sigma}_j \to -\vec{\sigma}_j,\qquad H\vert_{\lambda=0}\, \hat{T} = \hat{T}\, H\vert_{\lambda=0}, \end{equation} so, for {\em odd} $N$, all eigenvalues of $H$ have to be doubly degenerate (Kramers degeneracy \cite{Kramers}). However, as this represents more a technical than a conceptual problem for the ground state (or ground plane) properties of ${\rm TSRE}(N,\lambda)$, we shall in the following restrict ourselves to the case of {\em even} $N$. \subsection{Distribution of the spectral gap} Let $\ket{0}$ and $\ket{1}$ represent the ground state and the first excited state of $H$, with eigenenergies $E_0$ and $E_1$, respectively. It is well known that the crucial quantity which determines the rate of relaxation of zero-temperature quantum dynamics is the spectral gap $g = E_1 - E_0$. \begin{figure}[h!] \center{\includegraphics[width=0.8\textwidth]{pix/fig1.eps}} \caption{ Distribution of the normalized gap ${\tilde g}=g/\ave{g}$ for a few choices of even $N$. In the non-$\hat{T}$-invariant case agreement with the GUE level spacing distribution is obtained, whereas in the $\hat{T}$-invariant case, $\lambda=0$, the level repulsion gradually vanishes in the thermodynamic limit. } \label{fig:1} \end{figure} To clarify the difference with respect to TRI, we plot the normalized gap distribution $dP/d{\tilde g}$, where ${\tilde g} = g/\ave{g}$, for both cases, together with the theoretical level spacing distributions for the Gaussian unitary ensemble (GUE) of random matrices \cite{mehta} and for an uncorrelated Poissonian spectrum (Fig.~\ref{fig:1}). For the non-TRI case we choose $\lambda=1$ and observe, to our accuracy, good agreement with the GUE case for two chain sizes $N=10$ and $N=16$. Our results suggest that the GUE-like gap distribution, exhibiting level repulsion, also holds in the thermodynamic limit $N\to\infty$; computations for odd $N$ give the same results. In the case of $\lambda=0$ the level repulsion between $\ket{0}$ and $\ket{1}$ gradually vanishes as we approach the thermodynamic limit, although no conclusive statement can be made about the limiting distribution. For odd $N$, the ground state is degenerate for $\lambda=0$ and the present analysis does not apply. \subsection{Size scaling of the spectral gap} \begin{figure}[h!] \center{\includegraphics[width=0.8\linewidth]{pix/fig3.eps}} \caption{ Spectral gap scaling with the system size for different values of the parameter $\lambda$ (see legend). Note the asymptotic scaling $\propto N^{-0.39}$, except for $\lambda=0$, where faster than power-law decay of the gap is observed, perhaps asymptotically exponential (see inset for a semi-log scale). } \label{fig:2} \end{figure} Being interested in the thermodynamic limit, it is an important issue to understand how $g(N)$ scales with $N$.
The theory of quantum criticality \cite{Sachdev} states that $g$ remains finite in the thermodynamic limit for {\em non-critical} systems, and converges rapidly to zero as $N\to\infty$ for {\em critical} systems. In Fig.~\ref{fig:2} we plot the ensemble-averaged spectral gap $\ave{g}$ versus $N$ for different values of the field strength $\lambda$. We find a clear indication that in the non-TRI case the spectral gap exhibits universal asymptotic power-law scaling \begin{equation} \ave{g} \sim N^{-\eta},\quad \textrm{with} \quad \eta \approx 0.39 \pm 0.01, \label{eq:powerlaw} \end{equation} whereas in the TRI case, $\lambda=0$, the asymptotic decay of the gap is {\em faster than a power law}, perhaps exponential, $\ave{g} \sim \exp(-\xi N)$, with $\xi \approx 0.07\pm 0.02$. According to the standard theory \cite{Sachdev} both cases, $\lambda\neq 0$ and $\lambda = 0$, should be classified as quantum critical; however, as we shall see later, the case of a slowly (power-law) decaying average gap (\ref{eq:powerlaw}) has many features of non-critical systems, such as a finite correlation length and a finite (saturated) entanglement entropy. Therefore we shall, at least for the purposes of the present paper, call the case $\lambda \neq 0$ {\em random non-critical} (RNC) and the case $\lambda = 0$ {\em random critical} (RC). We note that the results for odd and even $N$ are in agreement in the RNC case. Also, results for the case with periodic boundary conditions up to $N=20$ show no significant difference from the results in Fig.~\ref{fig:2} for any $\lambda$. \subsection{Size scaling of the ground state entanglement entropy} \begin{figure}[h!] \center{\includegraphics[width=0.8\linewidth]{pix/fig2.eps}} \caption{ Entanglement entropy versus chain length $N$ for different values of the parameter $\lambda$ (see legend). The logarithmic divergence with an estimated asymptotic slope $S \sim 0.17 \log_2 N$ for the critical case $\lambda=0$ is indicated with a dashed line.} \label{fig:3} \end{figure} The second characteristic of quantum phase transitions that we choose to investigate in ${\rm TSRE}(N,\lambda)$ is the entanglement entropy of a symmetric bi-partition of the chain \begin{equation} S(N,\lambda) = -\ave{{{\rm tr}\,}_{\{N/2+1,\ldots,N\}}\left[ ({{\rm tr}\,}_{\{1,\ldots,N/2\}} \ket{0}\bra{0}) \log_2 ({{\rm tr}\,}_{\{1,\ldots,N/2\}} \ket{0}\bra{0})\right]}, \end{equation} which measures the entanglement in the ground state between the two equal halves of the chain. It has been suggested for non-random systems \cite{Kitaev} that in critical cases $S \propto \log_2 N$, whereas in non-critical cases $S$ saturates in the thermodynamic limit. Indeed, as shown in Fig.~\ref{fig:3}, we find for ${\rm TSRE}(N,\lambda)$ that $S$ saturates to a finite constant $S_\infty(\lambda) = \lim_{N\to\infty}S(N,\lambda)$ in the RNC case $\lambda\neq 0$, while in the RC case it grows logarithmically, \begin{equation} S(N,\lambda=0) \approx c \log_2 N + c', \quad {\rm with}\quad c\approx 0.17\pm 0.02. \end{equation} We also note an interesting even-odd-$N/2$ effect which slowly diminishes as we approach the thermodynamic limit. As pointed out in Ref.~\cite{laflorenciePRL96}, such an effect is induced by open boundary conditions. For periodic boundary conditions the entanglement entropy in the RNC case is twice as large as in ${\rm TSRE}(N,\lambda)$ with open boundaries. This further confirms the conjecture that only short-range correlations around the boundary between the two halves contribute to the entanglement.
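In practice, $S$ is most easily evaluated from the Schmidt decomposition of the ground-state vector. A minimal Python sketch (assuming \texttt{numpy}, and a state vector whose first $N/2$ sites form the slow tensor index):

\begin{verbatim}
import numpy as np

def bipartite_entropy(psi, N):
    # von Neumann entropy (base 2) of the symmetric bipartition of a
    # pure state psi on N sites (N even).
    m = psi.reshape(2**(N // 2), -1)          # left half vs. right half
    s = np.linalg.svd(m, compute_uv=False)    # Schmidt coefficients
    p = (s**2)[s**2 > 1e-14]                  # spectrum of the reduced rho
    return float(-np.sum(p * np.log2(p)))
\end{verbatim}

The ensemble average of this quantity over realizations is the $S(N,\lambda)$ plotted in Fig.~\ref{fig:3}.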
We note that our result is essentially different from results for other models which can be obtained by the perturbative real-space renormalization group \cite{Fisher}, for example the disordered critical Heisenberg chain \cite{rafel}, where $c = (\ln 2)/3$; in general, $c$ is model dependent \cite{santachiara}. The fact that the entanglement is reduced in the RNC case with local disorder can be explained by chaotic behavior \cite{Lea:04}, signalled by the level repulsion in the gap distribution. A similar effect can be observed in localization properties, where hopping of excitations induced by inter-particle interactions is diminished by introducing local disorder \cite{Dykman:04,Olivier:07}. This effect of increased localization is useful for successful quantum computing and has important consequences for transport properties such as conductivity \cite{Altshuler}. \subsection{Correlation functions} \begin{figure}[h!] \center{\includegraphics[width=0.8\linewidth]{pix/fig4.eps}} \caption{ Ensemble fluctuations of the ground state spin-spin correlation function $C(r)$ versus distance $r$, for different values of the parameter $\lambda$ (indicated in the legend). Open symbols indicate results for chain lengths $N=16$ and $N=20$, whereas closed symbols stand for $N=24$.} \label{fig:4} \end{figure} The most direct probe of criticality is perhaps the investigation of long-range order and (spatial) correlation functions. In order to do this we compute the ensemble-averaged fluctuation of the spin-spin correlation function between two vertices \begin{equation} C(j,k) = \ave{|\bra{0}\sigma_j^\alpha \sigma_k^\beta\ket{0} - \bra{0}\sigma_j^\alpha\ket{0}\bra{0}\sigma_k^\beta\ket{0}|^2}. \label{eq:C} \end{equation} Note that we have to consider average fluctuations of the spin-spin correlation function instead of the correlation function itself, since the latter has to vanish due to the local gauge invariance properties of the TSRE. Because of the local invariance of the TSRE it is enough to consider a single type of correlation function, as the RHS of (\ref{eq:C}) {\em does not} depend on the indices $\alpha,\beta$ if $j\neq k$; in fact, in numerical computations we average over $\alpha,\beta$ in order to improve statistics. We consider the Hamiltonian (\ref{eq:H1}) with periodic boundary conditions. This allows us to average the fluctuation of the correlation over the chain, and hence $C(r) = \frac{1}{N} \sum_i C(i,i+r)$, with the indices taken modulo $N$. We expect that the results would be qualitatively the same for the model with open boundaries and sufficiently large $N$, but we obtain better statistics in this way. Fig.~\ref{fig:4} shows the averaged correlation function fluctuation $C(r)$ for a few choices of the control parameter $\lambda$ and of the chain lengths $N=16,20,24$. The results for chains of different lengths coincide for small distances $r$, whereas for larger $r$ finite-size effects are noticeable. For sufficiently large chains it can be conjectured that the fluctuations (or effective correlations) asymptotically decay in the RNC case as $C(r) \asymp C_0 2^{-r/\xi}$ with a finite correlation length $\xi$, whereas the decay in the RC case is slower than exponential, perhaps a power law, which indicates long-range order, $\xi=\infty$.
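A per-realization estimator of the quantity inside the ensemble average of (\ref{eq:C}) is easy to write down explicitly. The following Python sketch (an illustration with \texttt{numpy}; \texttt{site\_op} embeds a Pauli matrix at a given site, as in the Hamiltonian sketch above) averages over the spin components $\alpha,\beta$ and over the ring; the outer average $\ave{\cdot}$ is then a loop over realizations.

\begin{verbatim}
import numpy as np

PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def site_op(op, j, N):
    out = np.eye(1, dtype=complex)
    for k in range(N):
        out = np.kron(out, op if k == j else np.eye(2))
    return out

def corr_fluct(psi, j, k, N):
    # |<0|s_j^a s_k^b|0> - <0|s_j^a|0><0|s_k^b|0>|^2, averaged over a, b.
    total = 0.0
    for a in range(3):
        for b in range(3):
            sj = site_op(PAULIS[a], j, N)
            sk = site_op(PAULIS[b], k, N)
            exp_jk = psi.conj() @ sj @ sk @ psi
            conn = exp_jk - (psi.conj() @ sj @ psi) * (psi.conj() @ sk @ psi)
            total += abs(conn)**2
    return total / 9.0

def C_of_r(psi, r, N):
    # Ring average C(r) = (1/N) sum_i C(i, i+r), indices modulo N.
    return np.mean([corr_fluct(psi, i, (i + r) % N, N) for i in range(N)])
\end{verbatim}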
\subsection{Correlation length and the entanglement entropy saturation value} In Fig.~\ref{fig:4} we observe that the degree of localization depends on the control parameter $\lambda$: the correlation function $C(r)$ decays on larger scales as we approach the critical point $\lambda=0$, which results in a larger correlation length $\xi$. Eventually, the correlation length becomes infinite at the critical point $\lambda=0$. \begin{figure}[h!] \center{\includegraphics[width=0.85\linewidth]{pix/fig5.eps}} \caption{The correlation length $\xi$ as a function of the control parameter $\lambda$. Open and full symbols designate the correlation length obtained from the best fit of $C(r)$ with $c \exp(-r/\xi)$ on the sets $r\in \{4,\ldots,8\}$ and $r\in \{5,\ldots,7\}$, respectively, all for $N=24$. The inset demonstrates the $\propto -\log_2\lambda$ scaling. Note that the last point, at $\lambda=0.1$, is rather inaccurate due to the insufficiently large system size $N$, so it is not used in the linear fit (full line) in the inset, for which the other points with $\lambda \le 1$ have been used. } \label{fig:5} \end{figure} In Fig.~\ref{fig:5} we show the dependence of the correlation length $\xi$ on the control parameter $\lambda$ as obtained by an exponential fit of $C(r)$ for a finite size $N=24$. Unlike for conventional phase transitions, as e.g.\ in \cite{Osterloh:02}, the correlation length seems to diverge logarithmically as $\xi(\lambda) \sim -\xi_0 \log_2 \lambda + {\rm const}$ with $\xi_0 = 0.26$ (indicated in the inset of Fig.~\ref{fig:5}), even though algebraic scaling cannot be entirely excluded with the numerical data that are available at present. A large correlation length has a strong effect on the entanglement. Long-range correlations demand longer chains for the entanglement entropy to saturate, whereas the saturation value itself also grows when the critical point is approached. In Fig.~\ref{fig:6} we plot the entanglement entropy saturation value for various values of the parameter $\lambda$ and observe similar behavior as for the correlation length. \begin{figure}[h!] \center{\includegraphics[width=0.85\linewidth]{pix/fig6.eps}} \caption{Saturated entanglement entropy $S_\infty$ as a function of the control parameter $\lambda$. The inset demonstrates the $\propto \log_2 \log_2(\lambda^\star/\lambda)$ scaling, where $\lambda^\star=4$, which is indicated by a straight line obtained from a best fit to the points with $\lambda < 1$. } \label{fig:6} \end{figure} However, unlike for the correlation length, the quantity which diverges logarithmically when $\lambda \to 0$ is not the entanglement entropy $S$ but its exponential. In fact, numerical data for small $\lambda$ show good agreement with $2^{k S} \propto -\log_2 \lambda + {\rm const}$, where $k = 4.0$ (indicated in the inset of Fig.~\ref{fig:6}). Note that $2^S$ is to a good approximation proportional to the effective rank, or Schmidt number, of the ground state, $\chi^{\epsilon}$ \cite{Kitaev,MPS}, which denotes the number of eigenvalues of the reduced density matrix needed to describe the state of the system up to an error $\epsilon$. In fact, the effective rank $\chi^\epsilon$, rather than the entanglement entropy, is the decisive indicator of simulability by the DMRG method \cite{Schuch:07} and, we believe, also a relevant quantity in the description of a quantum phase transition.
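The two derived quantities used above, the fitted correlation length and the effective rank, can be extracted as in the following sketch (Python with \texttt{numpy}/\texttt{scipy}; the fit window and the cutoff $\epsilon$ are illustrative choices mirroring Fig.~\ref{fig:5} and the definition of $\chi^\epsilon$):

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def correlation_length(r, C):
    # Best fit of C(r) with c * exp(-r/xi) on the supplied window
    # (e.g. r in {4,...,8}); returns the fitted (c, xi).
    r, C = np.asarray(r, float), np.asarray(C, float)
    (c, xi), _ = curve_fit(lambda r, c, xi: c * np.exp(-r / xi),
                           r, C, p0=(C[0], 2.0))
    return c, xi

def effective_rank(psi, N, eps=1e-6):
    # Schmidt number chi^eps: number of reduced-density-matrix
    # eigenvalues needed to carry all but weight eps of the state.
    s = np.linalg.svd(psi.reshape(2**(N // 2), -1), compute_uv=False)
    p = np.sort(s**2)[::-1]
    return int(np.searchsorted(np.cumsum(p), 1.0 - eps) + 1)
\end{verbatim}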
\section{Conclusions} In the present paper we have defined a two-body random matrix ensemble of independent spin Hamiltonians which is invariant under local ${\cal SO}(3)$ transformations, and we have described it in the framework of undirected graphs. As the simplest example, we have studied a chain with nearest-neighbour interactions in a random external field and observed a non-conventional phase transition when the external field is switched off. The system is always critical in conventional terminology, as it has a vanishing gap in the thermodynamic limit in all cases studied. Yet we have shown that, in the presence of a random external field breaking time-reversal invariance, the locally disordered system has many properties of non-critical systems, such as a finite correlation length and a finite bipartite entanglement entropy in the thermodynamic limit, whereas the gap decay obeys a universal power-law dependence. The transition towards the critical point as the external field vanishes exhibits a logarithmic divergence of the correlation length and of the effective rank of the ground state. We have no explanation for the logarithmic behavior in the quantum phase transition. The model proposed is much richer than the example discussed. Thus we expect that a higher connectivity of the graph will yield very different results, but even an exploration of the high-temperature behaviour of the chain seems very worthwhile. In view of the large structural invariance group of the ensemble in the case of site-independent average couplings and external fields, we hope that some analytic results can be obtained for this ensemble. \section*{Acknowledgments} We acknowledge support by the Slovenian Research Agency, program P1-0044 and grant J1-7437, by CONACyT under grant 57334, and by UNAM-PAPIIT under grant IN112507. IP and TP thank THS and CIC Cuernavaca for hospitality.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro} Distributed graph coloring is one of the most fundamental and extensively studied problems in distributed computing~\cite{luby86,alon86,linial87,cole86,goldberg87,szegedy93,kuhn06,barenboim13,barenboim14,fraigniaud16,barenboim16sublinear,barenboim16locality,harris16,chang18,maus20,barenboim21,ghaffari22}. Research on this problem dates back to the early days of distributed computing~\cite{luby86,alon86,cole86,linial87,goldberg87}. Among them, Linial's seminal work~\cite{linial87} first used the distributed graph coloring problem to study the limitations of local computation. In fact, distributed graph coloring, as a locally checkable labeling problem, is widely considered to be one of the benchmark problems for answering the fundamental question ``what can be computed locally''~\cite{naor93}. This problem also has a wide range of applications in practice, including channel allocation, scheduling, and mutual exclusion~\cite{guellati10,kuhn09}. In graph theory, for a given graph $G=(V,E)$, a \emph{$q$-coloring} is a mapping $\phi:V\to Q$ from the vertex set $V$ to the palette $Q$, where $|Q|=q$. A $q$-coloring is \emph{proper} if $\phi(u)\neq \phi(v)$ for every edge $(u,v)\in E$. Distributed graph coloring is often studied in the synchronous message-passing model~\cite{peleg00}. In this model, a communication network is represented by an $n$-vertex graph $G=(V,E)$ with maximum degree $\Delta$. Each vertex $v\in V$ hosts a processor and each edge $(u,v)\in E$ denotes a communication link between two vertices $u$ and $v$. Each vertex $v\in V$ has a unique identifier $id(v)$ belonging to the set $[n]=\{0,1,\cdots,n-1\}$. In each synchronous \emph{round}, vertices perform unlimited local computation and exchange messages with their neighbors. The time complexity of an algorithm is the maximum number of rounds required for all vertices to arrive at a solution for the considered problem. There is a natural family of distributed graph coloring algorithms known as the \emph{locally-iterative coloring algorithms}, introduced by Szegedy and Vishwanathan~\cite{szegedy93}. Throughout the execution of such algorithms, a proper coloring of the network graph is maintained and updated from round to round. Moreover, in each round, for each vertex $v\in V$, its next color is computed from its current color and the current colors of its neighbors $N(v)\triangleq\{u\in V\mid(u,v)\in E\}$. We define such algorithms as follows. \begin{definition} [\textbf{Locally-iterative Coloring Algorithm}] \label{def:locally-iter-alg} In the synchronous message-passing model, an algorithm for graph coloring is said to be \emph{locally-iterative} if it maintains a sequence of proper colorings $\phi_t$ of the input $G=(V,E)$ such that: \begin{itemize}[topsep=4pt,itemsep=-2pt] \item the initial coloring $\phi_0$ is constructed locally in the sense that, for every $v\in V$, its initial color $\phi_0(v)$ is computed locally from $id(v)$; \item in each round $t\ge 1$, every vertex $v$ computes its next color $\phi_{t}(v)$ based only on its current color $\phi_{t-1}(v)$ along with the multiset of colors $\{\phi_{t-1}(u)\mid u\in N(v)\}$ appearing in $v$'s neighborhood.
\end{itemize} \end{definition} \begin{remark} [Uniform algorithm] \label{remark:uniform-alg} By our definition, in a locally-iterative coloring algorithm, in every round $t\geq 1$, the colors of vertices are updated according to a uniform rule which is oblivious to the current round number and the identity of the vertex running it. This can be formally expressed as a function $\mathcal{A}$ such that: \[ \phi_{t}(v)\gets\mathcal{A}\left(\phi_{t-1}(v),\{\phi_{t-1}(u)\mid u\in N(v)\}\right). \] \end{remark} \smallskip Due to the simplicity and naturalness of this framework, locally-iterative algorithms are of great significance both in theory and in practice. Indeed, in computer science, many classical algorithms with a wide range of applications are locally-iterative in nature, such as belief propagation~\cite{pearl88} and distance-vector routing~\cite{bertsekas92}. In this paper, we seek fast locally-iterative algorithms that can compute a proper $(\Delta+1)$-coloring. A heuristic $\Omega(\Delta\log\Delta+\log^* n)$ lower bound for locally-iterative $(\Delta+1)$-coloring algorithms was shown in \cite{szegedy93}, and it holds unless there exists ``a very special type of coloring that can be very efficiently reduced''. For a long time, this bound matched the fastest algorithm~\cite{kuhn06}. Nevertheless, such a ``special'' type of coloring was found in a recent breakthrough: Barenboim, Elkin, and Goldenberg~\cite{barenboim21} devised a locally-iterative $(\Delta+1)$-coloring algorithm with $O(\Delta)+\log^*n$ runtime, breaking the long-standing barrier. \footnote{Recall from \Cref{remark:uniform-alg} that, in the work by Barenboim, Elkin, and Goldenberg~\cite{barenboim21}, the update rule is defined to be a symmetric function taking the set of colors $\Phi(v)=\{\phi(v)\}\cup\{\phi(u)\mid u\in N(v)\}$ as input. Nonetheless, the algorithm developed in that paper depends on being able to distinguish $\phi(v)$ among $\Phi(v)$, hence the update rule actually used is symmetric only over the neighbors' colors.} The landscape for general distributed graph coloring algorithms is somewhat different. According to \cite{barenboim21}, all $(\Delta+1)$-coloring algorithms developed before 2009 are locally-iterative, including, retrospectively, those predating Szegedy and Vishwanathan's work~\cite{goldberg87,linial87}. After 2009, a series of algorithms that are not locally-iterative were developed, achieving linear-in-$\Delta$~\cite{barenboim09linear,kuhn09,barenboim14} or even sublinear-in-$\Delta$~\cite{barenboim16sublinear,fraigniaud16,maus20} runtime. Therefore, a natural question---which is also the main open problem raised in \cite{barenboim21}---is: can one compute a proper $(\Delta+1)$-coloring using a locally-iterative algorithm with $o(\Delta)+\log^*n$ running time? \subsection{Our results} We answer the above question affirmatively, which is formally stated in the following theorem. \begin{theorem} [\textbf{Locally-iterative Coloring Algorithm}] \label{thm:alg-main} There exists a locally-iterative coloring algorithm that, for any input graph with $n$ vertices and maximum degree $\Delta$, produces a proper $(\Delta+1)$-coloring within $O(\Delta^{3/4}\log{\Delta})+\log^*{n}$ rounds. \end{theorem} \begin{remark} [Initialization]\label{remark:init} Recall \Cref{def:locally-iter-alg}: our algorithm uses a very simple rule to construct the initial coloring, $\phi_0(v)=id(v)+\ell$, where $\ell=\ell(n,\Delta)$ is some fixed parameter that only depends on $n$ and $\Delta$.
Furthermore, the unique identifier assignment $id$ can be relaxed to any proper $n$-coloring with palette $[n]$. Alternatively, one may consider a more restrictive definition of locally-iterative coloring algorithms, where the initial coloring $\phi_0$ is allowed to be an \emph{arbitrary} proper coloring over some palette $Q$. Assuming such a more restrictive formulation, our algorithm can be interpreted as the concatenation of two locally-iterative coloring algorithms, where the initialization step corresponds to a 0-round locally-iterative coloring algorithm that transforms an arbitrary proper $n$-coloring with palette $[n]$ to a proper $n$-coloring with a new palette $[\ell,\ell+n)$, and the local update steps correspond to another locally-iterative coloring algorithm using an arbitrary proper $n$-coloring with palette $[\ell,\ell+n)$ as its initial coloring. \end{remark} Our algorithm is the first locally-iterative $(\Delta+1)$-coloring algorithm achieving a runtime with sublinear dependency on the maximum degree $\Delta$. In designing it, we build upon various tools and techniques, making modifications tailored for our purpose and setting, and eventually obtain a locally-iterative procedure that transforms a proper $O(\Delta^2)$-coloring to a proper $(\Delta+O(\Delta^{3/4}\log\Delta))$-coloring within $o(\Delta)$ rounds. Inside this procedure we work on special proper colorings that encode (arb)defective colorings, and reduce the number of used colors quadratically in a locally-iterative fashion. Combining this procedure with Linial's well-known color-reduction procedure~\cite{linial87} and the folklore reduce-one-color-per-round procedure, we obtain the complete algorithm. A more detailed technique overview of our algorithm will be given in \Cref{subsec:technique-overview}. \subsubsection{An application in self-stabilizing coloring} Fault-tolerance is another central and fundamental problem in distributed computing. \emph{Self-stabilization}, a concept proposed by Edsger W.\ Dijkstra~\cite{dijkstra74}, is a property that, roughly speaking, guarantees that a distributed system starting from an arbitrary state eventually converges to a desired behavior. This concept is regarded as ``a milestone in work on fault tolerance'' by Leslie Lamport, and he believes self-stabilization is ``a very fertile field for research''~\cite{lamport85}. Indeed, over the last four decades, a vast collection of self-stabilizing distributed algorithms have been devised (see, e.g., the classical monograph by Dolev~\cite{dolev00} and the more recent one by Altisen et al.~\cite{altisen19} for more details), and several of them have seen practical applications~\cite{datta94,chen05}. In this paper, we adopt the same self-stabilizing setting assumed by Barenboim, Elkin, and Goldenberg~\cite{barenboim21}. In this setting, the memory of each vertex consists of two parts: the immutable \emph{read-only memory (ROM)} and the mutable \emph{random access memory (RAM)}. The ROM part is faultless but cannot change during execution; it may be used to store hard-wired data such as the vertex identity and the graph parameters $n,\Delta$, as well as the program code. The RAM part, on the other hand, can change during algorithm execution; it stores the internal states of the algorithm. The RAM part is subject to error, controlled by an adversary called \emph{Eve}.
At any moment during the execution, the adversary can examine the entire memory (including both ROM and RAM) of all vertices, and then make arbitrary changes to the RAM part of all vertices. An algorithm is \emph{self-stabilizing} if it can still compute a proper solution once the adversary stops disrupting its execution. Specifically, assuming that $T_0$ is the last round in which the adversary makes any changes to vertices' RAM areas, if it is always guaranteed that by the end of round $T_0+T$ a desired solution (e.g., a proper $(\Delta+1)$-coloring in our context) is produced by the algorithm, then the algorithm is self-stabilizing with \emph{stabilization time} $T$. Due to the locally-iterative nature of our coloring algorithm, it can be transformed into a self-stabilizing coloring algorithm with relative ease. In exchange for this much stronger level of fault-tolerance, our self-stabilizing algorithm has to store the identity of the vertex running it in the ROM area, implying that it is no longer locally-iterative. (Nonetheless, such non-uniformity is necessary: in a highly symmetric graph---e.g., complete graphs---in which vertices start with identical states, no deterministic algorithm can break symmetry among vertices, hence proper coloring cannot be done.) Our result for self-stabilizing coloring is stated in the following theorem. \begin{theorem}[\textbf{Self-stabilizing Coloring Algorithm}]\label{thm:alg-self-stab} There exists a self-stabilizing coloring algorithm that, for any input graph with $n$ vertices and maximum degree $\Delta$, produces a proper $(\Delta+1)$-coloring with $O(\Delta^{3/4}\log{\Delta})+\log^*{n}$ stabilization time. \end{theorem} To the best of our knowledge, for the model we considered, this is the first self-stabilizing algorithm for $(\Delta+1)$-coloring with sublinear-in-$\Delta$ stabilization time. In adapting the locally-iterative algorithm to the self-stabilizing setting, we have to cope with a strong level of ``asynchrony'' among vertices, as the adversary can manipulate vertices' states and put them in different stages of the algorithm. We also carefully craft an error-checking procedure that is executed at the beginning of each round; this mechanism ensures that, once the adversary stops disrupting algorithm execution, any vertex with an ``improper'' state is detected within one round and resets itself to some proper state. Moreover, such resetting occurs at most once for each vertex, hence allowing the algorithm to make progress efficiently when faults no longer occur. \subsection{Technique overview}\label{subsec:technique-overview} \paragraph{Algorithm framework.} A framework containing three \emph{phases} is used by many distributed graph coloring algorithms (e.g., \cite{linial87,kuhn06,barenboim16sublinear,barenboim21}). In this framework, the first phase, which is often referred to as the Linial phase, employs Linial's celebrated algorithm~\cite{linial87} (or some variant of it) and produces a proper $\alpha$-coloring within $\log^*{n}+O(1)$ rounds, where $\alpha=O(\Delta^2)$. The second phase then takes the proper $O(\Delta^2)$-coloring as input and reduces it to another proper $\beta$-coloring, where $\beta\in(\Delta+1,\alpha)$.
In the last phase, which is often referred to as the standard reduction phase, in each round, every vertex that currently has the maximum color value among its neighborhood updates its color to be the smallest value in $[\Delta+1]$ that has not been used by any of its neighbors, effectively reducing the maximum color value used by any vertex by at least one. Hence, within $\beta-(\Delta+1)$ rounds into the third phase, a proper $(\Delta+1)$-coloring is obtained. Two metrics affect the runtime of the algorithms employing this framework: (1) the duration of the second phase; and (2) the value of $\beta$. Notice that these two factors are conflicting: running the second phase longer could result in a stronger color reduction, implying a smaller $\beta$ value and a shorter third phase. In the recent breakthrough~\cite{barenboim21}, the authors devised a second-phase algorithm called the ``additive-group algorithm'' which has $O(\Delta)$ runtime and ensures $\beta=O(\Delta)$. Internally, this additive-group algorithm performs a quadratic reduction: $\beta=O(\sqrt{\alpha})$, see Corollary 3.5 of \cite{barenboim21}. To break the linear-in-$\Delta$ time complexity, in this paper, we devise a new locally-iterative intermediate phase algorithm that produces a proper $(\Delta+O(\Delta^{3/4}\log\Delta))$-coloring within $O(\Delta^{3/4}\log\Delta)$ rounds. Compared with the state of the art, our algorithm brings improvements on both of the aforementioned performance-relevant metrics: (1) reduced runtime (from $O(\Delta)$ to $O(\Delta^{3/4}\log\Delta)$); and (2) stronger color reduction (from $\beta=O(\Delta)$ to $\beta=\Delta+O(\Delta^{3/4}\log\Delta)$). \paragraph{New intermediate phase.} Inside our intermediate phase algorithm are three efficient \emph{stages}, the core of which is a variant of the additive-group algorithm (Algorithm 6 in \cite{barenboim21}) that reduces the number of used colors quadratically from $O(\Delta^{3/2}\log^2\Delta)$ to $O(\Delta^{3/4}\log\Delta)$ within $O(\Delta^{3/4}\log\Delta)$ rounds. Hence, we also call the second phase the \emph{quadratic reduction phase}. However, a limitation of this variant is that it starts with a defective coloring and produces an arbdefective coloring. (Roughly speaking, (arb)defective colorings are ``imperfect'' colorings in that neighboring vertices may have identical colors, but the number of such collisions is limited. See \Cref{sec:model-and-preliminary}.) So, our core stage actually transforms a $\Delta^{1/4}$-defective $O(\Delta^{3/2}\log^2\Delta)$-coloring to another $(2\cdot\Delta^{1/4})$-arbdefective $O(\Delta^{3/4}\log\Delta)$-coloring. This brings up three issues: (1) how to design a ``transition-in'' stage to transform the Linial phase's proper $O(\Delta^2)$-coloring to a defective coloring required by the core stage; (2) how to maintain a proper coloring during the intermediate phase since we are working on (arb)defective colorings; and (3) how to design a ``transition-out'' stage to transform the arbdefective coloring produced by the core stage to a proper coloring so that the last phase may proceed. To solve the first issue, we employ the defective coloring algorithm developed by Barenboim, Elkin, and Kuhn~\cite{barenboim14}, with suitable parameters tailored for our purpose. This leads to an extremely efficient transition-in stage taking only one round.
To solve the second issue, for every vertex $v$, we use a tuple $\langle a(v),b(v)\rangle$ to denote its color during the intermediate phase, where $a(v)$ is interpreted as the color of $v$ in the (arb)defective coloring schemes. Then, we employ various cover-free set systems to assign distinct $b$ values to neighbors that may have colliding $a$ values. (See \Cref{sec:model-and-preliminary} for the definition of cover-free set systems.) We also apply coding techniques to establish a one-to-one mapping between $\langle a,b\rangle$ tuples and an interval of colors, so that any two vertices with distinct tuples are assigned two distinct colors, and vice versa. (In fact, during the second phase, we use quadruples $\langle a(v),b(v),c(v),d(v)\rangle$ to denote the color of vertex $v$. The usage of the other two coordinates $c$ and $d$ will be discussed later.) Lastly, to solve the third issue, we employ techniques used in \cite{barenboim16sublinear} known as ``fixing'' and ``constructions of polynomials''. Nonetheless, adjustments to both the implementation and the analysis have to be made, as in this paper we are considering the more restrictive locally-iterative setting, and have to take ``asynchrony'' into consideration. Here, asynchrony means vertices may end the core stage and start running the transition-out stage in different rounds. In the end, we ensure the transition-out stage takes at most $O(\Delta^{3/4}\log\Delta)$ rounds for any vertex. \paragraph{Self-stabilizing coloring.} To adapt the locally-iterative algorithm to the self-stabilizing setting, the first challenge lies in dealing with a stronger level of asynchrony among vertices, as the adversary can manipulate vertices' internal states and put them in different phases or stages of the algorithm. By assigning disjoint color intervals to different phases, and enforcing that vertices running a phase always have colors in the corresponding interval, the difficulty comes down to handling asynchrony within each phase. (This technique was also used in \cite{barenboim21}.) The Linial phase and the third phase are naturally robust to such asynchrony, so only minor modifications are required. For the intermediate phase, the locally-iterative algorithm already deals with asynchrony during the core stage and the transition-out stage. Hence, the only remaining issue is the transition-in stage, which happens simultaneously for all vertices (and takes only one round) in the locally-iterative algorithm. To solve this problem, we adjust the output of the transition-in stage: instead of a $\Delta^{1/4}$-defective coloring, it now produces a $\Delta^{1/4}$-arbdefective coloring. Interestingly, the core stage still works without requiring major adjustments, and the performance of the core stage is also unaffected. The other major challenge is error detection and error correction. Once the adversary stops disrupting execution, the algorithm should quickly detect any anomalies in vertices' states and then correct them. Moreover, such corrections should not result in potential new errors that could break out in later rounds. To this end, we carefully extract the key invariants that each phase and/or stage should maintain, and run an error-checking procedure at the beginning of each round to see if any of these invariants is violated. In case no violations are found, the algorithm proceeds as usual; otherwise the vertex resets its color to the initial color, effectively restarting the algorithm from the Linial phase locally.
(This is the reason we require the identity to be stored in the ROM area.) Recall from the discussion in the last paragraph that our algorithm can handle asynchrony among vertices, so after resetting, even though different vertices may end up in different phases or stages, the algorithm can still proceed and make progress without running into any error. In other words, if $T_0$ is the last round in which any faults occur, for every vertex, the error-checking procedure will reset its color at most once, and such a reset will only occur in round $T_0+1$. \subsection{Related work} Distributed graph coloring is a fruitful and fast-evolving area; here we briefly discuss the most relevant work. Interested readers can refer to, e.g., the monograph by Barenboim and Elkin~\cite{barenboim13} for more details. Cole and Vishkin~\cite{cole86} initiated the study of distributed graph coloring on basic graphs such as rings and paths, and developed a deterministic $3$-coloring algorithm with running time $\log^*n + O(1)$. Goldberg and Plotkin~\cite{goldberg87parallel} devised the first $(\Delta+1)$-coloring algorithm for general graphs with $2^{O(\Delta)}+O(\log^* n)$ running time. Linial~\cite{linial87} devised an algorithm that computes an $O(\Delta^2)$-coloring using $\log^*n+O(1)$ time, implying an $O(\Delta^2+\log^*n)$ time $(\Delta+1)$-coloring algorithm. Szegedy and Vishwanathan~\cite{szegedy93} introduced the notion of locally-iterative graph coloring and derived a randomized algorithm along with a heuristic lower bound; the latter bound is attained by Kuhn and Wattenhofer's algorithm~\cite{kuhn06}. All works mentioned above are locally-iterative. Before this paper, the fastest locally-iterative $(\Delta+1)$-coloring algorithm was that of Barenboim, Elkin, and Goldenberg~\cite{barenboim21}, which has an $O(\Delta)+\log^*n$ running time. If we loosen the restriction of being locally-iterative, $(\Delta+1)$-coloring algorithms with linear-in-$\Delta$ runtime were first proposed in \cite{barenboim09linear,kuhn09,barenboim14}, then faster algorithms with sublinear-in-$\Delta$ runtime were also discovered~\cite{barenboim16sublinear,fraigniaud16,barenboim21,maus20}. The current best upper bound for $(\Delta+1)$-coloring focusing on $\Delta$-dependency is due to Maus and Tonoyan~\cite{maus20}, achieving a running time of $O(\sqrt{\Delta\log\Delta}+\log^*n)$. For randomized algorithms, in 1986, the seminal work of Luby~\cite{luby86} and of Alon, Babai, and Itai~\cite{alon86} showed that distributed $(\Delta+1)$-coloring can be solved within $O(\log{n})$ rounds. Barenboim, Elkin, Pettie and Schneider~\cite{barenboim16locality} improved this bound to $O(\log\Delta)+2^{O(\sqrt{\log\log n})}$. Some improvements have been obtained on this upper bound while maintaining the term $2^{O(\sqrt{\log\log n})}$~\cite{harris16, chang18}. This term was improved to $\text{poly}(\log\log n)$ with the use of better network decomposition techniques~\cite{rozhovn20}. More recently, the upper bound was improved to $O(\log^3\log n)$ in both the CONGEST and the LOCAL model by Ghaffari and Kuhn~\cite{ghaffari22}, and by Halld\'{o}rsson, Nolin, and Tonoyan~\cite{halldorsson21}. There are also many deterministic algorithms focusing on the $n$-dependency. Rozho\v{n} and Ghaffari~\cite{rozhovn20} derived the first $\text{poly}(\log{n})$-round algorithm, with running time $O(\log^7 n)$, using network decomposition. This was reduced to $O(\log^5 n)$ with improvements on network decomposition~\cite{ghaffari21}.
Very recently, this bound was improved by Ghaffari and Kuhn~\cite{ghaffari22} to $O(\log^3 n)$ rounds, without using network decomposition. Distributed graph coloring is also extensively studied in the context of self-stabilization~\cite{dolev00,altisen19}. There are algorithms devised for coloring bipartite graphs~\cite{sur93,kosowski06}, planar graphs~\cite{ghosh93,huang05}, and general graphs~\cite{goddard04,hedetniemi03,gradinariu00}. See \cite{guellati10} for a survey on self-stabilizing coloring algorithms obtained before 2010. In \cite{barenboim21}, Barenboim, Elkin and Goldenberg devised the first sublinear-in-$n$ self-stabilizing $(\Delta+1)$-coloring algorithm, achieving $O(\Delta+\log^*n)$ stabilization time. We improve this bound to sublinear-in-$\Delta$ in this paper. \subsection{Paper outline} In \Cref{sec:model-and-preliminary}, we formally introduce (arb)defective colorings and cover-free set systems; these coloring schemes and tools are used extensively in our algorithms. In \Cref{sec:alg-framework}, we give a more thorough introduction to the structure of our locally-iterative coloring algorithm, along with detailed descriptions and analyses of the first and last phases of the algorithm; we conclude \Cref{sec:alg-framework} with a proof of \Cref{thm:alg-main}. \Cref{sec:quadratic-reduction-phase} provides a detailed description and analysis of the quadratic reduction phase. We present the self-stabilizing coloring algorithm and prove \Cref{thm:alg-self-stab} in \Cref{sec:alg-stab}. Lastly, we conclude this paper and briefly discuss potential future work directions in \Cref{sec:conclusion}. \section{Preliminaries}\label{sec:model-and-preliminary} \paragraph{Graph coloring.} Let $G=(V,E)$ be an undirected graph. For any vertex $v\in V$, we use $$N(v)\triangleq\left\{u\in V\mid (u,v)\in E\right\}$$ to denote its neighborhood, and we use $\Delta\triangleq\max_{v\in V}|N(v)|$ to denote the maximum degree of $G$. Let $q>0$ be a positive integer and $Q$ be a \emph{palette} of $q=|Q|$ colors. A \emph{$q$-coloring} $\phi: V\to Q$ of graph $G$ assigns each vertex $v\in V$ one of the $q$ colors from $Q$, and is said to be: \begin{itemize}[topsep=4pt,itemsep=-2pt] \item \emph{proper} if $\phi(u)\neq\phi(v)$ for every edge $(u,v)\in E$; \item \emph{$d$-defective} if for every $v\in V$, the number of neighbors $u\in N(v)$ with $\phi(u)=\phi(v)$ is at most $d$; \item \emph{$a$-arbdefective} if for every used color $\kappa\in Q$, the arboricity of the subgraph induced by the vertices having color $\kappa$ is at most $a$, where the arboricity of an undirected graph is the minimum number of forests into which its edges can be partitioned. \end{itemize} \paragraph{Cover-free set systems.} The existence of \emph{$\Delta$-cover-free set systems} is crucial for Linial's celebrated coloring algorithm~\cite{linial87}. We use such set systems extensively in our algorithm as well. \begin{definition}\label{def:cover-free-set-system} Let $U\neq\emptyset$ be a finite set and $\Delta>0$ be an integer. A set system $\mathcal{F}\subseteq 2^{U}$ with ground set $U$ is \emph{$\Delta$-cover-free} if for every $S_0\in\mathcal{F}$ and every $\Delta$ sets $S_1,\cdots,S_{\Delta}\in\mathcal{F}\setminus\{S_0\}$, it holds that $S_0\nsubseteq\bigcup_{i=1}^{\Delta}S_i$.
\end{definition} \begin{theorem}[Erd\H{o}s, Frankl and F\"{u}redi \cite{erdos85}]\label{thm:set-families} For any integers $n>\Delta>0$, there exists a positive integer $m$ satisfying \[ m\le \begin{cases} 4(\Delta+1)^2\log^2{n} & \text{if }n> 8(\Delta+1)^3,\\ 4(\Delta+1)^2 & \text{if }n\le 8(\Delta+1)^3, \end{cases} \] such that for every finite set $U$ of size $|U|\ge m$, there exists a $\Delta$-cover-free set system $\mathcal{F}\subseteq 2^{U}$ of size $|\mathcal{F}|=n$ with ground set $U$. \end{theorem} Barenboim, Elkin, and Kuhn~\cite{barenboim14} generalized $\Delta$-cover-free set systems to a notion of \emph{$\Delta$-union-$(\rho+1)$-cover-free set systems}, and proved their existence for reasonably small parameters. Our algorithm utilizes such generalized cover-free set systems as well. \begin{definition}\label{def:cover-free-set-system-extension} Let $U\neq\emptyset$ be a finite set and $\Delta,\rho$ be two positive integers. A set system $\mathcal{F}\subseteq 2^{U}$ with ground set $U$ is \emph{$\Delta$-union-$(\rho+1)$-cover-free} if for every $S_0\in\mathcal{F}$ and every $\Delta$ sets $S_1,\cdots,S_{\Delta}\in\mathcal{F}\setminus\{S_0\}$, there exists at least one element $x\in S_0$ that appears in at most $\rho$ sets among $S_1,S_2,\cdots,S_{\Delta}$, that is, \[ |\{i\mid x\in S_i, 1\le i\le \Delta \}|\leq \rho. \] \end{definition} \begin{theorem}[Theorem 3.9 of \cite{barenboim14}]\label{thm:set-families-extension} For any integers $n>\Delta>\rho>0$, there exists a $\Delta$-union-$(\rho+1)$-cover-free set family $\mathcal{F}\subseteq 2^{U}$ of size $|\mathcal{F}|=n$ with ground set $U$ satisfying $$|U|\leq 4\cdot\left(\frac{\Delta+1}{\rho+1}\right)^2\log^2{n}.$$ \end{theorem} \section{Overview of the Locally-iterative Coloring Algorithm}\label{sec:alg-framework} As mentioned earlier, our algorithm contains three \emph{phases}: the Linial phase, the quadratic reduction phase, and the standard reduction phase. More specifically: \begin{itemize}[topsep=4pt,itemsep=-2pt] \item The Linial phase transforms a proper $n$-coloring to a proper $O(\Delta^2)$-coloring in $\log^*{n}+O(1)$ rounds. \item The quadratic reduction phase transforms a proper $O(\Delta^2)$-coloring to a proper $(\Delta+O(\Delta^{3/4}\log\Delta))$-coloring in $O(\Delta^{3/4}\log{\Delta})$ rounds. \item The standard reduction phase transforms a proper $(\Delta+O(\Delta^{3/4}\log\Delta))$-coloring to a proper $(\Delta+1)$-coloring in $O(\Delta^{3/4}\log\Delta)$ rounds. \end{itemize} Recall from the technique overview that the quadratic reduction phase contains two \emph{transition stages} and one \emph{core stage}. In the first transition stage, which is called the \emph{transition-in stage}, vertices use one round to transform the Linial phase's proper $O(\Delta^2)$-coloring to a proper $\tilde{O}(\Delta^2)$-coloring which internally encodes a $\Delta^{1/4}$-defective $O(\Delta^{3/2}\log^2\Delta)$-coloring.\footnote{Throughout the paper, we use $\tilde{O}(\cdot)$ to hide poly-logarithmic terms in $\Delta$ (but \emph{not} in $n$) in the standard ${O}(\cdot)$ notation.} Then, in the core stage, vertices use $O(\Delta^{3/4}\log\Delta)$ rounds to transform the proper $\tilde{O}(\Delta^2)$-coloring to a proper $\tilde{O}(\Delta^{5/4})$-coloring. Internally, the core stage transforms the $\Delta^{1/4}$-defective $O(\Delta^{3/2}\log^2\Delta)$-coloring to a $(2\cdot\Delta^{1/4})$-arbdefective $O(\Delta^{3/4}\log\Delta)$-coloring.
Lastly, in the second transition stage, which is called the \emph{transition-out stage}, vertices use another $O(\Delta^{3/4}\log\Delta)$ rounds to transform the proper $\tilde{O}(\Delta^{5/4})$-coloring to a proper $(\Delta+O(\Delta^{3/4}\log\Delta))$-coloring, getting ready for the standard reduction phase. See \Cref{fig:alg-structure} for an overview of the algorithm structure. \begin{figure}[t!] \centering \hrule \vspace{1ex} \includegraphics[scale=0.6]{alg-structure-v1} \vspace{1ex} \hrule \vspace{2ex} \caption{Structure of the locally-iterative $(\Delta+1)$-coloring algorithm.}\label{fig:alg-structure} \vspace{-3ex} \end{figure} We stress that, during execution, although our algorithm internally works with improper colorings such as defective and arbdefective colorings, with the help of cover-free set systems and coding, we always ensure that the coloring the vertices produce at the end of each round is proper. Being locally-iterative also means our algorithm cannot depend on the current round number to determine which phase it is in. To solve this issue, we assign each phase an interval so that vertices running that phase will have colors in the corresponding interval. By assigning disjoint intervals to different phases, vertices can correctly determine their progress by observing their current colors. (This technique was also used in \cite{barenboim21}.) More specifically, the intervals used by the three phases are $I_1$, $I_2$, and $I_3$, respectively. We assume: $$|I_1|=\ell_1\text{, }|I_2|=\ell_2\text{, and }|I_3|=\ell_3,$$ and define: $$I_1\triangleq[\ell_3+\ell_2,\ell_3+\ell_2+\ell_1)\text{, }I_2\triangleq[\ell_3,\ell_3+\ell_2)\text{, and }I_3\triangleq[0,\ell_3).$$ To give the precise values for $\ell_1,\ell_2,\ell_3$, we first define three integers and three prime numbers: \begin{gather*} m_1 \triangleq 4\Delta^{3/2}\log^2(n_{r^*}) \text{,~~} m_2 \triangleq 4\sqrt{\Delta}\log^2(n_{r^*}) \text{,~~and~~} m_3 \triangleq 16\sqrt{\Delta}\log^2(\lambda^2 m_2);\\ \lambda \in (\sqrt{m_1}+1,2(\sqrt{m_1}+1)] \text{,~~} \mu \in (\sqrt{\Delta}+\sqrt{m_3},2(\sqrt{\Delta}+\sqrt{m_3})] \text{,~~and~~} \tau \in (\sqrt{m_3},2\sqrt{m_3}]. \end{gather*} Due to the Bertrand-Chebyshev theorem~\cite{chebyshev1852}, prime numbers $\lambda,\mu,\tau$ must exist. Then, we set: \begin{align*} |I_1| &= \ell_1=n+O\left(\log^*n\cdot\Delta^2\cdot\log^2{n}\right),\\ |I_2| &= \ell_2=2\lambda^3(\mu+1)\cdot m_3=O\left(\Delta^{13/4}\log^5\Delta\right),\\ |I_3| &= \ell_3=\Delta+(2\sqrt{m_3}+1)\cdot\mu=\Delta+O\left(\Delta^{3/4}\log{\Delta}\right). \end{align*} The following lemmas summarize the guarantees provided by each phase. \begin{lemma}[\textbf{Linial Phase}]\label{lemma:phase1-property-simple} By the end of round $r^*=\log^*{n}+O(1)$, all vertices have completed the Linial phase, producing a proper coloring $\phi_{r^*}$ where $\phi_{r^*}(v)\in[\ell_3+\ell_2,\ell_3+\ell_2+O(\Delta^2))\subseteq I_1$ for every vertex $v$. Moreover, for every round $t\in[1,r^*]$, the coloring $\phi_{t}$ is proper. \end{lemma} \begin{lemma}[\textbf{Quadratic Reduction Phase}]\label{lemma:phase2-property-simple} By the end of round $r^*+2+3\lambda$, all vertices have completed the quadratic reduction phase, producing a proper coloring $\phi_{r^*+2+3\lambda}$ where $\phi_{r^*+2+3\lambda}(v)\in I_3$ for every vertex $v$. Moreover, for every round $t\in[r^*+1,r^*+2+3\lambda]$, the coloring $\phi_{t}$ is proper.
\end{lemma} \begin{lemma}[\textbf{Standard Reduction Phase}]\label{lemma:phase3-property-simple} By the end of round $r^*+1+3\lambda+(2\sqrt{m_3}+1)\mu$, the coloring $\phi_{r^*+1+3\lambda+(2\sqrt{m_3}+1)\mu}$ is a proper $(\Delta+1)$-coloring. Moreover, for every round $t\in[r^*+3+3\lambda,\infty)$, the coloring $\phi_t$ is proper. \end{lemma} In the remainder of this section, we describe and analyze the Linial phase and the standard reduction phase in detail, then prove \Cref{lemma:phase1-property-simple} and \Cref{lemma:phase3-property-simple}. We conclude this section with a proof of \Cref{thm:alg-main}, and defer the detailed description of the quadratic reduction phase and the proof of \Cref{lemma:phase2-property-simple} to the next section. \subsection{The Linial phase}\label{subsec:phase1} The Linial phase runs a locally-iterative version of Linial's well-known coloring algorithm~\cite{linial87}, and it reduces the number of colors used by the vertices from $n$ to $O(\Delta^2)$ within $\log^*{n}+O(1)$ rounds. More specifically, let $n_0=n$ and for $i\ge 1$, define $$ n_i= \begin{cases} 4(\Delta+1)^2\log^2 (n_{i-1}) & \text{if }n_{i-1}> 8(\Delta+1)^3,\\ 4(\Delta+1)^2 &\text{if }n_{i-1}\le 8(\Delta+1)^3. \end{cases} $$ Let $r^*$ be the smallest $r\ge 0$ such that $n_r\le 4(\Delta+1)^2$. It has been shown that $r^*\le\log^*n+O(1)$. (See, e.g., Section 3.10 of \cite{barenboim13}.) During the Linial phase, vertices will reduce the number of colors used to $n_i$ after $i$ rounds, thus within $r^*\leq\log^*{n}+O(1)$ rounds the algorithm produces a proper $O(\Delta^2)$-coloring. Recall that the Linial phase assigns vertices colors in the interval $I_1=[\ell_3+\ell_2,\ell_3+\ell_2+\ell_1)$. We set: $$|I_1|=\ell_1=\sum_{i=0}^{r^*} n_i=n+O(\log^*n\cdot\Delta^2\cdot\log^2{n}).$$ Furthermore, we partition $I_1$ into $r^*+1$ sub-intervals $I_1^{(0)},I_1^{(1)},\cdots,I_1^{(r^*)}$, such that for each $0\leq t\leq r^*$: $I_1^{(t)}\triangleq[\ell_3+\ell_2+\sum_{t+1\le i\le r^*}n_i~~,~~\ell_3+\ell_2+\sum_{t\le i\le r^*}n_i)$. Notice that $I_1^{(r^*)}=\left[\ell_3+\ell_2,\ell_3+\ell_2+n_{r^*}\right)$. In general, during the Linial phase, after $t$ rounds where $t\in[0,r^*]$, all vertices will have colors in the interval $I_1^{(t)}$. We now give the complete description of the Linial phase, which contains $r^*$ rounds. Recall that each vertex $v$ has a unique identity $id(v)\in[n]$; the initial color $\phi_0(v)$ of vertex $v$ is: $$\phi_0(v)\gets\ell_3+\ell_2+\sum_{i=1}^{r^*}n_i+id(v),$$ which satisfies $\phi_0(v)\in I_1^{(0)}$. In any round $1\leq t\leq r^*$, every vertex $v$ can correctly determine the value of $t$ by observing the value of $\phi_{t-1}(v)$. Let $\mathcal{F}_{t-1}$ be a $\Delta$-cover-free set system of size $|\mathcal{F}_{t-1}|=n_{t-1}$ with ground set $I_1^{(t)}$, whose existence is guaranteed by \Cref{thm:set-families}. The elements of $\mathcal{F}_{t-1}$ are $\mathcal{F}_{t-1}\triangleq\{S_{t-1}^{(k)}\mid\text{ integers } k\in I_1^{(t-1)}\}$. For any vertex $v$, the color $\phi_t(v)$ is set to be the smallest number in $S_{t-1}^{(\phi_{t-1}(v))}$, excluding all elements of $S_{t-1}^{(\phi_{t-1}(u))}$ for all $v$'s neighbors $u\in N(v)$. Due to the $\Delta$-cover-freeness of $\mathcal{F}_{t-1}$, such a color must exist. The pseudocode of the Linial phase is given in \Cref{alg:phase1}. \begin{algorithm}[t!]
\caption{The Linial phase at $v\in V$ in round $1\leq t\leq r^*$.}\label{alg:phase1} \begin{algorithmic}[1] \Statex /* Initialization: $\phi_0(v) \gets \ell_3+\ell_2+\sum_{i=1}^{r^*}n_i+id(v)$. */ \State Send $\phi_{t-1}(v)$ to all neighbors. \If {($\phi_{t-1}(v)\in I_1$)} \State Determine the value of $t$ based on $\phi_{t-1}(v)$. \If{($1\leq t\leq r^*$)} \State $\phi_t(v)\gets\min S_{t-1}^{(\phi_{t-1}(v))}\setminus\bigcup_{u\in N(v)\text{ and }\phi_{t-1}(u)\in I_1^{(t-1)}} S_{t-1}^{(\phi_{t-1}(u))}$. \EndIf \EndIf \end{algorithmic} \end{algorithm} Due to the above discussion, the following lemma, which is a stronger version of \Cref{lemma:phase1-property-simple}, is immediate by an induction on $t$. \begin{lemma}\label{lemma:phase1-property} For every $0\le t\le r^*$, the coloring $\phi_t$ is proper, and $\phi_t(v)\in I_1^{(t)}$ for every vertex $v\in V$. \end{lemma} \subsection{The standard reduction phase}\label{subsec:phase3} Once a vertex $v$ finishes the transition-out stage of the quadratic reduction phase, it will run the standard reduction phase, whose goal is to transform the color $\phi_{t^{\#}_v}(v)\in I_3$ to another color in $[\Delta+1]\subset I_3$, completing the task of $(\Delta+1)$-coloring. Here, $t^{\#}_v$ denotes the smallest round number such that $\phi_{t^{\#}_v}(v)\in I_3$. In other words, $t^{\#}_v+1$ is the first round in which vertex $v$ runs the standard reduction phase. For each vertex $v$, for each round $t\geq t^{\#}_v+1$ in its standard reduction phase, if every neighbor $u\in N(v)$ has also entered the standard reduction phase, and if vertex $v$ has the maximum color value in its one-hop neighborhood, then vertex $v$ will update its color to be the minimum value in $[\Delta+1]$ that has not been used by any of its neighbors. Clearly, such a color must exist. In all other cases, vertex $v$ keeps its color unchanged in round $t$. The pseudocode of this phase is given in \Cref{alg:phase3}. \begin{algorithm}[t!] \caption{The standard reduction phase at $v\in V$ in round $t$}\label{alg:phase3} \begin{algorithmic}[1] \State Send $\phi_{t-1}(v)$ to all neighbors. \If{($\phi_{t-1}(v)\in I_3$)} \If {(for every $u\in N(v)$ it holds that $\phi_{t-1}(u)\in I_3$)} \If {(for every $u\in N(v)$ it holds that $\phi_{t-1}(u)<\phi_{t-1}(v)$)} \State $\phi_t(v)\gets\min([\Delta+1]\setminus \{\phi_{t-1}(u)\mid u\in N(v)\})$. \EndIf \EndIf \EndIf \end{algorithmic} \end{algorithm} We now analyze the time cost of this phase, thereby also bounding the total runtime of our algorithm. \begin{lemma}\label{lemma:phase3-time-cost} Every vertex $v$ has its color in $[\Delta+1]$ within $r^*+1+3\lambda+(2\sqrt{m_3}+1)\mu$ rounds. \end{lemma} \begin{proof} By \Cref{lemma:phase2-property-simple}, by the end of round $r^*+2+3\lambda$, every vertex must have its color in $I_3$, and will start running \Cref{alg:phase3} in round $r^*+3+3\lambda$. Starting from round $r^*+3+3\lambda$, in each round, every vertex with the maximum color value in its one-hop neighborhood will change its color to another one in $[\Delta+1]$. In other words, starting from round $r^*+3+3\lambda$, in each round, the maximum value of the colors used by any vertex will be reduced by at least one. Recall that when phase three starts for a vertex, it has a color in interval $I_3=[0,\ell_3)$ with $\ell_3=\Delta+(2\sqrt{m_3}+1)\cdot\mu$. Therefore, by the end of round $r^*+2+3\lambda+(\ell_3-(\Delta+1)) = r^*+1+3\lambda+(2\sqrt{m_3}+1)\mu$, every vertex has its color in $[\Delta+1]$.
\end{proof} Next, we argue that the standard reduction phase always maintains a proper coloring. \begin{lemma}\label{lemma:phase3-proper-color} In every round $t\geq r^*+3+3\lambda$, for every pair of neighbors $u$ and $v$, it holds that $\phi_{t}(u)\neq\phi_{t}(v)$. \end{lemma} \begin{proof} Fix a vertex $v$. We prove the lemma by an induction on $t$. In the base case in which $t=r^*+3+3\lambda$, consider an arbitrary neighbor $u\in N(v)$. By \Cref{lemma:phase2-property-simple}, we have $\phi_{r^*+2+3\lambda}(v)\in I_3$, $\phi_{r^*+2+3\lambda}(u)\in I_3$, and $\phi_{r^*+2+3\lambda}(v)\neq\phi_{r^*+2+3\lambda}(u)$. By \Cref{alg:phase3}, in round $r^*+3+3\lambda$, at most one of $u,v$ will change its color, and the updated color of that vertex will not conflict with the other vertex. Hence, we have $\phi_{r^*+3+3\lambda}(u)\neq\phi_{r^*+3+3\lambda}(v)$. This completes the proof of the base case. Lastly, the inductive step can be proved by an argument similar to that of the base case. \end{proof} We conclude this part by noting that \Cref{lemma:phase3-time-cost} and \Cref{lemma:phase3-proper-color} together immediately give \Cref{lemma:phase3-property-simple}. \subsection{Proof of the main theorem (Theorem 1.3)}\label{subsec:alg-summary} By \Cref{lemma:phase3-time-cost}, every vertex has its color in $[\Delta+1]$ at the end of round $r^*+1+3\lambda+(2\sqrt{m_3}+1)\mu=O(\Delta^{3/4}\log{\Delta})+\log^*{n}$. By \Cref{lemma:phase2-property-simple}, every vertex has its color in $I_3$ by the end of round $r^*+2+3\lambda$, and runs the standard reduction phase ever since. Hence, by \Cref{alg:phase3}, for every round $t\geq r^*+1+3\lambda+(2\sqrt{m_3}+1)\mu$, every vertex's color will remain in $[\Delta+1]$ at the end of that round. On the other hand, combining \Cref{lemma:phase1-property-simple}, \Cref{lemma:phase2-property-simple}, and \Cref{lemma:phase3-property-simple}, we know the algorithm always maintains a proper coloring throughout its execution. This completes the proof of \Cref{thm:alg-main}. \section{The Quadratic Reduction Phase of the Algorithm}\label{sec:quadratic-reduction-phase} In this section, we describe the quadratic reduction phase and prove \Cref{lemma:phase2-property-simple}. The quadratic reduction phase is the most interesting and complicated component of our algorithm, and it contains two transition stages and one core stage. Once the transition-in stage is done, during the core stage, vertices work with colors in the interval $I_2=[\ell_3,\ell_3+\ell_2)$; then in the transition-out stage, vertices map colors from interval $I_2$ to interval $I_3=[0,\ell_3)$. Recall that we set $\ell_2=2\lambda^3(\mu+1)\cdot m_3$, hence for every color $(\ell_3+i)\in I_2$ where $i\in[\ell_2]$, we can use a unique quadruple $\langle a,b,c,d\rangle$ to identify it, where: \begin{align*} a &= \ftp{\frac{i}{2\lambda(\mu+1) m_3}},\\ b &= \ftp{\frac{i-a\cdot2\lambda(\mu+1)m_3}{2\lambda (\mu+1)}},\\ c &= \ftp{\frac{i-a\cdot2\lambda(\mu+1)m_3-b\cdot2\lambda(\mu+1)}{(\mu+1)}},\\ d &= i\bmod(\mu+1). \end{align*} In other words, $i= a\cdot2\lambda(\mu+1)m_3 + b\cdot2\lambda(\mu+1) + c\cdot(\mu+1) + d$. It is easy to verify that: $$a\in[\lambda^2]\text{, }b\in[m_3]\text{, }c\in[2\lambda]\text{, and }d\in[\mu+1].$$ In the remainder of this paper, for any round $t\geq r^*+1$, for any vertex $v$, if $\phi_t(v)\in I_2$, then we use $a_t(v),b_t(v),c_t(v),d_t(v)$ to denote the values of $a(v),b(v),c(v),d(v)$ in $\phi_t(v)$.
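The decomposition above is a mixed-radix encoding, and it is straightforward to check mechanically. The following short Python sketch (helper names and the toy parameter values are ours, for illustration only; they are unrelated to the concrete $\lambda,\mu,m_3$ defined in \Cref{sec:alg-framework}) converts between an offset $i\in[\ell_2]$ and the quadruple $\langle a,b,c,d\rangle$, and verifies that the round trip is exact:

\begin{verbatim}
def encode(a, b, c, d, lam, mu, m3):
    # i = a*2*lam*(mu+1)*m3 + b*2*lam*(mu+1) + c*(mu+1) + d
    return ((a * m3 + b) * 2 * lam + c) * (mu + 1) + d

def decode(i, lam, mu, m3):
    # Peel off the mixed-radix digits d, c, b, a in turn.
    i, d = divmod(i, mu + 1)
    i, c = divmod(i, 2 * lam)
    a, b = divmod(i, m3)
    return a, b, c, d  # a in [lam^2], b in [m3], c in [2 lam], d in [mu+1]

lam, mu, m3 = 5, 7, 11                 # toy values, for testing only
ell2 = 2 * lam**3 * (mu + 1) * m3      # size of the palette interval I_2
assert all(encode(*decode(i, lam, mu, m3), lam, mu, m3) == i
           for i in range(ell2))
\end{verbatim}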
Moreover, we often use $\ell_3+\langle a_t(v),b_t(v),c_t(v),d_t(v)\rangle$ to denote the color of $v$ at the end of round $t$. Representing colors in $I_2$ with four independent coordinates allows the algorithm to encode additional information in color values. In particular, recall that the core stage internally transforms a defective coloring to another arbdefective coloring. In our algorithm, if a vertex $v$ has a color $\phi(v)$ in $I_2$, then the first coordinate of this color---i.e., $a(v)$---encodes the color vertex $v$ has in the defective coloring (or the arbdefective coloring). To ensure $\phi$ is a proper coloring, we use various cover-free set systems to set distinct $b$ values for neighboring vertices that may have identical $a$ values. The end result is that for any pair of neighbors $u\neq v$, we always guarantee $\langle a(u),b(u)\rangle\neq\langle a(v),b(v)\rangle$. Indeed, the third coordinate $c$ is used to encode the ``orientation of edges'', which is helpful in constructing the arbdefective coloring; and the last coordinate $d$ is only used in the transition-out stage to help map colors from $I_2$ to $I_3$. The following lemmas highlight the key guarantees enforced by each stage. For the transition-in stage, we have the following lemma, where the property $|\{u\mid u\in N(v),\phi_{r^*+1}(u)\in I_2,a_{r^*+1}(u)=a_{r^*+1}(v)\}|\leq\Delta^{1/4}$ means the first coordinate of all vertices' colors (i.e., the $a$ values of all vertices) corresponds to a $\Delta^{1/4}$-defective coloring at the end of the transition-in stage. \begin{lemma}\label{lemma:phase2-stage1-property} By the end of round $r^*+1$, the coloring $\phi_{r^*+1}$ is proper, and $\phi_{r^*+1}(v)\in I_2$ for every $v\in V$. Moreover, for every $v\in V$, it holds that $|\{u\mid u\in N(v),\phi_{r^*+1}(u)\in I_2,a_{r^*+1}(u)=a_{r^*+1}(v)\}|\leq\Delta^{1/4}$. \end{lemma} Then, for the core stage, we have the following three lemmas. \Cref{lemma:phase2-stage2-proper-coloring} concerns correctness: it shows that vertices running the core stage always maintain a proper coloring with their $\langle a,b\rangle$ tuples. In fact, this lemma also covers the correctness for the majority of the transition-out stage, as for every vertex, in all but the last round of its transition-out stage, its $\langle a,b\rangle$ tuple remains unchanged. \begin{lemma}\label{lemma:phase2-stage2-proper-coloring} For every round $t\geq r^*+1$, let $V'_t$ be the set of vertices running the quadratic reduction phase in round $t$: $V'_t\triangleq\{v\mid \phi_t(v)\in I_2,v\in V\}$. Then, $\phi_t$ corresponds to a proper coloring for the subgraph $G'_t$ induced by the vertices in $V'_t$: for every $v\in V'_t$, it holds that $\phi_t(v)\notin\{\phi_t(u)\mid u\in N(v)\cap V'_t\}$. More precisely, if we regard the pair $\langle a_t(v), b_t(v)\rangle$ as the color of $v$, then this coloring is also proper in graph $G'_t$: for every $v\in V'_t$, it holds that $\langle a_t(v), b_t(v)\rangle \notin \{\langle a_t(u), b_t(u)\rangle \mid u\in N(v)\cap V'_t\}$. \end{lemma} \Cref{lemma:phase2-stage2-time-complexity} shows the time cost of the core stage is $(r^*+2+\lambda)-(r^*+1)=O(\Delta^{3/4}\log{\Delta})$ for every vertex, as in our algorithm, once a vertex finds its $a$ value in $[\lambda]$, its core stage is considered done. \begin{lemma}\label{lemma:phase2-stage2-time-complexity} For every vertex $v\in V$, let $t^*_v$ be the smallest round number such that $\phi_{t^*_v}(v)\in I_2$ and $a_{t^*_v}(v)\in[\lambda]$ are both satisfied.
Then, for every vertex $v\in V$, it holds that $t^*_v\leq r^*+2+\lambda$. \end{lemma} \Cref{lemma:phase2-stage2-bounded-arboricity} shows the $a$ values of the vertices running the core stage maintain a $(2\cdot\Delta^{1/4})$-arbdefective coloring. In particular, we use the $c$ values of vertices to determine the orientation of edges, hence bounding the arboricity of the subgraph induced by the vertices having identical $a$ value. Together with \Cref{lemma:phase2-stage1-property}, one can see that the $a$ values of vertices transform from a $\Delta^{1/4}$-defective $\lambda^2$-coloring (recall by definition $a\in[\lambda^2]$) to a $(2\cdot\Delta^{1/4})$-arbdefective $\lambda$-coloring (recall the core stage of a vertex ends when its $a\in[\lambda]$). \begin{lemma}\label{lemma:phase2-stage2-bounded-arboricity} For every round $t\geq r^*+1$, for every $v\in V$ with $\phi_t(v)\in I_2$, it holds that $|\{u\mid u\in N(v),\phi_t(u)\in I_2, a_t(u)=a_t(v), c_t(u)\leq c_t(v)\}|\leq 2\cdot\Delta^{1/4}$. \end{lemma} Lastly, for the transition-out stage, we have \Cref{lemma:phase2-stage3-time-cost} for bounding its time cost, and \Cref{lemma:phase2-stage3-proper-color} for showing its correctness. Notice that \Cref{lemma:phase2-stage2-time-complexity} and \Cref{lemma:phase2-stage3-time-cost} together show the time cost of the transition-out stage is $(r^*+2+3\lambda)-(r^*+2+\lambda)=O(\Delta^{3/4}\log{\Delta})$ for every vertex. \begin{lemma}\label{lemma:phase2-stage3-time-cost} For every vertex $v\in V$, let $t^{\#}_v$ be the smallest round number such that $\phi_{t^{\#}_v}(v)\in I_3$. Then, we have $t^{\#}_v \leq r^*+2+3\lambda$. \end{lemma} \begin{lemma}\label{lemma:phase2-stage3-proper-color} For every vertex $v\in V$, let $t^{\#}_v$ be the smallest round number such that $\phi_{t^{\#}_v}(v)\in I_3$. Then, we have $\phi_{t^{\#}_v}(v)\notin\{\phi_{t^{\#}_v}(u)\mid u\in N(v), \phi_{t^{\#}_v}(u)\in I_3\}$. \end{lemma} With the above lemmas, we can already prove \Cref{lemma:phase2-property-simple}. \begin{proof}[Proof of \Cref{lemma:phase2-property-simple}] By \Cref{lemma:phase2-stage3-time-cost}, we know that, by the end of round $r^*+2+3\lambda$, every vertex has completed the quadratic reduction phase and obtained a color in $I_3$. By the definition of $I_3$, we know $\phi_{r^*+2+3\lambda}$ is a $(\Delta+O(\Delta^{3/4}\log\Delta))$-coloring. Next, we prove that for every round $t\in[r^*+1,r^*+2+3\lambda]$, the coloring $\phi_{t}$ is proper. (Notice this implies $\phi_{r^*+2+3\lambda}$ is a proper coloring.) By \Cref{lemma:phase2-stage1-property}, we know that, by the end of round $r^*+1$, the coloring $\phi_{r^*+1}$ is proper. From round $r^*+2$, every vertex starts running the core stage of the quadratic reduction phase. For every vertex $v$, by \Cref{lemma:phase2-stage2-proper-coloring}, for every round $t\in[r^*+2,t^{\#}_v-1]$, its color $\phi_t(v)$ will not conflict with any of its neighbors. Then, in round $t^{\#}_v$, when vertex $v$ chooses its phase three color, by \Cref{lemma:phase2-stage3-proper-color}, we know $\phi_{t^{\#}_v}(v)$ will also not conflict with any of its neighbors. At this point, we have proved that, for every vertex $v$, for every round $t\in[r^*+1,t^{\#}_v]$, its color $\phi_t(v)$ will not conflict with any of its neighbors. Now consider a round $t\geq t^{\#}_v+1$, and a neighbor $u\in N(v)$. If $\phi_{t-1}(u)\in I_1$, then obviously $\phi_t(u)\notin I_3$ as $u$ cannot map a color from $I_1$ to $I_3$ in one round, implying $\phi_t(v)\neq\phi_t(u)$.
If $\phi_{t-1}(u)\in I_2$ and $\phi_{t}(u)\in I_2$, then trivially $\phi_t(v)\neq\phi_t(u)$, since $\phi_t(v)\in I_3$ and the intervals $I_2$ and $I_3$ are disjoint. If $\phi_{t-1}(u)\in I_2$ but $\phi_{t}(u)\in I_3$, then applying \Cref{lemma:phase2-stage3-proper-color} from the perspective of $u$, we still have $\phi_t(v)\neq\phi_t(u)$. If $\phi_{t-1}(u)\in I_3$, then by the standard reduction phase algorithm, in round $t$, at most one of $u,v$ will change its color, and the updated color of that vertex will not conflict with the other vertex. Once again, we have $\phi_t(v)\neq\phi_t(u)$. By now, we can conclude that, for every round $t\in[r^*+1,r^*+2+3\lambda]$, the coloring $\phi_{t}$ is proper. \end{proof} In the remainder of this section, we will describe each stage in detail, and prove \Cref{lemma:phase2-stage1-property} to \Cref{lemma:phase2-stage3-proper-color}. \subsection{Transition-in stage}\label{subsec:phase2-stage1} Recall that after the Linial phase, which contains $r^*$ rounds, vertices have a proper coloring with colors from interval $I_1^{(r^*)}=[\ell_3+\ell_2,\ell_3+\ell_2+n_{r^*})$. The first task of the quadratic reduction phase is to transform $\phi_{r^*}$ to a proper coloring with colors from the interval $I_2=[\ell_3,\ell_3+\ell_2)$. To that end, for each vertex $v$, in round $r^*+1$, it will compute $a(v)$ and $b(v)$ based on the colors of itself and its neighbors, and set $c(v)=0,d(v)=\mu$. The result is that a pair of neighboring vertices $u,v$ might have identical $a$ value, but the number of such collisions is limited in each neighborhood: at most $\Delta^{1/4}$, to be exact. This means the $a$ values of all vertices correspond to a $\Delta^{1/4}$-defective coloring. Moreover, in case collisions occur, the second coordinate of the colors of the two neighboring vertices---i.e., $b(u)$ and $b(v)$---would differ. Therefore, by the end of round $r^*+1$, the coloring $\phi_{r^*+1}$ will be proper, and every vertex's color will be in interval $I_2$. The following property informally summarizes the key guarantees provided by the transition-in stage. \begin{property}[\textbf{Transition-in Stage}]\label{property:phase2-stage1} During algorithm execution, at the end of round $r^*+1$, the coloring $\phi_{r^*+1}$ is proper and every vertex $v\in V$ has $\phi_{r^*+1}(v)\in I_2$. Moreover, the first coordinates of all vertices' colors correspond to a $\Delta^{1/4}$-defective coloring. \end{property} \paragraph{Detailed description.} In round $r^*+1$, to compute $a(v)$, we employ the defective coloring algorithm developed by Barenboim, Elkin, and Kuhn~\cite{barenboim14}, with suitable parameters tailored for our purpose. The core of this approach is the usage of $\Delta$-union-$(\rho+1)$-cover-free set systems. Recall from \Cref{def:cover-free-set-system-extension} that a $\Delta$-union-$(\rho+1)$-cover-free set system is a set system $\mathcal{F}$ such that for every $\Delta+1$ distinct sets $S_0,S_1,\cdots,S_{\Delta}\in\mathcal{F}$, it holds that there exists at least one element $x\in S_0$ that appears in at most $\rho$ sets among $S_1,S_2,\cdots,S_{\Delta}$. In round $r^*+1$, let $\mathcal{F}_a\triangleq\{S_a^{(i)}\mid \text{ integers }i\in I_1^{(r^*)}\}$ be a $\Delta$-union-$\left(\Delta^{1/4}+1\right)$-cover-free set family with $[m_1]$ as its ground set. Such $\mathcal{F}_a$ exists due to \Cref{thm:set-families-extension}. Recall that by the end of round $r^*$, for each vertex $v$, its color $\phi_{r^*}(v)\in I_1^{(r^*)}$, and $v$ will send $\phi_{r^*}(v)$ to all its neighbors. In round $r^*+1$, for each vertex $v$, it chooses $a(v)$ from $S_a^{(\phi_{r^*}(v))}$.
In particular, for every element $x\in S_a^{(\phi_{r^*}(v))}$, vertex $v$ computes the set of neighbors that also have $x$ in their respective $S_a^{(\cdot)}$ sets: $N'(v,x)\triangleq\{u\mid u\in N(v), \phi_{r^*}(u)\in I_1^{(r^*)}, x\in S_a^{(\phi_{r^*}(u))}\}$. Let $\hat{x}$ be the smallest element in $S_a^{(\phi_{r^*}(v))}$ satisfying $|N'(v,\hat{x})|\leq\Delta^{1/4}$; vertex $v$ then assigns $a(v)=\hat{x}\in[\lambda^2]$. Due to \Cref{def:cover-free-set-system-extension} and \Cref{thm:set-families-extension}, every vertex can find such $\hat{x}$ in round $r^*+1$. By the end of round $r^*+1$, vertex $v$'s $a(v)$ may collide with up to $\Delta^{1/4}$ of its neighbors, as $\mathcal{F}_a$ is a $\Delta$-union-$\left(\Delta^{1/4}+1\right)$-cover-free set family. To resolve this issue, we build another $\Delta^{1/4}$-cover-free set family to assign different $b$ values to these potentially colliding neighbors. Specifically, for each vertex $v$, let $N''(v)\triangleq\{u\mid u\in N(v),\phi_{r^*}(u)\in I_1^{(r^*)},a(v)\in S_a^{(\phi_{r^*}(u))}\}$ be the set of neighbors that might have colliding $a$ value. We know $|N''(v)|\leq\Delta^{1/4}$ due to the above discussion. Now, let $\mathcal{F}_b\triangleq\{S_b^{(i)}\mid \text{ integers }i\in I_1^{(r^*)}\}$ be a $\Delta^{1/4}$-cover-free set family with ground set $[m_2]\subseteq[m_3]$. Such $\mathcal{F}_b$ exists due to \Cref{thm:set-families}. Vertex $v$ assigns $b(v)$ to be an element in $S_b^{(\phi_{r^*}(v))}\setminus\bigcup_{u\in N''(v)} S_b^{(\phi_{r^*}(u))}$, which is guaranteed to exist due to \Cref{def:cover-free-set-system} and \Cref{thm:set-families}. \begin{algorithm}[t!] \caption{The transition-in stage of the quadratic reduction phase at $v\in V$ in round $t=r^*+1$}\label{alg:phase2-stage1} \begin{algorithmic}[1] \State Send $\phi_{t-1}(v)$ to all neighbors. \If {($\phi_{t-1}(v)\in I_1$)} \State Determine the value of $t$ based on $\phi_{t-1}(v)$. \If{($t=r^*+1$)} \For {(every element $x\in S_a^{(\phi_{t-1}(v))}$)} \State $N'(v,x) \gets \left\{u \mid u\in N(v), \phi_{t-1}(u)\in I_1^{(t-1)}, x\in S_a^{(\phi_{t-1}(u))} \right\}$. \EndFor \State $a_{t}(v)\gets\min\left\{ x \mid x\in S_a^{(\phi_{t-1}(v))}, \left|N'(v,x)\right|\leq\Delta^{1/4} \right\}$. \State $N''(v)\gets\left\{u\mid u\in N(v),\phi_{t-1}(u)\in I_1^{(t-1)},a_{t}(v)\in S_a^{(\phi_{t-1}(u))}\right\}$. \State $b_{t}(v)\gets\min S_b^{(\phi_{t-1}(v))}\setminus\bigcup_{u\in N''(v)}S_b^{(\phi_{t-1}(u))}$. \State $c_{t}(v)\gets 0$, $d_{t}(v)\gets\mu$. \State $\phi_{t}(v) \gets \ell_3 + \langle a_{t}(v), b_{t}(v), c_{t}(v), d_{t}(v) \rangle$. \EndIf \EndIf \end{algorithmic} \end{algorithm} The pseudocode of the transition-in stage is given in \Cref{alg:phase2-stage1}. \paragraph{Analysis.} Due to the discussion above, it is easy to see that $\phi_{r^*+1}$ is a proper coloring, and the $a_{r^*+1}$ values of all vertices correspond to a $\Delta^{1/4}$-defective coloring, immediately giving \Cref{lemma:phase2-stage1-property}. \subsection{Core stage}\label{subsec:phase2-stage2} Once the transition-in stage is done, the $a$ values of all vertices correspond to a $\Delta^{1/4}$-defective coloring, using $\lambda^2$ colors (recall that $a\in[\lambda^2]$). The main objective of the core stage is to start from this $\Delta^{1/4}$-defective $\lambda^2$-coloring to gradually obtain a $(2\cdot\Delta^{1/4})$-arbdefective $\lambda$-coloring.
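To make these two notions concrete, the following Python sketch (a toy triangle graph; purely illustrative and not part of the algorithm) checks $d$-defectiveness, and $d$-arbdefectiveness under an orientation derived from per-vertex ranks, mirroring how our algorithm uses the $c$ values:
\begin{verbatim}
# A coloring is d-defective if every vertex has at most d same-colored
# neighbors; it is d-arbdefective if the monochromatic edges can be
# oriented so that every vertex has out-degree at most d in its class.
def is_defective(adj, color, d):
    return all(sum(color[u] == color[v] for u in adj[v]) <= d
               for v in adj)

def is_arbdefective(adj, color, rank, d):
    # Orient a monochromatic edge (v, u) from v to u when rank[u] <= rank[v],
    # mirroring the role of the c values in the core stage.
    return all(sum(color[u] == color[v] and rank[u] <= rank[v]
                   for u in adj[v]) <= d
               for v in adj)

# A triangle with one repeated color: 1-defective, and 1-arbdefective
# under the given ranks.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
color = {0: 0, 1: 0, 2: 1}
rank = {0: 0, 1: 1, 2: 0}
assert is_defective(adj, color, 1)
assert is_arbdefective(adj, color, rank, 1)
\end{verbatim}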
Notice that this reduces the number of colors used---or more precisely, the range of the $a$ values of all vertices---from $[\lambda^2]$ to $[\lambda]$, i.e., a quadratic reduction from $O(\Delta^{3/2}\log^2\Delta)$ to $O(\Delta^{3/4}\log\Delta)$. Meanwhile, in exchange for this reduction, the number of potential collisions on the $a$ values among neighboring vertices increases. Recall that the arboricity of an undirected graph is the minimum number of forests into which its edges can be partitioned. We ensure the arboricity of the subgraph induced by each $a$ value is at most $2\cdot\Delta^{1/4}$ by guaranteeing that each vertex $v$ has at most $2\cdot\Delta^{1/4}$ outgoing edges satisfying the constraint that the other endpoint of each such edge is another vertex having the same $a$ value as $v$. As mentioned earlier, vertices use the $c$ values to implicitly determine the orientation of edges: vertex $v$ points to vertex $u$ if and only if $c(v)\geq c(u)$.\footnote{A technical detail worth clarifying: in case $c(u)=c(v)$, the orientation of edge $(u,v)$ can be determined by comparing $id(u)$ and $id(v)$. However, our algorithm does not require $v$ to know $id(u)$, or vice versa. Instead, when $c(u)=c(v)$, vertex $v$ treats $(u,v)$ as pointing to $u$, and $u$ treats $(u,v)$ as pointing to $v$. We shall show our algorithm still works under such interpretation.} Recall that we seek a locally-iterative coloring algorithm; thus, by the end of each round, the $\langle a,b,c,d\rangle$ quadruples need to correspond to a proper coloring. To this end, we use $(2\cdot\Delta^{1/4})$-cover-free set families to assign $b$ values to resolve collisions among neighbors. (The $d$ values are only used in the transition-out stage, and will remain $\mu$ throughout the core stage.) The end result is that during the core stage, the $\langle a,b\rangle$ pairs of all vertices always correspond to a proper coloring. Another characteristic of the core stage is that vertices may complete this stage at different times: once a vertex $v$ has $a_t(v)\in[\lambda]$ in some round $t\geq r^*+1$, its core stage is considered done, and it may proceed to the transition-out stage. As a result, starting from the core stage, vertices may proceed at different paces. Nonetheless, our algorithm still works under such ``asynchrony'', and is able to enforce an upper bound on the overall time complexity. The following property informally summarizes the key guarantees provided by the core stage. \begin{property}[\textbf{Core Stage}]\label{property:phase2-stage2} By the end of round $r^*+2+\lambda$, every vertex must have completed the core stage. For every round $t\in[r^*+2,r^*+2+\lambda]$, the coloring $\phi_t$ is proper. Moreover, in each such round, consider the subgraph induced by the vertices with color values in $I_2$: $G'_t=(V'_t,E'_t)$, where $V'_t\triangleq\{v\mid \phi_t(v)\in I_2,v\in V\}$, $E'_t\triangleq\{(u,v)\mid u\in V'_t, v\in V'_t, (u,v)\in E\}$. The $a$ values of the vertices in $V'_t$ correspond to a $(2\cdot\Delta^{1/4})$-arbdefective coloring of $G'_t$. \end{property} \paragraph{Detailed description.} We now describe the core stage in detail, which is inspired by the arbdefective locally-iterative algorithm developed by Barenboim, Elkin, and Goldenberg~\cite{barenboim21}.
During the entire quadratic reduction phase, for each round $t$ and each vertex $v$, if $\phi_{t}(v)\in I_2$, then the first coordinate of its color quadruple $a_t(v)$ can be interpreted in the following manner: $$a_t(v)=\hat{a}_t(v)\cdot\lambda+\tilde{a}_t(v)\text{, where }\hat{a}_t(v)=\lfloor a_t(v)/\lambda\rfloor\text{ and }\tilde{a}_t(v)=a_t(v)\bmod\lambda.$$ Our goal is to make a series of updates to $a(v)$ so that eventually $\hat{a}(v)=0$, thus reducing the range of $a(v)$ from $[\lambda^2]$ to $[\lambda]$. In fact, once $a(v)<\lambda$, the core stage of $v$ is considered done. On the other hand, for each vertex $v$, in each round $t$ in the core stage where $a_{t-1}(v)\geq\lambda$, it will count the number of neighbors that also have colors in interval $I_2$ and satisfy $a_{t-1}(u)\neq a_{t-1}(v)$ and $a_{t-1}(u)\equiv a_{t-1}(v)\pmod{\lambda}$. Denote this set of neighbors as: $$M_t(v)\triangleq\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)\neq \hat{a}_{t-1}(v),\tilde{a}_{t-1}(u)= \tilde{a}_{t-1}(v)\}.$$ If $|M_t(v)|>\Delta^{1/4}$, then $v$ updates $a_t(v)$ according to the following rule: $$a_t(v) \gets \hat{a}_{t-1}(v)\cdot\lambda + ((\hat{a}_{t-1}(v)+\tilde{a}_{t-1}(v))\bmod\lambda).$$ Moreover, vertex $v$ keeps its $b,c,d$ values unchanged. Otherwise, if $|M_t(v)|\leq\Delta^{1/4}$, then $v$ updates $a_t(v)$ to be $a_{t-1}(v)\bmod\lambda$, or equivalently: $$a_t(v) \gets \tilde{a}_{t-1}(v).$$ Notice this step reduces the range of $a(v)$ from $[\lambda^2]$ to $[\lambda]$. Vertex $v$ will also set $c_t(v)$ to be larger than $c_{t-1}(u)$, for all $u\in M'_t(v)$: specifically, $c_t(v)\gets 1+\max_{u\in M'_t(v)}\{c_{t-1}(u)\}$. Here, $M'_t(v)$ denotes the set of neighbors of $v$ that also have color in interval $I_2$ and satisfy $\lfloor a_{t-1}(u)/\lambda\rfloor=0$ and $a_{t-1}(u)\equiv a_{t-1}(v)\pmod{\lambda}$. In other words: $$M'_t(v)\triangleq\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)=0,\tilde{a}_{t-1}(u)=\tilde{a}_{t-1}(v)\}.$$ Also, since the maximum $c$ value attained by any vertex can increase by at most one in each round, and since every vertex will reduce its $a$ value to $[\lambda]$ by the end of round $r^*+2+\lambda$ (this is stated in \Cref{property:phase2-stage2} and formally proved in \Cref{lemma:phase2-stage2-time-complexity}), we know the value of $c$ will never exceed $\lambda+1$. To ensure $\phi_t$ is a proper coloring in the $|M_t(v)|\leq\Delta^{1/4}$ scenario, vertex $v$ uses a $(2\cdot\Delta^{1/4})$-cover-free set family to assign its $b$ value. This technique was introduced in \cite{barenboim21} and referred to as ``Algorithm Excl-Linial'' in that paper; it can be seen as a variant of Linial's algorithm in that each vertex has a ``forbidden color list''. More specifically, in our setting, recall that $\tau$ is a prime satisfying $2\cdot\Delta^{1/4}\log(\lambda^2 m_2)<\tau\leq2\cdot(2\cdot\Delta^{1/4}\log(\lambda^2 m_2))$. We construct a $(2\cdot\Delta^{1/4})$-cover-free set family in the following manner. For every integer $i\in[\lambda^2 m_2]$, we associate a unique polynomial $P_i$ of degree $\log(\lambda^2 m_2)$ over finite field $GF(\tau)$ to it. (This is possible since the number of such polynomials is at least $(\tau-1)^{1+\log(\lambda^2 m_2)}>\lambda^2 m_2$.) Let $\mathcal{F}_c\triangleq\{S_c^{(0)},S_c^{(1)},\cdots,S_c^{(\lambda^2 m_2-1)}\}$ be a set family of size $\lambda^2 m_2$, where $S_c^{(i)}\triangleq\{x\cdot \tau + P_i(x)\mid x\in[\tau]\}$ for every $i\in[\lambda^2 m_2]$.
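Before verifying its properties, the following toy instantiation of this construction (a small prime and an illustrative degree standing in for $\log(\lambda^2 m_2)$; not the parameters used by the algorithm) checks the intersection bound that underlies the cover-free property:
\begin{verbatim}
# Polynomial-based set family over GF(tau): index i gets the polynomial
# P_i whose coefficients are the base-tau digits of i, and the set
# S_i = { x*tau + P_i(x) : x in [tau] }.  Two distinct polynomials of
# degree <= deg agree on at most deg points, bounding the intersections.
tau = 7   # a small prime, illustrative only
deg = 2   # stands in for log(lambda^2 * m_2)

def poly_set(i):
    coeffs = [(i // tau**k) % tau for k in range(deg + 1)]
    def P(x):
        return sum(c * x**k for k, c in enumerate(coeffs)) % tau
    return {x * tau + P(x) for x in range(tau)}

sets = [poly_set(i) for i in range(tau**(deg + 1))]
assert all(len(sets[i] & sets[j]) <= deg
           for i in range(20) for j in range(i))
\end{verbatim}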
Since the degree of the polynomials $P_i$ is $\log(\lambda^2 m_2)$, the intersection of any two sets in $\mathcal{F}_c$ contains at most $\log(\lambda^2 m_2)$ elements. Since every set in $\mathcal{F}_c$ contains $\tau>2\cdot\Delta^{1/4}\log(\lambda^2 m_2)$ elements, we know $\mathcal{F}_c$ is $(2\cdot\Delta^{1/4})$-cover-free. Now, in a round where $|M_t(v)|\leq\Delta^{1/4}$, recall that vertex $v$ updates $a_t(v)$ to be $\tilde{a}_{t-1}(v)$. After this update, $v$'s $a$ value may collide with the $a$ values of the vertices in $M'_t(v)$, as well as the $a$ values of the vertices in $$\overline{M}'_t(v)\triangleq\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)\neq0,\tilde{a}_{t-1}(u)=\tilde{a}_{t-1}(v)\}.$$ We will show $|M'_t(v)\cup\overline{M}'_t(v)|\leq 2\cdot\Delta^{1/4}$, hence vertex $v$ can use the smallest element in the following set to be its $b_t(v)$: $$S_c^{(a_{t-1}(v)\cdot m_2 + b_{t-1}(v))}\setminus \left( \{b_{t-1}(u)\mid u\in M'_{t}(v)\} \cup \bigcup_{u\in \overline{M}'_{t}(v)} S_c^{(a_{t-1}(u)\cdot m_2+b_{t-1}(u))} \right).$$ There are several points worth clarifying regarding the above expression. First, the indices $a_{t-1}(v)\cdot m_2 + b_{t-1}(v)$ and $a_{t-1}(u)\cdot m_2 + b_{t-1}(u)$ in the above expression are valid. To see this, notice that when the transition-in stage is done, according to our discussion in \Cref{subsec:phase2-stage1}, each vertex's $b$ value is in $[m_2]$. Hence, when vertex $v$ reduces its $a$ value from $[\lambda^2]$ to $[\lambda]$ in round $t$, we have $a_{t-1}(v)\cdot m_2 + b_{t-1}(v)\in[\lambda^2 m_2]$. Moreover, for each vertex $u\in\overline{M}'_{t}(v)$, by the definition of $\overline{M}'_{t}(v)$ and the above algorithm description, its value of $b$ has not changed since the transition-in stage ended (otherwise it would be the case $\hat{a}_{t-1}(u)=0$), thus the value of $b_{t-1}(u)$ must be in $[m_2]$. Therefore, for each vertex $u\in\overline{M}'_{t}(v)$, we also have $a_{t-1}(u)\cdot m_2 + b_{t-1}(u)\in[\lambda^2 m_2]$. Second, the above expression gives a non-empty set. To see this, notice that by definition any $S^{(\cdot)}_c$ contains at least $\tau>2\cdot\Delta^{1/4}\log(\lambda^2 m_2)$ elements, and we are eliminating at most $2\cdot\Delta^{1/4}\log(\lambda^2 m_2)$ elements from it with the expression after the set-minus symbol, as $|M'_t(v)\cup\overline{M}'_t(v)|\leq 2\cdot\Delta^{1/4}$. Lastly, after vertex $v$ updates its $b$ value, we have $b_t(v)\in[m_3]$. This is because $b_t(v)$ is drawn from $S_c^{(a_{t-1}(v)\cdot m_2 + b_{t-1}(v))}$, which by definition only contains elements in $[\tau^2]\subseteq[m_3]$. \begin{algorithm}[t!] \caption{The core stage of the quadratic reduction phase at $v\in V$ in round $t\geq r^*+2$}\label{alg:phase2-stage2} \begin{algorithmic}[1] \State Send $\phi_{t-1}(v)$ to all neighbors. \If{($\phi_{t-1}(v)\in I_2$ \textbf{and} $a_{t-1}(v)\geq\lambda$)} \State $M_t(v)\gets\left\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)\neq \hat{a}_{t-1}(v),\tilde{a}_{t-1}(u)= \tilde{a}_{t-1}(v)\right\}$. \State $\overline{M}_t(v)\gets\left\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)=\hat{a}_{t-1}(v),\tilde{a}_{t-1}(u)=\tilde{a}_{t-1}(v)\right\}$. \If{($|M_t(v)|\leq\Delta^{1/4}$)} \State $M'_{t}(v) \gets \left\{ u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)=0,\tilde{a}_{t-1}(u)=\tilde{a}_{t-1}(v) \right\}$. \State $\overline{M}'_{t}(v) \gets \left\{ u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)\neq 0,\tilde{a}_{t-1}(u)=\tilde{a}_{t-1}(v) \right\}$.
\State $a_t(v) \gets \tilde{a}_{t-1}(v)$.\label{alg-line:phase2-stage2-a-update-rule-reduce} \State $b_t(v) \gets \min S_c^{(a_{t-1}(v)\cdot m_2 + b_{t-1}(v))}\setminus \left( \{b_{t-1}(u)\mid u\in M'_{t}(v)\} \cup \bigcup_{u\in \overline{M}'_{t}(v)} S_c^{(a_{t-1}(u)\cdot m_2+b_{t-1}(u))} \right)$. \State $c_t(v) \gets 1+\max\{c_{t-1}(u)\mid u\in M'_{t}(v)\}$. \Else \State $a_t(v) \gets \hat{a}_{t-1}(v)\cdot\lambda + ((\hat{a}_{t-1}(v)+\tilde{a}_{t-1}(v))\bmod\lambda)$.\label{alg-line:phase2-stage2-a-update-rule} \EndIf \State $\phi_{t}(v) \gets \ell_3 + \langle a_t(v), b_t(v), c_t(v), d_t(v) \rangle$. \EndIf \end{algorithmic} \end{algorithm} The pseudocode of the core stage is given in \Cref{alg:phase2-stage2}. \paragraph{Analysis.} We now formally prove the correctness of the core stage and analyze its time complexity. Specifically, the core stage lasts at most $\lambda+1$ rounds and eventually produces a $(2\cdot\Delta^{1/4})$-arbdefective $\lambda$-coloring with the $a$ values of the vertices. Moreover, during this stage, the $\langle a,b\rangle$ pairs of all vertices always maintain a proper coloring. We first show the following claim is true as it will be frequently used later. \begin{claim}\label{claim:phase2-stage2_last_round_equal_a} For every round $t\geq r^*+2$, for every pair of neighboring vertices $u$ and $v$, if $\phi_t(u), \phi_t(v)\in I_2$ and $a_t(u)=a_t(v)\geq\lambda$, then $\phi_{t-1}(u), \phi_{t-1}(v) \in I_2$ and $a_{t-1}(u)=a_{t-1}(v)\geq\lambda$. \end{claim} \begin{proof} Since $t-1\geq r^*+1$, vertex $v$ cannot be in the Linial phase in round $t-1$. In such a scenario, by our algorithm, if $\phi_{t-1}(v)\notin I_2$, then it cannot be the case that $\phi_{t}(v)\in I_2$. Hence, if $\phi_t(v)\in I_2$, then $\phi_{t-1}(v)\in I_2$; similarly, if $\phi_t(u)\in I_2$, then $\phi_{t-1}(u)\in I_2$. Moreover, $u$ and $v$ must both be executing \Cref{alg:phase2-stage2} in round $t$. Next, we prove that $a_t(u)=a_t(v)\geq\lambda$ implies $a_{t-1}(u)=a_{t-1}(v)$. For the sake of contradiction, assume $a_{t-1}(u)\neq a_{t-1}(v)$. Since $a_t(u)=a_t(v)\geq\lambda$, in round $t$, both $u$ and $v$ update $a$ using the rule in \cref{alg-line:phase2-stage2-a-update-rule} of \Cref{alg:phase2-stage2}. If $a_{t-1}(u)\neq a_{t-1}(v)$, then either $\hat{a}_{t-1}(u)\neq \hat{a}_{t-1}(v)$ or $\tilde{a}_{t-1}(u)\neq \tilde{a}_{t-1}(v)$. In case $\hat{a}_{t-1}(u)\neq \hat{a}_{t-1}(v)$, assume $\hat{a}_{t-1}(u)<\hat{a}_{t-1}(v)$ without loss of generality. Then after the update, we have $a_t(u)\leq\hat{a}_{t-1}(u)\cdot\lambda+(\lambda-1)<\hat{a}_{t-1}(v)\cdot\lambda\leq a_t(v)$, meaning $a_t(u)\neq a_t(v)$, resulting in a contradiction. Otherwise, in case $\hat{a}_{t-1}(u)=\hat{a}_{t-1}(v)$ and $\tilde{a}_{t-1}(u)\neq\tilde{a}_{t-1}(v)$, then $\tilde{a}_t(u)=((\hat{a}_{t-1}(u)+\tilde{a}_{t-1}(u))\bmod\lambda)\neq((\hat{a}_{t-1}(v)+\tilde{a}_{t-1}(v))\bmod\lambda)=\tilde{a}_t(v)$. Once again, we have $a_t(u)\neq a_t(v)$, resulting in a contradiction. By now, we conclude $a_{t-1}(u)=a_{t-1}(v)$. Lastly, notice that if both $u$ and $v$ execute \Cref{alg:phase2-stage2} in round $t$, and if $a_{t-1}(u)=a_{t-1}(v)<\lambda$, then it must be the case that $a_t(u)=a_t(v)<\lambda$, violating the assumption of the claim. Hence, we know $a_{t-1}(u)=a_{t-1}(v)\geq\lambda$. \end{proof} Next, we bound the size of the set $M'_t(v)\cup\overline{M}'_t(v)$; this bound is crucial for showing that the $\langle a,b\rangle$ pairs of vertices maintain a proper coloring during the core stage of the quadratic reduction phase.
\begin{claim}\label{claim:phase2-stage2_collisions_bound} For every round $t\geq r^*+2$, for every vertex $v$, if $\phi_{t-1}(v)\in I_2$ and $a_{t-1}(v)\geq\lambda$ and $|M_t(v)|\leq\Delta^{1/4}$, then $|M'_t(v)\cup\overline{M}'_t(v)|\leq 2\cdot\Delta^{1/4}$. \end{claim} \begin{proof} Since $t\geq r^*+2$, $\phi_{t-1}(v)\in I_2$, and $a_{t-1}(v)\geq\lambda$, vertex $v$ executes \Cref{alg:phase2-stage2} in round $t$. Define $\overline{M}_t(v)\triangleq\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)=\hat{a}_{t-1}(v),\tilde{a}_{t-1}(u)=\tilde{a}_{t-1}(v)\}$. Notice that by definition $M_t(v)\cup\overline{M}_t(v)=M'_t(v)\cup\overline{M}'_t(v)$, thus we only need to prove $|M_t(v)\cup\overline{M}_t(v)|\leq 2\cdot\Delta^{1/4}$. Recall the claim's assumption that $|M_t(v)|\leq\Delta^{1/4}$; thus, we focus on showing $|\overline{M}_t(v)|\leq\Delta^{1/4}$. For each vertex $u\in\overline{M}_t(v)$, by the definition of $\overline{M}_t(v)$, we have $u\in N(v)$, $\phi_{t-1}(u)\in I_2$, and $a_{t-1}(u)=a_{t-1}(v)\geq\lambda$. By repeatedly applying \Cref{claim:phase2-stage2_last_round_equal_a}, we conclude that $a_{r^*+1}(u)=a_{r^*+1}(v)$; that is, when the transition-in stage is done, $u$ and $v$ have identical $a$ value. By \Cref{lemma:phase2-stage1-property}, when the transition-in stage is done, the number of neighbors of $v$ that have the same $a$ value as $v$ cannot exceed $\Delta^{1/4}$. Therefore, $|\overline{M}_t(v)|\leq\Delta^{1/4}$. This completes the proof of the claim. \end{proof} We are now ready to prove that $\phi$---or more precisely, the $\langle a,b\rangle$ pairs of vertices---maintains a proper coloring for vertices with colors in interval $I_2$.\footnote{As we shall see, during the transition-out stage, the algorithm will not alter vertices' $a,b,c$ values in their color quadruples. Hence, at this point, we can already argue that the algorithm maintains a proper coloring for vertices with colors in interval $I_2$ throughout their second phase.} \begin{proof}[Proof of \Cref{lemma:phase2-stage2-proper-coloring}] We prove the lemma by induction on $t$. For the base case $t=r^*+1$, by \Cref{lemma:phase2-stage1-property}, we know $V'_{r^*+1}=V$ and $\phi_{r^*+1}$ corresponds to a proper coloring. Since $c_{r^*+1}(v)=0,d_{r^*+1}(v)=\mu$ for every vertex $v$, we further conclude the $\langle a,b\rangle$ pairs of all vertices correspond to a proper coloring. Assume the claim holds for round $t=i$, where $i\geq r^*+1$; we now consider round $t=i+1$. In round $i+1$, a vertex $v\in V'_{i+1}$ running the quadratic reduction phase may have $a_{i}(v)\in [\lambda,\lambda^2)$ or $a_{i}(v)\in [0,\lambda)$. In the former case, $v$ may update its $a$ value from $[\lambda,\lambda^2)$ to $[\lambda,\lambda^2)$ or reduce its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$ in round $i+1$. In the latter case, $v$ leaves its $a$ and $b$ values unchanged in round $i+1$. We consider these three scenarios separately. \textsc{Scenario I:} vertex $v$ running \Cref{alg:phase2-stage2} updates its $a$ value from $[\lambda,\lambda^2)$ to $[\lambda,\lambda^2)$ in round $i+1$. For any vertex $u\in N(v)\cap V'_{i+1}$ with $a_{i+1}(u)\neq a_{i+1}(v)$, the claim holds trivially. On the other hand, by \Cref{claim:phase2-stage2_last_round_equal_a}, any vertex $u\in N(v)\cap V'_{i+1}$ with $a_{i+1}(u)=a_{i+1}(v)\geq\lambda$ has $u\in N(v)\cap V'_{i}$ and $a_{i}(u)=a_{i}(v)\geq\lambda$ as well. By the induction hypothesis, we have $b_{i}(u)\neq b_{i}(v)$.
By \Cref{alg:phase2-stage2}, vertices $u$ and $v$ both update $a$ from $[\lambda,\lambda^2)$ to $[\lambda,\lambda^2)$ in round $i+1$. Moreover, we have $b_{i+1}(u)=b_{i}(u)$ and $b_{i+1}(v)=b_i(v)$, implying $b_{i+1}(u)\neq b_{i+1}(v)$. Hence, the $\langle a,b\rangle$ pairs of all vertices in $V'_{i+1}$ correspond to a proper coloring of $G'_{i+1}$. \textsc{Scenario II:} vertex $v$ running \Cref{alg:phase2-stage2} reduces its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$ in round $i+1$. For any vertex $u\in N(v)\cap V'_{i+1}$ with $a_{i+1}(u)\neq a_{i+1}(v)$, the claim holds trivially. So consider a vertex $u\in N(v)\cap V'_{i+1}$ with $a_{i+1}(u)=a_{i+1}(v)<\lambda$. By \Cref{alg:phase2-stage2}, vertex $u$ either: (a) satisfies $a_i(u)<\lambda$ and does not change its $a,b$ values in round $i+1$; or (b) reduces its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$ in round $i+1$. In both cases, it is easy to verify that $\tilde{a}_{i}(u)=\tilde{a}_{i}(v)$ must hold. This implies $u\in M'_{i+1}(v)\cup\overline{M}'_{i+1}(v)$. Now, since $v$ reduces its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$ in round $i+1$, the condition $|M_{i+1}(v)|\leq\Delta^{1/4}$ must be satisfied in round $i+1$. Hence, by \Cref{claim:phase2-stage2_collisions_bound} and the method we used to update $b(v)$, it holds that $b_{i+1}(v)\neq b_{i+1}(u)$ for any $u\in M'_{i+1}(v)\cup\overline{M}'_{i+1}(v)$. \textsc{Scenario III:} vertex $v$ running \Cref{alg:phase2-stage2} leaves its $a$ value and $b$ value unchanged in round $i+1$ since $a_i(v)\in[\lambda]$. For any vertex $u\in N(v)\cap V'_{i+1}$ with $a_{i+1}(u)\neq a_{i+1}(v)$, the claim holds trivially. So consider a vertex $u\in N(v)\cap V'_{i+1}$ with $a_{i+1}(u)=a_{i+1}(v)<\lambda$. By \Cref{alg:phase2-stage2}, vertex $u$ either: (a) satisfies $a_i(u)<\lambda$ and does not change its $a,b$ values in round $i+1$; or (b) reduces its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$ in round $i+1$. In the former case, we know $a_i(v)=a_{i+1}(v)=a_{i+1}(u)=a_i(u)$. By the induction hypothesis, we know $b_i(u)\neq b_i(v)$. Since $a_i(v)=a_i(u)<\lambda$, by \Cref{alg:phase2-stage2}, we conclude $b_{i+1}(u)=b_i(u)\neq b_i(v)=b_{i+1}(v)$. This completes the proof of the inductive step for case (a). Next, consider case (b), in which $u$ reduces its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$ in round $i+1$. From the perspective of $u$, by an analysis similar to \textsc{Scenario II}, we know $v\in M'_{i+1}(u)\cup\overline{M}'_{i+1}(u)$. Moreover, by \Cref{claim:phase2-stage2_collisions_bound} and the method we used to update $b(u)$, it holds that $b_{i+1}(u)\neq b_{i+1}(v)$ for any $v\in M'_{i+1}(u)\cup\overline{M}'_{i+1}(u)$. This completes the proof of the inductive step for case (b). \end{proof} We continue to show that the core stage internally maintains a $(2\cdot\Delta^{1/4})$-arbdefective coloring with the $a$ values of the vertices. As mentioned earlier, we ensure the arboricity of the subgraph induced by each $a$ value attained by some vertex is at most $2\cdot\Delta^{1/4}$ by orienting edges in $G$, and guarantee that each vertex $v$ has at most $2\cdot\Delta^{1/4}$ outgoing edges satisfying the constraint that the other endpoint of each such edge is another vertex having the same $a$ value as $v$. More specifically, we use the $c$ values of the vertices to implicitly determine the orientation of edges: vertex $v$ points to vertex $u$ if and only if $c(v)\geq c(u)$.
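Before proceeding, the following toy simulation (a small prime and hand-picked coordinates; illustrative only) demonstrates the number-theoretic fact behind the rotation rule in \cref{alg-line:phase2-stage2-a-update-rule}, which drives the time bound proved later in \Cref{lemma:phase2-stage2-time-complexity}: two neighbors whose colors rotate at distinct speeds $\hat{a}$ share the same $\tilde{a}$ value in at most one round out of any $\lambda$ consecutive rounds.
\begin{verbatim}
# Toy simulation of a <- a_hat*lam + ((a_hat + a_til) % lam):
# a_hat stays fixed and acts as the speed at which a_til advances.
lam = 5  # a small prime; the algorithm's lambda is far larger

def rotate(a_hat, a_til):
    return a_hat, (a_hat + a_til) % lam

u = (2, 1)  # (a_hat, a_til) of vertex u
v = (4, 3)  # vertex v rotates at a different speed
collisions = 0
for _ in range(lam):
    u, v = rotate(*u), rotate(*v)
    collisions += (u[1] == v[1])
# With lam prime and distinct speeds, at most one collision can occur.
assert collisions <= 1
\end{verbatim}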
To simplify presentation, for each vertex $v$, in a round $t$ during its core stage, we use $A_t(v)$ to denote the set of vertices that $v$ points to: $$A_t(v)\triangleq\{u\mid u\in N(v),\phi_t(u)\in I_2, a_t(u)=a_t(v), c_t(u)\leq c_t(v)\}.$$ We are ready to show that the core stage internally maintains a $(2\cdot\Delta^{1/4})$-arbdefective coloring with the $a$ values of the vertices (i.e., \Cref{lemma:phase2-stage2-bounded-arboricity}). This proof employs a strategy similar to that of Lemma 6.2 in \cite{barenboim21}. \begin{proof}[Proof of \Cref{lemma:phase2-stage2-bounded-arboricity}] For every vertex $v\in V$, let $t^*_v$ be the smallest round number such that $\phi_{t^*_v}(v)\in I_2$ and $a_{t^*_v}(v)\in[\lambda]$ are both satisfied. Fix some round $\hat{t}\geq r^*+1$; we prove the lemma by considering two complementary scenarios: either $\hat{t}<t^*_v$ or $\hat{t}\geq t^*_v$. \textsc{Scenario I:} $\hat{t}<t^*_v$. In this scenario, for every round $t\in[r^*+1,\hat{t}]$, the value of $a_t(v)$ is at least $\lambda$. We shall prove a superset of $A_{\hat{t}}(v)$ is of size at most $\Delta^{1/4}$. Specifically, we claim the size of $\{u\mid u\in N(v),\phi_{\hat{t}}(u)\in I_2,a_{\hat{t}}(u)=a_{\hat{t}}(v)\}$ is at most $\Delta^{1/4}$. To see this, choose an arbitrary vertex $u\in\{u\mid u\in N(v),\phi_{\hat{t}}(u)\in I_2,a_{\hat{t}}(u)=a_{\hat{t}}(v)\}$. Since $\phi_{\hat{t}}(u),\phi_{\hat{t}}(v)\in I_2$ and $a_{\hat{t}}(u)=a_{\hat{t}}(v)\geq\lambda$, by repeatedly applying \Cref{claim:phase2-stage2_last_round_equal_a}, we know $\phi_{r^*+1}(u),\phi_{r^*+1}(v)\in I_2$ and $a_{r^*+1}(u)=a_{r^*+1}(v)\geq\lambda$. Due to \Cref{lemma:phase2-stage1-property}, we know the number of neighbors of $v$ satisfying $a_{r^*+1}(u)=a_{r^*+1}(v)$ cannot exceed $\Delta^{1/4}$. Therefore, $|A_{\hat{t}}(v)|\leq|\{u\mid u\in N(v),\phi_{\hat{t}}(u)\in I_2,a_{\hat{t}}(u)=a_{\hat{t}}(v)\}|\leq\Delta^{1/4}$, as required. \textsc{Scenario II:} $\hat{t}\geq t^*_v$. In this scenario, we prove the claim by induction, starting from round $t^*_v$. Consider round $t^*_v$: if $t^*_v=r^*+1$, then due to \Cref{lemma:phase2-stage1-property}, $|A_{t^*_v}(v)|\leq\Delta^{1/4}$, as required. Otherwise, we have $t^*_v>r^*+1$, implying $v$ runs \Cref{alg:phase2-stage2} in round $t^*_v$. By the definition of $t^*_v$, vertex $v$ reduces its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$ in round $t^*_v$. Hence, by \Cref{alg:phase2-stage2}, $|M_{t^*_v}(v)|\leq\Delta^{1/4}$. Next, we argue $|\overline{M}_{t^*_v}(v)|\leq\Delta^{1/4}$. To see this, choose an arbitrary vertex $u\in\overline{M}_{t^*_v}(v)$. By the definition of $\overline{M}_{t^*_v}(v)$, we know $a_{t^*_v-1}(u)=a_{t^*_v-1}(v)\geq\lambda$. By repeatedly applying \Cref{claim:phase2-stage2_last_round_equal_a}, we know $\phi_{r^*+1}(u),\phi_{r^*+1}(v)\in I_2$ and $a_{r^*+1}(u)=a_{r^*+1}(v)\geq\lambda$. Due to \Cref{lemma:phase2-stage1-property}, we know the number of neighbors of $v$ satisfying $a_{r^*+1}(u)=a_{r^*+1}(v)$ cannot exceed $\Delta^{1/4}$. Therefore, $|\overline{M}_{t^*_v}(v)|\leq\Delta^{1/4}$. At this point, we conclude $|A_{t^*_v}(v)|\leq|\{u\mid u\in N(v),\phi_{t^*_v}(u)\in I_2,a_{t^*_v}(u)=a_{t^*_v}(v)\}|\leq|M_{t^*_v}(v)\cup\overline{M}_{t^*_v}(v)|\leq 2\cdot\Delta^{1/4}$. This completes the proof of the base case. Assume $|A_{i}(v)|\leq 2\cdot\Delta^{1/4}$ holds for round $i\geq t^*_v$; consider round $i+1$. Since $i\geq t^*_v$, we have $a_{i}(v)\in [0,\lambda)$.
Thus in round $i+1\geq r^*+2$, by \Cref{alg:phase2-stage2}, vertex $v$ does not change its $a,b,c$ values. Particularly, $a_{i+1}(v)=a_i(v)$ and $c_{i+1}(v)=c_i(v)$. On the other hand, for any vertex $u\in N(v)$ satisfying $\phi_{i+1}(u)\in I_2$ and $a_{i+1}(u)=a_{i+1}(v)<\lambda$, by the definition of $t^*_u$, it holds that $t^*_u\leq i+1$. If $t^*_u<i+1$, then we have $a_{i+1}(u)=a_{i}(u)$ and $c_{i+1}(u)=c_{i}(u)$. Thus, in the case $t^*_u< i+1$, if $u\in A_{i+1}(v)$ then $u\in A_{i}(v)$. Otherwise, consider the case $t^*_u=i+1$. Since $t^*_u=i+1\geq r^*+2$, in round $i+1$, vertex $u$ runs \Cref{alg:phase2-stage2} and reduces its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$. By the way \Cref{alg:phase2-stage2} updates vertices' $c$ values, it must be the case that $c_{i+1}(u)> c_{i+1}(v)$. Thus, in the case $t^*_u=i+1$, vertex $u\notin A_{i+1}(v)$. At this point, we can conclude $A_{i+1}(v)\subseteq A_{i}(v)$. By the induction hypothesis, $|A_{i+1}(v)|\leq 2\cdot\Delta^{1/4}$. This completes the proof of the inductive step. \end{proof} We conclude this part by bounding the duration of the core stage: starting from round $r^*+1$, within $\lambda+2=O(\Delta^{3/4}\log\Delta)$ rounds, all vertices have their $a$ values in $[\lambda]$. Recall this guarantee is summarized in \Cref{lemma:phase2-stage2-time-complexity}, and its proof is similar to that of Lemma 6.1 in \cite{barenboim21}. \begin{proof}[Proof of \Cref{lemma:phase2-stage2-time-complexity}]\label{proof:lemma:phase2-stage2-time-complexity} By \Cref{lemma:phase2-stage1-property}, every vertex $v\in V$ has $\phi_{r^*+1}(v)\in I_2$. If vertex $v$ has $a_{r^*+1}(v)\in[0,\lambda)$, then trivially $t^*_v=r^*+1$ and we are done. Otherwise, vertex $v$ has $a_{r^*+1}(v)\in [\lambda,\lambda^2)$, and runs \Cref{alg:phase2-stage2} from round $r^*+2$ to $t^*_v$ (both inclusive). To bound the value of $t^*_v$ when $a_{r^*+1}(v)\in [\lambda,\lambda^2)$, consider a vertex $u\in N(v)$ such that $\phi_{r^*+1}(u)\in I_2$. \smallskip Our first claim is that if $a_{r^*+1}(u)\neq a_{r^*+1}(v)$, then in rounds $[r^*+2,\min\{r^*+1+\lambda,t^*_v-1\}]$, there are at most two rounds such that $u$ and $v$ both have their colors in $I_2$ and have identical $\tilde{a}$ value (by the end of those rounds). To prove this claim, consider three scenarios depending on the value of $t^*_u$. \textsc{Scenario I:} $t^*_u=r^*+1$. Consider a round $t\in[r^*+2,\min\{r^*+1+\lambda,t^*_v-1\}]$. Since $t\leq t^*_v-1$, vertex $v$ updates its $a$ value in rounds $r^*+2$ to $t$ (both inclusive) using \cref{alg-line:phase2-stage2-a-update-rule} of \Cref{alg:phase2-stage2}. This implies $\tilde{a}_t(v)=(\tilde{a}_{r^*+1}(v)+(t-r^*-1)\cdot\hat{a}_{r^*+1}(v))\bmod\lambda$. Since $t^*_u=r^*+1$, in all rounds from $r^*+2$ to $t$ (both inclusive) in which $\phi(u)\in I_2$ (at the beginning of those rounds), $a(u)$ always equals $a_{r^*+1}(u)\in[\lambda]$ (at the end of those rounds). In particular, $\hat{a}_t(u)=0$ and $\tilde{a}_t(u)=\tilde{a}_{r^*+1}(u)$. Now, to satisfy $\tilde{a}_t(v)=\tilde{a}_t(u)$, the following equality must hold: $$(t-r^*-1)\cdot\hat{a}_{r^*+1}(v)+(\tilde{a}_{r^*+1}(v)-\tilde{a}_{r^*+1}(u))\equiv 0 \pmod \lambda.$$ Recall that we have assumed $a_{r^*+1}(u)\neq a_{r^*+1}(v)$, also recall that $\hat{a}_{r^*+1}(u)=0\neq\hat{a}_{r^*+1}(v)$, so in the above expression $\tilde{a}_{r^*+1}(v)$ may or may not equal $\tilde{a}_{r^*+1}(u)$.
Nonetheless, recall that $\hat{a}_{r^*+1}(v)$, $\tilde{a}_{r^*+1}(v)$, $\tilde{a}_{r^*+1}(u)$ are all in $[\lambda]$, and that $\lambda$ is a prime number; hence, there is at most one choice of $t\in[r^*+2,\min\{r^*+1+\lambda,t^*_v-1\}]$ that satisfies the above expression. \textsc{Scenario II:} $t^*_u\geq t^*_v$. Consider a round $t\in[r^*+2,\min\{r^*+1+\lambda,t^*_v-1\}]$. Since $t\leq t^*_v-1$, vertex $v$ updates its $a$ value in rounds $r^*+2$ to $t$ (both inclusive) using \cref{alg-line:phase2-stage2-a-update-rule} of \Cref{alg:phase2-stage2}. This implies $\tilde{a}_t(v)=(\tilde{a}_{r^*+1}(v)+(t-r^*-1)\cdot\hat{a}_{r^*+1}(v))\bmod\lambda$. Since $t\leq t^*_v-1\leq t^*_u-1$, we can similarly conclude $\tilde{a}_t(u)=(\tilde{a}_{r^*+1}(u)+(t-r^*-1)\cdot\hat{a}_{r^*+1}(u))\bmod\lambda$. Now, to satisfy $\tilde{a}_t(v)=\tilde{a}_t(u)$, the following equality must hold: $$(t-r^*-1)\cdot (\hat{a}_{r^*+1}(v)-\hat{a}_{r^*+1}(u))+(\tilde{a}_{r^*+1}(v)-\tilde{a}_{r^*+1}(u))\equiv 0 \pmod \lambda.$$ Recall that we have assumed $a_{r^*+1}(u)\neq a_{r^*+1}(v)$, also recall that $\hat{a}_{r^*+1}(v)$, $\hat{a}_{r^*+1}(u)$, $\tilde{a}_{r^*+1}(v)$, $\tilde{a}_{r^*+1}(u)$ are all in $[\lambda]$ and $\lambda$ is a prime number. If $\hat{a}_{r^*+1}(v)=\hat{a}_{r^*+1}(u)$ and $\tilde{a}_{r^*+1}(v)\neq\tilde{a}_{r^*+1}(u)$, then the above expression cannot be satisfied. Otherwise, if $\hat{a}_{r^*+1}(v)\neq\hat{a}_{r^*+1}(u)$, then there is at most one choice of $t\in[r^*+2,\min\{r^*+1+\lambda,t^*_v-1\}]$ that satisfies the above expression. \textsc{Scenario III:} $r^*+2\leq t^*_u\leq t^*_v-1$. In this scenario, in rounds $r^*+2$ to $\min\{r^*+1+\lambda,t^*_u-1\}$ (both inclusive), by an argument similar to \textsc{Scenario II}, there is at most one round in which $u$ and $v$ both have their colors in $I_2$ and have identical $\tilde{a}$ value (by the end of that round). In round $t^*_u$, the value of $a(u)$ reduces from $[\lambda,\lambda^2)$ to $[0,\lambda)$. We consider two cases: either $\tilde{a}_{t^*_u}(v)=\tilde{a}_{t^*_u}(u)$ or $\tilde{a}_{t^*_u}(v)\neq\tilde{a}_{t^*_u}(u)$. \begin{itemize} \item If $\tilde{a}_{t^*_u}(v)=\tilde{a}_{t^*_u}(u)$, then $t^*_u$ is another round in which $u$ and $v$ both have their colors in $I_2$ and have identical $\tilde{a}$ value (by the end of that round). Next, consider a round $t\in[t^*_u+1,\min\{r^*+1+\lambda,t^*_v-1\}]$. Since $t\leq t^*_v-1$, vertex $v$ updates its $a$ value in rounds $[t^*_u+1,t]$ using \cref{alg-line:phase2-stage2-a-update-rule} of \Cref{alg:phase2-stage2}. This implies $\tilde{a}_t(v)=(\tilde{a}_{t^*_u}(v)+(t-t^*_u)\cdot\hat{a}_{t^*_u}(v))\bmod\lambda$. Since $t\geq t^*_u+1$, in all rounds $[t^*_u+1,t]$ in which $\phi(u)\in I_2$, $a(u)$ always equals $a_{t^*_u}(u)\in[\lambda]$. In particular, $\hat{a}_t(u)=0$ and $\tilde{a}_t(u)=\tilde{a}_{t^*_u}(u)$. If $\tilde{a}_t(v)=\tilde{a}_t(u)$ is satisfied, then it must be the case: $$(t-t^*_u)\cdot\hat{a}_{t^*_u}(v)+(\tilde{a}_{t^*_u}(v)-\tilde{a}_{t^*_u}(u))\equiv 0 \pmod \lambda.$$ Recall that $\tilde{a}_{t^*_u}(v)=\tilde{a}_{t^*_u}(u)$ and $\hat{a}_{t^*_u}(v)\neq 0$. Since $\lambda$ is a prime number, the above expression can only be satisfied when $t-t^*_u\equiv 0 \pmod \lambda$. However, since $r^*+2\leq t^*_u$ and $t\in[t^*_u+1,\min\{r^*+1+\lambda,t^*_v-1\}]$, we know $1\leq t-t^*_u\leq \lambda-1$, implying $t-t^*_u\equiv 0 \pmod \lambda$ cannot be satisfied.
At this point, we conclude that if $\tilde{a}_{t^*_u}(v)=\tilde{a}_{t^*_u}(u)$, then in rounds $[r^*+2,\min\{r^*+1+\lambda,t^*_v-1\}]$, there are at most two rounds in which $u$ and $v$ both have their colors in $I_2$ and have identical $\tilde{a}$ value (by the end of those rounds). \item If $\tilde{a}_{t^*_u}(v)\neq\tilde{a}_{t^*_u}(u)$, then consider a round $t\in[t^*_u+1,\min\{r^*+1+\lambda,t^*_v-1\}]$. By an analysis identical to the above paragraph, we know if $\tilde{a}_t(v)=\tilde{a}_t(u)$ is satisfied, then it must be the case: $$(t-t^*_u)\cdot\hat{a}_{t^*_u}(v)+(\tilde{a}_{t^*_u}(v)-\tilde{a}_{t^*_u}(u))\equiv 0 \pmod \lambda.$$ Recall that $\tilde{a}_{t^*_u}(v)\neq\tilde{a}_{t^*_u}(u)$ and $\hat{a}_{t^*_u}(v)\neq 0$. Since $\lambda$ is a prime number, and since $1\leq t-t^*_u\leq \lambda-1$, we know there is at most one choice of $t\in[t^*_u+1,\min\{r^*+1+\lambda,t^*_v-1\}]$ that satisfies the above expression. At this point, we conclude that if $\tilde{a}_{t^*_u}(v)\neq\tilde{a}_{t^*_u}(u)$, then in rounds $[r^*+2,\min\{r^*+1+\lambda,t^*_v-1\}]$, there are at most two rounds in which $u$ and $v$ both have their colors in $I_2$ and have identical $\tilde{a}$ value (by the end of those rounds). \end{itemize} By now, we have proved our first claim. That is, for any pair of vertices $u,v$ such that $u\in N(v)$ and $\phi_{r^*+1}(u)\in I_2, \phi_{r^*+1}(v)\in I_2$, if $a_{r^*+1}(u)\neq a_{r^*+1}(v)\in[\lambda,\lambda^2)$, then in rounds $r^*+2$ to $\min\{r^*+1+\lambda,t^*_v-1\}$ (both inclusive), there are at most two rounds such that $u$ and $v$ both have their colors in $I_2$ and have identical $\tilde{a}$ value (by the end of those rounds). \smallskip Our second claim is that for any pair of vertices $u,v$ such that $u\in N(v)$ and $\phi_{r^*+1}(u)\in I_2, \phi_{r^*+1}(v)\in I_2$, if $a_{r^*+1}(u)=a_{r^*+1}(v)\in[\lambda,\lambda^2)$, then in rounds $r^*+2$ to $\min\{r^*+1+\lambda,t^*_v-1\}$ (both inclusive), there are at most two rounds such that by the end of each such round, $u$ and $v$ both have their colors in $I_2$, $u$ and $v$ have identical $\tilde{a}$ value but different $a$ values. To prove this claim, consider two complementary scenarios depending on the value of $t^*_u$. In scenario one, in which $t^*_u\geq t^*_v$, by \Cref{alg:phase2-stage2}, for any round $t\in[r^*+2,\min\{r^*+1+\lambda,t^*_v-1\}]$, we have $\phi_{t}(u),\phi_{t}(v)\in I_2$ and $a_t(u)=a_t(v)$. The second scenario concerns the case $t^*_u< t^*_v$. In this scenario, by \Cref{alg:phase2-stage2}, for any round $t\in[r^*+2,t^*_u-1]$, we have $\phi_{t}(u),\phi_{t}(v)\in I_2$ and $a_t(u)=a_t(v)$. Then, in round $t^*_u$, vertex $u$ reduces its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$, whereas $v$ keeps its $a$ value in $[\lambda,\lambda^2)$. Thus, by \Cref{alg:phase2-stage2}, $\phi_{t^*_u}(u),\phi_{t^*_u}(v)\in I_2$, $a_{t^*_u}(u)\neq a_{t^*_u}(v)$, but $\tilde{a}_{t^*_u}(u)=\tilde{a}_{t^*_u}(v)$. Lastly, for every round $t\in[t^*_u+1,\min\{r^*+1+\lambda,t^*_v-1\}]$, by an argument identical to \textsc{Scenario III} in the proof of the preceding claim, we know $\tilde{a}_{t}(u)\neq\tilde{a}_{t}(v)$. This completes the proof of our second claim.
\smallskip Combining the two claims, we conclude that, for any pair of neighbors $u,v$ such that $\phi_{r^*+1}(u)$ and $\phi_{r^*+1}(v)$ are both in $I_2$, in rounds $r^*+2$ to $\min\{r^*+1+\lambda,t^*_v-1\}$ (both inclusive), there are at most two rounds such that by the end of each such round, $u$ and $v$ both have their colors in $I_2$, $u$ and $v$ have identical $\tilde{a}$ value but different $a$ values. Now, since vertex $v$ has at most $\Delta$ neighbors, and since $\lambda\geq 2\cdot\Delta^{3/4}$, by the pigeonhole principle, we know starting from round $r^*+2$, within $\lambda$ rounds, there must exist a round $t$ in which, by the end of that round, the number of neighbors of $v$ satisfying both $a_t(u)\neq a_t(v)$ and $\tilde{a}_t(u)=\tilde{a}_t(v)$ is at most $\Delta^{1/4}$. In other words, in round $t+1$, we have $|M_{t+1}(v)|\leq\Delta^{1/4}$. As a result, by \Cref{alg:phase2-stage2}, at the end of round $t+1\leq r^*+2+\lambda$, we have $a_{t+1}(v)\in[\lambda]$. This completes the proof of the lemma. \end{proof} \subsection{Transition-out stage}\label{subsec:phase2-stage3} The core stage uses the $\langle a,b\rangle$ pairs of vertices to maintain a proper coloring throughout, but internally its primary goal is to produce a $(2\cdot\Delta^{1/4})$-arbdefective $O(\Delta^{3/4}\log\Delta)$-coloring using the $a$ values of vertices' color quadruples. Once the core stage is done, vertices will run the transition-out stage to produce a proper $(\Delta+O(\Delta^{3/4}\log\Delta))$-coloring. In particular, in the transition-out stage, each vertex will map its color from interval $I_2=[\ell_3,\ell_3+\ell_2)$ to another color in interval $I_3=[0,\ell_3)$. Recall that vertices may end the core stage at different times, as vertices may reduce their $a$ values to interval $[\lambda]$ at different times. As a result, vertices may start the transition-out stage in different rounds. Special care has to be taken to ensure correctness under such asynchrony. Particularly, in our algorithm, the $a$ values of the vertices are also used to determine the order in which vertices execute the transition-out stage. It is also worth noting that the approach we took during the transition-out stage is inspired by the technique developed by Barenboim~\cite{barenboim16sublinear}. Nonetheless, adjustments to both the implementation and the analysis have to be made, as in this paper we are considering the more restrictive locally-iterative setting, and have to take asynchrony into consideration. The following property informally summarizes the key guarantees provided by the transition-out stage. \begin{property}[\textbf{Transition-out Stage}]\label{property:phase2-stage3} By the end of round $r^*+2+3\lambda$, every vertex must have completed the transition-out stage, and $\phi_{r^*+2+3\lambda}$ corresponds to a proper $(\Delta+O(\Delta^{3/4}\log\Delta))$-coloring. Moreover, for every round $t\in[r^*+3+\lambda,r^*+2+3\lambda]$, the coloring $\phi_t$ is proper. \end{property} \paragraph{Detailed description.} In a round $t$, for a vertex $v\in V$ with $\phi_{t-1}(v)\in I_2$ and $a_{t-1}(v)\in[\lambda]$, if $a_{t-1}(v)\leq a_{t-1}(u)<\lambda$ is satisfied for every $u\in N(v)$ with $\phi_{t-1}(u)\in I_2$, and if $d_{t-1}(v)=\mu$ (recall that $d(v)$ always equals $\mu$ during the core stage), then $v$ uses this round to update $d(v)$, in preparation for the transition. In particular, vertex $v$ considers a family of $\mu$ polynomials $P_{(t-1,v,0)}$, $P_{(t-1,v,1)}$, $\cdots$, $P_{(t-1,v,\mu-1)}$ over finite field $GF(\mu)$.
For any $i\in[\mu]$, we define: $$P_{(t-1,v,i)}(x) \triangleq (\lfloor b_{t-1}(v)/\tau \rfloor \cdot x^2 + (b_{t-1}(v)\bmod\tau)\cdot x + i)\bmod\mu.$$ Notice the core stage ensures $b_{t-1}(v)\in[\tau^2]$. Next, we define $L_{(t-1,i)}(v)$ and $L_{t-1}(v)$: $$L_{(t-1,i)}(v)\triangleq\{P_{(t-1,v,i)}(x)+x\cdot\mu\mid x\in[\mu]\}\text{~~and~~}L_{t-1}(v)\triangleq\{\phi_{t-1}(u)\mid u\in N(v),\phi_{t-1}(u)\in I_3\}.$$ In other words, $L_{t-1}(v)$ contains the phase three colors that are already occupied by the neighbors of $v$. With $L_{(t-1,i)}(v)$ and $L_{t-1}(v)$, vertex $v$ sets $d_t(v)$ to be an integer $\hat{i}\in[\mu]$ that minimizes $|L_{(t-1,i)}(v)\cap L_{t-1}(v)|$. Since $|L_{t-1}(v)|\leq\Delta$, and since $L_{(t-1,i)}(v)\cap L_{(t-1,i')}(v)=\emptyset$ for any $i\neq i'$, by the pigeonhole principle, we have $|L_{(t-1,\hat{i})}(v)\cap L_{t-1}(v)|\leq\Delta/\mu$. Once a vertex $v$ sets $d(v)$ to a value other than $\mu$, its preparation for the transition is done, and it will attempt to map its current color in $I_2$ to a color in $I_3$. Specifically, in a round $t$, for a vertex $v\in V$ with $\phi_{t-1}(v)\in I_2$, it will update its color to $I_3$ if the following conditions are met: (a) $a_{t-1}(v)\leq a_{t-1}(u)<\lambda$ is satisfied for every $u\in N(v)$ with $\phi_{t-1}(u)\in I_2$; (b) $d_{t-1}(v)\neq\mu$; and (c) $d_{t-1}(u)\neq\mu$ is satisfied for every $u\in A_{t-1}(v)$. (Recall that $A_{t-1}(v)\triangleq\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2, a_{t-1}(u)=a_{t-1}(v), c_{t-1}(u)\leq c_{t-1}(v)\}$.) The update rule is as follows: let $\hat{k}\in[\mu]$ be the smallest integer satisfying: $$P_{(t-1,v,d_{t-1}(v))}(\hat{k}) + \mu\cdot{\hat{k}} \quad\in\quad L_{(t-1,d_{t-1}(v))}(v) \setminus \left( L_{t-1}(v) \cup \bigcup_{u\in A_{t-1}(v)} L_{(t-1,d_{t-1}(u))}(u) \right),$$ then set $\phi(v)$ to be: $$\phi_t(v) \gets P_{(t-1,v,d_{t-1}(v))}(\hat{k}) + \mu\cdot{\hat{k}}.$$ We will show that $\phi_t(v)$ exists, and that it is indeed a color in $I_3$. \begin{algorithm}[t!] \caption{The transition-out stage of the quadratic reduction phase at $v\in V$ in round $t$}\label{alg:phase2-stage3} \begin{algorithmic}[1] \State Send $\phi_{t-1}(v)$ to all neighbors. \If{($\phi_{t-1}(v)\in I_2$ \textbf{and} $a_{t-1}(v)<\lambda$)}\label{alg-line:phase2-stage3-if-cond-1} \If{(every $u\in N(v)$ with $\phi_{t-1}(u)\in I_2$ has $a_{t-1}(v) \leq a_{t-1}(u)<\lambda$)}\label{alg-line:phase2-stage3-if-cond-2} \State $A_{t-1}(v)\gets\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2, a_{t-1}(u)=a_{t-1}(v), c_{t-1}(u)\leq c_{t-1}(v)\}$. \For{(every $i\in[\mu]$)} \State $L_{(t-1,i)}(v)\gets\{ (\lfloor{b_{t-1}(v)/\tau}\rfloor\cdot{x^2}+(b_{t-1}(v)\bmod\tau)\cdot{x}+i)\bmod\mu + x\cdot\mu \mid x\in[\mu] \}$. \EndFor \State $L_{t-1}(v)\gets\{\phi_{t-1}(u)\mid u\in N(v),\phi_{t-1}(u)\in I_3\}$. \If{($d_{t-1}(v)=\mu$)}\label{alg-line:phase2-stage3-if-cond-3} \State Let $\hat{i}$ be an integer in $[\mu]$ that minimizes $|L_{(t-1,i)}(v)\cap L_{t-1}(v)|$. \State $d_t(v)\gets\hat{i}$.\label{alg-line:intermediate-to-reduction-d-rule} \State $\phi_t(v) \gets \ell_3 + \langle a_{t}(v),b_{t}(v),c_{t}(v),d_{t}(v)\rangle$. \ElsIf{(every $u\in A_{t-1}(v)$ has $d_{t-1}(u)\neq\mu$)}\label{alg-line:phase2-stage3-if-cond-4} \State Let $\hat{k}\in[\mu]$ be the smallest integer satisfying:\label{alg-line:phase2-stage3-k-rule} \Statex \hspace{12ex} \begin{small} $P_{(t-1,v,d_{t-1}(v))}(\hat{k}) + \mu\cdot{\hat{k}} \in L_{(t-1,d_{t-1}(v))}(v) \setminus \left( L_{t-1}(v) \cup \bigcup_{u\in A_{t-1}(v)} L_{(t-1,d_{t-1}(u))}(u) \right)$.
\end{small} \State $\phi_t(v) \gets P_{(t-1,v,d_{t-1}(v))}(\hat{k}) + \mu\cdot{\hat{k}}$. \EndIf \EndIf \EndIf \end{algorithmic} \end{algorithm} The complete pseudocode of the transition-out stage is given in \Cref{alg:phase2-stage3}. \paragraph{Analysis.} We now argue the correctness and the time cost of the transition-out stage, and we begin by showing that the parameter $\hat{k}$ defined in \cref{alg-line:phase2-stage3-k-rule} of \Cref{alg:phase2-stage3} must exist and is of bounded value. \begin{claim}\label{claim:phase2-stage3-bounded-k} For any vertex $v\in V$, let $t^+_v$ be the smallest round number such that $\phi_{t^+_v}(v)\in I_2$ and $d_{t^+_v}(v)\neq\mu$. For any round $t\geq t^+_v+1$, if $\phi_{t-1}(v)\in I_2$, then the following set is non-empty: $$L_{(t-1,d_{t-1}(v))}(v) \setminus \left( L_{t-1}(v) \cup \bigcup_{u\in\{w\mid w\in A_{t-1}(v),d_{t-1}(w)\neq\mu\}}L_{(t-1,d_{t-1}(u))}(u) \right).$$ Let $\hat{k}\in[\mu]$ be the smallest integer such that $P_{(t-1,v,d_{t-1}(v))}(\hat{k})+\mu\cdot{\hat{k}}$ is in the above set, then: $$0\leq\hat{k} \leq \Delta/\mu + 4\cdot\Delta^{1/4}.$$ \end{claim} \begin{proof} Before diving into the details, we outline the high-level proof strategy. Recalling the claim statement, for ease of presentation, we define: \begin{align*} \mathfrak{A}_{\phantom{0}} & \triangleq L_{(t-1,d_{t-1}(v))}(v),\\ \mathfrak{B}_1 & \triangleq L_{t-1}(v),\\ \mathfrak{B}_2 & \triangleq \bigcup_{u\in\{w\mid w\in A_{t-1}(v),d_{t-1}(w)\neq\mu\}}L_{(t-1,d_{t-1}(u))}(u). \end{align*} To prove $\mathfrak{A}\setminus(\mathfrak{B}_1\cup\mathfrak{B}_2)\neq\emptyset$, we will show $|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|\leq|\mathfrak{A}\cap\mathfrak{B}_1|+|\mathfrak{A}\cap\mathfrak{B}_2|<|\mathfrak{A}|$, as $\mathfrak{A}\setminus(\mathfrak{B}_1\cup\mathfrak{B}_2)=\mathfrak{A}\setminus(\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2))$. On the other hand, recall that $L_{(t-1,d_{t-1}(v))}(v)$, or $\mathfrak{A}$ equivalently, denotes the set $\{P_{(t-1,v,d_{t-1}(v))}(x)+x\cdot\mu\mid x\in[\mu]\}$. Moreover, the value of $P_{(t-1,v,d_{t-1}(v))}(x)+x\cdot\mu$ strictly increases as $x$ increases. As a result, to find $\hat{k}\in[\mu]$, which is the smallest integer such that $P_{(t-1,v,d_{t-1}(v))}(\hat{k})+\mu\cdot{\hat{k}}$ is in $\mathfrak{A}\setminus(\mathfrak{B}_1\cup\mathfrak{B}_2)$, it suffices to bound the size of the set $\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)$. In particular, $\hat{k}$ is at most the $(|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|+1)$-th smallest element in $[\mu]$; in other words, $\hat{k}\leq|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|$. To sum up, to prove the claim, it suffices to show $|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)| \leq \Delta/\mu+4\cdot\Delta^{1/4}$, since by then we can conclude: (a) by definition $\Delta/\mu+4\cdot\Delta^{1/4}<\mu$, thus $|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|<\mu=|\mathfrak{A}|$, implying $\mathfrak{A}\setminus(\mathfrak{B}_1\cup\mathfrak{B}_2)\neq\emptyset$; and (b) $\hat{k}\leq|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|\leq\Delta/\mu+4\cdot\Delta^{1/4}$. We now proceed to prove $|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)| \leq \Delta/\mu+4\cdot\Delta^{1/4}$, and we do so by bounding the size of $\mathfrak{A}\cap\mathfrak{B}_1$ and $\mathfrak{A}\cap\mathfrak{B}_2$. Consider a vertex $v$ and a round $t\geq t^+_v+1$ with $\phi_{t-1}(v)\in I_2$.
By the definition of $t^+_v$, the definition of $L_{(t-1,d_{t-1}(v))}(v)$, and the algorithm description, it holds that: $$L_{(t-1,d_{t-1}(v))}(v)=L_{\left(t-1,d_{t^+_v}(v)\right)}(v)=L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v).$$ As a result: $$|\mathfrak{A}\cap\mathfrak{B}_1| = |L_{(t-1,d_{t-1}(v))}(v) \cap L_{t-1}(v)| = \left| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap L_{t-1}(v) \right|.$$ Observe that as time proceeds from round $t^+_v$ to round $t$, more and more neighbors of $v$ may have completed the transition-out stage and started running the third phase; in other words, $L_{t-1}(v)$ may grow as $t$ increases. More precisely, we have: \begin{align*} |\mathfrak{A}\cap\mathfrak{B}_1| &= \left| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap L_{t-1}(v) \right| \\ &= \left| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap L_{t^+_v-1}(v) \right| + \left| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap \left(L_{t-1}(v) \setminus L_{t^+_v-1}(v)\right) \right| \\ &\leq \Delta/\mu + \left| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap \left(L_{t-1}(v) \setminus L_{t^+_v-1}(v)\right) \right| , \end{align*} where the last inequality is due to the fact that $| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap L_{t^+_v-1}(v) |\leq\Delta/\mu$ (recall we have argued why this is the case when describing \Cref{alg:phase2-stage3}). On the other hand, notice that: \begin{align*} |\mathfrak{A}\cap\mathfrak{B}_2| &= \left| L_{(t-1,d_{t-1}(v))}(v) \cap \left(\bigcup_{u\in\{w\mid w\in A_{t-1}(v),d_{t-1}(w)\neq\mu\}}L_{(t-1,d_{t-1}(u))}(u)\right) \right| \\ &\leq \sum_{u\in\{w\mid w\in A_{t-1}(v),d_{t-1}(w)\neq\mu\}}|L_{(t-1,d_{t-1}(v))}(v)\cap L_{(t-1,d_{t-1}(u))}(u)|. \end{align*} Recall that $L_{(t-1,d_{t-1}(v))}(v)=\{P_{(t-1,v,d_{t-1}(v))}(x)+x\cdot\mu\mid x\in[\mu]\}$. For an element to be in both $L_{(t-1,d_{t-1}(v))}(v)$ and $L_{(t-1,d_{t-1}(u))}(u)$, it must be the case that $P_{(t-1,u,d_{t-1}(u))}(x)=P_{(t-1,v,d_{t-1}(v))}(x)$ for some $x\in[\mu]$. Recall that $P_{(t-1,v,d_{t-1}(v))}(x) = (\lfloor b_{t-1}(v)/\tau \rfloor \cdot x^2 + (b_{t-1}(v)\bmod\tau)\cdot x + d_{t-1}(v))\bmod\mu$ is a polynomial of degree (at most) two defined over finite field $GF(\mu)$. Since $u\in A_{t-1}(v)$, it must be the case that $b_{t-1}(u)\neq b_{t-1}(v)$, implying $P_{(t-1,v,d_{t-1}(v))}(x)$ and $P_{(t-1,u,d_{t-1}(u))}(x)$ are two distinct polynomials of degree (at most) two. Hence, there are at most two choices of $x\in[\mu]$ satisfying $P_{(t-1,u,d_{t-1}(u))}(x)=P_{(t-1,v,d_{t-1}(v))}(x)$, implying $|L_{(t-1,d_{t-1}(v))}(v)\cap L_{(t-1,d_{t-1}(u))}(u)|\leq 2$. As a result, we conclude: $$|\mathfrak{A}\cap\mathfrak{B}_2| \leq 2\cdot|\{u\mid u\in A_{t-1}(v),d_{t-1}(u)\neq\mu\}| \leq 2\cdot|A_{t-1}(v)|,$$ which leads to the following upper bound on $|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|$: $$|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)| \leq \Delta/\mu + \left| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap \left(L_{t-1}(v) \setminus L_{t^+_v-1}(v)\right) \right| + 2\cdot|A_{t-1}(v)|.$$ To bound the above expression, consider a neighbor $u$ of $v$ with $\phi_{t^+_v-1}(u)\in I_2$. It cannot be the case that $a_{t^+_v-1}(v)>a_{t^+_v-1}(u)$, since by the definition of $t^+_v$ we have $d_{t^+_v}(v)\neq\mu$, yet by \Cref{alg:phase2-stage3} updating $d(v)$ in round $t^+_v$ requires $a_{t^+_v-1}(v)\leq a_{t^+_v-1}(u)$.
If $a_{t^+_v-1}(v)<a_{t^+_v-1}(u)$, then by \Cref{alg:phase2-stage3}, vertex $u$ will not start the transition to phase three until the transition of vertex $v$ is done; thus the behavior of $u$ will not change the above upper bound of $|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|$ in rounds $[t^+_v,t]$, as in round $t$ we still have $\phi_t(v)\in I_2$ (meaning by the end of round $t$ vertex $v$ has not completed the transition-out stage). Now let us focus on the case $a_{t^+_v-1}(v)=a_{t^+_v-1}(u)$. If $c_{t^+_v-1}(v)<c_{t^+_v-1}(u)$, then $u\notin A_{t^+_v-1}(v)$. Notice that by the definition of $A_{t-1}(v)$ we have $A_{t^+_v-1}(v)=A_{t-1}(v)$, thus the behavior of $u$ will not change $|A_{t-1}(v)|$ in rounds $[t^+_v,t]$. On the other hand, if indeed $u$ obtains its phase three color in some round in $[t^+_v,t]$, then by \cref{alg-line:phase2-stage3-k-rule} of \Cref{alg:phase2-stage3}, when $u$ chooses its color, it will avoid all colors that might be used by $v$. This means the phase three color used by $u$ will not appear in $L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v)$, implying the behavior of $u$ will not change $| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap (L_{t-1}(v) \setminus L_{t^+_v-1}(v))|$ in rounds $[t^+_v,t]$. By now, we conclude that if $a_{t^+_v-1}(v)=a_{t^+_v-1}(u)$ and $c_{t^+_v-1}(v)<c_{t^+_v-1}(u)$, then the behavior of $u$ will not change the upper bound of $|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|$ in rounds $[t^+_v,t]$. As a result, the only scenario in which the behavior of $u$ might change the upper bound of $|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|$ is when $a_{t^+_v-1}(v)=a_{t^+_v-1}(u)$ and $c_{t^+_v-1}(v)\geq c_{t^+_v-1}(u)$. That is, $u\in A_{t^+_v-1}(v)$. For each such vertex $u$, observe that as it transitions to the third phase in some round, $|A_t(v)|$ decreases by one, while $|L_t(v)|$ increases by one. Recalling the expression of the upper bound on $|\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)|$, the above discussion implies that, as vertex $u$ transitions to phase three, the value of the upper bound decreases. As a result, we conclude: \begin{align*} |\mathfrak{A}\cap(\mathfrak{B}_1\cup\mathfrak{B}_2)| &\leq \Delta/\mu + \left| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap \left(L_{t-1}(v) \setminus L_{t^+_v-1}(v)\right) \right| + 2\cdot|A_{t-1}(v)|\\ &\leq \Delta/\mu + \left| L_{\left(t^+_v-1,d_{t^+_v}(v)\right)}(v) \cap \left(L_{t^+_v-1}(v) \setminus L_{t^+_v-1}(v)\right) \right| + 2\cdot \left|A_{t^+_v-1}(v)\right|\\ &\leq \Delta/\mu + 0 + 2\cdot\left(2\cdot\Delta^{1/4}\right), \end{align*} where the last inequality is due to \Cref{lemma:phase2-stage2-bounded-arboricity}. This completes the proof of the claim. \end{proof} We are now ready to bound the time cost of the transition-out stage (i.e., \Cref{lemma:phase2-stage3-time-cost}). \begin{proof}[Proof of \Cref{lemma:phase2-stage3-time-cost}] For each vertex $v$, let $t^*_v$ be the smallest round number such that $\phi_{t^*_v}(v)\in I_2$ and $a_{t^*_v}(v)\in[\lambda]$ are both satisfied, and let $t^+_v$ be the smallest round number such that $\phi_{t^+_v}(v)\in I_2$ and $d_{t^+_v}(v)\neq\mu$ are both satisfied. By the algorithm description, $t^*_v < t^+_v < t^{\#}_v$. To prove the lemma, it suffices to prove the following stronger claim: for every vertex $v\in V$, it holds that $t^+_v\leq r^*+1+\lambda+2(a_{t^*_v}(v)+1)$, and that $t^{\#}_v\leq r^*+2+\lambda+2(a_{t^*_v}(v)+1)$. To prove the claim, we do an induction on the value of $a_{t^*_v}(v)\in[\lambda]$.
First consider the base case: fix a vertex $v$ with the smallest $a_{t^*_v}(v)$ value. Due to \Cref{lemma:phase2-stage2-time-complexity}, for vertex $v$, as well as every vertex $u\in N(v)$, we have $t^*_v\leq r^*+2+\lambda$ and $t^*_u\leq r^*+2+\lambda$. Thus in round $r^*+3+\lambda$, if $\phi_{r^*+2+\lambda}(v)\in I_2$ and $d_{r^*+2+\lambda}(v)=\mu$, then for vertex $v$, the ``if'' conditions in \cref{alg-line:phase2-stage3-if-cond-1} and \cref{alg-line:phase2-stage3-if-cond-2} of \Cref{alg:phase2-stage3} will both be satisfied. Moreover, the ``if'' condition in \cref{alg-line:phase2-stage3-if-cond-3} of \Cref{alg:phase2-stage3} will also be satisfied in this round. As a result, by the end of round $r^*+3+\lambda$, if $\phi_{r^*+3+\lambda}(v)\in I_2$, it must be the case that $d_{r^*+3+\lambda}(v)\neq\mu$. In other words, $t^+_v\leq r^*+3+\lambda$. Applying the same argument to every vertex $u\in A_{r^*+2+\lambda}(v)\supseteq A_{r^*+3+\lambda}(v)$, it holds that $t^+_u\leq r^*+3+\lambda$. Therefore, in round $r^*+4+\lambda$, if $\phi_{r^*+3+\lambda}(v)\in I_2$, then for vertex $v$, the ``if'' conditions in \cref{alg-line:phase2-stage3-if-cond-1} and \cref{alg-line:phase2-stage3-if-cond-2} of \Cref{alg:phase2-stage3} will both be satisfied. Moreover, the ``if'' condition in \cref{alg-line:phase2-stage3-if-cond-4} of \Cref{alg:phase2-stage3} will also be satisfied in this round. As a result, by \Cref{claim:phase2-stage3-bounded-k}, by the end of round $r^*+4+\lambda$, vertex $v$ must have obtained a color in $I_3$. This completes the proof of the base case. Now assume our claim holds for all vertices $u$ with $a_{t^*_u}(u)\leq i$, where $i\in[\lambda-1]$, and consider a vertex $v$ with $a_{t^*_v}(v)=i+1\in[\lambda]$. The proof of the inductive step generally follows the same path as in the base case. Specifically, for every vertex $u\in N(v)$ with $a_{t^*_u}(u)\leq i$, by the induction hypothesis, it must be the case that $\phi_{r^*+2+\lambda+2(i+1)}(u)\in I_3$. Thus in round $r^*+3+\lambda+2(i+1)$, every vertex $u\in N(v)$ with $\phi_{r^*+2+\lambda+2(i+1)}(u)\in I_2$ must have $a_{t^*_u}(u)\geq i+1$. Hence, in round $r^*+3+\lambda+2(i+1)$, if $\phi_{r^*+2+\lambda+2(i+1)}(v)\in I_2$ and $d_{r^*+2+\lambda+2(i+1)}(v)=\mu$, then for vertex $v$, the ``if'' conditions in \cref{alg-line:phase2-stage3-if-cond-1} and \cref{alg-line:phase2-stage3-if-cond-2} of \Cref{alg:phase2-stage3} will both be satisfied. Moreover, the ``if'' condition in \cref{alg-line:phase2-stage3-if-cond-3} of \Cref{alg:phase2-stage3} will also be satisfied in this round. As a result, by the end of round $r^*+3+\lambda+2(i+1)$, if $\phi_{r^*+3+\lambda+2(i+1)}(v)\in I_2$, it must be the case that $d_{r^*+3+\lambda+2(i+1)}(v)\neq\mu$. In other words, $t^+_v \leq r^*+3+\lambda+2(i+1) = r^*+1+\lambda+2((i+1)+1)$. Applying the same argument to every vertex $u\in A_{r^*+2+\lambda+2(i+1)}(v)\supseteq A_{r^*+3+\lambda+2(i+1)}(v)$, it holds that $t^+_u \leq r^*+3+\lambda+2(i+1)$. Therefore, in round $r^*+4+\lambda+2(i+1)$, if $\phi_{r^*+3+\lambda+2(i+1)}(v)\in I_2$, then for vertex $v$, the ``if'' conditions in \cref{alg-line:phase2-stage3-if-cond-1} and \cref{alg-line:phase2-stage3-if-cond-2} of \Cref{alg:phase2-stage3} will both be satisfied. Moreover, the ``if'' condition in \cref{alg-line:phase2-stage3-if-cond-4} of \Cref{alg:phase2-stage3} will also be satisfied in this round. As a result, by \Cref{claim:phase2-stage3-bounded-k}, by the end of round $r^*+4+\lambda+2(i+1)=r^*+2+\lambda+2((i+1)+1)$, vertex $v$ must have obtained a color in $I_3$.
This completes the proof of the inductive step. \end{proof} We continue to show the correctness of the transition-out stage. In particular, when a vertex $v$ finishes the transition in round $t^{\#}_v$ and obtains a color in $I_3$, that color $\phi_{t^{\#}_v}(v)$ will not conflict with any neighbor $u\in N(v)$ that also has its color $\phi_{t^{\#}_v}(u)$ in $I_3$. More precisely, \Cref{lemma:phase2-stage3-proper-color} is true. \begin{proof}[Proof of \Cref{lemma:phase2-stage3-proper-color}] By \Cref{alg:phase2-stage3}, vertex $v$ sets $\phi_{t^{\#}_v}(v)$ as the minimum element of: $$L_{t^{\#}_v-1,d_{t^{\#}_v-1}(v)}(v) \setminus \left( L_{t^{\#}_v-1}(v) \cup \bigcup_{u\in\left\{w\mid w\in A_{t^{\#}_v-1}(v),d_{t^{\#}_v-1}(w)\neq\mu\right\}} L_{t^{\#}_v-1,d_{t^{\#}_v-1}(u)}(u) \right).$$ Consider a neighbor $u\in N(v)$. If $\phi_{t^{\#}_v-1}(u)\in I_1$, then obviously $\phi_{t^{\#}_v}(u)\notin I_3$, as moving from $I_1$ to $I_3$ takes vertex $u$ at least two rounds; thus $\phi_{t^{\#}_v}(u)$ will not conflict with $\phi_{t^{\#}_v}(v)$. If $\phi_{t^{\#}_v-1}(u)\in I_3$, then by \Cref{alg:phase3}, we have $\phi_{t^{\#}_v}(u)=\phi_{t^{\#}_v-1}(u)$ (vertex $u$ will not recolor itself in round $t^{\#}_v$, since its neighbor $v$ still has $\phi_{t^{\#}_v-1}(v)\in I_2$). Moreover, when $v$ chooses $\phi_{t^{\#}_v}(v)$ it will not consider $\phi_{t^{\#}_v-1}(u)$, as $\phi_{t^{\#}_v-1}(u)\in L_{t^{\#}_v-1}(v)$. Hence, when $\phi_{t^{\#}_v-1}(u)\in I_3$, we also have $\phi_{t^{\#}_v}(u)\neq\phi_{t^{\#}_v}(v)$. Lastly, if $\phi_{t^{\#}_v-1}(u)\in I_2$, then there are four scenarios: \begin{itemize} \item \textsc{Scenario I:} Vertex $u$ has $a_{t^{\#}_v-1}(u)<a_{t^{\#}_v-1}(v)$. This scenario cannot happen, since by \Cref{alg:phase2-stage3}, vertex $v$ will only set $d(v)$ to a value other than $\mu$ after all its neighbors with smaller $a$ values have done the transition to the third phase. Therefore, if $a_{t^{\#}_v-1}(u)<a_{t^{\#}_v-1}(v)$, then vertex $u$ must have already transitioned to the third phase before round $t^{\#}_v$, contradicting $\phi_{t^{\#}_v-1}(u)\in I_2$. \item \textsc{Scenario II:} Vertex $u$ has $a_{t^{\#}_v-1}(u)>a_{t^{\#}_v-1}(v)$. By \Cref{alg:phase2-stage3}, vertex $u$ cannot complete the transition-out stage in round $t^{\#}_v$, as $a_{t^{\#}_v-1}(u)>a_{t^{\#}_v-1}(v)$. Therefore, $\phi_{t^{\#}_v}(u)\in I_2$, implying it will not conflict with the color chosen by vertex $v$. \item \textsc{Scenario III:} Vertex $u$ has $a_{t^{\#}_v-1}(u)=a_{t^{\#}_v-1}(v)$ and $u\in A_{t^{\#}_v-1}(v)$. In this scenario, if indeed $u$ finishes the transition-out stage and obtains a color in $I_3$ by the end of round $t^{\#}_v$, then this color $\phi_{t^{\#}_v}(u)\in L_{t^{\#}_v-1,d_{t^{\#}_v-1}(u)}(u)$. On the other hand, by \Cref{alg:phase2-stage3}, the initial phase three color chosen by vertex $v$, which is $\phi_{t^{\#}_v}(v)$, will not appear in $L_{t^{\#}_v-1,d_{t^{\#}_v-1}(u)}(u)$. Hence, if indeed $u$ finishes the transition-out stage and obtains a color in $I_3$ by the end of round $t^{\#}_v$, then $\phi_{t^{\#}_v}(u)\neq\phi_{t^{\#}_v}(v)$. Otherwise, if $\phi_{t^{\#}_v}(u)\in I_2$, then obviously $\phi_{t^{\#}_v}(u)\neq\phi_{t^{\#}_v}(v)$, as $\phi_{t^{\#}_v}(v)\in I_3$ by the definition of $t^{\#}_v$. \item \textsc{Scenario IV:} Vertex $u$ has $a_{t^{\#}_v-1}(u)=a_{t^{\#}_v-1}(v)$ and $u\notin A_{t^{\#}_v-1}(v)$. In this scenario, we have $v\in A_{t^{\#}_v-1}(u)$. By an analysis similar to \textsc{Scenario III} (but from the perspective of vertex $u$), we conclude that $\phi_{t^{\#}_v}(u)\neq\phi_{t^{\#}_v}(v)$. \end{itemize} This completes the proof of the lemma.
\end{proof} \section{Self-stabilizing Coloring Algorithm}\label{sec:alg-stab} Compared with algorithms involving heavy machinery, locally-iterative algorithms could be more robust against various faults due to their simplicity. In this section, we present a self-stabilizing $(\Delta+1)$-coloring algorithm that is a variant of the previously described locally-iterative $(\Delta+1)$-coloring algorithm, with stabilization time $O(\Delta^{3/4}\log{\Delta})+\log^*{n}$. In exchange for this stronger level of fault-tolerance, however, the self-stabilizing algorithm is no longer locally-iterative. In particular, for every vertex $v$, the algorithm depends on the vertex identity $id(v)$, which is stored in the ROM area. The ROM area of each vertex also stores graph parameters $n$ and $\Delta$, as well as the program code of the self-stabilizing algorithm. In the RAM area of each vertex $v$, we store the colors of its local neighborhood and a boolean vector $T_v$ of size $\Delta$, as well as any other variables that are used during algorithm execution. For each vertex $v$, the self-stabilizing algorithm also contains three phases: the Linial phase, the quadratic reduction phase, and the standard reduction phase. Initially, every vertex $v$ sets its color as $\phi_0(v)=\ell_3+\ell_2+\sum_1^{r^*} n_i+ id(v)$. At the beginning of each round $t\geq 1$, for every neighbor $u\in N(v)$, vertex $v$ sends a message to $u$ including its current color $\phi_{t-1}(v)$, as well as a boolean variable $T_v[u]$ (i.e., a bit) which is the entry corresponding to $u$ in $T_v$. This boolean variable helps determine the ``orientation'' of edge $(u,v)$, which in turn helps the construction and maintenance of the arbdefective coloring. After receiving the messages from neighbors, vertex $v$ will perform an \emph{error-checking} procedure to determine if it is in a proper state; this is crucial, as the adversary may arbitrarily change the states of the vertices during execution. If the error-checking passes, then we say vertex $v$ is in a \emph{proper} state; it then computes its new color according to its local information and the messages received from the neighbors. Otherwise, if the error-checking fails, the vertex is in an \emph{improper} state. In that case, vertex $v$ simply resets its color to the initial color $\ell_3+\ell_2+\sum_1^{r^*} n_i+ id(v)$, and restarts from the Linial phase in the next round. The following lemma highlights the correctness guarantee provided by our algorithm. In particular, if $T_0$ is the last round in which the adversary makes any changes to the vertices' states, putting some vertices in improper states, then our algorithm guarantees that the error-checking procedure will detect such anomalies at the beginning of round $T_0+1$ and reset the colors of those vertices. Moreover, starting from round $T_0+2$, the error-checking procedure will always pass without detecting any anomaly, allowing the algorithm to make progress and eventually produce a proper $(\Delta+1)$-coloring. \begin{lemma}[\textbf{Correctness of the Self-stabilizing Algorithm}]\label{lemma:self-stabilizing-correctness} If $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices, then for every round $t\geq T_0+2$, for every vertex $v$, the error-checking procedure will not reset vertex $v$'s color. \end{lemma} The following lemma, on the other hand, provides a bound on the stabilization time of our algorithm.
\begin{lemma}[\textbf{Stabilization Time of the Self-stabilizing Algorithm}]\label{lemma:self-stabilizing-time-complexity} If $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices, then for every round $t\geq T_0+O(\Delta^{3/4}\log{\Delta})+\log^*n$, every vertex $v$ has its color in $[\Delta+1]$. \end{lemma} Recall the main theorem for our self-stabilizing coloring algorithm (i.e., \Cref{thm:alg-self-stab}); it can be proved by combining the above two lemmas. \begin{proof}[Proof of \Cref{thm:alg-self-stab}] Assume $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices. By \Cref{lemma:self-stabilizing-time-complexity}, every vertex has its color in $[\Delta+1]$ by the end of round $T_0+O(\Delta^{3/4}\log{\Delta})+\log^*n$, and every vertex's color will remain in $[\Delta+1]$ in later rounds. On the other hand, due to \Cref{lemma:self-stabilizing-correctness}, in every round $t\geq T_0+O(\Delta^{3/4}\log{\Delta})+\log^*n$, the error-checking procedure passes. As the error-checking procedure always checks whether neighbors have conflicting colors, we conclude that in every round $t\geq T_0+O(\Delta^{3/4}\log{\Delta})+\log^*n$, the coloring produced at the end of that round is proper. \end{proof} In the remainder of this section, we will first describe the algorithm in detail, and then prove its correctness (i.e., \Cref{lemma:self-stabilizing-correctness}) and stabilization time (i.e., \Cref{lemma:self-stabilizing-time-complexity}). \subsection{Algorithm description}\label{subsec:stab-alg-description} As mentioned above, to cope with the adversary, vertices perform error-checking at the beginning of each round; this is an additional mechanism that is not present in the original locally-iterative algorithm. On the other hand, since vertices reset themselves once errors are found, the ``asynchrony'' among them is stronger. In particular, in the self-stabilizing setting, vertices may end the Linial phase in different rounds. This also brings the side effect that the coloring at the end of the transition-in stage of the second phase is no longer guaranteed to be $\Delta^{1/4}$-defective. Instead, we maintain a $\Delta^{1/4}$-arbdefective $\lambda^2$-coloring. As a result, we must orient each edge earlier, and for this reason we introduce the boolean vector $T_v$ at each vertex $v$. We also note that in the self-stabilizing setting, encoding the orientation in vertices' colors no longer works, at least for our approach, which uses the values of the third coordinate of vertices' color quadruples (i.e., comparing $c(u)$ and $c(v)$), since the adversary can employ a certain strategy to let the $c$ values grow indefinitely. Therefore, in our self-stabilizing algorithm, the role of the $c(v)$ value is replaced by the $T_v$ vector. Nonetheless, we still keep $c(v)$ in the pseudocode for consistency. We now introduce each phase in detail. Notice that we may group different stages and/or phases together if they share the same error-checking conditions; this simplifies the presentation and avoids redundancy. \subsubsection{The Linial phase and the transition-in stage of the quadratic reduction phase} \begin{algorithm}[t!] \caption{The Linial phase and the transition-in stage at vertex $v$ in round $t$}\label{alg:self-stabilized-phase1-to-phase2-stage1} \begin{algorithmic}[1] \State Send $\langle \phi_{t-1}(v), T_v[u] \rangle$ to each neighbor $u\in N(v)$, where $T_v[u]$ is the entry corresponding to $u$ in $T_v$.
\If {($\phi_{t-1}(v)\notin I_2 \cup I_3$)} \If{(($\exists u\in N(v), \phi_{t-1}(v) = \phi_{t-1}(u)$) \Comment{Error-checking.} \Statex \hspace{\algorithmicindent}\quad \textbf{or} ($\phi_{t-1}(v)\geq \ell_3+\ell_2+\sum_{i=1}^{r^*}n_i$ \textbf{and} $\phi_{t-1}(v)\neq \ell_3+\ell_2+\sum_{i=1}^{r^*}n_i+id(v)$))} \State $\phi_t(v)\gets \ell_3+\ell_2+\sum_1^{r^*} n_i+id(v)$. \Else \State Determine the interval $I_1^{(t^{\prime})}$ that $\phi_{t-1}(v)$ is in. \If{($0 \leq t'< r^*$)} \Comment{Run the Linial phase.} \State $\phi_t(v)\gets\min S_{t'}^{(\phi_{t-1}(v))}\setminus\bigcup_{u\in N(v) \text{ and }\phi_{t-1}(u) \in I_1^{(t^{\prime})}} S_{t'}^{(\phi_{t-1}(u))}$. \Else \Comment{Run the transition-in stage.} \For {(every element $x\in S_a^{(\phi_{t-1}(v))}$)} \State \begin{small}$N_1'(v,x) \gets \left\{u \mid u\in N(v),\phi_{t-1}(u) \in I_1^{(t')}, x\in S_a^{(\phi_{t-1}(u))} \right\}$.\end{small} \State \begin{small}$N_2'(v,x) \gets \{ u\mid u\in N(v), \phi_{t-1}(u) \in I_2, x+\lambda=\hat{a}_{t-1}(u)\cdot \lambda + (\hat{a}_{t-1}(u)+ \tilde{a}_{t-1}(u))\bmod \lambda \}$.\end{small} \EndFor \State $\hat{x}\gets\min\left\{ x \mid x\in S_a^{(\phi_{t-1}(v))}, \left|N_1'(v,x)\cup N_2'(v,x) \right|\leq\Delta^{1/4} \right\}$. \State $a_{t}(v)\gets\hat{x}+\lambda$. \State $b_{t}(v)\gets\min S_b^{(\phi_{t-1}(v))}\setminus\left(\bigcup_{u\in N_1'(v,\hat{x})}S_b^{(\phi_{t-1}(u))}\cup \{b_{t-1}(u)\mid u\in N_2'(v,\hat{x})\}\right)$. \State $c_{t}(v)\gets 0$, $d_{t}(v)\gets\mu$. \State $\phi_{t}(v) \gets \ell_3 + \langle a_{t}(v), b_{t}(v), c_{t}(v), d_{t}(v) \rangle$. \State Initialize $T_v[u]$ to $0$ for all $u\in N(v)$. \For{every element $u\in N_1'(v,\hat{x})\cup N_2'(v,\hat{x})$} \State $T_v[u]\gets 1$. \EndFor \EndIf \EndIf \EndIf \end{algorithmic} \end{algorithm} At the beginning of a round $t$, if a vertex $v$ has its color $\phi_{t-1}(v)$ not in interval $I_2\cup I_3$, then it should run either the Linial phase or the transition-in stage of the quadratic reduction phase. Nonetheless, before proceeding, it will do error-checking to see if any of the following conditions is satisfied: \begin{itemize} \item Its color collides with that of some neighbor. \item Its color $\phi_{t-1}(v)$ is not in $\left(\bigcup_{i=1}^{r^*}I_1^{(i)}\right)\cup I_2\cup I_3$ (which means $v$ should be running the first iteration of the Linial phase), but that color is not $\ell_3+\ell_2+\sum_{i=1}^{r^*}n_i+id(v)$. \end{itemize} If any of these conditions is satisfied, then vertex $v$ treats itself in an improper state and resets its color to $\ell_3+\ell_2+\sum_{i=1}^{r^*}n_i+id(v)$. That is, it sets $\phi_t(v)=\ell_3+\ell_2+\sum_{i=1}^{r^*}n_i+id(v)$. Otherwise, vertex $v$ first determines which interval $I_1^{(t')}$ it is in, as in the locally-iterative algorithm. If $0\leq t'< r^*$, then it computes a $\Delta$-cover-free set family $\mathcal{F}_{t'}$ as in the locally-iterative algorithm, and sets its new color as the smallest number in $S_{t'}^{(\phi_{t-1}(v))}$, excluding all elements of $S_{t'}^{(\phi_{t-1}(u))}$ over all of $v$'s neighbors $u\in N(v)$ satisfying $\phi_{t-1}(u)\in I_1^{(t')}$. If $t'=r^*$, then vertex $v$ transforms its color from interval $I_1$ to $I_2$, effectively running the transition-in stage of the quadratic reduction phase. To do the transformation, vertex $v$ first constructs a $\Delta$-union-$(\Delta^{1/4}+1)$-cover-free set family $\mathcal{F}_a$.
Let $q$ be a prime such that $\frac{\Delta+1}{\Delta^{1/4}+1}\log(n_{r^*}) < q\leq 2\cdot \frac{\Delta+1}{\Delta^{1/4}+1}\log(n_{r^*})$. Set family $\mathcal{F}_a$ is of size $n_{r^*}$ with $[q^2]\subseteq [m_1]$ as its ground set. More specifically, for every integer $i\in[n_{r^*}]$, we associate a unique polynomial $P_i$ of degree (at most) $\log(n_{r^*})$ over the finite field $GF(q)$ with it. Then $\mathcal{F}_a\triangleq \{S_a^{(\ell_3+\ell_2)},\cdots,S_a^{(\ell_3+\ell_2+n_{r^*}-1)}\}$, where $S_a^{(i)}=\{x\cdot q + P_{i-\ell_3-\ell_2}(x)\mid x\in [q]\}$ for every $i\in[\ell_3+\ell_2,\ell_3+\ell_2+n_{r^*})$. Since the degree of the polynomials is (at most) $\log(n_{r^*})$, the intersection of any two sets $S_a^{(\cdot)}$ contains at most $\log(n_{r^*})$ elements. Recall that every set contains $q>\frac{\Delta+1}{\Delta^{1/4}+1}\log(n_{r^*})$ elements. To cover any set $S_a^{(\cdot)}\in\mathcal{F}_a$, we need at least $\Delta^{1/4}+1$ other sets in $\mathcal{F}_a$. Thus $\mathcal{F}_a$ is a $\Delta$-union-$(\Delta^{1/4}+1)$-cover-free set family. Then, define two sets $N_1'(v,x)$ and $N_2'(v,x)$: \begin{align*} N_1'(v,x) &\triangleq \{u\mid u\in N(v), \phi_{t-1}(u)\in I_1^{(t')}, x\in S_a^{(\phi_{t-1}(u))}\},\\ N_2'(v,x) &\triangleq \{ u\mid u\in N(v), \phi_{t-1}(u) \in I_2, x+\lambda=\hat{a}_{t-1}(u)\cdot \lambda + (\hat{a}_{t-1}(u)+ \tilde{a}_{t-1}(u))\bmod \lambda \}. \end{align*} Let $\hat{x}$ be the smallest element in $S_a^{(\phi_{t-1}(v))}$ satisfying $|N_1'(v,\hat{x})\cup N_2'(v,\hat{x})|\leq\Delta^{1/4}$; vertex $v$ then assigns $a(v)=\hat{x}+\lambda \in[\lambda^2]$. Later in the analysis (particularly, in the proof of \Cref{lemma:self-stabilizing-correctness-I2-large-a}), via a counting argument, we will show that such an $\hat{x}$ must exist when there are no errors in the system. Then, vertex $v$ computes $b(v)$ using the same method as in the locally-iterative algorithm. In particular, vertex $v$ sets the value $b(v)$ to the smallest element of the following set: $$S_b^{(\phi_{t-1}(v))}\setminus\left(\bigcup_{u\in N_1'(v,\hat{x})}S_b^{(\phi_{t-1}(u))}\cup \{b_{t-1}(u)\mid u\in N_2'(v,\hat{x})\}\right).$$ Again, later in the analysis (particularly, in the proof of \Cref{lemma:self-stabilizing-correctness-I2-large-a}), we will argue that the above set is non-empty when there are no errors in the system. Lastly, vertex $v$ sets $c(v)=0$ and $d(v)=\mu$, as in the locally-iterative algorithm. It also sets $T_v[u]$ to $1$ if its neighbor $u$ is in the set $N_1'(v,\hat{x})\cup N_2'(v,\hat{x})$; otherwise $T_v[u]=0$. We note that the orientation of edge $(u,v)$ is determined by $T_v[u]$ and $T_u[v]$: vertex $v$ points to vertex $u$ if and only if $T_v[u]=1$. The pseudocode of the Linial phase and the transition-in stage of the quadratic reduction phase in the self-stabilizing setting is given in \Cref{alg:self-stabilized-phase1-to-phase2-stage1}. \subsubsection{The core stage of the quadratic reduction phase} \begin{algorithm}[t!] \caption{The core stage of the quadratic reduction phase at vertex $v$ in round $t$}\label{alg:self-stabilizing-phase2-stage2} \begin{algorithmic}[1] \State Send $\langle \phi_{t-1}(v), T_v[u] \rangle$ to each neighbor $u\in N(v)$, where $T_v[u]$ is the entry corresponding to $u$ in $T_v$.
\If{($\phi_{t-1}(v)\in I_2$ \textbf{and} $a_{t-1}(v)\geq\lambda$)} \If{(($\exists u\in N(v), \phi_{t-1}(u)\in I_2, a_{t-1}(v)=a_{t-1}(u), b_{t-1}(u)=b_{t-1}(v)$) \Comment{Error-checking.} \Statex \hspace{\algorithmicindent}\quad \textbf{or} ($\exists u\in N(v), \phi_{t-1}(u)\in I_2, a_{t-1}(v)=a_{t-1}(u), T_u[v]+T_v[u]=0$) \Statex \hspace{\algorithmicindent}\quad \textbf{or} ($|\{u\mid u\in N(v), T_v[u]=1\}|>\Delta^{1/4}$) \Statex \hspace{\algorithmicindent}\quad \textbf{or} ($\exists u\in N(v)\cup \{v\}, \phi_{t-1}(u)\in I_2, a_{t-1}(u)\geq\lambda, b_{t-1}(u)\notin [m_2]$))} \State $\phi_t(v)\gets \ell_3+\ell_2+\sum_1^{r^*}{n_i}+id(v)$. \Else \Comment{Run the core stage.} \State $M_t(v)\gets\left\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)\neq \hat{a}_{t-1}(v),\tilde{a}_{t-1}(u)= \tilde{a}_{t-1}(v)\right\}$. \label{alg-line:self-stabilizing-phase2-stage2-start} \State $\overline{M}_t(v)\gets\left\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)=\hat{a}_{t-1}(v),\tilde{a}_{t-1}(u)=\tilde{a}_{t-1}(v), T_v[u]=1\right\}$. \If{($|M_t(v)|\leq\Delta^{1/4}$)} \State $M'_{t}(v) \gets \left\{ u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)=0,\tilde{a}_{t-1}(u)=\tilde{a}_{t-1}(v) \right\}$. \State $\overline{M}'_{t}(v) \gets (M_t(v)\cup\overline{M}_t(v))\setminus M_t'(v)$. \State $a_t(v) \gets \tilde{a}_{t-1}(v)$. \State $b_t(v) \gets \min S_c^{(a_{t-1}(v)\cdot m_2 + b_{t-1}(v))}\setminus \left( \{b_{t-1}(u)\mid u\in M'_{t}(v)\} \cup \bigcup_{u\in \overline{M}'_{t}(v)} S_c^{(a_{t-1}(u)\cdot m_2+b_{t-1}(u))} \right)$. \State $\phi_{t}(v) \gets \ell_3 + \langle a_t(v), b_t(v), c_t(v), d_t(v) \rangle$. \State Initialize $T_v[u]$ to $0$ for all $u\in N(v)$. \For{every element $u\in M_t(v)\cup \overline{M}_t(v)$} \State $T_v[u]\gets 1$. \EndFor \Else \State $a_t(v) \gets \hat{a}_{t-1}(v)\cdot\lambda + ((\hat{a}_{t-1}(v)+\tilde{a}_{t-1}(v))\bmod\lambda)$.\label{alg-line:intermediate-a-update-rule} \State $\phi_{t}(v) \gets \ell_3 + \langle a_t(v), b_t(v), c_t(v), d_t(v) \rangle$. \EndIf \label{alg-line:self-stabilizing-phase2-stage2-end} \EndIf \EndIf \end{algorithmic} \end{algorithm} Recall that in the locally-iterative coloring algorithm, during the quadratic reduction phase, for a vertex $v$ with $\phi(v)\in I_2$, if $a(v)$ is already in $[\lambda]$, then its core stage is considered done, and it may proceed to the transition-out stage. This is still the case in the self-stabilizing setting: a vertex runs the core stage only if it finds $a(v)\geq\lambda$. (See \Cref{alg:self-stabilizing-phase2-stage2} for the pseudocode.) Moreover, in case $a(v)\geq\lambda$, before proceeding, vertex $v$ will do error-checking to see if any of the following conditions is satisfied: \begin{itemize} \item There exists a neighbor $u$ of $v$ such that $a(v)=a(u)$ and $b(v)=b(u)$, effectively implying $u$ and $v$ have identical colors. \item There exists a neighbor $u$ of $v$ such that $a(v)=a(u)$ yet $T_v[u]+T_u[v]=0$, implying that the orientation of edge $(u,v)$ is still undetermined when $a(v)=a(u)$. \item The number of neighbors $u\in N(v)$ with $T_v[u]=1$ is larger than $\Delta^{1/4}$, violating the bounded arboricity assumption during the core stage. \item There exists a vertex $u\in N(v)\cup \{v\}$ that has its color in $I_2$ and $a(u)\geq\lambda$, yet $b(u)\notin[m_2]$, violating the range of $b$ values during the core stage. \end{itemize} If any of these conditions is satisfied, then vertex $v$ treats itself in an improper state and resets its color to $\ell_3+\ell_2+\sum_{i=1}^{r^*}n_i+id(v)$; a compact sketch of these checks is given below.
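To make the error-checking concrete, the following is a minimal Python sketch of the four checks above. The data layout (a dictionary \texttt{st} holding $\phi$, $a$, $b$, and the vector $T_v$ per vertex) and the membership predicate \texttt{in\_I2} are illustrative assumptions on our part, not part of the pseudocode. \begin{verbatim}
# A sketch of the core-stage error checks (illustrative only).
# Assumptions (not from the pseudocode): st[v] is a dict holding
# phi, a, b, and the boolean vector T of vertex v; in_I2 is a
# caller-supplied predicate for membership in the interval I_2.

def core_stage_error(v, neighbors, st, in_I2, Delta, m2, lam):
    """Return True iff vertex v must reset to its initial color."""
    a_v, b_v = st[v]["a"], st[v]["b"]
    for u in neighbors[v]:
        if in_I2(st[u]["phi"]) and st[u]["a"] == a_v:
            if st[u]["b"] == b_v:                    # identical color
                return True
            if st[v]["T"][u] + st[u]["T"][v] == 0:   # unoriented edge
                return True
    if sum(st[v]["T"][u] for u in neighbors[v]) > Delta ** 0.25:
        return True                                  # arboricity violated
    for w in [v] + list(neighbors[v]):
        if in_I2(st[w]["phi"]) and st[w]["a"] >= lam \
                and not 0 <= st[w]["b"] < m2:
            return True                              # b out of range [m2]
    return False
\end{verbatim} Returning \texttt{True} corresponds to vertex $v$ declaring itself improper and resetting its color.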
If none of these conditions is satisfied, vertex $v$ executes \cref{alg-line:self-stabilizing-phase2-stage2-start} to \cref{alg-line:self-stabilizing-phase2-stage2-end} of \Cref{alg:self-stabilizing-phase2-stage2} to try to reduce its $a$ value from $[\lambda,\lambda^2)$ to $[0,\lambda)$. The procedure we use in the self-stabilizing setting to transform a $\Delta^{1/4}$-arbdefective $\lambda^2$-coloring to a $(2\cdot\Delta^{1/4})$-arbdefective $\lambda$-coloring is almost identical to the one we use in the locally-iterative setting. The only difference is that we alter the definition of $\overline{M}_t(v)$ by adding an extra condition $T_v[u]=1$, which means the orientation of edge $(u,v)$ is $v$ pointing to $u$. This ensures that $|\overline{M}_t(v)|$ is still bounded by $\Delta^{1/4}$. Once the reduction occurs in some round $t$, vertex $v$ obtains an $a_t(v)\in[\lambda]$, and updates $b_t(v)$ to differentiate itself from the neighbors that may have colliding $a$ values. By an analysis similar to the locally-iterative setting, such a $b_t(v)$ must exist. It also sets $T_v[u]=1$ if and only if $u\in M_t(v)\cup \overline{M}_t(v)$, recording the orientation of such edges. (Recall that $T_v$ replaces the role of $c$; hence we do not update $c(v)$ when the reduction occurs. Moreover, $T_v$ here is used to maintain the arboricity of a $(2\cdot\Delta^{1/4})$-arbdefective $\lambda$-coloring, whereas in \Cref{alg:self-stabilized-phase1-to-phase2-stage1}, vector $T_v$ is used to maintain the arboricity of a $\Delta^{1/4}$-arbdefective $\lambda^2$-coloring.) \subsubsection{The transition-out stage of the quadratic reduction phase} \begin{algorithm}[t!] \caption{The transition-out stage of the quadratic reduction phase at vertex $v$ in round $t$}\label{alg:self-stabilizing-phase2-stage3} \begin{algorithmic}[1] \State Send $\langle \phi_{t-1}(v), T_v[u] \rangle$ to each neighbor $u\in N(v)$, where $T_v[u]$ is the entry corresponding to $u$ in $T_v$. \If{($\phi_{t-1}(v)\in I_2$ \textbf{and} $a_{t-1}(v)<\lambda$)} \If{(($\exists u\in N(v), \phi_{t-1}(u)\in I_2,a_{t-1}(v)=a_{t-1}(u),b_{t-1}(u)=b_{t-1}(v)$) \Comment{Error-checking.} \Statex \hspace{\algorithmicindent}\quad \textbf{or} ($\exists u\in N(v), \phi_{t-1}(u)\in I_2, a_{t-1}(v)=a_{t-1}(u), T_u[v]+T_v[u]=0$) \Statex \hspace{\algorithmicindent}\quad \textbf{or} ($|\{u\mid u\in N(v), T_v[u]=1\}|>2\cdot \Delta^{1/4}$))} \State $\phi_t(v)\gets \ell_3+\ell_2+\sum_1^{r^*} n_i+id(v)$. \Else \Comment{Run the transition-out stage.} \If{($\forall u\in N(v)$, either ($\phi_{t-1}(u)\in I_2$ and $a_{t-1}(v) \leq a_{t-1}(u)<\lambda$) or ($\phi_{t-1}(u)\in I_3$))}\label{alg-line:self-stabilizing-phase2-stage3-if-cond-2}\label{alg-line:self-stabilizing-phase2-stage3-start} \State $L_{t-1}(v)\gets\{\phi_{t-1}(u)\mid u\in N(v),\phi_{t-1}(u)\in I_3\}$. \State $A_{t-1}(v)\gets\{u\mid u\in N(v),\phi_{t-1}(u)\in I_2, a_{t-1}(u)=a_{t-1}(v), T_v[u]=1\}$. \For{(every $i\in[\mu]$)} \State $L_{(t-1,i)}(v)\gets\{ (\lfloor{b_{t-1}(v)/\tau}\rfloor\cdot{x^2}+(b_{t-1}(v)\bmod\tau)\cdot{x}+i)\bmod\mu + x\cdot\mu \mid x\in[\mu] \}$. \EndFor \If{($d_{t-1}(v)= \mu$)}\label{alg-line:self-stabilizing-phase2-stage3-if-cond-3} \State $d_t(v)\gets$ the integer $\hat{i}$ in $[\mu]$ that minimizes $|L_{(t-1,\hat{i})}(v)\cap L_{t-1}(v)|$.\label{alg-line:self-stabilizing-phase2-stage3-d-rule} \State $\phi_t(v) \gets \ell_3 + \langle a_{t}(v),b_{t}(v),c_{t}(v),d_{t}(v)\rangle$. \Else \State $L_{t-1}'(v)\gets\{\phi_{t-1}(u)\mid u\in N(v), \phi_{t-1}(u)\in I_3, T_v[u]=0\}$.
\If{($|L_{t-1}'(v)\cap L_{(t-1,d_{t-1}(v))}(v)|>\Delta/\mu$)}\label{alg-line:self-stabilizing-phase2-stage3-proper-d}\label{alg-line:self-stabilizing-phase2-stage3-if-cond-4} \State $d_t(v)\gets\mu$.\label{alg-line:self-stabilizing-phase2-stage3-reset-d} \State $\phi_t(v) \gets \ell_3 + \langle a_{t}(v),b_{t}(v),c_{t}(v),d_{t}(v)\rangle$. \ElsIf{(every $u\in A_{t-1}(v)$ has $d_{t-1}(u)\neq\mu$)}\label{alg-line:self-stabilizing-phase2-stage3-if-cond-5} \State $\phi_{t}(v)\gets \min L_{(t-1,d_{t-1}(v))}(v) \setminus \left( L_{t-1}(v) \cup \bigcup_{u\in A_{t-1}(v)} L_{(t-1,d_{t-1}(u))}(u) \right)$. \EndIf \EndIf \EndIf \EndIf \label{alg-line:self-stabilizing-phase2-stage3-end} \EndIf \end{algorithmic} \end{algorithm} At the beginning of a round $t$, if vertex $v$ has color $\phi_{t-1}(v)\in I_2$ and $a(v)\in[\lambda]$, then it is in the transition-out stage. Once again, it does error-checking before proceeding. (See \Cref{alg:self-stabilizing-phase2-stage3} for the pseudocode.) Specifically, vertex $v$ checks if any of the following conditions is satisfied: \begin{itemize} \item There exists a neighbor $u$ of $v$ such that $a(v)=a(u)$ and $b(v)=b(u)$, effectively implying $u$ and $v$ have identical colors. \item There exists a neighbor $u$ of $v$ such that $a(v)=a(u)$ yet $T_v[u]+T_u[v]=0$, implying that the orientation of edge $(u,v)$ is still undetermined when $a(v)=a(u)$. \item The number of neighbors $u\in N(v)$ with $T_v[u]=1$ is larger than $2\cdot\Delta^{1/4}$, violating the bounded arboricity assumption during the transition-out stage. \end{itemize} If any of these conditions is satisfied, then vertex $v$ treats itself in an improper state and resets its color to $\ell_3+\ell_2+\sum_{i=1}^{r^*}n_i+id(v)$. Otherwise, it executes \cref{alg-line:self-stabilizing-phase2-stage3-start} to \cref{alg-line:self-stabilizing-phase2-stage3-end} of \Cref{alg:self-stabilizing-phase2-stage3} to transform its color from $I_2$ to $I_3$. The transformation is similar to the transition-out stage of the locally-iterative algorithm, except that we redefine $A_{t-1}(v)\triangleq \{u\mid u\in N(v),\phi_{t-1}(u)\in I_2, a_{t-1}(u)=a_{t-1}(v), T_v[u]=1\}$, replacing the constraint on $c$ values with a constraint on $T_v$. More specifically, in a round $t$ in the transition-out stage of the self-stabilizing algorithm, for a vertex $v$ in a proper state with $\phi_{t-1}(v)\in I_2$ and $a_{t-1}(v)\in [\lambda]$, if every $u\in N(v)$ satisfies either ``$\phi_{t-1}(u)\in I_2$ and $a_{t-1}(v)\leq a_{t-1}(u)<\lambda$'', or ``$\phi_{t-1}(u)\in I_3$'', then it is ready to transform from interval $I_2$ to $I_3$. In such a scenario, if $d_{t-1}(v)=\mu$, then it updates $d(v)$ in the same manner as in the locally-iterative algorithm. Otherwise, if $d_{t-1}(v)\neq\mu$, then vertex $v$ makes sure $d_{t-1}(v)$ is proper for further operations by examining whether $|L_{t-1}'(v)\cap L_{(t-1,d_{t-1}(v))}(v)|\leq\Delta/\mu$ is satisfied, where $L'_{t-1}(v)\triangleq \{\phi_{t-1}(u)\mid u\in N(v), \phi_{t-1}(u)\in I_3, T_v[u]=0\}$. If $|L_{t-1}'(v)\cap L_{(t-1,d_{t-1}(v))}(v)|\leq\Delta/\mu$, then vertex $v$ finds a color in $I_3$ by first finding the smallest integer $\hat{k}\in[\mu]$ satisfying: $$P_{(t-1,v,d_{t-1}(v))}(\hat{k}) + \mu\cdot{\hat{k}} \in L_{(t-1,d_{t-1}(v))}(v) \setminus \left( L_{t-1}(v) \cup \bigcup_{u\in A_{t-1}(v)} L_{(t-1,d_{t-1}(u))}(u) \right),$$ and then sets $\phi_t(v)=P_{(t-1,v,d_{t-1}(v))}(\hat{k}) + \mu\cdot{\hat{k}}$. Otherwise, it resets its $d$ value to $\mu$, so that later it can obtain a proper $d(v)$.
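To make the polynomial machinery behind this selection concrete, the following is a minimal Python sketch of the lists $L_{(t-1,i)}(v)$, the rule for selecting $d$, and the final color choice. It assumes $\mu$ is prime and that the two coefficients extracted from $b$ are below $\mu$ (so that distinct $b$ values yield distinct polynomials over $GF(\mu)$); all function names are illustrative. \begin{verbatim}
# A sketch of the polynomial color lists of the transition-out stage
# (illustrative; assumes mu is prime and b's two coefficients are
# already below mu, so distinct b values give distinct polynomials).

def color_list(b, i, mu, tau):
    # L_{(t-1,i)}(v) = { P(x) + x*mu : x in [mu] } with
    # P(x) = (floor(b/tau)*x^2 + (b mod tau)*x + i) mod mu.
    return {((b // tau) * x * x + (b % tau) * x + i) % mu + x * mu
            for x in range(mu)}

def pick_d(b, mu, tau, taken):
    # The d-rule: pick the i in [mu] minimizing the overlap with the
    # phase-three colors already taken; pigeonhole gives <= Delta/mu.
    return min(range(mu),
               key=lambda i: len(color_list(b, i, mu, tau) & taken))

def pick_color(b, d, mu, tau, forbidden):
    # Smallest element of L_{(t-1,d)}(v) avoiding all forbidden
    # colors; non-emptiness is what the claims above establish.
    return min(color_list(b, d, mu, tau) - forbidden)

# Distinct b values give lists sharing at most two elements:
mu, tau = 7, 3
assert len(color_list(4, 2, mu, tau) & color_list(5, 2, mu, tau)) <= 2
\end{verbatim} The final assertion illustrates the fact, used repeatedly in the analysis, that two lists built from distinct $b$ values share at most two elements, since two distinct polynomials of degree at most two over $GF(\mu)$ agree on at most two points.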
Before proceeding to the next part, we note that once no more errors occur in the system: (1) the above-mentioned $\hat{k}$ must exist, which is proved in the following claim; and (2) the above-mentioned mechanism of resetting $d$ occurs at most once for each vertex, which is shown in the proof of \Cref{lemma:self-stabilizing-time-complexity-I2-part2}. \begin{claim}\label{claim:self-stabilizing-phase2-stage3-bounded-k} Consider a round $t$ and a vertex $v$ that passes the error-checking procedure at the beginning of round $t$ and satisfies ``$\phi_{t-1}(v)\in I_2$, $a_{t-1}(v)\in [\lambda]$ and $d_{t-1}(v)\neq \mu$''. If no error occurs in round $t$ and $|L_{(t-1,d_{t-1}(v))}(v)\cap L'_{t-1}(v)|\leq \Delta/\mu$, let $\hat{k}\in[\mu]$ be the smallest integer satisfying $$P_{(t-1,v,d_{t-1}(v))}(\hat{k}) + \mu\cdot{\hat{k}} \in L_{(t-1,d_{t-1}(v))}(v) \setminus \left( L_{t-1}(v) \cup \bigcup_{u\in \{w\mid w\in A_{t-1}(v), d_{t-1}(w)\neq \mu \}} L_{(t-1,d_{t-1}(u))}(u) \right).$$ Then it holds that $$0\leq\hat{k}\leq \Delta/\mu +4\cdot\Delta^{1/4}.$$ \end{claim} \begin{proof} For simplicity, let $\mathfrak{A}$ denote set $L_{(t-1,d_{t-1}(v))}(v)$, let $\mathfrak{B}_1$ denote set $L_{t-1}(v)$ and let $\mathfrak{B}_2$ denote set $\bigcup_{u\in \{w\mid w\in A_{t-1}(v), d_{t-1}(w)\neq \mu \}} L_{(t-1,d_{t-1}(u))}(u)$. Similar to the proof of \Cref{claim:phase2-stage3-bounded-k}, we bound $\hat{k}$ by bounding $|\mathfrak{A}\cap(\mathfrak{B}_1\cup \mathfrak{B}_2)|$. Let $\mathfrak{B}_{1,0}$ denote $L_{t-1}'(v)$ and let $\mathfrak{B}_{1,1}$ denote $\mathfrak{B}_1\setminus\mathfrak{B}_{1,0}$. That is, $\mathfrak{B}_{1,1}\triangleq\{\phi_{t-1}(u)\mid u\in N(v), \phi_{t-1}(u)\in I_3, T_v[u]=1\}$. Since vertex $v$ passes the error-checking at the beginning of round $t$, we have $|\mathfrak{B}_{1,1}|+|A_{t-1}(v)|\leq 2\cdot \Delta^{1/4}$. Moreover, for every neighbor $u$ of $v$ with $a_{t-1}(u)=a_{t-1}(v)$, we have $b_{t-1}(u)\neq b_{t-1}(v)$, which leads to $|\mathfrak{A}\cap \mathfrak{B}_2|\leq 2\cdot |A_{t-1}(v)|$. Hence, we have: \begin{align*} |\mathfrak{A}\cap(\mathfrak{B}_1\cup \mathfrak{B}_2)| &= |\mathfrak{A}\cap(\mathfrak{B}_{1,0}\cup \mathfrak{B}_{1,1} \cup \mathfrak{B}_2)| \\ &\leq |\mathfrak{A}\cap\mathfrak{B}_{1,0}| +|\mathfrak{A}\cap\mathfrak{B}_{1,1}| + |\mathfrak{A}\cap\mathfrak{B}_{2}| \\ &\leq \Delta/\mu+ |\mathfrak{B}_{1,1}| + 2\cdot |A_{t-1}(v)| \\ &\leq \Delta/\mu+4\cdot \Delta^{1/4}, \end{align*} which implies $0\leq \hat{k}\leq \Delta/\mu+4\cdot \Delta^{1/4}$. \end{proof} \subsubsection{The standard reduction phase} A vertex $v$ with its color in $I_3$ considers itself in the standard reduction phase, whose error-checking procedure is simple: if its color collides with that of any neighbor, then it resets $\phi(v)$ to $\ell_3+\ell_2+\sum_1^{r^*} n_i+id(v)$. Otherwise, vertex $v$ considers itself in a proper state, and runs the standard reduction procedure described in the locally-iterative setting: if all neighbors of $v$ have colors in $I_3$, and if $v$ has the maximum color value in its inclusive one-hop neighborhood, then $v$ sets its color to be the minimum value in $[\Delta+1]$ that has not been used by any of its neighbors yet. The pseudocode of the standard reduction phase in the self-stabilizing setting is given in \Cref{alg:self-stabilizing-phase3}.
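As a concrete illustration of one recoloring step of this phase, consider the following minimal Python sketch; the dictionary \texttt{colors} and the function name are illustrative assumptions, not part of the pseudocode. \begin{verbatim}
# A sketch of one standard-reduction step (illustrative). Assumes
# every neighbor of v already holds a color in I_3, and colors maps
# vertices to their current colors.

def standard_reduction_step(v, neighbors, colors, Delta):
    nbr = {colors[u] for u in neighbors[v]}
    if all(c < colors[v] for c in nbr):
        # v holds the maximum color in its inclusive neighborhood;
        # since |N(v)| <= Delta, some color in {0,...,Delta} is free.
        colors[v] = min(c for c in range(Delta + 1) if c not in nbr)
    return colors[v]
\end{verbatim} Letting only the local maximum recolor in each round is the standard way to guarantee that no two adjacent vertices pick conflicting colors simultaneously.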
\begin{algorithm} \caption{The standard reduction phase at vertex $v$ in round $t$}\label{alg:self-stabilizing-phase3} \begin{algorithmic}[1] \State Send $\langle \phi_{t-1}(v), T_v[u] \rangle$ to each neighbor $u\in N(v)$, where $T_v[u]$ is the entry corresponding to $u$ in $T_v$. \If{($\phi_{t-1}(v)\in I_3$)} \If{($\exists u\in N(v)$ with $\phi_{t-1}(u)=\phi_{t-1}(v)$)} \Comment{Error-checking.} \State $\phi_t(v)\gets \ell_3+\ell_2+\sum_1^{r^*} n_i+id(v)$. \Else \Comment{Run the standard reduction phase.} \If {(for every $u\in N(v)$ it holds that $\phi_{t-1}(u)\in I_3$)} \If {(for every $u\in N(v)$ it holds that $\phi_{t-1}(u)< \phi_{t-1}(v)$)} \State $\phi_t(v)\gets\min([\Delta+1]\setminus \{\phi_{t-1}(u)\mid u\in N(v)\})$. \EndIf \EndIf \EndIf \EndIf \end{algorithmic} \end{algorithm} \subsection{Algorithm analysis}\label{subsec:stab-alg-analysis} We now argue the correctness and the stabilization time of our algorithm. \subsubsection{Correctness} Recall that if $T_0$ is the last round in which the adversary corrupts vertices' states, our algorithm guarantees that the error-checking procedure will detect any anomalies at the beginning of round $T_0+1$ and reset the colors of those vertices. Moreover, starting from round $T_0+2$, the error-checking procedure will always pass, allowing the algorithm to make progress without disruption. This property is summarized in \Cref{lemma:self-stabilizing-correctness}. To prove it, we divide vertices into categories according to their color values at the end of round $T_0+1$. We first consider vertices with color values in $I_1$ by the end of round $T_0+1$. \begin{lemma}\label{lemma:self-stabilizing-correctness-I1} If $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices, then for every round $t\geq T_0+1$, for every vertex $v$ with $\phi_{t}(v)\in I_1$, the error-checking procedure will not reset vertex $v$'s color in the next round. \end{lemma} \begin{proof} According to the algorithm description, vertex $v$ has $\phi_t(v) \in I_1$ if and only if it is in some improper state at the beginning of round $t$ or it is in some proper state and $\phi_{t-1}(v)\in I_1^{(t')}$ for some $t'\in[r^*]$. \begin{itemize} \item If a vertex $v\in V$ finds itself in some improper state at the beginning of round $t$, then it resets $\phi_{t}(v)=\ell_3+\ell_2+\sum_1^{r^*} n_i+id(v)\in I_1$. For every neighbor $u\in N(v)$, in round $t$, either $u$ resets $\phi_{t}(u)=\ell_3+\ell_2+\sum_1^{r^*} n_i+id(u)$, or $u$ obtains a color $\phi_t(u)<\ell_3+\ell_2+\sum_1^{r^*} n_i$. In both cases, $\phi_t(v)\neq\phi_t(u)$ (in the former case because $id(u)\neq id(v)$). Hence, by \Cref{alg:self-stabilized-phase1-to-phase2-stage1}, the error-checking procedure will not reset $v$'s color in the next round. \item If vertex $v$ finds itself in some proper state and $\phi_{t-1}(v)\in I_1^{(t')}$ for some $t'\in[r^*]$, then it computes its new color $\phi_t(v)\in I_1^{(t'+1)}$ based on set family $\mathcal{F}_{t'}$. For every neighbor $u\in N(v)$, in round $t$, if the error-checking fails at $u$, then $u$ resets $\phi_{t}(u)=\ell_3+\ell_2+\sum_1^{r^*} n_i+id(u)$, implying $\phi_t(v)\neq\phi_t(u)$. Otherwise, if the error-checking passes at $u$ and $\phi_{t-1}(u)\notin I_1^{(t')}$, by the algorithm description we know $\phi_{t}(u)\notin I_1^{(t'+1)}$, implying $\phi_t(v)\neq\phi_t(u)$. Lastly, if the error-checking passes at $u$ and $\phi_{t-1}(u)\in I_1^{(t')}$, then due to the $\Delta$-cover-freeness of $\mathcal{F}_{t'}$, we know $\phi_t(v)\neq\phi_t(u)$.
Hence, by \Cref{alg:self-stabilized-phase1-to-phase2-stage1}, the error-checking procedure will not reset $v$'s color in the next round. \end{itemize} This completes the proof of the lemma. \end{proof} Next, we consider vertices with color values in $I_2$ by the end of round $T_0+1$, and we further divide vertices in this category into two sub-categories: ones with $a_t(v)\geq\lambda$, and ones with $a_t(v)<\lambda$. \begin{lemma}\label{lemma:self-stabilizing-correctness-I2-large-a} If $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices, then for every round $t\geq T_0+1$, for every vertex $v$ with $\phi_{t}(v)\in I_2$ and $a_{t}(v)\geq\lambda$, the error-checking procedure will not reset vertex $v$'s color in the next round. \end{lemma} \begin{proof} According to the algorithm description, $v$ has $\phi_t(v)\in I_2$ and $a_t(v)\geq \lambda$ if and only if it is in some proper state at the beginning of round $t$, and either ``$\phi_{t-1}(v)\in I_1^{(r^*)}$'', or ``$\phi_{t-1}(v)\in I_2$ and $a_{t-1}(v)\geq \lambda$''. \smallskip\textsc{Scenario I}: vertex $v$ is in some proper state at the beginning of round $t$ and has $\phi_{t-1}(v)\in I_1^{(r^*)}$. In such a case, $v$ transforms its color from interval $I_1$ to $I_2$ in round $t$. In particular, vertex $v$ first selects the smallest element $\hat{x}\in S_a^{(\phi_{t-1}(v))}$ satisfying $|N_1'(v,\hat{x})\cup N_2'(v,\hat{x})|\leq \Delta^{1/4}$, and sets $a_t(v)=\hat{x}+\lambda\in[\lambda^2]$. Notice that such an $\hat{x}$ must exist. To see this, recall the definition of $N_1'(v,x)$ and $N_2'(v,x)$: \begin{align*} N_1'(v,x) &\triangleq \{u\mid u\in N(v), \phi_{t-1}(u)\in I_1^{(t')}, x\in S_a^{(\phi_{t-1}(u))}\},\\ N_2'(v,x) &\triangleq \{ u\mid u\in N(v), \phi_{t-1}(u) \in I_2, x+\lambda=\hat{a}_{t-1}(u)\cdot \lambda + (\hat{a}_{t-1}(u)+ \tilde{a}_{t-1}(u))\bmod\lambda \}. \end{align*} Since $\phi_{t-1}(v)\in I_1^{(r^*)}$, we have $t'=r^*$. We say a neighbor $u\in N(v)$ creates a ``collision'' for some element $x\in S_a^{(\phi_{t-1}(v))}$ if ``$\phi_{t-1}(u)\in I_1^{(t')}\text{ and }x\in S_a^{(\phi_{t-1}(u))}$'' or ``$\phi_{t-1}(u) \in I_2\text{ and }x+\lambda=\hat{a}_{t-1}(u)\cdot \lambda + (\hat{a}_{t-1}(u)+ \tilde{a}_{t-1}(u))\bmod \lambda$''. Call the former a type one collision and the latter a type two collision. If $\hat{x}$ cannot be found, then for each $x\in S_a^{(\phi_{t-1}(v))}$, the number of collisions created by all neighbors for $x$ must reach $\Delta^{1/4}+1$; furthermore, the total number of collisions created by all neighbors for all elements in $S_a^{(\phi_{t-1}(v))}$ must reach $|S_a^{(\phi_{t-1}(v))}|\cdot(\Delta^{1/4}+1)>(\Delta+1)\log(n_{r^*})$. Now, for every $u\in N(v)$ with $\phi_{t-1}(u)\in I_1^{(t')}$, vertex $u$ can create at most $\log(n_{r^*})$ (type one) collisions for all elements in $S_a^{(\phi_{t-1}(v))}$, as $|S_a^{(\phi_{t-1}(v))}\cap S_a^{(\phi_{t-1}(u))}|\leq\log(n_{r^*})$. Moreover, for every $u\in N(v)$ with $\phi_{t-1}(u)\in I_2$, it can create at most one (type two) collision. Thus, the total number of collisions that can be created by the $\Delta$ neighbors of $v$ is bounded by $\Delta\log(n_{r^*})$. By now, we conclude that $\hat{x}$ must exist. By the algorithm description, it is easy to verify that the neighbors in $N_1'(v,\hat{x})\cup N_2'(v,\hat{x})$ are all the neighbors that might have colliding $a$ value with $v$ by the end of round $t$.
Vertex $v$ then selects a $b_t(v)$ that will not conflict with any neighbor in $N_1'(v,\hat{x})\cup N_2'(v,\hat{x})$, and sets $T_v[u]=1$ if and only if $u\in N_1'(v,\hat{x})\cup N_2'(v,\hat{x})$. Since $|N_1'(v,\hat{x})\cup N_2'(v,\hat{x})|\leq\Delta^{1/4}$ and $\mathcal{F}_b$ is a $\Delta$-union-$(\Delta^{1/4}+1)$-cover-free set family, $b_t(v)$ must exist. Moreover, $b_t(v)\in [m_2]$ by the definition of $\mathcal{F}_b$. At this point, we can conclude: (1) every neighbor $u\in N(v)$ has either $a_t(u)\neq a_t(v)$ or $b_t(u)\neq b_t(v)$; (2) every neighbor $u\in N(v)$ that may have $a_t(u)=a_t(v)$ satisfies $T_v[u]=1$, which leads to $T_v[u]+T_u[v]\neq 0$; (3) the number of neighbors $u$ with $T_v[u]=1$ is bounded by $|N_1'(v,\hat{x})\cup N_2'(v,\hat{x})|\leq\Delta^{1/4}$; and (4) $b_t(v)\in [m_2]$. \smallskip\textsc{Scenario II}: vertex $v$ is in some proper state at the beginning of round $t$ and satisfies: $\phi_{t-1}(v)\in I_2$ and $a_{t-1}(v)\geq \lambda$. Then, by the algorithm description, for every $u\in N(v)$ with $\phi_t(u)\in I_2$ and $a_t(u)=a_t(v)\geq \lambda$, it must be in some proper state at the beginning of round $t$. Moreover, for each such $u$, either ``$\phi_{t-1}(u)\in I_1^{(r^*)}$'' or ``$\phi_{t-1}(u)\in I_2$ and $a_{t-1}(u)\geq \lambda$''. \begin{itemize} \item If it is the case ``$\phi_{t-1}(u)\in I_1^{(r^*)}$'', then by the same argument as in \textsc{Scenario I} (but from the perspective of $u$), vertex $u$ must select a $b_t(u)\in [m_2]$ not equal to $b_t(v)$, and set $T_u[v]=1$, which leads to $T_v[u]+T_u[v]\neq 0$. Moreover, since $v$ is in some proper state at the beginning of round $t$, the error-checking procedure in \Cref{alg:self-stabilizing-phase2-stage2} passes, which implies: (1) $b_t(v)=b_{t-1}(v)\in[m_2]$; and (2) $|\{w\mid w\in N(v), T_v[w]=1\}|\leq \Delta^{1/4}$ at the beginning of round $t$. Since $T_v$ stays unchanged in round $t$, we know $|\{w\mid w\in N(v), T_v[w]=1\}|\leq \Delta^{1/4}$ still holds at the end of round $t$. \item If it is the case ``$\phi_{t-1}(u)\in I_2$ and $a_{t-1}(u)\geq \lambda$'', by \Cref{claim:phase2-stage2_last_round_equal_a} and the assumption that $u,v$ are both in proper states at the beginning of round $t$, we have $a_{t-1}(u)=a_{t-1}(v)\geq\lambda$. Since $u,v$ are both in proper states at the beginning of round $t$, the error-checking procedure in \Cref{alg:self-stabilizing-phase2-stage2} passes, which further implies: (1) $b_t(u)=b_{t-1}(u)\neq b_t(v)=b_{t-1}(v)$, as well as $b_t(u)\in[m_2]$ and $b_t(v)\in[m_2]$; (2) $T_v[u]+T_u[v]\neq 0$ at the beginning of round $t$; and (3) $|\{w\mid w\in N(v), T_v[w]=1\}|\leq \Delta^{1/4}$ at the beginning of round $t$. Since vectors $T_v$ and $T_u$ stay unchanged in round $t$, we know $T_v[u]+T_u[v]\neq 0$ and $|\{w\mid w\in N(v), T_v[w]=1\}|\leq \Delta^{1/4}$ are both true at the end of round $t$. \end{itemize} \smallskip Finally, notice that according to the analysis for the two scenarios, every vertex with $\phi_{t}(v)\in I_2$ and $a_t(v)\geq\lambda$ has $b_t(v)\in [m_2]$. Hence, for every vertex $v$, every $u\in \{v\}\cup N(v)$ with $\phi_t(u)\in I_2$ and $a_t(u)\geq \lambda$ has $b_t(u)\in [m_2]$. By now, we conclude that, at the beginning of round $t+1$, the error-checking procedure in \Cref{alg:self-stabilizing-phase2-stage2} will not reset vertex $v$'s color.
\end{proof} \begin{lemma}\label{lemma:self-stabilizing-correctness-I2-small-a} If $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices, then for every round $t\geq T_0+1$, for every vertex $v$ with $\phi_{t}(v)\in I_2$ and $a_{t}(v)<\lambda$, the error-checking procedure will not reset vertex $v$'s color in the next round. \end{lemma} \begin{proof} According to the algorithm description, a vertex $v$ has $\phi_t(v)\in I_2$ and $a_t(v)<\lambda$ if and only if it is in some proper state at the beginning of round $t$ and has either ``$\phi_{t-1}(v)\in I_2$ and $a_{t-1}(v)\geq \lambda$'' or ``$\phi_{t-1}(v)\in I_2$ and $a_{t-1}(v)< \lambda$''. \smallskip\textsc{Scenario I}: vertex $v$ is in some proper state at the beginning of round $t$ and has $\phi_{t-1}(v)\in I_2$ and $a_{t-1}(v)\geq \lambda$. In this case, vertex $v$ runs \Cref{alg:self-stabilizing-phase2-stage2} and reduces its $a$ value from $[\lambda,\lambda^2)$ to $[\lambda]$ in round $t$. Define $\overline{M}_{t,0}(v)\triangleq \{ u\mid u\in N(v),\phi_{t-1}(u)\in I_2,\hat{a}_{t-1}(u)=\hat{a}_{t-1}(v),\tilde{a}_{t-1}(u)=\tilde{a}_{t-1}(v), T_v[u]=0\}$. By definition, $M_t(v)\cup \overline{M}_t(v)\cup\overline{M}_{t,0}(v)$ contains all neighbors that may have colliding $a$ value with $v$ by the end of round $t$. For neighbors in $M_t(v)\cup \overline{M}_t(v)$, vertex $v$ selects a $b$ value that will not be used by any of them. Vertex $v$ also sets $T_v[u]=1$ for every $u\in M_t(v)\cup \overline{M}_t(v)$ by the algorithm description, hence $T_v[u]+T_u[v]\neq 0$ by the end of round $t$. For every neighbor $u$ in $\overline{M}_{t,0}(v)$, since vertex $v$ is in some proper state at the beginning of round $t$, we have $T_u[v]=1$ at the beginning of round $t$. Thus, if indeed $u$ reduces its $a$ value to $[\lambda]$ in round $t$, which leads to $a_t(u)=a_t(v)$, we have $v\in \overline{M}_t(u)$ and vertex $u$ will select a $b_t(u)$ not equal to $b_t(v)$ and set $T_u[v]=1$. By now, we know that every neighbor $u$ of $v$ with $\phi_t(u)\in I_2$ has either $a_t(u)\neq a_t(v)$ or $b_t(u)\neq b_t(v)$. Moreover, every neighbor $u$ with $\phi_t(u)\in I_2$ and $a_t(u)=a_t(v)$ has $T_v[u]+T_u[v]\neq 0$. Since vertex $v$ reduces its $a$ value in round $t$, by the description of the algorithm, we have $|M_t(v)|\leq \Delta^{1/4}$. Since vertex $v$ is in a proper state in round $t-1$ (otherwise it cannot be the case that $\phi_{t-1}(v)\in I_2$), we have that the number of neighbors $u$ with $T_v[u]=1$ is bounded by $\Delta^{1/4}$ at the end of round $t-1$; that is, $|\overline{M}_t(v)|\leq \Delta^{1/4}$. Therefore, by the end of round $t$, the number of neighbors of $v$ with $T_v[u]=1$ is bounded by $|M_t(v)\cup \overline{M}_t(v)|\leq |M_t(v)| + |\overline{M}_t(v)| \leq 2\cdot \Delta^{1/4}$. At this point, we conclude that, in \textsc{Scenario I}, at the beginning of round $t+1$, the error-checking procedure in \Cref{alg:self-stabilizing-phase2-stage3} will not reset vertex $v$'s color. \smallskip\textsc{Scenario II}: vertex $v$ is in some proper state at the beginning of round $t$ and has $\phi_{t-1}(v)\in I_2$ and $a_{t-1}(v)< \lambda$. In this case, it keeps its $a$ value, $b$ value, and vector $T_v$ unchanged in round $t$. For any neighbor $u\in N(v)$ with $a_t(v)=a_t(u)$, vertex $u$ must be in some proper state at the beginning of round $t$ and either satisfies $a_{t-1}(u)=a_{t-1}(v)<\lambda$ or reduces its $a$ value from $[\lambda,\lambda^2)$ to $[\lambda]$ in round $t$.
\begin{itemize} \item For a neighbor $u\in N(v)$ that is in some proper state at the beginning of round $t$ and satisfies $a_{t-1}(u)=a_{t-1}(v)<\lambda$: we have $a_{t-1}(v)=a_t(v)=a_t(u)=a_{t-1}(u)$, $b_{t-1}(u)=b_t(u)$, and $T_u$ stays unchanged in round $t$. Since vertex $v$ is in a proper state in round $t-1$ (otherwise it cannot be the case that $\phi_{t-1}(v)\in I_2$), we have $b_{t-1}(u)\neq b_{t-1}(v)$ and $T_v[u]+T_u[v]\neq 0$ at the beginning of round $t$. Thus, we have $b_t(u)=b_{t-1}(u)\neq b_{t-1}(v)=b_{t}(v)$, and $T_u[v]+T_v[u]\neq 0$ still holds at the end of round $t$. \item For a neighbor $u\in N(v)$ that is in some proper state at the beginning of round $t$ and reduces its $a$ value from $[\lambda,\lambda^2)$ to $[\lambda]$ in round $t$: we have $v\in M_t(u)$. By an analysis similar to \textsc{Scenario I} but from the perspective of $u$, vertex $u$ will select a $b$ value different from $b_t(v)$ and set $T_u[v]=1$. \end{itemize} Lastly, since $v$ is in a proper state in round $t$, and since vector $T_v$ stays unchanged in round $t$, we know the number of neighbors with $T_v[u]=1$ at the end of round $t$ is still bounded by $2\cdot \Delta^{1/4}$. At this point, we conclude that, in \textsc{Scenario II}, at the beginning of round $t+1$, the error-checking procedure in \Cref{alg:self-stabilizing-phase2-stage3} will not reset vertex $v$'s color. \end{proof} We continue to consider vertices with color values in $I_3$ by the end of round $T_0+1$. \begin{lemma}\label{lemma:self-stabilizing-correctness-I3} If $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices, then for every round $t\geq T_0+1$, for every vertex $v$ with $\phi_{t}(v)\in I_3$, the error-checking procedure will not reset vertex $v$'s color in the next round. \end{lemma} \begin{proof} According to the algorithm description, a vertex $v$ has $\phi_t(v)\in I_3$ if and only if it is in some proper state at the beginning of round $t$ and has either ``$\phi_{t-1}(v)\in I_2$ and transforms to $I_3$ in round $t$'' or ``$\phi_{t-1}(v)\in I_3$''. \begin{itemize} \item \textsc{Scenario I}: vertex $v$ is in some proper state at the beginning of round $t$ with $\phi_{t-1}(v)\in I_2$ and transforms to $I_3$ in round $t$. In this case, any neighbor $u\in N(v)$ with $\phi_t(u)\in I_3$ must be in some proper state at the beginning of round $t$. Since faults no longer occur in round $t$, by an identical argument as in the proof of \Cref{lemma:phase2-stage3-proper-color}, it holds that $\phi_t(u)\neq \phi_t(v)$. \item \textsc{Scenario II}: vertex $v$ is in some proper state at the beginning of round $t$ with $\phi_{t-1}(v)\in I_3$. Any neighbor $u\in N(v)$ that may have $\phi_t(u)\in I_3$ must be in some proper state at the beginning of round $t$. Moreover, either ``$\phi_{t-1}(u)\in I_2$ and vertex $u$ transforms its color to $I_3$ in round $t$'' or ``$\phi_{t-1}(u)\in I_3$''. In the former case, by an analysis similar to \textsc{Scenario I} (but swapping the roles of $u$ and $v$), it holds that $\phi_t(u)\neq \phi_t(v)$. In the latter case, we have $\phi_{t-1}(u)\neq\phi_{t-1}(v)$ since $u,v$ are both in proper states at the beginning of round $t$. Moreover, by \Cref{alg:self-stabilizing-phase3}, in round $t$, at most one of $u,v$ will change its color, and the updated color of that vertex will not conflict with the other vertex; thus $\phi_t(u)\neq \phi_t(v)$.
\end{itemize} We conclude that at the beginning of round $t+1$, the error-checking procedure in \Cref{alg:self-stabilizing-phase3} will not reset vertex $v$'s color. \end{proof} The following lemma is the last missing piece before we can prove \Cref{lemma:self-stabilizing-correctness}. \begin{lemma}\label{lemma:self-stabilizing-correctness-color-in-I123} If $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices, then for every round $t\geq T_0+1$, for every vertex $v$, it holds that $\phi_t(v)\in I_1\cup I_2\cup I_3$. \end{lemma} \begin{proof} If vertex $v$ finds itself in some improper state at the beginning of round $t$, then it resets itself by setting $\phi_{t}(v)=\ell_3+\ell_2+\sum_1^{r^*} n_i+id(v)\in I_1$. Otherwise, we have vertex $v$ in a proper state with its color in $I_1\cup I_2\cup I_3$. By the description of the algorithm, if no error occurs in round $t$, for vertex $v$ with $\phi_{t-1}(v)\in I_1$, it has $\phi_{t}(v)\in I_1\cup I_2$; for vertex $v$ with $\phi_{t-1}(v)\in I_2$, it has $\phi_{t}(v)\in I_2\cup I_3$; for vertex $v$ with $\phi_{t-1}(v)\in I_3$, it has $\phi_{t}(v)\in I_3$. This completes the proof of the lemma. \end{proof} At this point, it is easy to see that \Cref{lemma:self-stabilizing-correctness-I1}, \Cref{lemma:self-stabilizing-correctness-I2-large-a}, \Cref{lemma:self-stabilizing-correctness-I2-small-a}, \Cref{lemma:self-stabilizing-correctness-I3}, and \Cref{lemma:self-stabilizing-correctness-color-in-I123} together immediately lead to \Cref{lemma:self-stabilizing-correctness}. \subsubsection{Stabilization time} To analyze the time cost of our self-stabilizing algorithm, which is summarized in \Cref{lemma:self-stabilizing-time-complexity}, we take a similar approach to the analysis of the locally-iterative algorithm. Specifically, we will show that once the adversary stops disrupting algorithm execution, the maximum amount of time for vertices to progress through each phase/stage is limited, resulting in a bounded stabilization time. We begin with the Linial phase and the transition-in stage. \begin{lemma}\label{lemma:self-stabilizing-time-complexity-I1} If $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices, then for every round $t\geq T_0+r^*+2$, every vertex $v$ has $\phi_t(v)\in I_2\cup I_3$. \end{lemma} \begin{proof} By the correctness guarantee provided by \Cref{lemma:self-stabilizing-correctness}, we have that for every round from $T_0+2$ onward, at the beginning of that round, every vertex $v$ has its color in $I_1\cup I_2\cup I_3$ and is in a proper state. Hence, by the algorithm description, in a round $t'\geq T_0+2$, every vertex $v$ with $\phi_{t'-1}(v)\in I_1^{(j)}$ computes its new color $\phi_{t'}(v)\in I_1^{(j+1)}$, where $j\in[r^*]$; every vertex $v$ with $\phi_{t'-1}(v)\in I_1^{(r^*)}$ computes its new color $\phi_{t'}(v)\in I_2$; and every vertex $v$ with $\phi_{t'-1}(v)\in I_2\cup I_3$ computes its new color $\phi_{t'}(v)\in I_2\cup I_3$. Now, by an induction on $k$ from $0$ to $r^*$ (both inclusive), it is easy to see that, by the end of round $T_0+1+k$, for any vertex $v$, its color $\phi_{T_0+1+k}(v)$ is in: $$\left(\bigcup_{i=k}^{r^*} I_1^{(i)}\right) \cup I_2\cup I_3.$$ Therefore, for every vertex $v$, it holds that $\phi_{T_0+1+r^*}(v)\in I_1^{(r^*)}\cup I_2\cup I_3$. After one more round, for every vertex $v$, it holds that $\phi_{T_0+2+r^*}(v)\in I_2\cup I_3$. \end{proof} Next, we consider the core stage and the transition-out stage.
\begin{lemma}\label{lemma:self-stabilizing-time-complexity-I2-part1} Assume $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices. For every vertex $v$, let $t^*_v\geq T_0+r^*+2$ be the smallest round number such that either ``$\phi_{t^*_v}(v)\in I_2$ and $a_{t^*_v}(v)\in [\lambda]$'' or ``$\phi_{t^*_v}(v)\in I_3$'' is satisfied. Then, it holds that $t^*_v\leq T_0+r^*+3+\lambda$. \end{lemma} \begin{proof} By \Cref{lemma:self-stabilizing-time-complexity-I1}, every vertex $v$ has $\phi_{T_0+r^*+2}(v)\in I_2\cup I_3$. If ``$\phi_{T_0+r^*+2}(v)\in I_2$ and $a_{T_0+r^*+2}(v)\in [\lambda]$'' or ``$\phi_{T_0+r^*+2}(v)\in I_3$'', then we are already done. Otherwise, vertex $v$ has ``$\phi_{T_0+r^*+2}(v)\in I_2$ and $a_{T_0+r^*+2}(v)\in [\lambda,\lambda^2)$'' and runs \Cref{alg-line:self-stabilizing-phase2-stage2-start} to \Cref{alg-line:self-stabilizing-phase2-stage2-end} of \Cref{alg:self-stabilizing-phase2-stage2} from round $T_0+r^*+3$ to $t_v^*$. In this case, we use the same proof strategy as in the proof of \Cref{lemma:phase2-stage2-time-complexity} (see \cpageref{proof:lemma:phase2-stage2-time-complexity}). Specifically, the first claim and the second claim in that proof still hold with an offset of $(T_0+1)$ on the round number. Combining the two claims, we know that starting from round $T_0+r^*+3$, within $\lambda$ rounds, there must exist a round $t$ in which the reduction condition $|M_t(v)|\leq \Delta$ is satisfied. As a result, by the end of round $t\leq T_0+r^*+3+\lambda$, we have $a_t(v)\in [\lambda]$ and $t^*_v=t$. \end{proof} \begin{lemma}\label{lemma:self-stabilizing-time-complexity-I2-part2} Assume $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices. For every vertex $v$, let $t^{\#}_v> T_0+r^*+\lambda+3$ be the smallest round number such that $\phi_{t^{\#}_v}(v)\in I_3$. Then, it holds that $t^{\#}_v\leq T_0+r^*+3+4\lambda$. \end{lemma} \begin{proof} By \Cref{lemma:self-stabilizing-time-complexity-I2-part1}, every vertex $v$ has either ``$\phi_{T_0+r^*+\lambda+3}(v)\in I_2$ and $a_{T_0+r^*+\lambda+3}(v) \in [\lambda]$'' or ``$\phi_{T_0+r^*+\lambda+3}(v)\in I_3$''. If $\phi_{T_0+r^*+\lambda+3}(v)\in I_3$, then $t^{\#}_v\leq T_0+r^*+4\lambda+3$ holds trivially and we are done. So, assume this is not the case. Consider a vertex $v$ with $\phi_{T_0+r^*+\lambda+3}(v)\in I_2$ and $a_{T_0+r^*+\lambda+3}(v) \in [\lambda]$, and let $t^-_v>T_0+r^*+\lambda+3$ be the smallest round number such that every $u\in N(v)$ with $\phi_{t^-_v-1}(u)\in I_2$ has $a_{{t^-_v}-1}(v)\leq a_{{t^-_v}-1}(u)<\lambda$ or $\phi_{{t^-_v}-1}(u)\in I_3$. (That is, the ``if'' condition in \cref{alg-line:self-stabilizing-phase2-stage3-if-cond-2} of \Cref{alg:self-stabilizing-phase2-stage3} is first satisfied in round $t^-_v$.) Further define $t^+_v\geq t^-_v$ to be the smallest round number such that either ``$\phi_{t^+_v}(v)\in I_2$ and $d_{t^+_v}(v)\neq \mu$'' or ``$\phi_{t^+_v}(v)\in I_3$''. By definition and the algorithm description, we have $t^*_v\leq t^-_v\leq t^+_v\leq t^{\#}_v$. Moreover, if faults no longer occur, it is easy to verify that once the ``if'' condition in \cref{alg-line:self-stabilizing-phase2-stage3-if-cond-2} of \Cref{alg:self-stabilizing-phase2-stage3} is satisfied for vertex $v$ in round $t^-_v$, then it is satisfied in any round $t> t^-_v$ as long as $\phi_{t-1}(v)\in I_2$.
To prove the lemma, we prove a stronger claim: for any vertex $v\in V$ with $\phi_{T_0+r^*+\lambda+3}(v)\in I_2$ and $a_{T_0+r^*+\lambda+3}(v) \in [\lambda]$, it holds that $t^-_v\leq T_0+r^*+\lambda +1+ 3(a_{T_0+r^*+\lambda+3}(v)+1)$, $t^+_v\leq T_0+r^*+\lambda +2+ 3(a_{T_0+r^*+\lambda+3}(v)+1)$, and $t^{\#}_v\leq T_0+r^*+\lambda +3 + 3(a_{T_0+r^*+\lambda+3}(v)+1)$. We prove the claim via an induction on the value of $a$ at the end of round $T_0+r^*+\lambda+3$, which is in $[\lambda]$. For the base case, fix a vertex $w$ with the minimum $a$ value at the end of round $T_0+r^*+\lambda+3$. By \Cref{lemma:self-stabilizing-time-complexity-I2-part1} and the algorithm description, every vertex $v\in V$ has either ``$\phi_{T_0+r^*+\lambda+3}(v)\in I_2$ and $a_{T_0+r^*+\lambda+3}(v) \in [\lambda]$'' or ``$\phi_{T_0+r^*+\lambda+3}(v)\in I_3$''. Since $w$ has the minimum $a$ value at the end of round $T_0+r^*+\lambda+3$, we know $t^-_w=T_0+r^*+\lambda+4$. In round $t^-_w$, there are three potential cases: \begin{itemize} \item Case 1: $d_{t^-_w-1}(w)=\mu$. Then vertex $w$ selects a value $d$ in round $t^-_w$, and we have $t^+_w=t^-_w$. \item Case 2: $d_{t^-_w-1}(w)\neq\mu$ and the ``if'' condition in \cref{alg-line:self-stabilizing-phase2-stage3-proper-d} of \Cref{alg:self-stabilizing-phase2-stage3} is satisfied. Then, vertex $w$ sets $d_{t^-_w}(w)=\mu$, and in round $t^-_w+1$ it will select a new $d$ value not equal to $\mu$. Thus, we have $t^+_w=t^-_w+1$ in this case. \item Case 3: $d_{t^-_w-1}(w)\neq \mu$ and the ``if'' condition in \cref{alg-line:self-stabilizing-phase2-stage3-proper-d} of \Cref{alg:self-stabilizing-phase2-stage3} is not satisfied. Then, in round $t^-_w$, vertex $w$ either transforms its color to $I_3$ or stays in $I_2$. In both cases, we have $t^+_w=t^-_w$. \end{itemize} Before proceeding, we prove an auxiliary claim, which intuitively states that once there are no errors, resetting $d$ to $\mu$ (i.e., \cref{alg-line:self-stabilizing-phase2-stage3-reset-d} of \Cref{alg:self-stabilizing-phase2-stage3}) occurs at most once for each vertex. \begin{claim-inline}\label{claim:self-stabilizing-phase2-stage3-resetting-d} For any round $t>t^+_w$ with $\phi_{t-1}(w)\in I_2$, it holds that $|L_{(t-1,d_{t-1}(w))}(w)\cap L'_{t-1}(w)|\leq \Delta/\mu$. \end{claim-inline} \begin{proof} We prove by induction on $t$, and we begin with the base case $t=t^+_w+1$. \begin{itemize} \item In case 1 and case 2, vertex $w$ selects a new $d$ value in round $t^+_w=t-1$. By \cref{alg-line:self-stabilizing-phase2-stage3-d-rule} in \Cref{alg:self-stabilizing-phase2-stage3} for setting $d_{t-1}(w)$ and the pigeonhole principle, we have $|L_{(t-1,d_{t-1}(w))}(w)\cap L_{t-2}(w)|\leq\Delta/\mu$. Since $L_{t-2}'(w)\subseteq L_{t-2}(w)$, we have $|L_{(t-1,d_{t-1}(w))}(w)\cap L'_{t-2}(w)|\leq\Delta/\mu$. Observing that some neighbors of $w$ may map their colors from $I_2$ to $I_3$ in round $t-1$, we continue by proving that $L'_{t-2}(w)=L'_{t-1}(w)$. Consider such a neighbor $u$ of $w$; it must have $a_{t-1}(w)= a_{t-1}(u)$, as being able to map its color from $I_2$ to $I_3$ means the ``if'' condition in \cref{alg-line:self-stabilizing-phase2-stage3-if-cond-2} of \Cref{alg:self-stabilizing-phase2-stage3} is satisfied for $u$ in round $t-1$. Since vertex $w$ is in proper state, we have $T_w[u]+T_u[w]\neq 0$ at the beginning of round $t$. For the case $T_w[u]\neq 0$, although $u$ has a color in $I_3$, its new color will not be in $L_{t-1}'(w)$ as $T_w[u]\neq 0$.
For the case $T_u[w]\neq 0$, since $d_{t-2}(w)=\mu$, the ``if'' condition at \cref{alg-line:self-stabilizing-phase2-stage3-if-cond-5} of \Cref{alg:self-stabilizing-phase2-stage3} will not be satisfied for $u$ in round $t-1$, meaning $u$ cannot map its color from $I_2$ to $I_3$ in round $t-1$. Thus we have $L'_{t-2}(w)=L'_{t-1}(w)$ and $|L_{(t-1,d_{t-1}(w))}(w)\cap L'_{t-1}(w)|= |L_{(t-1,d_{t-1}(w))}(w)\cap L'_{t-2}(w)| \leq \Delta/\mu$. \item In case 3, for vertex $w$, the ``if'' condition in \cref{alg-line:self-stabilizing-phase2-stage3-proper-d} of \Cref{alg:self-stabilizing-phase2-stage3} is not satisfied in round $t^+_w=t-1$, thus $|L_{(t-2,d_{t-2}(w))}(w)\cap L_{t-2}'(w)|\leq \Delta/\mu$. Since $d_{t-1}(w)=d_{t-2}(w)$, we have $L_{(t-2,d_{t-2}(w))}(w) = L_{(t-1,d_{t-1}(w))}(w)$. Consider a neighbor $u$ of $w$ that maps its color from $I_2$ to $I_3$ in round $t-1$; it must be the case that $a_{t-1}(w)= a_{t-1}(u)$. We continue by proving that $\phi_{t-1}(u)$ either is not in $L_{(t-1,d_{t-1}(w))}(w)$ or is not in $L_{t-1}'(w)$. Since vertex $w$ is in proper state, we have $T_w[u]+T_u[w]\neq 0$ at the beginning of round $t$. For the case $T_w[u]\neq 0$, although $u$ has a color in $I_3$, its new color will not be in $L_{t-1}'(w)$ as it has $T_w[u]\neq 0$. For the case $T_u[w]\neq 0$, the color in $I_3$ selected by $u$ in round $t-1$ is not in $L_{(t-2,d_{t-2}(w))}(w)=L_{(t-1,d_{t-1}(w))}(w)$. Thus we have $|L_{(t-1,d_{t-1}(w))}(w)\cap L'_{t-1}(w)|= |L_{(t-2,d_{t-2}(w))}(w)\cap L'_{t-2}(w)| \leq \Delta/\mu$. \end{itemize} By now we have proved the base case. Since the inductive step can be proved by the same argument as in case 3, we conclude that the claim is true. \end{proof} We resume the lemma proof. Due to the above claim, we know that for any round $t>t^+_w$ with $\phi_{t-1}(w)\in I_2$, it holds that $d_{t-1}(w)\neq \mu$ and $|L_{(t-1,d_{t-1}(w))}(w)\cap L'_{t-1}(w)|\leq \Delta/\mu$, and the $d$ value of $w$ will not change anymore. Now, recall that vertex $w$ has the minimum $a$ value at the end of round $T_0+r^*+\lambda+3$, and that $t^-_w=T_0+r^*+\lambda+4$ and $t^+_w\leq t^-_w+1= T_0+r^*+\lambda+5$. In round $T_0+r^*+\lambda+6$, if $\phi_{T_0+r^*+\lambda+5}(w)\in I_2$, then the ``if'' conditions in \cref{alg-line:self-stabilizing-phase2-stage3-if-cond-2} and \cref{alg-line:self-stabilizing-phase2-stage3-if-cond-5} of \Cref{alg:self-stabilizing-phase2-stage3} will be satisfied, and the ``if'' conditions in \cref{alg-line:self-stabilizing-phase2-stage3-if-cond-3} and \cref{alg-line:self-stabilizing-phase2-stage3-if-cond-4} of \Cref{alg:self-stabilizing-phase2-stage3} will not be satisfied. As a result, by \Cref{claim:self-stabilizing-phase2-stage3-bounded-k}, vertex $w$ will obtain a new color $\phi_{T_0+r^*+\lambda+6}(w)\in I_3$. Hence, we have $t^{\#}_w\leq T_0+r^*+\lambda+6$. This completes the proof for the base case. Assume our claim holds for every vertex $v$ with $a_{T_0+r^*+\lambda+3}(v)\leq i$, where $i \in [\lambda-1]$. Consider a vertex $w$ with $a_{T_0+r^*+\lambda+3}(w)=i+1$. By the induction hypothesis, for every vertex $v$ with $a_{T_0+r^*+\lambda+3}(v)\leq i$, it holds that $t^{\#}_v\leq T_0+r^*+\lambda +3 + 3(a_{T_0+r^*+\lambda+3}(v)+1)\leq T_0+r^*+\lambda +3 + 3(i+1) = T_0+r^*+\lambda + 3((i+1)+1)$. Then, we have $t^-_w\leq T_0+r^*+\lambda + 3((i+1)+1) + 1$. Applying the same argument as in the base case, we have $t^+_w\leq T_0+r^*+\lambda + 3((i+1)+1) + 2$ and $t^{\#}_w\leq T_0+r^*+\lambda + 3((i+1)+1) + 3$. This completes the proof for the inductive step.
We conclude that for every vertex $v$ with $\phi_{T_0+r^*+\lambda+3}(v)\in I_2$ and $a_{T_0+r^*+\lambda+3}(v) \in [\lambda]$, it holds that $t^{\#}_v\leq T_0+r^*+4\lambda +3$. This completes the proof of this lemma. \end{proof} We can now prove \Cref{lemma:self-stabilizing-time-complexity}, which bounds the stabilization time of our algorithm. \begin{proof}[Proof of \Cref{lemma:self-stabilizing-time-complexity}] We prove a more accurate time bound: if $T_0$ is the last round in which the adversary makes any changes to the RAM areas of vertices, then every vertex has its color in $[\Delta+1]$ by the end of any round $t\geq T_0+r^*+4\lambda+2+2(\sqrt{m_3}+1)\mu$. By \Cref{lemma:self-stabilizing-time-complexity-I2-part2}, by the end of round $T_0+r^*+4\lambda+3$, every vertex must have its color in $I_3$, and will run \Cref{alg:self-stabilizing-phase3} starting from round $T_0+r^*+4\lambda+4$. In each such round, by \Cref{lemma:self-stabilizing-correctness}, every vertex is in some proper state. Hence, by \Cref{alg:self-stabilizing-phase3}, if there still exists a vertex with color not in $[\Delta+1]$, then the maximum value of the color used by any vertex will be reduced by at least one. Recall that every vertex with color in interval $I_3$ has its color value in $[\ell_3]$, where $\ell_3=\Delta+2(\sqrt{m_3}+1)\cdot\mu$. Therefore, by the end of round $T_0+r^*+4\lambda+3+(\ell_3-(\Delta+1))=T_0+r^*+4\lambda+2+2(\sqrt{m_3}+1)\mu$, every vertex has its color in $[\Delta+1]$. Moreover, in any later round, by \Cref{alg:self-stabilizing-phase3}, the color of any vertex will remain in $[\Delta+1]$. \end{proof} \section{Conclusion}\label{sec:conclusion} In this paper, we give the first locally-iterative $(\Delta+1)$-vertex-coloring algorithm with sublinear-in-$\Delta$ running time. This algorithm can also be transformed into a self-stabilizing algorithm, achieving sublinear-in-$\Delta$ stabilization time. Looking ahead, a natural question to ask is: can we do better? Due to the trade-off between the runtime of the intermediate phase and the number of colors used in the coloring produced by the intermediate phase, $\Tilde{O}(\Delta^{3/4})+\log^*n$ might be the best bound our approach could attain. Nevertheless, the possibility that more elaborate tools or more clever techniques could result in faster algorithms still exists, and this is a very interesting direction worth further exploration. On the other hand, compared with the seminal work by Barenboim, Elkin, and Goldenberg~\cite{barenboim21}, our algorithm is more sophisticated and is not applicable in some settings in which the algorithms of \cite{barenboim21} work, such as the Bit-Round model. Finding a more elegant and ``natural'' sublinear-in-$\Delta$ locally-iterative coloring algorithm, perhaps one supporting more settings, is another direction for future research. \bibliographystyle{plainnat} \input{main.bbl} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In this study we consider atomic excitations which arise during the nuclear $\beta^{-}$ decay in light few-electron atoms. Our main goal is to determine numerically the corresponding final state probabilities, or, in other words, the absolute probabilities of formation of the final system(s) in certain bound and/or unbound states which arise after the nuclear $\beta^{-}$ decay in light few-electron atoms. A basic theoretical analysis of atomic excitations during the nuclear $\beta^{-}$ decay has been performed in our earlier works \cite{FrTa} and \cite{Fr98}. In this study we will not repeat all steps and arguments from those works. Instead, below we turn our attention to some new problems which have not been solved in earlier studies. Note only that our analysis and computations of atomic excitations are based on the sudden approximation \cite{Mig1}, \cite{Mig2}. In turn, the sudden approximation follows from the well-known experimental fact that the velocities of the emitted $\beta^{-}$ electrons are significantly larger than the usual velocities of atomic electrons. In many actual cases such velocities are close to the speed of light in vacuum, i.e. $v_{\beta} \approx c$. It follows from here that the emitted $\beta^-$ electron leaves the external shells of an atom in a time which is approximately $\tau_{\beta} \approx a_0 / c = \alpha \tau_a = \alpha \hbar^{3} / (m_e e^4)$, where $\alpha \approx \frac{1}{137}$ is the fine structure constant, $\hbar$ is the reduced Planck constant, $m_e$ is the electron mass (at rest), $a_0$ is the Bohr radius, $c$ is the speed of light in vacuum and $\tau_a = \hbar^{3} / (m_e e^4) \approx 2.418884 \cdot 10^{-17}$ $sec$ is the atomic time. For internal atomic/electron shells one also finds that $\tau_{\beta} \ll \tau_a$, since the passing time $\tau_{\beta}$ for the $\beta^-$ electron decreases with the radius of the electron shell. The general equation of the $\beta^-$ decay can be written in the form \begin{equation} Q \rightarrow (Q + 1)^{+} + e^{-} + \bar{\nu} \end{equation} where $Q$ is the nuclear charge of the incident nucleus, while $e^-$ and $\bar{\nu}$ are the emitted (fast) electron and the electron antineutrino, respectively. The emitted electron is usually very fast and its Lorentz $\gamma-$factor ($\gamma = E / m_e c^2$) is bounded between 2 and 15 - 18. In all actual cases, the nuclear $\beta^-$ decay proceeds in many-electron atoms/ions, rather than in bare nuclei. The arising atomic system with the nuclear charge $(Q + 1)^{+}$ is also a many-electron ion (or atom). Our main goal in this study is to determine the final state probabilities for this newly arising atomic system. Suppose that the incident atom was in one of its bound states, e.g., in the $A$-state. The final ion is formed in one of its states (bound or unbound), e.g., in the $B$-state. The aim of the theoretical analysis of nuclear $\beta^{\pm}$ decays in atomic systems is to evaluate the corresponding transition amplitude ${\cal A}_{AB} = \mid \langle A \mid B \rangle \mid$ and final state probability $p_{AB} = {\cal A}^2_{AB} = \mid \langle A \mid B \rangle \mid^2$. This problem has attracted significant theoretical attention (see, e.g., \cite{FrTa}, \cite{Finb}, \cite{Skoro}, \cite{Schw}), since various $\beta^{-}$ decaying nuclei are of great interest in various applications in modern technology, scientific research, nuclear medicine, etc.
For instance, the $\beta^{-}$ decaying isotope ${}^{131}$I (so-called `radioiodine') is extensively used in nuclear medicine, both diagnostically and therapeutically. Examples of its use in radiation therapy include the treatment of thyrotoxicosis and thyroid cancer. Diagnostic tests exploit the mechanism of absorption of iodine by the normal cells of the thyroid gland. Iodine-131 can also be used to destroy thyroid cells therapeutically. Other $\beta^{-}$ decaying isotopes of iodine are used (mainly as radioactive labels) in modern biology, physical and organic chemistry \cite{HaAd}. Another well-known $\beta^{-}$ decaying isotope is strontium-90. It finds extensive use in medicine and industry, as a radioactive source for thickness gauges and for the superficial radiotherapy of some cancers. Controlled amounts of ${}^{90}$Sr can be used in the treatment of bone cancer. The radioactive decay of strontium-90 generates a significant amount of heat. Strontium fluoride made of strontium-90 (${}^{90}$SrF$_2$) is widely used as a heat source in many remote thermoelectric generators, since it is much cheaper and less dangerous than the alternative source based on ${}^{238}$Pu. Strontium-90 is also used as a radioactive tracer in medicine and agriculture. The isotope ${}^{90}$Sr can be found in significant amounts in spent nuclear fuel, in radioactive waste from nuclear reactors and in nuclear fallout from nuclear tests. It is interesting to note that the fission product yield of ${}^{90}$Sr sharply depends upon the type of explosive nuclear (fission) device. A relatively large output of ${}^{90}$Sr in the nuclear fallout is a strong indication that the original nuclear explosive device was made from uranium-233 (or uranium-235), rather than from plutonium-239. Advanced nuclear explosive devices which contain substantial amounts of ${}^{245}$Cm/${}^{247}$Cm and/or ${}^{249}$Cf/${}^{251}$Cf produce significantly smaller yields of ${}^{90}$Sr than analogous devices made from ${}^{239}$Pu. A brief discussion of different applications of other $\beta^{-}$ decaying atoms can be found, e.g., in \cite{FrTa} (see also \cite{HaAd}). Note that for ${}^{131}$I, ${}^{90}$Sr and for many other $\beta^{-}$ decaying isotopes/atoms our knowledge about the final (or post-decay) atomic states is far from complete, since in almost all cases we cannot determine the final state probabilities. Currently, for some of the $\beta^{-}$ decaying atoms we can only predict approximate probabilities to find the final ions/atoms in their ground state(s). Analogous evaluations of the probability to form the first excited (bound) states and of the total probability of electron ionization are very approximate. Probabilities to form other excited states, including various unbound states, in the final atomic systems have never been evaluated (even approximately) for $\ge$ 99.9 \% of all $\beta^{-}$ decaying atoms. The goal of this and following studies is to correct such a situation, at least for some light atoms. In general, the results of experiments in which the final state probabilities for $\beta^{\pm}$ decaying atoms and molecules are measured can be considered as a very serious quantitative test for the modern theory of electron density distribution in atoms and molecules. Formally, the current theory of $\beta^{-}$ decay in atoms (and molecules) is self-consistent and does not include any unsolved problems.
The main difficulties of the current theoretical evaluations are related to the relatively low accuracy of the wave functions used in the calculations. For instance, in \cite{FrTa} we have calculated a large number of probabilities for the `ground-state to ground-state' transitions. In fact, such probabilities are now known for all atoms from He up to Ar \cite{FrTa}. However, analogous calculations of the `ground-state to excited-states' probabilities are significantly more difficult to perform, since for many atoms/ions we do not have sufficiently accurate wave functions of the excited states. As a result, the computed values of the final state probabilities for the excited states are not accurate. Furthermore, these values often oscillate as the number of basis functions increases. Analogous computations for the $\beta^{+}$ decays in atoms are even more complicated. In particular, it is very hard to determine the final state probabilities accurately if a negatively charged ion is formed as the result of the atomic $\beta^{+}$ decay. In such cases one needs to use highly accurate methods which are specifically designed for accurate computations of negatively charged ions. In this study we have developed such a method, and this allows us to determine the final state probabilities in those cases when negatively charged ions are formed after the nuclear $\beta^{+}$ decays in some few-electron atoms and ions. The probabilities to form bound negatively charged ions which are computed below have never been determined in earlier studies. Another interesting problem which has never been discussed is the emission of fast secondary electrons during nuclear $\beta^{\pm}$ decays in many-electron atoms and molecules. The present work has the following structure. In the next Section we discuss a few numerical methods which are used to determine the bound state wave functions of the incident and final states in few-electron atoms and ions. Section III contains a brief discussion of the final state probabilities computed for some $\beta^{-}$ decaying light atoms. Here we consider the He, Li and Be atoms. Our present analysis is extensive and includes a few excited states in each of the final ions. In Section IV we determine the `ground state to ground state' and `excited state to ground state' transition probabilities for the $\beta^{+}$ decay in some light atoms. The final atomic system in this case is a negatively charged ion. Emission of fast secondary electrons (or $\delta-$electrons) during the nuclear $\beta^{\pm}$ decay in atoms is considered in Section V. The concluding remarks can be found in the last Section. \section{Method} Let us assume that we have an $N-$electron atom which is described by its bound state wave function $\Psi_i$, i.e. $H_{0} \Psi_i = E_i \Psi_i$, where $H_{0}$ is the atomic Hamiltonian (see, e.g., \cite{LLQ}), $E_i$ is the corresponding eigenvalue (or total energy, for short) and $\Psi_i$ is the eigenfunction of the incident bound state which has a finite norm, i.e. $\mid \Psi_i \mid^2$ = 1. Consider now a sudden change of the Hamiltonian of the atomic system. By a sudden change we mean that the change in the original Hamiltonian $H_{0}$ occurs in a time which is very short compared with the periods of (atomic) transitions from the given state $i$ to other states. The electron density distribution and the corresponding wave function cannot change during such a short time and remain the same as before the perturbation.
This means that after such a process we find the new atomic system with the new Hamiltonian $H_f$, but with the old electron density distribution. Such an electron density distribution is described by the old wave function $\Psi_i$. The new Hamiltonian $H_f$ has a complete system of eigenfunctions, i.e. $H_f \Phi^{(k)}_f = E_k \Phi^{(k)}_f$. Therefore, at the final stage of the process we have only states with the wave functions $\Phi^{(k)}_f$. The incident wave function $\Psi_i$ is now represented in the form of an expansion $\Psi_i = \sum_{k} A_k \Phi^{(k)}_f$, where the coefficients $A_k$ can be considered as the transition (probability) amplitudes. The corresponding probabilities $p_k = \mid A_k \mid^2$ determine the probability to detect the final system in its state $\Phi^{(k)}_f$, if the initial state of the system was described by the wave function $\Psi_i$. Note that the system of notations used here corresponds to the case of discrete spectra in both the incident and final atomic systems. In general, the expansion $\Psi_i = \sum_{k} A_k \Phi^{(k)}_f$ must contain different parts which represent the discrete and continuous spectra, respectively. Thus, to determine the probability amplitudes $A_k$ we need to compute the overlap integrals between the two $N-$electron wave functions $\Psi_i$ and $\Phi^{(k)}_f$ for different $k$, i.e. \begin{equation} A_k = \int \Psi^{*}_i({\bf r}_1, \ldots, {\bf r}_N) \Phi^{(k)}_f({\bf r}_1, \ldots, {\bf r}_N) d^3{\bf r}_1 \cdot \ldots \cdot d^3{\bf r}_N \label{Int} \end{equation} In general, this value is complex, but the corresponding probabilities $p_k = \mid A_k \mid^2$ are always real and their values are bounded between 0 and 1. As follows from Eq.(\ref{Int}), any of the final states must have the same $L$ and $S$ quantum numbers as the incident state. Here and everywhere below the notation $L$ designates the angular (electron) momentum of the atom, while the notation $S$ denotes the total electron spin. Note that the $L$ and $S$ quantum numbers are used in the non-relativistic $LS$-classification scheme which is appropriate for light atoms and ions. Briefly, we can say that the angular (electron) momentum ${\bf L}$ of the atom and its total electron spin ${\bf S}$ are conserved during the nuclear $\beta^{-}$ decay. This means that the original problem of determining the final state probabilities in the case of $\beta^{-}$ decay in atoms is reduced to the construction of highly accurate wave functions for the incident and final states with the same $L$ and $S$ quantum numbers. In addition to these two quantum numbers, the spatial parity of the incident wave function is also conserved. The conservation of the angular (electron) momentum $L$ and total electron spin $S$ of the atom during the nuclear $\beta^{-}$ decay follows directly from the perturbation theory. In fact, these conservation rules are not fundamental, i.e. they are obeyed only in the lowest order approximations upon $\alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137}$, where $\alpha$ is the fine structure constant. It can be shown that in higher order approximations upon $\alpha$ the $L$ and $S$ quantum numbers are not conserved (see discussion in Section V below). The leading correction to the non-relativistic results (i.e.
to the final state probabilities) is $\approx \alpha^2 (\alpha Q)^2$, where $Q$ is the electric nuclear charge (in atomic units). In light atoms such a correction is very small, $\approx \alpha^4$, and can be ignored. In heavy atoms with $Q \approx 100$ the overall contribution of this correction is substantially larger, but these atoms are not considered in this work. \subsection{Variational wave functions} Numerical evaluations of the overlap integral, Eq.(\ref{Int}), require the knowledge of highly accurate wave functions of the incident and final atomic systems. To determine such wave functions for the ground and excited states of different atoms and ions, in this work we perform extensive calculations of few-electron atomic systems. Then, by using our accurate wave functions, we determine the corresponding transition amplitudes and the final state probabilities. This is the second step of our procedure. In this Section we discuss the methods used to construct highly accurate wave functions of few-electron atoms and ions. In general, the wave functions of the excited states which have the same symmetry as the ground state can be found as the solutions of the corresponding eigenvalue problem. The energies of the different bound states are calculated by optimizing the orbital exponents of the corresponding root of the eigenvalue equation. Furthermore, our wave functions are simultaneously the eigenfunctions of the angular momentum $\hat L^2$ and spin $\hat S^2$ operators, respectively. Therefore, these eigenfunctions can be used in numerical calculations of various bound state properties. The Slater orbitals are a natural basis for all atomic calculations. In this study we also use the basis of radial functions constructed from Slater orbitals. To perform numerical computations of few-electron atoms and ions in this study we apply the Hylleraas-Configuration Interaction method (Hy-CI) and the Configuration Interaction method (CI) with Slater orbitals. Both of these methods are included in our package of computer codes. The Hy-CI method, introduced by Sims and Hagstrom \cite{SimsJCP,Sims-Be}, combines the use of orbitals with higher angular momentum (as in the regular CI procedure) with the inclusion of the interelectronic distance $r_{ij}$ in the wave function (as for Hylleraas-type trial wave functions). The Hy-CI and CI wave functions for an $n$-electron system are defined as: \begin{equation} \Psi =\sum_{k=1}^NC_k\Phi _k,\qquad \Phi _k=\hat{O}(\hat{L}^2)\hat{\mathcal{A}}\phi _k\chi \label{wave} \end{equation} where $\Phi _k$ are symmetry adapted configurations, $N$ is the number of configurations and the constants $C_k$ are determined variationally. The operator $\hat{O}(\hat{L}^2)$ projects onto the proper spatial space, so that every configuration is an eigenfunction of the square of the angular momentum operator $\hat{L}^2$. $\hat{\mathcal{A}}$ is the $n$-particle antisymmetrization operator, and $\chi $ is the spin eigenfunction: \begin{equation} \chi =\left[ (\alpha \beta -\beta \alpha )...(\alpha \beta -\beta \alpha )\alpha \right] \label{spin} \end{equation} where for even-electron systems the last $\alpha$ spin function is omitted. The spatial parts of the basis functions are Hartree products of Slater orbitals: \begin{equation} \phi _k=r_{ij}^{\nu}\prod_{i=1}^n\phi _i(r_i,\theta _i,\varphi _i) \label{Hartree} \end{equation} where the power $\nu$ takes the values $0$ or $1$. For $\nu=0$ the wave function reduces effectively to a CI wave function.
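As an aside, the following minimal sketch shows how the linear coefficients $C_k$ in Eq.(\ref{wave}) are determined variationally: for a non-orthogonal basis one solves the generalized eigenvalue problem $HC = ESC$, whose ascending roots approximate the ground and excited state energies. The $2\times 2$ matrices below are purely hypothetical placeholders; in an actual CI or Hy-CI calculation their elements are the many-electron integrals discussed in this Section.

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Toy generalized eigenvalue problem H C = E S C for a non-orthogonal
# basis. H and S are HYPOTHETICAL 2x2 matrices; in a real calculation
# their elements are Hamiltonian and overlap integrals.
H = np.array([[-2.85, -0.40],
              [-0.40, -2.10]])   # Hamiltonian matrix (a.u.)
S = np.array([[ 1.00,  0.25],
              [ 0.25,  1.00]])   # overlap matrix (basis not orthogonal)

energies, C = eigh(H, S)  # ascending roots: ground state, excited state
print(energies)           # each root may be optimized w.r.t. exponents
\end{verbatim}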
The basis functions $\phi _k$ are products of Slater orbitals, defined as follows: \begin{equation} \phi (\mathbf{r}) =r^{n-1}e^{-\alpha r} Y_l^m(\theta ,\phi ) \label{Slater} \end{equation} where $Y_l^m(\theta ,\phi )$ are the spherical harmonics. The phases used in our definition of $Y_l^m(\theta ,\phi )$ correspond to the choice made by Condon and Shortley \cite{Condon}, i.e. \begin{equation} Y_l^m(\theta ,\phi )=(-1)^m\left[ \frac{2l+1}{4\pi }\frac{(l-m)!}{(l+m)!}\right] ^{1/2}P_l^m(\cos {\theta })e^{im\phi } \label{spherical} \end{equation} where $P_l^m(\cos {\theta })$ are the associated Legendre functions. The integrals occurring in our calculation are up to four-electron integrals in the Hy-CI method and two-electron integrals in the CI method. Expressions for all these integrals are given in Refs. \cite{Ruiz3e,Ruiz4e,Sims3e}. The calculation of the overlap between the wave functions of bound states requires only the usual two- and three-electron integrals. Currently, the non-relativistic total energy of the ground state of the helium atom is known to very high accuracy (up to 40 decimal digits) \cite{Nakatsuji-He}. Many excited $S$-, $P$-, $D$-, $F$-, etc, states in the two-electron helium atom have also been computed to high numerical accuracy (see, e.g., \cite{Nakatsuji-exc,Drake,Sims-exc}). The ground $1^1S$-state of the helium-like Li$^{+}$ ion (or ${}^{\infty}$Li$^{+}$ ion) has been determined to high accuracy \cite{Nakatsuji-He2,Frolov-Li+}, while the $2^1S, \ldots, 7^1S$ states in the Li$^{+}$ ion are known with significantly lower accuracy \cite{Perkins,Weiss,Pekeris}. Highly accurate calculations of the excited $S$-states above $7^1S$ in the Li$^{+}$ ion have never been performed. As a reference calculation in the case of helium-like two-electron atoms we start with a Hy-CI energy of the ${}^{\infty}$He atom of $-2.90372437699$ a.u. This energy was obtained with the use of 820 configurations and a basis set which included the $s$-, $p$-, $d$- and $f$-Slater orbitals $[18s,16p,16d,16f]$. This total energy has an uncertainty of less than $1 \cdot 10^{-9}$ $a.u.$ The `optimal' exponent $\alpha$ = 2.9814 has been obtained by optimizing 404 configurations constructed with a smaller basis $[11s,11p,11d,11f]$. The best Hy-CI energy obtained with a single exponent for the ground state of the He atom with the infinitely heavy nucleus is $-2.90372437701$ a.u. (974 configurations). All these calculations have been performed with the use of quadruple precision, or 30 decimal digits per computer word. Some special measures have been taken to avoid any linear dependence in this basis set. The total energies of different bound $n^1S-$states in the Li$^{+}$ ion are shown in Table I. In calculations of the overlap, which involve the wave functions of the He atom and the Li$^{+}$ ion, we have used the wave functions for the atom and the ion with the same number of terms. The orbital exponents of different states were always different. In fact, the orbital exponents of every excited state have been optimized at several stages and used for the larger basis (for more detail, see Table II). The optimal values of the exponents are shown in Table II. Every time a new exponent has been introduced in a series of calculations, a complete re-optimization has been made. Currently, our best calculations have been performed with $820$ configurations, but the optimal exponents have been obtained in calculations with a smaller basis $[14s,14p,14d,14f]$ ($680$ Hy-CI configurations).
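To make the radial integrals concrete, the sketch below evaluates the overlap of two normalized $1s$ Slater orbitals of the form of Eq.(\ref{Slater}), both numerically and in closed form. The exponents are illustrative values only (they are not the optimized exponents of Table II), and this one-dimensional radial integral is, of course, far simpler than the actual many-electron overlaps of Eq.(\ref{Int}).

\begin{verbatim}
import math
from scipy.integrate import quad

def slater_1s(zeta):
    """Normalized 1s Slater radial function 2 zeta^{3/2} exp(-zeta r)."""
    return lambda r: 2.0 * zeta ** 1.5 * math.exp(-zeta * r)

za, zb = 2.0, 3.0   # illustrative exponents, not the optimized values
Ra, Rb = slater_1s(za), slater_1s(zb)

# radial overlap <R_a|R_b> = int_0^inf R_a(r) R_b(r) r^2 dr
num, _ = quad(lambda r: Ra(r) * Rb(r) * r * r, 0.0, math.inf)
ana = 8.0 * (za * zb) ** 1.5 / (za + zb) ** 3  # closed form, 1s pairs
print(num, ana)  # both ~0.9406; the squared overlap is ~0.885
\end{verbatim}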
The use of a single exponent (considering double occupancy of the orbitals) for all configurations has been sufficient to obtain highly accurate energies. The total energies obtained in this study for the $2^1S$-, $3^1S-$ and $4^1S$-states of the Li$^{+}$ ion are the lowest values obtained to date. Note that our resulting wave functions derived after optimization are not orthogonalized. Therefore, the overlaps between configurations must be determined. In turn, this problem is reduced to the numerical calculation of the overlap integrals. The symmetry adapted configurations have been constructed for $S$-symmetry as $s(1)s(2)$, $s(1)s(2)r_{12}$, $p(1)p(2)$, $p(1)p(2)r_{12}$, $d(1)d(2)r_{12}$, $f(1)f(2)$ and $f(1)f(2)r_{12}$. Using the short notation, e.g., $p_0(1)p_0(2)=p_0p_0$, $p_{1}(1)p_{-1}(2)=p_{1}p_{-1}$, etc, we can write the symmetry adapted configurations $pp$, $dd$ and $ff$ in the form: \begin{eqnarray} pp &=&p_0p_0-p_1p_{-1}-p_{-1}p_1 \nonumber \\ dd &=&d_0d_0-d_1d_{-1}-d_{-1}d_1+d_2d_{-2}+d_{-2}d_2 \nonumber \\ ff &=&f_0f_0-f_1f_{-1}-f_{-1}f_1+f_2f_{-2}+f_{-2}f_2-f_3f_{-3}-f_{-3}f_3 \label{Eqq8} \end{eqnarray} In Table II we also show the convergence of the energy with respect to several truncated wave function expansions. The exponents used in every calculation are given explicitly for each state. It was observed that for the determination of higher excited states diffuse functions are needed and the wave function expansions become larger. The total energies of the first four excited states can be determined to an accuracy better than $\pm 1 \cdot 10^{-6}$ $a.u.$ However, this accuracy rapidly decreases for highly excited states. The value of the calculated overlap integral, which includes the excited states of Li$^{+}$ and the ground state of the He atom, substantially depends upon the overall accuracy of the calculated energy. For low-lying states we have determined the overlaps with an overall accuracy of $\approx$ 4-5 stable decimal digits. For higher states this accuracy decreases, but the absolute values of the overlaps become very small and tend to zero. In calculations of the Li and Be atoms and the corresponding isoelectronic ions Be$^+$ and Li$^{-}$ we have used wave functions constructed with the use of the L-S Configuration Interaction method. In the CI calculations we have used double precision, which was sufficient for our purposes. This was checked by performing analogous calculations in quadruple precision. Calculations with double precision are significantly faster. The method used for the calculations and the optimization of the orbital exponents is very similar to the method used above for two-electron systems. For the three-electron systems Li and Be$^+$ we use the Full Configuration Interaction method (FCI). Therefore, in such cases no configurations have been either selected or eliminated. We have used a set of $s$-, $p$- and $d$-Slater orbitals and two exponents, considering double occupancy of the orbitals. The exponents are the same for all configurations. We have optimized the exponents for the smaller basis used, i.e. $n=3$ $[3s,2p,1d]$ or $n=4$, and employed them in the calculations with the larger basis sets $n=5,6$. Eventually, the exponents in the larger calculations are also optimized. The configurations are symmetry adapted and constructed by combining the two-electron configurations of Eq.(\ref{Eqq8}) with one $s$-orbital. These configurations are $sss$, $spp$, $pps$, $sdd$ and $dds$.
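The signs in Eq.(\ref{Eqq8}) follow from coupling two one-electron orbitals of angular momentum $l$ to a total $L=0$ state. As a check, the short sketch below rebuilds the $pp$ and $dd$ combinations from the Clebsch-Gordan coefficients $\langle l\,m;\, l\,{-m} \mid 0\,0\rangle$; the result agrees with Eq.(\ref{Eqq8}) up to an overall normalization and sign.

\begin{verbatim}
from sympy import S
from sympy.physics.quantum.cg import CG

# Rebuild the S-symmetry (L = 0) two-electron combinations from the
# Clebsch-Gordan coefficients <l m; l -m | 0 0>.
def l0_combination(l, label):
    terms = []
    for m in range(-l, l + 1):
        c = CG(S(l), S(m), S(l), S(-m), S(0), S(0)).doit()
        terms.append(f"({c}) {label}_{m} {label}_{-m}")
    return " + ".join(terms)

print(l0_combination(1, "p"))  # coefficients (-1)^(1-m)/sqrt(3)
print(l0_combination(2, "d"))  # coefficients (-1)^(2-m)/sqrt(5)
\end{verbatim}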
The obtained energies have been determined to an accuracy of $\approx 1 \cdot 10^{-3}$ $a.u.$ for the ground and excited states of the Li atom and the Be$^+$ ion. They are shown in Table III. The overlaps between the wave functions of the ground state of the Li atom and the ground and excited states of Be$^+$ have been calculated numerically (see Table III). The values converge adequately and the overlaps rapidly decrease for higher excited states. For four-electron atomic systems we optimize the orbital exponents using a small basis $n=4$ (this means $[4s3p2d1f]$), and use those exponents in larger calculations with $n=5,6$. The configurations are grouped in blocks for a given $n$ and according to the type (i.e. $ssss$, $sspp$, $ppss$, $spps$, $\ldots$). Then the blocks of configurations have been filtered with a threshold on the average single-configuration contribution of $\approx 1 \cdot 10^{-4}$. All blocks of configurations with a small contribution to the total energy have been eliminated after being tested. This could not produce any substantial loss in the total energy. In reality, the corresponding error was $\le 1 \cdot 10^{-3}$ $a.u.$ In addition, all configurations in our calculations have been ordered according to their orbitals: $s$-, $p$-, $d$-, and $f$-orbitals, and within these groups in approximately energetic order. As the ground state of the Be atom is also a ${}^1S$-state, the configurations can be constructed by combining the two-electron symmetry adapted configurations of Eq.(\ref{Eqq8}). The resulting configurations are: $ssss$, $sspp$, $spps$, $ppss$, $pppp$, $ssdd$, $sdds$, $ddss$, $sppd$, $dpps$, $sdpp$, $ppdd$, $pddp$, $ddpp$, $ssff$, $ddff$, $ffff$. A set of two exponents (double occupancy of the shells) has been used. With this restriction, the configurations shown above represent all possible cases that can be formed. Nevertheless, the configurations $ddff$ and $ffff$ have been eliminated because their contributions were less than the threshold. An additional configuration type of $S$-symmetry, $sppd$, and its permutations $dpps$ and $sdpp$ contribute considerably to the energy in calculations on four-electron systems, but not in three-electron ones, where they contribute $\le 1 \cdot 10^{-4}$ $a.u.$ This configuration type is somewhat more complex: \begin{eqnarray} sppd &=& sp_0p_0d_0 + sp_1p_1d_{-2} + sp_{-1}p_{-1}d_2 + sp_1p_{-1}d_0 \nonumber \\ &+& sp_{-1}p_1d_0 - sp_1p_0d_{-1} - sp_{-1}p_0d_1 - sp_0p_1d_{-1} - sp_0p_{-1}d_1 \label{Eqq9} \end{eqnarray} Table IV contains the probability amplitude and final state probability for the $\beta^{+}$-decay of the four-electron Be atom into the four-electron Li$^{-}$ ion. In this case, in the numerical calculation of the overlap, we follow the same method as used above for two-electron systems. However, for four-electron atomic systems no Hy-CI terms have been included. We are planning to include such terms in future studies. Since the computed CI energies are known to an accuracy of $\pm 1 \cdot 10^{-3}$ $a.u.$, we restrict our calculations here to the lowest three $S$-states of the incident Be atom. The calculated ground state energy of the Li$^{-}$ ion is $-7.498913845101$ a.u. (for 2155 CI configurations), and it is close to the best-to-date value $-7.50058250$ a.u. \cite{Frolov-Li-} known in the literature for this system. The calculated total energy of the ground state of the Be atom, $-14.665206189$ a.u., is very close to the best result of recent calculations, $-14.667356486$ a.u. \cite{Adam-Be}.
This value agrees very well with the value $-14.66544500$ a.u. calculated by Bunge with approximately the same basis \cite{Bunge-Be}. As expected, the excited states of the Be atom can be determined with less accuracy than the ground state. The calculated total energies, together with the overlaps between the wave functions of the ground/excited states of the Be atom and the ground state of the Li$^{-}$ ion, can be found in Table V. Finally, the `ground state to ground state' transition probability for the $\beta^{-}$-decaying Be atom (into the B$^+$ ion) can be found in Table VI. The reference ground state energies for the Be atom are given in Table V. Note that our ground state energy of the B$^+$ ion has an overall accuracy which is better than $\pm 1 \cdot 10^{-3}$ $a.u.$ \section{Results for $\beta^{-}$ decaying light atoms} As mentioned above, in this study we consider the $\beta^{-}$ decays in a number of few-electron atoms: He, Li, and Be. In all our calculations we assume that before the nuclear $\beta^{-}$ decay each of the atoms was in its ground state (except for the calculations shown in Table V). Furthermore, the probability of direct electron ionization during $\beta^{-}$ decay was assumed to be small. Its contribution is essentially ignored in this study. A numerical evaluation of the corresponding small correction can be found in Section V below. Briefly, this means that all ions which are formed after the nuclear $\beta^{-}$ decay contain the same number of electrons as the original atoms. In other words, all final state probabilities can be determined with the use of Eq.(\ref{Int}), where the overlap integral contains two $N-$electron wave functions. For instance, the nuclear $\beta^{-}$ decay of the He atom produces the two-electron Li$^{+}$ ion. If the incident He atom was in its ground $1^1S(L = 0)-$state, then, in accordance with the conservation rules formulated above, the final two-electron Li$^{+}$ ion will be in one of its bound $n^1S(L = 0)-$states, where $n = 1, 2, 3, \ldots$, or in an unbound state. In this study we consider the bound $n^1S(L = 0)-$states in the Li$^{+}$ ion up to $n = 8$. The transition amplitudes $A_{g \rightarrow n}$ and corresponding probabilities $p_{g \rightarrow n} = \mid A_{g \rightarrow n} \mid^2$ for the nuclear $\beta^{-}$ decay of the He atom can be found in Table I. Table I also contains the total energies of all $n^1S(L = 0)-$states (for $n = 1, 2, \ldots, 8$) in the Li$^{+}$ ion. These energies indicate, in principle, the overall quality of the bound state wave functions used in our calculations of the overlap integrals, Eq.(\ref{Int}). The wave function of the ground $1^1S(L = 0)-$state in the incident He atom corresponds to the energy $E$ = -2.90372437701 a.u., which is very good for a Hy-CI wave function with $N \le 974$ terms. Note that there are a few simple rules which must be obeyed, in principle, by any distribution of the final state probabilities $p_{g \rightarrow n}$ obtained in numerical calculations. For simplicity, let us restrict ourselves to the cases when all final states are also bound and each of these states is labeled with the integer quantum number $n (n \ge 0)$. This quantum number $n$ is often called the `excitation number' and/or `index of excitation'. The value $n = 0$ corresponds to the ground state of the few-electron atom, i.e. $n = g$. The first rule for the probability distribution is simple and states that the numerical values of such probabilities rapidly decrease as the excitation number $n$ increases, i.e.
it must be that $p_{g \rightarrow n} > p_{g \rightarrow (n+1)}$ for an arbitrary $n$ ($n \ge 0$). In reality this inequality is even stronger, i.e. $p_{g \rightarrow (n+1)} \ll p_{g \rightarrow n}$. In some actual calculations one can find the opposite inequality for the final state probabilities. Usually, this is directly related to very slow convergence rate(s) of the wave functions of the incident and final atomic systems. Numerical values of such final state probabilities cannot be used in actual applications. They must be improved in future calculations with better converging basis sets. The only exception from this rule can be found in those cases when the ground state wave function of the incident system and the trial wave function of one of the excited states of the final ion are almost orthogonal to each other. The final state probability is a very small value for such an excited state. In many cases, this directly follows from an additional symmetry of the basis functions used to construct the variational wave functions. The second rule states that the sum of all partial probabilities must converge to a value which exceeds $\approx$ 0.75 (if the initial system was a neutral atom), but is always less than unity. In fact, the difference \begin{equation} P_{ion}(g) = 1 - \sum^{N_{max}}_{n=1} p_{g \rightarrow n} \label{sum} \end{equation} is the total probability of electron ionization (from the ground state $g$) during the nuclear $\beta^{-}$ decay in a neutral atom. Ionization means that after the $\beta^{-}-$decay the total number of bound electrons decreases by unity. It is clear that the sum in Eq.(\ref{sum}) must be infinite, i.e. $N_{max} = \infty$. In actual computations, however, there is a problem of slow convergence for the wave functions of highly excited bound states. This means that in actual cases the sum in Eq.(\ref{sum}) is usually finite. The actual maximal value $N_{max}$ in Eq.(\ref{sum}) is determined by the first rule mentioned above, i.e. in the sum, Eq.(\ref{sum}), we can use only those bound states for which the inequality $p_{g \rightarrow n} > p_{g \rightarrow (n+1)}$ is obeyed (a small numerical sketch of this truncation is given below). The approximate value of $P_{ion}$ determined for the nuclear $\beta^{-}$ decay in the He atom with the use of our results from Table I is $P_{ion} \approx 0.108$. In other words, the one-electron Li$^{2+}$ ions are formed in $\approx$ 10.8 \% of all $\beta^{-}$ decays of the He atoms. In actual experimental conditions these ions can be observed in the $\beta^{-}$ decays of the ${}^6$He atoms. The half-life of the ${}^6$He atom against such a $\beta^{-}$ decay is $\approx$ 0.82 $sec$. In general, the method described above can be used to determine the total probability of ionization during the nuclear $\beta^{-}$ decay in any neutral atom. It is very simple and has many advantages in comparison with the so-called `direct' methods. In these direct methods the wave functions of the out-going electron and of the doubly-charged final ion must be explicitly constructed. Then one needs to compute the overlap integral between the product of these two wave functions and the wave function of the incident atom. This step corresponds to the sudden approximation used above. However, in actual calculations we cannot assume that the out-going electron is always in the $s-$wave.
Briefly, this means that we need to include many configurations in which the final (free) electron moves in the $p-, d-, f-, \ldots$ waves, while the doubly-charged final ion is in one of its $P-, D-, F-, \ldots$ states, respectively. If the incident atom was in one of its $S-$states, then only the $sS-, pP-, dD-$, etc, configurations for the final system must be used in calculations. The total energies of some of these configurations are close to each other. To reach a `realistic' accuracy one needs to consider a very large number (up to a few dozen) of different configurations (with different $L$) in the final system. In general, each of these computations is not easy to conduct with relatively high accuracy. This significantly complicates all direct calculations of the ionization probabilities. An interesting and topical question is the convergence of the computational results obtained for the transition amplitudes and transition probabilities. Recently, a number of papers have been published about the nuclear $\beta^{-}$ decay in different atoms and ions. In all these works it was assumed that the determined transition amplitudes and corresponding probabilities are `exact', i.e., that they will not change noticeably in similar future calculations. In many cases, however, subsequent calculations showed that such results were not exact, and the overall changes in some cases were relatively large. In particular, all calculations of the transition amplitudes and transition probabilities performed with the use of Hartree, Hartree-Fock and Hartree-Fock-CI methods cannot be considered as very accurate unless some additional measures have been taken. In this study we decided to analyze this problem in detail. The result of our analysis can be found in Table II, where various transition amplitudes and transition probabilities are determined with different numbers of basis functions. As follows from Table II, our method provides a very good convergence rate for the ground and low-excited $n^1S(L = 0)-$states in the Li$^{+}$ ion. For the excited $n^1S(L = 0)-$states with $n \ge 6$ the overall convergence rate drops drastically. In such cases, to keep the overall accuracy of our calculations of the corresponding overlap integrals, we need to use larger numbers of basis functions. In general, it is very hard to compute transition probabilities for highly excited (bound) states of the final atomic system. On the other hand, the numerical values of these probabilities decrease rapidly when the `excitation number' $n$ increases. Therefore, by using a few known transition probabilities into the lowest bound states of the final system we can accurately evaluate the total `ground state to bound states' probability and the total `ionization probability' for an arbitrary $\beta^{-}$-decaying atom. Our results obtained for the atomic transition amplitudes and corresponding transition probabilities for the nuclear $\beta^{-}$ decay in the Li atom can be found in Table III. In these calculations we assume that the original Li atom was in its ground (doublet) $1^2S-$state. Due to the conservation of the $L$ and $S$ quantum numbers, the final Be$^{+}$ ion will be in one of its bound (doublet) $n^{2}S-$states. The final state probability amplitudes and corresponding probabilities have been computed with the use of Eq.(\ref{Int}). The `ground state to ground state' probability and the corresponding transition amplitude for the $\beta^{-}$-decaying Be atom are shown in Table VI.
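The truncation of the sum in Eq.(\ref{sum}) mentioned above can be made explicit with a short sketch: partial probabilities are accepted only while the first rule $p_{g \rightarrow n} > p_{g \rightarrow (n+1)}$ holds, and the remainder is attributed to ionization. The probability values in the example are hypothetical placeholders, not the computed values of Tables I - III.

\begin{verbatim}
# Truncated sum of Eq. (sum): keep partial probabilities only while
# they keep decreasing; attribute the remainder to ionization.
# The p values below are HYPOTHETICAL placeholders.
def ionization_probability(p_bound):
    kept, last = [], float("inf")
    for p in p_bound:
        if p >= last:   # monotonicity violated: stop trusting the tail
            break
        kept.append(p)
        last = p
    return 1.0 - sum(kept)

print(ionization_probability([0.70, 0.12, 0.04, 0.02, 0.03]))  # ~0.12
\end{verbatim}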
Note that for all elements discussed in this study our computed `ground-state to ground-state' probabilities agree well with the corresponding results from \cite{FrTa}. However, if the final ion is in one of its excited states, then our current results have substantially better accuracy. This is directly related to the better overall accuracy of our current wave functions. The knowledge of the final state probabilities allows one to predict the excitations of the final atomic fragment, i.e. of the final atom/ion. In general, any excited state in a few-electron atom decays with the emission of a few optical quanta. These transitions produce a unique spectrum of post-decay optical radiation. By using the computed final state probabilities we can estimate the spectrum and intensity of the post-decay optical radiation which is observed for some time $\tau$ (usually $\tau \approx 1 \cdot 10^{-9} - 1 \cdot 10^{-2}$ $sec$) after the nuclear $\beta^{-}$ decay. In the case of the $\beta^{-}$ decaying ${}^6$He atom (from its ground state) the post-decay optical radiation corresponds to the chain of optical transitions from the final $n^1S$-state of the Li$^{+}$ ion into its ground $1^1S$-state. For instance, for the $3^1S$-state in the Li$^{+}$ ion this chain of dipole transitions is: $3^1S \rightarrow 2^{1}P \rightarrow 1^1S$. Various collisions between Li$^{+}$ ions and He/Li atoms and possible electron capture by the Li$^{+}$ ion must also be taken into account. The arising (optical) spectrum of the post-decay radiation is very complex, but it can be studied, in principle, with the use of current theoretical and experimental methods. \section{Formation of the negatively charged ions during the $\beta^{+}$ decay in few-electron atoms} Formation of negatively charged ions (or anions) during the nuclear $\beta^{+}$ decay in many-electron atoms is a very interesting experimental problem. On the other hand, it is very interesting to evaluate the corresponding final state probabilities by using our computational methods described above. It is clear \emph{a priori} that such probabilities can be found with the use of the sudden approximation (exactly as was done above for the nuclear $\beta^{-}$ decay). Formally, in the case of the nuclear $\beta^{+}$ decay in many-electron atoms one needs to determine the same overlap integral, Eq.(\ref{Int}), between the incident and final $N$-electron wave functions. This is exactly the same procedure as described above for the nuclear $\beta^{-}$ decay, but the actual computation of the overlap integrals, Eq.(\ref{Int}), is a significantly more complicated problem in those cases when negatively charged ions are involved. The first complication follows from the experimental fact that many atoms do not form stable negatively charged ions. Moreover, even if such negatively charged ions are stable, the construction of highly accurate variational wave function(s) for these ions is a very hard problem. Briefly, this means that the final state probabilities obtained for the nuclear $\beta^{+}$ decay in many-electron atoms are not very reliable if they have been determined for the negatively charged ions. Nevertheless, in this study we have determined the probabilities for the ground-state (atom) to ground-state (negative ion) transitions for the nuclear $\beta^{+}$ decay in some light atoms. Note that each of the negatively charged atomic ions has either one bound (ground) state or no bound states at all.
In particular, we consider the possibility of forming the ${}^7$Li$^{-}$ ion during the nuclear $\beta^{+}$ decay of the ${}^7$Be nucleus. It should be mentioned that more than 99 \% of all ${}^7$Be nuclei decay by electron capture. If the $K-$electron capture in the ${}^7$Be atom occurs, then the Li$^{-}$ ion cannot be formed. However, any experimental observation of the Li$^{-}$ ions from decaying ${}^7$Be nuclei will be a direct indication of the competing $\beta^{+}$ decay. As follows from Table IV, the total probability to form the bound Li$^{-}$ ion during such a decay is evaluated as $\approx$ 0.2065. This means that the Li$^{-}$ ions will form in $\approx$ 20.65 \% of all nuclear $\beta^{+}$ decays of Be atoms, i.e. in one out of five such decays we can observe the negatively charged Li$^{-}$ ion. Another interesting result can be found in Table V. As follows from that Table, the probability to form the bound Li$^{-}$ ion is larger ($\approx$ 35.5 \%) in those cases when the incident Be atom was in its excited $2^1S-$state. This indicates clearly that the distribution of the final state probabilities in those cases when negatively charged ions are formed is very different from the known distributions for $\beta^{-}$-decaying neutral atoms. Another interesting atomic system with a small number of electrons is the ${}^{11}$B$^{-}$ ion. This ion arises during the nuclear $\beta^{+}$ decay of the ${}^{11}$C atom ($\tau_{\beta^{+}}({}^{11}$C) $\approx 20.4$ min). Very likely, the observation of the ${}^{11}$B$^{-}$ ion will be the first actual experiment which can confirm the direct formation of negatively charged ions during the nuclear $\beta^{+}$ decay. The formation of negatively charged ions during the nuclear $\beta^{+}$ decay is of great theoretical interest, since the probability to form such ions is directly related to the change in the distribution of the outermost electron(s). Furthermore, the density distributions of the outermost electron(s) in all negatively charged ions are very similar to each other. As is well known (see, e.g., \cite{Ost}) the radial wave function $R(r)$ of an arbitrary $N-$electron atomic system with the nuclear charge $Q$ at large $r$ has the following asymptotic form \begin{equation} R^{Q}(r) \sim r^{b-1} \cdot \exp(-t r) = r^{\frac{Q^{*}}{t}-1} \cdot \exp(-t r) \label{asym1} \end{equation} where $t = \sqrt{2 I}, b = Q^{*}/t$ and $Q^{*} = Q - N + 1$. Here the notation $I$ stands for the first ionization potential. For negatively charged ions $Q^{*} = Q - N + 1 = 0$ and $R^{Q}(r) = \frac{1}{r} \exp(-t r)$, i.e., it does not depend explicitly upon $Q$. This substantially simplifies all following evaluations and makes them universal for all negatively charged ions. In particular, we can expect that the total probabilities of negative ion formation will be accurately represented by one relatively simple formula which contains only a few parameters. This means that, if we know such probabilities for some of the negatively charged ions, then we can accurately predict analogous values for other similar ions.
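The asymptotic law, Eq.(\ref{asym1}), can be illustrated numerically. Since $Q^{*} = 0$ for a negatively charged ion, the tail is $R(r) = \exp(-t r)/r$ with $t = \sqrt{2 I}$, where $I$ is the binding energy of the extra electron (the electron affinity of the parent atom). The affinity used in the sketch below is an assumed, approximate value for the Li atom ($\approx 0.62$ eV) and serves only to show the order of magnitude of $t$.

\begin{verbatim}
import math

EV_TO_HARTREE = 1.0 / 27.211386  # conversion factor to atomic units

def negative_ion_tail(r, affinity_ev):
    """Asymptotic radial tail exp(-t r)/r of Eq. (asym1) for Q* = 0."""
    I = affinity_ev * EV_TO_HARTREE  # binding energy of the extra electron
    t = math.sqrt(2.0 * I)           # decay constant t = sqrt(2 I)
    return math.exp(-t * r) / r

# assumed electron affinity of ~0.62 eV (approximate value for Li),
# which gives t ~ 0.21 a.u., i.e. a very slowly decaying tail
print(negative_ion_tail(10.0, 0.62))
\end{verbatim}

\section{Emission of the fast $\delta-$electrons during the nuclear $\beta^{\pm}$ decay in atoms} The sudden approximation used above allows one to determine the final state probabilities for the $\beta^{\pm}$ decays in many-electron atoms. Briefly, the analysis of atomic excitations is reduced to the description of changes in the electron-density distribution produced by a sudden change of the nuclear electric charge $Q \rightarrow Q \pm 1$.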
The electronic/positronic nature of the $\beta^{\pm}$ decay is not critically important for our method. However, the sudden approximation is valid only to lowest order in the fine-structure constant $\alpha$. This means that, if we are interested in highly accurate results for the final state probabilities, then we need to consider and evaluate the corresponding correction(s). The leading contribution comes from the lowest-order correction for electron-electron scattering, which is $\approx \alpha^2 (\alpha Q)^2$. In heavy atoms with $Q \approx 100$ such a correction is relatively large, $\approx \alpha^2$, but in light, few-electron atoms it is significantly smaller, $\approx \alpha^4$. Nevertheless, this correction describes a new phenomenon: the emission of fast secondary electrons, which are traditionally called $\delta-$electrons. Let us discuss this phenomenon in detail. As is well known from Quantum Electrodynamics (see, e.g., \cite{AB}, \cite{Grei}) the differential cross-section of electron-electron scattering is written in the form \begin{eqnarray} d\sigma = 2 \pi \alpha^4 a^{2}_{0} \frac{dx}{\gamma^2 - 1} \Bigl[ 1 + \frac{(\gamma - 1)^2 \gamma^2}{x^2 (\gamma - 1 - x)^2} - \frac{2 \gamma^2 + 2 \gamma - 1}{x (\gamma - 1 - x)} \Bigr] \label{cross} \end{eqnarray} where $a_0$ is the Bohr radius, $\gamma$ is the $\gamma$-factor of the $\beta$-electron emitted from the nucleus, while the parameter $x$ is the energy lost by the $\beta$-electron (or gained by the atomic electron $a$), i.e. \begin{equation} x = \frac{\epsilon_{\beta} - \epsilon^{\prime}_{\beta}}{m_e c^2} = \frac{\epsilon^{\prime}_{a} - \epsilon_{a}}{m_e c^2} \label{eq14} \end{equation} where the superscript $\prime$ designates quantities after the collision. It is usually assumed that one of the two electrons (the atomic electron in our case) was at rest before the collision, i.e. $\epsilon_{a} = m_e c^2$. The formula Eq.(\ref{cross}) is a closed expression for the differential cross-section of electron-electron scattering which depends upon the parameter $x$, Eq.(\ref{eq14}), and the $\gamma-$factor of the $\beta^{-}$ electron. As follows from Eq.(\ref{cross}) the probability to observe/produce a fast $\delta$-electron during the nuclear $\beta^{-}$-decay is very small in comparison with `regular' atomic processes, since it contains an additional factor $\alpha^4 \approx 2.84 \cdot 10^{-9}$. Note also that the formula, Eq.(\ref{cross}), is derived for a free electron which is located at the distance $a_0$ from the atomic nucleus. The actual $K-$electrons in heavy atoms are significantly closer to the nucleus than electrons from the outer electron shells. The effective radius of the $K-$electron shell is $\approx Q^2$ times smaller than $a_0$. This means that the factor $2 \pi \alpha^4 a^{2}_{0}$ in the formula, Eq.(\ref{cross}), must be multiplied by an additional factor $Q^2$. For the light atoms considered in our study the overall probability to observe the emission of fast $\delta-$electrons during the nuclear $\beta^{-}$ decay is very small. The situation changes for heavy atoms with $Q \approx$ 90 - 100, but such atoms are not discussed here. The emission of fast $\delta-$electrons can also be observed during the nuclear $\beta^{+}$ decay in many-electron atoms.
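A direct numerical transcription of Eq.(\ref{cross}) is sketched below; the kinetic energy of the $\beta$-electron and the energy-transfer value $x$ are chosen purely for illustration.
\begin{verbatim}
import math

ALPHA = 1.0 / 137.035999
A0_SQ = 1.0  # work in units of the Bohr radius squared

def dsigma_dx(x: float, gamma: float) -> float:
    """Differential cross-section of electron-electron scattering,
    Eq.(cross), in units of a_0^2 per unit energy transfer x."""
    pref = 2.0 * math.pi * ALPHA ** 4 * A0_SQ / (gamma ** 2 - 1.0)
    bracket = (1.0
               + (gamma - 1.0) ** 2 * gamma ** 2
                 / (x ** 2 * (gamma - 1.0 - x) ** 2)
               - (2.0 * gamma ** 2 + 2.0 * gamma - 1.0)
                 / (x * (gamma - 1.0 - x)))
    return pref * bracket

# Illustrative case: a 1 MeV beta electron, gamma = 1 + T/(m_e c^2)
gamma = 1.0 + 1.0 / 0.511
print(f"dsigma/dx at x = 0.5: {dsigma_dx(0.5, gamma):.3e} a_0^2")
\end{verbatim}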
In such a case, the formula for the cross-section of electron-positron scattering takes the form \cite{AB}, \cite{Grei} \begin{eqnarray} d\sigma = 2 \pi \alpha^4 a^{2}_{0} \frac{dx}{\gamma^2 - 1} \Bigl[ \frac{\gamma^2}{x^2} - \frac{2 \gamma^2 + 4 \gamma + 1}{(\gamma + 1) x} + \frac{3 \gamma^2 + 6 \gamma + 4}{(\gamma + 1)^2} - \frac{2 \gamma}{(\gamma + 1)^2} \Delta + \frac{1}{(\gamma + 1)^2} \Delta^2 \Bigr] \label{cross1} \end{eqnarray} where $\gamma$ is the $\gamma$-factor of the positron emitted from the nucleus, while all other notations are exactly the same as in Eq.(\ref{cross}). Note again that in light atomic systems the cross-section Eq.(\ref{cross1}) is very small. In heavy atoms the situation changes, and in roughly one of $\approx$ 17,000 nuclear $\beta^{+}$ decays one can also observe the emission of a fast $\delta-$electron. \section{Discussion and Conclusion} We have considered atomic excitations arising during the nuclear $\beta^{-}$ decay. For some light few-electron atoms such final state probabilities and the total ionization probabilities have been determined numerically to very good accuracy. Our interest in light atoms is directly related to the fact that currently the highly accurate wave functions of the ground and 6 - 8 low-lying excited states can only be constructed for some few-electron atoms and ions. Consideration of the 6 - 8 bound states in the final atomic system allows us to perform a complete analysis of atomic excitations during the nuclear $\beta^{-}$ decay. We have also considered the formation of negatively charged ions during the nuclear $\beta^{+}$ decay. By using our highly accurate wave functions for the negatively charged ions we have determined the `ground state to ground state' probabilities for some nuclear $\beta^{+}$ decays in which such negatively charged ions are formed (or can be formed). It should be mentioned that atomic excitations during the nuclear $\beta^{\pm}$ decay were first observed in 1912 (all earlier references on this matter can be found, e.g., in \cite{Mig1}, \cite{Finb}, \cite{Skoro}, \cite{Schw}). In general, atomic and molecular excitations arising during the nuclear $\beta^{\pm}$ decay have many interesting aspects for theoretical study and experimental investigation. Analysis of the direct atomic excitations in earlier studies was substantially restricted by the use of inaccurate atomic wave functions. In this study we have applied highly accurate wave functions for all few-electron atoms and ions. The overall accuracy of our predictions for many excited states has increased significantly. In future studies we want to extend our analysis to atomic systems with more electrons. A separate goal will be the consideration of different atomic (and molecular) excitations, analysis of the post-decay radiation, etc. Note that the final state probabilities determined above for a number of $\beta^{-}$-decaying light atoms can also be used as important numerical tests for other similar values needed in the analysis of various nuclear reactions in few-electron atoms/ions.
For instance, for exothermic nuclear $(n;t)-, (n;p)-$ and $(n;\alpha)-$reactions in few-electron atoms/ions \cite{FrWa} one needs to determine the numerical value of the following integral \begin{equation} A_k({\bf V}) = \int \Psi^{*}_i({\bf r}_1, \ldots, {\bf r}_N) \exp[\imath {\bf V} \cdot (\sum^{N}_{i=1} {\bf r}_i)] \Phi^{(k)}_f({\bf r}_1, \ldots, {\bf r}_N) d^3{\bf r}_1 \ldots d^3{\bf r}_N \label{Int1} \end{equation} where $N$ is the total number of bound electrons (here we assume that $N$ does not change during the nuclear reaction), while ${\bf V}$ is the nuclear velocity in the final state, i.e. after the nuclear reaction. Note that in the limit ${\bf V} \rightarrow 0$ the exponential factor tends to unity, and the value $A_k({\bf V})$ from Eq.(\ref{Int1}) converges to the $A_k$ value from Eq.(\ref{Int}). This explains why the final state probabilities determined by Eq.(\ref{Int}) are often considered as the `nucleus-at-rest' limit of atomic probabilities obtained for more general nuclear reactions. In conclusion, we want to note that this work opens a new avenue in the analysis of atomic excitations during the nuclear $\beta^{\pm}-$decay in atoms and molecules. Currently, many aspects of this problem are of significant experimental and theoretical interest. In particular, the study of atomic excitations arising in the nuclear $\beta^{\pm}$ decay can improve our understanding of many atomic and QED processes. Furthermore, the complete and accurate analysis of atomic excitations during various nuclear reactions and processes is a complex problem which requires an extensive development of new numerical methods and algorithms. It should be mentioned that a sudden change of the electric charge of the atomic nucleus, and the following changes in the electron density distribution during the nuclear $\beta^{-}$ decay, must be of great interest for the density functional theory (DFT) of atoms and molecules. Note also that the analysis of possible molecular excitations arising during the nuclear $\beta^{-}$ decay in molecules is a significantly more complicated problem than the analogous problem for atoms. Nevertheless, some useful conclusions about different excitations in molecular systems can be made, and the corresponding probabilities can be evaluated numerically. In fact, in the last five to seven years we have achieved remarkable progress in the understanding of atomic excitations during various nuclear processes, reactions and decays. Unfortunately, apart from a very few experimental papers, published as a rule many years ago (see, e.g., \cite{Carl1}, \cite{Carl2} and \cite{Scie}), the current theory of atomic excitations during various nuclear reactions has no experimental support. This is a very strange situation, since all the required (atomic) experiments are very easy to perform. Currently, we can only hope that this work will stimulate some experimental activity in the area. \newpage \begin{center} {\bf Acknowledgements} \end{center} It is a pleasure to thank James S. Sims (NIST) for fruitful discussions on the Hy-CI method. One of us (AMF) wants to thank the University of Western Ontario and the Dean of Science of UWO Prof. David M. Wardlaw for financial support. MBR would like to thank Carlos Bunge for advice on the CI method, and Prof. Peter Otto of the University of Erlangen-N\"urnberg for supporting this project.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} R Coronae Borealis (RCB) stars are defined by their light curves, which display sudden and large drops in brightness with slower recoveries to the baseline level, with these events occurring randomly in time. These spectacular dips are caused when the star forms dense dust clouds along the line of sight to Earth, hiding the star. The RCB stars are all hydrogen-deficient supergiants, with various abundance anomalies, including enriched nitrogen and carbon. RCB stars are rare, with only 76 known in our Milky Way. The evolutionary status of RCB stars is uncertain: they might come from recent coalescences of double white dwarf binaries or from a final helium-shell flash in a born-again asymptotic giant branch (AGB) star. Clayton (1996; 2012) presents full reviews of RCB stars. Most RCB stars are relatively cool, with surface temperatures of 5,000-7,000 K. However, four of the known RCB stars have much hotter surface temperatures, from 15,000-25,000 K, and these are called the `hot RCB stars'. The four known hot RCB stars are V348 Sgr, MV Sgr, and DY Cen in our Milky Way plus HV 2671 in the Large Magellanic Cloud. DY Cen and MV Sgr are hydrogen deficient (just like the cool RCB stars) and mostly composed of helium, and both have relatively infrequent drops in brightness. V348 Sgr and HV 2671 are very carbon-rich (55\% carbon, most of the rest helium), with elemental abundances like those in central stars of planetary nebulae, and both have frequent episodes of brightness declines. A plausible idea is that the two greatly different compositions indicate two formation mechanisms, with DY Cen and MV Sgr simply being the `progeny' of the normal RCB stars as they heat up, and with V348 Sgr and HV 2671 being somehow formed during a final helium-shell flash on post-AGB stars. Thus, the birth mechanism of the hot RCB stars could be either born-again systems or white dwarf mergers. Nevertheless, the relationship between the hot RCB stars and the cool RCB stars, as well as with other classes of stars (the born-again stars and the Wolf-Rayet central stars of planetary nebulae), is still unclear. De Marco et al. (2002) present a full review of the hot RCB stars. The key high-level science question is to understand the evolutionary state of the RCB stars, both hot and cold. For this, a critical piece of evidence is to watch them evolve in real time. As for the born-again post-AGB stars (i.e., V605 Aql, Sakurai's object, and FG Sge), substantial movement across the HR diagram might be seen on time scales of a decade and a century. For this, De Marco et al. (2002) have pointed out that the baseline levels for MV Sgr, V348 Sgr, and DY Cen are apparently fading over the last century, and this could only be from movement from right to left across the top of the HR diagram. These century-long light curves inevitably have big problems, for two reasons. The light curves were compiled from magnitudes in the $V$, $m_{vis}$, and $m_{pg}$ systems, which makes for large systematic uncertainties between old and new magnitudes, and exactly this can create apparent systematic declines when none are real. De Marco et al. did attempt to correct for such effects. The bigger effect, not mentioned, is that all the old photometry is systematically in error by 0.1 to $>$1.0 mag simply because the old standard stars had these errors.
In the days before photoelectric photometers could reach faint sequences (i.e., before the late 1970s), the standard stars and comparison sequences were all calibrated by photographic transfers from the Harvard-Groningen Selected Areas and the North Polar Sequence. These photographic transfers always had problems, due to effects like reciprocity failure, such that the claimed magnitudes were always systematically different from those on the modern magnitude scales. In general, the errors are small for stars brighter than tenth magnitude and increase steeply as the stars get fainter. Thorough studies of errors for old standard fields and comparison stars are given by Sandage (2001) and Patat et al. (1997), while I have many examples of poor old calibrations (e.g., Schaefer 1994; 1995; 1996; 1998). The old `photographic magnitude system' (i.e., $m_{pg}$) is really just a poorly calibrated B magnitude. All the early magnitudes of hot RCB stars are photographic magnitudes, all from the Harvard plates, all from old measures, and so there are inevitably possibly-large systematic errors in their long-term light curves. It might be that these systematic errors have created the apparent secular fading of the hot RCB stars, or it might be that the errors are such that the claim of secular fading is still correct. In this paper, I will solve these problems for the old photometry, and answer the question of whether the hot RCB stars are fading or not. To do this, I visited the Harvard College Observatory (HCO) in October 2015, and remeasured many magnitudes from plates dated 1896 to 1989, all on the modern Johnson B magnitude system. For the modern portions of the light curves, I used a variety of sources from the literature and from the {\it American Association of Variable Star Observers} (AAVSO), all in the Johnson B magnitude system. My combined light curves, all in a single uniform system, have a larger time range and many more magnitudes than given in De Marco et al. I have further extended their work by adding the B-band light curve for the fourth and last-known hot RCB star, HV 2671. \section{Photometry with Archival Plates} From around 1890 to the late 1970s, a large part of astronomy was based on photometry from photographic sky pictures. These pictures were recorded on blue-sensitive emulsion attached to one side of a glass plate, with the plate being exposed to starlight in a special holder at the focus of the telescope. On the developed emulsion the sky is nearly transparent, while stars are round black points. Photographic emulsions have only a small dynamic range between the sky and saturation, so star images are almost all completely saturated (i.e., black) in their centers, with only a small annulus of grey in the outer tails of the star profile. With this, essentially the only change in the star image as the magnitude varies is the diameter of the star image. Given the variability from plate to plate and the non-linearity of the emulsion, the only way to calibrate the image-diameter-versus-magnitude relation is to use comparison stars nearby on the plate. The procedure is to make some measure of the image diameter for the target star as well as for a sequence of nearby comparison stars with a tight spacing of magnitudes, both brighter and fainter than the target. With this, the magnitude of the target can be measured by interpolation in the image-diameter-versus-magnitude relation as determined for each plate from the comparison sequence.
The image radii vary such that either the square of the radius or the logarithm of the radius is linear with the magnitude, depending on the brightness of the star (Schaefer 1981). In general, the full calibration curve (image radius versus magnitude) is nonlinear in either magnitude or flux. This condition violates one of the assumptions ingrained in observers with CCDs, because magnitudes then cannot be calculated from any application of the magnitude equation with one comparison star. The long-standing traditional solution is to use a whole sequence of comparison stars, strung out over a wide range of brightnesses, so that the radius-versus-magnitude relation is empirically determined for the brightness of the target star (see the sketch below). The image diameters can be measured with machines called `iris diaphragm photometers' (first developed in the 1930s) and with photoelectric scanners (first developed in the 1970s). From the earliest days, the dominant method for measuring image diameters was simply for a trained observer to make size comparisons between the diameter of the target star and the diameters of nearby comparison stars. The human eye is remarkably accurate at side-by-side comparisons of the sizes of round objects. The procedure is to view the glass plate on a light table through a loupe or a low-power microscope, compare the target's size to the sizes of nearby stars, and judge by eye the relative placement of the target star's diameter. To give a specific and typical example, if the target is judged to be halfway between two comparison stars of magnitude 12.2 and 12.6, then the target has a magnitude of 12.4. In practice, an inexperienced worker can produce magnitudes with an accuracy of 0.3 mag or so, while an experienced worker can produce magnitudes with a one-sigma uncertainty of 0.1 mag. For many situations, where magnitudes from many plates can be averaged together, the real accuracy of the light curves can be 0.02 mag or better. For an experienced worker, the by-eye method provides equal or better accuracy compared to iris diaphragm photometers or scanning. The by-eye method is very simple, cheap, and fast, whereas the instrumented methods are always complex, slow, and costly. Harvard College Observatory has a collection of roughly 500,000 glass plates recording the entire sky from 1889 to 1989. (There are few plates from 1954 to 1969 due to the notorious `Menzel Gap'.) These are almost all blue-sensitive emulsions on glass plates 8$\times$10 inches in size, stored in paper envelopes, and placed on shelves in time order. The plates were taken with a wide variety of telescopes, from essentially camera lenses up to 24-inch apertures, with the plates covering widths from 5$\degr$ to 42$\degr$. The limiting magnitudes for the normal-quality plates vary from B=12 to deeper than B=18. The Harvard plates have 1000-4000 plates covering any given position on the sky. Harvard has about half the existing archival plates in the world, and is nearly the only source for targets in the southern skies. Historically, from 1890 to 1960, the Harvard plates dominated the world of variable stars for anything fainter than about eleventh magnitude. To take the example of the hot RCB stars, all four were discovered with the Harvard plates, and the only published information of any type from before the 1950s is the Harvard light curves. For many questions of modern astrophysics, light curves with 0.1 mag or 0.02 mag accuracy are more than adequate, so the accuracy attainable with CCDs is completely irrelevant.
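The interpolation step can be made concrete with a short sketch. The snippet below estimates a target star's magnitude from its measured image diameter, using the nearly linear relation between magnitude and the logarithm of the image size noted above; all of the numbers are invented placeholders, not measured Harvard values.
\begin{verbatim}
import numpy as np

def plate_magnitude(diam_target, diam_comp, mag_comp):
    """Estimate a target star's magnitude from its image diameter on
    a photographic plate, by interpolating the empirical
    diameter-versus-magnitude relation of the comparison sequence.
    Interpolates linearly in log(diameter), one of the near-linear
    regimes for plate photometry."""
    order = np.argsort(diam_comp)
    log_d = np.log10(np.asarray(diam_comp, float)[order])
    mags = np.asarray(mag_comp, float)[order]
    # np.interp needs increasing x; brighter stars have larger images
    return float(np.interp(np.log10(diam_target), log_d, mags))

# Hypothetical comparison sequence: diameters (microns), B magnitudes
diams = [95.0, 120.0, 160.0, 210.0, 280.0]
mags = [13.0, 12.6, 12.2, 11.8, 11.4]
print(plate_magnitude(138.0, diams, mags))  # ~12.4, between 12.6/12.2
\end{verbatim}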
In the world of variable stars, stars display phenomena on all time scales. Modern studies can cover variable star phenomena on time scales faster than the duration of a single telescope run, and multiple telescope runs can be pasted together to get a picture of phenomena up to a decade in time scale. But to measure phenomena on time scales from a decade to a century, the only means is to use archival data. For most stars, the only source of archival information older than a decade or two is archival photographic plates, and that largely means the Harvard plates. For the hot RCB stars, in looking for any secular trend (as associated with the evolution of the stars), the only solution is to get fully calibrated light curves from Harvard. Historically, the Harvard plates were the predecessor of the Johnson B system through the North Polar Sequence. In modern times, the native magnitudes of the Harvard blue plates have always been found to have a near-zero color term with respect to the Johnson B system. This means that as long as the comparison star magnitudes are on the Johnson B system, then the resultant magnitudes are exactly in the Johnson B system. For my measures of the Harvard plates, I have taken all my comparison star magnitudes from the B-band measures of the AAVSO Photometric All-Sky Survey ({\it APASS}, Henden \& Munari 2014). These magnitudes are tied to the Johnson B magnitudes with high accuracy (Munari et al. 2014), as calibrated from the standard stars of Landolt (2009). Thus, my modern measures of the Harvard plates are accurately in the Johnson B system. Critically, both the very extensive {\it DASCH} program (Grindlay et al. 2012) and my own extensive measures prove that long-term light curves from Harvard of normal (i.e., constant) stars do {\it not} produce any measurable slope or trend (i.e., typically $<$0.05 magnitude per century) over the last century. Further, these check star magnitudes are consistent with the modern measures. This is the proof that any observed secular trend is not some data or analysis artifact. \section{Century-Long Light Curves For Hot RCB Stars} The goal of this paper is to get the century-long light curves for all four known hot RCB stars so as to test for any secular fading in the maximum light. For this, the only way to get the old data is from Harvard, and these are only in the Johnson B-band. To minimize the mixing of bands, I will take the AAVSO and literature magnitudes in the B-band. The Johnson B magnitudes for the four hot RCB stars are listed in Table 1. These do not include the magnitudes where the star was substantially fainter than the maximum brightness.
\begin{table} \centering \caption{B Magnitudes from Harvard Plates.} \label{tab:table1} \begin{tabular}{llll} \hline Star & Julian Date & B (mag) & Plate\\ \hline DY Cen & 2415898 & 12.7 & B29838 \\ DY Cen & 2416255 & 13.0 & B31827 \\ DY Cen & 2416959 & 13.3 & AM3470 \\ DY Cen & 2417257 & 12.6 & AM4107 \\ DY Cen & 2418405 & 12.8 & B40009 \\ DY Cen & 2418428 & 12.6 & AK286 \\ DY Cen & 2418437 & 12.6 & AM6102 \\ DY Cen & 2418507 & 12.7 & AM6372 \\ DY Cen & 2418869 & 13.3 & B41635 \\ DY Cen & 2420960 & 12.2 & AM11620 \\ DY Cen & 2421024 & 12.2 & AM11923 \\ DY Cen & 2421315 & 12.2 & AM12906 \\ DY Cen & 2421333 & 12.2 & AM12992 \\ DY Cen & 2421333 & 12.7 & AM13003 \\ DY Cen & 2421338 & 13.0 & AM13037 \\ DY Cen & 2421342 & 12.7 & AM13047 \\ DY Cen & 2422073 & 12.8 & AM14631 \\ DY Cen & 2422130 & 12.8 & AM14775 \\ DY Cen & 2422137 & 12.7 & AM14806 \\ DY Cen & 2422162 & 12.5 & AM14853 \\ DY Cen & 2422172 & 12.6 & AM14878 \\ DY Cen & 2422176 & 12.1 & AM14889 \\ DY Cen & 2422436 & 12.5 & AM15123 \\ DY Cen & 2422437 & 12.4 & AM15127 \\ DY Cen & 2422456 & 12.5 & AM15165 \\ DY Cen & 2422483 & 12.1 & AM15231 \\ DY Cen & 2422493 & 12.1 & AM15255 \\ DY Cen & 2422517 & 12.0 & AM15292 \\ DY Cen & 2422544 & 12.8 & AM15382 \\ DY Cen & 2423180 & 12.5 & AM15747 \\ DY Cen & 2426470 & 12.5 & RB1688 \\ DY Cen & 2426480 & 12.4 & RB1729 \\ DY Cen & 2426490 & 12.7 & RB1798 \\ DY Cen & 2426497 & 11.9 & RB1821 \\ DY Cen & 2426531 & 12.0 & RB1900 \\ DY Cen & 2426546 & 12.8 & RB1935 \\ DY Cen & 2426771 & 12.0 & RB2507 \\ DY Cen & 2426843 & 12.0 & RB2753 \\ DY Cen & 2426899 & 12.7 & RB3075 \\ DY Cen & 2431904 & 12.0 & RB14293 \\ DY Cen & 2431950 & 12.7 & RB14385 \\ DY Cen & 2432011 & 12.3 & RB14552 \\ DY Cen & 2432328 & 12.5 & RB15102 \\ DY Cen & 2432648 & 12.5 & RB15596 \\ DY Cen & 2432681 & 11.9 & RB15654 \\ DY Cen & 2432758 & 12.5 & RB15795 \\ DY Cen & 2433054 & 12.5 & RB16284 \\ DY Cen & 2445490 & 13.8 & DSB1047 \\ DY Cen & 2445813 & 13.5 & DSB1286 \\ DY Cen & 2445848 & 13.5 & DSB1325 \\ DY Cen & 2445872 & 13.1 & DSB1359 \\ DY Cen & 2445900 & 13.3 & DSB1388 \\ DY Cen & 2446243 & 13.5 & DSB1708 \\ DY Cen & 2446257 & 13.6 & DSB1711 \\ DY Cen & 2446291 & 13.5 & DSB1736 \\ DY Cen & 2446497 & 13.5 & DSB1903 \\ DY Cen & 2446527 & 13.2 & DSB1932 \\ DY Cen & 2446827 & 13.7 & DSB2153 \\ DY Cen & 2446945 & 13.5 & DSB2227 \\ DY Cen & 2447002 & 13.8 & DSB2300 \\ DY Cen & 2447022 & 13.8 & DSB2334 \\ DY Cen & 2447241 & 13.4 & DSB2493 \\ DY Cen & 2447267 & 13.3 & DSB2541 \\ \hline \end{tabular} \end{table} \begin{table} \centering \contcaption{B Magnitudes from Harvard Plates.} \label{tab:continued} \begin{tabular}{llll} \hline Star & Julian Date & B (mag) & Plate\\ \hline DY Cen & 2447298 & 13.7 & DSB2574 \\ DY Cen & 2447322 & 13.5 & DSB2588 \\ DY Cen & 2447357 & 13.3 & DSB2630 \\ DY Cen & 2447380 & 13.6 & DSB2653 \\ DY Cen & 2447590 & 13.5 & DSB2794 \\ DY Cen & 2447682 & 13.6 & DSB2827 \\ DY Cen & 2447761 & 13.6 & DSB2863 \\ MV Sgr & 2417077 & 12.6 & AM3801 \\ MV Sgr & 2422605 & 13.3 & MC16930 \\ MV Sgr & 2422606 & 13.0 & MC16931 \\ MV Sgr & 2425746 & 12.7 & RB334 \\ MV Sgr & 2425751 & 12.7 & RB345 \\ MV Sgr & 2425778 & 12.7 & RB402 \\ MV Sgr & 2428306 & 13.0 & MA5360 \\ MV Sgr & 2428672 & 12.9 & MA6375 \\ MV Sgr & 2429080 & 12.7 & MA7302 \\ MV Sgr & 2429429 & 12.1 & RB8838 \\ MV Sgr & 2429435 & 12.5 & RB8861 \\ MV Sgr & 2429441 & 12.6 & RB8891 \\ MV Sgr & 2429485 & 12.3 & RB9039 \\ MV Sgr & 2429547 & 12.0 & RB9181 \\ MV Sgr & 2429732 & 12.4 & RB9488 \\ MV Sgr & 2429793 & 12.4 & RB9691 \\ MV Sgr & 2429808 & 12.5 & RB9732 \\ MV Sgr 
& 2429811 & 12.4 & RB9755 \\ MV Sgr & 2429869 & 12.4 & RB9967 \\ MV Sgr & 2443616 & 13.5 & DSB512 \\ MV Sgr & 2444165 & 13.5 & DSB644 \\ MV Sgr & 2444821 & 13.5 & DSB769 \\ MV Sgr & 2445140 & 13.3 & DSB911 \\ MV Sgr & 2445173 & 13.3 & DSB919 \\ MV Sgr & 2445551 & 13.2 & DSB1087 \\ MV Sgr & 2445801 & 13.5 & DSB1274 \\ MV Sgr & 2445824 & 13.5 & DSB1305 \\ MV Sgr & 2445858 & 13.0 & DSB1346 \\ MV Sgr & 2445908 & 13.5 & DSB1408 \\ MV Sgr & 2446210 & 13.6 & DSB1672 \\ MV Sgr & 2446233 & 13.3 & DSB1702 \\ MV Sgr & 2446294 & 13.3 & DSB1755 \\ MV Sgr & 2446624 & 13.0 & DSB2022 \\ V348 Sgr & 2413724 & 12.1 & A1837 \\ V348 Sgr & 2415533 & 12.0 & AM808 \\ V348 Sgr & 2415576 & 11.7 & AM907 \\ V348 Sgr & 2415633 & 11.6 & AM1028 \\ V348 Sgr & 2415635 & 11.6 & AM1043 \\ V348 Sgr & 2417704 & 11.9 & AM4804 \\ V348 Sgr & 2417748 & 11.7 & AM4931 \\ V348 Sgr & 2417759 & 12.0 & AM4954 \\ V348 Sgr & 2417788 & 11.6 & AM5024 \\ V348 Sgr & 2417814 & 11.8 & AM5090 \\ V348 Sgr & 2417821 & 11.8 & AM5114 \\ V348 Sgr & 2418028 & 11.8 & AM5340 \\ V348 Sgr & 2418043 & 11.8 & AM5390 \\ V348 Sgr & 2418070 & 11.6 & AM5444 \\ V348 Sgr & 2418396 & 11.7 & AM6011 \\ V348 Sgr & 2418429 & 11.4 & AK288 \\ V348 Sgr & 2418439 & 11.6 & AM6114 \\ V348 Sgr & 2418454 & 11.7 & AM6170 \\ V348 Sgr & 2418502 & 11.8 & AM6347 \\ V348 Sgr & 2418532 & 11.7 & AM6454 \\ V348 Sgr & 2418822 & 11.8 & AM6952 \\ V348 Sgr & 2418849 & 11.8 & AM7033 \\ V348 Sgr & 2418856 & 11.6 & AM7063 \\ \hline \end{tabular} \end{table} \begin{table} \centering \contcaption{B Magnitudes from Harvard Plates.} \label{tab:continued} \begin{tabular}{llll} \hline Star & Julian Date & B (mag) & Plate\\ \hline V348 Sgr & 2419205 & 11.4 & AM7457 \\ V348 Sgr & 2419205 & 11.5 & AM7458 \\ V348 Sgr & 2419234 & 11.9 & AM7549 \\ V348 Sgr & 2419562 & 12.0 & AM8301 \\ V348 Sgr & 2419563 & 12.0 & AM8307 \\ V348 Sgr & 2419594 & 11.8 & AM8423 \\ V348 Sgr & 2419605 & 12.1 & AM8492 \\ V348 Sgr & 2419618 & 11.9 & AM8522 \\ V348 Sgr & 2419633 & 11.6 & AM8559 \\ V348 Sgr & 2422084 & 12.1 & AM14682 \\ V348 Sgr & 2422152 & 12.1 & MC16930 \\ V348 Sgr & 2422152 & 12.0 & MC16931 \\ V348 Sgr & 2422515 & 12.2 & MC16838 \\ V348 Sgr & 2422517 & 11.8 & AM15294 \\ V348 Sgr & 2422581 & 11.9 & AM15486 \\ V348 Sgr & 2422582 & 11.5 & MF07104 \\ V348 Sgr & 2423179 & 11.8 & AM15745 \\ V348 Sgr & 2423182 & 11.6 & AM15767 \\ V348 Sgr & 2423192 & 11.9 & A11979 \\ V348 Sgr & 2423195 & 11.8 & AM15795 \\ V348 Sgr & 2423210 & 11.5 & AM15842 \\ V348 Sgr & 2423223 & 11.7 & AM15869 \\ V348 Sgr & 2423236 & 11.8 & AM15909 \\ V348 Sgr & 2423248 & 11.5 & AM15931 \\ V348 Sgr & 2423249 & 11.6 & AM15934 \\ V348 Sgr & 2423347 & 11.9 & AM16120 \\ V348 Sgr & 2423663 & 11.5 & AM16382 \\ V348 Sgr & 2425706 & 12.0 & RB228 \\ V348 Sgr & 2425746 & 12.1 & RB334 \\ V348 Sgr & 2425751 & 11.9 & RB345 \\ V348 Sgr & 2425778 & 11.9 & RB402 \\ V348 Sgr & 2425798 & 11.9 & RB434 \\ V348 Sgr & 2426802 & 11.7 & RB2554 \\ V348 Sgr & 2426810 & 11.9 & RB2611 \\ V348 Sgr & 2426871 & 11.8 & RB2851 \\ V348 Sgr & 2426872 & 11.8 & RB2869 \\ V348 Sgr & 2427901 & 12.1 & RB6045 \\ V348 Sgr & 2428013 & 11.7 & RB6313 \\ V348 Sgr & 2428035 & 11.9 & AM17031 \\ V348 Sgr & 2428041 & 11.6 & AM17056 \\ V348 Sgr & 2429485 & 11.9 & RB9039 \\ V348 Sgr & 2429732 & 12.0 & RB9488 \\ V348 Sgr & 2430095 & 11.8 & RB10453 \\ V348 Sgr & 2430107 & 11.9 & AM21531 \\ V348 Sgr & 2430110 & 12.1 & AM21549 \\ V348 Sgr & 2430111 & 11.8 & RB10532 \\ V348 Sgr & 2430111 & 12.0 & RB10534 \\ V348 Sgr & 2430111 & 11.8 & RB10535 \\ V348 Sgr & 2430111 & 11.7 & RB10540 \\ V348 Sgr & 2430113 & 
11.9 & RB10560 \\ V348 Sgr & 2430113 & 11.8 & RB10566 \\ V348 Sgr & 2430113 & 11.6 & RB10568 \\ V348 Sgr & 2430118 & 11.8 & RB10585 \\ V348 Sgr & 2430120 & 11.8 & RB10593 \\ V348 Sgr & 2430121 & 12.1 & RB10598 \\ V348 Sgr & 2430136 & 11.9 & RB10666 \\ V348 Sgr & 2430137 & 11.5 & RB10668 \\ V348 Sgr & 2430139 & 11.7 & RB10680 \\ V348 Sgr & 2430140 & 11.9 & RB10692 \\ V348 Sgr & 2430141 & 11.4 & RB10698 \\ V348 Sgr & 2430141 & 11.6 & RB10699 \\ V348 Sgr & 2430153 & 12.0 & RB10768 \\ V348 Sgr & 2430162 & 11.6 & RB10827 \\ V348 Sgr & 2430163 & 11.8 & RB10828 \\ \hline \end{tabular} \end{table} \begin{table} \centering \contcaption{B Magnitudes from Harvard Plates.} \label{tab:continued} \begin{tabular}{llll} \hline Star & Julian Date & B (mag) & Plate\\ \hline V348 Sgr & 2430163 & 11.7 & RB10829 \\ V348 Sgr & 2430220 & 11.7 & AM21975 \\ V348 Sgr & 2430221 & 11.9 & AX4088 \\ V348 Sgr & 2430299 & 11.9 & RB11123 \\ V348 Sgr & 2431158 & 12.1 & RB12537 \\ V348 Sgr & 2431172 & 11.6 & AM23552 \\ HV 2671 & 2413878 & 16.2 & A2172 \\ HV 2671 & 2414253 & 15.4 & B20843 \\ HV 2671 & 2416398 & 15.4 & B32728 \\ HV 2671 & 2416817 & 16.0 & A7098 \\ HV 2671 & 2423466 & 16.3 & A12286 \\ HV 2671 & 2423487 & 16.1 & A12288 \\ HV 2671 & 2423683 & 16.2 & A12699 \\ HV 2671 & 2423684 & 15.8 & A12700 \\ HV 2671 & 2423707 & 16.1 & A12788 \\ HV 2671 & 2423733 & 16.0 & A12830 \\ HV 2671 & 2423735 & 16.1 & A12834 \\ HV 2671 & 2423738 & 16.1 & A12848 \\ HV 2671 & 2423739 & 16.2 & A12851 \\ HV 2671 & 2423741 & 15.5 & A12855 \\ HV 2671 & 2423753 & 16.2 & A12865 \\ HV 2671 & 2425941 & 16.3 & A14366 \\ HV 2671 & 2426309 & 16.0 & A15041 \\ HV 2671 & 2426309 & 15.5 & MF15038 \\ HV 2671 & 2426322 & 16.3 & A15064 \\ HV 2671 & 2426328 & 15.9 & A15075 \\ HV 2671 & 2426335 & 16.1 & A15087 \\ HV 2671 & 2426410 & 16.1 & A15233 \\ HV 2671 & 2426412 & 15.9 & A15250 \\ HV 2671 & 2426413 & 16.1 & A15254 \\ HV 2671 & 2426414 & 16.2 & A15256 \\ HV 2671 & 2426421 & 16.1 & A15264 \\ HV 2671 & 2426426 & 16.1 & A15266 \\ HV 2671 & 2426441 & 16.1 & A15278 \\ HV 2671 & 2426444 & 16.2 & A15287 \\ HV 2671 & 2426452 & 16.3 & A15293 \\ HV 2671 & 2426453 & 16.1 & A15298 \\ HV 2671 & 2426454 & 15.9 & A15303 \\ HV 2671 & 2426455 & 16.1 & A15308 \\ HV 2671 & 2426456 & 16.3 & A15314 \\ HV 2671 & 2426566 & 16.3 & A15631 \\ HV 2671 & 2426568 & 16.4 & A15651 \\ HV 2671 & 2426569 & 16.3 & A15661 \\ HV 2671 & 2426572 & 16.4 & A15680 \\ HV 2671 & 2426573 & 16.2 & A15686 \\ HV 2671 & 2426578 & 16.4 & A15703 \\ HV 2671 & 2426606 & 15.2 & MF16077 \\ HV 2671 & 2426608 & 15.3 & MF16082 \\ HV 2671 & 2426636 & 16.4 & A15806 \\ HV 2671 & 2426657 & 15.7 & MF16170 \\ HV 2671 & 2426679 & 16.4 & A15838 \\ HV 2671 & 2426680 & 15.6 & MF16250 \\ HV 2671 & 2426684 & 16.3 & A15847 \\ HV 2671 & 2426687 & 16.0 & A15851 \\ HV 2671 & 2426687 & 15.7 & MF16282 \\ HV 2671 & 2426690 & 16.2 & A15858 \\ HV 2671 & 2426710 & 16.3 & A15872 \\ HV 2671 & 2426710 & 15.5 & MF16324 \\ HV 2671 & 2426720 & 15.4 & MF16389 \\ HV 2671 & 2426802 & 15.6 & MF16591 \\ HV 2671 & 2426931 & 15.3 & B56513 \\ HV 2671 & 2426946 & 15.6 & B56559 \\ HV 2671 & 2426947 & 16.2 & A16203 \\ HV 2671 & 2426950 & 15.7 & B56593 \\ \hline \end{tabular} \end{table} \begin{table} \centering \contcaption{B Magnitudes from Harvard Plates.} \label{tab:continued} \begin{tabular}{llll} \hline Star & Julian Date & B (mag) & Plate\\ \hline HV 2671 & 2426956 & 16.2 & B56627 \\ HV 2671 & 2426957 & 15.7 & B56637 \\ HV 2671 & 2426978 & 16.2 & A16254 \\ HV 2671 & 2427311 & 15.9 & A16561 \\ HV 2671 & 2427749 & 16.3 & A17232 \\ HV 2671 & 
2427777 & 16.0 & A17258 \\ HV 2671 & 2427800 & 16.4 & A17287 \\ HV 2671 & 2427800 & 16.3 & A17288 \\ HV 2671 & 2427800 & 16.2 & A17289 \\ HV 2671 & 2427800 & 16.2 & A17290 \\ HV 2671 & 2427800 & 16.3 & A17291 \\ HV 2671 & 2427801 & 16.3 & A17295 \\ HV 2671 & 2427802 & 15.8 & A17298 \\ HV 2671 & 2427807 & 16.2 & A17302 \\ HV 2671 & 2427807 & 16.2 & A17303 \\ HV 2671 & 2427808 & 16.1 & A17307 \\ HV 2671 & 2427808 & 16.4 & A17308 \\ HV 2671 & 2427808 & 16.4 & A17309 \\ HV 2671 & 2427808 & 16.0 & A17311 \\ HV 2671 & 2427811 & 16.2 & A17315 \\ HV 2671 & 2429584 & 16.5 & A21491 \\ HV 2671 & 2429606 & 16.2 & B65009 \\ HV 2671 & 2429671 & 16.1 & B65083 \\ HV 2671 & 2429674 & 16.3 & A21606 \\ HV 2671 & 2429690 & 16.2 & A21621 \\ HV 2671 & 2429879 & 15.9 & B65919 \\ HV 2671 & 2429905 & 16.0 & A22207 \\ HV 2671 & 2429934 & 16.3 & A22269 \\ HV 2671 & 2429939 & 16.1 & A22277 \\ HV 2671 & 2429956 & 16.2 & A22305 \\ HV 2671 & 2429970 & 16.1 & A22330 \\ HV 2671 & 2429994 & 16.0 & A22340 \\ HV 2671 & 2430023 & 16.0 & B66141 \\ HV 2671 & 2430045 & 15.7 & MF28653 \\ HV 2671 & 2430057 & 16.2 & MF22404 \\ HV 2671 & 2430058 & 16.2 & A22409 \\ HV 2671 & 2430080 & 15.9 & B66300 \\ HV 2671 & 2430101 & 15.7 & MF28870 \\ HV 2671 & 2430110 & 15.8 & MF28953 \\ HV 2671 & 2430111 & 16.3 & MF28967 \\ HV 2671 & 2430112 & 16.2 & MF28976 \\ HV 2671 & 2430240 & 16.1 & B67078 \\ HV 2671 & 2430264 & 15.7 & B67149 \\ HV 2671 & 2430314 & 16.2 & A22980 \\ HV 2671 & 2430314 & 16.0 & B67253 \\ HV 2671 & 2430315 & 16.3 & A22987 \\ HV 2671 & 2430318 & 16.0 & A22992 \\ HV 2671 & 2430318 & 16.2 & A22994 \\ HV 2671 & 2430319 & 16.1 & A22995 \\ HV 2671 & 2430320 & 16.3 & A23002 \\ HV 2671 & 2430322 & 16.1 & A23007 \\ HV 2671 & 2430323 & 16.1 & A23008 \\ HV 2671 & 2430324 & 16.0 & A23011 \\ HV 2671 & 2430325 & 16.0 & A23017 \\ HV 2671 & 2430328 & 16.1 & A23020 \\ HV 2671 & 2430372 & 16.0 & B67322 \\ HV 2671 & 2430373 & 16.2 & A23044 \\ HV 2671 & 2430373 & 15.9 & B67325 \\ HV 2671 & 2430373 & 16.0 & B67327 \\ HV 2671 & 2430373 & 16.2 & A23046 \\ HV 2671 & 2430373 & 16.0 & A23047 \\ HV 2671 & 2430375 & 15.8 & B67328 \\ HV 2671 & 2430375 & 15.7 & B67330 \\ HV 2671 & 2430586 & 16.1 & A23415 \\ \hline \end{tabular} \end{table} \begin{table} \centering \contcaption{B Magnitudes from Harvard Plates.} \label{tab:continued} \begin{tabular}{llll} \hline Star & Julian Date & B (mag) & Plate\\ \hline HV 2671 & 2430591 & 16.1 & A23424 \\ HV 2671 & 2430591 & 15.9 & B67968 \\ HV 2671 & 2430594 & 16.0 & A23427 \\ HV 2671 & 2430606 & 16.2 & A23430 \\ HV 2671 & 2430621 & 16.1 & A23450 \\ HV 2671 & 2430621 & 15.8 & B68040 \\ HV 2671 & 2430625 & 16.1 & A23453 \\ HV 2671 & 2430640 & 16.3 & A23458 \\ HV 2671 & 2430641 & 16.3 & A23462 \\ HV 2671 & 2430642 & 16.0 & A23466 \\ HV 2671 & 2430648 & 16.1 & A23471 \\ HV 2671 & 2430666 & 16.1 & A23485 \\ HV 2671 & 2430673 & 16.2 & A23490 \\ HV 2671 & 2430696 & 16.3 & A23502 \\ HV 2671 & 2430713 & 16.2 & A23513 \\ HV 2671 & 2430749 & 15.9 & A23528 \\ HV 2671 & 2430750 & 16.2 & A23530 \\ HV 2671 & 2430766 & 15.4 & MF31282 \\ HV 2671 & 2430767 & 16.2 & A23570 \\ HV 2671 & 2430782 & 15.6 & MF31352 \\ HV 2671 & 2430791 & 15.1 & MF31364 \\ HV 2671 & 2430792 & 15.3 & MF31381 \\ HV 2671 & 2430793 & 15.5 & MF31390 \\ HV 2671 & 2430809 & 15.7 & B68351 \\ HV 2671 & 2431804 & 16.3 & A25189 \\ HV 2671 & 2431814 & 15.5 & B71365 \\ HV 2671 & 2431817 & 15.8 & MF35012 \\ HV 2671 & 2431823 & 16.1 & A25194 \\ HV 2671 & 2431873 & 16.2 & A25218 \\ HV 2671 & 2431874 & 16.0 & B71427 \\ HV 2671 & 2432070 & 16.3 & B72205 \\ HV 2671 & 2432070 & 
16.3 & A25565 \\ HV 2671 & 2432940 & 15.6 & A26696 \\ HV 2671 & 2433161 & 15.9 & A26976 \\ HV 2671 & 2433181 & 16.2 & A26998 \\ \hline \end{tabular} \end{table} The archived magnitudes from the AAVSO are freely available on-line. The AAVSO B-band measures are all taken with CCDs, with photometric uncertainties of 0.03 mag or better. Observers are identified with a three-letter designation, with HMB being Dr. Franz-Josef Hambsch in Belgium, DSI being Giorgio di Scala in Australia, and SXN being Michael Simonson in the United States. These are all calibrated with APASS comparison stars, and are thus in the Johnson B system. I have also pulled a variety of magnitudes from the literature, and these are all CCD measures. (The one exception is the single magnitude from Herbig in 1964 for MV Sgr.) These have been calibrated ultimately from the Landolt fields, and thus are also in the Johnson B system. Intercomparison of modern published B magnitudes always shows that different sources disagree with each other by up to $\sim$0.1 mag, even for known-constant stars and for effectively simultaneous measures of slow variable stars. This is likely due to different color terms and calibrations between observers. Within each literature source, the quoted error bars are usually $\sim$0.01 mag, but these are always measurement errors and do not include systematic errors that will appear as a constant offset for each source. Fortunately, the hot RCB stars show variations that are greatly larger than these usual calibration problems, so the existence and slope of the trends remain unaffected. In all, to within the normal errors, all the literature magnitudes are on the modern Johnson B system. The archival magnitudes in the literature (Hoffleit 1930; 1958; 1959; Woods 1928) are not used, because all have big photometric differences from the modern B magnitude system due to problems with the comparison star sequences, as was universal for the era. I have examined the exact same plates at Harvard, plus many more, all on the modern Johnson B system, so my magnitudes now supersede the old ones in the literature. \begin{figure} \includegraphics[width=1.1\columnwidth]{Fig1.pdf} \caption{V348 Sgr in B from AAVSO in 2014-2015. The observer was Dr. Franz-Josef Hambsch, in Belgium with a 14-inch telescope. This Johnson B light curve has 483 points, of which 164 magnitudes have been selected as representing the star at maximum light, while Hambsch also has Johnson V magnitudes on all the same nights. This light curve illustrates that the complete recovery from a dip only asymptotically approaches some presumably dust-free maximum. It also illustrates that the time duration when the star is in a dip but just below maximum is a very small fraction of the time. A further point is that we can confidently measure the magnitudes at maximum to better accuracy than the maximum can be defined. The main point of this figure is that the recent maximum of V348 Sgr is close to B=12.93, whereas the Harvard plates show a maximum light around B=11.8 over a century earlier, with this being proof of a secular decline.} \label{fig:Fig. 1} \end{figure} A substantial problem is to select out the magnitudes taken when the star is at maximum light. Part of the problem is that the brightness recovery from a decline is only asymptotic, so all magnitudes will still have some residual dust to varying degrees. This is illustrated in Fig. 1, where V348 Sgr never quite recovers completely to some dust-free maximum brightness.
An adequate solution is simply that this effect should be the same for old and new magnitudes, so there should be negligible effect on any trends. The biggest part of the problem is that most of the magnitudes are isolated in time, so we cannot recognize whether the star is at maximum or is in a dip. Magnitudes greatly fainter than some maximum are easily recognized and rejected, but magnitudes from the start or end of a dip, with the brightness only somewhat below the true maximum, can be included, resulting in an apparently fainter maximum. The inclusion of more or fewer in-decline magnitudes will make the star's maximum appear to be fainter or brighter. Fortunately, this problem is minimized by several means. First, dips are deep with fast drop-offs, so there will be only a small fraction of the time during which the star will be close-but-below maximum light. That is, contaminated magnitudes must be rare and statistically negligible. Second, for DY Cen and MV Sgr, the dips are rare, so there is little opportunity for contamination. Third, I have rejected plates taken near times of known dips, whether or not the plate shows the star near a maximum. Fourth, for the AAVSO light curves, there is a high density of observations, so that dips can be easily recognized (e.g., see Fig. 1) and avoided. Fifth, in generating a light curve at maximum, the effect of including magnitudes in dips will only matter for measuring secular fading if the early and late measures have different inclusion fractions for dip-magnitudes, and this does not seem plausible. In general, operationally, when I have no additional information, I have tossed out magnitudes if they are more than a magnitude fainter than the maximum for that star and decade. There is a plausible chance that inclusion of the initial and final parts of dips has slightly lowered some of the averages over time. In general, the problem is likely to be minimal in the averages, and certainly the effect is smaller than the trends observed. Thus, I conclude that this problem is not a significant contributor to the observed trends for any of the hot RCB stars. DY Cen has had no dip from 1960 to 2016, as shown by the densely-sampled light curves from the Royal Astronomical Society of New Zealand and from the AAVSO (De Marco et al. 2002). With the Harvard plates, I can extend this back to 1935, although the interval from 1954 to 1960 is poorly covered due to the Menzel Gap. Before 1930, Hoffleit (1930) identified four dust dips with the Harvard plates, while I have added further dips. The known dips are in 1897, 1901, 1904.4, 1906.3-1908.5, 1914.5, 1915.3, 1918.5, 1924.1-1924.6, 1929.2-1929.5, 1931.2, 1932.6 and 1934.2. The coverage of the dips is patchy, but it appears that the durations are a few months, other than the cases noted. Further dips are likely to have occurred, mainly during the part of the year when DY Cen is closest to the Sun. We are left with a stark situation where DY Cen has many dust dips from 1895 to 1934, but none from 1935 to 2016. With this, I have constructed a maximum light curve for each of the hot RCB stars with Harvard, AAVSO, and literature magnitudes, all in the Johnson B system. A simple plot of all these magnitudes shows the usual scatter, with this somewhat hiding secular trends. To solve this, I have averaged the magnitudes by source and time interval. The one-sigma uncertainty is taken to be the RMS scatter in the magnitudes divided by the square root of the number of magnitudes.
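To make this averaging, and the linear decline-rate fits used below, concrete, here is a minimal sketch; the epochs, magnitudes, and uncertainties are invented placeholders, not the measured values of Table 2.
\begin{verbatim}
import numpy as np

def bin_average(mags):
    """Average of magnitudes from one source and time interval; the
    one-sigma uncertainty is the RMS scatter divided by sqrt(N)."""
    m = np.asarray(mags, float)
    return m.mean(), m.std(ddof=1) / np.sqrt(m.size)

def decline_rate(years, b_mags, sigmas):
    """Weighted (chi-square) linear fit B = a + s*t, with t in
    centuries, so the slope s is in magnitudes per century."""
    t = (np.asarray(years, float) - 1950.0) / 100.0
    w = 1.0 / np.asarray(sigmas, float) ** 2
    A = np.vstack([np.ones_like(t), t]).T
    a, s = np.linalg.solve(A.T @ (A * w[:, None]),
                           A.T @ (w * np.asarray(b_mags, float)))
    return s

# Invented example epochs (placeholders only):
print(bin_average([12.7, 12.6, 12.8, 12.6]))
years, b_mags = [1905, 1932, 1985, 2014], [12.8, 12.3, 13.1, 13.8]
sigmas = [0.10, 0.06, 0.02, 0.09]
print(f"{decline_rate(years, b_mags, sigmas):.2f} mag per century")
\end{verbatim}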
These averages are presented in Table 2 and Fig. 2. We see that all four hot RCB stars have an obvious secular decline of roughly one magnitude per century. This provides a direct confirmation of the result in De Marco et al. (2002). The light curves show roughly linear declines. There is substantial scatter around these linear declines, and it is unclear whether this is due to real variation in the star's maximum light, due to inclusion of magnitudes just below maximum, or due to measurement error. The secular decline need not be linear or even monotonic. We can quantify the secular decline by an average decline rate, derived from a linear fit. I have made chi-square fits for a linear decline to the light curves in Table 2. The resultant fits have reduced chi-square values that are much greater than unity, pointing to the variations around the simple straight line being much larger than the nominal error bars. As such, the formal one-sigma error bars from the chi-square fits for the slope are not meaningful. The fitted slopes are 1.15 for DY Cen, 1.29 for MV Sgr, 1.29 for V348 Sgr, and 0.73 for HV 2671, all in units of magnitudes per century. For these four slopes, the average is 1.11 magnitude per century, with an RMS of 0.26 magnitude per century. The calculation of an averaged linear slope makes no implication that any of the stars has a constant linear slope, nor that the stars all have the same linear slope. Indeed, for DY Cen, the light curve appears to be more of a parabola than a line. Further, the hot RCB stars are apparently a diverse class, so an average decline rate will be some sort of mixture of rates for stars with different histories and masses. Still, the average fade rate of 1.11$\pm$0.26 magnitude per century has utility in expressing the typical decline rate and its variations, in quantitatively showing that the hot RCB stars are fading fast, and in providing a representative rate for model calculations. While only in one band, the AAVSO visual light curves are long enough and accurate enough that we can get an independent measure of the secular fading rate. For DY Cen, 6438 visual magnitudes cover the time from April 1978 to October 2015 with no dips, with an average decline rate of 1.87 magnitudes per century. For V348 Sgr, the frequent dips make it harder to pick out a decline by eye from the full visual light curve, yet the maximum magnitudes are around 11.5 in the 1950s and around 12.2 for the last decade, for a decline rate of approximately 1.3 magnitudes per century. \begin{table} \centering \caption{Hot RCB star light curves.} \label{tab:table2} \begin{tabular}{llll} \hline Star & Years & $\langle$B$\rangle$ (mag) & Source\\ \hline DY Cen & 1902--1910 & 12.84 $\pm$ 0.10 & HCO (9 plates) \\ DY Cen & 1916--1922 & 12.46 $\pm$ 0.06 & HCO (21 plates) \\ DY Cen & 1931--1932 & 12.33 $\pm$ 0.12 & HCO (9 plates) \\ DY Cen & 1946--1949 & 12.36 $\pm$ 0.10 & HCO (8 plates) \\ DY Cen & 1970 & 12.62 $\pm$ 0.03 & Marino \& Walker (1971) \\ DY Cen & 1972 & 12.70 $\pm$ 0.03 & Sherwood (1975)$^a$ \\ DY Cen & 1983--1989 & 13.51 $\pm$ 0.04 & HCO (23 plates) \\ DY Cen & 1983 & 12.96 $\pm$ 0.02 & Kilkenny et al. (1985) \\ DY Cen & 1985 & 13.03 $\pm$ 0.02 & Goldsmith et al. (1990) \\ DY Cen & 1987 & 13.11 $\pm$ 0.02 & Pollacco \& Hill (1991) \\ DY Cen & 1988 & 13.22 $\pm$ 0.04 & Jones et al. 
(1989) \\ DY Cen & 2006--2007 & 13.45 $\pm$ 0.01 & AAVSO (DSI, 38 mags) \\ DY Cen & 2013--2015 & 13.82 $\pm$ 0.09 & AAVSO (SXN, 25 mags) \\ MV Sgr & 1905 & 12.60 $\pm$ 0.20 & HCO (1 plate) \\ MV Sgr & 1920 & 13.15 $\pm$ 0.20 & HCO (2 plates) \\ MV Sgr & 1929 & 12.70 $\pm$ 0.20 & HCO (3 plates) \\ MV Sgr & 1934--1940 & 12.48 $\pm$ 0.08 & HCO (13 plates) \\ MV Sgr & 1963 & 12.96 $\pm$ 0.10 & Herbig (1964) \\ MV Sgr & 1978--1986 & 13.36 $\pm$ 0.05 & HCO (14 plates) \\ MV Sgr & 1985 & 13.62 $\pm$ 0.03 & Goldsmith et al. (1990) \\ MV Sgr & 2006--2014 & 13.59 $\pm$ 0.01 & AAVSO (DSI, 76 mags) \\ MV Sgr & 2011--2015 & 13.90 $\pm$ 0.01 & AAVSO (SXN, 71 mags) \\ V348 Sgr & 1896--1901 & 11.80 $\pm$ 0.09 & HCO (5 plates) \\ V348 Sgr & 1907--1912 & 11.75 $\pm$ 0.03 & HCO (27 plates) \\ V348 Sgr & 1919--1923 & 11.79 $\pm$ 0.04 & HCO (18 plates) \\ V348 Sgr & 1929--1935 & 11.87 $\pm$ 0.03 & HCO (13 plates) \\ V348 Sgr & 1939--1944 & 11.81 $\pm$ 0.03 & HCO (30 plates) \\ V348 Sgr & 1970 & 12.50 $\pm$ 0.10 & Heck et al. (1985) \\ V348 Sgr & 1972--1974 & 12.78 $\pm$ 0.28 & Heck et al. (1985) \\ V348 Sgr & 1981 & 12.78 $\pm$ 0.01 & Heck et al. (1985) \\ V348 Sgr & 2014--2015 & 12.93 $\pm$ 0.02 & AAVSO (HMB, 164 mags) \\ HV 2671 & 1896--1904 & 15.76 $\pm$ 0.26 & HCO (4 plates) \\ HV 2671 & 1923 & 16.05 $\pm$ 0.10 & HCO (11 plates) \\ HV 2671 & 1929--1935 & 16.07 $\pm$ 0.04 & HCO (63 plates) \\ HV 2671 & 1939--1943 & 16.02 $\pm$ 0.03 & HCO (68 plates) \\ HV 2671 & 1945--1949 & 16.02 $\pm$ 0.08 & HCO (11 plates) \\ HV 2671 & 1993--1999 & 16.75 $\pm$ 0.1 & Alcock et al. (1996) \\ HV 2671 & 2001--2009 & 16.41 $\pm$ 0.1 & Soszynski et al. (2009) \\ \hline \end{tabular} $^a$As quoted in Rao et al. (1993) \end{table} \begin{figure} \includegraphics[width=1.1\columnwidth]{Fig2a.pdf} \includegraphics[width=1.1\columnwidth]{Fig2b.pdf} \caption{Century-long Johnson B light curves for all four known hot RCB stars. The main point of this figure and this paper is that all four known hot RCB stars have obvious and significant secular declines. The thick lines are from the formal chi-square fits, which represent the average secular fading of the stars. The scatter around these best fits is much larger than the nominal error bars, and it is not clear whether this is due to the intrinsic variations of the maximum brightness, the inclusion of just-below-maximum in-dip magnitudes, or ordinary photometric errors. The four panels are for DY Cen, MV Sgr, V348 Sgr, and HV 2671. The fading of DY Cen is apparent only since 1960 or so, whereas the star was {\it brightening} before the 1930s.} \label{fig:Fig. 2} \end{figure} \begin{figure} \includegraphics[width=1.1\columnwidth]{Fig2c.pdf} \includegraphics[width=1.1\columnwidth]{Fig2d.pdf} \label{fig:Fig. 2cd} \end{figure} \section{Discussion} In essence, I have merely confirmed and extended the conclusion of De Marco et al. (2002) that the hot RCB stars are secularly fading. My improvements have been to use a single photometric system all with modern comparison stars, to collect many more magnitudes over a much wider time range, and to measure the decline rate for the fourth hot RCB star. De Marco et al. (2002) have interpreted these secular declines as being due to the star evolving to hotter temperature at constant luminosity, such that the bolometric correction to the optical band makes for an apparent dimming.
(The only other plausible explanation is some sort of general increase in the circumstellar dust density, but that would lead to color changes that are not observed in the cases of DY Cen and MV Sgr.) This interpretation matches the general idea that the hot RCB stars are moving horizontally across the top of the HR diagram as part of their normal and fast evolution. Pandey et al. (2014) have explicitly tested this interpretation for DY Cen, where archival spectra give surface temperatures of 19,400$\pm$400 K in 1987, 23,000$\pm$300 K in 2002, and 24,800$\pm$600 K in 2010. This is 5,400 K in 23 years, or 23,500 K per century. This increase in the stellar temperature is confirmed and reflected in the dramatic change in the excitation of the nebula around DY Cen (Rao et al. 2013). We can translate this rate of temperature change for DY Cen into a magnitude decline rate. The calculation of the change in bolometric corrections and the change of B-V color is presented in Fig. 1 of Pandey et al. (2014) for the relevant conditions. For a temperature of 19,400 K, they give V=12.78 and B-V=-0.80 (with an arbitrary zero point), for B=11.98. For a temperature of 24,800 K, they give V=13.38 and B-V=-0.85 (with the same arbitrary zero point), for B=12.53. Thus, the observed temperature rise over 23 years corresponds to a fading by 0.55 mag, for a decline rate of 2.39 magnitudes per century. This is close to the average decline rate for the years 1983 to 2015 (see Fig. 2a). So the observed temperature change is consistent with the observed decline rate. For the evolution of DY Cen going back in time, the temperature must be relatively low at early times, resulting in a large bolometric correction. A simple extrapolation back to 1905 puts the temperature near zero, so the temperature changes cannot be linear with time. Nevertheless, the temperature back in 1905 should be relatively quite cold. The bolometric correction for the B band is minimized for a stellar temperature of 7,500 K, so that for evolution at constant luminosity, the B magnitude will be brightest at that temperature and dimmer as the temperature departs from this value to both hotter and colder temperatures. So we then have a ready interpretation of the long-term trend in the maximum magnitude (Fig. 2a), with DY Cen starting out in 1906 colder than 7,500 K, heating up to 7,500 K in 1932 when the star was at its brightest, with the continued heating of the star making it dim over the following decades. The correction from a constant luminosity to the B band can be taken from Table 15.7 of Cox et al. (2000), where the minimum correction is at 7,500 K, and where corrections of 0.6 mag occur for temperatures of 5,800 K and 11,000 K. With this, DY Cen had temperatures of 5,800 K around 1906, 7,500 K around 1932, and 11,000 K around the 1970s. So we now have a simple explanation for why the DY Cen maxima back around 1906 were substantially dimmer than around 1932. In all, a continuous temperature increase from 5,800 K in 1905 to 24,800 K in 2010 can account for both the observed change in maximum magnitudes and the observed changes in the temperature. I have made a crude model that accounts for the stellar temperature and maximum magnitude as a function of time. From the models of Saio (1988), I take the logarithm of the temperature to be linear with time, with this being approximately right for a given star under hot RCB conditions. I then set the linear relation by using the observed temperature in 2010 plus the 7,500 K condition for 1932.
With these temperatures, I get the bolometric corrections and B-V colors for supergiants from Cox et al. (2000), and add a constant to get the B-band magnitude outside of a decline. This model light curve is displayed in Fig. 3. This model result is not perfect, with the worst problem being that the bolometric correction for 24,800 K should make DY Cen close to 2.0 mag fainter in 2010 than in 1932, whereas it is observed to be more like 1.3 mag fainter. This problem is easily solved if there is extra light in the DY Cen system, perhaps from a wide binary companion or from the circumstellar material. Nevertheless, it is clear that the model captures the essence of a normal RCB star heating up from around 5,800 K in 1906 to 24,800 K in 2010. \begin{figure} \includegraphics[width=1.1\columnwidth]{Fig3.pdf} \caption{DY Cen evolution from normal RCB to hot RCB to an extreme helium star. Over the last century, the maximum brightness first brightened, came to a peak around 1932, then started a secular decline continuing to today. The temperature is observed to go from 19,400 K in 1987 to 24,800 K in 2010. This is all consistent with the expected evolution from right to left across the HR diagram at constant luminosity. DY Cen started in 1906 with a temperature of near 5,800 K as a normal RCB star, and rapidly heated up. As it heated up, the bolometric correction lessened, making the star appear brighter, until around 1932 when the bolometric correction is at its smallest for a temperature near 7,500 K. As the star kept heating up, the bolometric correction got larger, making the maximum magnitude dim, with this continuing to today. A crude model of this is shown here, with the logarithm of the temperature assumed to change linearly with time, for comparison with the observed peak magnitude. The frequency of RCB dips has also changed, with many known dips before 1934, but none known after 1934. All known dips are displayed on the model light curve.} \label{fig:Fig. 3} \end{figure} So we have actually watched DY Cen start out as an ordinary RCB star (with temperature around 5,800 K), then heat up to become a hot RCB star, and now appear as an extreme helium star with no dust dips. This evolution has taken close to one century. So we have a real measure of the duration of the hot RCB stage, and it is about one century. This is a very fast phase of evolution. This explains why so few hot RCB stars have been seen in our Milky Way galaxy, despite their being supergiant stars. The heating up of DY Cen is also associated with a sharp drop-off in the frequency of dips. De Marco et al. (2002) note that DY Cen had only four known dips from 1897 to 1927, and zero known dips since 1960. With the Harvard plates, I have identified eight additional dips, all from 1904 to 1934. Apparently, the heating of the star's surface is connected with the turn-off of the dust formation, perhaps caused by a stoppage of pulsations as the star leaves some instability strip. We realize that there is only a narrow time window over which the hot RCB phenomenon can be recognized, with only a few decades from the time when DY Cen was sufficiently hotter than the upper limit for normal RCB stars up until the time when the dust dips turn off. We can translate the typical decline rate of 1.11 magnitude per century into a temperature change rate. For a case with an effective surface temperature of 15,000 K, Pandey et al. (2014) give V=12.15 and B-V=-0.75 (with the same arbitrary zero point), for B=11.40.
For a temperature of 20,000 K, they give V=12.88 and B-V=-0.79, for B=12.09. For a 5,000 K temperature change over the range for hot RCB stars, the B magnitude changes by 0.69 magnitudes. If this change happens over 62 years, then the B magnitude will fade at the rate of 1.11 magnitudes per century. DY Cen is similar to the extreme helium stars (supergiants composed mostly of helium with near one percent carbon and temperatures of 9,000--35,000 K). Jeffery et al. (2001) found that four out of twelve extreme helium stars are heating up at rates from 20 to 120 K per year. (A useful program would be to search for B-band brightness changes from the 1890s to the present with the Harvard plates for the two stars with the fastest temperature changes: HD 160641 and BD $-1\degr$3438.) Such surface temperature changes are expected from models of extreme helium stars with masses of $\sim$0.9 M$_{\odot}$ (Saio 1988). The majority of stars with no measured change in surface temperature are presumably less massive, perhaps $\sim$0.7 M$_{\odot}$. DY Cen is changing at a rate of 235 K per year from 1987 to 2010. If the models of Saio (1988) are applicable to DY Cen, then this star would be $\sim$1.0 M$_{\odot}$. With this understanding of how some `cold' RCB stars should evolve on a time scale of a century, we can look for the same changes amongst the known normal RCB stars. That is, some of the normal RCB stars should be heating up, with their maximum magnitudes getting brighter and their frequency of declines falling to near zero. But such changes have never been seen for any star that is now a `cold' RCB star. A small number of cold RCB stars have century-long light curves with no apparent change in their brightness at maximum, while R CrB itself has a 230-year record of unchanging peaks. So the heating up of the cold RCB stars must usually be too slow to produce observable effects. Still, some fraction of the now-cold RCB stars might be like DY Cen a century ago. Perhaps only the most-massive cold RCB stars will be evolving fast enough for the changes to be detectable. A practical plan to search for fast-evolving cold RCB stars is to construct a century-long light curve for many of them. This could show secular changes in the magnitude at maximum as well as in the frequencies of declines. In practice, the primary sources are archival data from AAVSO and Harvard. In any such study, care must be used to place all magnitudes onto a consistent magnitude system. (For example, old AAVSO magnitudes will require corrections for changes in the comparison sequences, and these can only be obtained from old charts archived at AAVSO Headquarters.) A substantial problem in seeking changes in the decline frequency will be to adjust for the variations in time coverage over the decades. With this, we have a plan for someone to make a systematic survey of century-long light curves for normal RCB stars so as to measure their evolution across the HR diagram. In general, stars evolve on such slow time scales that astronomers have not been able to see the changes over time. Other than for supernovae, evolutionary changes have only been seen for a few post-AGB stars, including the born-again stars and the Stingray (Schaefer \& Edwards 2015). Now, we can add the four hot RCB stars, with observed temperature rises of 8,000 K or more over the last century.
\section{Introduction}\label{sec:sec:QECIntro} The micro-computer revolution of the late 20th century has arguably been of greater impact to the world than any other technological revolution in history. The advent of transistors, integrated circuits, and the modern microprocessor has spawned literally hundreds of devices from pocket calculators to the iPod, all now integrated through an extensive worldwide communications system. However, as we enter the 21st century, the rate at which computational power is increasing is driving us very quickly to the realm of quantum physics. The component sizes of individual transistors on modern microprocessors are becoming so small that quantum effects will soon begin to dominate over classical electronic properties. Unfortunately, the current designs for micro-electronics mean that quantum mechanical behavior will tend to result in unpredictable and unwanted behavior. Therefore, we have two choices: to keep trying to suppress quantum effects in classically fabricated electronics, or to move to the field of quantum information processing (QIP), where we instead exploit them. This leads to a paradigm shift in the way we view and process information and has led to considerable interest from physicists, engineers, computer scientists and mathematicians. The counter-intuitive and strange rules of quantum physics offer enormous possibilities for information processing, and the development of a large scale quantum computer is the holy grail of many groups worldwide. While the advent of Shor's algorithm~\cite{S94} certainly spawned great interest in quantum information processing and demonstrated that the utilization of a quantum computer could lead to algorithms far more efficient than those used in classical computing, there was a great deal of debate surrounding the practicality of building a large scale, controllable, quantum system. It was well known even before the introduction of quantum information that coherent quantum states were extremely fragile, and many believed that maintaining large, multi-qubit, coherent quantum states for a long enough time to complete {\em any} quantum algorithm was unrealistic~\cite{U95}. Additionally, classical error correction techniques are intrinsically based on a digital framework; it was therefore unclear whether the vast amount of knowledge gained from classical coding theory could be adapted to the quantum regime, where the {\em readout} of qubits is digital but the actual manipulations are analogue. Starting in 1995, several papers appeared, in rapid succession, proposing codes which were appropriate to perform error correction on quantum data~\cite{S95,S96,CS96,LMPZ96}. This was the last theoretical aspect needed to convince the general community that quantum computation was indeed a possibility. Since this initial introduction, the progress in this field has been extensive. Initial work on error correction focused heavily on developing quantum codes~\cite{S96++,CG97,G96,PVK96}, introducing a more rigorous theoretical framework for the structure and properties of Quantum Error Correction (QEC)~\cite{KL00,CRSS98,G97,KLV99,KLP05} and the introduction of concepts such as fault-tolerant quantum computation~\cite{S96+,DS96,G98}, which leads directly to the threshold theorem for concatenated QEC~\cite{KLZ96,AB97}.
In more recent years QEC protocols have been developed for various systems, such as continuous variables~\cite{LS98,B98,L08,ATKYBLF08}, ion-traps and other systems containing motional degrees of freedom~\cite{LW03+,ST98}, adiabatic computation~\cite{JFS06} and globally controlled quantum computers~\cite{BBK03}. Additionally, work still continues on not only developing more complicated (and in some ways, more technologically useful) protocols such as subsystem codes~\cite{B06} and topological codes~\cite{K97,DKLP02,RHG07,FSG08}, but also advanced techniques to implement error correction in a fault-tolerant manner~\cite{S97,S02,DA07}. Along with QEC, other methods of protecting quantum information were also developed. These other techniques would technically be placed in a separate category of error avoidance rather than error correction. The most well known error avoidance techniques are protocols such as decoherence free subspaces (DFS)~\cite{DG97,DG98,ZR97,ZR97+,DG98+,LW03}. While this protocol has the mathematical structure of a self-correcting quantum code, it is largely a technique to suppress certain, well-structured, noise models. As with QEC, this field of error avoidance is vast, now incorporating ideas from optimal control to create specially designed control sequences to counteract the effect of errors induced by environmental coupling. These new methods of dynamical decoupling range from simple structures such as Bang-Bang control~\cite{VL98,VT98,Z99} to more complicated and generalized protocols to help decouple qubits from the environment~\cite{VKL99,FLP04,VK03,VK05}. This review deals exclusively with the concepts of QEC and fault-tolerant quantum computation. Many papers have reviewed error correction and fault-tolerance~\cite{G97+,NC00,G00,KLABVZ02,S03+,G09}; however, to cater to a large audience, we attempt to describe QEC and fault-tolerance in a much more basic manner, largely through examples. Rather than providing a more rigorous review of error correction, we try to focus on the more practical issues involved when working with these ideas. For those who have recently begun investigating quantum information processing, or those who are focused on other important theoretical and/or experimental aspects related to quantum computing, searching through this enormous collection of work is daunting, especially if a basic working knowledge of QEC is all that is required. We hope that this review of the basic aspects of QEC and fault-tolerance will allow those with little knowledge of the field to quickly become accustomed to the various techniques and tricks that are commonly used. We begin the discussion in section~\ref{sec:prelim} where we share some preliminary thoughts on the required properties of any quantum error correcting protocol. In section~\ref{sec:error} we review some basic noise models from the context of how they influence quantum algorithms. Section~\ref{sec:sec:QEC} introduces quantum error correction through the traditional example of the 3-qubit code, illustrating the circuits used for encoding and correction and why the principle of redundant encoding suppresses the failure of encoded qubits. Section~\ref{sec:sec:QEC} then introduces the stabilizer formalism~\cite{G97+}, demonstrating how QEC circuits are synthesized once the structure of the code is known.
In section~\ref{sec:sec:decoherence} we then briefly return to the noise models and relate the abstract analysis of QEC, where errors are assumed to be discrete and probabilistic, to some of the physical mechanisms which can cause errors. Sections~\ref{sec:Fault-tolerance} and~\ref{sec:operations} introduce the concept of fault-tolerant error correction, the threshold theorem and how logical gate operations can be applied directly to quantum data. We then move on to circuit synthesis in section~\ref{sec:FTcircuit}, presenting a basic fault-tolerant circuit design for logical state preparation using the $[[7,1,3]]$ Steane code as a representative example of how to synthesize fault-tolerant circuits from the stabilizer structure of quantum codes. Finally, in section~\ref{sec:modern} we review specific codes for qubit loss and examine two of the more modern techniques for error correction. We briefly examine quantum subsystem codes~\cite{B06} and topological surface codes~\cite{DKLP02,FSG08} due to both their theoretical elegance and their increasing relevance in quantum architecture designs~\cite{DFSG08}. \section{Preliminaries} \label{sec:prelim} Before discussing specifically the effect of errors and the basics of Quantum Error Correction (QEC), we first run through the very basics of qubits and quantum gates. We assume a basic working knowledge of quantum information~\cite{EJ96,NC00}, and this brief discussion is used simply to define our notation for the remainder of this review. The fundamental unit of quantum information is the qubit, which, unlike classical bits, can exist in coherent superpositions of two states, denoted $\ket{0}$ and $\ket{1}$. These basis states can be photonic polarization, spin states, electronic states of an ion or charge states of superconducting systems. An arbitrary state of an individual qubit, $\ket{\phi}$, can be expressed as, \begin{equation} \ket{\phi} = \alpha\ket{0} + \beta\ket{1} \end{equation} where normalization requires, $|\alpha|^2+|\beta|^2 = 1$. Quantum gate operations are represented by unitary operations acting on the Hilbert space of a qubit array. Unlike classical information processing, conservation of probability for quantum states requires that all operations be reversible and hence unitary. When describing a quantum gate on an individual qubit, any dynamical operation, $G$, is a member of the unitary group $U(2)$, which consists of all $2\times 2$ matrices where $G^{\dagger} = G^{-1}$. Up to a global (and unphysical) phase factor, any single qubit operation can be expressed as a linear combination of the identity and the generators of $SU(2)$ as, \begin{equation} G = c_I \sigma_I + c_x \sigma_x + c_y\sigma_y + c_z\sigma_z \end{equation} where, \begin{equation} \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \end{equation} are the Pauli matrices, $\sigma_I$ is the $2\times 2$ identity matrix and the coefficients $(c_I,c_x,c_y,c_z) \in \mathcal{C}$ satisfy $|c_I|^2+|c_x|^2+|c_y|^2+|c_z|^2 = 1$. The concept of Quantum Error Correction (QEC) is fundamental to the large scale viability of quantum information processing. Although the field is largely based on classical coding theory, there are several issues that need to be considered when transferring classical error correction techniques to the quantum regime.
First, coding based on data-copying, which is extensively used in classical error correction, cannot be used due to the no-cloning theorem of quantum mechanics~\cite{WZ82}. This result implies that there exists no transformation resulting in the following mapping, \begin{equation} U\ket{\phi} \otimes \ket{\psi} = \ket{\phi} \otimes \ket{\phi}, \end{equation} i.e., it is impossible to perfectly copy an unknown quantum state. This means that quantum data cannot be protected from errors by simply making multiple copies. Secondly, direct measurement cannot be used to effectively protect against errors, since this will act to destroy any quantum superposition that is being used for computation. Error correction protocols must therefore be employed which can detect and correct errors without determining {\em any} information regarding the qubit state. Finally, qubits can experience traditional bit errors, $\ket{0} \leftrightarrow \ket{1}$, but unlike classical information they are also susceptible to phase errors, $\ket{1} \leftrightarrow -\ket{1}$. Hence any error correcting procedure needs to be able to simultaneously correct for both. At its most basic level, QEC utilizes the idea of redundant encoding, where quantum data is protected by extending the size of the Hilbert space for a single, logically encoded qubit and essentially spreading out the information over multiple qubits. This way, errors only perturb codeword states by small amounts, which can then be detected and corrected without directly measuring the quantum state of any qubit. \section{Quantum Errors: Cause and Effect}\label{sec:error} Before we even begin discussing the details of quantum error correction, we first examine some of the common sources of errors in quantum information processing and contextualize what they imply for computation. We will consider several important sources of errors and how they influence a trivial, single qubit, quantum algorithm. This trivial algorithm will be a computation consisting of a single qubit, initialized in the $\ket{0}$ state, undergoing $N$ identity operations, such that the final, error-free state is, \begin{equation} \ket{\psi}_{\text{final}} = \prod^N \sigma_I\ket{0} = \ket{0}. \end{equation} Measurement of the qubit in the $\ket{0}$, $\ket{1}$ basis will consequently yield the result 0 with a probability of unity. We examine, independently, several common sources of error through the effect they have on this simple quantum algorithm. Hopefully, this introductory section will show that while quantum errors are complicated physical effects, in QIP the relevant measure is the theoretical success probability of a given quantum algorithm. \subsection{Coherent Quantum Errors: You don't know what you are doing!} The first possible source of error is coherent, systematic control errors. This type of error is typically associated with bad system control and/or characterization, where imprecise manipulation of the qubit introduces inaccurate Hamiltonian dynamics. As this source of error is produced by inaccurate control of the system dynamics, it does not produce mixed states from pure states (i.e. it is a coherent, unitary error that does not destroy the quantum coherence of the qubit, but instead causes you to apply an undesired gate operation). In our trivial algorithm, we are able to model this in several different ways.
To keep things simple, we assume that incorrect characterization of the control dynamics leads to an identity gate which is not $\sigma_I$, but instead introduces a small rotation around the $X$-axis of the Bloch sphere, i.e. \begin{equation} \ket{\psi}_{\text{final}} = \prod^N e^{i\epsilon \sigma_x}\ket{0} = \cos(N\epsilon)\ket{0} + i\sin(N\epsilon)\ket{1}. \end{equation} We now measure the system in the $\ket{0}$, $\ket{1}$ basis. In the ideal case, the computer should collapse to the state $\ket{0}$ with a probability of one, $P(\ket{0})=1$. However, we now find, \begin{equation} \begin{aligned} &P(\ket{0}) = \cos^2(N\epsilon) \approx 1- (N\epsilon)^2, \\ &P(\ket{1}) = \sin^2(N\epsilon) \approx (N\epsilon)^2. \end{aligned} \end{equation} Hence, the probability of error in this trivial quantum algorithm is given by $p_{\text{error}} \approx (N\epsilon)^2$, which will be small given that $N\epsilon \ll 1$. The systematic error in this system grows with both the small systematic over-rotation and the total number of applied identity operations, scaling as $(N\epsilon)^2$. \subsection{Decoherence: The devil is in the environment} Environmental decoherence is another important source of errors in quantum systems. Once again we will take a very basic example of a decoherence model and examine how it influences our trivial algorithm. Later, in section~\ref{sec:sec:decoherence}, we will illustrate a more complicated decoherence model that arises from standard mechanisms. Consider a very simple environment, which is another two-level quantum system. This environment has two basis states, $\ket{e_0}$ and $\ket{e_1}$, which satisfy the orthonormality and completeness relations, \begin{equation} \langle e_i \vert e_j\rangle = \delta_{ij}, \quad \ket{e_0}\bra{e_0} + \ket{e_1}\bra{e_1} = I. \end{equation} We will also assume that the environment couples to the qubit in a specific way. When the qubit is in the $\ket{1}$ state, the coupling flips the environmental state, while if the qubit is in the $\ket{0}$ state nothing happens to the environment. Additionally, as we anticipate the effect of this decoherence model, we will slightly alter our trivial algorithm. Rather than considering a qubit prepared in the $\ket{0}$ state and applying $N$ identity operations, we instead modify the algorithm to the following, \begin{equation} \begin{aligned} \ket{\psi}_{\text{final}} = H\sigma_IH\ket{0} &=H\sigma_I \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \\ &= H \frac{1}{\sqrt{2}}(\ket{0}+\ket{1}) = \ket{0}. \end{aligned} \end{equation} Essentially we are performing two $H \equiv $ Hadamard operations separated by a wait stage, represented by the identity gate. Finally, this model assumes the system/environment interaction only occurs during this wait stage of the algorithm. As with the previous algorithm, we should measure the state $\ket{0}$ with probability one. The reason for modifying our trivial algorithm is that this specific decoherence model acts to reduce coherence between the $\ket{0}$ and $\ket{1}$ basis states, and hence we require a coherent superposition to observe any effect from the environmental coupling.
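(Before completing the decoherence example, the coherent over-rotation result above is simple enough to check numerically. The following is a minimal Python sketch, assuming numpy; it is illustrative only and not part of the formal analysis.)
\begin{verbatim}
# Minimal sketch: N faulty "identity" gates exp(i*eps*X) acting on |0>,
# reproducing p_error ~ (N*eps)^2 for the trivial algorithm above.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
eps, N = 1e-3, 100
U = np.cos(eps) * I2 + 1j * np.sin(eps) * X  # exp(i*eps*sigma_x)
psi = np.array([1, 0], dtype=complex)        # qubit initialized to |0>
for _ in range(N):
    psi = U @ psi
p_error = abs(psi[1]) ** 2                   # P(|1>) = sin^2(N*eps)
print(p_error, (N * eps) ** 2)               # ~1e-2 vs 1e-2
\end{verbatim}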
We now assume that the environment starts in the pure state, $\ket{E} = \ket{e_0}$, and couples to the system such that, \begin{equation} H\sigma_I H\ket{0}\ket{E} = \frac{1}{2}(\ket{0}+\ket{1})\ket{e_0} + \frac{1}{2}(\ket{0}-\ket{1})\ket{e_1} \end{equation} As we are considering environmental decoherence, pure states will be transformed into classical mixtures, hence we now move into the density matrix representation for the state $H\sigma_I H\ket{0}\ket{E}$, \begin{equation} \begin{aligned} \rho_{f} &= \frac{1}{4}( \ket{0}\bra{0} + \ket{0}\bra{1}+\ket{1}\bra{0}+\ket{1}\bra{1})\ket{e_0}\bra{e_0}\\ &+\frac{1}{4}( \ket{0}\bra{0} - \ket{0}\bra{1}-\ket{1}\bra{0}+\ket{1}\bra{1})\ket{e_1}\bra{e_1} \\ &+ \frac{1}{4}( \ket{0}\bra{0} - \ket{0}\bra{1}+\ket{1}\bra{0}-\ket{1}\bra{1})\ket{e_0}\bra{e_1}\\ &+\frac{1}{4}( \ket{0}\bra{0} + \ket{0}\bra{1}-\ket{1}\bra{0}-\ket{1}\bra{1})\ket{e_1}\bra{e_0}. \end{aligned} \end{equation} Since we do not measure the environmental degrees of freedom, we trace over this part of the system, giving, \begin{equation} \begin{aligned} \text{Tr}_{E}(\rho_f) &= \frac{1}{4}( \ket{0}\bra{0} + \ket{0}\bra{1}+\ket{1}\bra{0}+\ket{1}\bra{1})\\ &+\frac{1}{4}( \ket{0}\bra{0} - \ket{0}\bra{1}-\ket{1}\bra{0}+\ket{1}\bra{1}) \\ &= \frac{1}{2}(\ket{0}\bra{0}+\ket{1}\bra{1}). \end{aligned} \end{equation} Measurement of the system will consequently return $\ket{0}$ 50\% of the time and $\ket{1}$ 50\% of the time. This final state is a complete mixture of the qubit states and is consequently a classical system. The coupling to the environment removed all the coherence between the $\ket{0}$ and $\ket{1}$ states, and consequently the second Hadamard transform, intended to rotate $(\ket{0}+\ket{1})/\sqrt{2} \rightarrow \ket{0}$, has no effect. Since we assumed that the system/environment coupling during the wait stage causes the environmental degree of freedom to ``flip" when the qubit is in the $\ket{1}$ state, this decoherence model implicitly incorporates a temporal effect. The temporal interval of our identity gate in the above algorithm is long enough to enact this full controlled-flip operation. If we instead assume a controlled rotation that is not a full flip on the environment, the final mixture will not be 50/50. Instead there would be a residual coherence between the qubit states and an increased probability of our algorithm returning a $\ket{0}$. Section~\ref{sec:sec:decoherence} revisits the decoherence model and illustrates how time-dependence is explicitly incorporated. \subsection{Loss, Leakage, Measurement and Initialization: Variations of the above} \label{sec:lossleakage} Other sources of error, such as qubit initialization errors, measurement errors, qubit loss and qubit leakage, are modeled in a very similar manner. Measurement errors and qubit loss are modeled in the same way as environmental decoherence. Measurement errors are described by utilizing the following measurement operator acting on the qubit space, \begin{equation} A = (1-p_M)\ket{0}\bra{0} + p_M\ket{1}\bra{1} \end{equation} where $p_M \in [0,1]$ is the probability of measurement error. If we have a pure state $\rho = \ket{0}\bra{0}$, the probability of measuring a $\ket{0}$ is, \begin{equation} P(\ket{0}) = \text{Tr}(A\rho) = (1-p_M) \end{equation} indicating that the correct result is observed with probability $1-p_M$. Qubit loss is modeled in a similar manner. When a qubit is lost, it is essentially coupled to the environment, which acts to measure the system, with the classical information lost.
This coupling follows the decoherence analysis shown earlier, where a 50/50 mixed state of the qubit results. Therefore the operator acting on the qubit space is given by $A = \frac{1}{2}(\ket{0}\bra{0} + \ket{1}\bra{1})$, which is identical to simply tracing over the lost qubit and equivalent to a measurement error of probability $p_M=0.5$. With this type of error channel, not only is the physical object lost (and hence cannot be directly measured), but an initially pure qubit is converted to a completely mixed state. While this model of qubit loss is equivalent to environmental coupling, correcting this type of error requires additional machinery on top of standard QEC protocols. The difficulty with qubit loss is the initial detection of whether the qubit is actually present. While standard correction protocols can protect against the loss of information on a qubit, they assume that the physical object still exists in the computer. Hence in loss correction protocols, an initial non-demolition detection method must be employed (which determines if the qubit is actually present without performing a projective measurement on the computational state) before standard correction can be utilized to correct the error. Initialization of the qubit can be modeled either using a coherent systematic error model or using the decoherence model. The specific methodology depends largely on the physical mechanisms used to initialize the system. If a decoherence model is employed, initialization is modeled exactly the same way as imperfect measurement. If we have a probability $p_I$ of initialization error, the initial state of the system is given by the mixture, \begin{equation} \rho_i = (1-p_I)\ket{0}\bra{0} + p_I\ket{1}\bra{1}. \end{equation} In contrast, we could consider an initialization model which is achieved via a coherent unitary operation where the target is the desired initial state. In this case, the initial state is pure, but contains a non-zero amplitude of the undesired state, for example, \begin{equation} \ket{\psi}_i = \alpha\ket{0} + \beta\ket{1} \end{equation} where $|\alpha|^2+|\beta|^2 = 1$ and $|\beta|^2 \ll 1$. The interpretation of these two types of initialization models is identical to that of the coherent and incoherent error models presented above. Again, the effect of these types of errors relates to the probabilities of measuring the system in an erred state. One final type of error that we can briefly mention is the problem of qubit leakage. Qubit leakage manifests itself due to the fact that most systems utilized for qubit applications are not simple two-level quantum systems. For example, Fig.~\ref{fig:calcium} (from Ref.~\cite{S97+}) illustrates the energy level structure for a $^{43}$Ca$^+$ ion utilized for ion trap quantum computing at Oxford. \begin{figure}[ht] \begin{center} \includegraphics[width=0.55\textwidth]{Calcium.pdf} \caption{(from Ref.~\cite{S97+}) Energy level structure for the $^{43}$Ca$^+$ ion investigated by the Oxford ion-trapping group. The structure of this ion is clearly not a 2-level quantum system. Hence leakage into non-qubit states is an important factor to consider.} \label{fig:calcium} \end{center} \end{figure} The qubit in this system is defined by only two electronic states; however, the system itself contains many more levels (including some which are used for qubit readout and initialization through optical pumping and photo-luminescence). As with systematic errors, leakage can occur when improper control is applied to such a system.
In the case of ion-traps, qubit transitions are performed by focusing finely tuned lasers resonant on the relevant transitions. If the laser frequency fluctuates, or additional levels are not sufficiently detuned from the qubit resonance, the following transformation could occur, \begin{equation} U\ket{0} = \alpha\ket{0} + \beta\ket{1} + \gamma\ket{2}, \end{equation} where the state $\ket{2}$ is a third level which is now populated due to improper control. The actual effect of this type of error can manifest in several different ways. The primary problem with leakage is that it violates the basic assumption of a qubit structure for the computer. As quantum circuits and algorithms are fundamentally designed assuming the computational array is a collection of 2-level systems, operators of the above form (which in this case operate over a 3-level space) will naturally induce unwanted dynamics. Another important implication of applying non-qubit operations is how these levels interact with the environment and hence how decoherence affects the system. For example, in the above case, the unwanted level, $\ket{2}$, may be extremely short-lived, leading to the emission of a photon and the system relaxing back to the ground state. For these reasons, leakage is one of the most problematic error channels to correct using QEC. In general, leakage-induced errors need to be corrected via the non-demolition detection of a leakage event (i.e. determining if the quantum system is confined to a qubit without performing a measurement discriminating the $\ket{0}$ and $\ket{1}$ states~\cite{P98,GBP97,VWW05}) or through the use of complicated pulse control which acts to re-focus an improperly confined quantum gate back to the qubit subspace~\cite{WBL02,BLWZ05}. In the context of mass manufacturing of qubit systems, leakage would be quantified immediately after the fabrication of a device, using intrinsic characterization protocols such as those discussed in Ref.~\cite{DSOCH07}. If a particular system is found to be improperly confined to the qubit subspace, it would simply be discarded. Employing characterization at this stage would then eliminate the need to implement pulse control of leakage, shortening gate times and ultimately reducing error rates in the computer. In this section we introduced the basic ideas of quantum errors and how they affect the success of a quantum algorithm. Section~\ref{sec:sec:decoherence} will return in a more focused manner to error models and how they relate to error correction in a quantum computer. \section{QEC, a good starting point: The 3-qubit code} The 3-qubit bit-flip code is traditionally used as a basic introduction to the concept of Quantum Error Correction. However, it should be emphasized that the 3-qubit code {\em does not} represent a full quantum code, because it cannot simultaneously correct for both bit and phase flips (see section~\ref{sec:sec:decoherence}), which is a sufficient condition for correcting an arbitrary error mapping on a single qubit. This code is a standard repetition code, which was extended by Shor~\cite{S95} to the full 9-qubit quantum code, the first demonstration that QEC was possible. The 3-qubit code encodes a single logical qubit into three physical qubits with the property that it can correct for a single $\sigma_x \equiv X$ bit-flip error.
The two logical basis states $\ket{0}_L$ and $\ket{1}_L$ are defined as, \begin{equation} \ket{0}_L = \ket{000}, \quad \quad \ket{1}_L = \ket{111}, \end{equation} such that an arbitrary single qubit state $\ket{\psi} = \alpha\ket{0} + \beta\ket{1}$ is mapped to, \begin{equation} \begin{aligned} \alpha\ket{0} + \beta\ket{1} &\rightarrow \alpha\ket{0}_L + \beta\ket{1}_L \\ &= \alpha\ket{000} + \beta\ket{111} = \ket{\psi}_L. \end{aligned} \end{equation} Fig.~\ref{fig:3qubit} illustrates the quantum circuit required to encode a single logical qubit via the initialization of two ancilla qubits and two CNOT gates. \begin{figure}[ht] \begin{center} \includegraphics[width=0.3\textwidth]{3qubit.pdf} \caption{Quantum circuit to prepare the $\ket{0}_L$ state for the 3-qubit code, where an arbitrary single qubit state, $\ket{\psi}$, is coupled to two freshly initialized ancilla qubits via CNOT gates to prepare $\ket{\psi}_L$.} \label{fig:3qubit} \end{center} \end{figure} The reason why this code is able to correct for a single bit-flip error is the distance between the two codeword states. Notice that three individual bit flips are required to take $\ket{0}_L \leftrightarrow \ket{1}_L$; hence, if we assume $\ket{\psi} = \ket{0}_L$, a single bit flip on any qubit leaves the final state closer to $\ket{0}_L$ than $\ket{1}_L$. The distance between two codeword states, $d$, defines the number of errors that can be corrected, $t$, as, $t = \lfloor(d-1)/2\rfloor$. In this case, $d=3$, hence $t=1$. How are we able to correct errors using this code without directly measuring or obtaining information about the logical state? Two additional ancilla qubits are introduced, which are used to extract {\em syndrome} information (information regarding possible errors) from the data block without discriminating the exact state of any qubit, as Fig.~\ref{fig:3qubit2} illustrates. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.65\textwidth]{3qubit2.pdf} \caption{Circuit required to encode and correct for a single $X$-error. We assume that after encoding a single bit-flip occurs on one of the three qubits (or no error occurs). Two initialized ancilla are then coupled to the data block, which only checks the parity between qubits. These ancilla are then measured, with the measurement result indicating where (or if) an error has occurred, without directly measuring any of the data qubits. Using this {\em syndrome} information, the error can be corrected with a classically controlled $X$ gate. } \label{fig:3qubit2} \end{center} \end{figure*} For the sake of simplicity, we assume that all gate operations are perfect and that the only place where the qubits are susceptible to error is the region between encoding and correction. We will return to this issue in section~\ref{sec:Fault-tolerance} when we discuss fault-tolerance. We also assume that, at most, a single, complete bit-flip error occurs on one of the three data qubits. Correction proceeds by introducing two ancilla qubits and performing a sequence of CNOT gates, which checks the parity of the three qubits. Table~\ref{tab:errors} summarizes the state of the whole system, for each possible error, just prior to measurement. \begin{table}[ht!]
\begin{center} \vspace*{4pt} \begin{tabular}{c|c} Error Location & Final State, $\ket{\text{data}}\ket{\text{ancilla}}$ \\ \hline No Error & $\alpha\ket{000}\ket{00} + \beta\ket{111}\ket{00}$ \\ Qubit 1 & $\alpha\ket{100}\ket{11} + \beta\ket{011}\ket{11}$ \\ Qubit 2 & $\alpha\ket{010}\ket{10} + \beta\ket{101}\ket{10}$ \\ Qubit 3 & $\alpha\ket{001}\ket{01} + \beta\ket{110}\ket{01}$ \\ \end{tabular} \caption{Final state of the five qubit system prior to the syndrome measurement, for no error or a single $X$ error on one of the qubits. The last two qubits represent the state of the ancilla. Note that each possible error results in a unique measurement result (syndrome) of the ancilla qubits. This allows for an $X$ correction gate to be applied to the data block, classically controlled from the syndrome result. At no point during correction do we learn anything about $\alpha$ or $\beta$.} \label{tab:errors} \end{center} \end{table} For each possible situation, either no error or a single bit-flip error, the ancilla qubits are flipped to a unique state based on the parity of the data block. These qubits are then measured to obtain the classical {\em syndrome} result. The result of the measurement will then dictate if an $X$ correction gate needs to be applied to a specific qubit, i.e. \begin{widetext} \begin{equation} \begin{aligned} &\text{Ancilla Measurement:} \quad \ket{00}, \quad \text{Collapsed State:} \quad \alpha\ket{000} + \beta\ket{111} \quad \therefore \text{Clean State} \\ &\text{Ancilla Measurement:} \quad \ket{01}, \quad \text{Collapsed State:} \quad \alpha\ket{001} + \beta\ket{110} \quad \therefore \text{Bit Flip on Qubit 3} \\ &\text{Ancilla Measurement:} \quad \ket{10}, \quad \text{Collapsed State:} \quad \alpha\ket{010} + \beta\ket{101} \quad \therefore \text{Bit Flip on Qubit 2} \\ &\text{Ancilla Measurement:} \quad \ket{11}, \quad \text{Collapsed State:} \quad \alpha\ket{100} + \beta\ket{011} \quad \therefore \text{Bit Flip on Qubit 1} \\ \end{aligned} \end{equation} \end{widetext} Provided that only a single error has occurred, the data block is restored. Notice that at no point during correction do we gain any information regarding the coefficients $\alpha$ and $\beta$, hence the computational wave-function will remain intact during correction. This code will only work if a maximum of one error occurs. If two $X$ errors occur, then by tracking the circuit through you will see that the syndrome result becomes ambiguous. For example, if an $X$ error occurs on both qubits one and two, then the syndrome result will be $\ket{01}$. This will cause us to mis-correct by applying an $X$ gate to qubit 3. Therefore, two errors will induce a logical bit flip and cause the code to fail, as expected. To be absolutely clear on how QEC acts to restore the system and protect against errors, let us now consider a different and more physically realistic error mapping. We will assume that the errors acting on the qubits are coherent rotations of the form $U = \exp (i\epsilon \sigma_x)$ on each qubit, with $\epsilon \ll 1$. We choose coherent rotations so that we can remain in the state vector representation. This is not a necessary requirement; however, more general incoherent mappings would require us to move to density matrices.
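(The discrete syndrome logic of Table~\ref{tab:errors} can also be checked directly before we treat the coherent case analytically. The following is a minimal numpy sketch, illustrative only; qubits are indexed from zero, and the two parity checks play the role of the ancilla measurements of Fig.~\ref{fig:3qubit2}.)
\begin{verbatim}
# Minimal sketch: syndromes of the 3-qubit code for single X errors.
import numpy as np

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def op(P, k):                     # single-qubit P on qubit k of three
    ops = [I2, I2, I2]; ops[k] = P
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

a, b = 0.6, 0.8                   # arbitrary logical amplitudes
psi = np.zeros(8); psi[0], psi[7] = a, b   # a|000> + b|111>

def syndrome(state):
    # Parity checks Z1Z2 and Z1Z3: eigenvalue -1 flags a flipped pair
    s1 = state @ (op(Z, 0) @ op(Z, 1)) @ state
    s2 = state @ (op(Z, 0) @ op(Z, 2)) @ state
    return (int(s1 < 0), int(s2 < 0))

for k in (None, 0, 1, 2):         # no error, then X on each qubit
    err = psi if k is None else op(X, k) @ psi
    print(k, syndrome(err))       # None (0,0); 0 (1,1); 1 (1,0); 2 (0,1)
\end{verbatim}
Note that the syndromes only ever depend on parities, never on $a$ or $b$ individually, matching Table~\ref{tab:errors}.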
We assume that each qubit experiences the same error, hence the error operator acting on the state is, \begin{equation} \begin{aligned} \ket{\psi}_E = E&\ket{\psi}_L,\\ E = U^{\otimes 3} &= (\cos(\epsilon)I + i\sin(\epsilon)\sigma_x)^{\otimes 3} \\ &= c_0\,\sigma_I\sigma_I\sigma_I+ c_1 (\sigma_x\sigma_I\sigma_I+\sigma_I\sigma_x\sigma_I+\sigma_I\sigma_I\sigma_x) \\ &+ c_2 (\sigma_x\sigma_x\sigma_I+\sigma_I\sigma_x\sigma_x+\sigma_x\sigma_I\sigma_x) \\ &+ c_3 \sigma_x\sigma_x\sigma_x, \end{aligned} \end{equation} where, \begin{equation} \begin{aligned} &c_0 = \cos^3(\epsilon), \\ &c_1 = i\cos^2(\epsilon)\sin(\epsilon), \\ &c_2 = -\cos(\epsilon)\sin^2(\epsilon),\\ &c_3 = -i\sin^3(\epsilon). \end{aligned} \end{equation} Now let's examine the transformation that occurs when we run the error correction circuit in Fig.~\ref{fig:3qubit2}, which we denote via the unitary transformation, $U_{QEC}$, over {\em both} the data and ancilla qubits, \begin{widetext} \begin{equation} \begin{aligned} U_{QEC} E\ket{\psi}_L\ket{00} &= c_0\ket{\psi}_L\ket{00} +c_1 (\sigma_x\sigma_I\sigma_I\ket{\psi}_L\ket{11} + \sigma_I\sigma_x\sigma_I\ket{\psi}_L\ket{10} + \sigma_I\sigma_I\sigma_x\ket{\psi}_L\ket{01}) \\ &+c_2 (\sigma_x\sigma_x\sigma_I \ket{\psi}_L\ket{01} + \sigma_I\sigma_x\sigma_x\ket{\psi}_L\ket{11} +\sigma_x\sigma_I\sigma_x\ket{\psi}_L\ket{10}) +c_3\sigma_x\sigma_x\sigma_x\ket{\psi}_L\ket{00}. \end{aligned} \end{equation} \end{widetext} Once again, the ancilla block is measured and the appropriate correction operator is applied, yielding the results (up to renormalization), \begin{widetext} \begin{equation} \begin{aligned} &\text{Ancilla Measurement:} \quad \ket{00}, \quad \text{Collapsed State (with correction) :} \quad c_0\ket{\psi}_L + c_3\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\ &\text{Ancilla Measurement:} \quad \ket{01}, \quad \text{Collapsed State (with correction) :} \quad c_1\ket{\psi}_L + c_2\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\ &\text{Ancilla Measurement:} \quad \ket{10}, \quad \text{Collapsed State (with correction) :} \quad c_1\ket{\psi}_L + c_2\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\ &\text{Ancilla Measurement:} \quad \ket{11}, \quad \text{Collapsed State (with correction) :} \quad c_1\ket{\psi}_L + c_2\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\ \end{aligned} \end{equation} \end{widetext} In each case, after correction (based on the syndrome result), we are left with approximately the same state: a superposition of a ``clean" state with the logically flipped state, $\sigma_x\sigma_x\sigma_x\ket{\psi}_L$. The important thing to notice is the amplitudes of the terms in the superposition.
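(These amplitudes, and the fidelity statements derived next, can be verified numerically. The following self-contained numpy sketch is illustrative only: it applies $U^{\otimes 3}$ to $\ket{\psi}_L$, projects onto the trivial syndrome, and compares the collapsed state with the $c_0$, $c_3$ prediction above.)
\begin{verbatim}
# Minimal sketch: coherent error on the 3-qubit code, trivial syndrome.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def op(P, k):                         # single-qubit P on qubit k of three
    ops = [I2, I2, I2]; ops[k] = P
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

eps = 0.05
U = np.cos(eps) * I2 + 1j * np.sin(eps) * X      # exp(i*eps*sigma_x)
E = np.kron(np.kron(U, U), U)                    # E = U x U x U

a, b = 0.6, 0.8                                  # |psi>_L = a|000> + b|111>
psi = np.zeros(8, dtype=complex); psi[0], psi[7] = a, b

# Projector onto the trivial syndrome (+1 eigenspace of Z1Z2 and Z1Z3)
P1 = 0.5 * (np.eye(8) + op(Z, 0) @ op(Z, 1))
P2 = 0.5 * (np.eye(8) + op(Z, 0) @ op(Z, 2))
out = P1 @ P2 @ E @ psi                          # unnormalized collapse

c0, c3 = np.cos(eps) ** 3, -1j * np.sin(eps) ** 3
XXX = op(X, 0) @ op(X, 1) @ op(X, 2)
print(np.allclose(out, c0 * psi + c3 * XXX @ psi))   # True

F = abs(c0) ** 2 / (abs(c0) ** 2 + abs(c3) ** 2)     # post-correction fidelity
print(F, 1 - eps ** 6)                               # agree to O(eps^6)
\end{verbatim}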
If we consider the unitary $U$ acting on a single, unencoded qubit, the rotation takes, \begin{equation} U\ket{\psi} = \cos(\epsilon)\ket{\psi} + i\sin(\epsilon)\sigma_x\ket{\psi}. \end{equation} Consequently, the fidelity of the single qubit state is, \begin{equation} F_{\text{unencoded}} = |\bra{\psi}U\ket{\psi}|^2 = \cos^2{\epsilon} \approx 1-\epsilon^2. \end{equation} In contrast, the fidelity of the encoded qubit state after a cycle of error correction is, \begin{equation} \begin{aligned} F_{\text{no detection}} = \frac{|c_0|^2}{|c_0|^2+|c_3|^2} &= \frac{\cos^6(\epsilon)}{\cos^6(\epsilon)+\sin^6(\epsilon)} \\ &\approx 1-\epsilon^6, \end{aligned} \end{equation} with probability $1-3\epsilon^2+O(\epsilon^4)$, and \begin{equation} \begin{aligned} F_{\text{error detected}} &= \frac{|c_1|^2}{|c_1|^2+|c_2|^2} \\ &= \frac{\cos^4(\epsilon)\sin^2(\epsilon)}{\cos^4(\epsilon)\sin^2(\epsilon)+\sin^4(\epsilon)\cos^2(\epsilon)} \\ &\approx 1-\epsilon^2, \end{aligned} \end{equation} with probability $3\epsilon^2 + O(\epsilon^4)$. This is the crux of how QEC suppresses errors at the logical level. During a round of error correction, if no error is detected (which, if the error rate is small, occurs with high probability), the error on the resulting state is suppressed from $O(\epsilon^2)$ to $O(\epsilon^6)$, while if a single error is detected, the fidelity of the resulting state remains the same. This is expected, as the 3-qubit code is a single error correcting code. If one error has already been corrected, then the failure rate of the logical system is conditional on experiencing one further error (which will be proportional to $\epsilon^2$). As $\epsilon \ll 1$, the majority of correction cycles will detect no error and the fidelity of the resulting encoded state is higher than when unencoded. Note that as $\epsilon^2 \rightarrow 1/3$ the benefit of the code disappears, as every correction cycle detects an error and the resulting fidelity is no better than an unencoded qubit. It should be stressed that {\bf no error correction scheme will, in general, restore a corrupted state to a perfectly clean code-state}. The resulting state will contain a superposition of a clean state and corrupted states; the point is that the fidelity of the corrupted states, at the logical level, is greater than the corresponding fidelity for unencoded qubits. Consequently the probability of measuring the correct result at the end of a specific algorithm increases when the system is encoded. This example shows the basic principles of error correction. As mentioned earlier, the 3-qubit code does not represent a full quantum code, and the error model that we considered neglected imperfect gates and the possibility of errors occurring during state preparation and/or correction. In the coming sections we will briefly take a look at several full quantum codes, used both for quantum memory and computation, and we will introduce the concept of full QEC using stabilizer codes. This will then lead to a description of full fault-tolerant quantum error correction. \section{The Nine Qubit Code: The First Full Quantum Code} The nine qubit error correcting code was first developed by Shor~\cite{S95} in 1995 and is based largely on the 3-qubit repetition code.
The Shor code is a degenerate single error correcting code, able to correct a logical qubit from one discrete bit flip, one discrete phase flip, or one of each on any of the nine physical qubits, and is therefore sufficient to correct for any continuous linear combination of errors on a single qubit. The two basis states for the code are, \begin{equation} \begin{aligned} \ket{0}_L = \frac{1}{\sqrt{8}}(\ket{000}+\ket{111})(\ket{000}+\ket{111})(\ket{000}+\ket{111}) \\ \ket{1}_L = \frac{1}{\sqrt{8}}(\ket{000}-\ket{111})(\ket{000}-\ket{111})(\ket{000}-\ket{111}) \\ \end{aligned} \end{equation} and the circuit to perform the encoding is shown in Fig.~\ref{fig:9encode}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\textwidth]{9encode.pdf} \caption{Circuit required to encode a single qubit with Shor's nine qubit code.} \label{fig:9encode} \end{center} \end{figure} Correction for $X$ errors, for each block of three qubits encoded to $(\ket{000}\pm \ket{111})/\sqrt{2}$, is identical to the three qubit code shown earlier. By performing the correction circuit shown in Fig.~\ref{fig:3qubit2} for each block of three qubits, single $\sigma_x \equiv X$ errors can be detected and corrected. Phase errors ($\sigma_z \equiv Z$) are corrected by examining the sign differences between the three blocks. The circuit shown in Fig.~\ref{fig:9qubit2} achieves this. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{9qubit2.pdf} \caption{Circuit required to perform phase correction for the 9-qubit code. } \label{fig:9qubit2} \end{center} \end{figure*} The first set of six CNOT gates compares the sign of blocks one and two of the code state, and the second set of CNOT gates compares the sign of blocks two and three. Note that a phase flip on {\em any} one qubit in a block of three has the same effect; this is why the 9-qubit code is referred to as a degenerate code. In other error correcting codes, such as the 5- or 7-qubit codes~\cite{S96,LMPZ96}, there is a one-to-one mapping between correctable errors and unique states; in degenerate codes such as this, the mapping is not unique. Hence, provided we know in which block the error occurs, it does not matter which qubit we apply the correction operator to. As the 9-qubit code can correct for a single $X$ error in any one block of three and a single phase error on any of the nine qubits, this code is a full quantum error correcting code (we will detail in section~\ref{sec:sec:decoherence} why phase and bit correction is sufficient for the correction of arbitrary qubit errors). Even if a bit {\em and} phase error occurs on the same qubit, the $X$ correction circuit will detect and correct for bit flips, while the $Z$ correction circuit will detect and correct for phase flips. As mentioned, the $X$ error correction does have the ability to correct for up to three individual bit flips (provided each bit flip occurs in a different block of three). However, in general the 9-qubit code is only a single error correcting code, as it cannot handle multiple errors if they occur in certain locations. The 9-qubit code is in fact a member of a broader class of error correcting codes known as Bacon-Shor or subsystem codes~\cite{B06}. Subsystem codes have the property that certain subgroups of error operators do not corrupt the logical space. This can be seen by considering phase errors that occur in pairs within any block of three. For example, a phase flip on qubits one, two, four and five will leave both logical states unchanged.
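(The invariance just described is quick to confirm numerically. The following minimal numpy sketch, illustrative only, builds the two Shor basis states and applies $Z$ to qubits one, two, four and five.)
\begin{verbatim}
# Minimal sketch: paired phase flips leave the Shor code states intact.
import numpy as np

zero = np.array([1., 0.]); one = np.array([0., 1.])

def kron(*vs):
    out = np.array([1.])
    for v in vs:
        out = np.kron(out, v)
    return out

# (|000> +/- |111>)/sqrt(2) blocks of the Shor code
plus  = (kron(zero, zero, zero) + kron(one, one, one)) / np.sqrt(2)
minus = (kron(zero, zero, zero) - kron(one, one, one)) / np.sqrt(2)
zero_L = kron(plus, plus, plus)      # |0>_L over 9 qubits
one_L  = kron(minus, minus, minus)   # |1>_L over 9 qubits

Z = np.diag([1., -1.])
def z_on(state, qubits):             # apply Z to the listed qubits (0-based)
    M = np.array([1.])
    for q in range(9):
        M = np.kron(M, Z if q in qubits else np.eye(2))
    return M @ state

# Z on qubits 1, 2, 4 and 5 (one-based), i.e. a pair in each of two blocks
for s in (zero_L, one_L):
    print(np.allclose(z_on(s, {0, 1, 3, 4}), s))   # True, True
\end{verbatim}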
Subsystem codes are very nice codes from an architectural point of view. Error correction circuits and gates are generally simpler than for non-subsystem codes, allowing for circuit structures more amenable to the physical restrictions of a computer architecture~\cite{AC07}. Additionally, as subsystem codes that can correct for a larger number of errors have a similar structure, we are able to perform dynamical switching between codes, in a fault-tolerant manner, which allows the error protection in the computer to be changed depending on the noise present at the physical level~\cite{SEDH07}. We will revisit subsystem codes later, in section~\ref{sec:subsystem}. \section{Quantum Error Detection} \label{sec:detection} So far we have focused on the ability to not only detect errors, but also to correct them. Another approach is to not enforce the correction requirement. Post-selected quantum computation, developed by Knill~\cite{K05}, demonstrated that large scale quantum computing could be achieved with much higher noise rates when error detection is employed instead of more costly correction protocols. The basic idea in post-selected schemes is to encode the computer with error detecting circuits; if errors are detected, the relevant subroutine of the quantum algorithm is reset and run again, instead of performing active correction. One of the downsides to these types of schemes is that although they lead to large tolerable error rates, the resource requirements are unrealistically high. The simplest error detecting circuit is the 4-qubit code~\cite{GBP97}. This encodes two logical qubits into four physical qubits, with the ability to detect a single error on either of the two logical qubits. The four basis states for the code are, \begin{equation} \begin{aligned} &\ket{00} = \frac{1}{\sqrt{2}}(\ket{0000}+\ket{1111}), \\ &\ket{01} = \frac{1}{\sqrt{2}}(\ket{1100}+\ket{0011}), \\ &\ket{10} = \frac{1}{\sqrt{2}}(\ket{1010}+\ket{0101}), \\ &\ket{11} = \frac{1}{\sqrt{2}}(\ket{0110}+\ket{1001}). \end{aligned} \end{equation} Fig.~\ref{fig:4qubit} illustrates the error detection circuit that can be utilized to detect a single bit and/or phase flip on one of these encoded qubits. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{4qubit.pdf} \caption{Circuit required to detect errors in the 4-qubit error detection code. If both ancilla measurements return $\ket{0}$, then the code state is error free. If either measurement returns $\ket{1}$, an error has occurred. Unlike the 9-qubit code, the detection of an error does not give sufficient information to correct the state.} \label{fig:4qubit} \end{center} \end{figure*} If a single bit and/or phase flip occurs on one of the four qubits, then at least one of the ancilla qubits will be measured in the $\ket{1}$ state. For example, let us consider the cases where a single bit flip occurs on each of the four qubits in turn. The state of the system, just prior to the measurement of the ancilla, is shown in Table~\ref{tab:errors2}. \begin{table}[ht!]
\begin{center} \vspace*{4pt} \begin{tabular}{c|c} Error Location & Final State, $\ket{\text{data}}\ket{\text{ancilla}}$ \\ \hline No Error & $\ket{\psi}_L\ket{00}$ \\ Qubit 1 & $X_1\ket{\psi}_L\ket{10}$ \\ Qubit 2 & $X_2\ket{\psi}_L\ket{10}$ \\ Qubit 3 & $X_3\ket{\psi}_L\ket{10}$ \\ Qubit 4 & $X_4\ket{\psi}_L\ket{10}$ \\ \end{tabular} \caption{Qubit and ancilla state, just prior to measurement, for the 4-qubit error detection code when a single bit-flip has occurred on at most one of the four qubits.} \label{tab:errors2} \end{center} \end{table} Regardless of the location of the bit flip, the ancilla system is measured in the state $\ket{10}$. Similarly, if one considers a single phase error on any of the four qubits, the ancilla measurement will return $\ket{01}$. In both cases, no information is obtained regarding {\em where} the error has occurred, hence it is not possible to correct the state. Instead, the subroutine can be reset and re-run. \section{Stabilizer Formalism} \label{sec:sec:QEC} So far we have presented error correcting codes from the perspective of their state representations and their preparation and correction circuits. This is a rather inefficient method for describing the codes, as the state representations and circuits clearly differ from code to code. The majority of error correcting codes that are used within the literature are members of a class known as stabilizer codes. Stabilizer codes are very useful to work with. The general formalism applies broadly, and there exist general rules to construct preparation circuits, correction circuits and fault-tolerant logical gate operations once the stabilizer structure of the code is specified. The stabilizer formalism, first introduced by Daniel Gottesman~\cite{G97+}, essentially uses the Heisenberg representation of quantum mechanics, describing quantum states in terms of operators rather than the basis states themselves. An arbitrary state $\ket{\psi}$ is defined to be stabilized by some operator, $K$, if it is a $+1$ eigenstate of $K$, i.e. \begin{equation} K\ket{\psi} = \ket{\psi}. \end{equation} For example, the single qubit state $\ket{0}$ is stabilized by the operator $K = \sigma_z$, i.e. \begin{equation} \sigma_z\ket{0} = \ket{0} \end{equation} Defining multi-qubit states with respect to this formalism relies on the group structure of multi-qubit operators. Within the group of all possible single qubit operators, there exists a subgroup, denoted the Pauli group, $\mathcal{P}$, which contains the following elements, \begin{equation} \mathcal{P} = \{\pm \sigma_I, \pm i \sigma_I, \pm \sigma_x, \pm i \sigma_x,\pm \sigma_y, \pm i \sigma_y,\pm \sigma_z, \pm i \sigma_z\}. \end{equation} It is easy to check that these matrices form a group under multiplication through the commutation and anti-commutation rules for the Pauli set, $\{\sigma_i \} = \{ \sigma_x,\sigma_y,\sigma_z\}$, \begin{equation} [\sigma_i,\sigma_j] = 2i\epsilon_{ijk}\sigma_k, \quad \quad \{\sigma_i,\sigma_j\} = 2\delta_{ij}\sigma_I, \end{equation} where, \begin{equation} \epsilon_{ijk} = \Bigg \{ \begin{array}{l} +1\text{ for } (i,j,k) \in \{(1,2,3), (2,3,1), (3,1,2)\}\\ -1 \text{ for } (i,j,k) \in \{(1,3,2), (3,2,1), (2,1,3)\}\\ 0 \text{ for } i=j, j=k, \text{ or } k=i \end{array} \end{equation} and \begin{equation} \delta_{ij} = \Bigg \{ \begin{array}{cr} 1\text{ for } i = j\\ 0 \text{ for } i \neq j. \end{array} \end{equation} The Pauli group extends over $N$ qubits by simply taking the $N$-fold tensor product of $\mathcal{P}$, i.e.
\begin{equation} \begin{aligned} \mathcal{P}_N &= \mathcal{P}^{\otimes N} \\ &= \{\pm \sigma_I, \pm i \sigma_I, \pm \sigma_x, \pm i \sigma_x,\pm \sigma_y, \pm i \sigma_y,\pm \sigma_z, \pm i \sigma_z\}^{\otimes N}. \end{aligned} \end{equation} An $N$-qubit stabilizer state, $\ket{\psi}_N$, is then defined via an Abelian subgroup, $\mathcal{G}$, of the $N$-qubit Pauli group with $N$ independent generators, in which $\ket{\psi}_N$ is a $+1$ eigenstate of each element, \begin{equation} \begin{aligned} \mathcal{G} &= \\ &\{\; G_i \;|\; G_i\ket{\psi} = \ket{\psi}, \; [G_i,G_j] = 0 \; \forall \; (i,j) \} \subset \mathcal{P}_N. \label{eq:stabdef} \end{aligned} \end{equation} Given this definition, the state $\ket{\psi}_N$ can be equivalently defined either through the state vector representation {\em or} by specifying the stabilizer set, $\mathcal{G}$. Many extremely useful multi-qubit states are stabilizer states, including two-qubit Bell states, Greenberger-Horne-Zeilinger (GHZ) states~\cite{GHZ89,GHSZ90}, Cluster states~\cite{BR01,RB01} and codeword states for QEC. As an example, consider a three qubit GHZ state, defined as, \begin{equation} \ket{\text{GHZ}}_3 = \frac{\ket{000} + \ket{111}}{\sqrt{2}}. \end{equation} This state can be expressed via any three linearly independent elements of the $\ket{\text{GHZ}}_3$ stabilizer group, for example, \begin{equation} \begin{aligned} G_1 &= \sigma_x\otimes \sigma_x \otimes \sigma_x \equiv XXX, \\ G_2 &= \sigma_z\otimes \sigma_z \otimes \sigma_I \equiv ZZI, \\ G_3 &= \sigma_I \otimes \sigma_z \otimes \sigma_z \equiv IZZ, \end{aligned} \end{equation} where the right-hand side of each equation is the short-hand representation of stabilizers. Note that these three operators form an Abelian group [Eq.~\ref{eq:stabdef}] as, \begin{equation} \begin{aligned} [G_i,G_j]\ket{\psi} &= G_iG_j\ket{\psi} - G_jG_i\ket{\psi} \\ &= \ket{\psi}-\ket{\psi} = 0, \quad \forall \quad [i,j,\ket{\psi}]. \end{aligned} \end{equation} Similarly, the four orthogonal Bell states, \begin{equation} \begin{aligned} \ket{\Phi^{\pm}} &= \frac{\ket{00} \pm \ket{11}}{\sqrt{2}}, \\ \ket{\Psi^{\pm}} &= \frac{\ket{01} \pm \ket{10}}{\sqrt{2}}, \end{aligned} \end{equation} are stabilized by the operators $G_1 = (-1)^aXX$ and $G_2 = (-1)^b ZZ$, where $[a,b] \in \{0,1\}$ and each of the four Bell states corresponds to one of the four unique pairs, $\{\Phi^+,\Psi^+,\Phi^-,\Psi^-\} = \{ [0,0],[0,1],[1,0],[1,1]\}$. \section{QEC with stabilizer codes}\label{sec:sec:QEC2} The use of the stabilizer formalism to describe quantum error correction codes is extremely useful, since it allows for easy synthesis of correction circuits and also clearly shows how logical operations can be performed directly on encoded data. As an introduction we will focus on arguably the most well known quantum code, the 7-qubit Steane code, first proposed in 1996~\cite{S96}. The 7-qubit code represents a full quantum code that encodes one logical qubit into seven physical qubits, with the ability to correct for a single $X$ and/or $Z$ error. The $\ket{0}_L$ and $\ket{1}_L$ basis states are defined as, \begin{widetext} \begin{equation} \begin{aligned} |0\rangle_L = \frac{1}{\sqrt{8}}(&|0000000\rangle + |1010101\rangle + |0110011\rangle + |1100110\rangle + |0001111\rangle + |1011010\rangle + |0111100\rangle + |1101001\rangle),\\ |1\rangle_L = \frac{1}{\sqrt{8}}(&|1111111\rangle + |0101010\rangle + |1001100\rangle + |0011001\rangle + |1110000\rangle + |0100101\rangle + |1000011\rangle + |0010110\rangle).
\label{eq:log} \end{aligned} \end{equation} \end{widetext} The stabilizer set for the 7-qubit code is fully specified by the six operators, \begin{equation} \begin{aligned} &K^1 = IIIXXXX, \quad \quad K^2 = XIXIXIX,\\ &K^3 = IXXIIXX, \quad \quad K^4 = IIIZZZZ \\ &K^5 = ZIZIZIZ, \quad \quad K^6 = IZZIIZZ. \end{aligned} \label{eq:stab7} \end{equation} As the 7-qubit codeword states are specified by only six stabilizers, the code space contains two basis states, which are the logical states. A final operator, $K^7 = ZZZZZZZ=Z^{\otimes 7}$, fixes the state to one of the codewords, with $K^7\ket{0}_L = \ket{0}_L$ and $K^7\ket{1}_L = -\ket{1}_L$. The 7-qubit code is defined as a $[[n,k,d]] = [[7,1,3]]$ quantum code, where $n=7$ physical qubits encode $k=1$ logical qubit with a distance between basis states $d=3$, correcting $t = (d-1)/2 = 1$ error. Notice that the stabilizer set separates into $X$ and $Z$ sectors, which defines the code as a Calderbank-Shor-Steane (CSS) code. CSS codes are extremely useful since they allow for straightforward logical gate operations to be applied directly to the encoded data [Section~\ref{sec:operations}] and are reasonably easy to derive from classical codes. Although the 7-qubit code is the most well known stabilizer code, there are other stabilizer codes which encode multiple logical qubits and correct for more errors~\cite{G97+}. The downside to these larger codes is that they require more physical qubits and more complicated error correction circuits. Tables~\ref{tab:9qubit} and~\ref{tab:5qubit} show the stabilizer structure of two other well known codes: the 9-qubit code~\cite{S95}, which we have examined, and the 5-qubit code~\cite{LMPZ96}, which represents the smallest possible quantum code that corrects for a single error. \begin{table}[ht] \begin{center} \vspace*{4pt} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} $K^1$ & $Z$&$Z$&$I$&$I$&$I$&$I$&$I$&$I$&$I$ \\ $K^2$ & $Z$&$I$&$Z$&$I$&$I$&$I$&$I$&$I$&$I$ \\ $K^3$ & $I$&$I$&$I$&$Z$&$Z$&$I$&$I$&$I$&$I$ \\ $K^4$ & $I$&$I$&$I$&$Z$&$I$&$Z$&$I$&$I$&$I$ \\ $K^5$ & $I$&$I$&$I$&$I$&$I$&$I$&$Z$&$Z$&$I$ \\ $K^6$ & $I$&$I$&$I$&$I$&$I$&$I$&$Z$&$I$&$Z$ \\ $K^7$ & $X$&$X$&$X$&$X$&$X$&$X$&$I$&$I$&$I$ \\ $K^8$ & $X$&$X$&$X$&$I$&$I$&$I$&$X$&$X$&$X$ \\ \end{tabular} \caption{The eight stabilizers for the 9-qubit Shor code, encoding one logical qubit into nine physical qubits to correct for a single $X$ and/or $Z$ error. } \label{tab:9qubit} \end{center} \end{table} \begin{table}[ht] \begin{center} \vspace*{4pt} \begin{tabular}{c|c|c|c|c|c} $K^1$ & $X$&$Z$&$Z$&$X$&$I$ \\ $K^2$ & $I$&$X$&$Z$&$Z$&$X$ \\ $K^3$ & $X$&$I$&$X$&$Z$&$Z$ \\ $K^4$ & $Z$&$X$&$I$&$X$&$Z$ \\ \end{tabular} \caption{The four stabilizers for the [[5,1,3]] quantum code, encoding one logical qubit into five physical qubits to correct for a single $X$ and/or $Z$ error. Unlike the 7- and 9-qubit codes, the [[5,1,3]] code is a non-CSS code, since the stabilizer set does not separate into $X$ and $Z$ sectors.} \label{tab:5qubit} \end{center} \end{table} \subsection{State Preparation} Using the stabilizer structure of QEC codes, the logical state preparation and error correcting procedures are straightforward. Recall that the codeword states are defined as $+1$ eigenstates of the stabilizer set. In order to prepare a logical state from some arbitrary input, we need to forcibly project qubits into eigenstates of these operators. Consider the circuit shown in Fig.~\ref{fig:opmeas}.
Consider the circuit shown in Fig.~\ref{fig:opmeas}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{opmeas.pdf} \caption{Quantum circuit required to project an arbitrary state, $\ket{\psi}_I$, into a $\pm 1$ eigenstate of the Hermitian operator, $U = U^{\dagger}$. The measurement result of the ancilla determines which eigenstate $\ket{\psi}_I$ is projected to.} \label{fig:opmeas} \end{center} \end{figure} For some arbitrary input state, $\ket{\psi}_I$, an ancilla initialized in the $\ket{0}$ state is rotated by a Hadamard gate and used as the control qubit for a Hermitian operation ($U^{\dagger} = U$) on $\ket{\psi}_I$. After the second Hadamard gate is performed, the state of the system is, \begin{equation} \ket{\psi}_F = \frac{1}{2} ( \ket{\psi}_I + U\ket{\psi}_I)\ket{0} + \frac{1}{2}(\ket{\psi}_I - U\ket{\psi}_I)\ket{1}. \end{equation} The ancilla qubit is then measured in the computational basis. If the result is $\ket{0}$, the input state is projected to (neglecting normalization), \begin{equation} \ket{\psi}_F = \ket{\psi}_I+U\ket{\psi}_I. \end{equation} Since $U$ is both Hermitian and unitary, $U^2 = I$ and therefore $U\ket{\psi}_F=\ket{\psi}_F$, hence $\ket{\psi}_F$ is a $+1$ eigenstate of $U$. If the ancilla is measured to be $\ket{1}$, then the input is projected to the state, \begin{equation} \ket{\psi}_F = \ket{\psi}_I-U\ket{\psi}_I, \end{equation} which is the $-1$ eigenstate of $U$. Therefore, provided $U$ is Hermitian, the general circuit of Fig.~\ref{fig:opmeas} will project an arbitrary input state to a $\pm 1$ eigenstate of $U$. This procedure is well known and is referred to as either a ``parity'' or ``operator'' measurement~\cite{NC00}. From this construction it should be clear how QEC state preparation proceeds. Taking the $[[7,1,3]]$ code as an example, seven qubits are first initialized in the state $\ket{0}^{\otimes 7}$, after which the circuit shown in Fig.~\ref{fig:opmeas} is applied three times with $U = (K^1,K^2,K^3)$, projecting the input state into a simultaneous $\pm 1$ eigenstate of each $X$ stabilizer describing the $[[7,1,3]]$ code. The result of each operator measurement is then used to classically control a single qubit $Z$ gate which is applied to one of the seven qubits at the end of the preparation. This single $Z$ gate converts any $-1$ projected eigenstates into $+1$ eigenstates. Notice that the final three stabilizers do not need to be measured since the input state, $\ket{0}^{\otimes 7}$, is already a $+1$ eigenstate of $(K^4,K^5,K^6)$. Fig.~\ref{fig:7qubitprep} illustrates the final circuit, where instead of one ancilla, three are utilized to speed up the state preparation by performing each operator measurement in parallel. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{7qubitprep.pdf} \caption{Quantum circuit to prepare the $[[7,1,3]]$ logical $\ket{0}$ state. The input state $\ket{0}^{\otimes 7}$ is projected into an eigenstate of each of the $X$ stabilizers shown in Eq.~\ref{eq:stab7}. After each ancilla measurement the classical results, $M_k \in \{0,1\}$, are used to apply a single qubit $Z$ gate to qubit $i = M_2+2M_3+4M_1$, which converts any $-1$ eigenstates of $(K^1,K^2,K^3)$ into $+1$ eigenstates.} \label{fig:7qubitprep} \end{center} \end{figure} As a quick aside, let us detail exactly how the relevant logical basis states can be derived from the stabilizer structure of the code by utilizing the preparation circuit illustrated above. Instead of the 7-qubit code, we will use the stabilizer set shown in Table~\ref{tab:5qubit} to calculate the $\ket{0}_L$ state for the 5-qubit code.
The four code stabilizers are given by, \begin{equation} \begin{aligned} &K^1 = XZZXI, \quad \quad K^2 = IXZZX,\\ &K^3 = XIXZZ, \quad \quad K^4 = ZXIXZ. \end{aligned} \end{equation} As with the 7-qubit code, projecting an arbitrary state into a $+1$ eigenstate of these operators defines the two logical basis states, $\ket{0}_L$ and $\ket{1}_L$, with the operator $\bar{Z} = ZZZZZ$ fixing the state to either $\ket{0}_L$ or $\ket{1}_L$. Therefore, calculating $\ket{0}_L$ from some initial un-encoded state requires us to project the initial state into a $+1$ eigenstate of these operators. If we take the initial, un-encoded state as $\ket{00000}$, then it is already a $+1$ eigenstate of $\bar{Z}$. Therefore, to find $\ket{0}_L$ we simply calculate, \begin{equation} \begin{aligned} \ket{0}_L&=\prod_{i=1}^4 (I^{\otimes 5} + K^i)\ket{00000}, \end{aligned} \end{equation} up to normalization. Expanding out this product, we find, \begin{equation} \begin{aligned} \ket{0}_L = \frac{1}{4}( &\ket{00000}+\ket{01010}+\ket{10100}-\ket{11110}+\\ &\ket{01001}-\ket{00011}-\ket{11101}-\ket{10111}+\\ &\ket{10010}-\ket{11000}-\ket{00110}-\ket{01100}-\\ &\ket{11011}-\ket{10001}-\ket{01111}+\ket{00101}). \end{aligned} \end{equation} Note that the above state vector does not match the one given in~\cite{LMPZ96}. However, the two vectors are equivalent up to local rotations on each qubit; recovering the original state simply requires modifying the stabilizer set to reflect these local rotations. \subsection{Error Correction} Error correction using stabilizer codes is a straightforward extension of state preparation. Consider an arbitrary single qubit state that has been encoded, \begin{equation} \alpha\ket{0} + \beta\ket{1} \rightarrow \alpha\ket{0}_L + \beta\ket{1}_L = \ket{\psi}_L. \end{equation} Now assume that an error occurs on one (or multiple) qubits, described via the operator $E$, where $E$ is a combination of $X$ and/or $Z$ errors over the $N$ physical qubits of the logical state. By definition of stabilizer codes, $K^i\ket{\psi}_L = \ket{\psi}_L$, $i \in \{1,\ldots,N-k\}$, for a code encoding $k$ logical qubits. Hence the corrupted state, $E\ket{\psi}_L$, satisfies, \begin{equation} K^iE\ket{\psi}_L = (-1)^m EK^i\ket{\psi}_L = (-1)^m E\ket{\psi}_L, \end{equation} where $m=0$ if $[E,K^i]=0$ and $m=1$ if $\{E,K^i\} = 0$. Therefore, if the error operator commutes with the stabilizer, the state remains a $+1$ eigenstate of $K^i$; if the error operator anti-commutes with the stabilizer, then the logical state flips to a $-1$ eigenstate of $K^i$. Hence the general procedure for error correction is identical to state preparation. Each of the code stabilizers is sequentially measured. Since an error-free state is already a $+1$ eigenstate of all the stabilizers, any error which anti-commutes with a stabilizer will flip the eigenstate and consequently the parity measurement will return a result of $\ket{1}$. Taking the $[[7,1,3]]$ code as an example, you can see that if the error operator is $E = X_i$, where $i = (1,...,7)$, representing a bit-flip on any {\em one} of the 7 physical qubits, then regardless of the location, $E$ will anti-commute with a unique combination of $(K^4,K^5,K^6)$. Hence the classical results of measuring these three operators will indicate if and where a single $X$ error has occurred (this syndrome table is sketched numerically below). Similarly, if $E=Z_i$, then the error operator will anti-commute with a unique combination of $(K^1,K^2,K^3)$.
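To make the uniqueness of the syndromes explicit, here is a minimal sketch (our own construction, using the fact that a bit flip $X_i$ anti-commutes with a $Z$-type generator exactly when that generator acts with $Z$ on qubit $i$) that tabulates the three-bit syndrome for every single-qubit $X$ error on the $[[7,1,3]]$ code:
\begin{verbatim}
# Syndrome table for single X errors against the Z-type
# generators (K4, K5, K6) of Eq. (stab7).
z_generators = {'K4': 'IIIZZZZ', 'K5': 'ZIZIZIZ',
                'K6': 'IZZIIZZ'}

for i in range(7):   # X error on qubit i (0-indexed)
    # X_i anti-commutes with a Z-type generator iff it
    # has a Z in slot i.
    syndrome = ''.join('1' if g[i] == 'Z' else '0'
                       for g in z_generators.values())
    print(f"X on qubit {i + 1}: syndrome {syndrome}")
\end{verbatim}
Running this prints seven distinct non-zero syndromes, one per error location; the same construction with the $X$-type generators $(K^1,K^2,K^3)$ gives the corresponding table for $Z$ errors.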
Consequently, the first three stabilizers for the $[[7,1,3]]$ code correspond to $Z$ sector correction while the second three stabilizers correspond to $X$ sector correction. Note that correction for Pauli $Y$ errors is also taken care of by correcting in the $X$ and $Z$ sectors, since a $Y$ error on a single qubit is equivalent to both an $X$ and a $Z$ error on the same qubit, i.e. $Y = iXZ$. Fig.~\ref{fig:correct} illustrates the circuit for full error correction with the $[[7,1,3]]$ code. As you can see, it is simply an extension of the preparation circuit [Fig.~\ref{fig:7qubitprep}] where all six stabilizers are measured across the data block. \begin{figure*}[ht] \begin{center} \includegraphics[width=\textwidth]{7qubitcorr.pdf} \caption{Quantum circuit to correct for a single $X$ and/or $Z$ error using the $[[7,1,3]]$ code. Each of the 6 stabilizers is measured, with the first three detecting and correcting for $Z$ errors, while the last three detect and correct for $X$ errors.} \label{fig:correct} \end{center} \end{figure*} Even though we have specifically used the $[[7,1,3]]$ code as an example, the procedure for error correction and state preparation is identical for all stabilizer codes, allowing for full correction of both bit and phase errors without obtaining any information regarding the state of the logical qubit. \section{Digitization of Quantum Errors}\label{sec:sec:decoherence} Up until now we have remained fairly abstract regarding the analysis of quantum errors. Specifically, we have examined QEC from the standpoint of a discrete set of Pauli errors occurring at certain locations within a larger quantum circuit. In this section we examine how this analysis of errors relates to more realistic processes such as environmental decoherence and systematic gate errors. Digitization of quantum noise is often assumed when people examine the stability of quantum circuit design or attempt to calculate thresholds for concatenated error correction. However, the equivalence of discrete Pauli errors to more general, continuous, noise only makes sense when we consider the stabilizer nature of the correction procedure. Recall from section~\ref{sec:sec:QEC} that correction is performed by re-projecting a potentially corrupt data block into $+1$ eigenstates of the stabilizer set. It is this process that acts to digitize quantum noise, since a general continuous mapping from a ``clean'' codeword state to a corrupt one will not satisfy the stabilizer conditions. We will first introduce how a coherent systematic error, caused by imperfect implementation of quantum gates, is digitized during correction, after which we will briefly discuss environmental decoherence from the standpoint of the Markovian decoherence model. \subsection{Systematic gate errors} We have already shown an example of how systematic gate errors are digitized into a discrete set of Pauli operators in Sec.~\ref{sec:error}. However, in that case we only considered a very restrictive type of error, namely the coherent operator $U=\exp(i\epsilon X)$. We can easily extend this analysis to cover all forms of systematic gate errors. Consider an $N$ qubit unitary operation, $U_N$, which is valid on encoded data. Assume that $U_N$ is applied inaccurately such that the resultant operation is actually $U_N'$.
Given a general encoded state $\ket{\psi}_L$, the final state can be expressed as, \begin{equation} U_N' \ket{\psi}_L = U_E U_N \ket{\psi}_L = \sum_j \alpha_j E_j \ket{\psi'}_L, \end{equation} where $\ket{\psi'}_L = U_N\ket{\psi}_L$ is the state after the perfectly applied $N$ qubit gate (i.e. the stabilizer set for $\ket{\psi'}_L$ remains invariant under the operation $U_N$ [see Sec.~\ref{sec:operations}]), and $U_E$ is a coherent error operator which is expanded in terms of the $N$ qubit Pauli group, $E_j \in \mathcal{P}_N$. Now append two ancilla blocks, $\ket{A_0}^X$ and $\ket{A_0}^Z$, which are initialized and used for $X$ and $Z$ sector correction, and run a full error correction cycle, which we represent by the unitary operator, $U_{\text{QEC}}$. It will be assumed that $\ket{\psi}_L$ is encoded with a {\em hypothetical} QEC code which can correct for $N$ errors (both $X$ and/or $Z$), hence there is a one-to-one mapping between the error operators, $E_j$, and the orthogonal basis states of the ancilla blocks, \begin{equation} \begin{aligned} &U_{\text{QEC}}U_N'\ket{\psi}_L\ket{A_0}^X\ket{A_0}^Z \\ &= U_{\text{QEC}} \sum_j \alpha_jE_j\ket{\psi'}_L \ket{A_0}^X\ket{A_0}^Z \\ &= \sum_j \alpha_j E_j \ket{\psi'}_L\ket{A_j}^X\ket{A_j}^Z. \end{aligned} \end{equation} The ancilla blocks are then measured, projecting the data blocks into the state $E_j\ket{\psi'}_L$ with probability $|\alpha_j|^2$, after which the correction $E_j^{\dagger}$ is applied based on the syndrome result. As the error operation $E_j$ is simply an element of $\mathcal{P}_N$, correcting for $X$ and $Z$ independently is sufficient to correct for all error operators (as $Y$ errors are corrected when a bit and phase error is detected and corrected on the same qubit). For very small systematic inaccuracies, the expansion coefficient, $\alpha_0$, which corresponds to $E_0 = I^{\otimes N}$, will be very close to 1, with all other coefficients small. Hence during correction there will be a very high probability that no error is detected. This is the digitization effect of quantum error correction. Since codeword states are specific eigenstates of the stabilizers, the re-projection of the state when each stabilizer is measured forces any continuous noise operator to collapse to the discrete Pauli set, with the magnitude of the error dictating the probability that the data block is projected into a discrete perturbation of a ``clean'' state (a minimal numerical instance of this collapse is sketched below). \subsection{Environmental decoherence} A complete analysis of environmental decoherence in relation to quantum information is a lengthy topic. Rather than give a detailed review, we will simply present a specific example to highlight how QEC relates to environmental effects. The Lindblad formalism~\cite{G91,NC00,DWM03} provides an elegant method for analyzing the effect of decoherence on open quantum systems. This model does have several assumptions, most notably that the environmental bath couples weakly to the system (Born approximation) and that the bath has no memory, i.e. each qubit experiences un-correlated noise (Markovian approximation). While these assumptions are utilized for a variety of systems~\cite{BHPC03,BM03,BKD04}, it is known that they may not hold in some cases~\cite{HMCS00,MCMS05,APNYT04,ALKH02}, particularly in superconducting systems, where decoherence can be caused by small numbers of fluctuating charges. In this case more specific decoherence models need to be considered.
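Before specializing the master equation, the digitization of a systematic error can be made concrete. The following is a minimal sketch, assuming the single-qubit over-rotation $U = \exp(i\epsilon X)$ considered earlier (the variable names and chosen angle are our own):
\begin{verbatim}
import numpy as np

eps = 0.05                # small over-rotation angle
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
U = np.cos(eps) * I2 + 1j * np.sin(eps) * X  # exp(i*eps*X)

# Pauli expansion: U = alpha_I * I + alpha_X * X.
alpha_I = np.trace(U @ I2) / 2
alpha_X = np.trace(U @ X) / 2

print(abs(alpha_I)**2)  # ~0.9975: back to the clean state
print(abs(alpha_X)**2)  # ~0.0025: a correctable X error
\end{verbatim}
Syndrome extraction projects onto the identity term with probability $\cos^2\epsilon \approx 1 - \epsilon^2$, so a small coherent over-rotation is overwhelmingly collapsed back to the clean codeword, and otherwise to a discrete, correctable $X$ error.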
In the Lindblad formalism, the evolution of the density matrix can be written as, \begin{equation} \partial_t \rho = -\frac{i}{\hbar} [H,\rho] + \sum_k \Gamma_k \mathcal{L}_k[\rho], \end{equation} where $H$ is the Hamiltonian, representing coherent, dynamical evolution of the system, and $\mathcal{L}_k[\rho]=([L_k,\rho L_k^{\dagger}]+[L_k\rho, L_k^{\dagger}])/2$ represents the incoherent evolution. The operators $L_k$ are known as the Lindblad quantum jump operators and are used to model specific decoherence channels, with each operator parameterized by some rate $\Gamma_k \geq 0$. This differential equation is known as the quantum Liouville equation or, more generally, the density matrix master equation. To link Markovian decoherence to QEC, consider a special set of decoherence channels that simplifies the calculation, representing a single qubit undergoing dephasing, spontaneous emission and spontaneous absorption. Dephasing of a single qubit is modelled by the Lindblad operator $L_1 = Z$ while spontaneous emission/absorption are modelled by the operators $L_2 = \ket{0}\bra{1}$ and $L_3 = \ket{1}\bra{0}$ respectively. For the sake of simplicity we assume that absorption/emission occur at the same rate, $\Gamma$. Consequently, the density matrix evolution is given by, \begin{equation} \partial_t \rho = -\frac{i}{\hbar}[H,\rho] + \Gamma_Z (Z\rho Z - \rho) + \frac{\Gamma}{2}(X\rho X + Y\rho Y -2\rho). \label{eq:diff} \end{equation} If it is assumed that the qubit is not undergoing any coherent evolution ($H = 0$), i.e. a memory stage within a quantum algorithm, then Eq.~\ref{eq:diff} can be solved by re-expressing the density matrix in the Bloch formalism. Setting $\rho(t) = I/2 + x(t)X + y(t)Y + z(t)Z$, Eq.~\ref{eq:diff} with $H=0$ reduces to $\partial_t S(t) = AS(t)$, with $S(t) = (x(t),y(t),z(t))^T$ and \begin{equation} A = \begin{pmatrix} -(\Gamma + 2\Gamma_Z) & 0 &0 \\ 0 &-(\Gamma + 2\Gamma_Z) &0\\ 0 &0 &-2\Gamma \end{pmatrix}. \end{equation} This differential equation is easy to solve, leading to, \begin{equation} \begin{aligned} \rho(t) &= [1-p(t)]\rho(0) + p_x(t) X\rho(0) X \\ &+ p_y(t) Y \rho(0) Y + p_z(t) Z \rho(0) Z, \end{aligned} \end{equation} where, \begin{equation} \begin{aligned} p_x(t) &= p_y(t) = \frac{1}{4}(1-e^{-2\Gamma t}), \\ p_z(t) &= \frac{1}{4}(1+e^{-2\Gamma t}-2e^{-(\Gamma +2\Gamma_Z)t}), \\ p(t) &= p_x(t) + p_y(t) + p_z(t). \end{aligned} \end{equation} If this single qubit is part of a QEC encoded data block, then each term represents a single error on the qubit experiencing decoherence. Two blocks of initialized ancilla qubits are added to the system and the error correction protocol run. Once the ancilla qubits are measured, the state will collapse to no error, with probability $1-p(t)$, or a single $X$, $Y$ or $Z$ error, with probabilities $p_x(t)$, $p_y(t)$ and $p_z(t)$. We can also see how temporal effects are incorporated into the error correction model. The temporal integration window $t$ of the master equation will influence the probability that an error is detected and corrected for a fixed rate $\Gamma$: the longer between correction cycles, the more probable it is that the qubit experiences an error (these probabilities are evaluated numerically in the sketch below). \subsection{More general mappings} Both the systematic gate errors and the errors induced by environmental decoherence illustrate the digitization effect of quantum error correction. However, we can quite easily generalize digitization to arbitrary mappings of the density matrix.
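Before treating general maps, it is worth seeing the numbers in the decoherence example above. The following short sketch tabulates $p_x(t)$, $p_y(t)$ and $p_z(t)$; the rates are illustrative values chosen by us, not taken from any physical system:
\begin{verbatim}
import numpy as np

def pauli_error_probs(t, Gamma, Gamma_Z):
    """Error probabilities from the solved master equation."""
    px = py = 0.25 * (1 - np.exp(-2 * Gamma * t))
    pz = 0.25 * (1 + np.exp(-2 * Gamma * t)
                 - 2 * np.exp(-(Gamma + 2 * Gamma_Z) * t))
    return px, py, pz

# t is the integration window between correction cycles.
for t in (0.01, 0.1, 1.0):
    px, py, pz = pauli_error_probs(t, Gamma=0.1, Gamma_Z=0.5)
    print(f"t={t}: p_x={px:.4f} p_y={py:.4f} p_z={pz:.4f}")
\end{verbatim}
As expected, all three probabilities grow with the window $t$, and for $t \rightarrow \infty$ each approaches $1/4$, i.e. a fully depolarized qubit.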
Consider now a more general Kraus map on a multi-qubit density matrix, \begin{equation} \rho \rightarrow \sum_k A_k\rho A_k^{\dagger}, \end{equation} where $\sum_k A_k^{\dagger}A_k = I$. For the sake of simplicity let us choose a simple mapping where $A_1 = (Z_1+iZ_2)/\sqrt{2}$ and $A_k = 0$ for $k\neq 1$. This mapping essentially represents correlated dephasing on two qubits. However, when considered in the context of error correction, this type of mapping represents independent $Z$ errors on either qubit one or qubit two. To illustrate, first expand out the density matrix (neglecting normalization), \begin{equation} \rho \rightarrow A_1\rho A_1^{\dagger} = Z_1\rho Z_1 + Z_2\rho Z_2 - iZ_1\rho Z_2 + iZ_2 \rho Z_1. \end{equation} Note that only the first two terms in this expansion, on their own, represent physical mixtures; the last two off-diagonal terms are actually irrelevant in the context of QEC and are removed during correction. To illustrate, we again assume that $\rho$ represents a protected qubit, where $Z_1$ and $Z_2$ are {\em physical} errors on qubits comprising the codeblock. As we are only considering phase errors in this example, we will ignore $X$ correction (but the analysis automatically generalizes if the error mapping contains $X$ terms). A fresh ancilla block, represented by the density matrix $\rho^z_0$, is coupled to the system and the unitary $U_{QEC}$ is run, \begin{widetext} \begin{equation} \begin{aligned} U_{QEC}^{\dagger}\rho'\otimes \rho^z_0 U_{QEC} = &U_{QEC}^{\dagger}Z_1\rho Z_1\otimes \rho^z_0 U_{QEC} + U_{QEC}^{\dagger}Z_2\rho Z_2\otimes \rho^z_0 U_{QEC}\\ - &iU_{QEC}^{\dagger}Z_1\rho Z_2\otimes \rho^z_0 U_{QEC} + iU_{QEC}^{\dagger}Z_2 \rho Z_1\otimes \rho^z_0 U_{QEC} \\ = &Z_1\rho Z_1 \otimes \ket{Z_1}\bra{Z_1} +Z_2\rho Z_2 \otimes \ket{Z_2}\bra{Z_2} -iZ_1\rho Z_2 \otimes \ket{Z_1}\bra{Z_2} +iZ_2\rho Z_1 \otimes \ket{Z_2}\bra{Z_1}, \end{aligned} \end{equation} \end{widetext} where $\ket{Z_1}$ and $\ket{Z_2}$ represent the two orthogonal syndrome states of the ancilla that are used to detect phase errors on qubits one and two respectively. The important part of the above expression is that when the syndrome qubits are measured we are calculating expectation values of the ancilla projectors $\ket{Z_1}\bra{Z_1}$ or $\ket{Z_2}\bra{Z_2}$; therefore the two cross terms in the above expression are never observed. In this mapping the only two possible states that exist after the measurement of the ancilla system are, \begin{equation} \begin{aligned} Z_1\rho Z_1 \otimes \ket{Z_1}\bra{Z_1} \quad &\text{with probability } \frac{1}{2}, \\ Z_2\rho Z_2 \otimes \ket{Z_2}\bra{Z_2} \quad &\text{with probability } \frac{1}{2}. \end{aligned} \end{equation} Therefore, not only are the cross terms eliminated via error correction, but the final density matrix again collapses to a single error perturbation of ``clean'' codeword states with no correlated errors. Consequently, in standard QEC analysis it is assumed that after each elementary gate operation, measurement, initialization and memory step, a hypothetical error correction cycle is run. This cycle digitizes all continuous errors (either systematic or environmental) into either an $X$ and/or $Z$ error on each qubit. This cycle is assumed to be error free and to take zero time. In this way error correction can be analyzed by assuming perfect gate operations and discrete, probabilistic errors. The probability of each error occurring can then be independently calculated via a systematic gate analysis or through the evolution of the master equation.
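The cross-term elimination described above can be checked directly on a small example. The following sketch is our own construction: the 3-qubit phase-flip code, with $\ket{0}_L = \ket{+++}$, stands in for the protected qubit, the two-qubit dephasing map is applied, and the two syndrome projectors are evaluated:
\begin{verbatim}
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
def op(*ps):   # tensor product of 2x2 matrices
    return reduce(np.kron, ps)

plus = np.array([1, 1]) / np.sqrt(2)
psi = reduce(np.kron, [plus] * 3)   # |+++> = |0>_L
rho = np.outer(psi, psi.conj())

Z1, Z2 = op(Z, I, I), op(I, Z, I)
A1 = (Z1 + 1j * Z2) / np.sqrt(2)
rho_err = A1 @ rho @ A1.conj().T    # Kraus map from the text

# Syndrome projectors from the stabilizers X1X2 and X2X3.
S1, S2 = op(X, X, I), op(I, X, X)
def proj(s1, s2):
    return (np.eye(8) + s1 * S1) @ (np.eye(8) + s2 * S2) / 4

p_Z1 = np.trace(proj(-1, +1) @ rho_err).real  # "Z on qubit 1"
p_Z2 = np.trace(proj(-1, -1) @ rho_err).real  # "Z on qubit 2"
print(p_Z1, p_Z2)                             # 0.5 0.5
\end{verbatim}
Each syndrome fires with probability $1/2$, and the corresponding post-measurement states are exactly $Z_1\rho Z_1$ and $Z_2\rho Z_2$; the coherences between the two error terms never appear in any observable.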
\section{Fault-tolerant quantum error correction and the threshold theorem}\label{sec:Fault-tolerance} Section~\ref{sec:sec:QEC} detailed the protocols required to correct for quantum errors; however, this implementation of QEC assumed the following: \begin{enumerate} \item Errors only occur during ``memory'' regions, i.e. when quantum operations or error correction are not being performed, and errors do not occur on ancilla qubits. \item The quantum gates themselves do not induce any systematic errors within the logical data block. \end{enumerate} Clearly these are two very unrealistic assumptions, and error correction procedures and logical gate operations need to be designed such that errors can still be corrected when they occur at any location. \subsection{Fault-tolerance} The concept of fault-tolerance in computation is not a new idea; it was first developed in relation to classical computing~\cite{N55,G83,A87}. However, in recent years the precise manufacturing of digital circuitry has made large scale error correction and fault-tolerant circuits largely unnecessary. The basic principle of fault-tolerance is that the circuits used for gate operations and error correction procedures should not cause errors to cascade. This can be seen clearly when we look at a simple CNOT operation between two qubits [Fig.~\ref{fig:CNOT}]. In this circuit we are performing a sequence of three CNOT gates which act to take the state $\ket{111}\ket{000} \rightarrow \ket{111}\ket{111}$. In Fig.~\ref{fig:CNOT}a. we consider a single $X$ error which occurs on the top most qubit prior to the first CNOT. This single error will cascade through each of the three gates such that the $X$ error has now propagated to four qubits. Fig.~\ref{fig:CNOT}b. shows a slightly modified design that implements the same operation, but the single $X$ error now only propagates to two of the six qubits. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.85\textwidth]{CNOT.pdf} \caption{Two circuits to implement the transformation $\ket{111}\ket{000} \rightarrow \ket{111}\ket{111}$. a) shows a version where a single $X$ error can cascade into four errors while b) shows an equivalent circuit where the error only propagates to a second qubit. } \label{fig:CNOT} \end{center} \end{figure*} If we consider each block of three as a single logical qubit, then the staggered circuit will only induce a total of one error in each logical block, given a single $X$ error occurred somewhere during the gate operations. This is one of the standard definitions of fault-tolerance. {\em Fault-tolerant circuit element: A single error will cause \textbf{at most} one error in the output for each logical qubit block.} It should be stressed that fault-tolerance is a discrete definition; either a certain quantum operation is fault-tolerant or it is not. What is defined to be fault-tolerant can change depending on the error correction code used. For example, for a single error correcting code, the above definition is the only one available (since any more than one error in a logical qubit will result in the error correction procedure failing). However, if the quantum code employed is able to correct multiple errors, then the definition of fault-tolerance can be relaxed, i.e. if the code can correct three errors then circuits may be designed such that a single failure results in at most two errors in the output (which are then correctable).
In general, for a code correcting $t=\lfloor (d-1)/2 \rfloor$ errors, fault-tolerance requires that $\leq t$ errors during an operation do not result in $> t$ errors in the output for each logical qubit. \subsection{Threshold Theorem} \label{sec:threshold} The threshold theorem is truly a remarkable result in quantum information and is a consequence of fault-tolerant circuit design and the ability to perform dynamical error correction. Rather than present a detailed derivation of the theorem for a variety of noise models, we will instead take a very simple case where we utilize a quantum code that can only correct for a single error, using a model that assumes uncorrelated errors on individual qubits. For more rigorous derivations of the theorem see~\cite{AB97,G97+,A07}. Consider a quantum computer where each physical qubit experiences either an $X$ and/or $Z$ error independently with probability $p$, per gate operation. Furthermore, it is assumed that each logical gate operation and error correction circuit is designed according to the rules of fault-tolerance and that a cycle of error correction is performed after each elementary {\em logical} gate operation. If an error occurs during a logical gate operation, then fault-tolerance ensures this error will only propagate to at most one error in each block, after which a cycle of error correction will remove the error. Hence if the failure probability of un-encoded qubits per time step is $p$, then a single level of error correction will ensure that the logical step fails only when two (or more) errors occur. Hence the failure rate of each logical operation, to leading order, is now $p^1_L = cp^2$, where $p^1_L$ is the failure rate (per logical gate operation) of a 1st level logical qubit and $c$ is the upper bound for the number of possible 2-error combinations which can occur at a physical level within the circuit consisting of the correction cycle $+$ gate operation $+$ correction cycle~\cite{A07}. We now repeat the process, re-encoding the computer such that a level-2 logical qubit is formed, using the same $[[n,k,d]]$ quantum code, from $n$ level-1 encoded qubits. It is assumed that all error correcting procedures and gate operations at the 2nd level are self-similar to the level-1 operations (i.e. the circuit structures for the level-2 encoding are identical to the level-1 encoding). Therefore, if the level-1 failure rate per logical time step is $p^1_L$, then by the same argument, the failure rate of a 2-level operation is given by, $p^2_L = c(p^1_L)^2 = c^3p^4$. This iterative procedure is then repeated (referred to as concatenation) up to the $k$th level, such that the logical failure rate, per time step, of a $k$-level encoded qubit is given by, \begin{equation} p^k_L = \frac{(cp)^{2^k}}{c}. \label{eq:threshold} \end{equation} Eq.~\ref{eq:threshold} implies that for a finite {\em physical} error rate, $p$, per qubit, per time step, the failure rate of the $k$th-level encoded qubit can be made arbitrarily small by simply increasing $k$, provided $cp < 1$. This inequality defines the threshold: the physical error rate experienced by each qubit per time step must satisfy $p < p_{\rm th} = 1/c$ to ensure that multiple levels of error correction reduce the failure rate of logical components. Hence, provided sufficient resources are available, an arbitrarily large quantum circuit can be successfully implemented, to arbitrary accuracy, once the physical error rate is below threshold.
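The double-exponential suppression in Eq.~\ref{eq:threshold} is easy to see numerically. A minimal sketch, with an illustrative (assumed, not derived) value of $c$:
\begin{verbatim}
# Concatenation scaling of Eq. (threshold):
# p_L^k = (c*p)**(2**k) / c.
c = 1e4                 # illustrative 2-error pair count
for p in (1e-3, 1e-5):  # above / below p_th = 1/c
    for k in range(1, 5):
        pL = (c * p)**(2**k) / c
        print(f"p={p:g}, k={k}: p_L = {pL:.3g}")
\end{verbatim}
For $p > 1/c$ the leading-order logical failure rate grows with each level of concatenation, while for $p < 1/c$ it is suppressed double-exponentially in $k$.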
The calculation of thresholds is therefore an extremely important aspect of quantum architecture design. Initial estimates of the threshold, which gave $p_{th} \approx 10^{-4}$~\cite{K97,AB97,G97+}, did not model physical systems sufficiently accurately. More recent threshold estimates for realistic quantum processor architectures~\cite{SFH07,SDT07,SBFRYSGF06,MCTBCCC04,BSO05} show significant differences in threshold when architectural considerations are taken into account. \section{Fault-tolerant operations on encoded data}\label{sec:operations} Sections~\ref{sec:sec:QEC} and~\ref{sec:Fault-tolerance} showed how fault-tolerant QEC allows for any quantum algorithm to be run to arbitrary accuracy. However, the results of the threshold theorem assume that logical operations can be performed directly on the encoded data without the need for continual decoding and re-encoding. Using stabilizer codes, a large class of operations can be performed on logical data in an inherently fault-tolerant way. If a given logical state, $\ket{\psi}_L$, is stabilized by $K$, and the logical operation $U$ is applied, the new state, $U\ket{\psi}_L$, is stabilized by $UKU^{\dagger}$, i.e., \begin{equation} UKU^{\dagger}U\ket{\psi}_L = UK\ket{\psi}_L = U\ket{\psi}_L. \end{equation} In order for the codeword states to remain valid, the stabilizer set for the code, $\mathcal{G}$, must remain fixed through every operation. Hence for $U$ to be a valid operation on the data, $U\mathcal{G}U^{\dagger} = \mathcal{G}$. \subsection{Single Qubit Operations} The logical $\bar{X}$ and $\bar{Z}$ operations on a single encoded qubit are the first examples of valid codeword operations. Taking the $[[7,1,3]]$ code as an example, $\bar{X}$ and $\bar{Z}$ are given by, \begin{equation} \bar{X} = XXXXXXX \equiv X^{\otimes 7}, \quad \bar{Z} = ZZZZZZZ \equiv Z^{\otimes 7}. \label{eq:logop} \end{equation} Since the single qubit Pauli operators satisfy $XZX = -Z$ and $ZXZ = -X$, then $\bar{X}K^{i}\bar{X} = K^{i}$ and $\bar{Z}K^{i}\bar{Z} = K^{i}$ for each of the $[[7,1,3]]$ stabilizers given in Eq.~\ref{eq:stab7}. The fact that each stabilizer has a weight of four guarantees that $UKU^{\dagger}$ picks up an even number of $-1$ factors. Since the stabilizers remain fixed, the operations are valid. However, what transformations do the operators in Eq.~\ref{eq:logop} actually perform on encoded data? For a single qubit, a bit-flip operation $X$ takes $\ket{0} \leftrightarrow \ket{1}$. Recall that for a single qubit $Z\ket{0} = \ket{0}$ and $Z\ket{1} = -\ket{1}$, hence for $\bar{X}$ to actually induce a logical bit-flip it must take $\ket{0}_L \leftrightarrow \ket{1}_L$. For the $[[7,1,3]]$ code, the final operator which fixes the logical state is $K^7 = Z^{\otimes 7}$, where $K^7\ket{0}_L = \ket{0}_L$ and $K^7\ket{1}_L = -\ket{1}_L$. As $\bar{X}K^7\bar{X} = -K^7$, any state stabilized by $K^7$ becomes stabilized by $-K^7$ (and vice-versa) after the operation of $\bar{X}$. Therefore, $\bar{X}$ represents a logical bit flip. The same argument can be used for $\bar{Z}$ by considering the stabilizer properties of the states $\ket{\pm} = (\ket{0} \pm \ket{1})/\sqrt{2}$. Hence, the logical bit- and phase-flip gates can be applied directly to logical data by simply using seven single qubit $X$ or $Z$ gates [Fig.~\ref{fig:transversal}]. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.75\textwidth]{transversal.pdf} \caption{Bit-wise application of single qubit gates in the $[[7,1,3]]$ code.
Logical $X$, $Z$, $H$ and $P$ gates can trivially be applied, fault-tolerantly, by using seven single qubit gates. Note that the application of seven $P$ gates results in the logical $\bar{P^{\dagger}}$ being applied and vice-versa.} \label{fig:transversal} \end{center} \end{figure*} Two other useful gates which can be applied in this manner are the Hadamard rotation and the phase gate, \begin{equation} H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad \quad P = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}. \end{equation} These gates are useful since, when combined with the two-qubit CNOT gate, they generate a group of multi-qubit gates known as the Clifford group (gates which map Pauli group operators back to the Pauli group). Again, using the stabilizers of the $[[7,1,3]]$ code and the fact that for single qubits, \begin{equation} \begin{aligned} HXH = Z, \quad \quad HZH = X, \\ PXP^{\dagger} = iXZ, \quad \quad PZP^{\dagger} = Z, \end{aligned} \end{equation} a seven qubit bit-wise Hadamard gate will switch $X$ with $Z$ and therefore will simply flip $\{K^1,K^2,K^3\}$ with $\{K^4,K^5,K^6\}$, and is a valid operation. The bit-wise application of the $P$ gate will leave any $Z$ stabilizer invariant, but takes $X \rightarrow iXZ$. This is still valid since each stabilizer of the $[[7,1,3]]$ code contains a multiple of four non-identity operators, so the factors of $i$ cancel. Hence the bit-wise application of seven $P$ gates is valid for the $[[7,1,3]]$ code. What do $\bar{H}$ and $\bar{P}$ do to the logical state? For a single qubit, the Hadamard gate flips any $Z$ stabilized state to an $X$ stabilized state, i.e. $\ket{0,1} \leftrightarrow \ket{+,-}$. Looking at the transformation of $K^7$, $\bar{H}K^7\bar{H} = X^{\otimes 7}$, the bit-wise Hadamard gate will invoke a logical Hadamard operation. The single qubit $P$ gate leaves a $Z$ stabilized state invariant, while an $X$ eigenstate becomes stabilized by $iXZ$. Hence, $\bar{P}(X^{\otimes 7})\bar{P}^{\dagger} = -i(XZ)^{\otimes 7}$ and the bit-wise gate, $\bar{P}$, represents a logical $P^{\dagger}$ gate on the data block. Similarly, bit-wise $\bar{P^{\dagger}}$ gates enact a logical $P$ gate [Fig.~\ref{fig:transversal}]. Each of these fault-tolerant operations on a logically encoded block are commonly referred to as transversal operations, as a logical operation is obtained by a set of individual operations acting transversally on the physical qubits. \subsection{Two-qubit gates} A two-qubit logical CNOT operation can also be applied in the same transversal way. For un-encoded qubits, a CNOT operation performs the following mapping on the two qubit stabilizer set, \begin{equation} \begin{aligned} &X\otimes I \rightarrow X\otimes X, \\ &I\otimes Z \rightarrow Z\otimes Z, \\ &Z\otimes I \rightarrow Z\otimes I, \\ &I\otimes X \rightarrow I\otimes X, \end{aligned} \label{eq:CNOTtrans} \end{equation} where the first operator corresponds to the control qubit and the second operator corresponds to the target. Now consider the bit-wise application of seven CNOT gates between logically encoded blocks of data [Fig.~\ref{fig:transversal2}]. First the stabilizer set must remain invariant, i.e., \begin{equation} \mathcal{G} = \{K^{i}\otimes K^{j}\} \rightarrow \{K^{i}\otimes K^{j}\} \; \forall \; (i,j). \end{equation} Table~\ref{tab:stabtrans} details the transformation for all the stabilizers under seven bit-wise CNOT gates, demonstrating that this operation is valid on the $[[7,1,3]]$ code.
The transformations in Eq.~\ref{eq:CNOTtrans} are trivially extended to the logical space, showing that seven bit-wise CNOT gates invoke a logical CNOT operation. \begin{equation} \begin{aligned} &\bar{X}\otimes I \rightarrow \bar{X}\otimes \bar{X}, \\ &I\otimes \bar{Z} \rightarrow \bar{Z}\otimes \bar{Z}, \\ &\bar{Z}\otimes I \rightarrow \bar{Z}\otimes I, \\ &I\otimes \bar{X} \rightarrow I\otimes \bar{X}. \end{aligned} \label{eq:CNOTtrans2} \end{equation} \begin{table*} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $K^i \otimes K^j$ & $K^1$ & $K^2$ &$K^3$ &$K^4$ &$K^5$ &$K^6$ \\ \hline $K^1$ & $K^1\otimes I$& $K^1\otimes K^1K^2$ & $K^1\otimes K^1K^3$ & $K^1K^4 \otimes K^1K^4$ & $K^1K^5\otimes K^1K^5$ & $K^1K^6\otimes K^1K^6$\\ \hline $K^2$ & $K^2\otimes K^1K^2$ & $K^2\otimes I$ & $K^2 \otimes K^2K^3$ &$K^2K^4\otimes K^2K^4$ &$K^2K^5\otimes K^2K^5$ & $K^2K^6\otimes K^2K^6$\\ \hline $K^3$ & $K^3\otimes K^3K^1$ & $K^3\otimes K^3K^2$ &$K^3\otimes I$ & $K^3K^4\otimes K^3K^4$ & $K^3K^5\otimes K^3K^5$ &$K^3K^6\otimes K^3K^6$ \\ \hline $K^4$ & $K^4\otimes K^1$ & $K^4\otimes K^2$ & $K^4\otimes K^3$ & $I\otimes K^4$ & $K^4K^5\otimes K^5$ &$K^4K^6\otimes K^6$ \\ \hline $K^5$ & $K^5\otimes K^1$ & $K^5\otimes K^2$ & $K^5\otimes K^3$ & $K^5K^4\otimes K^4$ & $I\otimes K^5$ &$K^5K^6\otimes K^6$ \\ \hline $K^6$ & $K^6\otimes K^1$ & $K^6\otimes K^2$ & $K^6\otimes K^3$ & $K^6K^4\otimes K^4$ & $K^6K^5\otimes K^5$ &$I\otimes K^6$ \\ \hline \end{tabular} \caption{Transformations of the $[[7,1,3]]$ stabilizer set under the gate operation $U=$CNOT$^{\otimes 7}$, where $\mathcal{G} \rightarrow U^{\dagger}\mathcal{G}U$. Note that the transformation does not take any stabilizer outside the group generated by $K^i \otimes K^j$, $i,j\in \{1,\ldots,6\}$, therefore $U=$CNOT$^{\otimes 7}$ represents a valid operation on the codespace.} \label{tab:stabtrans} \end{table*} The issue of fault-tolerance with these logical operations should be clear. The $\bar{X}$, $\bar{Z}$, $\bar{H}$ and $\bar{P}$ gates are trivially fault-tolerant since the logical operation is performed through seven bit-wise single qubit gates. The logical CNOT is also fault-tolerant since each two-qubit gate only operates between counterpart qubits in each logical block. Hence if any gate is inaccurate, then at most a single error will be introduced in each block. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{transversal2.pdf} \caption{Bit-wise application of a CNOT gate between two logical qubits. Since each CNOT only couples corresponding qubits in each block, this operation is inherently fault-tolerant.} \label{fig:transversal2} \end{center} \end{figure} In contrast to the [[7,1,3]] code, let us also take a quick look at the [[5,1,3]] code. As mentioned in section~\ref{sec:sec:QEC2}, the [[5,1,3]] code is a non-CSS code, meaning that the Clifford group of gates cannot be fully implemented in a transversal manner. To see this clearly we can examine how the stabilizer group for the code transforms under a transversal Hadamard operation, \begin{equation} \begin{pmatrix} X & Z & Z & X & I \\ I & X & Z & Z & X \\ X & I & X & Z & Z \\ Z & X & I & X & Z \end{pmatrix} \quad \longrightarrow \quad \begin{pmatrix} Z & X& X & Z & I \\ I & Z & X & X & Z \\ Z & I & Z & X & X \\ X & Z & I & Z & X \end{pmatrix}. \end{equation} The stabilizer group is not preserved under this transformation; therefore the transversal Hadamard operation is not valid for the [[5,1,3]] code.
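This group-preservation test is mechanical enough to automate. The sketch below is our own construction; signs are ignored, which is sufficient here, since a transformed generator whose binary symplectic vector falls outside the GF(2) span of the original generators certainly lies outside the stabilizer group:
\begin{verbatim}
import numpy as np

def symplectic(strings):
    """Pauli strings (I/X/Z only) -> binary (x|z) matrix."""
    n = len(strings[0])
    M = np.zeros((len(strings), 2 * n), dtype=int)
    for r, s in enumerate(strings):
        for q, c in enumerate(s):
            if c == 'X': M[r, q] = 1
            if c == 'Z': M[r, n + q] = 1
    return M

def gf2_rank(M):
    """Rank over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        rows = [r for r in range(rank, M.shape[0]) if M[r, col]]
        if not rows: continue
        M[[rank, rows[0]]] = M[[rows[0], rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def transversal_H_valid(strings):
    G = symplectic(strings)
    n = len(strings[0])
    H = np.hstack([G[:, n:], G[:, :n]])  # swap x/z halves
    return all(gf2_rank(np.vstack([G, h])) == gf2_rank(G)
               for h in H)

steane = ['IIIXXXX', 'XIXIXIX', 'IXXIIXX',
          'IIIZZZZ', 'ZIZIZIZ', 'IZZIIZZ']
five = ['XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ']
print(transversal_H_valid(steane),
      transversal_H_valid(five))   # True False
\end{verbatim}
The Steane generators map exactly onto one another under the X/Z swap, while for the [[5,1,3]] code the swapped vectors leave the span, confirming the matrix transformation shown above.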
One thing to briefly note is that there is a method for performing logical Hadamard and phase gates on the [[5,1,3]] code~\cite{G97+}. However, it essentially involves performing a valid, transversal, three-qubit gate and then measuring out two of the logical ancillae. While these gates are useful for operating on quantum data, they do not represent a universal set for quantum computation. In fact it has been shown that, by using the stabilizer formalism, these operations can be efficiently simulated on a classical device~\cite{G98,AG04}. In order to achieve universality, one of the following gates is generally added to the available set: the $T$ gate, \begin{equation} T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}, \end{equation} or the Toffoli gate~\cite{T81}. However, neither of these two gates is a member of the Clifford group, and applying them in a similar transversal way will transform the stabilizers out of the group and consequently does not represent a valid operation. Circuits implementing these two gates in a fault-tolerant manner have been developed~\cite{NC00,GC99,SI05,SFH07}, but at this stage the circuits are complicated and resource intensive. This has practical implications for encoded operations. If universality is achieved by adding the $T$ gate to the set, arbitrary logical single qubit rotations must be approximated by long gate sequences (utilizing the Solovay-Kitaev theorem~\cite{K97,DN06}), and these sequences often require many $T$ gates~\cite{F05+}. Finding more efficient methods to achieve universality on encoded data is therefore still an active area of research. \section{Fault-tolerant circuit design for logical state preparation}\label{sec:FTcircuit} Section~\ref{sec:Fault-tolerance} introduced the basic rules for fault-tolerant circuit design and how these rules lead to the threshold theorem for concatenated error correction. However, what does a full fault-tolerant quantum circuit look like? Here, we introduce a full fault-tolerant circuit to prepare the $[[7,1,3]]$ logical $\ket{0}$ state. As the $[[7,1,3]]$ code is a single error correcting code, we use the one-to-one definition of fault-tolerance and therefore only need to consider the propagation of a single error during the preparation (any more than one error during correction represents a higher order effect and is ignored). As described in Section~\ref{sec:sec:QEC}, logical state preparation can be done by initializing an appropriate number of physical qubits and measuring each of the $X$ stabilizers that describe the code. Therefore, a circuit which allows the measurement of a Hermitian operator in a fault-tolerant manner needs to be constructed. The general structure of the circuit used was first developed by Shor~\cite{S96+}; however, it should be noted that several more recent methods for fault-tolerant state preparation and correction now exist~\cite{S97,S02,DA07} which are more efficient than Shor's original method. The circuits shown in Figs.~\ref{fig:opmeas2}a and~\ref{fig:opmeas2}b, which measure the stabilizer $K^1 = IIIXXXX$, are not fault-tolerant, since a single ancilla is used to control each of the four CNOT gates: a single $X$ error on this ancilla can propagate to multiple qubits in the data block. Instead, four ancilla qubits are used, prepared in the state $\ket{\mathcal{A}} = (\ket{0000}+\ket{1111})/\sqrt{2}$. This can be done by initializing four qubits in the $\ket{0}$ state and applying a Hadamard followed by a sequence of CNOT gates.
Each of these four ancilla are used to control a separate CNOT gate, after which the ancilla state is decoded and measured. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{opmeas2.pdf} \caption{Three circuits which measure the stabilizer $K^1$. Fig. a) represents a generic operator measurement where a multi-qubit controlled gate is available. Fig. b) decomposes this into single- and two-qubit gates, but in a non-fault-tolerant manner. Fig. c) introduces four ancilla such that each CNOT is controlled via a separate qubit. This ensures fault-tolerance.} \label{fig:opmeas2} \end{center} \end{figure*} By ensuring that each CNOT is controlled via a separate ancilla, any $X$ error will only propagate to a single qubit in the data block. However, during the preparation of the ancilla state there is the possibility that a single $X$ error can propagate to multiple ancilla, which are then fed forward into the data block. In order to combat this, the ancilla block needs to be verified against possible $X$ errors. Tracking through all the possible locations where a single $X$ error can occur during ancilla preparation leads to the following unique states, \begin{equation} \begin{aligned} &\ket{\mathcal{A}}_1 = \frac{1}{\sqrt{2}}(\ket{0000}+\ket{1111}),\\ &\ket{\mathcal{A}}_2 = \frac{1}{\sqrt{2}}(\ket{0001} + \ket{1110}),\\ &\ket{\mathcal{A}}_3 = \frac{1}{\sqrt{2}}(\ket{0011} + \ket{1100}),\\ &\ket{\mathcal{A}}_4 = \frac{1}{\sqrt{2}}(\ket{0111} + \ket{1000}),\\ &\ket{\mathcal{A}}_5 = \frac{1}{\sqrt{2}}(\ket{0100} + \ket{1011}). \end{aligned} \end{equation} Of these possibilities, the second, third and fourth states have a different parity between the first and fourth qubits; these include every state in which errors would be fed forward to more than one data qubit. (The state $\ket{\mathcal{A}}_5$ differs from $\ket{\mathcal{A}}_1$ by a flip on a single qubit only, so it feeds at most one error into the data block, which a subsequent correction cycle can remove.) Hence to verify the ancilla state, a fifth ancilla is added, initialized and used to perform a parity check between the first and fourth qubits of the ancilla block. This fifth ancilla is then measured. If the result is $\ket{0}$, the ancilla block is considered clean and can be coupled to the data. If the result is $\ket{1}$, then either a single error has occurred in the ancilla preparation or on this verification qubit. In either case, the entire ancilla block is re-initialized and the ancilla prepared again. This is continued until the verification qubit is measured to be $\ket{0}$ [Fig.~\ref{fig:opmeas3}]. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{opmeas3.pdf} \caption{Circuit required to measure the stabilizer $K^1$ fault-tolerantly. A four qubit GHZ state is used as ancilla, with the state requiring verification against multiple $X$ errors. After the state has passed verification it is coupled to the data block and a syndrome is extracted.} \label{fig:opmeas3} \end{center} \end{figure*} \begin{figure*}[ht] \begin{center} \includegraphics[width=0.7\textwidth,height=0.9\textheight]{opmeas4.pdf} \caption{ Circuit required to prepare the $[[7,1,3]]$ logical $\ket{0}$ state fault-tolerantly. Each of the $X$ stabilizers is sequentially measured using the circuit in Fig.~\ref{fig:opmeas3}. To maintain fault-tolerance, each stabilizer is measured 2-3 times with a majority vote taken.} \label{fig:stateprep} \end{center} \end{figure*} The re-preparation of the ancilla block protects against $X$ errors, which can propagate forward through the CNOT gates. $Z$ errors, on the other hand, propagate in the opposite direction: any $Z$ error which occurs in the ancilla block will propagate straight through to the final measurement.
This results in the measurement not corresponding to the eigenstate the data is projected to and can possibly result in mis-correction once all stabilizers have been measured. To protect against this, each stabilizer is measured 2-3 times and a majority vote of the measurement results is taken. As any additional error represents a second order process, if the first or second measurement has been corrupted by an induced $Z$ error, then the third measurement will only contain additional errors if a higher order error process has occurred. Therefore, we are free to ignore this possibility and assume that the third measurement is error free. The full circuit for $[[7,1,3]]$ state preparation is shown in Fig.~\ref{fig:stateprep}, where each stabilizer is measured 2-3 times. The total circuit requires a minimum of 12 qubits (seven data qubits and a 5-qubit ancilla block). As you can see, the circuit constructions for full fault-tolerant state preparation (and error correction) are not simple circuits. However, they are easy to design in generic ways when employing stabilizer coding. \section{Loss Protection} So far we have focused the discussion on correction techniques which assume that error processes preserve the qubit structure of the Hilbert space. As we noted in section~\ref{sec:lossleakage}, the loss of physical qubits within the computer violates this assumption and in general requires additional correction machinery beyond what we have already discussed. For the sake of completeness, this section examines some correction techniques for qubit loss. Specifically, we detail one such scheme which was developed with single-photon based architectures in mind. Protecting against qubit loss requires a different approach than other general forms of quantum errors such as environmental decoherence or systematic control imperfections. The cumbersome aspect of correcting qubit loss is detecting the presence of a qubit at the physical level. The specific machinery that is required for loss detection is dependent on the underlying physical architecture, but the basic principle is that the presence or absence of the physical qubit must be determined without discriminating the actual quantum state. Certain systems allow for loss detection in a more convenient way than others. Electronic spin qubits, for example, can employ single electron transistors (SETs) to detect the presence or absence of the charge without performing measurement on the spin degree of freedom~\cite{DS00,CGJH05,AJWSD01}. Optics, in contrast, requires more complicated non-demolition measurement~\cite{MW84,IHY85,POWBR04,MNBS05}. This is due to the fact that typical photonic measurement is performed via photo-detectors, which have the disadvantage of physically destroying the photon. Once the presence or absence of the physical qubit has been determined, a freshly initialized qubit can be injected to replace a lost qubit, after which the standard error correcting procedure can correct for the resulting error. A freshly initialized qubit state, $\ket{0}$, can be represented as the projective collapse of a general qubit state, $\ket{\psi}$, as, \begin{equation} \ket{0} \propto \ket{\psi}+Z\ket{\psi}. \end{equation} If we consider this qubit as part of an encoded block, then the above corresponds to a 50\% probability of a phase-flip error on this qubit. Therefore, a loss event that is corrected by non-demolition detection and standard QEC essentially guarantees a correction event in the QEC cycle.
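The identity $\ket{0} \propto \ket{\psi}+Z\ket{\psi}$ is a one-line check; a minimal sketch with a randomly chosen state:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)   # arbitrary single-qubit state

Z = np.array([[1, 0], [0, -1]])
v = psi + Z @ psi            # = 2*alpha*|0>
print(v)                     # |1> amplitude is exactly zero
\end{verbatim}
Only the $\ket{0}$ amplitude survives, so the replacement qubit is equivalent to the original qubit having suffered a $Z$ error with probability $1/2$.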
The loss rate therefore needs to be comparable to the rate of standard errors, since the correction cycle following a loss detection event will, with high probability, register and correct an error. Additionally, if a loss event is detected and the qubit replaced, the error detection code shown in section~\ref{sec:detection} becomes a single qubit correction code. This is due to the fact that erasure errors have known locations; consequently error detection is sufficient to perform full correction, in contrast to non-erasure errors where the location is unknown. A second method for loss correction is related to systems that have high loss rates compared to systematic and environmental errors, the most prevalent example being optical systems. Due to the high mobility of single photons and their relative immunity to environmental interactions, loss is a major error channel that generally dominates over other error sources. The use of error detection and correction codes for photon loss is undesirable due to the need for non-demolition detection of the lost qubit. While techniques for measuring the presence or absence of a photon without direct detection have been developed and implemented~\cite{POWBR04}, they require multiple ancilla photons and controlled interactions. Ultimately it is more desirable to redesign the loss correction code such that it can be employed directly with photo-detection rather than more complicated non-demolition techniques. One such scheme was developed by Ralph, Hayes and Gilchrist in 2005~\cite{RHG05}. This scheme was a more efficient extension of an original parity encoding method developed by Knill, Laflamme and Milburn to protect against photon loss in their controlled-$\sigma_z$ gate~\cite{KLM01}. The general parity encoding for a logical qubit is an $N$ photon GHZ state in the conjugate basis, i.e., \begin{equation} \begin{aligned} &\ket{0}_L^N = \frac{1}{\sqrt{2}}(\ket{+}^{\otimes N} + \ket{-}^{\otimes N}), \\ &\ket{1}_L^N = \frac{1}{\sqrt{2}}(\ket{+}^{\otimes N} - \ket{-}^{\otimes N}), \end{aligned} \end{equation} where $\ket{\pm} = (\ket{0}\pm \ket{1})/\sqrt{2}$. The motivation for this type of encoding is that measuring any qubit in the $\ket{0,1}$ basis simply removes it from the state, reducing the size of the encoding by one, i.e., \begin{equation} \begin{aligned} P_{0,N} \ket{0}_L^N &= (I_N + Z_N)\ket{0}_L^N \\ & = \frac{1}{\sqrt{2}}(\ket{+}^{N-1} + \ket{-}^{N-1})\ket{0}_N = \ket{0}_L^{N-1}\ket{0}_N, \\ P_{1,N} \ket{0}_L^N &= (I_N - Z_N)\ket{0}_L^N \\ &= \frac{1}{\sqrt{2}}(\ket{+}^{N-1} - \ket{-}^{N-1})\ket{1}_N = \ket{1}_L^{N-1}\ket{1}_N, \\ \end{aligned} \label{eq:lossenc} \end{equation} where $P_{(0,1),N}$ are the projectors corresponding to measurement of the $N^{th}$ qubit in the $\ket{0,1}$ basis (up to normalization). The effect for the $\ket{1}_L$ state is similar. Measuring the $N^{th}$ qubit in the $\ket{0}$ state simply removes it from the encoded state, reducing the logical zero state by one qubit, while measuring the $N^{th}$ qubit as $\ket{1}$ enacts a logical bit flip at the same time as reducing the size of the logical state. However, since the measurement result is known, this encoded bit flip can be corrected for. Rather than introducing the full scheme developed in~\cite{RHG05}, we just give the general idea of how such encoding allows for loss correction without non-demolition measurements.
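A small numerical check of Eq.~\ref{eq:lossenc}, using $N=3$ (our own sketch; the qubit ordering places the measured qubit last):
\begin{verbatim}
import numpy as np
from functools import reduce

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

def logical(N, sign):  # sign=+1: |0>_L^N, sign=-1: |1>_L^N
    v = (reduce(np.kron, [plus] * N)
         + sign * reduce(np.kron, [minus] * N))
    return v / np.linalg.norm(v)

zero3 = logical(3, +1)
# Measure qubit 3 in the computational basis: keep the
# amplitudes where it is |0> (col 0) or |1> (col 1).
proj0 = zero3.reshape(4, 2)[:, 0].copy()
proj1 = zero3.reshape(4, 2)[:, 1].copy()
proj0 /= np.linalg.norm(proj0)
proj1 /= np.linalg.norm(proj1)

print(np.allclose(proj0, logical(2, +1)))  # True: |0>_L^2
print(np.allclose(proj1, logical(2, -1)))  # True: |1>_L^2
\end{verbatim}
Losing a photon corresponds to exactly this measurement with an unknown result, and the redundancy encoding introduced below then protects against the possible logical bit flip.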
Photon loss in this model is assumed equivalent to measuring the photon in the $\ket{0},\ket{1}$ basis, but not knowing the answer [Sec.~\ref{sec:lossleakage}]. Our ignorance of the measurement result could lead to a logical bit-flip error on the encoded state; therefore we require the ability to protect against logical bit-flip errors on the above states. As already shown, the 3-qubit code allows us to achieve such correction. Therefore the final step in this scheme is encoding the above states into a redundancy code (a generalized version of the 3-qubit code), where an arbitrary logical state, $\ket{\psi}_L$, is now given by, \begin{equation} \ket{\psi}_L = \alpha\ket{0}_1^N \ket{0}_2^N \cdots \ket{0}_q^N + \beta \ket{1}_1^N \ket{1}_2^N \cdots \ket{1}_q^N, \end{equation} where $\ket{0}^N,\ket{1}^N$ are the parity encoded states shown in Eq.~\ref{eq:lossenc} and the fully encoded state consists of $q$ blocks of these parity states. This form of encoding protects against the loss of qubits by first encoding the system into a code structure that allows for the removal of qubits without destroying the computational state and then protecting against logical errors that are induced by loss events. In effect it maps errors that are un-correctable by standard QEC to error channels that are correctable, in this case mapping qubit loss to qubit bit flips. This is common with pathological error channels: if a specific type of error violates the standard ``qubit'' assumption of QEC, additional correction techniques are always required to map this type of error to a correctable form; consequently, additional physical resources are usually needed. \section{Some Modern Developments in Error Correction} \label{sec:modern} Up until this stage we have restricted our discussion of error correction to the most basic principles and codes. The ideas and methodologies we have detailed represent the introductory techniques that were developed when error correction was first proposed. Readers who are only looking for a basic introduction to the field can quite easily skip the remainder of this paper. Providing a fair and encompassing review of the more modern and advanced error correction techniques that have been developed is far beyond the goal of this review. However, we would be remiss if we did not briefly examine some of the more advanced error correction techniques that have been proposed for large scale quantum information processing. For the remainder of this discussion we choose two closely related error correction techniques, subsystem coding and topological coding, which have been receiving significant attention in the fields of architecture design and large scale quantum information processing. While some readers may disagree, we review these two modern error correction protocols because they are currently two of the most useful correction techniques when discussing the physical construction of a quantum computer. We again attempt to keep the discussion of these techniques light and provide specific examples when possible. However, it should be stressed that these error correcting protocols are far more complicated than the basic codes shown earlier. Topological error correction alone has, since its introduction, essentially become its own research topic within the broader error correction field. Hence we encourage the interested reader to refer to the cited articles below for a more rigorous and detailed treatment of these techniques.
\subsection{Subsystem Codes} \label{sec:subsystem} Quantum subsystem codes~\cite{B06} are a newer and highly flexible technique for implementing quantum error correction. The traditional stabilizer codes that we have reviewed are more formally identified as subspace codes, where information is encoded in a relevant coding subspace of a larger multi-qubit system. In contrast, subsystem coding identifies multiple subspaces of the multi-qubit system as equivalent for storing quantum information. Specifically, multiple states are identified with the logical $\ket{0}_L$ and $\ket{1}_L$ states. The primary benefit of utilizing subsystem codes is the general nature of their construction. Moving from smaller to larger error correcting codes is conceptually straightforward, error correction circuits are much simpler to construct when encoding information in multiple subsystems~\cite{AC07}, and the generality of their construction introduces the ability to perform dynamical code switching in a fault-tolerant manner~\cite{SEDH07}. This final property gives subsystem coding significant flexibility, as the strength of error correction within a quantum computer can be changed, fault-tolerantly, during operation of the device. As with the other codes presented in this review, the subsystem codes we consider are stabilizer codes, but now defined over a square lattice. The lattice dimensions represent the $X$ and $Z$ error correction properties, and the size of the lattice in either of these two dimensions dictates the total number of errors the code can correct. In general, a $\mathcal{C}$($n_1$,$n_2$) subsystem code is defined over an $n_1\times n_2$ square lattice which encodes one logical qubit into $n_1n_2$ physical qubits with the ability to correct at least $\lfloor\frac{n_1-1}{2}\rfloor$ $Z$ errors and at least $\lfloor\frac{n_2-1}{2}\rfloor$ $X$ errors. Again, keeping with the spirit of this review, we focus on a specific example, the $\mathcal{C}$(3,3) subsystem code. This code, encoding nine physical qubits into one logical qubit, can correct for one $X$ and one $Z$ error. In order to define the code structure we begin with a $3\times 3$ lattice of qubits, where each qubit is identified with the vertices of the lattice (note that this 2D structure represents the structure of the code; it does not imply that a physical array of qubits {\em must} be arranged into a 2D lattice). \begin{figure}[ht!] \begin{center} \includegraphics[width=0.4\textwidth]{stabil.pdf} \caption{Stabilizer structure for the $\mathcal{C}$(3,3) code. Fig. a. gives two of the four stabilizers from the group $\mathcal{S}$. Fig. b. illustrates one of the four encoded Pauli operators from each subsystem defined within the gauge group, $\mathcal{T}$. Fig. c. gives the two logical operators from the group $\mathcal{L}$ which enact valid operations on the encoded qubit. } \label{fig:stab} \end{center} \end{figure} Fig.~\ref{fig:stab} illustrates three sets of stabilizer operators which are defined over the lattice. The first group, illustrated in Fig.~\ref{fig:stab}a., is the stabilizer group, $\mathcal{S}$, which is generated by the operators, \begin{equation} \begin{aligned} \mathcal{S} = \langle X_{i,*} X_{i+1,*} ; Z_{*,j}Z_{*,j+1} \ | \ i \in \Z_{2} ; j \in \Z_{2} \rangle, \end{aligned} \label{stabilizers:bs} \end{equation} where we retain the notation utilized in~\cite{AC07,SEDH07}: $U_{i,*}$ and $U_{*,j}$ represent an operator, $U$, acting on all qubits in a given row, $i$, or column, $j$, respectively, and $\Z_2=\{1,2\}$.
The second relevant subsystem is known as the gauge group~[Fig.~\ref{fig:stab}b.], $\mathcal{T}$, and is described via the non-Abelian group generated by the pairwise operators \begin{equation} \begin{aligned} \mathcal{T} = &\langle X_{i,j}X_{i+1,j} \ | \ i \in \Z_{2} ; j \in \Z_{3} \rangle, \\ &\langle Z_{i,j}Z_{i,j+1} \ | \ i \in \Z_{3} ; j \in \Z_{2} \rangle. \end{aligned} \end{equation} The third relevant subsystem is the logical space~[Fig.~\ref{fig:stab}c], $\mathcal{L}$, which can be defined through the logical Pauli operators \begin{equation} \mathcal{L} = \langle Z_{*,1} ; X_{1,*} \rangle, \end{equation} which, when combined with $\mathcal{S}$, form a non-Abelian group. The stabilizer group, $\mathcal{S}$, defines all relevant code states, i.e. {\em every} valid code state is a $+1$ eigenstate of this set. For the $\mathcal{C}$(3,3) code, there are a total of nine physical qubits and a total of four independent stabilizers in $\mathcal{S}$; hence there are five degrees of freedom left in the system, which can house $2^5$ states that are simultaneous eigenstates of $\mathcal{S}$. This is where the gauge group, $\mathcal{T}$, becomes relevant. As the gauge group is non-Abelian, there is no valid code state which is a simultaneous eigenstate of all operators in $\mathcal{T}$. However, closer examination shows that there are a total of four encoded Pauli operations within $\mathcal{T}$. Fig.~\ref{fig:stab}b. illustrates two such operators. As all elements of $\mathcal{T}$ commute with all elements of $\mathcal{S}$, we can identify each of these four sets of valid ``logical" qubits as equivalent, i.e. we define $\{\ket{0}_L,\ket{1}_L\}$ pairs which are eigenstates of $\mathcal{S}$ and of an Abelian subgroup of $\mathcal{T}$, and then ignore exactly which gauge state we are in (each of the four possible $\ket{0}_L$ states can be used to store a single logical qubit in the $\ket{0}$ state, regardless of which particular $\ket{0}_L$ gauge state we are in). Hence, each of these gauge states represents a subsystem of the code, with each subsystem logically equivalent. The final group we consider is the logical group $\mathcal{L}$. This is the set of two Pauli operators which enact a logical $X$ or $Z$ gate on the encoded qubit {\em regardless} of the gauge choice, and consequently they represent true logical operations on our encoded space. In a more formal sense, the definition of these three group structures allows us to decompose the Hilbert space of the system. If we let $\mathcal{H}$ denote the Hilbert space of the physical system, $\mathcal{S}$ forms an Abelian group and hence can act as a stabilizer set denoting subspaces of $\mathcal{H}$. If we label each of these subspaces by the binary vector, $\vec{e}$, formed from the eigenvalues of the stabilizers in $\mathcal{S}$, then each subspace splits into a tensor product structure \begin{equation} \mathcal{H} = \bigoplus_{\vec{e}} \mathcal{H}_{\mathcal{T}} \otimes \mathcal{H}_{\mathcal{L}}, \end{equation} where elements of $\mathcal{T}$ act only on the subsystem $\mathcal{H}_{\mathcal{T}}$ and the operators in $\mathcal{L}$ act only on the subsystem $\mathcal{H}_{\mathcal{L}}$. Therefore, in the context of storing qubit information, a logical qubit is encoded into the two-dimensional subsystem $\mathcal{H}_{\mathcal{L}}$.
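The claimed group structure is easy to verify mechanically. The self-contained sketch below (again our own illustration) uses the standard symplectic rule that two Pauli strings commute iff they differ, with neither being the identity, on an even number of sites; it checks that sample gauge and logical operators commute with every stabilizer, that the two logical operators anti-commute with each other, and that $\mathcal{T}$ is indeed non-Abelian.
\begin{verbatim}
# Sketch: verify the commutation structure of S, T and L for C(3,3).

def commutes(p, q):
    """Pauli strings commute iff they clash on an even number of sites."""
    clashes = sum(1 for a, b in zip(p, q)
                  if a != "I" and b != "I" and a != b)
    return clashes % 2 == 0

S = ["XXXXXXIII", "IIIXXXXXX", "ZZIZZIZZI", "IZZIZZIZZ"]  # stabilizers
T = ["XIIXIIIII", "ZZIIIIIII"]  # gauge ops X_{1,1}X_{2,1}, Z_{1,1}Z_{1,2}
L = ["ZIIZIIZII", "XXXIIIIII"]  # logical Z_{*,1} and X_{1,*}

assert all(commutes(s, t) for s in S for t in T)  # T commutes with S
assert all(commutes(s, l) for s in S for l in L)  # L commutes with S
assert not commutes(L[0], L[1])   # logical X and Z anti-commute
assert not commutes(T[0], T[1])   # witness that T is non-Abelian
print("commutation structure verified")
\end{verbatim}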
As the system is already stabilized by operators in $\mathcal{S}$, and the operators in $\mathcal{T}$ act only on the space $\mathcal{H}_{\mathcal{T}}$, qubit information is only altered when operators in the group $\mathcal{L}$ act on the system. This formal definition of how subsystem coding works may be more complicated than that of the standard stabilizer codes shown earlier, but this slightly more complicated coding structure has significant benefits when we consider how error correction is performed. In general, to perform error correction, each of the stabilizers of the codespace must be checked to determine which eigenvalue changes have occurred due to errors. In the case of subsystem codes this would appear to be problematic: the stabilizer group, $\mathcal{S}$, consists of qubit operators whose weight scales with the size of the code. In our specific example, each of the $X$ and $Z$ stabilizers has weight six (and in general, for an $n_1\times n_2$ lattice, the $X$ stabilizers have weight $2n_1$ and the $Z$ stabilizers weight $2n_2$). If techniques such as Shor's method~[Section~\ref{sec:FTcircuit}] were used, we would need to prepare a large ancilla state to perform fault-tolerant correction, which would also scale linearly with the size of the code; this is clearly undesirable. However, due to the gauge structure of subsystem codes we are able to decompose the error correction procedure~\cite{AC07}. Each of the stabilizers in $\mathcal{S}$ is simply the product of certain elements from $\mathcal{T}$, for example, \begin{equation} \begin{aligned} &X_{1,1}X_{1,2}X_{1,3}X_{2,1}X_{2,2}X_{2,3} \in \mathcal{S} \\ = &( X_{1,1}X_{2,1} ) \cdot (X_{1,2}X_{2,2}) \cdot (X_{1,3}X_{2,3}) \in \mathcal{T}. \end{aligned} \end{equation} Therefore, if we check the eigenvalues of the three weight-two operators from $\mathcal{T}$, we are able to calculate the eigenvalue of the weight-six stabilizer. This decomposition of the stabilizer set for the code can only occur since the decomposition is in terms of operators from $\mathcal{T}$ which, when measured, have no effect on the logical information encoded within the system. In fact, when error correction is performed the gauge state of the system will almost always change, based on the order in which the eigenvalues of the gauge operators are checked. This exploitation of the gauge properties of subsystem coding is extremely beneficial for the fault-tolerant design of correction circuits. As the stabilizer operators can now be decomposed into multiple weight-two operators, fault-tolerant circuits for error correction do not require any encoded ancilla states. Furthermore, if we decide to scale up the code space to correct more errors (increasing the lattice size representing the code) we do not require the measurement of operators of higher weight. Fig.~\ref{fig:BScheck}, taken from Ref.~\cite{AC07}, illustrates the fault-tolerant circuit constructions for Bacon-Shor subsystem codes.
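The decomposition can also be checked numerically: for syndrome extraction only the error pattern matters, so the sketch below (our own toy example) tracks a classical $Z$-error pattern and recovers the eigenvalue of the weight-six stabilizer $X_{1,*}X_{2,*}$ as the product of the three weight-two gauge outcomes $X_{1,j}X_{2,j}$.
\begin{verbatim}
# Sketch: stabilizer eigenvalue from weight-two gauge measurements.

z_errors = {(1, 2)}   # a sample Z error on the qubit at row 1, column 2

def gauge_outcome(j):
    """Eigenvalue of X_{1,j}X_{2,j}: -1 iff an odd number of the two
    qubits (1,j) and (2,j) carry a Z error."""
    flips = sum(((i, j) in z_errors) for i in (1, 2))
    return -1 if flips % 2 else +1

outcomes = [gauge_outcome(j) for j in (1, 2, 3)]
print(outcomes, "->", outcomes[0] * outcomes[1] * outcomes[2])
# [1, -1, 1] -> -1, i.e. the weight-six stabilizer has flipped
\end{verbatim}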
\begin{figure}[ht] \begin{center} \begin{tabular}{ccc} \put(-30,10){(a)} \put(95,10){(b)} \Qcircuit @C=1ex @R=2.3ex @!R { \put(0.1,15){\footnotesize{$j,k$}} & \qw & \targ & \qw & \qw & \qw & \qw \\ & \qw \put(-12,12){\footnotesize{$j{+}1,k$}} & \qw & \targ & \qw & \qw & \qw\\ & \push{|+\rangle \hspace{0.1cm}} & \ctrl{-2} & \ctrl{-1} & \gate{H} & \meter } &\hspace{1.2cm} & \hspace{0.2cm} \Qcircuit @C=1ex @R=2.3ex @!R { \put(0.1,15){\footnotesize{$k,j$}} & \qw & \ctrl{+2} & \qw & \qw & \qw \\ & \qw \put(-12,12){\footnotesize{$k,j{+}1$}} & \qw & \ctrl{+1} & \qw & \qw \\ & \push{|0\rangle \hspace{0.1cm}} & \targ & \targ & \meter & } \vspace{-0.2cm} \end{tabular} \end{center} \caption{\label{fig:BScheck} (From Ref.~\cite{AC07}) Circuits for measuring the gauge operators and hence performing error correction for subsystem codes. Fig. a. measures, fault-tolerantly, the operator $X_{j,k}X_{j+1,k}$ with only one ancilla. Fig. b. measures $Z_{k,j}Z_{k,j+1}$. The results of these two-qubit parity checks can be used to calculate the parity of the higher-weight stabilizer operators of the code. } \end{figure} As each ancilla qubit is only coupled to two data qubits, no further circuit constructions are required to ensure fault-tolerance. The classical results from these two-qubit parity checks are then combined to calculate the parity of the higher-weight stabilizers of the subsystem code. A second benefit of utilizing subsystem codes is the ability to construct fault-tolerant circuits to perform dynamical code switching. When using more traditional error correction codes it is difficult, if not impossible, to fault-tolerantly switch between codes with different error correcting properties. The Steane [[7,1,3]] code is a single error correcting code for both $X$ and $Z$ channels. If, during the operation of a quantum computer, the user wished to increase the error correcting power of their code to two errors in the $X$ and $Z$ channels, they would first have to decode the quantum data and then re-encode with the higher distance code. This is clearly a non-fault-tolerant procedure, as any error occurring on the decoded information will cause catastrophic failure. Due to the general lattice structure of subsystem codes, switching to and from higher distance codes can be achieved without decoding and re-encoding the information, allowing the user of the computer to dynamically adjust the error correction during the computation. Figs.~\ref{figure:switch1},~\ref{figure:switch2} and~\ref{figure:switch3}, taken from Ref.~\cite{SEDH07}, illustrate circuits to perform fault-tolerant switching between the $\mathcal{C}$(3,3) and $\mathcal{C}$(5,3) subsystem codes. As noted before, the $\mathcal{C}$(3,3) code is a single $X$, single $Z$ error correcting code, while the $\mathcal{C}$(5,3) code is a single $X$, two $Z$ error correcting code. We will not detail why these circuits successfully implement fault-tolerant code switching; instead we encourage readers to refer to Ref.~\cite{SEDH07} for further details. \begin{figure}[ht!]
\[ \Qcircuit @C=0.4em @R=0.36em @!R { & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\ & \qw & \qw & \qw & \qw & \targ & \qw & \qw \\ & \qw & \qw & \targ & \qw & \qw & \qw & \qw \\ \push{\vert {0} \rangle \hspace{0.1cm}} & \qw & \qw & \qw & \targ & \qw & \qw & \qw \\ \push{\vert {0} \rangle \hspace{0.1cm}} & \qw & \targ & \qw & \qw & \qw & \qw & \qw \\ \push{\vert {0} \rangle \hspace{0.1cm}} & \gate{H} & \qw & \qw & \ctrl{-2} & \ctrl{-4} & \gate{H} & \meter{} & & \\ \push{\vert {0} \rangle \hspace{0.1cm}} & \gate{H} & \ctrl{-2} & \ctrl{-4} & \qw & \qw & \gate{H} & \meter{} & & \\ \put(-0.4,244.5){\footnotesize{$1,j$}} \put(-0.4,228.0){\footnotesize{$2,j$}} \put(-0.4,211.5){\footnotesize{$3,j$}} \put(87,244.5){\footnotesize{$1,j$}} \put(87,228.0){\footnotesize{$2,j$}} \put(87,211.5){\footnotesize{$3,j$}} \put(87,195.0){\footnotesize{$4,j$}} \put(87,178.0){\footnotesize{$5,j$}} }\] \vspace{-17pt} \caption{(From Ref.~\cite{SEDH07}). Circuit to convert from the $\mathcal{C}$($3$,$3$) subsystem code to the $\mathcal{C}$($5$,$3$) code for one column, $j$, of the lattice structure of the code.} \label{figure:switch1} \end{figure} \begin{figure}[ht!] \[ \Qcircuit @C=0.4em @R=0.36em @!R { & \qw & \qw & / \qw & \qw & \qw & \qw & \qw & \qw & \gate{X} & \gate{X} & \gate{X^{\otimes3}} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\ & \qw & \qw & / \qw & \qw & \qw & \qw & \qw & \qw & \qw \cwx & \qw \cwx & \qw \cwx & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\ & \qw & \qw & / \qw & \qw & \qw & \qw & \qw & \qw & \qw \cwx & \qw \cwx & \qw \cwx & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\ & \qw & \qw & / \qw & \qw & \qw & \qw& \qw & \gate{\mathcal{P}} & \qw \cwx & \gate{X} \cwx & \meter{} \cwx & \\ & \qw & \qw & / \qw & \qw & \qw & \qw& \gate{\mathcal{P}} & \qw & \gate{X} \cwx & \qw \cwx & \meter{} \cwx[-1] & \\ \push{\vert {0^{\otimes3}} \rangle \hspace{0.1cm}} & \qw & \qw & / \qw & \qw & \qw & \qw & \qw & \gate{}\qwx[-2] & \qw \cwx & \meter{} \cwx[-1] & \\ \push{\vert {0^{\otimes3}} \rangle \hspace{0.1cm}} & \qw & \qw & / \qw & \qw & \qw & \qw & \gate{} \qwx[-2] & \qw & \meter{} \cwx[-1] & & \\ \put(-0.4,247.5){\footnotesize{$i=1$}} \put(-0.4,231.0){\footnotesize{$i=2$}} \put(-0.4,214.5){\footnotesize{$i=3$}} \put(-0.4,198.0){\footnotesize{$i=4$}} \put(-0.4,181.0){\footnotesize{$i=5$}} \put(154,247.5){\footnotesize{$i=1$}} \put(154,231.0){\footnotesize{$i=2$}} \put(154,214.5){\footnotesize{$i=3$}} }\] \vspace{-17pt} \caption{(From Ref.~\cite{SEDH07}). Downconversion from the $\mathcal{C}$($5$,$3$) code to the $\mathcal{C}$($3$,$3$) code. $\mathcal{P}$ is the gate sequence in Fig.~\ref{figure:switch3}.} \label{figure:switch2} \end{figure} \begin{figure}[ht!]
\[ \Qcircuit @C=0.4em @R=0.36em @!R { & \qw & \qw & \qw & \qw & \qw & \ctrl{4} & \ctrl{3} & \qw & \qw & \\ & \qw & \qw & \qw & \ctrl{4} & \ctrl{3} & \qw & \qw & \qw & \qw & \\ & \qw & \qw & \ctrl{3} & \qw & \qw & \qw & \qw & \ctrl{1} & \qw & \\ \push{\vert 0 \rangle \hspace{0.1cm}} & \qw & \qw & \qw & \qw & \qw & \qw & \targ & \targ & \meter{} \\ \push{\vert 0 \rangle \hspace{0.1cm}} & \qw & \qw & \qw & \qw & \targ & \targ & \qw & \qw & \meter{} \\ \push{\vert 0 \rangle \hspace{0.1cm}} & \qw & \qw & \targ & \targ & \qw & \qw & \qw & \qw & \meter{} \\ \put(-0.4,211.5){\footnotesize{$i,1$}} \put(-0.4,195.0){\footnotesize{$i,2$}} \put(-0.4,178.5){\footnotesize{$i,3$}} }\] \vspace{-17pt} \caption{(From Ref.~\cite{SEDH07}) $X$ parity measurement under $\mathcal{C}$($5$,$3$) for one row, $i$, of the lattice structure.} \label{figure:switch3} \end{figure} \subsection{Topological Codes} A similar coding technique to the Bacon-Shor subsystem codes is the idea of topological error correction, first introduced with the toric code of Kitaev in 1997~\cite{K97}. Topological coding is similar to subsystem coding in that the code structure is defined on a lattice (which, in general, can be of dimension $> 2$) and the scaling of the code to correct more errors is conceptually straightforward. However, in topological coding schemes the protection afforded to logical information relies on the unlikely application of error chains which define non-trivial topological paths over the code surface. Topological error correction is a complicated area of quantum error correction and fault-tolerance, and any attempt to fairly summarize the field is not possible within this review. In brief, there are two ways of approaching the problem. The first is simply to treat topological codes as a class of stabilizer codes over a qubit system. This approach is more amenable to current information technologies and is being adapted to methods in cluster state computing~\cite{RHG07,FG08}, optics~\cite{DFSG08,DMN08}, ion-traps~\cite{SJ08} and superconducting systems~\cite{IFIITB02}. The second approach is to construct a physical Hamiltonian model based on the structure of the topological code. This leads to the more complicated field of anyonic quantum computation~\cite{K97}. By translating a coding structure into a physical Hamiltonian system, excitations from the ground state of this Hamiltonian exhibit natural robustness against local errors (since the symmetries of the physical Hamiltonian reflect the coding structure imposed). Specifically, quasi-particles arising from a Hamiltonian approach to quantum codes exhibit fractional quantum statistics (they acquire fractional phase shifts when their positions are exchanged twice with other anyons, in contrast to bosons or fermions, which always acquire $\pm 1$ phase shifts). The unique properties of anyons therefore allow for natural, robust error protection, and anyon/anyon interactions are performed by rotating anyons around each other. However, the major issue with this model is that it relies on quasi-particle excitations that do not, in general, arise naturally. Although certain physical systems have been shown to exhibit anyonic excitations, most notably in the fractional quantum Hall effect~\cite{NSSFS08}, the ability to first manufacture a reliable anyonic system, and then to reliably design and construct a large scale computing system based on anyons, is a daunting task.
As there are several extremely good discussions of both anyonic~\cite{NSSFS08} and non-anyonic topological computing~\cite{DKLP02,FSG08,FG08}, we will not review any of the anyonic methods for topological computing and will simply provide a brief example of one topological coding scheme, namely the surface code~\cite{BK01, DKLP02,FSG08}. The surface code is an extremely good error correction model for several reasons. As it is defined over a 2-dimensional lattice of qubits, it can be implemented on architectures that only allow for the coupling of nearest-neighbor qubits (rather than requiring the arbitrary long distance coupling of qubits in separate regions of the computer). The surface code also exhibits one of the highest fault-tolerant thresholds of any quantum error correction scheme; recent simulations estimate a threshold approaching 1\%~\cite{RHG07}. Finally, the surface code naturally corrects problematic error channels such as qubit loss and leakage. The surface code, as with subsystem codes, is a stabilizer code defined over a 2-dimensional qubit lattice; Fig.~\ref{fig:surface1} illustrates the general structure. We identify each edge of the 2D lattice with a physical qubit. The stabilizer set consists of two types of operators: the first is the set of $Z^{\otimes 4}$ operators which circle every lattice face (or plaquette); the second is the set of $X^{\otimes 4}$ operators which encircle every vertex of the lattice. The stabilizer set is consequently generated by the operators, \begin{equation} A_p = \bigotimes_{j \in b(p)} Z_j, \quad B_v = \bigotimes_{j \in s(v)} X_j, \end{equation} where $b(p)$ denotes the four qubits surrounding a plaquette, $s(v)$ denotes the four qubits surrounding each vertex of the lattice, and identity operators on the remaining qubits are implied. First note that all of these operators commute, as any plaquette and vertex stabilizer will share either zero or two qubits. If the lattice is not periodic in either dimension, this stabilizer set completely specifies one unique state; i.e. for an $N\times N$ lattice there are $2N^2$ qubits and $2N^2$ stabilizer generators. Hence this stabilizer set defines a unique multi-qubit entangled state, which is generally referred to as a ``clean" surface. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.4\textwidth]{surface1.pdf} \caption{General structure of the surface code. The edges of the lattice correspond to physical qubits. The four qubits surrounding each face (or plaquette) lie in $+1$ eigenstates of the operators $A_p$, while the four qubits surrounding each vertex lie in $+1$ eigenstates of the operators $B_v$. If all eigenstate conditions are met, a unique multi-qubit state is defined, known as a ``clean" surface.} \label{fig:surface1} \end{center} \end{figure} \begin{figure*}[ht!] \begin{center} \includegraphics[width=0.8\textwidth]{surface2.pdf} \caption{The surface code embeds two self-similar lattices that are interlaced, generally referred to as the primal and dual lattices. Fig. a. illustrates one lattice, where plaquettes are defined with the stabilizers $A_p$. Fig. b. illustrates the dual structure, where plaquettes are now defined by the stabilizer set $B_v$. The two lattice structures are interlaced and are related by shifting along the diagonal by half a lattice cell. Each of these equivalent lattices is independently responsible for $X$ and $Z$ error correction, respectively.} \label{fig:surface2} \end{center} \end{figure*} \begin{figure*}[ht!]
\begin{center} \includegraphics[width=\textwidth]{surface3.pdf} \caption{Examples of error chains and their effect on the eigenvalues of each plaquette stabilizer. a). A single $X$ error causes the parity of the two adjacent cells to flip. b) and c). Longer chains of errors only cause the end cells to flip eigenvalue, as each intermediate cell will have two $X$ errors and hence the eigenvalue of the stabilizer will flip twice.} \label{fig:surface3} \end{center} \end{figure*} Detailing exactly how this surface can be utilized to perform robust quantum computation is far outside the scope of this review, and we refer the reader to several papers where such a discussion can be found~\cite{RH07,RHG07,FSG08,FG08}. Instead, we can quite adequately show how robust error correction is possible by simply examining how a ``clean" surface can be maintained in the presence of errors. The $X$ and $Z$ stabilizer sets, $A_p$ and $B_v$, define two equivalent 2D lattices which are interlaced, as Fig.~\ref{fig:surface2} illustrates. If the total 2D lattice is shifted along the diagonal by half a cell, then the operators $B_v$ are arranged around a plaquette and the operators $A_p$ are arranged around a lattice vertex. Since protection against $X$ errors is achieved by detecting eigenvalue flips of the $Z$ stabilizers, and vice versa, these two interlaced lattices correspond to error correction against $X$ and $Z$ errors respectively. Therefore we can quite happily restrict our discussion to one possible error channel, for example correcting $X$ errors (since the correction of $Z$ errors proceeds identically when considering the stabilizers $B_v$ instead of $A_p$). Fig.~\ref{fig:surface3}a illustrates the effect that a single $X$ error has on a pair of adjacent plaquettes. Since $X$ and $Z$ anti-commute, a single bit-flip error on one qubit in the surface will flip the eigenvalue of the $Z^{\otimes 4}$ stabilizers on the two plaquettes adjacent to the respective qubit. As single qubit errors act to flip the eigenvalues of adjacent plaquette stabilizers, we examine how chains of errors affect the surface. Figs.~\ref{fig:surface3}b and \ref{fig:surface3}c examine two longer chains of errors. As can be seen, if multiple errors occur, only the eigenvalues of the stabilizers associated with the ends of the error chains flip. Each plaquette along the chain will always have two $X$ errors occurring on different boundaries, and consequently the eigenvalue of the $Z^{\otimes 4}$ stabilizer around these plaquettes will flip twice. If we now consider an additional ancilla qubit which sits in the center of each plaquette and can couple to the four surrounding qubits, we can check the parity by running the simple parity circuit shown in Fig.~\ref{fig:surface4}. If we assume that we initially prepare a perfect ``clean" surface, we can then, at some later time, check the parity of every plaquette over the surface. \begin{figure}[bt] \begin{center} \includegraphics[width=0.3\textwidth]{surface4.pdf} \caption{a). Lattice structure used to check the parity of a surface plaquette. An additional ancilla qubit is coupled to the four neighboring qubits that comprise each plaquette. b). Quantum circuit to check the parity of the $Z^{\otimes 4}$ stabilizer for each surface plaquette.} \label{fig:surface4} \end{center} \end{figure} If $X$ errors have occurred on a certain subset of qubits, the parity associated with the endpoints of error chains will have flipped.
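The endpoint behavior just described is purely a parity statement, so it can be illustrated with a one-dimensional cross-section of the lattice. In the toy sketch below (ours, not from the original text) qubits sit on the edges between neighboring plaquettes of a single row; each $X$ error flips the two adjacent plaquette stabilizers, and only the endpoints of the chain remain flipped.
\begin{verbatim}
# Sketch: syndromes of a 1D cross-section under an X error chain.

n_plaquettes = 6
syndrome = [+1] * n_plaquettes

# X errors on the edges between plaquettes 1-2, 2-3 and 3-4.
error_chain = [(1, 2), (2, 3), (3, 4)]

for k, kp1 in error_chain:
    syndrome[k] *= -1    # each error flips both adjacent plaquettes
    syndrome[kp1] *= -1

print(syndrome)
# [1, -1, 1, 1, -1, 1]: only the endpoints (plaquettes 1 and 4) flip
\end{verbatim}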
We now take this 2-dimensional {\em classical} data set of eigenvalue flips and pair the flipped plaquettes up into the most likely set of error chains. Since it is assumed that the probability of error on any individual qubit is low, the most likely set of errors which reflects the eigenvalue changes observed is the minimum weight set (i.e. connect up all plaquettes where eigenvalues have changed into pairs such that the total length of all connections is minimized). This kind of classical data processing is quite common in computer science, and minimum weight matching algorithms such as the Blossom package~\cite{CR99,K08} have a running time polynomial in the total number of data points in the classical set. Once this minimal matching is achieved, we can identify the likely error chains corresponding to the end points, and correction can be applied accordingly. The failure of this code is therefore dictated by error chains that cannot be detected through changes in plaquette eigenvalues. In Fig.~\ref{fig:surface5} we consider an error chain that connects one edge of the surface lattice to another. In this case every plaquette has two associated qubits that have experienced a bit flip, and no eigenvalues in the surface have changed. Since we have assumed that we only wish to maintain a ``clean" surface, these error chains have no effect; but when one considers the case of storing information in the lattice, these types of error chains correspond to logical errors on the qubit~\cite{FSG08}. Hence undetectable errors are chains which connect boundaries of the surface to other boundaries (in the case of information processing, qubits are defined by artificial boundaries within the larger lattice surface). \begin{figure}[ht!] \begin{center} \includegraphics[width=0.45\textwidth]{surface5.pdf} \caption{Example of a chain of errors which does not cause any eigenvalue changes in the surface. If errors connect boundaries to other boundaries, the error correction protocol will not detect them. In the case of a ``clean" surface, these error chains are invariants of the surface code. When computation is considered, qubit information is stored via artificial boundaries within the surface. Hence if error chains connect these information qubits to other boundaries, logical errors occur.} \label{fig:surface5} \end{center} \end{figure} It should be stressed that this is a simplified description of the full protocol, but it does encapsulate the basic idea. The important thing to realize is that the failure rate of the error correction procedure is suppressed exponentially with the size of the lattice. In order for a series of single qubit errors to be undetectable, they must form a chain connecting one boundary in the surface with another. If we consider an error model where each qubit experiences a bit flip, independently, with probability $p$, then an error chain of length one occurs with probability $p$, error chains of weight two occur with probability $O(p^2)$, chains of three with $O(p^3)$, etc. If we have an $N \times N$ lattice and we extend the surface by {\em one} plaquette in each dimension, then the probability of having an error chain connecting two boundaries will drop by a factor of $p^2$ (two extra qubits have to experience an error, one at each boundary). Extending an $N\times N$ lattice by one plaquette in each dimension requires $O(N)$ extra qubits; hence this type of error correcting code suppresses the probability of having undetectable errors exponentially, with a qubit resource cost which grows only linearly.
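The pairing step itself is an ordinary classical optimization. As a minimal sketch (our own; production decoders use polynomial-time algorithms such as Blossom rather than this exhaustive search), the following code pairs up flipped plaquette coordinates so that the total Manhattan length of all connections is minimized, exactly the objective described above.
\begin{verbatim}
# Sketch: brute-force minimum weight perfect matching of flipped
# plaquettes (an even number of them is assumed).

from itertools import permutations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def min_weight_matching(flipped):
    best, best_cost = None, float("inf")
    first, rest = flipped[0], flipped[1:]
    for perm in permutations(rest):
        pairs = [(first, perm[0])] + [
            (perm[i], perm[i + 1]) for i in range(1, len(perm) - 1, 2)]
        cost = sum(manhattan(a, b) for a, b in pairs)
        if cost < best_cost:
            best, best_cost = pairs, cost
    return best, best_cost

flipped = [(0, 0), (0, 2), (5, 5), (5, 7)]
print(min_weight_matching(flipped))
# ([((0, 0), (0, 2)), ((5, 5), (5, 7))], 4): two short error chains
\end{verbatim}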
As we showed in Section~\ref{sec:Fault-tolerance}, standard concatenated coding techniques allow for an error rate suppression which scales with the concatenation level as a double exponential, while the resource increase scales exponentially. For the surface code, the error rate suppression scales exponentially while the resource increase scales linearly. While these scaling relations might be mathematically equivalent, the surface code offers much more flexibility at the architectural level. Being able to increase the error protection in the computer with only a linear change in the number of physical qubits is far more beneficial than the exponential increase in resources required by concatenated correction. Specifically, consider the case where an error protected computer is operating at a logical error rate which is just above what is required for an algorithm. If concatenated error correction is employed, then adding another layer of correction will not only increase the number of qubits by an exponential amount, but will also drop the effective logical error rate far below what is actually required. In contrast, if surface codes are employed, we increase the qubit resources by a linear factor and drop the logical error rate just sufficiently for successful application of the algorithm. We now leave the discussion regarding topological correction models. We emphasize again that this was a {\em very} broad overview of the general concept of topological codes. There are many details and subtleties that we have deliberately left out of this discussion, and we urge the interested reader to refer to the referenced articles for a much more thorough treatment of this topic. \section{Conclusions and future outlook} This review has hopefully provided a basic introduction to some of the most important theoretical aspects of quantum error correction and fault-tolerant quantum computation. The ultimate goal of this discussion was not to provide a rigorous theoretical framework for QEC and fault-tolerance; instead we have attempted to illustrate most of the important rules, results and techniques that have evolved out of this field. We have not only covered the basic aspects of QEC through specific examples, but also briefly discussed how physical errors influence quantum computation and how these processes are interpreted within the context of QEC. One of the more important aspects of this review is the discussion related to the stabilizer formalism, circuit synthesis and fault-tolerant circuit construction. The stabilizer formalism is arguably the most useful theoretical tool in QEC as, once it is sufficiently understood, most of the important properties of error correcting codes can be investigated and understood largely by inspection. The study of quantum error correction and fault-tolerance is still an active area of QIP research. Although the library of quantum codes and error correction techniques is vast, there is still a significant disconnect between the abstract framework of quantum coding and the more physically realistic implementation of error correction for large scale quantum information. There are several future possibilities for the direction of quantum information processing. Even with the development of many of these advanced techniques, the physical construction and accuracy of current qubit fabrication is still insufficient to obtain any benefit from QEC.
Many in the field now acknowledge that the future development of quantum computation will most likely split into two broad categories. The first is arguably the more physically realistic, namely few-qubit applications in quantum simulation. Quantum simulation, i.e. using quantum systems to efficiently simulate other quantum systems, was proposed by Richard Feynman in the early 1980s~\cite{F81} and was one of the primary motivations for the development of the field. In the ideal case, it is argued that having access to a quantum computer with on the order of 10--100 physical qubits could allow for the simulation of physical systems large enough to be impractical for current classical computers. If we limit our quantum array to the 100 qubit level, then even implementing active error correction techniques would not be desirable. Instead, higher quality fabrication and control, as well as techniques in error avoidance (which require far fewer resources than error correction), would be used to lower effective error rates below what is required to run few-qubit applications. Beyond few-qubit quantum simulation we move to truly large scale quantum computation, i.e. implementing large algorithms such as Shor's algorithm on qubit arrays well beyond 1000 physical qubits. This would undoubtedly require active techniques in error correction. Future work needs to focus on adapting the many codes and fault-tolerant techniques to the architectural level. As we noted in Section~\ref{sec:threshold}, the implementation of QEC at the design level largely influences the fault-tolerant threshold exhibited by the code itself. Being able to efficiently incorporate both the actual quantum code and the error correction procedures at the physical level is extremely important when developing an experimentally viable, large scale quantum computer. There are many differing opinions within the quantum computing community as to the future prospects for quantum information processing. Many remain pessimistic regarding the development of a million qubit device and instead look towards quantum simulation in the absence of active error correction as the realistic goal of quantum information. However, in the past few years, the theoretical advances in error correction and the fantastic speed of the experimental development of few-qubit devices continue to offer hope for the near-term construction of a large scale device, incorporating many of the ideas presented within this review. While we could never foresee the possible successes or failures in quantum information science, we remain hopeful that a large scale quantum computer is still a goal worth pursuing. \section{Acknowledgments} The authors wish to thank A.~M. Stephens, R.~Van Meter, A.~G. Fowler, L.~C.~L. Hollenberg and A.~D. Greentree for helpful comments and acknowledge the support of MEXT, JST, and the EU project QAP. \bibliographystyle{alpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \noindent With a seminal paper in 2008, Klyachko, Can, Binicioglu, and Shumovsky (KCBS) showed how an indivisible (single-particle, unentangled) spin-1 system could be used to experimentally demonstrate a contradiction with a fundamental assumption of the classical world-view, namely non-contextual realism, via an inequality \cite{pent}. Two recent experiments have shown such a violation \cite{exp1,exp2}, proving quantum contextuality in these systems.\\ \noindent The question of \emph{how much} contextuality is required to violate this inequality now naturally arises. As such it is useful to first reiterate briefly what exactly contextuality means. An outcome of a particular measurement may be considered to be contextual if it depends on a separate outcome and/or measurement. For example, take the canonical case of an entangled pair of spin-1/2 particles. The assumption of local realism states that, once the measurement events are space-like separated, a measurement performed on the first particle can not affect the outcome of a measurement on the second particle. This is a form of non-contextuality, specifically \--- that the second measurement ($b$) and outcome ($B$) have no contextual relationship with the measurement ($a$) and outcome ($A$) of the first. Thus, locality is a subclass of non-contextuality, reinforced by a physical motivation. The assumption of non-contextual realism leads to the famous Bell \cite{bell} and CHSH \cite{chsh} inequalities.\\ \noindent Apart from the assumption of locality there are other forms of non-contextuality. For the case of a spin-1 particle, non-contextuality may be defined in terms of which observables are co-measurable (i.e. which observables commute). That is, if $[\hat{A},\hat{B}]=0$, $[\hat{A},\hat{C}]=0$, but $[\hat{B},\hat{C}]\neq 0$, then we say that a measurement of $\hat{A}$ is co-measurable with both $\hat{B}$ and $\hat{C}$ (though $\hat{B}$ and $\hat{C}$ are not co-measurable with each other). Thus, measurements of $\hat{A}$ are non-contextual with respect to $\hat{B}$ (or $\hat{C}$). For a more in-depth discussion see, for example, Ref.~\cite{peres}.\\ \noindent Experimentally, a controllable spin-1 system may be implemented, for example, by splitting a photon between three spatial modes. The set of commuting operators then simply becomes heralded detections on these modes. For more details see Ref.~\cite{exp1}.\\ \noindent However, it could be argued that this assumption of non-contextuality via co-measurability is philosophically weaker than the assumption of locality, since the locality assumption has a very strong physical basis. Thus there is some motivation for either finding a more physically robust assumption on which to base the construction of the inequality, or manipulating the physical implementation of the spin-1 system such that the assumption of non-contextuality is transmuted into an assumption of locality. The latter is possible by space-like separating the measurement events on the photonic modes, but this is not the subject of this paper.\\ \noindent For an entangled pair of spin-1/2 particles it is possible to construct an explicit non-local hidden variable model which replicates the quantum bound of the CHSH inequality. It is also possible to show that an inequality obeyed by this specific model (as well as all others of the same class) is in contradiction with quantum physics.
The derivation of this inequality \--- the Leggett inequality \--- proceeds by requiring that the individual photons obey the spin projection rule on the marginal probabilities, which for a photonic spin-1/2 system is the well-known Malus law for polarization \cite{leg}. Alternatively, the derivation may proceed by assuming that marginal probabilities may not take negative values \cite{bra}.\\ \noindent It was the original goal of the research reported here to find a similar inequality for the photonic spin-1 system, where the assumption of non-contextuality is replaced with an assumption with a clear physical motivation: that of obedience of the spin-projection rules, a physically provable property. Instead, what was found was that such a procedure cannot succeed for the pentagram inequality, demonstrating interesting differences between the spin-1 and entangled spin-1/2 systems. This, however, does not rule out the possibility that differently structured inequalities (perhaps with more measurement contexts \--- though more measurement \emph{directions} will not be useful, as we shall see) could discriminate between semi-contextual realism and quantum mechanics.\\ \noindent In the following section we review the derivation of the pentagram inequality, with emphasis on the photonic implementation. In section 3 we derive a contextual inequality (restricted, as in the Leggett inequality, to following the spin-projection rules on the marginal probabilities) for a spin-1 system which replicates the quantum bound (i.e. is not violated by QM) \--- in contrast to the Leggett inequality, which \emph{is} violated by quantum mechanics. In section 4 we derive an explicit hidden variable model which exactly replicates quantum physics for the system in question. In section 5 we review our results and offer some potential explanations for the behavior of this model. \section{The Pentagram Inequality} \noindent Consider a single spin-1 particle. The operators representing spin-squared measurements along three orthogonal real directions (e.g. $\hat{S}_{x}^{2}$, $\hat{S}_{y}^{2}$, $\hat{S}_{z}^{2}$) commute and are thus co-measurable. These operators act on states which may be represented as vectors in $\mathbb{C}^{3}$. Similarly, we may take three co-measurable projection operators acting on a single photon split between three spatially separate optical modes (e.g. $|x\rangle\langle x|$, $|y\rangle\langle y|$, $|z\rangle\langle z|$). In the latter case we may also picture the operators as measurements along real directions. \\ \noindent Now take operators of the form $\hat{a}_{j}\equiv2|j\rangle\langle j|-1$, where $j$ is some direction in real space. We have $[\hat{a}_{j}, \hat{a}_{k}]=0$ if $j$ and $k$ are orthogonal directions; thus the $a$'s are co-measurable for orthogonal directions. A single measurement of $\hat{a}_{j}$ will yield $+1$ or $-1$, depending on whether there is, or is not, a photon in the optical mode represented by the direction $j$. Non-contextually, we could make a series of five measurements \begin{eqnarray} a_{1}a_{2}+a_{2}a_{3}+a_{3}a_{4}+a_{4}a_{5}+a_{5}a_{1}, \end{eqnarray} \noindent represented by pairwise orthogonal directions (i.e. 1 is orthogonal to 2 and 5, etc.), which can be visualized as a pentagram; see Fig.~\ref{pent1}. \begin{center} \begin{figure}[h]\centering \includegraphics[scale=0.5]{pentagram.eps} \caption{\label{pent1}(Color online) Five pairwise-orthogonal directions visualized as a pentagram.
Each direction is orthogonal to the two directions connected to it by the pentagram. The directions themselves are unit vectors labeled by letters (where $x$, $y$ and $z$ are defined as the standard basis vectors in $\mathbb{R}^3$), and by numbers such that sequentially numbered directions are orthogonal (modulo 5).} \end{figure} \end{center} \noindent If we consider the realist world view to be correct, we must consistently assign values to all these potential measurements. If we try to minimize this function we discover that there is a limit given by \begin{eqnarray} a_{1}a_{2}+a_{2}a_{3}+a_{3}a_{4}+a_{4}a_{5}+a_{5}a_{1}\geq -3. \end{eqnarray} \noindent To see why this must be the case, first assign values $a_{1}=+1$ and $a_{2}=-1$, minimizing the first term. Non-contextual realism then requires we also make the assignment $a_{2}=-1$ in the second term, so to minimize the second term we make the choice $a_{3}=+1$, and so on. Proceeding this way we discover that we are required to have at least one term equal to $+1$ (the cycle has odd length), meaning that the series of measurements cannot yield a result below negative three; likewise for the statistical averages \begin{eqnarray} \overline{a_{1}a_{2}}+\overline{a_{2}a_{3}}+\overline{a_{3}a_{4}}+\overline{a_{4}a_{5}}+\overline{a_{5}a_{1}}\geq -3. \end{eqnarray} \noindent To make an observation violating this inequality would be to exclude non-contextual realism as a valid world-view. Quantum mechanically this expression is state dependent, but if we choose a ``symmetric state" (that is, a state represented as a vector aligned with the symmetry axis of the five directions) we find that, indeed, this inequality is violated: \begin{eqnarray} \langle a_{1}a_{2}\rangle+\langle a_{2}a_{3}\rangle+\langle a_{3}a_{4}\rangle+\langle a_{4}a_{5}\rangle+\langle a_{5}a_{1}\rangle \simeq -3.944, \end{eqnarray} \noindent where the triangle brackets represent quantum mechanical expectation values, in contrast with the over-bars, which will be used to represent statistical averages. \section{A Contextual Pentagram Inequality} \noindent Non-contextuality states that, if we perform a specific measurement along with a second, co-measurable (commuting) measurement, then the second measurement cannot affect the first. With regard to the pentagram inequality, non-contextuality demands that we must assign the same value to $a_{x}$ in both $a_{x}a_{y}$ and $a_{x}a_{w}$. However, it could be argued that though classical mechanics is non-contextual, some hypothetical hidden variable model is not. Thus, if we relax the constraint of non-contextuality we must add a new label to each measurement, of the form $a_{xy}a_{yx}$, such that the first index labels the measurement being performed and the second index labels the context. Now the hidden variable model may assign values to all the elements of the series of measurements completely arbitrarily. The newly contextualized pentagram inequality becomes \begin{eqnarray} \overline{a_{12}a_{21}}+\overline{a_{23}a_{32}}+\overline{a_{34}a_{43}}+\overline{a_{45}a_{54}}+\overline{a_{51}a_{15}}\geq -5, \end{eqnarray} \noindent which reaches below even the quantum limit. \\ \noindent However, in place of the restraint of non-contextuality we may add the requirement that the quantum mechanical spin-projection laws be obeyed.
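\noindent Both classical bounds are easy to confirm by exhaustive enumeration. The following short Python sketch \--- our own sanity check, not part of the derivation \--- minimizes the non-contextual sum over all $2^{5}$ assignments and shows that the fully contextual model trivially reaches $-5$:
\begin{verbatim}
# Sketch: brute-force verification of the -3 and -5 classical bounds.

from itertools import product

# Non-contextual: one value a_j = +/-1 per direction, cyclic sum.
nc = min(sum(a[j] * a[(j + 1) % 5] for j in range(5))
         for a in product([-1, 1], repeat=5))
print(nc)    # -3

# Contextual: every (measurement, context) pair gets its own +/-1
# value, so each of the five terms can be made -1 independently.
ctx = sum(min(x * y for x in (-1, 1) for y in (-1, 1)) for _ in range(5))
print(ctx)   # -5
\end{verbatim}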
Imposing this requirement is done in close analogy with the Leggett inequality, which deals with an entangled pair of spin-1/2 particles (photons in the polarization degrees of freedom) and relaxes locality (a form of contextuality), but requires that Malus' law be obeyed by the individual particles. In the single-photon, three-rail analog of a spin-1 particle the rule that must be enforced is: the marginal probability for detection of a photon in a particular spatial mode \--- that is, the probability that the photon be found in that mode regardless of other conditions \--- must obey the quantum mechanical projection rule. Physically this could be seen as the result of photons obeying the proper beam-splitter operations \--- something experimentally testable and understood classically. Mathematically, this marginal probability is $|\langle\psi|j\rangle|^{2}$, where $|\psi\rangle$ is the state vector and $|j\rangle$ is the optical mode being measured, visualized as a direction in $\mathbb{R}^{3}$. Though we use the language of QM, this could be formulated classically. This assumption, in a sense, is stronger than non-contextuality, since it is an assumption on the ``back end" as opposed to an assumption on the ``front end". That is, the constraint is one that deals with experimental results, and involves no assumptions about the fundamental nature of the theory. \\ \noindent We now derive an inequality which uses the spin projection assumption, but not the non-contextuality assumption. First we quickly derive a rule we will need. It involves the unused ``$z$" modes. Specifically, we require that if a photon is not found to be in either of the two observed modes, then it is in the unobserved mode. Or, mathematically, $P(^{-}_{j+1},^{-}_{j})=P(^{-}_{j+1})P(^{-}_{j}|^{-}_{j+1})=P(^{+}_{``z"})$, where $P(^{-}_{j+1})$ is the probability that a measurement on the $j+1^{\mathrm{th}}$ optical mode will yield a negative result (that is, the mode will not contain a photon); likewise a $+$ represents a mode that does contain a photon. We make use of the standard Bayesian notation for conditional probabilities. In order to ``break" the original Klyachko inequality, all that was needed was \emph{setting} dependence; however, the expression we have written is \emph{outcome} dependent. The ``$z$" is in quotes because it stands for \emph{whichever} direction is mutually orthogonal to the two measurement directions in question. For symmetric states $P(^{+}_{``z"})$ is the same for each orthogonal pair (as the angle between all five vectors and the symmetric vector is the same); we will denote this number by the real constant $q$. Thus, we obtain \begin{eqnarray} P(^{-}_{j}|^{-}_{j+1})=\frac{P(^{+}_{``z"})}{P(^{-}_{j+1})}=\frac{q}{1-c}.\label{r2} \end{eqnarray} \noindent This result utilizes the fact that the chance that a photon will be found in any particular mode is $c\equiv|\langle\psi|j\rangle|^{2}$ for symmetric states, and thus the chance that it will not be in that mode is $P(_{j}^{-})=1-c$ for all $j$'s.\\ \noindent Now we can begin the derivation. Start with the contextualized series of five measurements, written as a sum \begin{eqnarray} \overline{a_{12}a_{21}}+\overline{a_{23}a_{32}}+\overline{a_{34}a_{43}}+\overline{a_{45}a_{54}}+\overline{a_{51}a_{15}}\nonumber\\ =\sum_{j=1}^{5}\overline{a_{j,j+1}a_{j+1,j}}, \end{eqnarray} \noindent where again the sum is modulo five.
Using the standard inequality for two-outcome measurements, $\overline{AB}\geq|\overline{A}+\overline{B}|-1$, we have \begin{eqnarray} \sum_{j=1}^{5}\overline{a_{j,j+1}a_{j+1,j}}\geq\sum_{j=1}^{5}\left|\overline{a_{j,j+1}}+\overline{a_{j+1,j}}\right|-5. \end{eqnarray} \noindent Now, using a few successive applications of the triangle inequality $|a|+|b|\geq|a+b|$, \begin{eqnarray} \sum_{j=1}^{5}\overline{a_{j,j+1}a_{j+1,j}}\geq\left|\sum_{j=1}^{5}\left(\overline{a_{j,j+1}}+\overline{a_{j+1,j}}\right)\right|-5. \end{eqnarray} \noindent Each average may be rewritten in terms of the probabilities of the potential outcomes and the values of those outcomes (simply $+1$ and $-1$), as \begin{eqnarray} \overline{a_{j,j+1}}&=&P(^{-}_{j+1})P(^{+}_{j}|^{-}_{j+1})-P(^{-}_{j+1})P(^{-}_{j}|^{-}_{j+1})\nonumber\\ & &-P(^{+}_{j+1})P(^{-}_{j}|^{+}_{j+1}). \end{eqnarray} \noindent Therefore \begin{widetext} \begin{eqnarray} \sum_{j=1}^{5}\overline{a_{j,j+1}a_{j+1,j}}&\geq&\left|\sum_{j=1}^{5}\left(P(^{-}_{j+1})P(^{+}_{j}|^{-}_{j+1})-P(^{-}_{j+1})P(^{-}_{j}|^{-}_{j+1})-P(^{+}_{j+1})P(^{-}_{j}|^{+}_{j+1})\right.\right.\nonumber\\ & &\left.\left.+P(^{-}_{j})P(^{+}_{j+1}|^{-}_{j})-P(^{-}_{j})P(^{-}_{j+1}|^{-}_{j})-P(^{+}_{j})P(^{-}_{j+1}|^{+}_{j})\right)\right|-5\nonumber\\ &\geq&\left|\sum_{j=1}^{5}\left(P(^{-}_{j+1})P(^{+}_{j}|^{-}_{j+1})-P(^{-}_{j+1})P(^{-}_{j}|^{-}_{j+1})-P(^{+}_{j+1})P(^{-}_{j}|^{+}_{j+1})\right.\right.\nonumber\\ & &\left.\left.+P(^{+}_{j+1})P(^{-}_{j}|^{+}_{j+1})-P(^{-}_{j+1})P(^{-}_{j}|^{-}_{j+1})-P(^{-}_{j+1})P(^{+}_{j}|^{-}_{j+1})\right)\right|-5\nonumber\\ &\geq& 2(1-c)\left|\sum_{j=1}^{5}P(^{-}_{j}|^{-}_{j+1})\right|-5. \end{eqnarray} \end{widetext} \noindent where, after the second inequality, we have used Bayes' rule for conditional probabilities \begin{eqnarray} P(A|B)=P(B|A)\frac{P(A)}{P(B)}. \end{eqnarray} \noindent In the second line we have simplified the expression and invoked the spin projection rules on the marginals. Now, using the derived rule, Eq.~(\ref{r2}), we obtain \begin{eqnarray} \sum_{j=1}^{5}\overline{a_{j,j+1}a_{j+1,j}}\geq10q-5\simeq-3.944, \end{eqnarray} \noindent which is exactly the quantum mechanical result for symmetric states. This proves that a contextual hidden variable model for an analog single spin-1 particle \--- restricted by spin projection rules \--- is capable of reaching the quantum mechanical result. Thus, such a model is not in contradiction with quantum mechanics, unlike in the case of an entangled spin-1/2 system.\\ \noindent It is worth pointing out a similarity with a derivation in the original KCBS paper \cite{pent}. The authors of that paper show that the pentagram inequality can be violated by a symmetric state undergoing a series of projections onto the spin-zero case (of the three possible spin states of a spin-1 particle), of the form $|\langle\mathcal{L}|\psi\rangle|^{2}$, $|\mathcal{L}\rangle$ being the eigenstate to be projected on. The measurement in the KCBS case constitutes a projection onto the state of the entire system, whereas in our case the projection is only onto a measurement event in a single optical mode. The two become equivalent if no distinction is made between marginal and joint probabilities \--- but this is precisely the case we consider. So, the inequality we derive here can be seen as a contextual-realistic replication of this violation in the extremal (equality fulfilling) case without recourse to the full quantum formalism of the spin-1 system.
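\noindent The numbers quoted above can also be checked directly. The sketch below \--- our own verification, assuming the standard KCBS construction of the five directions and the orthogonal-direction correlation $\langle a_ia_j\rangle = 1-2c_i-2c_j$ used later in this paper \--- builds unit vectors with sequential orthogonality, takes the symmetric state along the symmetry axis, and confirms that the quantum sum, the bound $10q-5$, and the closed form $5-4\sqrt{5}$ all coincide at $\simeq-3.944$:
\begin{verbatim}
# Sketch: numeric check of the symmetric-state pentagram value.

import numpy as np

cos2 = np.cos(np.pi / 5) / (1 + np.cos(np.pi / 5))  # cos^2(theta)
theta = np.arccos(np.sqrt(cos2))
phi = 4 * np.pi * np.arange(5) / 5
l = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.full(5, np.cos(theta))], axis=1)   # directions 1..5

psi = np.array([0.0, 0.0, 1.0])                     # symmetric state
c = (l @ psi) ** 2                                  # |<psi|j>|^2

# sequentially numbered directions are orthogonal (modulo 5)
assert np.allclose([l[j] @ l[(j + 1) % 5] for j in range(5)], 0)

qm_sum = sum(1 - 2 * c[j] - 2 * c[(j + 1) % 5] for j in range(5))
q = 1 - 2 * c[0]      # P(+_"z") = 1 - c_j - c_{j+1} for each pair
print(qm_sum, 10 * q - 5, 5 - 4 * np.sqrt(5))       # all ~ -3.944
\end{verbatim}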
\section{An Explicit Contextual Hidden Variable Model} \noindent The previous section demonstrated that a contextual hidden variable model could, in principle, replicate the results of quantum mechanics for a single spin-1 particle and a series of five measurements. In this section we present one such explicit model, based on the non-local hidden variable model of Leggett. The model is not elegant or intuitive, but it has the advantage of being ``both ways" contextual. That is, each measurement is contextual on the other, and it is not necessary to causally order the events. A much simpler model could be presented (one which is almost identical to Leggett's), but it would not have the symmetry property of the one presented here. It is worthwhile to mention that it is not necessary to understand the mechanics of the model to follow the discussion that will come after. It is presented only for the benefit of the interested reader.\\ \noindent Roughly, what is necessary is that each unique ordering of setting and context vectors be mapped into two real ``threshold values", the relative sizes of which determine the necessary statistics.\\ \noindent For a measurement labeled by both a direction ($i$) and a context ($j$) \begin{eqnarray} &A_{ij}&\left(\lambda,\lambda_{ij}(\vec{i},\vec{j},\vec{\psi}),\gamma_{ij}(\vec{i},\vec{j},\vec{\psi})\right)\equiv\nonumber\\ & &\left\{ \begin{array}{ll} -1 \quad\mathrm{for} & \lambda\in\left[\lambda_{ij},\gamma_{ij}\right]; \\ +1 \quad\mathrm{for} & \lambda\in \left[0,\lambda_{ij}\right)\mathrm{or}\left(\gamma_{ij},1\right]. \end{array} \right. \end{eqnarray} \noindent where $\vec{i}$ and $\vec{j}$ are the vectors representing the measurement direction and the context, respectively, and $\vec{\psi}$ is the vector representing the state. The parameter $\lambda$ is the hidden variable; we place no restrictions on its distribution other than that it is a real number bounded between zero and one. The numerical values $\lambda_{ij}$ and $\gamma_{ij}$ are thresholds which determine what proportion of values of the hidden variable result in an outcome of $-1$ or $+1$. The first threshold value is given by \begin{eqnarray} \gamma_{ij}=\lambda_{ij}-\left|\vec{i}\cdot\vec{\psi}\right|^{2}+1. \end{eqnarray} \noindent Without yet defining the second threshold value we can already see that the marginals yield the correct expression \begin{eqnarray} \overline{A_{ij}}=\int d\lambda A_{ij}&=&+1\int^{\lambda_{ij}}_{0}d\lambda-1\int^{\gamma_{ij}}_{\lambda_{ij}}d\lambda+1\int^{1}_{\gamma_{ij}}d\lambda\nonumber\\ &=&2\lambda_{ij}-2\gamma_{ij}+1=2\left|\vec{i}\cdot\vec{\psi}\right|^{2}-1.
\end{eqnarray} \noindent The second threshold value is given by \begin{widetext} \begin{eqnarray} \lambda_{ij}&=&H\left(\left|\vec{j}\cdot\vec{\psi}\right|^{2}-\left|\vec{i}\cdot\vec{\psi}\right|^{2}\right)\left|\vec{i}\cdot\vec{\psi}\right|^{2}+\delta\left(\left|\vec{j}\cdot\vec{\psi}\right|^{2}-\left|\vec{i}\cdot\vec{\psi}\right|^{2}\right)\nonumber\\ & &\times\left\{\left[\frac{\left|\vec{i}\cdot\vec{\psi}\right|^{2}}{2}\left(1+\frac{\left(\vec{j}\times\vec{i}\right)\cdot\vec{\psi}}{\left|\left(\vec{j}\times\vec{i}\right)\cdot\vec{\psi}\right|}\right)\right]+\delta\left(\left(\vec{j}\times\vec{i}\right)\cdot\vec{\psi}\right)\frac{1}{2}\left[1+\left(\hat{R}_{\vec{v}}\left(\frac{\pi}{2}\right)\left[\vec{j}\times\vec{i}\right]\right)\cdot\vec{\psi}\right]\left|\vec{i}\cdot\vec{\psi}\right|^{2}\right\} \end{eqnarray} \end{widetext} \noindent where \begin{eqnarray} H(t)&\equiv&\left\{ \begin{array}{ll} 1, & \mathrm{for}\quad t>0; \\ 0, & \mathrm{for}\quad t\leq0, \end{array} \right. \\ \delta(t)&\equiv&\left\{ \begin{array}{ll} 1, & \mathrm{for}\quad t=0; \\ 0, & \mathrm{for}\quad t\neq0, \end{array} \right. \end{eqnarray} \noindent and $\hat{R}_{\vec{v}}(\theta)$ is a rotation operator which rotates a vector around the vector $\vec{v}$ (defined shortly) by an angle $\theta$. Key to seeing how this model operates is that $A_{ij}$'s threshold values must be different from $A_{ji}$'s; otherwise, attempting to integrate over the possible values of $\lambda$ for correlation values between the two will always yield zero \--- so the model must have some built-in asymmetry with regard to ordering; thus the step functions and cross products. To further understand this formulation, we should consider each term within the context of when it is non-zero. The function $H$ \--- the Heaviside step function \--- is only ``switched on" (i.e. non-zero) when the projection of the state onto the context vector is larger than its projection onto the actual measurement direction. For symmetric states (especially important for our analysis) \--- that is, those with equal projections onto the context and direction vectors \--- the first delta function switches on. This delta function is distributed across two terms. The first term contains cross products which enforce the necessary asymmetry, and it yields the correct threshold values when the symmetrically projecting state vector is not in the plane defined by the $i$ and $j$ directions. However, when it is in-plane this term is zero and the next term switches on. The final term contains a rotation operator in real space, $\hat{R}_{\vec{v}}$, which rotates about a vector $\vec{v}$ perpendicular to the plane defined by the set of all possible symmetric states (again, here symmetric means only symmetric with regard to $\vec{i}$ and $\vec{j}$) by $\pi/2$ radians. This rotation allows the hidden variable model to assign working threshold values in a similar fashion to the previous term.\\ \noindent Now, if it is the case, for example, that $\left|\vec{j}\cdot\vec{\psi}\right|^{2}>\left|\vec{i}\cdot\vec{\psi}\right|^{2}$, then $\lambda_{ij}=\left|\vec{i}\cdot\vec{\psi}\right|^{2}$, $\lambda_{ji}=0$, $\gamma_{ij}=1$, and $\gamma_{ji}=1-\left|\vec{j}\cdot\vec{\psi}\right|^{2}$.\\ \noindent Now we can compute the correlation of $A_{ij}$ and $A_{ji}$.
For orthogonal measurement directions it must be the case that $\lambda_{ij}\leq\gamma_{ji}$, so we have \begin{eqnarray} \overline{A_{ij}A_{ji}}&=&-1\int^{\lambda_{ij}}_{0}d\lambda+1\int^{\gamma_{ji}}_{\lambda_{ij}}d\lambda-1\int^{1}_{\gamma_{ji}}d\lambda ,\nonumber\\ &=&-2(\lambda_{ij}-\gamma_{ji})-1,\nonumber\\ &=&-2\left(\left|\vec{j}\cdot\vec{\psi}\right|^{2}+\left|\vec{i}\cdot\vec{\psi}\right|^{2}\right)+1. \end{eqnarray} \noindent This matches the quantum mechanical expression. If we had the opposite case, $\left|\vec{j}\cdot\vec{\psi}\right|^{2}<\left|\vec{i}\cdot\vec{\psi}\right|^{2}$, the model would have produced the same result. Now, for the case of symmetric states \--- where the above formulation would break down \--- the step function is zero and the second term switches on. The factor $\left(\vec{j}\times\vec{i}\right)\cdot\vec{\psi}/\left|\left(\vec{j}\times\vec{i}\right)\cdot\vec{\psi}\right|$ produces a plus sign for one ordering of $i$ and $j$ and a minus sign for the other \--- creating the necessary asymmetry. The averaging procedure for the correlations then proceeds exactly as above. However, for states that are both symmetric and in the plane defined by the two measurement directions, the previous formulation breaks down, and the second delta function switches on. The vector $\vec{i}\times\vec{j}$ is rotated such that it also lies in the plane defined by $\vec{i}$ and $\vec{j}$, meaning that for one ordering of $i$ and $j$ in the threshold values the factor is equal to one and for the other it is zero. Again, finding the average of the correlations proceeds as above and reproduces the correct quantum mechanical expression. Thus this hidden variable model \--- though perhaps overcomplicated \--- reproduces quantum mechanics under all possible circumstances involving a spin-1 system with two contexts, and is completely internally consistent. The addition of more measurement directions to the inequality can also be simulated by the HVM, as all projectors have the same statistics as in QM.\\ \section{Discussion and Conclusions} \noindent The question now is why this procedure fails to find a contradiction with quantum physics. In the case of the entangled pair of spin-1/2 particles such a contradiction naturally arises. What is different about the spin-1 system?\\ \noindent The inequality derived in section three states that a contextual hidden variable model, which is constrained by the spin projection rules, \emph{may} reproduce the results of quantum mechanics and reach the floor of $-3.944$ for symmetric states in the pentagram inequality. Section four shows that such a hidden variable model does exist explicitly. It is significant that the inequality derived utilizes outcome-dependent contextuality, whereas the explicit model only requires setting dependence to replicate quantum mechanics. Thus it could be inferred that a version of the contextual pentagram inequality could be derived which utilizes only setting dependence and still reaches the $-3.944$ floor. It is intriguing to note that the restriction to spin projection rules is powerful enough to bring the contextual floor up to $-3.944$ from $-5$.\\ \noindent Perhaps some insight may be gained by examining the recently articulated principle of ``global exclusive disjunction" \cite{ac}. Briefly, the principle of exclusive disjunction states that the sum of probabilities of events that are pair-wise exclusive can not be larger than one.
\section{Discussion and Conclusions} \noindent The question now is why this procedure fails to find a contradiction with quantum physics. In the case of the entangled pair of spin-1/2 particles such a contradiction naturally arises. What is different about the spin-1 system?\\ \noindent The inequality derived in section three states that a contextual hidden variable model, which is constrained by the spin projection rules, \emph{may} reproduce the results of quantum mechanics and reach the floor of $-3.944$ for symmetric states in the pentagram inequality. Section four shows that such a hidden variable model does exist explicitly. It is significant that the inequality derived utilizes outcome-dependent contextuality, whereas the explicit model only requires setting dependence to replicate quantum mechanics. Thus it could be inferred that a version of the contextual pentagram inequality could be derived which utilizes only setting dependence and still reaches the $-3.944$ floor. It is intriguing to note that the restriction to spin projection rules is powerful enough to bring the contextual floor up to $-3.944$ from $-5$.\\ \noindent Perhaps some insight may be gained by examining the recently articulated principle of ``global exclusive disjunction'' \cite{ac}. Briefly, the principle of exclusive disjunction states that the sum of probabilities of events that are pair-wise exclusive cannot be larger than one. \emph{Global} exclusive disjunction states that this principle must be upheld when events in an inequality are considered jointly with other events, and places a lower bound on the quantum-contextual pentagram inequality. For more details see the cited reference. Since the explicit hidden variable model outlined in this manuscript replicates the quantum probabilities for both marginals and conditionals, it follows that exclusive disjunction (and consequently global exclusive disjunction) is satisfied. This can be seen as leading to the lower bound on the explicit model.\\ \noindent Another point of interest is that, since this work drew inspiration from the Leggett inequality for entangled spin-1/2 particles, it would make sense that \--- in similarity with Leggett \--- some non-standard rotation of the state vector in $\mathbb{C}^{3}$ would yield a contradiction with quantum mechanics (in the case of Leggett the polarization measurement projection vector must be rotated outside of the real plane of the Poincar\'{e} sphere to achieve violation). This is in fact not the case, as the explicit model can recover the quantum mechanical results for \emph{any} state vector in a two-context system. It is unknown why this is; it may be that the entanglement of two subsystems is more ``powerful'' than the spin-1 particle, or there may be some altogether different cause. Perhaps a generalization to more contexts will yield a contradiction with quantum theory. These questions remain open and we hope this manuscript stimulates further interest in this issue.\\ \section*{Acknowledgements} \noindent The authors would like to acknowledge Johannes Kofler for many useful discussions, as well as Anton Zeilinger for further discussions and support. This work was supported by the ERC Advanced Grant QIT4QAD, and the Austrian Science Fund FWF within the SFB F40 (FoQuS) and W1210-2 (CoQuS).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{ONLINE APPENDIX} \section{Optimal Mechanism for Case 2 ($v(t_2)\ge V_H$)} \label{sec:case2} In this section, we discuss the second case of our main result, where $v(t_2)\ge V_H$. In this case, if we still use $t_1$ as the reference point and follow the same analysis as in Section \ref{sec:case1}, we end up with a mechanism with $u(t_2)<v(t_2)$, hence infeasible. To solve this problem, we write the revenue expression $REV(\pi,p)$ using $t_2$ as the reference point. Although the resulting mechanism looks different, the approach for deriving it is quite similar to that in Section \ref{sec:case1}. \begin{lemma} \label{lem:optimal-case2} If $v(t_2) \geq V_H $, the optimal information selling mechanism is the threshold mechanism $\pi^*$ with threshold ${\theta}^*(t) = -\bar{\phi}^+(t) $ for each type $t$. The payment is determined by the following equation and is monotone non-increasing in $t$: \begin{gather*} p^*(t) = \int_{q\in Q} g(q) \pi^*(q,t)v(q,t)\,\mathrm{d} q + \int_{t}^{t_2} P_{\pi^*}(x)\,\mathrm{d} x- v(t_2). \end{gather*} \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lem:optimal-case1}, this proof also contains 3 steps: {\bfseries Step 1.} In the first step, we derive the revenue of information selling in a slightly different way than in the proof of Lemma \ref{lem:optimal-case1}, i.e., using $t_2$ as the reference point instead of $t_1$. \begin{align} REV (\pi, p) =&\int_{t_1}^{t_2} f(t)\left[\int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d} q -u(t) \right]\,\mathrm{d} t \nonumber\\ =&\int_{t_1}^{t_2} f(t)\left[\int_{q \in Q} g(q)\pi(q,t)v(q,t) \,\mathrm{d} q -u(t_2) + \int_{t}^{t_2} P_{\pi}(x)\,\mathrm{d} x \right]\,\mathrm{d} t \nonumber\\ =&\int_{t_1}^{t_2} f(t)\left[\int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d} q \right]\,\mathrm{d} t + \int_{t_1}^{t_2} \int_{t}^{t_2}f(t) P_{\pi}(x)\,\mathrm{d} x\,\mathrm{d} t -u(t_2) \nonumber\\ =&\int_{t_1}^{t_2} f(t)\left[\int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d} q \right]\,\mathrm{d}t + \int_{t_1}^{t_2} \int_{t_1}^{x}f(t) P_{\pi}(x)\,\mathrm{d} t\,\mathrm{d} x -u(t_2) \nonumber\\ =&\int_{q \in Q} g(q)\left[\int_{t_1}^{t_2} f(t) \pi(q,t)v(q,t) \,\mathrm{d} t \right]\,\mathrm{d} q + \int_{t_1}^{t_2} F(t)\int_{q \in Q} g(q) \pi (q,t)v_1(q) \,\mathrm{d} q\,\mathrm{d} t-u(t_2) \nonumber\\ =&\int_{q \in Q} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t)v_1(q) \left[ \bar{\phi}(t) + \rho(q)\right]\,\mathrm{d} t \right]\,\mathrm{d} q -u(t_2) \label{eq:revenue-2} \end{align} {\bfseries Step 2.} Now we show that the given mechanism $(\pi^*,p^*)$ achieves the maximum revenue. The threshold ${\theta}^*(t) = -\bar{\phi}(t)$ is picked such that $\pi^*(q,t) =1$ whenever $ \bar{\phi}(t) + \rho(q) \ge 0$ and $\pi^*(q,t) =0$ whenever $ \bar{\phi}(t) + \rho(q) < 0$. That is, the revenue function \eqref{eq:revenue-2}, expressed with $t_2$ as the reference point, is entry-wise maximized. This clearly achieves the maximum possible value in the first term of Equation \eqref{eq:revenue-2}. The second term is maximized by choosing the minimum possible $u(t_2)$ value, i.e., $u(t_2) = v(t_2)$. With the defined payment function, we indeed choose \begin{gather*} u(t_2) = \int_{q\in Q} g(q) \pi^*(q,t_2)v(q,t_2) \,\mathrm{d} q - p^*(t_2) = v(t_2) - \int_{t_2}^{t_2} P_{\pi^*}(x) \,\mathrm{d} x = v(t_2). \end{gather*} {\bfseries Step 3.} In the final step, we argue that this choice also leads to a feasible mechanism, satisfying the characterization of Lemma \ref{lem:feasible-M}.
Since the upper virtual value function $\bar{\phi}$ is monotone non-decreasing, we know that the threshold ${\theta}^*(t) = -\bar{\phi}(t)$ is monotone non-increasing in $t$. This implies that \begin{equation*} P_{\pi^*}(t) = \int_{q \in Q} \pi^*(q, t) g(q)v_1(q) \,\mathrm{d} q = \int_{q: \rho(q) \geq {\theta}^*(t)} g(q)v_1(q) \,\mathrm{d} q \end{equation*} is monotone non-decreasing in $t$, since a larger $t$ leads to a smaller ${\theta}^*(t)$ and thus a larger integration domain for $q$. So constraint \eqref{eq:signal-monotonicity} is satisfied. We now prove that $(\pi^*, p^*)$ satisfies constraint \eqref{eq:buyer-utility-identify2}. Plugging the payment function $p^*(t)$ into the definition of $u(t)$, we get \begin{gather*} u(t) = \int_{q\in Q} g(q) \pi^*(q,t)v(q,t) \,\mathrm{d} q - p^*(t) = v(t_2) - \int_{t}^{t_2} P_{\pi^*}(x) \,\mathrm{d} x. \end{gather*} It is easy to see that $u(t_2) = v(t_2)$, which can be plugged back into the above equality to obtain constraint \eqref{eq:buyer-utility-identify2}. For constraint \eqref{eq:ir-t2}, we already have $u(t_2) = v(t_2)$. And \begin{align*} u(t_1) = v(t_2) - \int_{t_1}^{t_2} P_{\pi^*}(x)\,\mathrm{d} x \geq \max \{v(t_1), 0 \} + \int_{t_1}^{t_2} P_{\pi^*}(x)\,\mathrm{d} x - \int_{t_1}^{t_2} P_{\pi^*}(x) \,\mathrm{d} x \geq 0. \end{align*} Finally, we show that the payment $p^*(t)$ is non-negative, i.e., $p^*(t)$ satisfies constraint \eqref{eq:non-negativity}. We first prove that $p^*(t)$ is non-increasing in $t$. For any $t>t_1$ and $t'\in (t, \bar{\phi}(t))$, we have \begin{align} p^*(t') -p^*(t) =& \int_{q\in Q} g(q) \pi^*(q,t')v(q,t') \,\mathrm{d} q - \int_{q\in Q} g(q) \pi^*(q,t)v(q,t) \,\mathrm{d} q - \int_{t}^{t'} P_{\pi^*}(x) \,\mathrm{d} x \nonumber\\ =& \int_{q: \rho(q) \geq {\theta}^*(t')} g(q) v(q,t') \,\mathrm{d} q - \int_{q: \rho(q) \geq {\theta}^*(t)} g(q) v(q,t) \,\mathrm{d} q - \int_{t}^{t'} P_{\pi^*}(x)\,\mathrm{d} x. \label{eq:payment_decreasing} \end{align} Observe that ${\theta}^*(t) =-\bar{\phi}(t)\geq -\bar{\phi}(t')= {\theta}^*(t')$. So the first term on the right-hand side can be written as: \begin{gather*} \int_{q: \rho(q) \geq {\theta}^*(t')} g(q) v(q,t') \,\mathrm{d} q=\int_{q: \rho(q) \geq {\theta}^*(t)} g(q) v(q,t') \,\mathrm{d} q+\int_{q: {\theta}^*(t')\le \rho(q) < {\theta}^*(t)} g(q) v(q,t') \,\mathrm{d} q. \end{gather*} When ${\theta}^*(t')\le \rho(q) < {\theta}^*(t)$, we have $v(q, t')=v_1(q)[t'+\rho(q)]\le v_1(q)[\bar{\phi}(t)+{\theta}^*(t)]= 0$, where the inequality is due to the choice of $t'$. Therefore, the second term on the right-hand side of the above equation is non-positive. As a result, \begin{gather*} \int_{q: \rho(q) \geq {\theta}^*(t')} g(q) v(q,t') \,\mathrm{d} q\le \int_{q: \rho(q) \geq {\theta}^*(t)} g(q) v(q,t') \,\mathrm{d} q. \end{gather*} Combined with Equation \eqref{eq:payment_decreasing}, we get \begin{align*} p^*(t') -p^*(t) \le & \int_{q: \rho(q) \geq {\theta}^*(t)} g(q) v(q,t') \,\mathrm{d} q - \int_{q: \rho(q) \geq {\theta}^*(t)} g(q) v(q,t) \,\mathrm{d} q - \int_{t}^{t'} P_{\pi^*}(x)\,\mathrm{d} x\\ =&\int_{q: \rho(q) \geq {\theta}^*(t)}g(q)v_1(q)(t'-t)\,\mathrm{d} q- \int_{t}^{t'} P_{\pi^*}(x)\,\mathrm{d} x\\ =&(t'-t)P_{\pi^*}(t)-\int_{t}^{t'} P_{\pi^*}(x)\,\mathrm{d} x\\ \le&0.
\end{align*} This shows that $p^*(t)$ is monotone non-increasing in the interval $(t,\bar{\phi}(t))$ for any $t>t_1$. Since the set of intervals $\{(t,\bar{\phi}(t))\mid t\in T\}$ covers the interval $(t_1,t_2]$ and $p^*(t)$ is clearly continuous, we can conclude that $p^*(t)$ is monotone non-increasing in the entire interval $T$\footnote{Similar techniques are also used to prove the existence of solutions of differential equations.}. Alternatively, one can verify monotonicity directly. Since \begin{align*} p^*(t)=& \int_{q\in Q} g(q) \pi^*(q,t)v(q,t)\,\mathrm{d} q + \int_{t}^{t_2} P_{\pi^*}(x)\,\mathrm{d} x- v(t_2)\\ =&\int_{q:\rho(q)\ge -\bar{\phi}^+(t)}g(q)v(q, t)\,\mathrm{d}q-\int_{t_1}^t \int_{q:\rho(q)\ge -\bar{\phi}^+(x)} g(q) v_1(q)\,\mathrm{d}q\,\mathrm{d}x + \int_{t_1}^{t_2} P_{\pi^*}(x)\,\mathrm{d} x - v(t_2), \end{align*} where the last two terms are constants in $t$, the differential of $p^*(t)$ is \begin{align*} \mathrm{d}p^*(t)=&\left[\int_{q:\rho(q)\ge -\bar{\phi}^+(t)}g(q)\frac{\partial v(q,t)}{\partial t}\,\mathrm{d}q\right]\mathrm{d}t+\left[\frac{\mathrm{d}}{\mathrm{d}\bar{\phi}^+(t)}\int_{q:\rho(q)\ge -\bar{\phi}^+(t)}g(q)v(q, t)\,\mathrm{d}q\right]\mathrm{d}\bar{\phi}^+(t)\\ &-\left[\int_{q:\rho(q)\ge -\bar{\phi}^+(t)} g(q) v_1(q)\,\mathrm{d}q\right]\mathrm{d}t\\ =&\left[\frac{\mathrm{d}}{\mathrm{d}\bar{\phi}^+(t)}\int_{q:\rho(q)\ge -\bar{\phi}^+(t)}g(q)v(q, t)\,\mathrm{d}q\right]\mathrm{d}\bar{\phi}^+(t), \end{align*} because $\partial v(q,t)/\partial t = v_1(q)$ makes the first and third terms cancel. The remaining term collects the mass at the moving boundary $\{q:\rho(q)=-\bar{\phi}^+(t)\}$, where $v(q,t)=v_1(q)[t-\bar{\phi}^+(t)]$. Because $\bar{\phi}^+(t) \geq t$, we have $t - \bar{\phi}^+(t) \leq 0$; and since $\mathrm{d}\bar{\phi}^+(t)\ge 0$ and $g(q)$, $v_1(q)$ are non-negative, we get $\mathrm{d}p^*(t)\leq 0$ for all $t$. Thus, to show that the payment is always non-negative, we only need to prove that $p^*(t_2)\ge 0$: \begin{align*} p^*(t_2) &= \int_{q \in Q} g(q) \pi^*(q,t_2)v(q,t_2)\,\mathrm{d} q - v(t_2) + \int_{t_2}^{t_2} P_{\pi^*}(x)\,\mathrm{d} x \\ &= \int_{q \in Q} \pi^*(q,t_2) g(q) v(q,t_2)\, \mathrm{d} q - \int_{q \in Q} g(q) v(q,t_2)\, \mathrm{d} q\\ & \geq 0. \end{align*} The last inequality comes from the fact that $\pi^*(q,t_2) = 1$ for all $q$ such that $v(q,t_2)\geq 0$. \end{proof}
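\noindent The following small numerical sketch illustrates Lemma \ref{lem:optimal-case2} on a hypothetical instance chosen purely for illustration: $t$ uniform on $[3,5]$, $q$ uniform on $[1,3]$, $v_1(q)=q$ and $v_0(q)=-6$, for which $v(t_2)=4\ge V_H=3.625$, so the instance indeed falls into case 2 and $\bar{\phi}(t)=2t-3$ is regular. It evaluates the threshold rule and the payment formula on a grid and confirms that $p^*(t)$ is non-negative and monotone non-increasing.

\begin{verbatim}
import numpy as np

# Hypothetical case-2 instance: t ~ U[3,5], q ~ U[1,3],
# v1(q) = q, v0(q) = -6, so v(q,t) = q*t - 6 and rho(q) = -6/q.
# Threshold rule: keep q with rho(q) >= -(2t - 3), i.e.
# q >= qstar(t) = 6/(2t - 3), clipped to the support [1, 3].
t = np.linspace(3.0, 5.0, 4001)
qstar = np.clip(6.0 / (2.0 * t - 3.0), 1.0, 3.0)

P = (9.0 - qstar**2) / 4.0                     # P_{pi*}(t), with g = 1/2
gain = t * (9.0 - qstar**2) / 4.0 - 3.0 * (3.0 - qstar)  # int g pi* v dq
v_t2 = 4.0                                     # v(t_2) = E[5q - 6]

# tail(t) = int_t^{t_2} P(x) dx via a cumulative trapezoid sum
cum = np.concatenate([[0.0], np.cumsum((P[1:] + P[:-1]) / 2 * np.diff(t))])
tail = cum[-1] - cum

p = gain + tail - v_t2                         # payment p*(t)
print(round(p[0], 4), round(p[-1], 4))         # p*(t_1) = 0.375, p*(t_2) = 0
print(p.min() >= -1e-9, np.all(np.diff(p) <= 1e-9))
\end{verbatim}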
\begin{remark} \label{remark:case_2_condition_eq} The condition $v(t_2)\ge V_H$ in Lemma \ref{lem:optimal-case2} is equivalent to: \begin{gather} v(t_1) \geq - \int_{t_1}^{t_2} \int_{q: \rho(q) \leq {\theta}^*(x)} g(q) v_1(q)\,\mathrm{d} q \mathrm{d} x.\label{eq:case2_condition} \end{gather} To show the equivalence, note that \begin{gather*} v(t_2)=v(t_1) + (t_2 - t_1) \int_{q \in Q} v_1(q) g(q) \,\mathrm{d} q. \end{gather*} Combining with $v(t_2)\ge V_H$ yields \begin{gather*} v(t_1) -\max \{v(t_1), 0 \} \geq \int_{t_1}^{t_2} \int_{q: \rho(q) \geq {\theta}^*(x)} g(q) v_1(q)\,\mathrm{d} q \mathrm{d} x - (t_2 - t_1) \int_{q \in Q} v_1(q) g(q)\,\mathrm{d} q. \end{gather*} Therefore, \begin{gather*} \min \{v(t_1), 0 \} \geq - \int_{t_1}^{t_2} \int_{q: \rho(q) \leq {\theta}^*(x)} g(q) v_1(q) \,\mathrm{d} q \mathrm{d} x, \end{gather*} which is equivalent to inequality \eqref{eq:case2_condition} since the right-hand side is non-positive. \end{remark} For the instances in between these two cases, i.e., those with $\int_{t_1}^{t_2} \int_{\underline{\theta}_x}^{q_2} g(q) v_1(q) \,\mathrm{d} q\,\mathrm{d} x \leq v(t_2) \leq \int_{t_1}^{t_2} \int_{\bar{\theta}_x}^{q_2} g(q) v_1(q) \,\mathrm{d} q\,\mathrm{d} x$, neither of the previous mechanisms applies. Such instances are rare but do exist: since $\underline{\phi}(t) \leq \bar{\phi}(t)$ for all $t$, we have $\underline{\theta}_x \geq \bar{\theta}_x$ for all $x$, and since $g(q)v_1(q)$ is non-negative, the left-hand-side integral is smaller than the right-hand-side one. We refer to these instances as case 3. Neither of the above allocation rules and payment functions solves case 3: if we use the threshold and payment function of case 1, then $s(t_1) = 0$ but $s(t_2) = u(t_2) - v(t_2) = \int_{t_1}^{t_2} \int_{\underline{\theta}_x}^{q_2} g(q) v_1(q) \,\mathrm{d} q\,\mathrm{d} x - v(t_2) \leq 0$, which does not guarantee the IR constraint; the same issue arises with the mechanism of case 2. Thus, the proof for case 3 needs to find a balance and take care of the surplus $s(t_1)$ and $s(t_2)$ on both sides. The high-level idea of the proof for case 3 is to combine the two virtual values $\bar{\phi}(t)$ and $\underline{\phi}(t)$ tactfully to maximize the revenue function while satisfying the IR constraints. \begin{corollary} \label{same-payment} If the recommendation policies are the same for two types, then the payments for these two types are also the same. Formally, for all $t, t'$ such that $\pi(q,t) = \pi(q,t')$ for all $q$, we have $p(t) = p(t')$. \end{corollary} \begin{proof} We can assume without loss of generality that $t \leq t'$. Because $\pi(q,t) = \pi(q,t')$ for all $q$ and $\theta(t)$ is monotone, we have $\pi(q,t) = \pi(q,t^*) =\pi(q,t')$ for all $t \leq t^* \leq t'$ and all $q$. \begin{align*} p(t) &= \int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d} q - \int_{t_1}^{t} P(x) \,\mathrm{d} x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t) \,\mathrm{d} q - \int_{t_1}^{t} P(x) \,\mathrm{d} x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t') \,\mathrm{d} q - (t' - t)\int_{q \in Q} g(q) \pi(q,t')v_1(q) \,\mathrm{d} q - \int_{t_1}^{t} P(x) \,\mathrm{d} x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t') \,\mathrm{d} q - \int_{t}^{t'} \int_{q \in Q} g(q) \pi(q,x)v_1(q) \,\mathrm{d} q\,\mathrm{d} x - \int_{t_1}^{t} P(x) \,\mathrm{d} x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t') \,\mathrm{d} q - \int_{t}^{t'} P(x) \,\mathrm{d} x - \int_{t_1}^{t} P(x) \,\mathrm{d} x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t') \,\mathrm{d} q - \int_{t_1}^{t'} P(x) \,\mathrm{d} x \\ &= p(t') \end{align*} This argument uses the case-1 payment function. For case 2, we can treat the tail terms as $- v(t_2) + \int_{t}^{t_2} P(x) \,\mathrm{d} x = - u(t_2) + \int_{t}^{t_2} P(x) \,\mathrm{d} x = - u(t_1) - \int_{t_1}^{t} P(x) \,\mathrm{d} x$. Since $u(t_1)$ is a constant, the proof still holds. \end{proof} \section{Optimal Mechanism for Case 3 ($V_L< v(t_2)< V_H$)} In this section, we consider the case where $V_L< v(t_2)< V_H$. The techniques used to handle the previous two cases do not apply here, as we can use neither $t_1$ nor $t_2$ alone as the reference point. Instead, we use both $t_1$ and $t_2$ as reference points. We first derive a different revenue function that features a new virtual value function: a convex combination of the lower and the upper virtual value functions with a carefully chosen coefficient. We then prove that the threshold mechanism defined according to this new virtual value function achieves the maximum possible revenue. \begin{lemma} \label{lem:case_3} If $V_L < v(t_2) < V_H$, define $\widetilde{\phi}(t) = C \underline{\phi}(t) + (1-C) \bar{\phi}(t) $ to be the combined virtual value function, where $C \in (0,1) $ is a constant that satisfies \begin{gather*} \int_{t_1}^{t_2}\int_{q:\rho(q)\ge-\widetilde{\phi}(t)} g(q) v_1(q)\,\mathrm{d} q \mathrm{d} t = v(t_2).
\end{gather*} Let $\widetilde{\phi}^+(t)$ be the ironed version of $\widetilde{\phi}(t)$. The optimal mechanism is the threshold mechanism $\pi^*$ with threshold $\theta^*(t) =-\widetilde{\phi}^+(t)$ for each type $t$. The payment is determined by the following equation and is monotone non-decreasing in $t$ when $F(t)\leq C$ and monotone non-increasing when $F(t)>C$: \begin{gather*} {p}^*(t) = \int_{q \in Q} g(q) {\pi}^*(q,t)v(q,t) \,\mathrm{d} q - \int_{t_1}^{t} P_{{\pi}^*}(x)\,\mathrm{d} x. \end{gather*} \end{lemma} The proof of Lemma \ref{lem:case_3} consists of two cases, depending on whether the function $\widetilde{\phi}(t)$ is regular or not. Before proving the optimality of our mechanism, we first argue that the constant $C$ described in Lemma \ref{lem:case_3} actually exists. We then show that the proposed threshold mechanism is feasible. \begin{lemma} \label{lem:existence_of_C} If $V_L < v(t_2) < V_H$, there exists a constant $C\in (0,1)$ such that \begin{gather*} \int_{t_1}^{t_2}\int_{q:\rho(q)\ge-\widetilde{\phi}(t)} g(q) v_1(q)\,\mathrm{d} q \mathrm{d} t = v(t_2). \end{gather*} \end{lemma} \begin{proof} Remark \ref{remark:case_2_condition_eq} implies that $v(t_2)<V_H$ is equivalent to the following: \begin{gather} v(t_1) < - \int_{t_1}^{t_2} \int_{q: \rho(q) \leq \bar{\theta}_x} g(q) v_1(q) \,\mathrm{d} q \mathrm{d} x. \end{gather} The right-hand side of the above inequality is clearly non-positive. Thus $\max \{v(t_1), 0 \}=0$, and the condition $V_L < v(t_2) < V_H$ can be written as: \begin{gather*} \int_{t_1}^{t_2} \int_{q: \rho(q) \geq \underline{\theta}_x} g(q) v_1(q) \,\mathrm{d} q\mathrm{d} x < v(t_2) < \int_{t_1}^{t_2} \int_{q: \rho(q) \geq \bar{\theta}_x} g(q) v_1(q) \,\mathrm{d} q\mathrm{d} x. \end{gather*} When $C = 0$, we have $\widetilde{\phi}(t)=\bar{\phi}(t)$ and \begin{gather*} \int_{t_1}^{t_2}\int_{q:\rho(q)\ge-\widetilde{\phi}(t)} g(q) v_1(q)\,\mathrm{d} q \mathrm{d} t = \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\bar{\phi}(t)} g(q) v_1(q) \,\mathrm{d} q\mathrm{d} t > v(t_2). \end{gather*} When $C = 1$, we have $\widetilde{\phi}(t)=\underline{\phi}(t)$ and \begin{gather*} \int_{t_1}^{t_2} \int_{q: \rho(q) \geq - C \underline{\phi}(t) - (1-C) \bar{\phi}(t) } g(q) v_1(q) \,\mathrm{d} q \,\mathrm{d} t = \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\underline{\phi}(t)} g(q) v_1(q) \,\mathrm{d} q\mathrm{d} t < v(t_2).
\end{gather*} To connect these two endpoints, define \begin{gather*} Y(C) \equiv \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\widetilde{\phi}(t)} g(q) v_1(q) \,\mathrm{d} q\,\mathrm{d} t, \qquad -\widetilde{\phi}(t)=-\left(t+\frac{F(t)-C}{f(t)}\right). \end{gather*} As $C$ increases, the threshold $-\widetilde{\phi}(t)$ increases for every $t$, so the integration domain shrinks and $Y(C)$ is monotone non-increasing, with $Y(0)>v(t_2)>Y(1)$ as shown above. One caveat deserves attention: $Y(C)$ need not be continuous. If the distribution of $\rho(q)$ induced by $g$ has atoms, $Y$ can jump downward at a value of $C$ at which the boundary set $\{(q,t): \rho(q) = -\widetilde{\phi}(t)\}$ carries positive mass, and a $C$ with $Y(C)=v(t_2)$ may fail to exist. In that case we set $$ C = \inf\left\{ C' \in [0,1] \mid Y(C') \leq v(t_2) \right\} $$ and define $$ D = \frac{v(t_2) - Y(C)}{\int_{t_1}^{t_2}\int_{q: \rho(q) = -\widetilde{\phi}(t)} g(q) v_1(q)\, \mathrm{d} q\, \mathrm{d} t}, $$ i.e., the fraction of the boundary mass needed to close the gap between $Y(C)$ and $v(t_2)$. The denominator is exactly the size of the downward jump of $Y$ at $C$, and the numerator is smaller than this jump, so $D \in [0,1)$ is well defined. The recommendation mechanism is then modified to randomize on the boundary: \begin{gather*} \pi(q,t)= \begin{cases} 1 & \text{ if } \rho(q) > -\widetilde{\phi}^+(t)\\ D & \text{ if } \rho(q) = -\widetilde{\phi}^+(t)\\ 0 & \text{ otherwise} \end{cases}. \end{gather*} When the induced distribution of $\rho(q)$ is atomless, $Y(C)$ is continuous in $C$, so by the intermediate value theorem there must be $C\in(0,1)$ such that $$ \int_{t_1}^{t_2} \int_{q: \rho(q) \geq - C \underline{\phi}(t) - (1-C) \bar{\phi}(t) } g(q) v_1(q)\, \mathrm{d} q \,\mathrm{d} t = v(t_2). $$ \end{proof}
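\noindent In the atomless case, the constant $C$ can also be located numerically by bisection on $Y(\cdot)$. The sketch below does this for a hypothetical case-3 instance (all primitives are illustrative: $t$ uniform on $[3,5]$, $q$ uniform on $[1,3]$, $v_1(q)=q$, $v_0(q)=-7$; here $V_L=1.6<v(t_2)=3<V_H=10/3$, and $\rho(q)=-7/q$ is strictly increasing, hence atomless).

\begin{verbatim}
import numpy as np

# Hypothetical case-3 instance; Y(C) is continuous, so bisection applies.
t = np.linspace(3.0, 5.0, 2001)
q = np.linspace(1.0, 3.0, 2001)
g = 0.5                                   # density of q ~ U[1,3]
rho = -7.0 / q

def integ(y, x):
    # trapezoid rule along the last axis
    return np.sum((y[..., 1:] + y[..., :-1]) / 2 * np.diff(x), axis=-1)

def Y(C):
    # combined virtual value: phi_tilde(t) = t + (F(t) - C)/f(t) = 2t - 3 - 2C
    phi = 2.0 * t - 3.0 - 2.0 * C
    pi = (rho[None, :] >= -phi[:, None]).astype(float)
    return integ(integ(pi * g * q[None, :], q), t)

v_t2 = integ(g * (5.0 * q - 7.0), q)      # v(t_2) = 3
lo, hi = 0.0, 1.0                         # Y(0) > v(t_2) > Y(1)
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Y(mid) > v_t2 else (lo, mid)
C = 0.5 * (lo + hi)
print(C, Y(C), v_t2)                      # Y(C) is approximately v(t_2)
\end{verbatim}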
Lemma \ref{lem:existence_of_C} implies that the mechanism proposed in Lemma \ref{lem:case_3} exists. Now we show that it is also feasible. \begin{lemma} \label{lem:case_3_feasible} The mechanism $(\widetilde{\pi}, \widetilde{p})$ is feasible. \end{lemma} \begin{proof} To prove Lemma \ref{lem:case_3_feasible}, it suffices to show that the mechanism $(\widetilde{\pi}, \widetilde{p})$ satisfies constraints \eqref{eq:signal-monotonicity}, \eqref{eq:buyer-utility-identify2}, \eqref{eq:ir-t2}, and \eqref{eq:non-negativity}. By definition, \begin{gather*} P_{\widetilde{\pi}}(t) = \int_{q:\rho(q)\ge \widetilde{\theta}_t} g(q) v_1(q) \,\mathrm{d} q. \end{gather*} Since $\widetilde{\phi}^+(t)$ is non-decreasing, $\widetilde{\theta}_t=-\widetilde{\phi}^+(t)$ is non-increasing in $t$. Thus the integration domain of $P_{\widetilde{\pi}}(t)$ gets larger as $t$ increases, so $P_{\widetilde{\pi}}(t)$ is non-decreasing, satisfying constraint \eqref{eq:signal-monotonicity}. To show that $(\widetilde{\pi}, \widetilde{p})$ satisfies constraint \eqref{eq:buyer-utility-identify2}, note that \begin{gather*} u(t)=\int_{q \in Q} g(q) \widetilde{\pi}(q,t)v(q,t) \,\mathrm{d} q -\widetilde{p}(t)=\int_{t_1}^{t} P_{\widetilde{\pi}}(x)\,\mathrm{d} x. \end{gather*} Thus $u(t_1)=0$ and \begin{gather*} u(t)=u(t_1)+\int_{t_1}^{t} P_{\widetilde{\pi}}(x)\,\mathrm{d} x. \end{gather*} As for constraint \eqref{eq:ir-t2}, we already have $u(t_1)=0$. And \begin{gather*} u(t_2) = \int_{t_1}^{t_2} P_{\widetilde{\pi}}(x)\,\mathrm{d} x = \int_{t_1}^{t_2} \int_{q:\rho(q)\ge-\widetilde{\phi}^+(t)} g(q) v_1(q) \,\mathrm{d} q \mathrm{d} t = v(t_2), \end{gather*} where the last equality is the definition of the constant $C$. Finally, we show that the payment is non-negative, i.e., the mechanism $(\widetilde{\pi}, \widetilde{p})$ satisfies constraint \eqref{eq:non-negativity}. We claim that $\widetilde{p}(t)$ is monotone non-decreasing when $F(t)\le C$, and monotone non-increasing when $F(t)\ge C$. Note that \begin{gather*} \widetilde{\phi}(t)=C \underline{\phi}(t) + (1-C) \bar{\phi}(t)=t-\frac{C-F(t)}{f(t)}. \end{gather*} Let $t_C$ be such that $F(t_C)=C$. So for any $t$ with $t<t_C$, we have $\widetilde{\phi}(t)<t$, and for any $t$ with $t>t_C$, we have $\widetilde{\phi}(t)>t$. We will show that this is also true for the ironed function $\widetilde{\phi}^+(t)$. In fact, $t_C$ must lie in a non-ironed interval. Otherwise, if $t_C$ is in an ironed interval $(a,b)$, then at least one of the points $(a, \widetilde{H}(a))$ and $(b, \widetilde{H}(b))$ lies strictly below the tangent line of $\widetilde{H}$ at the point $(z_C, \widetilde{H}(z_C))$, where $z_C=F(t_C)$ and $\widetilde{H}(\cdot)$ is the corresponding function when ironing $\widetilde{\phi}(t)$. Assume, without loss of generality, that this point is $(a, \widetilde{H}(a))$. Then there exists $s\in (a, z_C)$ such that $\widetilde{H}'(s)=\frac{\widetilde{H}(z_C)-\widetilde{H}(a)}{z_C-a}$. This means $\widetilde{H}'(s)> \widetilde{H}'(z_C)$, since $(a, \widetilde{H}(a))$ is strictly below the tangent line at $(z_C, \widetilde{H}(z_C))$. This contradicts the fact that $\widetilde{\phi}(t)<t$ for all $t<t_C$, implying that $t_C$ is in a non-ironed interval. Also, we have $\widetilde{\phi}^+(t_C)=t_C$. For any $t<t_C$, if $t$ is in a non-ironed interval, then $\widetilde{\phi}^+(t)=\widetilde{\phi}(t)<t$. If $t$ is in an ironed interval $I=(a,b)$, then we know that $t_C\not\in I$ as $t_C$ is in a non-ironed interval.
Since $\widetilde{L}(\cdot)$ is linear in $I$, we have $\widetilde{\phi}^+(t)=\widetilde{\phi}^+(a)=\widetilde{\phi}(a)<a<t$. The case where $t>t_C$ follows from similar arguments. For any $t<t_C$, let $t'$ be any number in the interval $[\widetilde{\phi}^+(t), t]$. Thus $\widetilde{\phi}^+(t)\ge \widetilde{\phi}^+(t')$. And \begin{align*} \widetilde{p}(t)-\widetilde{p}(t')=&\int_{q\in Q} g(q) \widetilde{\pi}(q,t)v(q,t)\,\mathrm{d} q -\int_{q\in Q} g(q) \widetilde{\pi}(q,t')v(q,t')\,\mathrm{d} q - \int_{t'}^{t} P_{\widetilde{\pi}}(x) \,\mathrm{d} x\\ =&\int_{q:\rho(q)\ge \widetilde{\theta}_t}g(q)v(q,t)\,\mathrm{d} q-\int_{q:\rho(q)\ge \widetilde{\theta}_{t'}}g(q)v(q,t')\,\mathrm{d} q- \int_{t'}^{t} P_{\widetilde{\pi}}(x) \,\mathrm{d} x. \end{align*} When $\rho(q)\ge \widetilde{\theta}_{t}=-\widetilde{\phi}^+(t)$, we have $v(q,t')=v_1(q)[t'+\rho(q)]\ge v_1(q)[t'-\widetilde{\phi}^+(t)]\ge 0$, where the last inequality is because of the choice of $t'$. So the second term in the above equation satisfies: \begin{align*} \int_{q:\rho(q)\ge \widetilde{\theta}_{t'}}g(q)v(q,t')\,\mathrm{d} q=&\int_{q:\rho(q)\ge \widetilde{\theta}_{t}}g(q)v(q,t')\,\mathrm{d} q-\int_{q:\widetilde{\theta}_{t}\le\rho(q)\le \widetilde{\theta}_{t'}}g(q)v(q,t')\,\mathrm{d} q\\ \le&\int_{q:\rho(q)\ge \widetilde{\theta}_{t}}g(q)v(q,t')\,\mathrm{d} q. \end{align*} Thus, \begin{align*} \widetilde{p}(t)-\widetilde{p}(t')\ge&\int_{q:\rho(q)\ge \widetilde{\theta}_t}g(q)v(q,t)\,\mathrm{d} q-\int_{q:\rho(q)\ge \widetilde{\theta}_{t}}g(q)v(q,t')\,\mathrm{d} q- \int_{t'}^{t} P_{\widetilde{\pi}}(x) \,\mathrm{d} x\\ =&\int_{q:\rho(q)\ge \widetilde{\theta}_t}g(q)v_1(q)(t-t')\,\mathrm{d} q-\int_{t'}^{t} P_{\widetilde{\pi}}(x) \,\mathrm{d} x\\ =&(t-t')P_{\widetilde{\pi}}(t)-\int_{t'}^{t} P_{\widetilde{\pi}}(x) \,\mathrm{d} x\\ \ge &0, \end{align*} where the last inequality is due to the monotonicity of $P_{\widetilde{\pi}}(t)$. Therefore, the payment function $\widetilde{p}(t)$ is monotone non-decreasing in the interval $[\widetilde{\phi}^+(t),t]$. Since the set of intervals $\{[\widetilde{\phi}^+(t),t]\mid t\in T\}$ covers $[t_1,t_C]$, we conclude that $\widetilde{p}(t)$ is monotone non-decreasing in $[t_1,t_C]$. Using similar analyses, we can show that $\widetilde{p}(t)$ is monotone non-increasing in the interval $[t_C,t_2]$. Therefore, to prove that $\widetilde{p}(t)\ge 0$ for all $t\in T$, it suffices to show that $\widetilde{p}(t_1)\ge0$ and $\widetilde{p}(t_2)\ge 0$. Indeed, we have \begin{align*} \widetilde{p}(t_1) = \int_{q \in Q} \widetilde{\pi}(q,t_1) g(q) v(q,t_1)\, \mathrm{d} q - u(t_1) = \int_{q:\rho(q)\geq\widetilde{\theta}_{t_1}} g(q) v(q,t_1)\, \mathrm{d} q \geq 0. \end{align*} The inequality holds because when $\rho(q)\ge\widetilde{\theta}_{t_1}=-\widetilde{\phi}^+(t_1)\ge -t_1$, we have $v(q,t_1)=v_1(q)[t_1+\rho(q)]\ge 0$. And \begin{align*} \widetilde{p}(t_2) =& \int_{q \in Q} \widetilde{\pi}(q,t_2) g(q) v(q,t_2)\, \mathrm{d} q - u(t_2)\\ =&\int_{q:\rho(q) \geq \widetilde{\theta}_{t_2} } g(q) v_1(q)[t_2+\rho(q)]\, \mathrm{d} q -\int_{q \in Q} g(q) v_1(q)[t_2 + \rho(q)]\,\mathrm{d} q\\ =&-\int_{q:\rho(q) < \widetilde{\theta}_{t_2} } g(q) v_1(q)[t_2+\rho(q)]\, \mathrm{d} q\\ \ge& 0, \end{align*} where the inequality is because $\rho(q) < \widetilde{\theta}_{t_2}=-\widetilde{\phi}^+(t_2)\le -t_2$ implies $t_2+\rho(q)<0$. \end{proof} Now we are ready to prove the regular case of Lemma \ref{lem:case_3}.
\begin{proof}[Proof of Lemma \ref{lem:case_3} (regular case)] Different from the proofs of Lemmas \ref{lem:optimal-case1} and \ref{lem:optimal-case2}, in this proof we derive a different revenue function by using both $t_1$ and $t_2$ as reference points. Let $\beta\in[0,1]$ be a real number, and write the revenue as a convex combination of Equations \eqref{eq:revenue-1} and \eqref{eq:revenue-2}: \begin{align*} REV(\pi, p)=&\beta \left[\int_{q \in Q} g(q)\int_{t_1}^{t_2} f(t)\pi(q,t) v_1(q) \left[\underline{\phi}(t) + \rho(q)\right]\,\mathrm{d} t \, \mathrm{d} q -u(t_1) \right] \\ &+ (1-\beta) \left[\int_{q \in Q} g(q) \int_{t_1}^{t_2} f(t)\pi(q,t)v_1(q) \left[ \bar{\phi}(t) + \rho(q)\right]\,\mathrm{d}t \mathrm{d}q -u(t_2) \right] \\ = & \int_{t_1}^{t_2}\int_{q\in Q} \left[\beta \underline{\phi}(t) + (1-\beta)\bar{\phi}(t) + \rho(q)\right] \pi(q,t) f(t) g(q) v_1(q)\,\mathrm{d} q\mathrm{d} t - \beta u(t_1) - (1-\beta)u(t_2). \end{align*} Since Equations \eqref{eq:revenue-1} and \eqref{eq:revenue-2} are just different representations of the revenue of any mechanism $(\pi,p)$, the above equation holds for any $\beta$. Choose $\beta$ to be the constant $C$ described in Lemma \ref{lem:case_3}, and we have: \begin{gather} \label{eq:revenue-3} REV(\pi, p)=\int_{t_1}^{t_2}\int_{q\in Q} \left[\widetilde{\phi}(t) + \rho(q)\right] \pi(q,t) f(t) g(q) v_1(q)\,\mathrm{d} q\mathrm{d} t - C u(t_1) - (1-C)u(t_2). \end{gather} Now consider the mechanism $(\widetilde{\pi}, \widetilde{p})$. According to Lemma \ref{lem:case_3_feasible}, this mechanism is feasible. Since the function $\widetilde{\phi}(t)$ is regular, we have $\widetilde{\phi}^+(t)=\widetilde{\phi}(t)$ for all $t\in T$. Mechanism $(\widetilde{\pi}, \widetilde{p})$ clearly maximizes the first term of Equation \eqref{eq:revenue-3}, as $\widetilde{\pi}(q, t)=1$ whenever $\rho(q)\ge \widetilde{\theta}_t= -\widetilde{\phi}^+(t)=-\widetilde{\phi}(t)$ and $0$ whenever $\rho(q)< \widetilde{\theta}_t$. Furthermore, according to the proof of Lemma \ref{lem:case_3_feasible}, we have $u(t_1)=0$ and $u(t_2)=v(t_2)$. These also maximize the last two terms of Equation \eqref{eq:revenue-3}, as the feasibility constraint \eqref{eq:ir-t2} requires $u(t_1)\ge0$ and $u(t_2)\ge v(t_2)$. Therefore, we can conclude that the mechanism $(\widetilde{\pi}, \widetilde{p})$ maximizes Equation \eqref{eq:revenue-3}, hence is optimal. \end{proof} To prove the irregular case, we still use the ironing trick, and show that the mechanism $(\widetilde{\pi}, \widetilde{p})$ (defined according to $\widetilde{\phi}^+(t)$) achieves the same revenue as the entry-wise maximization mechanism (though infeasible) defined according to the non-ironed $\widetilde{\phi}(t)$. \begin{proof}[Proof of Lemma \ref{lem:case_3} (irregular case)] Let $(\pi, p)$ be any feasible mechanism. Let $\widetilde{H}(\cdot),\widetilde{L}(\cdot),\widetilde{h}(\cdot),\widetilde{l}(\cdot)$ be the corresponding functions when ironing the function $\widetilde{\phi}(t)$. Applying Equation \eqref{eq:revenue-3} to the mechanism $(\pi, p)$, we have \begin{gather*} REV(\pi, p)=\int_{t_1}^{t_2}\int_{q\in Q} \left[\widetilde{\phi}(t) + \rho(q)\right] \pi(q,t) f(t) g(q) v_1(q)\,\mathrm{d} q\mathrm{d} t - C u(t_1) - (1-C)u(t_2). \end{gather*} By definition, we have $\widetilde{h}(F(t))=\widetilde{\phi}(t)$ and $\widetilde{l}(F(t))=\widetilde{\phi}^+(t)$.
So the first term in the above equation can be written as \begin{align*} &\int_{t_1}^{t_2}\int_{q\in Q} \left[\widetilde{\phi}(t) + \rho(q)\right] \pi(q,t) f(t) g(q) v_1(q)\,\mathrm{d} q\mathrm{d} t\\ =&\int_{t_1}^{t_2}\int_{q\in Q} \left[\widetilde{\phi}^+(t) + \rho(q)\right] \pi(q,t) f(t) g(q) v_1(q)\,\mathrm{d} q\mathrm{d} t \\ &+\int_{t_1}^{t_2}\int_{q\in Q} \left[\widetilde{h}(F(t)) - \widetilde{l}(F(t))\right] \pi(q,t) f(t) g(q) v_1(q)\,\mathrm{d} q\mathrm{d} t. \end{align*} Using integration by parts, we can simplify the second term as follows: \begin{align*} & \int_{t_1}^{t_2}\int_{q\in Q} \left[\widetilde{h}(F(t)) - \widetilde{l}(F(t))\right] \pi(q,t) f(t) g(q) v_1(q)\,\mathrm{d} q\mathrm{d} t \\ =& \int_{t_1}^{t_2} \left[\widetilde{h}(F(t))- \widetilde{l}(F(t))\right] P_{\pi}(t) \,\mathrm{d} F(t) \\ =& \left.\left[\widetilde{H}(F(t))- \widetilde{L}(F(t))\right] P_{\pi}(t) \right|_{t_1}^{t_2} - \int_{t_1}^{t_2} \left[\widetilde{H}(F(t))- \widetilde{L}(F(t))\right] \,\mathrm{d} P_{\pi}(t). \end{align*} Because $\widetilde{L}$ is the ``convex hull'' of $\widetilde{H}$, we have $\widetilde{L}(0) = \widetilde{H}(0)$ and $\widetilde{L}(1) = \widetilde{H}(1)$. Thus the first term above is simply $0$, and we have \begin{align} REV(\pi, p)=&\int_{t_1}^{t_2}\int_{q\in Q} \left[\widetilde{\phi}(t) + \rho(q)\right] \pi(q,t) f(t) g(q) v_1(q)\,\mathrm{d} q\mathrm{d} t- C u(t_1) - (1-C)u(t_2)\nonumber\\ = &\int_{t_1}^{t_2}\int_{q\in Q} \left[\widetilde{\phi}^+(t) + \rho(q)\right] \pi(q,t) f(t) g(q) v_1(q)\,\mathrm{d} q\mathrm{d} t \nonumber\\ & - \int_{t_1}^{t_2} \left[\widetilde{H}(F(t))- \widetilde{L}(F(t))\right] \,\mathrm{d} P_{\pi}(t)- C u(t_1) - (1-C)u(t_2)\label{eq:new-rev-3} \end{align} Now consider the mechanism $(\widetilde{\pi}, \widetilde{p})$, which is feasible according to Lemma \ref{lem:case_3_feasible}. This mechanism clearly maximizes the first term in Equation \eqref{eq:new-rev-3}, as $\widetilde{\pi}(q, t)=1$ if and only if $\widetilde{\phi}^+(t) + \rho(q)\ge 0$. We also have $u(t_1)=0$ and $u(t_2)=v(t_2)$ as shown in the proof of Lemma \ref{lem:case_3_feasible}, which also optimizes the last two terms, as constraint \eqref{eq:ir-t2} requires $u(t_1)\ge 0$ and $u(t_2)\ge v(t_2)$. As for the second term, note that $\widetilde{H}(F(t))- \widetilde{L}(F(t))\ge 0$ by definition, and $\mathrm{d} P_{\pi}(t)\ge0$ for any feasible mechanism; thus the second term is always non-negative. However, we claim that with the mechanism $(\widetilde{\pi}, \widetilde{p})$, this term is actually $0$. The only interesting case is when $\widetilde{H}(F(t))- \widetilde{L}(F(t))\ne 0$. In this case, $t$ must lie in an ironed interval $I$. Thus $\widetilde{L}(z)$ is linear in the interval $I$, where $z=F(t)$. This implies that $\widetilde{\phi}^+(t)=\widetilde{l}(z)=\widetilde{L}'(z)$ is constant. So \begin{gather*} P_{\widetilde{\pi}}(t)=\int_{ q \in Q}\widetilde{\pi}(q,t)g(q)v_1(q)\,\mathrm{d} q=\int_{ q :\rho(q)\ge -\widetilde{\phi}^+(t)}g(q)v_1(q)\,\mathrm{d} q \end{gather*} is also constant in the interval $I$, which leads to $\mathrm{d} P_{\widetilde{\pi}}(t)$ being $0$ there. Therefore, the mechanism $(\widetilde{\pi}, \widetilde{p})$ optimizes all four terms in Equation \eqref{eq:new-rev-3} simultaneously, and is therefore the optimal feasible mechanism. \end{proof}
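\noindent For readers who wish to experiment with the ironing step, the following sketch uses a purely hypothetical irregular virtual value (with $t$ uniform on $[0,1]$, so that $F(t)=t$ and $\widetilde{h}=\widetilde{\phi}$): it computes $\widetilde{H}$, its convex hull $\widetilde{L}$, and the ironed $\widetilde{\phi}^+=\widetilde{L}'$, and verifies that $\widetilde{\phi}^+$ is monotone non-decreasing.

\begin{verbatim}
import numpy as np

# Hypothetical irregular virtual value on the quantile scale z = F(t).
z = np.linspace(0.0, 1.0, 1001)
phi = z + 0.3 * np.sin(4 * np.pi * z)        # not monotone, i.e. irregular
H = np.concatenate([[0.0], np.cumsum((phi[1:] + phi[:-1]) / 2 * np.diff(z))])

# Lower convex hull of the points (z, H): Andrew's monotone chain.
hull = []
for pt in zip(z, H):
    while len(hull) >= 2:
        (x1, y1), (x2, y2) = hull[-2], hull[-1]
        if (x2 - x1) * (pt[1] - y1) - (y2 - y1) * (pt[0] - x1) <= 0:
            hull.pop()          # hull[-1] lies on/above the chord, drop it
        else:
            break
    hull.append(pt)
hx, hy = map(np.array, zip(*hull))

L = np.interp(z, hx, hy)                     # L = conv(H)
phi_plus = np.gradient(L, z)                 # ironed phi^+ = L'(z)
print(np.all(np.diff(phi_plus) >= -1e-8))    # phi^+ is monotone
\end{verbatim}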
\section{The Discrete Case} \begin{definition}[Upper/Lower-Boundary Virtual Value Function] We call $\underline{\varphi}(t_k) = t_k - (t_{k+1}-t_{k})\frac{1-F(t_k)}{f(t_k)}$ the \emph{lower-boundary} virtual value function and call $\bar{\varphi}(t_k) = t_k + (t_k - t_{k-1})\frac{F(t_{k-1})}{f(t_k)} $ the \emph{upper-boundary} virtual value function. We say a virtual value function, upper-boundary or lower-boundary, is regular if it is monotone non-decreasing in $t$. \end{definition} \begin{theorem}\label{dis-theorem} \begin{enumerate} \item If $v(t_N) < \max(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{q:\rho(q) \geq \underline{\theta}(t_{i-1}) }g(q)v_1(q)$ and $\underline{\varphi}(t_k)$ is monotone non-decreasing, the optimal information selling mechanism is a threshold mechanism on $\rho(q)$ with threshold $\underline{\theta}(t_k) = -\underline{\varphi}(t_k) $ for each type $t$. The payment is determined by the following identity and is monotone non-decreasing in $t$: $$p(t_k) = \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1})$$ \item If $v(t_N) \geq \max(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{q:\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q)$ and $\bar{\varphi}(t_k)$ is monotone non-decreasing, the optimal information selling mechanism is a threshold mechanism on $\rho(q)$ with threshold $\theta(t_k) = -\bar{\varphi}(t_k) $ for each type $t$. The payment is determined by the following identity and is monotone non-increasing in $t$: $$p(t_k) = \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) -v(t_N) + \sum_{i=k+1}^N(t_i - t_{i-1})P(t_{i})$$ \item For $v(t_N)$ not covered by the above two cases, i.e., $ \max(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{q:\rho(q) \geq \underline{\theta}(t_{i-1})} g(q)v_1(q) \leq v(t_N) < \max(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{q:\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q)$ (or equivalently, $ - \sum_{i=2}^N(t_i - t_{i-1})\sum_{q:\rho(q) < \underline{\theta}(t_{i-1}) }g(q)v_1(q) \leq v(t_1) < - \sum_{i=2}^N(t_i - t_{i-1})\sum_{q:\rho(q) < \bar{\theta}(t_{i}) }g(q)v_1(q)$), and if $\widetilde{\varphi}_q(t_i) = f(t_i) \frac{\underline{\varphi}(t_i) + \rho(q)}{t_{i+1}-t_{i}}$ is non-decreasing in $t$ for every $q$, the optimal mechanism is a threshold mechanism on $\rho(q)$ with threshold $\theta(t_i) = -t_i+ (t_{i+1}-t_{i})\frac{C-F(t_i)}{f(t_i)} $, where $C \in (0,1]$ is a constant that satisfies $\sum_{i=1}^{N}\sum_{q:\, \widetilde{\varphi}_q(t_i) + 1 \geq C } g(q) v_1(q) (t_{i+1}-t_{i}) = v(t_N)$. The payment function is \begin{gather*} p(t_k) = \begin{cases} \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) & \text{for } F(t_k) \leq C\\ \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) -v(t_N) + \sum_{i=k+1}^N(t_i - t_{i-1})P(t_{i}) & \text{for } F(t_k) > C \end{cases} \end{gather*} where $p(t_k)$ is monotone non-decreasing in $t$ when $F(t_k) \leq C $ and monotone non-increasing when $F(t_k) > C$. \end{enumerate} \end{theorem}
In the tables below, we work through an example to show how this mechanism operates. Define $v_1(q) = q$ and $v_0(q) = -6$, so that $v(q,t) = qt - 6$, and let both $t$ and $q$ be uniformly distributed. Recall that $\rho(q) = \frac{v_0(q)}{v_1(q)}$. We then have the following value table. \begin{table}[!htb] \begin{minipage}{.5\linewidth} \caption{Value table} \centering \begin{tabular}{|c| c| c| c|} \hline & $t_1 = 3$ & $t_2 = 4 $ & $t_3 = 5$ \\ [0.5ex] \hline $q_1 = 1, \; \rho(q_1) = -6$ & -3 & -2 & -1 \\ \hline $q_2 = 2, \; \rho(q_2) = -3$ & 0 & 2 & 4 \\ \hline $q_3 = 3, \; \rho(q_3) = -2$ & 3 & 6 & 9 \\ \hline \end{tabular} \end{minipage}% \begin{minipage}{.5\linewidth} \centering \caption{Threshold table} \begin{tabular}{|c| c| c| c|} \hline & $t_1 = 3$ & $t_2 = 4 $ & $t_3 = 5$ \\ [0.5ex] \hline $\bar{\theta}(t_k)$& -3 & -5 & -7 \\ [0.5ex] \hline \end{tabular} \end{minipage} \end{table} This is an instance of case 2 of Theorem \ref{dis-theorem}. Based on $\bar{\theta}(t_k) = - \bar{\varphi}(t_k) = -\left( t_k + (t_k - t_{k-1})\frac{F(t_{k-1})}{f(t_k)} \right)$, we obtain the threshold table. Under this threshold policy, if $\rho(q)$ passes the threshold, the recommendation policy recommends \emph{purchase} with probability 1; otherwise, with probability 0. Based on the payment function $p(t_k) = \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) -v(t_N) + \sum_{i=k+1}^N(t_i - t_{i-1})\sum_{q\in Q} g(q) \pi(q,t_i)v_1(q) $, the optimal recommendation policy and the payment are the following. \begin{table}[!htb] \begin{minipage}{.5\linewidth} \caption{Recommendation Policy} \centering \begin{tabular}{|c| c| c| c|} \hline & $t_1 = 3$ & $t_2 = 4 $ & $t_3 = 5$ \\ [0.5ex] \hline $q_1 = 1, \; \rho(q_1) = -6$ & 0 & 0 & 1 \\ \hline $q_2 = 2, \; \rho(q_2) = -3$ & 1 & 1 & 1 \\ \hline $q_3 = 3, \; \rho(q_3) = -2$ & 1 & 1 & 1 \\ \hline \end{tabular} \end{minipage}% \begin{minipage}{.5\linewidth} \centering \caption{Payment} \begin{tabular}{|c| c| c| c|} \hline & $t_1 = 3$ & $t_2 = 4 $ & $t_3 = 5$ \\ [0.5ex] \hline $p(t)$& $\frac{2}{3}$ & $\frac{2}{3}$ & 0 \\ [0.5ex] \hline \end{tabular} \end{minipage} \end{table} We can see that although $v(q_1,t_3) = -1$, the optimal mechanism recommends \emph{purchase} without charging $t_3$ anything, since $\rho(q_1)$ passes the threshold defined on the virtual value $\bar{\varphi}(t_3)$. Also, types $t_1$ and $t_2$ receive the same recommendation policy and therefore pay the same amount, in line with Corollary \ref{same-payment}.
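\noindent The following short script reproduces the tables above; the distributions and values are exactly those of the example, and only the vectorized bookkeeping (array names, the convention $t_0=t_1$) is ours.

\begin{verbatim}
import numpy as np

# Example above: t, q in {3,4,5} x {1,2,3}, uniform f and g,
# v1(q) = q, v0(q) = -6; the instance falls in case 2 of the theorem.
t = np.array([3.0, 4.0, 5.0]); f = np.full(3, 1/3); F = np.cumsum(f)
q = np.array([1.0, 2.0, 3.0]); g = np.full(3, 1/3)
rho = -6.0 / q                                     # [-6, -3, -2]

# Upper-boundary virtual value (t_0 := t_1, so the k = 1 correction
# term vanishes) and the induced thresholds.
t_prev = np.array([t[0], t[0], t[1]])
F_prev = np.array([F[0], F[0], F[1]])
phi_bar = t + (t - t_prev) * F_prev / f            # [3, 5, 7]
theta = -phi_bar                                   # [-3, -5, -7]

pi = (rho[:, None] >= theta[None, :]).astype(float)  # rows q, cols t
P = (pi * (g * q)[:, None]).sum(axis=0)              # P(t_k)
v_tab = np.outer(q, t) - 6.0                          # v(q, t)
v_tN = (g * v_tab[:, -1]).sum()                       # v(t_N) = 4

# p(t_k) = sum_q g pi v(q,t_k) - v(t_N) + sum_{i>k} (t_i - t_{i-1}) P(t_i)
tail = np.append(np.cumsum((np.diff(t) * P[1:])[::-1])[::-1], 0.0)
p = (pi * g[:, None] * v_tab).sum(axis=0) - v_tN + tail
print(theta)   # [-3. -5. -7.]
print(pi)      # matches the recommendation table
print(p)       # [0.6667 0.6667 0.    ]
\end{verbatim}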
Now we extend our theorem to the discrete case \cite{soton263443}, where things are somewhat different; for example, $P(t)$ is no longer the derivative of the utility function $u(t)$, because $t$ is no longer continuous. We first introduce the notation. We still model the value function as $v(q,t) = v_1(q)t+v_0(q)$, which is an increasing function of $q$ and $t$. The value $q$ is drawn from a finite set $Q = \{ q_i \mid i = 1,\dots,M \}$ according to a distribution $g$, where $q_1 < q_2 < \dots < q_M$ and $\Pr[q = q_i ] = g(q_i)$. The value $t$ is drawn from a finite set $T = \{ t_i \mid i = 1,\dots,N \}$ according to a distribution $f$, where $t_1 < t_2 < \dots < t_N$ and $\Pr[t = t_i ] = f(t_i)$. Now we define the objective and the constraints; they are the same as before, except that summations replace integrals. \begin{equation}\label{dis-cons:obj} \text{ Seller's Objective: } \quad \max_{\pi, p} \sum_{t\in T} f(t) p(t) \end{equation} \begin{equation}\label{dis-cons:obedience} \text{Obedience: } \quad \sum_{q \in Q} \pi(q, t) \cdot g(q) v(q,t) \geq \max \{ 0, v(t) \}, \, \, \, \forall t \in [T] \end{equation} \begin{equation}\label{dis-cons:IC} \text{IC: } \quad \sum_{q \in Q} \pi(q, t) \cdot g(q) v(q,t) - p(t) \geq \sum_{q \in Q} \pi(q, t') \cdot g(q) v(q,t) - p(t'), \, \, \, \forall t, t' \in [T] \end{equation} \begin{equation}\label{dis-cons:IR} \text{IR Constraint: } \quad \sum_{q \in Q} \pi(q, t) \cdot g(q) v(q,t) - p(t) \geq \max \{ 0, v(t) \}, \, \, \, \forall t \in [T] \end{equation} Lemma \ref{lem:positive-pay} still holds here, so we can omit the obedience constraint \eqref{dis-cons:obedience}. Define the utility function: \begin{equation} \text{Utility of Buyer Type }t: \quad u(t)= \sum_{q \in Q} \left[ g(q) \pi(q,t)v(q,t) \right] -p(t) \end{equation} From \eqref{dis-cons:IC}, we have \begin{gather} \sum_{q \in Q} [\pi(q, t) - \pi(q, t')] \cdot g(q) v(q,t) \geq p(t) - p(t'), \label{dis-eq:ic1}\\ \sum_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) v(q,t') \geq p(t') - p(t).\label{dis-eq:ic2} \end{gather} Combining Inequalities \eqref{dis-eq:ic1} and \eqref{dis-eq:ic2}, we obtain the following constraint for any pair of types $t, t'$: \begin{gather*} \sum_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) v(q,t) \leq p(t') - p(t) \leq \sum_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) v(q,t') . \end{gather*} Hence \begin{eqnarray*}\label{eq:IC-outcome1} 0 &\leq& \sum_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) [v(q,t') - v(q, t)]\\ & = & [ t' - t] \sum_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) v_1(q) \end{eqnarray*} Define $$ P(t) = \sum_{q \in Q} \pi(q, t) \cdot g(q) v_1(q). $$ We thus have $(t' - t)( P(t')-P(t)) \geq 0$, which implies that $P(t)$ is non-decreasing in $t$. Moreover, \begin{align*} u(t) &= \sum_{q \in Q} \left[ g(q) \pi(q,t)v(q,t) \right] -p(t) \\ &\geq \sum_{q \in Q} \left[ g(q) \pi(q,t')v(q,t) \right] -p(t') && \mbox{by Ineq.\ \eqref{dis-eq:ic1}} \\ &= \sum_{q \in Q} \left[ g(q) \pi(q,t')v(q,t) \right] + u(t') - \sum_{q \in Q} \left[ g(q) \pi(q,t')v(q,t') \right] && \mbox{def.\ of $u(t')$}\\ &= \sum_{q \in Q} \left[ g(q) \pi(q,t')[ v(q,t) - v(q, t')] \right] + u(t') && \mbox{algebraic manipulation}\\ &= (t-t') P(t') + u(t') && \mbox{def.\ of $P(t)$ and $v(q,t)$} \end{align*} As a consequence, Inequality \eqref{dis-eq:ic1} implies $u(t) -u(t') \geq (t-t')P(t') $. Together with a similar derivation from Inequality \eqref{dis-eq:ic2}, we have the following inequalities: \begin{equation} (t-t') P(t') \le u(t)-u(t')\le (t-t') P(t), \end{equation} \begin{equation} (t_i-t_{i-1}) P(t_{i-1}) \le u(t_i)-u(t_{i-1})\le (t_i-t_{i-1}) P(t_i). \end{equation} Recall that $P(t)$ is monotone non-decreasing in $t$.
So, for any IC mechanism there exists $P^\sim(t_i) \in [P(t_{i-1}),P(t_i)]$ such that $$ u(t_k) = u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i). $$ Since $P(t)$ is monotone non-decreasing in $t$, we can denote $P^\sim(t_i)$ as $P(t^\sim_i)$ with $t^\sim_i \in [t_{i-1},t_i]$. As $ u(t) = \sum_{q \in Q} \left[ g(q) \pi(q,t)v(q,t) \right] -p(t) $, we have \begin{eqnarray*} p(t_i) &=& \sum_{q \in Q} \left[ g(q) \pi(q,t_i)v(q,t_i) \right] -u(t_i) \\ &=& -u(t_1) + \sum_{q \in Q} \left[ g(q) \pi(q,t_i)v(q,t_i) \right] - \sum_{k=2}^i(t_k-t_{k-1}) P^\sim(t_k) \end{eqnarray*} Lemma \ref{lem:positive-pay} also holds in the discrete case, so $p(t)\geq 0$ for all $t \in T$. \begin{lemma}[Characterization of Feasible Mechanisms]\label{dis-lem:feasible-M} A mechanism $(\pi, p)$ with non-negative payments is feasible if and only if it satisfies the following constraints: \begin{eqnarray}\label{dis-eq:signal-monotonicity} && P(t) \text{ is monotone non-decreasing in } t \\\label{dis-eq:buyer-utility-identify2} && u(t_k) = u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i), \, \, \, \forall t \in [T] \\\label{dis-eq:ir-t2} & & u(t_N) \geq v(t_N), \, \, \, u(t_1) \geq 0 \\ \label{dis-eq:non-negativity} & & p(t) \geq 0, \, \, \forall t \in [T]. \end{eqnarray} \end{lemma} \begin{proof} The necessity of these constraints comes from the above derivation and the IR constraints for types $t_N$ and $t_1$. We now prove that they are sufficient, i.e., that Constraints \eqref{dis-eq:signal-monotonicity}--\eqref{dis-eq:non-negativity} imply the Obedience, IC and IR constraints \eqref{dis-cons:obedience}, \eqref{dis-cons:IC} and \eqref{dis-cons:IR}. The IC constraint \eqref{dis-cons:IC} is equivalent to \begin{equation*} u(t) \geq u(t') + \sum_{q \in Q} \pi(q, t') \cdot g(q) [v(q,t) - v(q,t')] = u(t') + (t-t') P(t'). \end{equation*} Constraints \eqref{dis-eq:signal-monotonicity} and \eqref{dis-eq:buyer-utility-identify2} therefore imply the IC constraint \eqref{dis-cons:IC}, because for any $t_x, t_y$ with $x \leq y$ we have \begin{equation*} u(t_y) - u(t_x) = \sum_{i=x+1}^{y} (t_i-t_{i-1})P^\sim(t_i) \geq \sum_{i=x+1}^{y} (t_i-t_{i-1})P^\sim(t_{x+1}) = (t_y-t_x)P^\sim(t_{x+1}) \geq (t_y-t_x)P(t_x). \end{equation*} Similarly, when $x > y$, $$ u(t_y) - u(t_x) = - \sum_{i=y+1}^{x} (t_i-t_{i-1})P^\sim(t_i) \geq -\sum_{i=y+1}^{x} (t_i-t_{i-1})P^\sim(t_{x}) = (t_y-t_x)P^\sim(t_{x}) \geq (t_y-t_x)P(t_x). $$ The IR constraint \eqref{dis-cons:IR} is equivalent to $u(t) \geq 0$ and $u(t) \geq v(t)$. Since $P(t)\geq 0$ for all $t \in T$, Constraint \eqref{dis-eq:buyer-utility-identify2}, together with $u(t_1) \geq 0$, implies $u(t) \geq 0$ for all $t \in T$. We now leverage $u(t_N) \geq v(t_N)$ to prove $u(t) \geq v(t)$ for all $t \in T$, as follows: \begin{eqnarray*} u(t_k) &=& u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i) \\ & = & u(t_N) - \sum_{i=k+1}^N(t_i-t_{i-1}) P^\sim(t_i) \\ & \geq & v(t_N) - \sum_{i=k+1}^N(t_i-t_{i-1}) P^\sim(t_i) \\ & \geq & v(t_N) - \sum_{i=k+1}^N(t_i-t_{i-1}) P(t_i) \\ & = & \sum_{q \in Q} g(q) v_1(q)[t_N + \rho(q)] - \sum_{i=k+1}^N(t_i-t_{i-1}) \sum_{q \in Q} \pi(q, t_i) \cdot g(q) v_1(q) \\ & \geq & \sum_{q \in Q} g(q) v_1(q)[t_N + \rho(q)] - \sum_{i=k+1}^N(t_i-t_{i-1}) \sum_{q \in Q} g(q) v_1(q) \\ & = & \sum_{q \in Q} g(q) v_1(q)[t_k + \rho(q)] = v(t_k) \end{eqnarray*} Finally, the Obedience constraint \eqref{dis-cons:obedience} follows from the IR constraint \eqref{dis-cons:IR} and $p(t) \geq 0, \; \forall t \in T$.
\end{proof} Next we characterize the buyer's surplus from participating in the information selling mechanism, defined as follows: \begin{equation}\label{eq:buyer-surplus} \text{Buyer Surplus: } \quad s(t) = u(t) - \max \{ 0, v(t) \} \end{equation} That is, the buyer surplus is the difference between his utility from the information selling mechanism and his utility from directly picking the better action among $\{0,1\}$ without purchasing any information. Note that the IR constraint \eqref{dis-cons:IR} is (naturally) equivalent to $s(t) \geq 0$. Recall that the buyer of type $t$ has expected utility $v(t) = \sum_{q \in Q} v(q, t) g(q) $ for action 1 without purchasing any information. Since $v(q, t)$ is monotone non-decreasing in $t$, we know that $v(t)$ is also monotone non-decreasing. Let $\bar{t}$ be the buyer type at which $v(t)$ crosses $0$. The following lemma characterizes how the buyer's surplus changes as a function of his type. \begin{lemma}\label{lem:surplus-concave} Let $\bar{t}$ be the buyer type such that $v(\bar{t}) = \sum_{q \in Q} v(q, \bar{t}) g(q) = 0 $. In any feasible mechanism $(\pi, p)$ with non-negative payments, the buyer's surplus $s(t) $ is non-negative, monotone non-decreasing for $t \in [t_1, \bar{t}]$, and monotone non-increasing for $t \in [\bar{t}, t_N]$. \end{lemma} \begin{proof} When $t_k \leq \bar{t}$, we have $v(t_k) \leq 0$; therefore $s(t_k) = u(t_k) = u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i)$ by Equation \eqref{dis-eq:buyer-utility-identify2}. Since $u(t_1) \geq 0$ and, for all $i\geq 2$, $(t_i - t_{i-1}) \geq 0$ and $P^\sim(t_i) \geq 0$, it is easy to see that $s(t)$ is non-negative and monotone non-decreasing. When $t_k \geq \bar{t}$, we have $v(t_k) \geq 0$; therefore \begin{eqnarray*} s(t_k) &=& u(t_k) - v(t_k) \\ &=& \bigg[ u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i) \bigg] - \bigg[ \sum_{q \in Q} v_1(q) [t_k + \rho(q)] g(q) \bigg] \\ &=& \bigg[ u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i) \bigg] - \bigg[ \sum_{i=2}^k(t_i-t_{i-1})\sum_{q \in Q} v_1(q)g(q) + v(t_1)\bigg] \\ &=& \bigg[ u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) \sum_{q \in Q}\pi(q,t_i^\sim) v_1(q)g(q) \bigg] - \bigg[ \sum_{i=2}^k(t_i-t_{i-1})\sum_{q \in Q} v_1(q)g(q) + v(t_1)\bigg] \\ &=& u(t_1) - v(t_1) + \bigg[ \sum_{i=2}^k(t_i-t_{i-1}) \sum_{q \in Q}(\pi(q,t_i^\sim)-1) v_1(q)g(q) \bigg]. \\ \end{eqnarray*} Since $[\pi(q,t^\sim_i) - 1] \leq 0 $ and $v_1(q) g(q) \geq 0$, we thus have that $s(t)$ is monotone non-increasing in $t$. Consequently, $ s(t) \geq s(t_N) = u(t_N) - v(t_N) \geq 0$ by Inequality \eqref{dis-eq:ir-t2}. \end{proof} \subsection{Characterization of the Optimal Mechanism} We are now ready to characterize the optimal mechanism using Lemma \ref{dis-lem:feasible-M}. We first derive the seller's revenue as a function of the mechanism $(\pi, p)$ that is assumed to satisfy Lemma \ref{dis-lem:feasible-M}.
Define $t_0 = t_1$. We have \begin{align*} REV (\pi, p)&=\sum_{k=1}^{N} f(t_k)p(t_k)\\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] -u(t_k) \right)\\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] - \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i) -u(t_1) \right)\\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)- \sum_{k=1}^{N} \sum_{i=2}^k f(t_k)(t_i-t_{i-1}) P^\sim(t_i) -u(t_1) \\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)- \sum_{i=2}^{N} \sum_{k=i}^N f(t_k)(t_i-t_{i-1}) P^\sim(t_i) -u(t_1) \\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)- \sum_{i=2}^{N} (1-F(t_{i-1}))(t_i-t_{i-1}) P^\sim(t_i) -u(t_1) \\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)- \sum_{i=1}^{N-1} (1-F(t_{i}))(t_{i+1}-t_{i}) P^\sim(t_{i+1}) -u(t_1) \\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)- \sum_{i=1}^{N} (1-F(t_{i}))(t_{i+1}-t_{i}) P^\sim(t_{i+1}) -u(t_1) \\ &=\sum_{q\in Q} g(q)\left( \sum_{k=1}^{N} f(t_k)\left[ \pi(q,t_k)v(q,t_k)\right] \right)- \sum_{k=1}^{N} (1-F(t_{k}))(t_{k+1}-t_{k}) P^\sim(t_{k+1}) -u(t_1) \\ \end{align*} where the $i=N$ term added in the second-to-last step vanishes since $1-F(t_N)=0$. Since $P(t)$ is monotone non-decreasing in $t$ and $P^\sim(t_i) \in [P(t_{i-1}),P(t_i)]$, to maximize the revenue we should always set $P^\sim(t_i) =P(t_{i-1})$. \begin{align*} REV (\pi, p) &=\sum_{q\in Q} g(q)\left( \sum_{k=1}^{N} f(t_k)\left[ \pi(q,t_k)v(q,t_k)\right] \right)- \sum_{k=1}^{N} (1-F(t_{k}))(t_{k+1}-t_{k}) P(t_{k}) -u(t_1) \\ &=\sum_{q\in Q} g(q)\left( \sum_{k=1}^{N} f(t_k)\left[ \pi(q,t_k)v(q,t_k)\right] \right)- \sum_{k=1}^{N} (1-F(t_{k}))(t_{k+1}-t_{k}) \sum_{q\in Q} \pi(q,t_{k})g(q)v_1(q) -u(t_1) \\ &=\sum_{q\in Q} g(q)\left[ \sum_{k=1}^{N} f(t_k)\pi(q,t_k)\left( v(q,t_k) -(t_{k+1}-t_{k})v_1(q)\frac{1-F(t_k)}{f(t_k)}\right) \right] -u(t_1) \\ &=\sum_{q\in Q} g(q)\left[ \sum_{k=1}^{N} f(t_k)\pi(q,t_k) v_1(q) [\underline{\varphi}(t_k) + \rho(q)] \right] -u(t_1) \end{align*} where $\underline{\varphi}(t_k) = t_k - (t_{k+1}-t_{k})\frac{1-F(t_k)}{f(t_k)}$ is the discrete analogue of the standard virtual value function used in the classic mechanism design literature. Note that the above derivation used the equality $u(t_k) = u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i)$ to expand $u(t)$ with $u(t_1)$ as the reference point. This is also the original Myerson approach. This approach works in Myerson's optimal auction design because there the buyer's surplus equals the buyer's utility from participating in the mechanism, since the only outside option is not to purchase, resulting in utility $0$. Therefore, in Myerson's optimal auction design, $u(t_1) \geq 0$ guarantees the IR constraint, i.e., $u(t) \geq 0$, for any feasible mechanism. This, however, ceases to be true in our setup, because $s(t_1) \geq 0$ does not guarantee $s(t_N) \geq 0$. In fact, Lemma \ref{lem:surplus-concave} shows that $s(t)$ is unimodal (first non-decreasing, then non-increasing) with $s(\bar{t})$ as the maximum surplus, where $\bar{t}$ is a zero of the $v(t)$ function. Nevertheless, we know that the optimal mechanism must satisfy either $s(t_1) = 0$ or $s(t_N) = 0 $, since otherwise we can shift the entire $s(t)$ curve down by a constant --- achieved by asking each buyer type to pay the same additional amount --- until one of them reaches $0$.
The above discussion is why we have to derive another representation of the utility. Using the equalities \begin{align*} u(t_k) & = u(t_1) + \sum_{i=2}^k(t_i-t_{i-1})P^\sim(t_i), \\ u(t_N) & = u(t_1) + \sum_{i=2}^N(t_i-t_{i-1})P^\sim(t_i), \end{align*} we obtain \begin{align*} u(t_k) & = u(t_N) + \sum_{i=2}^k(t_i-t_{i-1})P^\sim(t_i) - \sum_{i=2}^N(t_i-t_{i-1})P^\sim(t_i) = u(t_N) - \sum_{i=k+1}^N(t_i-t_{i-1})P^\sim(t_i), \end{align*} which expands $u(t)$ with $u(t_N)$ as the reference point. This gives us a slightly different representation of the same $REV (\pi, p)$. Set $t_0 = t_1$. The revenue derivation with $t_N$ as the reference point can be expressed as follows: \begin{align*} REV (\pi, p)&=\sum_{k=1}^{N} f(t_k)p(t_k)\\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] -u(t_k) \right)\\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] - u(t_N) + \sum_{i=k+1}^N(t_i-t_{i-1})P^\sim(t_i) \right)\\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)+ \sum_{k=1}^{N} \sum_{i=k+1}^N f(t_k)(t_i-t_{i-1}) P^\sim(t_i) -u(t_N) \\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)+ \sum_{k=1}^{N-1} \sum_{i=k+1}^N f(t_k)(t_i-t_{i-1}) P^\sim(t_i) -u(t_N) \\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)+ \sum_{i=2}^{N} \sum_{k=1}^{i-1} f(t_k)(t_i-t_{i-1}) P^\sim(t_i) -u(t_N) \\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)+ \sum_{i=2}^{N} F(t_{i-1})(t_i-t_{i-1}) P^\sim(t_i) -u(t_N) \\ &=\sum_{k=1}^{N} f(t_k)\left(\sum_{q\in Q} g(q) \left[ \pi(q,t_k)v(q,t_k)\right] \right)+ \sum_{i=1}^{N} F(t_{i-1})(t_i-t_{i-1}) P^\sim(t_i) -u(t_N) \\ &=\sum_{q\in Q} g(q)\left( \sum_{k=1}^{N} f(t_k)\left[ \pi(q,t_k)v(q,t_k)\right] \right)+ \sum_{k=1}^{N} F(t_{k-1})(t_{k}-t_{k-1}) P^\sim(t_{k}) -u(t_N) \\ \end{align*} where the $i=1$ term added in the second-to-last step vanishes since $t_0 = t_1$. Since $P(t)$ is monotone non-decreasing in $t$ and $P^\sim(t_i) \in [P(t_{i-1}),P(t_i)]$, to maximize the revenue we should always set $P^\sim(t_i) =P(t_{i})$. \begin{align*} REV (\pi, p) &=\sum_{q\in Q} g(q)\left( \sum_{k=1}^{N} f(t_k)\left[ \pi(q,t_k)v(q,t_k)\right] \right)+ \sum_{k=1}^{N} F(t_{k-1})(t_{k}-t_{k-1}) P(t_{k}) -u(t_N) \\ &=\sum_{q\in Q} g(q)\left( \sum_{k=1}^{N} f(t_k)\left[ \pi(q,t_k)v(q,t_k)\right] \right) + \sum_{k=1}^{N} F(t_{k-1})(t_{k}-t_{k-1}) \sum_{q\in Q} \pi(q,t_{k})g(q)v_1(q) -u(t_N) \\ &=\sum_{q\in Q} g(q)\left[ \sum_{k=1}^{N} f(t_k)\pi(q,t_k)\left( v(q,t_k) +(t_{k}-t_{k-1})v_1(q)\frac{F(t_{k-1})}{f(t_k)}\right) \right] -u(t_N) \\ &=\sum_{q\in Q} g(q)\left[ \sum_{k=1}^{N} f(t_k)\pi(q,t_k) v_1(q) [\bar{\varphi}(t_k) + \rho(q)] \right] -u(t_N) \end{align*} where $\bar{\varphi}(t_k) = t_k + (t_k - t_{k-1})\frac{F(t_{k-1})}{f(t_k)} $ is a transformed type function, which differs from the virtual value function in classic auction design. Combining the two revenue representations with the feasibility characterization, we obtain the following optimization problem (stated here with $t_1$ as the reference point).
\begin{align*} \qopname\relax n{max}_{\pi} & \sum_{q\in Q} g(q)\left[ \sum_{k=1}^{N} f(t_k)\pi(q,t_k) v_1(q) [\underline{\varphi}(t_k) + \rho(q)] \right] \\ \text{subject to} & \\ & P(t) \text{ is monotone non-decreasing in } t \\ & u(t_k) = u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i), \, \, \, \forall k \in [N] \\ & u(t_N) \geq v(t_N), \, \, \, u(t_1) \geq 0 \\ & p(t_k) \geq 0, \, \, \forall k \in [N]. \\ \end{align*} \begin{definition}[Upper/Lower-Boundary Virtual Value Function] We call $\underline{\varphi}(t_k) = t_k - (t_{k+1}-t_{k})\frac{1-F(t_k)}{f(t_k)}$ the \emph{lower-boundary} virtual value function and call $\bar{\varphi}(t_k) = t_k + (t_k - t_{k-1})\frac{F(t_{k-1})}{f(t_k)} $ the \emph{upper-boundary} virtual value function. We say a virtual value function, upper-boundary or lower-boundary, is regular if it is monotone non-decreasing in $t$. \end{definition} \begin{lemma}\label{dis-lem:two-simple-cases} \begin{enumerate} \item If $v(t_N) < \qopname\relax n{max}(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})P(t_{i-1}) = \qopname\relax n{max}(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \underline{\theta}(t_{i-1}) }g(q)v_1(q)$ (or equivalently, $v(t_1) < - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \underline{\theta}(t_{i-1}) }g(q)v_1(q)$) and $\underline{\varphi}(t_k)$ is monotone non-decreasing, the optimal information selling mechanism is a threshold mechanism on $\rho(q)$ with threshold $\underline{\theta}(t_k) = -\underline{\varphi}(t_k)$ for each type $t_k$. The payment is determined by the following identity and is monotone non-decreasing in $t$: $$p(t_k) = \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1})$$ \item If $v(t_N) \geq \qopname\relax n{max}(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})P(t_{i}) =\qopname\relax n{max}(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q)$ (or equivalently, $v(t_1) \geq - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \bar{\theta}(t_{i}) }g(q)v_1(q)$) and $\bar{\varphi}(t_k)$ is monotone non-decreasing, the optimal information selling mechanism is a threshold mechanism on $\rho(q)$ with threshold $\bar{\theta}(t_k) = -\bar{\varphi}(t_k)$ for each type $t_k$.
The payment is determined by the following identity and is monotone non-increasing in $t$: $$p(t_k) = \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) -v(t_N) + \sum_{i=k+1}^N(t_i - t_{i-1})P(t_{i})$$ \end{enumerate} \end{lemma} \begin{proof} \textbf{Case 1:} \textbf{Optimality:} We first prove the equivalence \begin{align*} & v(t_N) < \qopname\relax n{max}(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \underline{\theta}(t_{i-1}) }g(q)v_1(q)\\ \Longleftrightarrow \quad & v(t_1) + (t_N - t_1) \sum_{q \in Q} v_1(q) g(q) < \qopname\relax n{max} \{ 0,v(t_1) \} +\sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \underline{\theta}(t_{i-1}) }g(q)v_1(q)\\ \Longleftrightarrow \quad & v(t_1) -\qopname\relax n{max} \{0,v(t_1) \} < \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \underline{\theta}(t_{i-1}) }g(q)v_1(q) - (t_N - t_1) \sum_{q \in Q} v_1(q) g(q) \\ \Longleftrightarrow \quad & v(t_1) -\qopname\relax n{max} \{0,v(t_1) \} < \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \underline{\theta}(t_{i-1}) }g(q)v_1(q) - \sum_{i=2}^N(t_i - t_{i-1}) \sum_{q \in Q} v_1(q) g(q) \\ \Longleftrightarrow \quad & v(t_1) -\qopname\relax n{max} \{0 , v(t_1)\} < - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \underline{\theta}(t_{i-1}) }g(q)v_1(q) \\ \Longleftrightarrow \quad & v(t_1) < - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \underline{\theta}(t_{i-1}) }g(q)v_1(q) \\ \end{align*} For the last equivalence, suppose $v(t_1) \geq 0$; then the left-hand side equals $0$, and the condition would require $0 < - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \underline{\theta}(t_{i-1}) }g(q)v_1(q)$, which is impossible since $g(q)$ and $v_1(q)$ are non-negative for all $q$. Hence the condition forces $v(t_1) < 0$, in which case $v(t_1) - \qopname\relax n{max}\{0, v(t_1)\} = v(t_1)$, and the last equivalence follows. The threshold $\underline{\theta}(t) = -\underline{\varphi}(t)$ is picked such that $\pi(q,t) =1$ whenever $ \underline{\varphi}(t) + \rho(q) > 0$ and $\pi(q,t) =0$ whenever $ \underline{\varphi}(t) + \rho(q) < 0$. That is, the revenue function, expressed with $t_1$ as the reference point, is entry-wise maximized. This clearly achieves the maximum possible value in the first term of the revenue $REV(\pi, p)$ expression. The second term is maximized by choosing the minimum possible $u(t_1)$ value, i.e., $u(t_1) = 0$. With the defined payment function, we indeed choose $u(t_1) = \sum_{q\in Q} [g(q) \pi(q,t_1)v(q,t_1)] - p(t_1) = \sum_{i=2}^1(t_i - t_{i-1})P(t_{i-1}) = 0$ (an empty sum). Thus, this mechanism is optimal. \textbf{Feasibility:} Next, we argue that this choice also leads to a feasible mechanism, satisfying the characterization of Lemma \ref{lem:feasible-M}. Since the lower-boundary virtual value function $\underline{\varphi}(t)$ is monotone non-decreasing in $t$, we know that the threshold $\underline{\theta}(t) = -\underline{\varphi}(t)$ is monotone non-increasing in $t$. This implies that \begin{equation*} P(t_k) = \sum_{q \in Q} \pi(q, t_k) g(q)v_1(q) = \sum_{q: \rho(q) \geq {\underline{\theta}}(t_k)} g(q)v_1(q) \end{equation*} is monotone non-decreasing in $t$, since a larger $t$ leads to a smaller threshold $\underline{\theta}(t)$ and $g(q)$ and $v_1(q)$ are both non-negative. The utility identity follows from the definitions and the payment $p(t)$ we specified for this case: \begin{eqnarray*} u(t_k) = \sum_{q\in Q} [g(q) \pi(q,t_k)v(q,t_k)] - p(t_k) = \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) \end{eqnarray*} Thus $u(t_1) = \sum_{i=2}^1(t_i - t_{i-1})P(t_{i-1}) = 0$, which can be substituted into the above equality to obtain the utility identity.
That is, $ u(t_k) = u(t_1) + \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) = u(t_1) + \sum_{i=2}^k(t_i-t_{i-1}) P^\sim(t_i)$. For constraint \eqref{dis-eq:ir-t2}, we have \begin{align*} u(t_1) &= \sum_{q\in Q} \pi(q,t_1) g(q) v(q,t_1) - p(t_1) = 0 \\ u(t_N) &= \sum_{q \in Q} \pi(q,t_N) g(q) v(q,t_N) - p(t_N) =\sum_{i=2}^N(t_i - t_{i-1})P(t_{i-1}) \geq v(t_N) \end{align*} where the last inequality follows from the Case 1 condition together with $\qopname\relax n{max}(0,v(t_1)) = 0$ (recall that $v(t_1) < 0$); note also that $u(t_N) \geq 0$ holds trivially. Finally, we argue that the payment is non-negative. We first prove that the payment is non-decreasing in $t$. We know that the threshold $\underline{\theta}(t)$ satisfies $ \underline{\varphi}(t) + \underline{\theta}(t) = 0$. Since $\underline{\varphi}(t_k) = t_k - (t_{k+1}-t_k)\frac{1 - F(t_k)}{f(t_k)} \leq t_k$, we have $\underline{\theta}(t_k) \geq -t_k$. For any $t_k > t_j$, we have $\underline{\theta}(t_k) \leq \underline{\theta}(t_j)$. Thus, $-t_k \leq \underline{\theta}(t_k) \leq \underline{\theta}(t_j) $. \begin{eqnarray*} p(t_k) &=& \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) \\ & = & \sum_{q:\rho(q)\geq\underline{\theta}(t_k)} \big[g(q) v(q, t_j) \big] + \sum_{q\in Q} \big[g(q) \pi(q,t_k)v_1(q)(t_k-t_j) \big] - u(t_j) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i-1}) \\ & \geq & \sum_{q:\rho(q)\geq\underline{\theta}(t_j)} \big[g(q)v(q,t_j) \big] - u(t_j) + (t_k-t_j) P(t_k) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i-1}) \\ & = & \sum_{q\in Q} [g(q) \pi(q,t_j)v(q,t_j)] - u(t_j) + (t_k -t_j) P(t_k) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i-1}) \\ & =& p(t_j) + (t_k -t_j) P(t_k) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i-1}) \\ & \geq & p(t_j) \end{eqnarray*} The first inequality is due to $-t_k \leq \underline{\theta}(t_k) \leq \underline{\theta}(t_j) $, and thus $\sum_{q:\rho(q)\geq\underline{\theta}(t_k)} \big[g(q) v(q, t_j) \big]$ sums over more non-negative terms compared with $\sum_{q:\rho(q)\geq\underline{\theta}(t_j)} \big[g(q)v(q,t_j) \big] $. The last inequality is due to the monotonicity of $P(x)$ and our choice of $t_k>t_j$. Therefore, to prove the non-negativity of the payment, we only need to prove $p(t_1) \geq 0$, as follows (recall that $s(t_1) = u(t_1) = 0$ since $v(t_1) < 0$): \begin{align*} p(t_1) &= \sum_{q \in Q} \pi(q,t_1) g(q) v(q,t_1) - u(t_1) \\ &= \sum_{q \in Q} \pi(q,t_1) g(q) v(q,t_1) - s(t_1) \\ &= \sum_{q \in Q} \pi(q,t_1) g(q) v(q,t_1) \\ &= \sum_{q:\rho(q)\geq\underline{\theta}(t_1)} g(q) v(q,t_1) \\ & \geq 0 \end{align*} The last inequality is because $\underline{\theta}(t_1) \geq -t_1$, so every term in the final sum satisfies $v(q,t_1) = v_1(q)[t_1 + \rho(q)] \geq 0$.
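Before turning to Case 2, a quick numerical sanity check of the Case 1 construction may help. The following Python sketch uses hand-picked illustrative numbers (not from the paper) that satisfy the Case 1 condition, and verifies the claimed properties of $P$ and $p$:
\begin{verbatim}
import numpy as np

# Illustrative instance chosen so that the Case 1 condition holds.
t = np.array([1.0, 1.1, 1.2, 1.3, 1.4]); N = len(t)
f = np.full(N, 0.2); F = np.cumsum(f)
g = np.array([0.3, 0.4, 0.3])
v1 = np.array([0.5, 1.0, 1.5])
rho = np.array([-5.0, -0.7, -0.5])        # v(q,t) = v_1(q) (t + rho(q))

t_next = np.append(t[1:], t[-1])          # t_{N+1} = t_N
phi_low = t - (t_next - t) * (1 - F) / f  # lower-boundary virtual value
assert np.all(np.diff(phi_low) >= 0)      # regular

pi = (rho[:, None] >= -phi_low[None, :]).astype(float)   # threshold policy
P = (pi * (g * v1)[:, None]).sum(axis=0)
v_qt = v1[:, None] * (t[None, :] + rho[:, None])
u = np.concatenate([[0.0], np.cumsum(np.diff(t) * P[:-1])])
p = (pi * (g[:, None] * v_qt)).sum(axis=0) - u

vN = (g * v1 * (t[-1] + rho)).sum()
v_1 = (g * v1 * (t[0] + rho)).sum()
assert vN < max(0.0, v_1) + u[-1]         # Case 1 condition
assert np.all(np.diff(P) >= 0)            # P monotone non-decreasing
assert np.all(p >= 0) and np.all(np.diff(p) >= -1e-12)
print(p)   # approx [0.225 0.385 0.385 0.385 0.385]
\end{verbatim}
Note that the payments coincide for the types whose recommendation policies coincide, as Corollary \ref{dis-same-payment} below predicts.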
\textbf{Case 2:} \textbf{Optimality:} We first prove the equivalence \begin{align*} & v(t_N) \geq \qopname\relax n{max}(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q)\\ \Longleftrightarrow \quad & v(t_1) + (t_N - t_1) \sum_{q \in Q} v_1(q) g(q) \geq \qopname\relax n{max} \{ 0,v(t_1) \} +\sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q)\\ \Longleftrightarrow \quad & v(t_1) -\qopname\relax n{max} \{0,v(t_1) \} \geq \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q) - (t_N - t_1) \sum_{q \in Q} v_1(q) g(q) \\ \Longleftrightarrow \quad & v(t_1) -\qopname\relax n{max} \{0,v(t_1) \} \geq \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q) - \sum_{i=2}^N(t_i - t_{i-1}) \sum_{q \in Q} v_1(q) g(q) \\ \Longleftrightarrow \quad & v(t_1) -\qopname\relax n{max} \{0 , v(t_1)\} \geq - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \bar{\theta}(t_{i}) }g(q)v_1(q) \\ \Longleftrightarrow \quad & v(t_1) \geq - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \bar{\theta}(t_{i}) }g(q)v_1(q) \\ \end{align*} The last equivalence holds because if $v(t_1) \geq 0$, the right-hand side is non-positive and both inequalities hold trivially, while if $v(t_1) < 0$, the two inequalities coincide. The threshold $\bar{\theta}(t) = -\bar{\varphi}(t)$ is picked such that $\pi(q,t) =1$ whenever $ \bar{\varphi}(t) + \rho(q) > 0$ and $\pi(q,t) =0$ whenever $ \bar{\varphi}(t) + \rho(q) < 0$. That is, the revenue function, expressed with $t_N$ as the reference point, is entry-wise maximized. This clearly achieves the maximum possible value in the first term of the revenue $REV(\pi, p)$ expression. The second term is maximized by choosing the minimum possible $u(t_N)$ value, i.e., $u(t_N) = v(t_N)$. With the defined payment function, we indeed choose $u(t_N) = \sum_{q\in Q} [g(q) \pi(q,t_N)v(q,t_N)] - p(t_N) = v(t_N) - \sum_{i=N+1}^N(t_i - t_{i-1})P(t_{i}) = v(t_N)$ (an empty sum). Thus, this mechanism is optimal. \textbf{Feasibility:} Next, we argue that this choice also leads to a feasible mechanism, satisfying the characterization of Lemma \ref{lem:feasible-M}. Since the upper-boundary virtual value function $\bar{\varphi}$ is monotone non-decreasing, we know that the threshold $\bar{\theta}(t) = -\bar{\varphi}(t)$ is monotone non-increasing in $t$. This implies that \begin{equation*} P(t_k) = \sum_{q \in Q} \pi(q, t_k) g(q)v_1(q) = \sum_{q: \rho(q) \geq \bar{\theta}(t_k)} g(q)v_1(q) \end{equation*} is monotone non-decreasing in $t$, since a larger $t$ leads to a smaller $\bar{\theta}(t)$ and thus a larger summation range for $q$. We now prove the utility identity. By the definition of $u(t)$ and our choice of payment function, we have \begin{eqnarray*} u(t_k) = \sum_{q\in Q} [g(q) \pi(q,t_k)v(q,t_k)] - p(t_k) = v(t_N) - \sum_{i=k+1}^N(t_i - t_{i-1})P(t_{i}). \end{eqnarray*} Thus $u(t_N) = v(t_N) - \sum_{i=N+1}^N(t_i - t_{i-1})P(t_{i}) = v(t_N)$, which can be substituted into the above equality to obtain the utility identity, as required by Lemma \ref{dis-lem:feasible-M}. For constraint \eqref{dis-eq:ir-t2}, we have \begin{align*} u(t_N) &= \sum_{q \in Q} \pi(q,t_N) g(q) v(q,t_N) - p(t_N) = v(t_N) \\ u(t_1) &= \sum_{q \in Q} \pi(q,t_1) g(q) v(q,t_1) - p(t_1) \\ &=v(t_N) - \sum_{i=2}^N(t_i - t_{i-1})P(t_{i})\\ &\geq \qopname\relax n{max}(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q) - \sum_{i=2}^N(t_i - t_{i-1})P(t_{i})\\ &= \qopname\relax n{max}(0,v(t_1)) +\sum_{i=2}^N(t_i - t_{i-1})P(t_{i}) - \sum_{i=2}^N(t_i - t_{i-1})P(t_{i})\\ &\geq 0\\ \end{align*} Finally, we argue that the payment is non-negative. We first prove that the payment is non-increasing in $t$ for any $t$ with $F(t)>0$.
We know that the threshold $\bar{\theta}(t)$ satisfies $ \bar{\varphi}(t) + \bar{\theta}(t) = 0$. Since $\bar{\varphi}(t_k) = t_k + (t_k - t_{k-1})\frac{F(t_{k-1})}{f(t_k)} > t_k$ for our choice of $t_k$ with $F(t_{k-1})>0$ and $t_k > t_{k-1}$, we have $\bar{\theta}(t_k) < -t_k$. Note that types with $F(t) = 0$ need not be considered, as their total probability mass is $0$. For any $t_k > t_j$, we have $\bar{\theta}(t_k) \leq \bar{\theta}(t_j) < -t_k$ because $\bar{\theta}(t_k) = - t_k - (t_k - t_{k-1})\frac{F(t_{k-1})}{f(t_k)}$ is a non-increasing function. \begin{eqnarray*} p(t_k) -p(t_j) &=& \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{q\in Q} g(q) \pi(q,t_j)v(q,t_j) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i}) \\ &=& \sum_{q: \rho(q) \geq \bar{\theta}(t_k)} g(q) v(q,t_k) - \sum_{q: \rho(q) \geq \bar{\theta}(t_j)} g(q) v(q,t_j) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i}) \\ &\leq& \sum_{q: \rho(q) \geq \bar{\theta}(t_j)} g(q) v(q,t_k) - \sum_{q: \rho(q) \geq \bar{\theta}(t_j)} g(q) v(q,t_j) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i}) \\ &= & \sum_{q: \rho(q) \geq \bar{\theta}(t_j)} g(q) (v(q,t_k)-v(q,t_j)) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i}) \\ &=& (t_k-t_j) \sum_{ q: \rho(q) \geq \bar{\theta}(t_j) } g(q) v_1(q) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i}) \\ &=& (t_k-t_j) \sum_{ q \in Q} g(q) v_1(q) \pi(q,t_j) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i}) \\ &=& (t_k-t_j) P(t_j) - \sum_{i=j+1}^k(t_i - t_{i-1})P(t_{i}) \\ & \leq & 0 \end{eqnarray*} The first inequality is due to $\bar{\theta}(t_k) \leq \bar{\theta}(t_j) < -t_k$, and thus $\sum_{q: \rho(q) \geq \bar{\theta}(t_j)} g(q) v(q,t_k) $ sums over fewer negative terms compared with $\sum_{q: \rho(q) \geq \bar{\theta}(t_k)} g(q) v(q,t_k) $. The last inequality is due to the monotonicity of $P(x)$ and our choice of $t_k>t_j$. Therefore, to prove the non-negativity of the payment, we only need to prove $p(t_N) \geq 0$, as follows: \begin{align*} p(t_N) &= \sum_{q \in Q} [g(q) \pi(q,t_N)v(q,t_N)] - v(t_N) + \sum_{i=N+1}^N(t_i - t_{i-1})P(t_{i}) \\ &= \sum_{q \in Q} \pi(q,t_N) g(q) v(q,t_N) - \sum_{q \in Q} g(q) v(q,t_N)\\ & \geq 0 \end{align*} The last inequality holds because $\pi(q,t_N) = 1$ for every $q$ with $v(q,t_N)\geq 0$, so only non-positive terms are dropped from the second sum. This establishes that the payment is non-negative and non-increasing, which concludes the proof for Case 2. \end{proof} \begin{lemma} For $v(t_N)$ not covered by the above two cases, i.e., where $ \qopname\relax n{max}(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \underline{\theta}(t_{i-1})} g(q)v_1(q) \leq v(t_N) < \qopname\relax n{max}(0,v(t_1)) + \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q)$ (or equivalently, $ - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \underline{\theta}(t_{i-1}) }g(q)v_1(q) \leq v(t_1) < - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \bar{\theta}(t_{i}) }g(q)v_1(q)$) and $\widetilde{\varphi}_q(t_i) = f(t_i) \frac{[\underline{\varphi}(t_i) + \rho(q)]}{(t_{i+1}-t_{i}) }$ is non-decreasing in $t$ for any $q$, the optimal mechanism is a threshold mechanism on $\rho(q)$ with threshold $\theta(t_i) = -t_i+ (t_{i+1}-t_{i})\frac{C-F(t_i)}{f(t_i)} $, where $C \in (0,1]$ is a constant satisfying $\sum_{i=1}^{N}\sum_{q:\, \widetilde{\varphi}_q(t_i) + 1 \geq C } g(q) v_1(q) (t_{i+1}-t_{i}) = v(t_N)$. The payment is determined by the following identity and is monotone non-decreasing in $t$ when $F(t) \leq C $ and monotone non-increasing when $F(t) > C$.
Define \begin{align*} \underline{p}(t_k) &= \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) \\ \bar{p}(t_k) &= \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) -v(t_N) + \sum_{i=k+1}^N(t_i - t_{i-1})P(t_{i}) \end{align*} The payment function for this mechanism is: \begin{gather*} p(t_k) = \begin{cases} \underline{p}(t_k) & \text{for } F(t_k) \leq C\\ \bar{p}(t_k) & \text{for } F(t_k) > C \end{cases} \end{gather*} \end{lemma} \begin{proof} \textbf{Reduction and Optimality:} Our initial goal is to maximize the revenue function: \begin{align*} \text{max } &\sum_{i=1}^{N} \sum_{q\in Q} \pi(q,t_i) g(q) v_1(q) f(t_i) [\underline{\varphi}(t_i) + \rho(q)] -u(t_1) & \mbox{by the Revenue Equation} \\ \text{subject to } & \sum_{i=1}^N \sum_{q \in Q} \pi(q,t_{i})g(q)v_1(q) (t_{i+1}-t_{i}) \geq v(t_N)\\ & P(t) \text{ is monotone in } t \end{align*} Now we simplify it. From the condition $ - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \underline{\theta}(t_{i-1}) }g(q)v_1(q) \leq v(t_1) < - \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) < \bar{\theta}(t_{i}) }g(q)v_1(q)$, we know $v(t_1)<0$. Thus, \begin{equation} \label{dis-relation-v(t_N)} \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \underline{\theta}(t_{i-1})} g(q)v_1(q) \leq v(t_N) < \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q) \end{equation} Because of the IR constraint, to ensure $s(t_1) = u(t_1) - \qopname\relax n{max}(0,v(t_1)) = u(t_1) \geq 0$ (using $v(t_1) < 0$), we need $u(t_1) \geq 0$. For the constraint, since we defined $t_{N+1} = t_N$, \begin{align*} \sum_{i=1}^N \sum_{q \in Q} \pi(q,t_{i})g(q)v_1(q) (t_{i+1}-t_{i}) &= \sum_{i=2}^N \sum_{q \in Q} \pi(q,t_{i-1})g(q)v_1(q) (t_{i}-t_{i-1}) \\ &=\sum_{i=2}^N(t_i-t_{i-1}) \sum_{q \in Q} \pi(q,t_{i-1})g(q)v_1(q) \\ &=\sum_{i=2}^N(t_i-t_{i-1}) P(t_{i-1}) \\ &\geq v(t_N) \end{align*} With this mechanism, if $F(t_1) \leq C$, \begin{align} u(t_1) &= \sum_{q\in Q} g(q) \pi(q,t_1)v(q,t_1) - p(t_1) \nonumber\\ &= \sum_{q\in Q} g(q) \pi(q,t_1)v(q,t_1) - \underline{p}(t_1) \nonumber\\ &= 0 \label{dis-u_t_1} \end{align} If $F(t_1) > C$, \begin{align} u(t_1) &= \sum_{q\in Q} g(q) \pi(q,t_1)v(q,t_1) - p(t_1) \nonumber\\ &= \sum_{q\in Q} g(q) \pi(q,t_1)v(q,t_1) - \bar{p}(t_1) \nonumber\\ &= v(t_N) - \sum_{i=2}^N(t_i - t_{i-1})P(t_{i}) \nonumber \\ &= 0 \nonumber \end{align} By setting $ v(t_N) - \sum_{i=2}^N(t_i - t_{i-1})P(t_{i}) = 0$, we can make $u(t_1) = 0$. Later in the proof, we will show that this also maximizes the first term at the same time. Thus, $u(t_1)$ can always attain its minimum value $u(t_1) = 0$, which indeed minimizes the second term.
So, we reduce our goal to \begin{align*} \text{max } &\sum_{i=1}^{N} \sum_{q\in Q} \pi(q,t_i) g(q) v_1(q) f(t_i) [\underline{\varphi}(t_i) + \rho(q)] & \mbox{by the Revenue Equation} \\ \text{subject to } & \sum_{i=1}^N \sum_{q \in Q} \pi(q,t_{i})g(q)v_1(q) (t_{i+1}-t_{i}) \geq v(t_N)\\ & P(t) \text{ is monotone in } t \end{align*} The objective can be rewritten as \begin{align*} &\sum_{i=1}^{N} \sum_{q\in Q} \pi(q,t_i) g(q) v_1(q) f(t_i) [\underline{\varphi}(t_i) + \rho(q)] \\ &= \sum_{i=1}^{N} \sum_{q\in Q} \pi(q,t_i) g(q) v_1(q) (t_{i+1}-t_{i}) f(t_i) \frac{[\underline{\varphi}(t_i) + \rho(q)]}{(t_{i+1}-t_{i}) } \\ &= \sum_{i=1}^{N} \sum_{q\in Q} \pi(q,t_i) g(q) v_1(q) (t_{i+1}-t_{i}) f(t_i) \frac{[t_i - (t_{i+1}-t_{i})\frac{1-F(t_i)}{f(t_i)} + \rho(q)]}{(t_{i+1}-t_{i}) } \\ &= \sum_{i=1}^{N} \sum_{q\in Q} \pi(q,t_i) g(q) v_1(q) (t_{i+1}-t_{i}) f(t_i) [\frac{t_i + \rho(q)}{(t_{i+1}-t_{i}) } - \frac{1-F(t_i)}{f(t_i)}] \\ \end{align*} Now the objective function is \begin{align*} \text{max } &\sum_{i=1}^{N} \sum_{q\in Q} \pi(q,t_i) g(q) v_1(q) (t_{i+1}-t_{i}) f(t_i) \frac{[\underline{\varphi}(t_i) + \rho(q)]}{(t_{i+1}-t_{i}) } & \mbox{by the Revenue Equation} \\ \text{subject to } & \sum_{i=1}^N \sum_{q \in Q} \pi(q,t_{i})g(q)v_1(q) (t_{i+1}-t_{i}) \geq v(t_N)\\ \label{dis-v(t_N)-upper} & P(t) \text{ is monotone in } t \end{align*} Define $\widetilde{\varphi}_q(t_i) = f(t_i) \frac{[\underline{\varphi}(t_i) + \rho(q)]}{(t_{i+1}-t_{i}) }$. We can view this optimization problem as a fractional knapsack problem in which we are required to pick at least $v(t_N)$ units of items. Each item $(q,t_i)$ can be picked up to $g(q)v_1(q) (t_{i+1}-t_{i})$ units, since $0 \leq \pi(q,t) \leq 1 $, and each unit of this item yields value $\widetilde{\varphi}_q(t_i)$. Since $f(t_i) > 0$ and $(t_{i+1}-t_{i}) > 0$, the optimal solution is to pick all the items with $[\underline{\varphi}(t_i) + \rho(q)] \geq 0$ and, if we have not yet reached the lower bound on units, to keep picking the items with the largest $\widetilde{\varphi}_q(t)$ value until we reach the lower bound $v(t_N)$, as sketched below. Since $\widetilde{\varphi}_q(t)$ is non-decreasing in $t$, and increasing in $\rho(q)$, the solution is a threshold mechanism on $\rho(q)$. By \eqref{dis-relation-v(t_N)} and the optimization constraint, \begin{equation*} \sum_{i=1}^N\sum_{\rho(q) \geq \underline{\theta}(t_{i})} g(q)v_1(q) (t_{i+1} - t_{i}) \leq v(t_N) \leq \sum_{i=1}^N \sum_{q \in Q} \pi(q,t_{i})g(q)v_1(q) (t_{i+1}-t_{i}) \end{equation*} Since the threshold is $\underline{\theta}(t_i) = -\underline{\varphi}(t_i)$, this means that whenever $[\underline{\varphi}(t_i) + \rho(q)] \geq 0$, $\pi(q,t_i)=1$. However, even after all the positive $[\underline{\varphi}(t_i) + \rho(q)]$ terms have been chosen, we may still not have reached $v(t_N)$, so we may have to choose terms with negative $[\underline{\varphi}(t_i) + \rho(q)]$; since $f(t_i)> 0$ and $(t_{i+1}-t_{i})>0$, these satisfy $\widetilde{\varphi}_q(t_i) = f(t_i) \frac{[\underline{\varphi}(t_i) + \rho(q)]}{(t_{i+1}-t_{i}) } < 0$. That is, we may have to put items with negative $\widetilde{\varphi}_q(t_i)$ value into the knapsack. The optimal way is still to choose the items with the largest $\widetilde{\varphi}_q(t_i)$ value until reaching the lower bound $v(t_N)$; notably, some of the chosen items then carry negative $\widetilde{\varphi}_q(t_i)$ value.
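The greedy solution of this fractional knapsack reduction can be written down directly. The following Python sketch is illustrative only (the names and the helper are ours, not the paper's formal mechanism); it picks units in decreasing order of the per-unit value $\widetilde{\varphi}_q(t_i)$:
\begin{verbatim}
import numpy as np

def knapsack_allocation(t, f, g, v1, rho, target):
    """Greedy fractional knapsack sketch: pick units (q, t_i) in decreasing
    order of per-unit value phi_tilde_q(t_i) until `target` (= v(t_N))
    units are picked. Returns the allocation pi(q, t_i)."""
    N, Q = len(t), len(g)
    F = np.cumsum(f)
    dt = np.append(np.diff(t), 0.0)         # t_{i+1}-t_i, with t_{N+1}=t_N
    dt_safe = np.where(dt > 0, dt, 1.0)     # items with dt=0 have capacity 0
    # Per-unit value: f(t_i)(t_i + rho(q))/dt_i + F(t_i) - 1
    phi = f * (t + rho[:, None]) / dt_safe + F - 1.0      # shape (Q, N)
    cap = np.outer(g * v1, dt)              # capacity g(q) v_1(q) dt_i
    pi = np.zeros((Q, N))
    picked = 0.0
    for idx in np.argsort(-phi, axis=None): # best per-unit value first
        q, i = np.unravel_index(idx, phi.shape)
        if cap[q, i] <= 0.0:
            continue
        # Non-negative-value items are taken fully; once the value turns
        # negative, take only what is still needed for the lower bound.
        take = cap[q, i] if phi[q, i] >= 0 else min(cap[q, i],
                                                    target - picked)
        if take <= 0.0:
            break
        pi[q, i] = take / cap[q, i]
        picked += take
    return pi
\end{verbatim}
As in any fractional knapsack, at most one item ends up partially taken, which matches the threshold structure of the optimal policy described above.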
So, we can simplify the optimization problem to \begin{align} \text{max } &\sum_{i=1}^{N} \sum_{q\in Q} \pi(q,t_i) g(q) v_1(q) (t_{i+1}-t_{i}) f(t_i) \frac{[\underline{\varphi}(t_i) + \rho(q)]}{(t_{i+1}-t_{i}) } \nonumber \\ \text{subject to } & \sum_{i=1}^N \sum_{q \in Q} \pi(q,t_{i})g(q)v_1(q) (t_{i+1}-t_{i}) = v(t_N) \label{dis-v-t-n} \\ & P(t) \text{ is monotone in } t \nonumber \end{align} \textbf{Feasibility:} Let us prove that this payment function gives a feasible optimal mechanism. To show that $P(t)$ is monotone in $t$, define $\rho^*_{t_i}(q)$ by $ f(t_i) \frac{[\underline{\varphi}(t_i) + \rho^*_{t_i}(q)]}{(t_{i+1}-t_{i}) } + 1 =C$, i.e., $\widetilde{\varphi}_q(t_i) + 1 = C$ at $\rho(q) = \rho^*_{t_i}(q)$. Then $\rho^*_{t}(q)$ is non-increasing in $t$, since $\widetilde{\varphi}_q(t_i)$ is non-decreasing in $t$ for all $q$. So a larger $t$ includes more $q$ in its summation range. Thus, $P(t) = \sum_{q:\, \rho(q) \geq \rho^*_{t}(q) } g(q) v_1(q) $ is non-decreasing in $t$. By \eqref{dis-u_t_1}, $u(t_1) = 0$. At $t_N$, constraint \eqref{dis-v-t-n} gives $ \sum_{i=2}^N(t_i-t_{i-1}) P(t_{i-1}) = v(t_N) $, so \begin{align*} \bar{p}(t_N) &= \sum_{q\in Q} g(q) \pi(q,t_N)v(q,t_N) -v(t_N) + \sum_{i=N+1}^N(t_i - t_{i-1})P(t_{i}) \\ &=\sum_{q\in Q} g(q) \pi(q,t_N)v(q,t_N) -v(t_N) \\ &= \sum_{q\in Q} g(q) \pi(q,t_N)v(q,t_N) - \sum_{i=2}^N(t_i - t_{i-1})P(t_{i-1}) \\ & = \underline{p}(t_N) \end{align*} \begin{align*} u(t_N) &=\sum_{q\in Q} g(q) \pi(q,t_N)v(q,t_N) - \bar{p}(t_N) = v(t_N) \end{align*} For all $t_k$ such that $F(t_k)\leq C$, \begin{align*} u(t_k)&=\sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \underline{p}(t_k) \\ &= \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) + \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) \\ &= u(t_1) + \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) \\ \end{align*} For all $t_k$ such that $F(t_k) > C$, \begin{align*} u(t_k)&=\sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \bar{p}(t_k) \\ &= \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) +v(t_N) - \sum_{i=k+1}^N(t_i - t_{i-1})P(t_{i}) \\ &=\sum_{i=2}^N(t_i-t_{i-1}) P(t_{i-1}) - \sum_{i=k+1}^N(t_i - t_{i-1})P(t_{i}) \\ &= \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) \\ &= u(t_1) + \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) \\ \end{align*} Now constraints \eqref{dis-eq:buyer-utility-identify2}, \eqref{dis-eq:ir-t2} and \eqref{dis-eq:non-negativity}, which are equivalent to IC and IR, have been satisfied. By \eqref{dis-relation-v(t_N)}, $\sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \underline{\theta}(t_{i-1})} g(q)v_1(q) \leq v(t_N) < \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q) $. $$ \widetilde{\varphi}_q(t_i) = f(t_i) \frac{[\underline{\varphi}(t_i) + \rho(q)]}{(t_{i+1}-t_{i}) } = f(t_i) [\frac{t_i + \rho(q)}{(t_{i+1}-t_{i}) } - \frac{1-F(t_i)}{f(t_i)}] = \frac{f(t_i)(t_i + \rho(q))}{(t_{i+1}-t_{i}) } +F(t_i) - 1 $$ As discussed above, to satisfy \eqref{dis-v-t-n}, in addition to including all terms with $\frac{f(t_i)(t_i + \rho(q))}{(t_{i+1}-t_{i}) } +F(t_i) - 1 \geq 0$, i.e., $ \frac{f(t_i)(t_i + \rho(q))}{(t_{i+1}-t_{i}) } + F(t_i) \geq 1$, we have to add to our knapsack some negative $\widetilde{\varphi}_q(t_i)$ terms, for which $\frac{f(t_i)(t_i + \rho(q))}{(t_{i+1}-t_{i}) } + F(t_i) < 1$.
These terms will not be too many because $v(t_N) < \sum_{i=2}^N(t_i - t_{i-1})\sum_{\rho(q) \geq \bar{\theta}(t_{i}) }g(q)v_1(q) $, which means we only need to include part of the terms where $f(t_i) \frac{[\bar{\varphi}(t_i) + \rho(q)]}{(t_{i+1}-t_{i}) } = \frac{f(t_i)(t_i + \rho(q))}{(t_{i+1}-t_{i}) } +F(t_i) \geq 0$ to satisfy \eqref{dis-v-t-n}. Thus, every negative $\widetilde{\varphi}_q(t_i)$ term recommended in the optimal policy will also be included in the policy that sets $\pi(q,t) = 1$ whenever $ f(t_i) \frac{[\bar{\varphi}(t_i) + \rho(q)]}{(t_{i+1}-t_{i}) } \geq 0 $. Also, $s(t_1) = u(t_1) - \qopname\relax n{max}(0,v(t_1))= u(t_1) = 0$ and $s(t_N) = u(t_N) - \qopname\relax n{max}(0,v(t_N))= u(t_N) - v(t_N)= 0$. Now we prove that $p(t) \geq 0$ for all $t$, to satisfy the obedience constraint. By the argument above, there is a constant $C \in (0,1]$ such that $\pi(q,t_i)=1$ whenever $\widetilde{\varphi}_q(t_i) +1 \geq C$. Recall that $\rho^*_{t_i}(q)$ satisfies $ \frac{f(t_i)(t_i +\rho^*_{t_i}(q))}{(t_{i+1}-t_{i}) } +F(t_i) =C$, so we have $t_i+ \rho^*_{t_i}(q) =(t_{i+1}-t_{i})\frac{(C-F(t_i))}{f(t_i)}$ for any $t_i$. \begin{align*} p(t_1) &= -u(t_1) + \sum_{\rho(q) \geq \rho^*_{t_1}(q)} g(q) v(q,t_1) \\ &= \sum_{\rho(q) \geq \rho^*_{t_1}(q)} g(q) v_1(q)[t_1+\rho(q)] \\ &\geq \sum_{\rho(q) \geq \rho^*_{t_1}(q)} g(q) v_1(q)[t_1+ \rho^*_{t_1}(q) ] \\ &= \sum_{\rho(q) \geq \rho^*_{t_1}(q)} g(q) v_1(q) (t_{2}-t_{1}) \frac{C - F(t_1)}{f(t_1)} \\ &\geq 0 \end{align*} where the last inequality is because $C \geq 0$ and $F(t_1)=0$. Similarly, utilizing $u(t_N) = v(t_N) = \sum_{q \in Q} g(q) v_1(q)[t_N + \rho(q)]$, we have \begin{align*} p(t_N) &= -u(t_N) + \sum_{\rho(q) \geq \rho^*_{t_N}(q)} g(q) v(q,t_N) \\ &= - \sum_{q \in Q} g(q) v_1(q)[t_N + \rho(q)] + \sum_{\rho(q) \geq \rho^*_{t_N}(q) } g(q) v_1(q)[t_N+\rho(q)] \\ &= - \sum_{\rho(q) <\rho^*_{t_N}(q) } g(q) v_1(q)[t_N+\rho(q)] \\ &\geq - \sum_{\rho(q) <\rho^*_{t_N}(q) } g(q) v_1(q)[t_N+\rho^*_{t_N}(q)] \\ & = - \sum_{\rho(q) <\rho^*_{t_N}(q) } g(q) v_1(q) (t_{N+1}-t_{N}) \frac{C - F(t_N)}{f(t_N)} \\ &= 0 \end{align*} where the last equality is because $t_{N+1}-t_{N} = 0$. Define $q^*(t)$ as the short notation for ${\rho^*_t}^{-1}(q)$.
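For concreteness, the constant $C$ can also be located numerically: the picked-units mass is non-increasing in $C$, so bisection applies. The following Python sketch is illustrative only, assuming the instance indeed falls in the middle case; the marginal item may require a fractional $\pi$ to attain exact equality:
\begin{verbatim}
import numpy as np

def mixed_case_constant(t, f, g, v1, rho, vN, tol=1e-10):
    """Bisection sketch for the mixed-case constant C: the mass
    sum_{(q,i): phi_tilde+1 >= C} g v1 (t_{i+1}-t_i) is non-increasing
    in C, so we search for the C whose mass matches v(t_N)."""
    F = np.cumsum(f)
    dt = np.append(np.diff(t), 0.0)          # t_{N+1} = t_N
    dt_safe = np.where(dt > 0, dt, 1.0)
    score = f * (t + rho[:, None]) / dt_safe + F   # = phi_tilde + 1
    units = np.outer(g * v1, dt)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        C = 0.5 * (lo + hi)
        mass = units[score >= C].sum()
        if mass > vN:
            lo = C                           # picking too much: raise C
        else:
            hi = C
    return 0.5 * (lo + hi)
\end{verbatim}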
$$ \underline{p}(t_k) = \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) $$ \begin{align*} &p(t_k) - p(t_{k-1})\\ =& \sum_{q \in Q} g(q) \pi(q,t_k)v(q,t_k) - \sum_{i=2}^k(t_i - t_{i-1})P(t_{i-1}) - \sum_{q \in Q} g(q) \pi(q,t_{k-1})v(q,t_{k-1}) + \sum_{i=2}^{k-1}(t_i - t_{i-1})P(t_{i-1}) \\ =& \sum_{q \in Q} g(q) \pi(q,t_k) v(q,t_k) -(t_k - t_{k-1})\sum_{q \in Q} \pi(q,t_{k-1})g(q)v_1(q) -\sum_{q \in Q} g(q) \pi(q,t_{k-1}) v(q,t_{k-1})\\ =&\sum_{q: \rho(q) \geq \rho^*_{t_k}(q)}g(q)v(q,t_k)-\sum_{q: \rho(q) \geq \rho^*_{t_{k-1}}(q)} g(q)v(q,t_{k-1})- (t_k - t_{k-1}) \sum_{q: \rho(q) \geq \rho^*_{t_{k-1}}(q)} g(q) v_1(q)\\ =&\sum_{q: \rho(q) \geq \rho^*_{t_{k-1}}(q)}g(q)v(q,t_k)-\sum_{q: \rho(q) \geq \rho^*_{t_{k-1}}(q)} g(q)v(q,t_{k-1}) + \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)}g(q)v(q,t_{k}) \\ &- (t_k - t_{k-1}) \sum_{q: \rho(q) \geq \rho^*_{t_{k-1}}(q)} g(q) v_1(q)\\ =&(t_k - t_{k-1}) \sum_{q: \rho(q) \geq \rho^*_{t_{k-1}}(q)} g(q) v_1(q) + \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)}g(q)v(q,t_{k}) - (t_k - t_{k-1}) \sum_{q: \rho(q) \geq \rho^*_{t_{k-1}}(q)} g(q) v_1(q)\\ =& \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)}g(q)v(q,t_{k})\\ =& \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)} g(q) v_1(q) (t_{k} + \rho(q)) \\ \geq& \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)} g(q) v_1(q) (t_{k} + \rho^*_{t_{k}}(q)) \\ =& \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)} g(q) v_1(q)(t_{k+1} - t_k)\frac{C-F(t_k)}{f(t_k)} \end{align*} Since $g(q) v_1(q)(t_{k+1} - t_k) \geq 0 $ and $F(t_k) \leq C$, we have $p(t_k) - p(t_{k-1}) \geq 0$ for all $k$ such that $F(t_k) \leq C$. Since we also have $p(t_1) \geq 0$, the payment $p(t_k)$ is non-negative and non-decreasing for all $k$ such that $F(t_k) \leq C$. $$ \bar{p}(t_k) = \sum_{q\in Q} g(q) \pi(q,t_k)v(q,t_k) -v(t_N) + \sum_{i=k+1}^N(t_i - t_{i-1})P(t_{i}) $$ \begin{align*} &p(t_k) - p(t_{k-1})\\ =& \sum_{q \in Q} g(q) \pi(q,t_k) v(q,t_k) -(t_k - t_{k-1})\sum_{q \in Q} \pi(q,t_{k})g(q)v_1(q) -\sum_{q \in Q} g(q) \pi(q,t_{k-1}) v(q,t_{k-1})\\ =&\sum_{q: \rho(q) \geq \rho^*_{t_k}(q)}g(q)v(q,t_k)-\sum_{q: \rho(q) \geq \rho^*_{t_{k-1}}(q)} g(q)v(q,t_{k-1})- (t_k - t_{k-1}) \sum_{q: \rho(q) \geq \rho^*_{t_{k}}(q)} g(q) v_1(q)\\ =&\sum_{q: \rho(q) \geq \rho^*_{t_{k}}(q)}g(q)v(q,t_k)-\sum_{q: \rho(q) \geq \rho^*_{t_{k}}(q)} g(q)v(q,t_{k-1}) + \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)}g(q)v(q,t_{k-1}) \\ &- (t_k - t_{k-1}) \sum_{q: \rho(q) \geq \rho^*_{t_{k}}(q)} g(q) v_1(q)\\ =&(t_k - t_{k-1}) \sum_{q: \rho(q) \geq \rho^*_{t_{k}}(q)} g(q) v_1(q) + \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)}g(q)v(q,t_{k-1}) - (t_k - t_{k-1}) \sum_{q: \rho(q) \geq \rho^*_{t_{k}}(q)} g(q) v_1(q)\\ =& \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)}g(q)v(q,t_{k-1})\\ =& \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)} g(q) v_1(q) (t_{k-1} + \rho(q)) \\ \leq& \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)} g(q) v_1(q) (t_{k-1} + \rho^*_{t_{k-1}}(q)) \\ =& \sum_{q:\rho^*_{t_{k}}(q) \leq \rho(q) < \rho^*_{t_{k-1}}(q)} g(q) v_1(q)(t_{k} - t_{k-1})\frac{C-F(t_{k-1})}{f(t_{k-1})} \end{align*} Since $g(q) v_1(q)(t_{k} - t_{k-1}) \geq 0 $ and $F(t_{k-1}) \geq C$ implies $C - F(t_{k-1}) \leq 0$, we get $p(t_{k}) - p(t_{k-1}) \leq 0$ for all $k$ such that $F(t_{k-1}) \geq C$. In other words, $p(t_{k+1}) - p(t_{k}) \leq 0$ for all $k$ such that $F(t_{k}) \geq C$.
Since $p(t_N) \geq 0$, the payment $p(t_{k})$ is non-negative and non-increasing for all $t_k$ with $F(t_{k}) \geq C$. We have thus proved that $p(t)\geq 0$ for all $t$. Hence, this mechanism is feasible and optimal. \end{proof} \begin{corollary} \label{dis-same-payment} If the recommendation policies are the same for any two types, then the payment is the same for these two types. Formally, for all $t_i, t_j$ such that $\pi(q,t_i) = \pi(q,t_j)$ for all $q$, we have $p(t_i) = p(t_j) $. \end{corollary} \begin{proof} We can assume without loss of generality $t_i \leq t_j$. Because $\pi(q,t_i) = \pi(q,t_j)$ for all $q$ and the threshold $\theta(t)$ is monotone, we have $\pi(q,t_i) = \pi(q,t_k) =\pi(q,t_j)$ for all $t_i \leq t_k \leq t_j$ and all $q$. Consider Case 1 of Lemma \ref{dis-lem:two-simple-cases}: \begin{align*} p(t_i) &= \sum_{q \in Q} g(q) \pi(q,t_i)v(q,t_i) - \sum_{k=2}^i(t_k - t_{k-1})P(t_{k-1}) \\ &= \sum_{q \in Q} g(q) \pi(q,t_j)v(q,t_j) - (t_j - t_i)\sum_{q \in Q} g(q) \pi(q,t_j)v_1(q) - \sum_{k=2}^i(t_k - t_{k-1})P(t_{k-1}) \\ &= \sum_{q \in Q} g(q) \pi(q,t_j)v(q,t_j) - \sum_{k=i+1}^{j} (t_k - t_{k-1}) \sum_{q \in Q} g(q) \pi(q,t_{k-1})v_1(q) - \sum_{k=2}^i(t_k - t_{k-1})P(t_{k-1}) \\ &= \sum_{q \in Q} g(q) \pi(q,t_j)v(q,t_j) - \sum_{k=2}^j(t_k - t_{k-1})P(t_{k-1}) \\ &= p(t_j) \end{align*} The other cases are analogous. \end{proof} \section{Additional Properties about the Optimal Mechanism} \begin{proposition}[Existence of Optimal Threshold Mechanism] There always exists an optimal mechanism that is a threshold mechanism. \end{proposition} \begin{proof} From the revenue function \eqref{eq:revenue-2} using $t_2$ as the reference point, we have the following optimization problem. \begin{align*} \qopname\relax n{max}_{\pi} & \int_{t_1}^{t_2} \int_{q \in Q} \left[ g(q) \pi(q,t) f(t)[v_1(q) \bar{\phi}(t) + v_0(q)]\,\mathrm{d}t \right] \mathrm{d}q \\ \text{subject to } & \int_{t_1}^{t_2} \int_{q \in Q} g(q) \pi(q,t) v_1(q)\, \mathrm{d}q \mathrm{d}t \geq \int_{q \in Q} g(q) v(q,t_2)\, \mathrm{d}q\\ & P(t) \text{ is monotone in } t \end{align*} To prove the proposition, we show that for any feasible solution to the above program, there exists a feasible threshold mechanism that weakly increases the objective function. Given a feasible mechanism $\pi(q, t)$, let the threshold $\theta(t)$ be \begin{gather*} \theta(t)=\qopname\relax n{min}\left\{\theta^*(t)\,\left|\, \int_{q: \rho(q) \geq \theta^*(t)}g(q)v_1(q)\,\mathrm{d}q=P(t)=\int_{q \in Q}g(q)\pi(q,t)v_1(q)\,\mathrm{d}q\right.\right\}. \end{gather*} $\theta(t)$ always exists, since the function $\int_{q: \rho(q) \geq \theta(t)}g(q)v_1(q)\,\mathrm{d}q$ is continuous and decreasing in $\theta(t)$, and $ \int_{q_2}^{q_2}g(q)v_1(q)\,\mathrm{d}q=0\le P(t)\le \int_{q \in Q}g(q)v_1(q)\,\mathrm{d}q$. Define a new allocation function as follows: \begin{gather*} \pi^*(q,t)= \begin{cases} 1 & \text{ if } \rho(q)\ge \theta(t)\\ 0 & \text{ otherwise} \end{cases}. \end{gather*} It is clear that the corresponding $P(t)$ function is the same as that of the original $\pi(q,t)$: \begin{gather} P^*(t)=\int_{q \in Q}g(q)\pi^*(q,t)v_1(q)\,\mathrm{d}q=\int_{q:\rho(q) \geq \theta(t)}g(q)v_1(q)\,\mathrm{d}q=P(t).
\label{eq:Q_equal} \end{gather} Moreover, the left-hand side of the constraint $\int_{t_1}^{t_2} \int_{q \in Q} g(q) \pi(q,t) v_1(q)\, \mathrm{d}q \mathrm{d}t \geq \int_{q \in Q} g(q) v(q,t_2)\, \mathrm{d}q$ in this optimization problem, \begin{gather*} \int_{t_1}^{t_2} \int_{q \in Q} g(q) \pi(q,t) v_1(q)\, \mathrm{d}q \mathrm{d}t=\int_{t_1}^{t_2}P(t)\,\mathrm{d}t=\int_{t_1}^{t_2}P^*(t)\,\mathrm{d}t, \end{gather*} also remains the same. The above two equations imply that $\pi^*(q,t)$ is also feasible. Now it suffices to show that $\pi^*(q,t)$ leads to a weakly higher revenue. Define \begin{gather*} \Phi(\rho(q), t) = f(t)\left[ \phi(t)+\rho(q) \right]. \end{gather*} $\Phi(\rho(q),t)$ is monotone increasing in $\rho(q)$ and $f(t)$ is always non-negative. Thus if $\rho(q)\ge \theta(t)$, then $\pi^*(q,t)=1\ge \pi(q,t)$ and $\Phi(\rho(q),t)\ge \Phi(\theta(t),t)$; and if $\rho(q)< \theta(t)$, then $\pi^*(q,t)=0\le \pi(q,t)$ and $\Phi(\rho(q),t)\le \Phi(\theta(t),t)$. Therefore, \begin{gather} \left[ \pi^*(q,t)-\pi(q,t) \right]\left[ \Phi(\rho(q),t)-\Phi(\theta(t),t) \right]\ge 0, \forall q, t. \label{eq:monotone} \end{gather} The revenue difference of the two mechanisms $\pi(q,t)$ and $\pi^*(q,t)$ is: \begin{align*} &\int_{t_1}^{t_2} \int_{q \in Q} g(q) \left[\pi^*(q,t)-\pi(q,t)\right] f(t)[v_1(q) \phi(t) + v_0(q)]\,\mathrm{d}q \mathrm{d}t\\ =&\int_{t_1}^{t_2} \int_{q \in Q} g(q) \left[\pi^*(q,t)-\pi(q,t)\right] v_1(q)\Phi(\rho(q),t)\,\mathrm{d}q \mathrm{d}t\\ \ge&\int_{t_1}^{t_2} \int_{q \in Q} g(q) \left[\pi^*(q,t)-\pi(q,t)\right] v_1(q)\Phi(\theta(t),t)\,\mathrm{d}q \mathrm{d}t\\ =&\int_{t_1}^{t_2} \Phi(\theta(t),t) \int_{q \in Q} g(q) \left[\pi^*(q,t)-\pi(q,t)\right] v_1(q)\,\mathrm{d}q \mathrm{d}t\\ =&\int_{t_1}^{t_2} \Phi(\theta(t),t) \left[P^*(t)-P(t)\right]\, \mathrm{d}t\\ =&0, \end{align*} where the inequality is because of Equation \eqref{eq:monotone} and the last equality is due to Equation \eqref{eq:Q_equal}. \end{proof} \begin{proposition}[Information Determines Payments] If the signaling scheme is the same for any two different buyer types, then the payment is the same for these two types. Formally, for any $ t \not = t'$, if $ \pi(q,t) = \pi(q,t')$ for all $q$, then $p(t) = p(t') $. \end{proposition} \begin{proof} We can assume without loss of generality $t \leq t'$. Because $\pi(q,t) = \pi(q,t')$ for all $q$ and the threshold $\theta(t)$ is monotone, we have $\pi(q,t) = \pi(q,t^*) =\pi(q,t')$ for all $t \leq t^* \leq t'$ and all $q$. \begin{align*} p(t) &= \int_{q \in Q} g(q) \pi(q,t)v(q,t)\,\mathrm{d}q - \int_{t_1}^{t} P(x)\,\mathrm{d}x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t)\,\mathrm{d}q - \int_{t_1}^{t} P(x)\,\mathrm{d}x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t')\,\mathrm{d}q - (t' - t)\int_{q \in Q} g(q) \pi(q,t')v_1(q)\,\mathrm{d}q - \int_{t_1}^{t} P(x)\,\mathrm{d}x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t')\,\mathrm{d}q - \int_{t}^{t'} \int_{q \in Q} g(q) \pi(q,x)v_1(q)\,\mathrm{d}q \mathrm{d}x - \int_{t_1}^{t} P(x)\,\mathrm{d}x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t')\,\mathrm{d}q - \int_{t}^{t'} P(x)\,\mathrm{d}x - \int_{t_1}^{t} P(x)\,\mathrm{d}x \\ &= \int_{q \in Q} g(q) \pi(q,t')v(q,t')\,\mathrm{d}q - \int_{t_1}^{t'} P(x)\,\mathrm{d}x \\ &= p(t') \end{align*} The other cases are analogous. \end{proof} Now, let us discuss the relationship between $\widetilde{\phi}_q(t)$ and $\bar{\phi}(t),\underline{\phi}(t)$. \begin{lemma} If $\widetilde{\phi}_q(t) = f(t)t + F(t) + f(t)\rho(q) = f(t) [\bar{\phi}(t) + \rho(q)]$ is non-decreasing in $t$ for any $q$, then $\bar{\phi}(t) $ and $\underline{\phi}(t) $ are non-decreasing in $t$ within the range of $\rho(q)$. \end{lemma} \begin{proof} We prove this by contradiction. If $\bar{\phi}(t) $ is not non-decreasing in $t$, then there exist $t_1<t_2$ such that
$\bar{\phi}(t_1) > \bar{\phi}(t_2)$. Take $\rho(q) = - \frac{\bar{\phi}(t_1) + \bar{\phi}(t_2)}{2}$ (assuming this value lies in the range of $\rho(q)$). Then \begin{align*} \widetilde{\phi}_q(t_1) &= f(t_1)t_1 + F(t_1) + f(t_1)\rho(q) \\ &= f(t_1) [\bar{\phi}(t_1) + \rho(q)] \\ &= f(t_1) \left[\frac{\bar{\phi}(t_1) - \bar{\phi}(t_2)}{2}\right] \\ &> 0 \\ \widetilde{\phi}_q(t_2) &= f(t_2)t_2 + F(t_2) + f(t_2)\rho(q) \\ &= f(t_2) [\bar{\phi}(t_2) + \rho(q)] \\ &= f(t_2) \left[\frac{\bar{\phi}(t_2) - \bar{\phi}(t_1)}{2}\right] \\ &< 0 \end{align*} since $f(t) > 0$. This contradicts $\widetilde{\phi}_q(t)$ being non-decreasing in $t$. So $\bar{\phi}(t)$ must be non-decreasing within the range of $\rho(q)$; the argument for $\underline{\phi}(t)$ is analogous. \end{proof} \section{Proof of the Main Theorem} In this section, we prove Theorem \ref{thm:opt-scheme}. Due to space limitations, we will only provide a complete proof for Case 3. The core ideas for proving Cases 1 and 2 are similar; we thus defer them to Appendix \ref{sec:case1} and \ref{sec:case2}, respectively. The proof has two major steps: (1) characterizing useful properties of (any) feasible mechanisms; (2) leveraging these properties to derive the optimal mechanism. While the first step is also based on the analysis of the IC constraints as in classic mechanism design, the conclusions we obtain are quite different since our problem's constraints are different. Significantly deviating from the Myersonian approach for classic mechanism design is our second main step, which arguably is much more involved due to the additional constraints that we have to handle (this is also reflected in the more complex format of our optimal mechanism). \subsection{Useful Properties of Feasible Mechanisms} \label{sec:proof:characterize} Define \emph{feasible} mechanisms as the set of mechanisms $(\pi, p)$ that satisfy all the constraints of program \eqref{lp:opt-formulation} (but do not necessarily maximize its objective). We first characterize the space of feasible mechanisms. To describe our characterization, it is useful to introduce the following quantity. \begin{equation} \label{P-def} P_{\pi}(t) = \int_{q \in Q} \pi(q, t) g(q) \alpha(q)\,\mathrm{d} q \end{equation} Note that $P_{\pi}(t)$ can be interpreted as the expected \emph{weighted} probability (with weight $\alpha(q)$) of being recommended the active action $1$. The following lemma summarizes our characterization. To illustrate the intuition, we only provide a proof of sufficiency here and defer the proof of necessity to Appendix \ref{appendix:feasible-M}. \begin{lemma}[{\bfseries Characterization of Feasible Mechanisms}] A mechanism $(\pi, p)$ with non-negative payments is feasible if and only if it satisfies the following constraints: \begin{align} & P_{\pi}(t) \text{ is monotone non-decreasing in } t \label{eq:signal-monotonicity} \\ & u(t) = u(t_1) + \int_{t_1}^{t} P_{\pi}(x)\,\mathrm{d} x, \forall t \in T \label{eq:buyer-utility-identify2} \\ & u(t_2) \geq v(t_2), \, \, \, u(t_1) \geq 0 \label{eq:ir-t2} \\ & p(t) \geq 0, \, \, \forall t \in T \label{eq:non-negativity} \end{align} \label{lem:feasible-M} \end{lemma} \begin{proof}[Proof of Sufficiency] We prove that constraints \eqref{eq:signal-monotonicity}--\eqref{eq:non-negativity} imply all the necessary constraints \eqref{cons:obedience}, \eqref{cons:IR} and \eqref{cons:IC}. The IC constraint \eqref{cons:IC} is equivalent to \begin{equation*} u(t) \geq u(t') + \int_{q \in Q} \pi(q, t') \cdot g(q) [v(q,t) - v(q,t')]\,\mathrm{d} q = u(t') + (t-t') P_{\pi}(t').
\end{equation*} Therefore, constraints \eqref{eq:signal-monotonicity} and \eqref{eq:buyer-utility-identify2} imply the IC constraint \eqref{cons:IC} because if $t' < t$, we have \begin{equation*} u(t) - u(t') = \int_{t'}^{t} P_{\pi}(x) \,\mathrm{d} x \geq \int_{t'}^{t} P_{\pi}(t') \,\mathrm{d} x = (t-t')P_{\pi}(t'). \end{equation*} Similarly, when $t' > t$, we also have $ u(t) - u(t') \geq (t-t')P_{\pi}(t')$. The IR constraint \eqref{cons:IR} is equivalent to $u(t) \geq 0$ and $u(t) \geq v(t)$. Since $P_{\pi}(x)\geq 0$, constraint \eqref{eq:buyer-utility-identify2}, together with $u(t_1) \geq 0$, implies $u(t) \geq 0$ for any $t$. We now leverage $u(t_2) \geq v(t_2)$ to prove $u(t) \geq v(t)$ for any $t$, as follows: \begin{gather*} u(t) = u(t_1) + \int_{t_1}^{t} P_{\pi}(x)\,\mathrm{d} x = u(t_2) - \int_{t}^{t_2} P_{\pi}(x)\,\mathrm{d} x \geq v(t_2) - \int_{t}^{t_2} P_{\pi}(x)\,\mathrm{d} x. \end{gather*} Using the definitions of $v(t_2)$ and $P_{\pi}(x)$, we get \begin{align*} u(t)= & \int_{q \in Q} g(q) \alpha(q)[t_2 + \beta(q)]\,\mathrm{d} q - \int_{t}^{t_2} \int_{q \in Q} \pi(q, x) g(q) \alpha(q)\,\mathrm{d} q \mathrm{d} x \\ \geq & \int_{q \in Q} g(q) \alpha(q)[t_2 + \beta(q)]\,\mathrm{d} q - \int_{t}^{t_2} \int_{q \in Q} g(q) \alpha(q)\,\mathrm{d} q \mathrm{d} x \\ = & \int_{q \in Q} g(q) \alpha(q)[t + \beta(q)] \,\mathrm{d} q \\ =& v(t) . \end{align*} Finally, the obedience constraint \eqref{cons:obedience} follows from the IR constraint \eqref{cons:IR} and $p(t) \geq 0$. \end{proof} Note that condition \eqref{eq:signal-monotonicity} is analogous to Myerson's allocation monotonicity condition in the auction design problem, but differs in that the value of an item in auction design depends only on the buyer type $t$, with no weight associated with it. In information selling, the value of taking the active action depends on the utility coefficient $\alpha(q)$. Next we characterize the buyer's surplus $s(t) = u(t) - \qopname\relax n{max} \{ 0, v(t) \}$, as expressed in Equation \eqref{eq:buyer-surplus}, from participating in the information selling mechanism. Recall that, with only the prior information, a buyer of type $t$ has expected utility $v(t) = \int_{q \in Q} v(q, t) g(q)\,\mathrm{d} q $ for the active action. Since $v(q, t)$ is monotone non-decreasing in $t$, we know that $v(t)$ is also monotone non-decreasing. Let $\bar{t}$ be any buyer type at which $v(t)=0$. The following lemma characterizes how the buyer's surplus changes as a function of his type. \begin{lemma}\label{lem:surplus-concave} Let $\bar{t}$ be any buyer type such that $v(\bar{t}) = \int_{q \in Q} v(q, \bar{t}) g(q) \,\mathrm{d} q = 0 $. In any feasible mechanism $(\pi, p)$ with non-negative payments, the buyer's surplus $s(t) $ is monotone non-decreasing for $t \in [t_1, \bar{t}]$ and monotone non-increasing for $t \in [\bar{t}, t_2]$.\footnote{$\bar{t}$ can be any one of them if there are multiple $t$ such that $v(t)=0$. If no $\bar{t}\in [t_1, t_2]$ makes $v(\bar{t})=0$, then either $v(t)<0$ or $v(t)>0$ for any $t\in T$, and in this case $s(t)$ is monotone within $T$. } \end{lemma} \begin{proof} When $t \leq \bar{t}$, we have $v(t) \leq 0$. Therefore, without participating in the mechanism to purchase additional information, the buyer will get maximum utility $0$ by taking the passive action. So his surplus for participation is $$s(t) = u(t) = u(t_1) + \int_{t_1}^{t} P_{\pi}(x) \,\mathrm{d} x$$ by the utility identity in Equation \eqref{eq:buyer-utility-identify2}.
Since $u(t_1) \geq 0$ and $P_{\pi}(x) \geq 0$, it is easy to see that $s(t)$ is non-negative and monotone non-decreasing in $t$. When $t \geq \bar{t}$, we have $v(t) \geq 0$. So the buyer's maximum utility is $v(t)$ without participating in the information selling mechanism. We thus have \begin{align*} s(t) =& u(t) - v(t) \\ =& \left[ u(t_1) + \int_{t_1}^{t} \int_{q \in Q} \pi(q,x) \alpha(q) g(q) \,\mathrm{d} q \mathrm{d} x \right] - \left[ \int_{q \in Q} \alpha(q) [t + \beta(q)] g(q) \,\mathrm{d} q \right] \\ =& \left[ u(t_1) + \int_{t_1}^{t} \int_{q \in Q} \pi(q,x) \alpha(q) g(q) \,\mathrm{d} q\mathrm{d} x \right] - \left[ \int_{t_1}^{t } \int_{q \in Q} \alpha(q) g(q) \,\mathrm{d} q\mathrm{d} x + v(t_1) \right] \\ =& u(t_1) - v(t_1) + \left[ \int_{t_1}^{t} \int_{q \in Q} [\pi(q,x) - 1] \alpha(q) g(q) \,\mathrm{d} q\mathrm{d} x \right]. \end{align*} Since $\pi(q,x) - 1 \leq 0 $ and $\alpha(q) g(q) \geq 0$, we thus have that $s(t)$ is monotone non-increasing in $t$. Notably, $ s(t) \geq s(t_2) = u(t_2) - v(t_2) \geq 0$ by inequality \eqref{eq:ir-t2}. \end{proof} \section{Introduction} \label{sec:intro} In numerous situations, a decision maker wishes to take an active move but is uncertain about its outcome and payoff. Such active moves range from financial decisions of investing in a stock or startup to daily-life decisions of purchasing a house or a used car, and from macro-level enterprise decisions of developing a new product to micro-level decisions of approving a loan applicant or displaying online ads to a particular Internet user. In all these situations, the decision maker's payoff for the active move relies on uncertain information regarding, e.g., the potential of the invested company, the quality of the house, the popularity of the new product, or the credit of the loan applicant. Certainly, the decision maker typically also has a passive backup option of not making the move, in which case he obtains a safe utility without any risk. To decide between the \emph{active} and the \emph{passive} action, the decision maker can turn to an information seller who can access more accurate information about the uncertainties and thus help to better estimate the payoff of his action. Given the value of the seller's information to the decision maker, the seller can make a profit based on how much the information improves the utility of the decision maker, i.e., the information buyer. This paper studies how a monopolistic information \emph{seller} (she) can design an optimal pricing mechanism to sell her information to an information \emph{buyer} (he). The buyer (a decision maker) needs to take one of two actions. The \emph{active} action results in a payoff $v(q,t)$, where $t$ captures the buyer's private \emph{type} and the \emph{state of nature} $q$ summarizes the payoff-relevant uncertainty unknown to the buyer. The \emph{passive} action for the buyer always results in the same utility, normalized to $0$, regardless of $q,t$. Both $q$ and $t$ are random variables drawn independently from publicly known distributions. That is, the type $t$ captures the buyer's private preference and is assumed to be irrelevant to the informational variable $q$.\footnote{This independence assumption is relaxed in subsection \ref{section-dependence}.} The seller can design experiments to reveal partial information about state $q$, and would like to design an optimal mechanism to sell her information to a buyer randomly drawn from the type distribution.
We assume both the experiment itself and its outcomes (i.e., realized signals) are contractible. As an example, consider a credit assessment company selling credit information to a loan company. In the loan company's payoff function $v(q,t)$ for the active action, the informational variable $q$ captures the credit information of a randomly arriving loan applicant and can only be observed by the credit company. Type $t$ captures the loan company's profit from the loan given that the applicant pays back the loan on time, and is independent of the applicant's credit information $q$. While our model allows $q,t$ to be abstract variables from measurable sets in general (e.g., $q$ may contain employment history, loan history, etc.), it will be conceptually convenient to think of $q,t$ as numerical variables. For instance, consider $v(q,t) = qt- 2$ where: (1) $t$ is the loan company's profit from the loan; (2) $q\in[0,1]$ is a particular applicant's payback rate, which can be estimated by the credit company through data-driven prediction techniques today; (3) the constant $2$ accounts for operating costs. The \emph{passive action} of rejecting the loan applicant results in utility $0$. We shall characterize the credit company's optimal mechanism for selling its payback rate prediction $q$. The problem setup described above is a very basic monopoly pricing problem. However, the problem of selling information turns out to differ significantly from the classic pricing problem for selling goods. First, when selling (physical or digital) goods, the seller's allocation rule can be described by a probability of giving out the goods, and a risk-neutral buyer's utility is linear in the allocation variable. However, when revealing information to a buyer through experiments, the design variable of an experiment for each buyer type is high-dimensional, or can even be a functional when the state is a continuum. Moreover, the buyer's utility is generally non-linear in the variables that describe an experiment \citep{bergemann2019information}. Second, in selling goods, any individually rational buyer would participate as long as his expected utility is at least $0$. However, in our setup of selling information, the buyer may already have positive utility from his active action even without participating in the mechanism. An individually rational buyer would participate in the mechanism only when his utility becomes even higher. These differences make the seller's optimization task more challenging. This will be evident later in our characterization of the optimal mechanism, which turns out to be significantly different from, and arguably more intricate than, the optimal pricing mechanism for selling goods by \cite{myerson81}. \subsection*{Main Result} We consider the above information selling problem and characterize in closed form the revenue-optimal mechanism, among all \emph{sequential mechanisms}, which include all possible ways through which the seller may sequentially reveal information and ask for payments. To simplify the exposition, we assume that the buyer's value function is \emph{linear} and \emph{monotone non-decreasing} in $t$, i.e., $v(q,t) = \alpha(q)[t+\beta(q)]$ for some $\alpha(q) \geq 0$ and $\beta(q)$. In Subsection \ref{generalized-utility}, we discuss how our analysis and results can be generalized to any convex and monotone (in $t$) value functions $v(q,t)$.
Assuming $v(q,t) = \alpha(q)[t+\beta(q)]$, we show that there always exists an optimal mechanism of a simple format --- a multi-entry menu in which each entry contains a threshold experiment and a payment, one entry for each buyer type. In this optimal mechanism, the buyer is incentivized to report his true type $t$ first.\footnote{Equivalently, it is in the best interest of each buyer type to choose the particular menu entry intended for him. That is, the mechanism is incentive compatible. } The seller then charges the buyer $p_{t}$ and, afterwards, designs an experiment to reveal whether the realized state $q$ satisfies $\beta(q) \geq \theta_t$ or not for some carefully chosen threshold $\theta_t$. We thus call the mechanism a \emph{threshold mechanism}. The thresholds and payments generally vary for different buyer types, and are carefully designed to accommodate the amount of risk each buyer type can tolerate. That is, the optimal mechanism features both price discrimination and information discrimination. We fully characterize the threshold and payment in the optimal mechanism. Depending on the setting, the negative of the threshold (i.e., $-\theta_t$) turns out to equal either the (\emph{lower}) virtual value of type $t$ as defined by \cite{myerson81}, or its variant which we coin the \emph{upper} virtual value, or a novel convex combination of both coined the \emph{mixed} virtual value. The above optimal mechanism exhibits multiple interesting properties. First, the optimal mechanism turns out to only need to price the experiment with one round of information revelation, even though the seller in our model is allowed to price experiment outcomes (i.e., signals) and use multiple rounds of information revelation. This is due to the independence of the informational variable $q$ and buyer type $t$, which makes an upfront payment and an ``aggregated'' experiment without loss of generality. Second, the special buyer type $\bar{t}$ who is a-priori indifferent between the active and passive actions has the largest surplus from participating in the mechanism. This is aligned with our intuition that this buyer type should benefit the most from additional information since the two actions appear indistinguishable to him a-priori. Moreover, we show that the buyer surplus as a function of his type $t$ is increasing and convex when $t\leq \bar{t}$ but immediately transitions to be decreasing and convex when $t \geq \bar{t}$. However, the buyer payment may be increasing or decreasing in $t$, depending on the setting. Third, information discrimination turns out to be crucial for revenue. We show that if information discrimination is not allowed, i.e., if the same experiment must be used for all buyer types, then the best the seller can do in this case is to reveal full information and charge Myerson's reserve price. We demonstrate via an example that the revenue in this case may be arbitrarily worse than the optimal revenue. However, under the monotone hazard rate assumption on the buyer type distribution, we show that the optimal single-entry menu can always guarantee at least a $1/e$ ($\approx 0.368$) fraction of the optimal revenue. \subsection*{Related Works} \textbf{Related works on selling information.} The most related literature to our work is the recent study by \cite{bergemann2018design}, who also consider selling information to a decision maker. In their model, the state of nature affects the payoff of every action.
They characterize the optimal mechanism for the special cases with binary states and actions or with binary buyer types, whereas only partial properties of the optimal mechanism can be derived for the general case. In contrast, in our setup the state only affects the payoff of the buyer's active action. This restriction allows us to characterize the closed-form solution of the optimal mechanism with many (even continuous) states and buyer types, and for general buyer payoff functions. Moreover, our design space of mechanisms allows multiple rounds of information revelation and also allows contracting the experiment outcomes (i.e., realized signals), though it turns out that the optimal mechanism only needs to price one-round experiments.\footnote{This was first observed by \cite{Babaioff12}; we will provide a formal argument later for completeness.} While \cite{bergemann2018design} also restrict their design space to mechanisms that only price one-round experiments, they pointed out that this restriction does lose generality in their general setup. That is, the seller may derive strictly more revenue by using multiple rounds of experiments or by contracting the experiment outcomes. \cite{advice} studied the pricing of advice in a principal-agent model motivated by consulting. The principal, acting as a consultant in their model, can contract on the agent's actions. With such strong bargaining power, their main result shows that even if the principal observes completely irrelevant information about the agent's payoffs, she can still obtain revenue as high as in the situation where she fully observes the agent's payoffs. However, unlike consulting services, our model of information selling assumes that only the information itself (i.e., the experiment or the experiment outcomes) is contractible and the buyer's actions are not contractible. Therefore, the main result of \cite{advice} clearly does not hold in our model --- if the seller's information is irrelevant to the buyer's payoffs in our model, she will certainly get zero revenue. Interestingly, the format of our optimal mechanism turns out to bear a somewhat similar structure to the optimal contract of \cite{advice}; however, our results are derived through different techniques and apply to much more general buyer value functions, whereas \cite{advice} restrict to simpler agent utility functions (i.e., the sum of the agent type and the state) and only log-concave agent type distributions. \cite{horner2016selling} study the problem where a firm faces a decision on whether to hire an agent, who has a binary private type, i.e., competent or not. The firm and agent can interact for many rounds by making monetary transfers and taking tests to elicit information about the agent's type. They analyze the equilibrium when the number of rounds of interactions grows large. Both the model and the nature of their results are different from ours. There has also been recent interest in algorithmic studies that formulate optimization programs to compute the optimal mechanism for selling information to a decision maker. \cite{Babaioff12} prove revelation principle types of results and characterize the format of the optimal mechanism, depending on whether the state and buyer type are correlated or not; they then develop optimization programs to compute the optimal mechanism. The efficiency of solving these programs was later improved by \cite{chen2020selling}.
\textbf{Information revelation while selling goods.} Another relevant yet significantly different problem is the \emph{joint} design of an information revelation scheme and a mechanism for selling goods, when the seller has private information about the goods. \cite{esHo2007optimal} study revenue-maximizing mechanisms for selling a single indivisible good to multiple buyers when the auctioneer can release, without observing, additional signals about the item. \cite{reverse} derive a closed-form optimal mechanism for selling goods to a single buyer with strategic disclosure of the seller's private information. In both models, it is still primarily the goods that are sold, although, intuitively, part of their price includes a charge for the revealed information. In our setting, by contrast, the seller is a pure information seller without goods. This leads to significant technical differences in determining participation constraints and how much information to reveal, and consequently leads to different mechanism properties. For instance, the payment function in their solutions is monotone decreasing in the buyer type, whereas the payment in our optimal mechanism may not even be monotone in the buyer type. On the technical side, both works above rely on the Monotone Hazard Rate (MHR) assumption on the buyer's type distribution, whereas our results apply to general distributions. From the optimal mechanism design perspective, \cite{10.1145/2940716.2940789} show that the joint design of signaling schemes and auction mechanisms reduces to the design of multi-item auctions with additive buyer values. \cite{bergemann2021selling} recently study information revelation in second-price auctions, motivated by the sale of impressions in online advertising.

\textbf{Contract design with outside options.} Our model is conceptually related to contract design with countervailing incentives in principal-agent models with outside options \citep{countervailing1,countervailing3,countervailing2,countervailing4}. However, both the seller's objective and the design space (e.g., information revelation schemes) in our model are significantly different. For instance, the principal's payoff depends on the agent's actions in these agency problems, whereas the seller's revenue depends only on the buyer's payment and nothing else. From a technical point of view, the work most related to ours is that of \cite{countervailing2}. They systematically consider how the shape of the agent's outside option affects the agent's participation constraint, and consequently the format of the optimal mechanism. This is also one of the key technical challenges we had to address. However, the nature of their results crucially differs from ours --- they cast the model as an optimal control problem and then analyze its properties, whereas we directly derive a closed-form optimal solution.

\textbf{Information design.} Finally, our work is also relevant to the recent rich body of work on information design, a.k.a.\ Bayesian persuasion \citep{kamenica2011bayesian}. Specifically, the literature most relevant to ours is the persuasion problem with a privately informed receiver \citep{kolotilin2017persuasion,guo2019interval}. Similar to us, both papers study models with binary receiver actions. However, the design objectives of persuasion and of selling information are quite different, and thus the solutions are not directly comparable.
Like us, \cite{kolotilin2017persuasion} also assume independence between the sender's information and the receiver's private type; however, the upper-censorship (or lower-censorship) structure of their optimal signaling scheme differs from our threshold experiments. \cite{guo2019interval} study persuasion when the receiver has private information about the state, captured as his type. They show a nested interval structure of the optimal signaling scheme, which is relevant to, yet still different from, the threshold structure of our optimal disclosure.

\subsection{Ironing Non-Regular Virtual Values for Case 3}

As mentioned in Lemma \ref{midcase}, we can simplify the optimization problem to
\begin{align*}
\qopname\relax n{max}_{\pi} & \int_{t_1}^{t_2} \int_{q \in Q} g(q) \pi(q,t) v_1(q) f(t) [\underline{\phi}(t) + \rho(q)]\,\mathrm{d}q \mathrm{d}t \\
\text{subject to} & \int_{t_1}^{t_2} \int_{q \in Q} g(q) \pi(q,t) v_1(q) \,\mathrm{d}q \mathrm{d}t = v(t_2)\\
& P(t) \text{ is monotone in } t
\end{align*}
where $\underline{\phi}(t) = t - \frac{1-F(t)}{f(t)}$ is the lower virtual value function. Without regularity, i.e., when $\widetilde{\phi}_q(t) = f(t)[\underline{\phi}(t) + \rho(q)] = f(t)t + F(t) + f(t)\rho(q) -1$ fails to be non-decreasing in $t$ for some $q$, this mechanism will not be feasible, since $P(t)$ will not be monotone non-decreasing in $t$. We therefore use the ironing trick of \cite{RePEc:inm:ormoor:v:6:y:1981:i:1:p:58-73} to extend our solution to the general case.

For an information buyer of type $t$, we now define four functions, each with the unit interval $[0,1]$ as its domain. First, for any $w \in [0,1]$, let
$$ h_q(w) = f(F^{-1}(w))F^{-1}(w) + w + f(F^{-1}(w))\rho(q) - 1 = \widetilde{\phi}_q(F^{-1}(w)), $$
and let
$$ H_q(w) = \int_{0}^{w} h_q(r) \,\mathrm{d}r. $$
Next, let $L_q: [0,1] \rightarrow \mathbb{R}$ be the convex hull of the function $H_q(\cdot)$,
\begin{align*}
L_q(w) &= \text{conv} \; H_q(w)\\
&= \qopname\relax n{min}\{\omega H_q(r_1) + (1-\omega)H_q(r_2) : \omega,r_1,r_2 \in [0,1] \; \text{and} \; \omega r_1 + (1-\omega)r_2 = w \}.
\end{align*}
That is, $L_q(\cdot)$ is the highest convex function on $[0,1]$ such that $L_q(w) \leq H_q(w)$ for all $w$. As a convex function, $L_q$ is continuously differentiable except at countably many points, and its derivative is monotone non-decreasing. We define $l_q: [0,1] \rightarrow \mathbb{R}$ so that
$$ l_q(w) = L_q'(w) $$
whenever this derivative is defined, and we extend $l_q(\cdot)$ to all of $[0,1]$ by right-continuity. We define $\bar{c}_q : [t_1,t_2] \rightarrow \mathbb{R}$ so that
$$ \bar{c}_q(t) = l_q(F(t)). $$
It is straightforward to check that, in the regular case, when $\widetilde{\phi}_q(\cdot)$ is non-decreasing, we get $L_q = H_q$, $l_q = h_q$, and $\bar{c}_q = \widetilde{\phi}_q$. We can now state our main result: the optimal mechanism is a threshold mechanism on $\rho(q)$ with threshold $\theta(t) = \bar{c}_{t}^{-1}(C)$, where $C$ is a constant between $-1$ and $0$ satisfying $\iint_{(t,q):\, \bar{c}_q(t) \geq C} g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t = v(t_2)$. \sz{After ironing, we do not know whether $\bar{c}_q(t)$ is increasing in $\rho(q)$, and we do not have a formula for $\bar{c}_q(t)$, so we cannot define the threshold in closed form. But we can assume it for now.}

\begin{lemma} \label{ironing-lemma}
Consider the recommendation policy
\begin{gather*}
\bar{\pi}(q,t)=
\begin{cases}
1 & \text{ if } \rho(q)\ge \bar{c}_{t}^{-1}(C) \\
0 & \text{ otherwise}
\end{cases}.
\end{gather*}
and payment function
$$ \bar{p}(t) = \int_{q\in Q} \bar{\pi}(q,t)g(q) v(q,t) \,\mathrm{d}q - \int_{t_1}^{t}\int_{q \in Q} \bar{\pi}(q,x) g(q) v_1(q)\,\mathrm{d}q \mathrm{d}x. $$
Then $(\bar{\pi}(q,t), \bar{p}(t))$ represents an optimal information selling mechanism for Lemma \ref{lem:optimal-case1} without requiring that $\widetilde{\phi}_q(t)$ be non-decreasing for every $q$.
\end{lemma}
\begin{proof}
\begin{align}
REV (\pi, p) =&\int_{q \in Q} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t) v_1(q) [\underline{\phi}(t) + \rho(q)]\,\mathrm{d}t \right] \mathrm{d}q -u(t_1) \nonumber \\
= &\int_{q \in Q} \int_{t_1}^{t_2} [\underline{\phi}(t) + \rho(q)] f(t)\pi(q,t) g(q) v_1(q) \,\mathrm{d}t \mathrm{d}q -u(t_1) \nonumber \\
= &\int_{q \in Q} \int_{t_1}^{t_2} h_q(F(t)) \pi(q,t) g(q) v_1(q) \,\mathrm{d}t \mathrm{d}q -u(t_1) \nonumber \\
= &\int_{q \in Q} \int_{t_1}^{t_2} \bar{c}_q(t) \pi(q,t) g(q) v_1(q) \,\mathrm{d}t \mathrm{d}q \nonumber\\
&+ \int_{q \in Q} \int_{t_1}^{t_2} [h_q(F(t))- l_q(F(t))]\pi(q,t) g(q) v_1(q) \,\mathrm{d}t \mathrm{d}q -u(t_1) \nonumber \\
= &\int_{q \in Q} \int_{t_1}^{t_2} \bar{c}_q(t) \pi(q,t) g(q) v_1(q) \,\mathrm{d}t \mathrm{d}q \nonumber \\
& - \int_{t_1}^{t_2} [H(F(t))- L(F(t))] \,\mathrm{d} P(t) -u(t_1) \label{ironing-rev}
\end{align}
Now consider $(\bar{\pi}(q,t), \bar{p}(t))$ as defined in the lemma. Observe that $\bar{\pi}(q,t)$ sets $\pi(q,t) = 1$ for all $q,t$ such that $\rho(q)\ge \bar{c}_{t}^{-1}(C)$, i.e., for all pairs with $\bar{c}_q(t) \geq C$. Thus, any other $\pi(q,t)$ satisfying the constraint can only shift weight from pairs with $\bar{c}_q(t) \geq C$ to pairs with $\bar{c}_q(t) < C$. So we have
\begin{equation} \label{ironing-1}
\int_{q \in Q} \int_{t_1}^{t_2} \bar{c}_q(t) \bar{\pi}(q,t) g(q) v_1(q) \,\mathrm{d}t \mathrm{d}q \geq \int_{q \in Q} \int_{t_1}^{t_2} \bar{c}_q(t) \pi(q,t) g(q) v_1(q) \,\mathrm{d}t \mathrm{d}q.
\end{equation}
Recall that
$$ P(t) = \int_{q \in Q} \pi(q, t) g(q) v_1(q)\,\mathrm{d}q. $$
Using integration by parts, we derive the following equations. \sz{The first term is weakly increasing; how to prove something analogous for the expression below is the critical step, since we do not have $f(t)$ in the second term.}
\begin{align*}
& \int_{q \in Q} \int_{t_1}^{t_2} [h(F(t))- l(F(t))] f(t)\pi(q,t) g(q) v_1(q) \,\mathrm{d}t \mathrm{d}q \\
&= \int_{t_1}^{t_2} [h(F(t))- l(F(t))] f(t)P(t) \,\mathrm{d}t \\
&= [H(F(t))- L(F(t))] P(t) \big|_{t=t_1}^{t_2} - \int_{t_1}^{t_2} [H(F(t))- L(F(t))] \,\mathrm{d} P(t),
\end{align*}
whereas, without the $f(t)$ factor,
\begin{align*}
& \int_{q \in Q} \int_{t_1}^{t_2} [h(F(t))- l(F(t))]\pi(q,t) g(q) v_1(q) \,\mathrm{d}t \mathrm{d}q = \int_{t_1}^{t_2} [h(F(t))- l(F(t))]P(t) \,\mathrm{d}t.
\end{align*}
To integrate the latter by parts, we would need an antiderivative of $h(F(t))$. One step of integration by parts gives
\begin{align*}
\int h(F(t)) \,\mathrm{d}t = \int \frac{1}{f(t)} h(F(t))f(t) \,\mathrm{d}t = \frac{H(F(t))}{f(t)} - \int \Big(\frac{1}{f(t)}\Big)' H(F(t)) \,\mathrm{d}t,
\end{align*}
and the remaining integral involves $H(F(t))$ rather than $h(F(t))f(t)$, so repeated integration by parts does not telescope in general.

Because $L$ is the convex hull of $H$ on $[0,1]$ and $H$ is continuous, we have $L(0) = H(0)$ and $L(1) = H(1)$. Thus the endpoint terms in $[H(F(t))- L(F(t))] P(t) \big|_{t=t_1}^{t_2}$ are $0$. With $\bar{p}(t)$, we get $u(t_1) = 0$. For any $\pi(q,t)$ which satisfies \eqref{eq:signal-monotonicity}, since $H \geq L$ and we restrict attention to feasible mechanisms where $P(t)$ is monotone non-decreasing, we must have
\begin{equation} \label{ironing-2}
\int_{t_1}^{t_2} [H(F(t))- L(F(t))] \,\mathrm{d} P(t) \geq 0.
\end{equation}
To see that $\bar{P}(t) = \int_{q \in Q} \bar{\pi}(q, t) g(q) v_1(q)\,\mathrm{d}q$ satisfies \eqref{eq:signal-monotonicity}, observe first that $\bar{c}_q(t)$ is a non-decreasing function of $t$, because $F$ and $l_q$ are both non-decreasing functions.
Thus, $\bar{\pi}(q,t)$ is non-decreasing in $t$, and $\bar{P}(t)$ is also non-decreasing in $t$. Since $L$ is the convex hull of $H$, we know that $L$ must be flat whenever $L < H$; that is, if $L(r)<H(r)$ then $l'(r) = L''(r) = 0$. So, if $H(F(t))-L(F(t)) > 0$, then $\bar{c}_q(t)$ and $\bar{P}(t)$ are constant in some neighborhood of $t$. This implies that
\begin{equation} \label{ironing-3}
\int_{t_1}^{t_2} [H(F(t))- L(F(t))] \,\mathrm{d} \bar{P}(t) = 0.
\end{equation}
Substituting \eqref{ironing-1}, \eqref{ironing-2} and \eqref{ironing-3} into \eqref{ironing-rev}, we see that $\bar{\pi}(q,t)$ maximizes the revenue function, subject to \eqref{eq:signal-monotonicity}. This fact, together with Lemma \ref{lem:optimal-case1}, proves our main theorem, Theorem \ref{thm:opt-scheme}.
\end{proof}

\subsection{Ironing Non-Regular Virtual Values}

\todo{need to revise this section to make it nicer and smoother}

When $\underline{\phi}(t) = t - \frac{1-F(t)}{f(t)}$ fails to be non-decreasing in $t$, the mechanism above is not feasible, since $P(t)$ will not be monotone non-decreasing in $t$. We therefore use the ironing trick introduced by \cite{RePEc:inm:ormoor:v:6:y:1981:i:1:p:58-73} and defined in Definition \ref{def:ironing} to extend our solution to the general case. We can then extend our lemma on Case 1 to a more general case without requiring that $\underline{\phi}(t)$ be non-decreasing, since we can iron $\underline{\phi}(t)$ to obtain a non-decreasing $\underline{\phi}^+(t)$. We now show in Lemma \ref{ironing-lemma} that, after ironing, the revenue is weakly increased.
\begin{lemma} \label{ironing-lemma}
Consider the recommendation policy
\begin{gather*}
{\pi}^*(q,t)=
\begin{cases}
1 & \text{ if } \rho(q)\ge -\underline{\phi}^+(t)\\
0 & \text{ otherwise}
\end{cases}
\end{gather*}
and payment function
$$ {p}^*(t) = \int_{q\in Q} {\pi}^*(q,t)g(q) v(q,t) \,\mathrm{d} q - \int_{t_1}^{t}P_{{\pi}^*}(x) \,\mathrm{d} x. $$
Then $({\pi}^*, {p}^*)$ represents an optimal information selling mechanism for Lemma \ref{lem:optimal-case1} without requiring that $\underline{\phi}(t)$ be non-decreasing.
\end{lemma}
\begin{proof}
When the lower virtual value function is not monotone non-decreasing, the mechanism defined in Lemma \ref{lem:optimal-case1} is infeasible. We still consider the following revenue function for any feasible mechanism:
\begin{gather*}
REV ({\pi}, {p})=\int_{q \in Q} \int_{t_1}^{t_2} \left[\underline{\phi}(t) + \rho(q)\right] f(t){\pi}(q,t) g(q) v_1(q) \,\mathrm{d} t \mathrm{d} q-u(t_1).
\end{gather*}
The proof contains the following two steps:
\begin{enumerate}
\item Show that mechanism $(\pi^*,p^*)$ is feasible, so the above revenue function applies;
\item Prove that mechanism $(\pi^*,p^*)$ maximizes the revenue function.
\end{enumerate}
The first step is omitted, as it is the same as in the proof of Lemma \ref{lem:optimal-case1}. We start step 2 by manipulating the revenue function. Let $\underline{H}(\cdot)$, $\underline{h}(\cdot)$, $\underline{L}(\cdot)$ and $\underline{l}(\cdot)$ be the corresponding functions defined in Definition \ref{def:ironing} when ironing the lower virtual value function $\underline{\phi}(t)$.
We can write the first term of the revenue function as follows:
\begin{align*}
&\int_{q \in Q} \int_{t_1}^{t_2} \left[\underline{\phi}(t) + \rho(q)\right] f(t){\pi}(q,t) g(q) v_1(q) \,\mathrm{d} t \mathrm{d} q \\
= &\int_{q \in Q} \int_{t_1}^{t_2} \left[\underline{\phi}^+(t) + \rho(q)\right] f(t){\pi}(q,t) g(q) v_1(q) \,\mathrm{d} t \mathrm{d} q \\
&+ \int_{q \in Q} \int_{t_1}^{t_2} [\underline{h}(F(t))- \underline{l}(F(t))] f(t){\pi}(q,t) g(q) v_1(q) \,\mathrm{d} t \mathrm{d} q.
\end{align*}
This is because, by definition, $\underline{\phi}^+(t)=\underline{l}(F(t))$ and $\underline{\phi}(t)=\underline{h}(F(t))$. Using integration by parts, we can simplify the second term:
\begin{align*}
& \int_{q \in Q} \int_{t_1}^{t_2} [\underline{h}(F(t))- \underline{l}(F(t))] f(t){\pi}(q,t) g(q) v_1(q) \,\mathrm{d} t \mathrm{d} q \\
=& \int_{t_1}^{t_2} [\underline{h}(F(t))- \underline{l}(F(t))] P_{{\pi}}(t) \,\mathrm{d} F(t) \\
=& [\underline{H}(F(t))- \underline{L}(F(t))] P_{{\pi}}(t) \big|_{t_1}^{t_2} - \int_{t_1}^{t_2} [\underline{H}(F(t))- \underline{L}(F(t))] \,\mathrm{d} P_{{\pi}}(t).
\end{align*}
Because $\underline{L}$ is the ``convex hull'' of $\underline{H}$ on $[0,1]$, we have $\underline{L}(0) = \underline{H}(0)$ and $\underline{L}(1) = \underline{H}(1)$. Thus the term $[\underline{H}(F(t))- \underline{L}(F(t))] P_{{\pi}}(t)\big|_{t_1}^{t_2}$ is simply $0$, and we have
\begin{align}
REV ({\pi}, {p}) = &\int_{q \in Q} \int_{t_1}^{t_2} \left[\underline{\phi}^+(t) + \rho(q)\right] f(t){\pi}(q,t) g(q) v_1(q) \,\mathrm{d} t \mathrm{d} q \nonumber \\
& - \int_{t_1}^{t_2} [\underline{H}(F(t))- \underline{L}(F(t))] \,\mathrm{d} P_{{\pi}}(t) -u(t_1). \label{ironing-rev}
\end{align}
Now consider mechanism $({\pi}^*, {p}^*)$. The policy $\pi^*$ maximizes the first term, since ${\pi}^*(q,t) = 1$ for all $q,t$ with $\rho(q)+\underline{\phi}^+(t)\ge 0$. Also, by definition, we have
\begin{gather*}
u(t)=\int_{q\in Q} {\pi}^*(q,t)g(q) v(q,t) \,\mathrm{d} q-p(t)=\int_{t_1}^{t}P_{{\pi^*}}(x) \,\mathrm{d} x.
\end{gather*}
Thus we have $u(t_1)=0$. As for the second term, note that $\underline{H}(F(t))- \underline{L}(F(t))\ge 0$ by definition, and $\mathrm{d} P_{\pi}(t)\ge0$ for any feasible mechanism. Thus the second term is always non-negative. However, we claim that with mechanism $({\pi}^*, {p}^*)$, this term is actually $0$. The only interesting case is when $\underline{H}(F(t))> \underline{L}(F(t))$. In this case, $t$ must lie in an ironed interval $I$, so $\underline{L}(z)$ is linear on the interval $I$, where $z=F(t)$. This implies that $\underline{\phi}^+(t)=\underline{l}(z)=\underline{L}'(z)$ is constant. So
\begin{gather*}
P_{{\pi}^*}(t)=\int_{ q \in Q}{\pi}^*(q,t)g(q)v_1(q)\,\mathrm{d} q=\int_{ q :\rho(q)\ge -\underline{\phi}^+(t)}g(q)v_1(q)\,\mathrm{d} q
\end{gather*}
is also constant on the interval $I$, which makes $\mathrm{d} P_{{\pi}^*}(t)$ equal to $0$ there. Therefore, mechanism $({\pi}^*, {p}^*)$ optimizes all three terms in Equation \eqref{ironing-rev} simultaneously, and is hence optimal.
\end{proof}

\textbf{For Case 2.} We can also apply the ironing procedure defined in Lemma \ref{ironing-lemma} to Lemma \ref{lem:optimal-case2} with $\bar{\phi}(t) = t + \frac{F(t)}{f(t)}$, because the objective function there is similar. The proof of Lemma \ref{lem:optimal-case2} carries over, except that we should additionally prove that, under the ironing procedure and with $\bar{\theta}(t) = -\bar{\phi}^+(t)$, we still have $\bar{\theta}(t) \leq \bar{\theta}(t') \leq -t'$ for all $t\geq t'$. This still holds by the following argument. Suppose there are $k$ ironing intervals.
Ironing interval $i$ spans $[t^i_{s},t^i_{e}]$, and for all $t \in [t^i_{s},t^i_{e}]$ we have $\bar{\phi}^+(t^i_{e}) = \bar{\phi}^+(t) = \bar{\phi}^+(t^i_{s})$. For any $t$ not in any interval $[t^i_{s},t^i_{e}]$, we have $\bar{\phi}^+(t) = \bar{\phi}(t) \geq t$, and thus $\bar{\theta}(t) \leq -t$. For any $t$ in some interval $[t^i_{s},t^i_{e}]$, we have $\bar{\phi}^+(t) = l(F(t)) = l(F(t^i_{e})) = L'(F(t^i_{e})) \geq H'(F(t^i_{e})) = h(F(t^i_{e})) = \bar{\phi}(t^i_{e}) \geq t^i_{e} \geq t$, and thus $\bar{\theta}(t) \leq -t$ again. Since $\bar{\phi}^+(t)$ is non-decreasing in $t$ and $\bar{\theta}(t) = -\bar{\phi}^+(t)$, the threshold $\bar{\theta}(t)$ is non-increasing in $t$. We can therefore conclude that $\bar{\theta}(t) \leq \bar{\theta}(t') \leq -t'$ for all $t\geq t'$. So the ironing procedure also works for Lemma \ref{lem:optimal-case2}.

We now discuss how to solve Case 3, where $V_L < v(t_2) < V_H$, and how to iron accordingly.
\begin{lemma} \label{midcase}
Suppose $V_L < v(t_2) < V_H$, and consider the combined virtual value function $\widetilde{\phi}^+(t)$, the ironed version of $\widetilde{\phi}(t) = C \underline{\phi}(t) + (1-C) \bar{\phi}(t)$, where $C \in (0,1)$ is a constant that satisfies $\int_{t_1}^{t_2}\int_{q:\, \rho(q) \geq -\widetilde{\phi}^+(t)} g(q) v_1(q)\,\mathrm{d}q \mathrm{d}t = v(t_2)$. Then the optimal mechanism is a threshold mechanism with threshold $\widetilde{\theta}(t) =-\widetilde{\phi}^+(t)$ for each type $t$. The payment is determined by the following equation, and is monotone non-decreasing in $t$ when $F(t)\leq C$ and monotone non-increasing when $F(t)>C$:
\begin{gather*}
p(t) = \int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d}q - \int_{t_1}^{t} P_{\widetilde{\pi}}(x)\,\mathrm{d}x.
\end{gather*}
\end{lemma}
\begin{proof}
From Equations \eqref{eq:revenue-1} and \eqref{eq:revenue-2}, we can combine the revenue function by taking a $C$ fraction of Equation \eqref{eq:revenue-1} and a $(1-C)$ fraction of Equation \eqref{eq:revenue-2}, where $C$ is an arbitrary constant. Since Equations \eqref{eq:revenue-1} and \eqref{eq:revenue-2} are just two different views of the same revenue, changing $C$ does not affect the value of the combination. We thus have the following revenue function for any mechanism with $u(t) = u(t_1) + \int_{t_1}^{t} P(x) \,\mathrm{d}x$, and its value does not depend on $C$:
\begin{align}
REV = & C \left[\int_{q \in Q} g(q)\int_{t_1}^{t_2} f(t)\pi(q,t) v_1(q) \left(\underline{\phi}(t) + \rho(q)\right)\,\mathrm{d} t \, \mathrm{d} q -u(t_1) \right] \nonumber \\
&+ (1-C) \left[\int_{q \in Q} g(q) \int_{t_1}^{t_2} f(t)\pi(q,t)v_1(q) [ \bar{\phi}(t) + \rho(q)]\,\mathrm{d}t \mathrm{d}q -u(t_2) \right] \nonumber \\
= & \int_{t_1}^{t_2}\int_{q\in Q} [C \underline{\phi}(t) + (1-C)\bar{\phi}(t) + \rho(q)] \pi(q,t) f(t) g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t - C\cdot u(t_1) - (1-C)u(t_2) \label{combined-revenue}
\end{align}
Our goal is to maximize this revenue function:
\begin{align*}
\text{max } REV (\pi, p) =&\int_{t_1}^{t_2}\int_{q\in Q} [C \underline{\phi}(t) + (1-C)\bar{\phi}(t) + \rho(q)] \pi(q,t) f(t) g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t - C\cdot u(t_1) - (1-C)u(t_2) \\
\text{subject to } & P(t) \text{ is monotone non-decreasing in } t \\
& u(t_2) \geq v(t_2), \; u(t_1) \geq 0 \\
& p(t) \geq 0, \; \forall t \in [t_1, t_2].
\end{align*}
Now, given an optimal feasible mechanism with recommendation policy $\pi^*(q,t)$ and payment function $p^*(t)$ that solves the above optimization problem, we are going to prove that there is a threshold mechanism that weakly increases this revenue. The threshold mechanism is defined as follows.
This threshold mechanism has threshold $\widetilde{\theta}(t) =-[C\underline{\phi}(t)+(1-C)\bar{\phi}(t)]$, where $C$ is a constant that satisfies
\begin{gather*}
\int_{t_1}^{t_2} \int_{q:\, \rho(q) \geq - \widetilde{\phi}(t) } g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t = v(t_2).
\end{gather*}
The recommendation policy is
\begin{gather*}
\pi(q,t)=
\begin{cases}
1 & \text{ if } \rho(q)\ge -\widetilde{\phi}(t)\\
0 & \text{ otherwise}
\end{cases},
\end{gather*}
and the payment is determined by the following equation:
\begin{gather*}
p(t) = \int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d}q - \int_{t_1}^{t} P(x)\,\mathrm{d}x.
\end{gather*}
We now prove the claim. The given optimal feasible mechanism with recommendation policy $\pi^*(q,t)$ yields the following revenue:
$$ REV^*(\pi^*, p^*) =\int_{t_1}^{t_2}\int_{q\in Q} [C \underline{\phi}(t) + (1-C)\bar{\phi}(t) + \rho(q)] \pi^*(q,t) f(t) g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t - C\cdot u^*(t_1) - (1-C)u^*(t_2). $$
Since this expression is obtained directly from Equation \eqref{combined-revenue}, changing $C$ does not affect its value. So we can set $C$ such that
$$ \int_{t_1}^{t_2} \int_{q:\, \rho(q) \geq - \widetilde{\phi}(t)} g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t = v(t_2) $$
without changing $REV^*(\pi^*, p^*)$. There exists a constant $C \in (0,1)$ satisfying this condition, because the condition of Case 3, $V_L < v(t_2) < V_H$, gives
\begin{gather*}
\qopname\relax n{max} \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \rho(q) \geq \underline{\theta}_x} g(q) v_1(q) \,\mathrm{d}q\mathrm{d}x < v(t_2) < \qopname\relax n{max} \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \rho(q) \geq \bar{\theta}_x} g(q) v_1(q) \,\mathrm{d}q\mathrm{d}x.
\end{gather*}
With the equivalence \ref{remark:case_1_condition_eq},
\begin{align*}
& v(t_2) \geq \qopname\relax n{max} \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \rho(q) \geq \bar{\theta}_x} g(q) v_1(q) \,\mathrm{d}q \mathrm{d}x \Longleftrightarrow v(t_1) \geq - \int_{t_1}^{t_2} \int_{q: \rho(q) \leq \bar{\theta}_x} g(q) v_1(q) \,\mathrm{d}q \mathrm{d}x,
\end{align*}
and by taking the contrapositive of both directions, we get the following equivalence:
\begin{align}
v(t_2) <\qopname\relax n{max} \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \rho(q) \geq \bar{\theta}_x} g(q) v_1(q) \,\mathrm{d}q \mathrm{d}x \Longleftrightarrow v(t_1) < - \int_{t_1}^{t_2} \int_{q: \rho(q) \leq \bar{\theta}_x} g(q) v_1(q) \,\mathrm{d}q \mathrm{d}x. \label{case3-v1-v2-2}
\end{align}
Since Case 3's condition includes the left-hand side of equivalence \eqref{case3-v1-v2-2}, and $g(q)v_1(q) \geq 0$ for all $q$, we get
\begin{align*}
v(t_1) < - \int_{t_1}^{t_2} \int_{q: \rho(q) \leq \bar{\theta}_x} g(q) v_1(q) \,\mathrm{d}q \mathrm{d}x \leq 0.
\end{align*}
Now, because $\underline{\theta}_x = -\underline{\phi}(x)$, $\bar{\theta}_x = -\bar{\phi}(x)$ and $v(t_1) < 0$, Case 3's condition can be written as
\begin{gather*}
\int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\underline{\phi}(t)} g(q) v_1(q) \,\mathrm{d}q\mathrm{d}x < v(t_2) < \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\bar{\phi}(t)} g(q) v_1(q) \,\mathrm{d}q\mathrm{d}x.
\end{gather*}
When $C = 0$, we have
\begin{equation} \label{C_0}
\int_{t_1}^{t_2} \int_{q:\, \rho(q) \geq - C \underline{\phi}(t) - (1-C) \bar{\phi}(t) } g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t = \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\bar{\phi}(t)} g(q) v_1(q) \,\mathrm{d}q\mathrm{d}t > v(t_2).
\end{equation}
When $C = 1$, we have
\begin{equation} \label{C_1}
\int_{t_1}^{t_2} \int_{q:\, \rho(q) \geq - C \underline{\phi}(t) - (1-C) \bar{\phi}(t) } g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t = \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\underline{\phi}(t)} g(q) v_1(q) \,\mathrm{d}q\mathrm{d}t < v(t_2).
\end{equation}
Now we show that this expression is continuous in $C \in (0,1)$.
We can write
\begin{align*}
\int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\widetilde{\phi}(t)} g(q) v_1(q) \,\mathrm{d} q\mathrm{d} t &= \int_{t_1}^{t_2} \int_{q:\, \rho(q) \geq - C \underline{\phi}(t) - (1-C) \bar{\phi}(t) } g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t \\
&= \int_{t_1}^{t_2} \int_{q:\, \rho(q) \geq - \left(t+\frac{F(t)-C}{f(t)}\right) } g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t \\
&= Y(C).
\end{align*}
\sz{See note on Lemma 4.11 for how to set $C$ and $D$.}
Since $Y(C)$ is continuous and, by \eqref{C_0} and \eqref{C_1}, $Y(1) < v(t_2) < Y(0)$, there exists $C \in (0,1)$ with $Y(C) = v(t_2)$.

Now, with $\pi(q,t)$ and $p(t)$ as in the threshold mechanism, we have
\begin{align}
u(t_1) &= \int_{q \in Q} g(q) \pi(q,t_1)v(q,t_1) \,\mathrm{d}q -p(t_1) = \int_{t_1}^{t_1} P(x) \,\mathrm{d}x = 0, \label{case3-ut1}\\
u(t_2) &= \int_{q \in Q} g(q) \pi(q,t_2)v(q,t_2) \,\mathrm{d}q -p(t_2) = \int_{t_1}^{t_2} P(x) \,\mathrm{d}x = \int_{t_1}^{t_2} \int_{q:\, \rho(q) \geq -\widetilde{\phi}(t) } g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t = v(t_2). \label{case3-ut2}
\end{align}
Since any feasible mechanism must satisfy the IR constraint \ref{eq:ir-t2}, i.e., $u^*(t_2) \geq v(t_2)$ and $u^*(t_1) \geq 0$, we have
\begin{align}
u(t_1) &\leq u^*(t_1), \label{case3-opt1} \\
u(t_2) &\leq u^*(t_2). \label{case3-opt2}
\end{align}
Since the threshold policy satisfies $\pi(q,t) = 1 \geq \pi^*(q,t)$ whenever $C \underline{\phi}(t) + (1-C)\bar{\phi}(t) + \rho(q) \geq 0$ and $\pi(q,t) = 0 \leq \pi^*(q,t)$ whenever $C \underline{\phi}(t) + (1-C)\bar{\phi}(t) + \rho(q) < 0$, and $f(t)g(q)v_1(q) \geq 0$, we have
\begin{align}
\int_{t_1}^{t_2}\int_{q\in Q} [C \underline{\phi}(t) + (1-C)\bar{\phi}(t) + \rho(q)] \pi(q,t) f(t) g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t \geq \nonumber \\
\int_{t_1}^{t_2}\int_{q\in Q} [C \underline{\phi}(t) + (1-C)\bar{\phi}(t) + \rho(q)] \pi^*(q,t) f(t) g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t. \label{case3-opt3}
\end{align}
Based on inequalities \eqref{case3-opt1}, \eqref{case3-opt2} and \eqref{case3-opt3}, we have
\begin{align*}
REV (\pi, p) &=\int_{t_1}^{t_2}\int_{q\in Q} [C \underline{\phi}(t) + (1-C)\bar{\phi}(t) + \rho(q)] \pi(q,t) f(t) g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t - C\cdot u(t_1) - (1-C)u(t_2) \\
& \geq\int_{t_1}^{t_2}\int_{q\in Q} [C \underline{\phi}(t) + (1-C)\bar{\phi}(t) + \rho(q)] \pi^*(q,t) f(t) g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t - C\cdot u^*(t_1) - (1-C)u^*(t_2) \\
& = REV^*(\pi^*, p^*).
\end{align*}
So the threshold mechanism $(\pi(q,t),p(t))$ is an optimal mechanism. Now apply the ironing procedure to $\widetilde{\phi}(t) = C \underline{\phi}(t) + (1-C) \bar{\phi}(t) = t+\frac{F(t)-C}{f(t)}$ to obtain $\widetilde{\phi}^+(t)$, which is non-decreasing in $t$. For the same reason as in the ironing section, the recommendation policy
\begin{gather*}
\pi(q,t)=
\begin{cases}
1 & \text{ if } \rho(q)\ge -\widetilde{\phi}^+(t)\\
0 & \text{ otherwise}
\end{cases}
\end{gather*}
with the payment function
\begin{gather*}
p(t) = \int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d}q - \int_{t_1}^{t} P(x)\,\mathrm{d}x
\end{gather*}
weakly increases the revenue. So the optimal mechanism is this new $(\pi(q,t),p(t))$ defined with $\widetilde{\phi}^+(t)$.

\textbf{Feasibility:} We now prove that this mechanism $(\pi(q,t),p(t))$ is also feasible. Since $\widetilde{\phi}^+(t)$ is non-decreasing in $t$, a larger $t$ makes $P(t) = \int_{q:\, \rho(q) \geq - \widetilde{\phi}^+(t) } g(q) v_1(q) \,\mathrm{d}q$ integrate over a larger region of terms $g(q)v_1(q)$, which are non-negative for every $q$. So $P(t)$ is non-decreasing, and constraint \ref{eq:signal-monotonicity} is satisfied. Since $u(t_1) = 0$,
\begin{align*}
u(t)= \int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d}q -p(t) = \int_{t_1}^{t} P(x)\,\mathrm{d}x = u(t_1) + \int_{t_1}^{t} P(x)\,\mathrm{d}x.
\end{align*}
Thus, constraint \ref{eq:buyer-utility-identify2} is satisfied.
\begin{align*}
u(t_1) &= \int_{q \in Q} g(q) \pi(q,t_1)v(q,t_1) \,\mathrm{d}q -p(t_1) = \int_{t_1}^{t_1} P(x) \,\mathrm{d}x = 0, \\
u(t_2) &= \int_{q \in Q} g(q) \pi(q,t_2)v(q,t_2) \,\mathrm{d}q -p(t_2) = \int_{t_1}^{t_2} P(x) \,\mathrm{d}x = \int_{t_1}^{t_2} \int_{q:\, \rho(q) \geq -\widetilde{\phi}^+(t) } g(q) v_1(q) \,\mathrm{d}q \mathrm{d}t = v(t_2).
\end{align*}
Thus, constraint \ref{eq:ir-t2} is satisfied.

It remains to prove payment non-negativity. First, we show that $\widetilde{\phi}^+(t) \leq t$ for all $t$ with $F(t)\leq C$. Suppose there are $k$ ironing intervals, where ironing interval $i$ spans $[t^i_{s},t^i_{e}]$, so that $\widetilde{\phi}^+(t^i_{s}) = \widetilde{\phi}^+(t) = \widetilde{\phi}^+(t^i_{e})$ for all $t \in [t^i_{s},t^i_{e}]$. For any $t$ not in any interval $[t^i_{s},t^i_{e}]$ and with $F(t)\leq C$, we have $\widetilde{\phi}^+(t) = \widetilde{\phi}(t) = t+\frac{F(t)-C}{f(t)} \leq t$. For any $t$ in some interval $[t^i_{s},t^i_{e}]$ and with $F(t)\leq C$, we have $\widetilde{\phi}^+(t) = l(F(t)) = l(F(t^i_{s})) = L'(F(t^i_{s})) \leq H'(F(t^i_{s})) = h(F(t^i_{s})) = \widetilde{\phi}(t^i_{s}) = t^i_{s}+\frac{F(t^i_{s})-C}{f(t^i_{s})} \leq t^i_{s} \leq t$. Thus $\widetilde{\phi}^+(t) \leq t$ whenever $F(t)\leq C$.

Next, we show that $\widetilde{\phi}^+(t) \geq t$ for all $t$ with $F(t) > C$. Again, for all $t \in [t^i_{s},t^i_{e}]$ we have $\widetilde{\phi}^+(t^i_{s}) = \widetilde{\phi}^+(t) = \widetilde{\phi}^+(t^i_{e})$. For any $t$ not in any interval $[t^i_{s},t^i_{e}]$ and with $F(t)> C$, we have $\widetilde{\phi}^+(t) = \widetilde{\phi}(t) = t+\frac{F(t)-C}{f(t)} \geq t$. For any $t$ in some interval $[t^i_{s},t^i_{e}]$ and with $F(t)> C$, we have $\widetilde{\phi}^+(t) = l(F(t)) = l(F(t^i_{e})) = L'(F(t^i_{e})) \geq H'(F(t^i_{e})) = h(F(t^i_{e})) = \widetilde{\phi}(t^i_{e}) = t^i_{e}+\frac{F(t^i_{e})-C}{f(t^i_{e})} \geq t^i_{e} \geq t$. Thus $\widetilde{\phi}^+(t) \geq t$ whenever $F(t) > C$.

With these bounds, the payment is non-negative at the endpoints:
\begin{align*}
p(t_1) &= -u(t_1) + \int_{q:\, \rho(q) \geq -\widetilde{\phi}^+(t_1) } g(q) v(q,t_1) \,\mathrm{d}q \\
&= \int_{q:\, \rho(q) \geq -\widetilde{\phi}^+(t_1) } g(q) v_1(q)[t_1+\rho(q)] \,\mathrm{d}q \\
&\geq \int_{q:\, \rho(q) \geq -\widetilde{\phi}^+(t_1) } g(q) v_1(q)[t_1 - \widetilde{\phi}^+(t_1)] \,\mathrm{d}q \\
&\geq \int_{q:\, \rho(q) \geq -\widetilde{\phi}^+(t_1) } g(q) v_1(q)[t_1 - t_1] \,\mathrm{d}q \\
&= 0,
\end{align*}
where the last inequality holds because $\widetilde{\phi}^+(t) \leq t$ whenever $F(t)\leq C$, and $F(t_1) = 0 \leq C$. Similarly, utilizing $u(t_2) = v(t_2) = \int_{q \in Q} g(q) v_1(q)[t_2 + \rho(q)] \,\mathrm{d}q$, we have
\begin{align*}
p(t_2) &= -u(t_2) + \int_{q:\, \rho(q) \geq -\widetilde{\phi}^+(t_2) } g(q) v(q,t_2) \,\mathrm{d}q \\
&= - \int_{q \in Q} g(q) v_1(q)[t_2 + \rho(q)] \,\mathrm{d}q + \int_{q:\, \rho(q) \geq -\widetilde{\phi}^+(t_2) } g(q) v_1(q)[t_2+\rho(q)] \,\mathrm{d}q \\
&= - \int_{q:\, \rho(q) \leq -\widetilde{\phi}^+(t_2) } g(q) v_1(q)[t_2+\rho(q)] \,\mathrm{d}q \\
&\geq - \int_{q:\, \rho(q) \leq -\widetilde{\phi}^+(t_2) } g(q) v_1(q)[t_2-\widetilde{\phi}^+(t_2)] \,\mathrm{d}q \\
&\geq - \int_{q:\, \rho(q) \leq -\widetilde{\phi}^+(t_2) } g(q) v_1(q)[t_2-t_2] \,\mathrm{d}q \\
&= 0,
\end{align*}
where the last inequality holds because $\widetilde{\phi}^+(t) \geq t$ whenever $F(t) > C$, and $F(t_2) = 1 > C$.
Since
\begin{align*}
p(t)=&\int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d}q -\int_{t_1}^{t} P(x) \,\mathrm{d}x\\
=&\int_{q:\rho(q)\ge -\widetilde{\phi}^+(t)}g(q)v(q, t)\,\mathrm{d}q-\int_{t_1}^t \int_{q \in Q} g(q) \pi(q,x) v_1(q)\,\mathrm{d}q\mathrm{d}x\\
=&\int_{q:\rho(q)\ge -\widetilde{\phi}^+(t)}g(q)v(q, t)\,\mathrm{d}q-\int_{t_1}^t \int_{q:\rho(q)\ge -\widetilde{\phi}^+(x)} g(q) v_1(q)\,\mathrm{d}q\mathrm{d}x,
\end{align*}
the differential of $p(t)$ is
\begin{align*}
\mathrm{d}p(t)=&\left[\int_{q:\rho(q)\ge -\widetilde{\phi}^+(t)}g(q)\frac{\partial v(q,t)}{\partial t}\,\mathrm{d}q\right]\mathrm{d}t+\left[\frac{\mathrm{d}}{\mathrm{d}\widetilde{\phi}^+(t)}\int_{q:\rho(q)\ge -\widetilde{\phi}^+(t)}g(q)v(q, t)\,\mathrm{d}q\right]\mathrm{d}\widetilde{\phi}^+(t)\\
&-\left[\int_{q:\rho(q)\ge -\widetilde{\phi}^+(t)} g(q) v_1(q)\,\mathrm{d}q\right]\mathrm{d}t\\
=&\left[\frac{\mathrm{d}}{\mathrm{d}\widetilde{\phi}^+(t)}\int_{q:\rho(q)\ge -\widetilde{\phi}^+(t)}g(q)v(q, t)\,\mathrm{d}q\right]\mathrm{d}\widetilde{\phi}^+(t)\\
=& \int_{q:\, -\widetilde{\phi}^+(t) \ge \rho(q)\ge -\widetilde{\phi}^+(t)-\mathrm{d}\widetilde{\phi}^+(t)}g(q)v_1(q)[t+\rho(q)]\,\mathrm{d}q \\
\geq & \int_{q:\, -\widetilde{\phi}^+(t) \ge \rho(q)\ge -\widetilde{\phi}^+(t)-\mathrm{d}\widetilde{\phi}^+(t)}g(q)v_1(q)[t -\widetilde{\phi}^+(t) ]\,\mathrm{d}q \\
\geq & \; 0,
\end{align*}
where the last inequality is due to our choice of $t$ with $F(t)\leq C$, so that $\widetilde{\phi}^+(t) \leq t$. Therefore, when $F(t)\le C$, $\mathrm{d}p(t)$ is non-negative and $p(t)$ is a non-decreasing function of $t$.

For $t$ with $F(t) > C$, we have $\widetilde{\phi}^+(t) \geq t$, and thus
\begin{align*}
\mathrm{d}p(t) =&\int_{q:\, -\widetilde{\phi}^+(t) \ge \rho(q)\ge -\widetilde{\phi}^+(t)-\mathrm{d}\widetilde{\phi}^+(t)}g(q) v_1(q)[t + \rho(q)]\,\mathrm{d}q \\
\leq&\int_{q:\, -\widetilde{\phi}^+(t) \ge \rho(q)\ge -\widetilde{\phi}^+(t)-\mathrm{d}\widetilde{\phi}^+(t)}g(q) v_1(q)[t - \widetilde{\phi}^+(t)]\,\mathrm{d}q \\
\leq& \; 0.
\end{align*}
Therefore, when $F(t) > C$, $\mathrm{d}p(t)$ is non-positive and $p(t)$ is a non-increasing function of $t$. We also have $p(t_1)\geq 0$ and $p(t_2)\geq 0$. Thus, $p(t)$ is non-negative and non-decreasing for all $t$ with $F(t) \leq C$, and non-negative and non-increasing for all $t$ with $F(t) > C$. Thus, constraint \ref{eq:non-negativity} is satisfied. Now all constraints in Lemma \ref{lem:feasible-M} are satisfied, so this mechanism is both optimal and feasible.
\end{proof}

\section{Model: Selling Threshold Tests}

Motivated by product quality testing, we consider the following information selling problem between an information seller and an information buyer. The buyer is modeled as a potential purchaser of some goods, e.g., a house or a used car, and is deciding between \emph{two actions} --- purchase (action $1$) or not purchase (action $0$). The buyer has value $v(q,t)$ for the goods, where $q \in \mathcal{Q}$ is the goods' quality and $t\in \mathcal{T}$ is the buyer's private type regarding the goods. Different from previous work on information selling \citep{chen2020selling}, we impose a structural property on the buyer's value function $v(q,t)$ that naturally captures the product quality testing applications of our interest. That is, we assume that $v(q,t)$ is an increasing function of both $q$ and $t$ \todo{the assumption of monotonicity in $t$ may not be necessary, and it would be nice if it could be removed, but let's assume it for now}.
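To make the setup concrete, the following is a minimal numerical sketch in Python (our illustration, not part of the formal development; the instance $v(q,t)=qt-2$, the uniform quality prior, and the particular cutoff are all assumptions chosen for illustration). It simulates a buyer of type $t$ who observes the outcome of a pass/fail quality test on $q$ --- formalized as a ``threshold testing'' scheme below --- and takes the purchase action iff his conditional expected value is non-negative:
\begin{verbatim}
# Minimal sketch (illustrative assumptions): a buyer of type t sees a
# pass/fail test on quality q and buys iff E[v(Q,t) | outcome] >= 0.
import numpy as np

def expected_value(v, g, q_grid, mask):
    """E[v(Q) | Q in the masked region], with prior density g on q_grid."""
    w = g(q_grid) * mask
    if w.sum() == 0.0:
        return 0.0
    return float((w * v(q_grid)).sum() / w.sum())

q_grid = np.linspace(0.0, 1.0, 10_001)
g = lambda q: np.ones_like(q)        # uniform quality prior on [0, 1]
t = 2.8                              # an illustrative buyer type
v = lambda q: q * t - 2.0            # v(q, t) = q * t - alpha with alpha = 2

theta = 2.0 / t                      # test outcome: "does q exceed theta?"
passed = q_grid >= theta
# Obedience: buy after "pass", do not buy after "fail".
print("E[v | pass] =", expected_value(v, g, q_grid, passed))    # ~ +0.40
print("E[v | fail] =", expected_value(v, g, q_grid, ~passed))   # ~ -1.00
\end{verbatim}
With this cutoff, the recommendation implied by the test outcome is obedient: the conditional expected value is non-negative exactly after a pass.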
We assume that both the product quality and the buyer type are random, and let $Q,T$ denote the corresponding random variables for $q, t$, respectively. Let $G$ be the distribution of $Q$ and $F$ be the distribution of $T$. Note that $Q,T$ may be correlated. As a convenient notation, let $\bar{v}_t = \qopname\relax n{\mathbf{E}}_{Q|t} v(Q,t)$ denote the expected value of a buyer of type $t$, where the expectation is over the randomness of $Q$ conditioned on $t$.

There is an exogenous and fixed threshold value $\alpha$. The buyer would purchase the goods only when the expected goods value, conditional on any information the buyer has, is at least $\alpha$. Formally, a buyer of type $t$ always has utility $0$ for action $0$ of not purchasing, and utility $\qopname\relax n{\mathbf{E}}_{q} [v(q,t) ] - \alpha$ for purchasing the goods. After normalization, we can without loss of generality assume that $\alpha = 0$. Thus, the buyer will take action $1$ if and only if $\qopname\relax n{\mathbf{E}}_{q} [v(q,t) ] \geq 0$.

Note that the above setting can be viewed as a special case of the general information selling setting studied in \cite{Babaioff12,chen2020selling}, with two buyer actions and a special structure on the buyer value function. Therefore, the results proved there also hold in our setting. However, the goal of this study is to derive more structural properties and more realistic mechanisms for our special setup, motivated by the ubiquitous application of product quality testing.

We first consider the case where $Q,T$ are independent. By \cite{chen2020selling}, we know that the optimal mechanism is a ``consulting mechanism'' which, upon seeing the product quality, charges the buyer of type $t$ a payment $p_t$ and meanwhile makes an incentive-compatible recommendation based on a signaling scheme $\pi_t(q,a)$, where action $a \in \{ 0, 1 \}$. We can thus formulate the problem as the following convex program (CP).
\begin{lp}\label{lp:OptScheme}
\maxi{ \sum_{t} f(t) \cdot p_t}
\mbox{subject to}
\qcon{ \sum_{q} \pi_t(q,1) \cdot v(q,t) \geq 0 }{t}
\qcon{ \sum_{q} \pi_t(q,0) \cdot v(q,t) \leq 0 }{t}
\con{ \sum_{q} \pi_t(q,1) \cdot v(q,t)- p_t \geq}
\qcon{ \quad \qopname\relax n{max} \{ \sum_{q} \pi_{t'}(q,1) \cdot v(q,t) , 0 \} + \qopname\relax n{max} \{ \sum_{q} \pi_{t'}(q,0) \cdot v(q,t), 0 \} - p_{t'} }{t, t'}
\qcon{ \sum_{q} \pi_t(q,1) \cdot v(q,t) - p_t \geq \qopname\relax n{max}\{ 0, \bar{v}_t \} }{t}
\qcon{ \pi_t(q,1) + \pi_t(q,0) = g(q) }{q, t}
\qcon{\pi_t(q,0), \pi_t(q,1) \geq 0}{q,t}
\end{lp}
where the first two are obedience constraints, enforcing that the recommendation is obedient (recall that $0$ is the utility of taking action $0$). The third constraint is the incentive compatibility constraint, enforcing that a buyer of type $t$ should not misreport type $t'$. The fourth constraint describes individual rationality. That is, participating in the information selling mechanism should lead to utility that is at least the utility of non-participation, $\qopname\relax n{max}\{ 0, \bar{v}_t \}$. The last two constraints are feasibility constraints of the information selling scheme. One special type of scheme of interest to us is the threshold test scheme.
\begin{definition}[Threshold Testing]
An information selling scheme $\{ \pi_t \}_{t\in \mathcal{T}}$ is a \emph{threshold testing} scheme if for any buyer type $t$, there exists a threshold $\theta_t$ such that: (1) $\pi_t(q,1) = g(q)$ for any $q > \theta_t$; (2) $\pi_t(q,1) = 0$ for any $q < \theta_t$.
A threshold testing scheme is \emph{monotone} if $\theta_t \geq \theta_{t'}$ whenever $t \geq t'$.
\end{definition}
That is, a threshold test will always recommend action $1$ (purchase) whenever the quality passes the threshold $\theta_t$, and recommend action $0$ (not purchasing) if the quality does not pass. Note that if the quality $q = \theta_t$, the seller is allowed to recommend action $0$ or $1$ randomly. The threshold testing scheme is monotone if a higher type always has a higher threshold. Such threshold tests are ubiquitous in reality, e.g., used-car inspections, car smog checks, house inspections, medical tests, production inspections, etc. Our first main conjecture is as follows.
\begin{conjecture}
There always exists an optimal information selling mechanism that is a monotone threshold test.
\end{conjecture}

\subsection{Analysis for Discrete $Q,T$}
We start by simplifying CP \ref{lp:OptScheme} with a few claims.
\begin{claim}
There always exists an optimal solution to CP \ref{lp:OptScheme} such that $p_t \geq 0$ for all $t$. Therefore, the first constraint is dominated by the fourth constraint in this optimal solution.
\end{claim}
\begin{proof}
Suppose $p_t < 0$ for some $t$ in some optimal solution. Then changing $p_t$ to $0$ (a higher payment) and changing $\pi_t$ to reveal no information (less information) weakly increases revenue. Moreover, this does not violate any incentive compatibility constraint either, since any other type $t'$ deviating to $t$ now pays more but receives less information.
\end{proof}
\begin{claim}
The third constraint can be reduced to $\sum_{q} \pi_t(q,1) \cdot v(q,t) - p_t \geq \sum_{q} \pi_{t'}(q,1) \cdot v(q,t) - p_{t'}$.
\end{claim}
\begin{proof}
Note that if both $\qopname\relax n{max} \{ \sum_{q} \pi_{t'}(q,1) \cdot v(q,t), 0 \}$ and $\qopname\relax n{max} \{ \sum_{q} \pi_{t'}(q,0) \cdot v(q,t) , 0 \}$ achieve the maximum at the non-zero term, then the third constraint is implied by the fifth constraint. It can also be shown that it can never be the case that the first maximum is achieved at zero while the second is achieved at the non-zero term. When both maxima are zero, the third constraint is dominated by the fourth. So the only remaining situation is the one stated in the claim.
\end{proof}
Utilizing the above two claims, CP \eqref{lp:OptScheme} can be reduced to the following LP:
\begin{lp}\label{lp:OptScheme-simple}
\maxi{ \sum_{t} f(t) \cdot p_t}
\mbox{subject to}
\qcon{ \sum_{q} \pi_t(q,0) \cdot v(q,t) \leq 0 }{t}
\qcon{\sum_{q} \pi_t(q,1) \cdot v(q,t) - p_t \geq \sum_{q} \pi_{t'}(q,1) \cdot v(q,t) - p_{t'} }{t, t'}
\qcon{ \sum_{q} \pi_t(q,1) \cdot v(q,t) - p_t \geq \qopname\relax n{max}\{ 0, \bar{v}_t \} }{t}
\qcon{ \pi_t(q,1) + \pi_t(q,0) = g(q) }{q, t}
\qcon{\pi_t(q,0), \pi_t(q,1) \geq 0}{q,t}
\end{lp}
Slightly re-writing this LP:
\begin{lp}\label{lp:OptScheme-variant}
\maxi{ \sum_{t} f(t) \cdot p_t}
\mbox{subject to}
\qcon{ \sum_{q} \pi_t(q,0) \cdot v(q,t) \leq 0 }{t}
\qcon{\sum_{q} \big[ \pi_{t'}(q,1) - \pi_{t}(q,1)\big] \cdot v(q,t) + p_t - p_{t'} \leq 0 }{t, t'}
\qcon{ p_t - \sum_{q} \pi_t(q,1) \cdot v(q,t) \leq - \qopname\relax n{max}\{ 0, \bar{v}_t \} }{t}
\qcon{ \pi_t(q,1) + \pi_t(q,0) = g(q) }{q, t}
\qcon{\pi_t(q,0), \pi_t(q,1) \geq 0}{q,t}
\end{lp}
Let $y_t, \beta_{t,t'}, \lambda_t, w_{q,t}$ be the dual variables corresponding to the first four constraints; we then obtain the following dual program.
\begin{lp}\label{lp:OptScheme-dual}
\mini{ \sum_{t} \sum_{q} g(q) \cdot w_{q,t} - \sum_{t} \lambda_t \cdot \qopname\relax n{max} \{ 0, \bar{v}(t) \} }
\mbox{subject to}
\qcon{ - v(q,t) \cdot \big[ \lambda_t + \sum_{t'} \beta_{t,t'} \big] + \sum_{t'} v(q,t') \cdot \beta_{t',t} + w_{q,t} \geq 0}{q, t}
\qcon{ v(q,t) \cdot y_t + w_{q,t} \geq 0}{q, t}
\qcon{ \sum_{t'} \beta_{t,t'} - \sum_{t'} \beta_{t',t} + \lambda_t = f(t) }{t}
\qcon{y, \beta, \lambda \geq 0}{q,t}
\end{lp}
\begin{claim}
LP \eqref{lp:OptScheme-dual} always has an optimal solution such that $y_t = \infty$ wh
\end{claim}
\todo{Want to argue that the above LP and its dual have the desired property as in Conjecture 1. One idea is to prove that for any fixed $t$, there is some $\theta_t$ such that the first constraint in the above dual is tight when $q > \theta_t$ and the second constraint is tight when $q < \theta_t$.}

\subsection{Myersonian Approach for Continuous $Q,T$}
Here we consider the case where both $t\in[0,\infty)$ and $q\in[0,\infty)$ are continuous. Assume that $v(q,0)=0$ for all $q$, and that $q\sim G(q)$ and $t\sim F(t)$. Incentive compatibility implies:
\begin{gather}
\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t) \right]-p_t\ge \qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t'}(q,1)v(q,t) \right]-p_{t'},\label{eq:ic1}\\
\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t'}(q,1)v(q,t') \right]-p_{t'}\ge \qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t') \right]-p_{t}.\label{eq:ic2}
\end{gather}
Thus,
\begin{gather}
\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \left(\pi_{t'}(q,1)-\pi_{t}(q,1)\right)v(q,t) \right]\le p_{t'}-p_t\le \qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \left(\pi_{t'}(q,1)-\pi_{t}(q,1)\right)v(q,t') \right],\nonumber\\
\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \left(\pi_{t'}(q,1)-\pi_{t}(q,1)\right)\left(v(q,t')-v(q,t)\right) \right]\ge 0.\label{eq:allocation_monotonicity}
\end{gather}
Equation \eqref{eq:allocation_monotonicity} is called allocation monotonicity in the standard auction setting. Define
\begin{gather*}
u(t)= \qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t) \right]-p_t.
\end{gather*}
With Equations \eqref{eq:ic1} and \eqref{eq:ic2}, we also get:
\begin{align*}
u(t)&\ge \qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t'}(q,1)v(q,t') \right]-p_{t'}+\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t'}(q,1)\left( v(q,t)-v(q,t') \right) \right]\\
&=u(t')+\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t'}(q,1)\left( v(q,t)-v(q,t') \right) \right].
\end{align*}
Therefore,
\begin{gather*}
\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t'}(q,1)\left( v(q,t)-v(q,t') \right) \right]\le u(t)-u(t')\le \qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)\left( v(q,t)-v(q,t') \right) \right].
\end{gather*}
Assume $t'<t$, divide the above inequality by $t-t'$, and let $t'\to t$:
\begin{gather*}
\odv{u(t)}{t}=\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)\pdv{v(q,t)}{t} \right].
\end{gather*}
\begin{align*}
REV&=\int_{0}^{+\infty}f(t)p_t\,\mathrm{d}t\\
&=\int_{0}^{+\infty}f(t)\left(\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t)\right]-u(t) \right)\,\mathrm{d}t\\
&=\int_{0}^{+\infty}f(t)\left(\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t)\right]-\int_{0}^{t}\odv{u(s)}{s}\,\mathrm{d}s-u(0) \right)\,\mathrm{d}t\\
&=\int_{0}^{+\infty}f(t)\left(\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t) \right]\right)\,\mathrm{d}t-\int_{0}^{+\infty}\int_{s}^{+\infty}f(t)\odv{u(s)}{s}\,\mathrm{d}t\,\mathrm{d}s-u(0)\\
&=\int_{0}^{+\infty}f(t)\left(\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t) \right]\right)\,\mathrm{d}t-\int_{0}^{+\infty}(1-F(s))\odv{u(s)}{s}\,\mathrm{d}s-u(0)\\
&=\qopname\relax n{\mathbf{E}}_{q\sim G}\left[\int_{0}^{+\infty}f(t)\pi_{t}(q,1)v(q,t)\,\mathrm{d}t\right]-\int_{0}^{+\infty}(1-F(t))\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \pi_{t}(q,1)\pdv{v(q,t)}{t} \right]\,\mathrm{d}t-u(0)\\
&=\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \int_{0}^{+\infty}f(t)\pi_t(q,1)\left(v(q,t)-\pdv{v(q,t)}{t}\frac{1-F(t)}{f(t)}\right)\,\mathrm{d}t \right]-u(0)
\end{align*}
As $\odv{u(t)}{t}\ge0$, we can set $u(0)=0$. Let
\begin{gather*}
\varphi(t|q)=v(q,t)-\pdv{v(q,t)}{t}\frac{1-F(t)}{f(t)}.
\end{gather*}
We have
\begin{gather*}
REV=\qopname\relax n{\mathbf{E}}_{q\sim G}\left[ \int_{0}^{+\infty}f(t)\pi_t(q,1)\varphi(t|q)\,\mathrm{d}t \right].
\end{gather*}
Similar to the Myerson auction, there is a threshold $\theta_q$ for each $q$. We may need the ``ironing'' trick; define
\begin{gather*}
R(\eta|q)=\eta\cdot v\big(q, F^{-1}(1-\eta)\big),
\end{gather*}
where $\eta=1-F(t)$, so that $\pdv{R(\eta|q)}{\eta}=\varphi(t|q)$. By Equation \eqref{eq:allocation_monotonicity}, any threshold testing mechanism must satisfy that $\theta_t$ is decreasing in $t$. If the optimal mechanism is a threshold testing mechanism, then $\theta_q$ must also be a decreasing function of $q$. With the regularity assumption, we have
\begin{gather*}
v(q,\theta_q)-\pdv{v(q,\theta_q)}{\theta_q}\frac{1-F(\theta_q)}{f(\theta_q)}=0.
\end{gather*}
Therefore, the optimal mechanism is a threshold testing mechanism if and only if
\begin{gather*}
\odv{\theta_q}{q}\le 0,
\end{gather*}
where, by implicit differentiation of the previous equation,
\begin{gather*}
\odv{\theta_q}{q}=-\frac{\pdv{v(q,\theta_q)}{q}-\pdv{v(q,\theta_q)}{q,\theta_q}\frac{1-F(\theta_q)}{f(\theta_q)}}{\pdv{v(q,\theta_q)}{\theta_q}-\pdv[2]{v(q,\theta_q)}{\theta_q}\frac{1-F(\theta_q)}{f(\theta_q)}-\pdv{v(q,\theta_q)}{\theta_q}\cdot\frac{-f^2(\theta_q)-(1-F(\theta_q))f'(\theta_q)}{f^2(\theta_q)}}.
\end{gather*}
\input{optimal}
\section{Other Directions to Pursue}
One reasonable simplification is to assume $v(q,t) = q \cdot t$ (this is the click-through-rate setting, where $q$ is the CTR and $t$ is the advertiser's value per click).
\begin{question}
The above setting can be generalized to the case where there are multiple threshold-test principals in the market; one can then examine the equilibrium prices of threshold tests.
\end{question}
\begin{question}
The above setting can be generalized to the case where there is also a product seller in the market who knows $q$. However, he is not trusted by the purchasing agent, so he would like to turn to the principal to choose a threshold test for his product. In this case, what is the principal's optimal mechanism? What if the product seller also does not know the quality of his product, but may have a better distribution, i.e., an additional signal about $q$?
\end{question}
\begin{question}
Threshold tests are common in practice, e.g., used-car inspections, house inspections, and standardized exams such as the GRE and SAT. In an exam setting, the agent has a slightly different utility function: he gets utility $u(\theta)$ for passing a test with threshold $\theta$ and utility $0$ for not passing the test. What is the optimal threshold test pricing for such exams?
\end{question}
\bibliographystyle{ACM-Reference-Format}

\section{The Optimal Mechanism}
In this section, we present the characterization of the optimal pricing mechanism. Mathematically, we derive a closed-form optimal solution to the functional optimization problem \eqref{lp:opt-formulation}. The optimal mechanism we obtain turns out to belong to the following class of \emph{threshold mechanisms}.
\begin{definition}[\textbf{Threshold Mechanisms}]
A mechanism $(\pi, p)$ is called a threshold mechanism if it only uses threshold experiments. That is, there exists a function $\theta(t)$ such that for any $t \in [t_1, t_2]$,
\begin{gather*}
\pi(q, t)=
\begin{cases}
1 & \mathrm{if~} \beta(q)\ge \theta(t)\\
0 & \mathrm{otherwise}
\end{cases}.
\end{gather*}
In this case, $\pi(q,t)$ is fully described by the \emph{threshold function} $\theta(t)$.
\label{def:threshold}
\end{definition}
Note that the term ``threshold'' is only a property of the experiments and does not pose any constraint on the payment function $p(t)$. To formally present our mechanism, we will need the following notions of \emph{lower}, \emph{upper} and \emph{mixed} virtual value functions.
\begin{definition}[\textbf{Lower/Upper/Mixed Virtual Value Function}]
For any type $t$ with PDF $f(t)$ and CDF $F(t)$, the function $\underline{\phi}(t) = t - \frac{1-F(t)}{f(t)}$ is called the \emph{lower virtual value function} and $\bar{\phi}(t) = t + \frac{F(t)}{f(t)}$ is called the \emph{upper virtual value function}. Moreover, for any $c\in [0,1]$, $\phi_c(t) = c\underline{\phi}(t) + (1-c) \bar{\phi}(t)$ is called a \emph{mixed virtual value function}. A virtual value function is \emph{regular} if it is monotone non-decreasing in $t$.
\end{definition}
The lower virtual value function $\underline{\phi}(t)$ is precisely the virtual value function commonly used in classic mechanism design settings \citep{myerson81}. We remark that while the upper and mixed virtual value functions were not formally defined before, they have implicitly shown up in previous works and typically arise when the IR constraints bind at the largest type (e.g., \cite{advice}). However, the specific formulation of the information selling problem allows us to characterize the optimal mechanism for much more general buyer utility functions (see the more detailed comparison in the related work).

\textbf{Ironing.} When a virtual value function is irregular, we will need to apply the so-called ``ironing'' trick to make it monotone non-decreasing in $t$. \cite{myerson81} developed a procedure for ironing the lower virtual value function $\underline{\phi}(t)$. This procedure readily generalizes to iron any function of the buyer type $t$, in particular the three types of virtual value functions defined above.
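Before stating the ironed notation formally, the following minimal numerical sketch in Python (our illustration; the grid-based construction and the uniform example are assumptions of the sketch, not part of the formal development) shows the standard ironing computation: evaluate $h(w)=\phi(F^{-1}(w))$ on a grid, integrate to $H$, take the lower convex envelope $L$ of $H$ on $[0,1]$, and read the ironed virtual value off its slope, $\phi^+(t)=L'(F(t))$:
\begin{verbatim}
# Grid-based sketch of Myerson-style ironing (illustrative assumptions).
import numpy as np

def iron(phi, F_inv, n=10_000):
    w = np.linspace(0.0, 1.0, n + 1)
    h = phi(F_inv(w))
    # H(w) = int_0^w h(r) dr, via the cumulative trapezoidal rule
    H = np.concatenate([[0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(w))])
    # Lower convex envelope of the points (w_i, H_i), monotone-chain style
    hull = []
    for i in range(len(w)):
        while len(hull) >= 2:
            (w1, H1), (w2, H2) = hull[-2], hull[-1]
            # Drop hull[-1] if it lies on or above the chord to the new point
            if (H2 - H1) * (w[i] - w1) >= (H[i] - H1) * (w2 - w1):
                hull.pop()
            else:
                break
        hull.append((w[i], H[i]))
    hw = np.array([p[0] for p in hull])
    slopes = np.diff([p[1] for p in hull]) / np.diff(hw)   # = l(w) = L'(w)

    def phi_plus(t, F):
        """Ironed virtual value: slope of the hull segment containing F(t)."""
        j = np.clip(np.searchsorted(hw, F(t), side="right") - 1,
                    0, len(slopes) - 1)
        return slopes[j]
    return phi_plus

# Example: T uniform on [2, 3], phi(t) = 2t - 3 is already non-decreasing,
# so ironing should leave it (numerically) unchanged.
phi_plus = iron(phi=lambda t: 2 * t - 3, F_inv=lambda w: 2 + w)
print(phi_plus(2.5, F=lambda t: t - 2))   # ~ 2.0 = phi(2.5)
\end{verbatim}
When $\phi$ is irregular, the envelope $L$ is flat exactly over the ironed quantile intervals, so the returned $\phi^+$ is constant there --- this is what restores monotonicity of the induced allocation.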
For any virtual value function $\phi(t)$ (upper, lower or mixed), let $\phi^{+}(t)$ denote the ironed version of $\phi(t)$ obtained via the standard ironing procedure of \cite{myerson81} (for completeness, we give a formal description of this ironing procedure in Appendix \ref{append:ironing-procedure}).\footnote{For techniques to iron a general function, we refer the reader to a recent work by \cite{toikka2011ironing}.} If a virtual value function $\phi(t)$ is already non-decreasing, it remains the same after ironing, i.e., $\phi^+(t) = \phi(t)$ for all $t$. With ${\phi}_c(t) = c\underline{\phi}(t) + (1-c) \bar{\phi}(t)$, the following useful properties of the ironed mixed virtual value functions will be needed for proving our main result (and may also be of independent interest). Their proofs are technical and are deferred to Appendix \ref{appendix:virtual_value_order_proof}.
\begin{lemma} [\textbf{Useful Properties of Ironed Mixed Virtual Values}] \hspace{22mm}
\begin{enumerate}
\item For any $0 \leq c < c' \leq 1$, ${\phi}_c^+ (t) \geq {\phi}_{c'}^+(t)$ for any $t$;
\item For any $c \in [0 ,1]$, let $t_c$ be the buyer type such that $F(t_c) = c$. Then we have $\phi_c^+(t)\leq t, \forall t\leq t_c$ and $\phi_c^+(t)\geq t, \forall t\geq t_c$. This also implies $\underline{\phi}^{+}(t)< t< \bar{\phi}^{+}(t), \forall t \in (t_1, t_2)$.
\end{enumerate}
\label{lem:virtual_value_order}
\end{lemma}
Notably, the second property above also implies that $\phi_c^+(t_c) = t_c$ always holds. We are now ready to state the optimal mechanism, after introducing the following two quantities:
\begin{eqnarray}\label{eq:upper-lower-bound1}
V_L&=&\qopname\relax n{max} \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \beta(q) \geq - \underline{\phi}^+(x)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x, \\
\label{eq:upper-lower-bound2}
V_H &=&\qopname\relax n{max} \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \beta(q) \geq - \bar{\phi}^+(x)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x,
\end{eqnarray}
where $\bar{\phi}^+(x)$ and $\underline{\phi}^+(x)$ are the ironed upper and lower virtual value functions, respectively. Note that Lemma \ref{lem:virtual_value_order} implies $- \underline{\phi}^+(x) \geq - \bar{\phi}^+(x)$, and consequently $V_L \leq V_H$, since $g(q) \alpha(q)$ is always non-negative and $V_L$ integrates over a smaller region. Our main result is then summarized in the following theorem.
\begin{theorem}[\textbf{Characterization of an Optimal Mechanism}] \quad
\begin{enumerate}
\item If $v(t_2) \leq V_L $, the threshold mechanism with threshold function $\theta^*(t) = -\underline{\phi}^+(t) $ and the following payment function represents an optimal mechanism:
\begin{gather*}
p^*(t) = \int_{q\in Q} \pi^*(q,t)g(q) v(q,t) \,\mathrm{d} q - \int_{t_1}^{t} \int_{q \in Q} \pi^*(q, x) g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x,
\end{gather*}
where $\pi^*$ is determined by $\theta^*(t)$ as in Definition \ref{def:threshold}. Moreover, $p^*(t)$ is monotone non-decreasing for $t\in[t_1, t_2]$.
\item If $v(t_2) \geq V_H $, the threshold mechanism with threshold function $\theta^*(t) = -\bar{\phi}^+(t) $ and the following payment function represents an optimal mechanism:
\begin{gather*}
p^*(t) = \int_{q\in Q} \pi^*(q,t)g(q) v(q,t) \,\mathrm{d} q + \int_{t}^{t_2} \int_{q \in Q} \pi^*(q, x) g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x - v(t_2),
\end{gather*}
where $\pi^*$ is determined by $\theta^*(t)$ as in Definition \ref{def:threshold}.
Moreover, $p^*(t)$ is monotone non-increasing for $t\in[t_1, t_2]$.
\item If $V_L < v(t_2) < V_H$, let $c \in (0,1) $ be a constant that satisfies
\begin{gather*}
\int_{t_1}^{t_2}\int_{q:\beta(q)\ge-\phi_c^+(t)} g(q) \alpha(q)\,\mathrm{d} q \mathrm{d} t = v(t_2),
\end{gather*}
where $\phi_c^+(t)$ is the ironed version of the mixed virtual value function $\phi_c(t)$. Then the threshold mechanism with threshold function $\theta^*(t) =-\phi_c^+(t)$ and the following payment function represents an optimal mechanism:
\begin{gather*}
p^*(t) = \int_{q\in Q} \pi^*(q,t)g(q) v(q,t) \,\mathrm{d} q - \int_{t_1}^{t} \int_{q \in Q} \pi^*(q, x) g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x.
\end{gather*}
Moreover, $p^*(t)$ is monotone non-decreasing in $t$ when $F(t)\leq c$ and monotone non-increasing when $F(t)>c$.
\end{enumerate}
Let $\bar{t}$ satisfy $v(\bar{t}) = 0$. In all cases above, the buyer surplus function $s(t)$ is convex and monotone non-decreasing when $t \leq \bar{t}$, and immediately transitions to being convex and monotone non-increasing when $t \geq \bar{t}$.
\label{thm:opt-scheme}
\end{theorem}
The following are a few remarks regarding Theorem \ref{thm:opt-scheme}.
\begin{remark}
The optimal mechanism generally features both price discrimination and information discrimination (see also the concrete example in Section \ref{susec:ex1}). This crucially differs from the sale of a physical good to a buyer, in which the optimal mechanism does \emph{not} exhibit price discrimination. Notably, the price discrimination here is a consequence of the information discrimination. That is, for any two buyer types $t, t'$, if their experiments are the same, then their payments must also be the same, i.e., $p^*(t) = p^*(t')$. This is a simple consequence of the IC constraint --- if $p^*(t) > p^*(t')$, the buyer of type $t$ would misreport $t'$, by which he gets the same information but pays less.
\end{remark}
\begin{remark}
In all three cases of Theorem \ref{thm:opt-scheme}, a threshold mechanism is optimal, though the format of the optimal mechanism depends on how $v(t_2)$ compares to $V_L$ and $V_H$. Threshold mechanisms are ubiquitous in reality. In various formats of quality testing, inspection and recommendation services, we often pay for these ``experiments'' in order to see whether some goods pass a test or some services deserve a recommendation. These can be viewed as threshold mechanisms for selling information. From this perspective, Theorem \ref{thm:opt-scheme} characterizes the optimal design for selling such experiments/information.
\end{remark}
\begin{remark}
We briefly discuss the choice of the constant $c$ in Case 3 of Theorem \ref{thm:opt-scheme}. As we will show later in our proof, $v(t_2) \leq V_H$ implies $v(t_1) \leq 0$ for any feasible mechanism. Hence, in Case 3, the quantities $V_L, V_H$ defined in Equations \eqref{eq:upper-lower-bound1} and \eqref{eq:upper-lower-bound2} consist only of the integral terms, and the condition of Case 3 boils down to
$$\int_{t_1}^{t_2} \int_{q: \beta(q) \geq - \underline{\phi}^+(x)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x < v(t_2) < \int_{t_1}^{t_2} \int_{q: \beta(q) \geq - \bar{\phi}^+(x)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x. $$
Since $\underline{\phi}^+(x) < \bar{\phi}^+(x)$ for any $x$, any $c \in (0,1)$ ``interpolates'' between the two integration regions $\{q: \beta(q) \geq - \bar{\phi}^+(x)\}$ and $\{q: \beta(q) \geq - \underline{\phi}^+(x)\}$.
Since we assume that the distribution of $\beta$ has no point mass, the expression \begin{gather*} \int_{t_1}^{t_2} \int_{q: \beta(q) \geq - \phi_c^+(x)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x \end{gather*} is continuous in $c$.\footnote{This is the only place where the assumption that the distribution of $\beta$ has no point masses is needed. Without this assumption, the threshold mechanism will need randomization for those $q$ with $\beta(q) = -\phi_c^+(t)$. See Appendix \ref{appendix:partial_recommendation} for the refined characterization of the optimal mechanism for general $\beta$. } Lemma \ref{lem:virtual_value_order} implies that it is also weakly decreasing in $c$. This leads to a choice of the constant $c\in (0,1)$ that makes the above term equal $v(t_2)$; we can pin down such a $c$ via a simple binary search. \end{remark} \subsection{An Example}\label{susec:ex1} Consider the credit assessment example in Section \ref{sec:intro} with $v(q,t) = qt - 2 = \alpha(q)[t+\beta(q)]$, where $\alpha(q) = q$ and $\beta(q) = \frac{-2}{q}$. Suppose $q \in Q = [0,1]$ is uniformly distributed, i.e., $g(q) = 1$. Let $t \in T = [2,3]$ also be uniformly distributed with $f(t) = 1$.\footnote{ Besides credit assessment, this setup also captures other applications such as online advertising. Here, $q$ is the probability that an Internet user will purchase the product of an advertiser (the information buyer) and $t$ is the advertiser's revenue from selling a product. The constant $2$ captures the advertiser's payment for displaying his ads to an Internet user. The information seller may be a marketing company that can predict each Internet user's probability of conversion with her rich data and powerful machine learning technology. } In this example, $\underline{\phi}(t)$ is already non-decreasing, and thus the ironing procedure is not needed. We have $\underline{\phi}^+(t) = \underline{\phi}(t) = t - \frac{1 - F(t)}{f(t)} = 2t - 3$. Note that $v(t) = \int_{q \in Q}g(q)v(q,t)\,\mathrm{d} q = \int_{0}^1 (tq - 2)\, \mathrm{d} q = \frac{t}{2}-2 $ for any $t \in [2,3]$. Since $V_L$ defined in Equation \eqref{eq:upper-lower-bound1} is clearly non-negative, we have $v(t_2) = -0.5 < 0 \leq V_L$, so the instance falls into Case 1 of Theorem \ref{thm:opt-scheme}. This implies that an optimal mechanism can be specified by a threshold experiment $\theta^*(t) = -\underline{\phi}^+(t) = 3 - 2t$. That is, for any buyer type $t$, the mechanism will make an obedient recommendation of the active action when $\beta(q) \geq -\underline{\phi}^+(t)$, or concretely, when $q \geq \frac{2}{2t-3}$. Now there are two situations. \begin{itemize} \item When $t < 2.5$, we have $\frac{2}{2t-3} > 1$. This means the mechanism will never recommend the active action since $q$ is at most 1. Therefore, we have $\pi^*(q,t) = 0$ for all $q\in Q$ in this case and the payment is $p^*=0$. For these buyer types, the seller simply sells no information and charges $0$ as well. \item When $t \ge 2.5$, the mechanism will recommend the active action when $q \geq \frac{2}{2t-3}$, which is a threshold in $(0,1)$ that decreases in $t$. In this situation, the payment function $p^*(t)$ can be computed as follows: \begin{align*} p^*(t) = \int _{\frac{2}{2t-3}}^1(qt-2)\,\mathrm{d} q-\int _{2.5}^t\int _{\frac{2}{2x-3}}^1 q\,\mathrm{d} q\mathrm{d} x = -0.25 +\frac{4t-9}{\left(2t-3\right)^2}.
\end{align*} For these buyer types, their utility from the mechanism will be \begin{align*} u(t) = \int _{\frac{2}{2t-3}}^1(qt-2)\,\mathrm{d} q- p^*(t) = -1.75+\frac{t}{2}+\frac{1}{2t-3}. \end{align*} \end{itemize} Notably, to achieve the optimal revenue, the above mechanism does not simply recommend the active action whenever $v(q,t) \geq 0$. For example, when $t = 2.3$, the mechanism reveals no information (and asks for no payment either), even for $q$ with $v(q,t) > 0$. Therefore, the revenue-optimal mechanism generally uses non-trivial information structures. Moreover, the optimal mechanism uses a menu with infinitely many entries. \subsection{The Power of Information Discrimination} The above example shows that the optimal mechanism features information discrimination, i.e., it reveals different information to different buyer types, which then leads to price discrimination. One might wonder how well a mechanism can perform if information discrimination is not allowed. Our following proposition shows that in this case, the optimal mechanism is to simply post a uniform price and then reveal full information to any buyer who is willing to pay. To describe the mechanism, we introduce the notation $e(t)$, which captures the value of full information for any buyer with type $t$: \begin{align} \label{e(t)} e(t) = \int_{q\in Q} \max \{ v(q,t) \,,\, 0\}g(q)\, \mathrm{d} q - \max \bigg\{\int_{q\in Q} v(q,t) g(q) \,\mathrm{d} q \, ,\, 0\bigg\} \end{align} That is, $e(t)$ equals the additional value buyer type $t$ obtains by fully observing $q$. We then have the following proposition. \begin{proposition} \label{lem:full-revealing} If information discrimination is not allowed, then the optimal mechanism is to charge Myerson's reserve price $r^*$ with respect to the value $e(t)$, i.e., $r^* = \operatorname*{argmax}_{r} \, [ r \cdot \mathbf{Pr}_{t\sim F} (e(t) \geq r) ] $, and then reveal full information to any buyer who pays. \end{proposition} The proof of Proposition \ref{lem:full-revealing} is straightforward. In any incentive-compatible optimal mechanism with a single experiment, the payment must be the same for all buyer types due to the IC constraints. Therefore, this optimal payment must be Myerson's reserve price with respect to the value of that experiment. Moreover, switching any experiment to a full-information-revelation experiment never hurts. A formal argument is provided in Appendix \ref{appendix:full-revealing}. Let $RevSingle^*$ denote the optimal revenue obtained in Proposition \ref{lem:full-revealing} without information discrimination, and let $Rev^*$ denote the optimal revenue of Theorem \ref{thm:opt-scheme}. To understand how much power information discrimination brings, we can study the ratio $\frac{RevSingle ^*}{Rev^*} \in [0,1]$. Clearly, the larger this ratio is, the less crucial information discrimination is to revenue. It turns out that information discrimination is generally important for securing a high revenue. Specifically, in Appendix \ref{appendix:bound-1}, we exhibit a concrete example showing that the ratio $\frac{RevSingle ^*}{Rev^*}$ can be arbitrarily close to $0$. This is the case even when the distribution of $e(t)$ is regular.
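To make this comparison concrete, the following minimal numerical sketch (our own illustration, not part of the formal development; the grid sizes and variable names are our choices) evaluates $e(t)$, the reserve price $r^*$ of Proposition \ref{lem:full-revealing}, and the two revenues on the running example of Section \ref{susec:ex1}, where $v(q,t)=qt-2$ with $q\sim U[0,1]$ and $t\sim U[2,3]$.
\begin{verbatim}
# Numerical sketch (ours): uniform-price benchmark vs. optimal revenue
# on the running example v(q,t) = q*t - 2, q ~ U[0,1], t ~ U[2,3].
import numpy as np

ts = np.linspace(2.0, 3.0, 2001)   # buyer types, f(t) = 1 on [2,3]
qs = np.linspace(0.0, 1.0, 2001)   # states, g(q) = 1 on [0,1]

def e_value(t):
    """e(t) = E_q[max(v,0)] - max(E_q[v], 0): value of full information."""
    v = qs * t - 2.0
    return np.maximum(v, 0.0).mean() - max(v.mean(), 0.0)

e = np.array([e_value(t) for t in ts])

# Myerson reserve price w.r.t. e(t): maximize r * Pr(e(t) >= r) over a grid.
revs = np.array([r * np.mean(e >= r) for r in e])
r_star, rev_single = e[revs.argmax()], revs.max()

# Optimal revenue of the worked example: p*(t) = -1/4 + (4t-9)/(2t-3)^2
# for t >= 2.5 and p*(t) = 0 otherwise; Rev* = E_t[p*(t)].
p_star = np.where(ts >= 2.5, -0.25 + (4*ts - 9) / (2*ts - 3)**2, 0.0)
rev_opt = p_star.mean()

print(f"r* = {r_star:.4f}, RevSingle* = {rev_single:.4f}, Rev* = {rev_opt:.4f}")
\end{verbatim}
On this particular instance the two revenues happen to be fairly close, so the sketch mainly illustrates the mechanics; it is the instance in Appendix \ref{appendix:bound-1} that drives the ratio to $0$.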
Interestingly, it turns out that if the distribution of $e(t)$ defined in Equation \eqref{e(t)}, with randomness inherited from type $t$, has a monotone hazard rate, then the optimal revenue without information discrimination always guarantees at least a $1/e$ fraction of the optimal revenue. The proof of this proposition can be found in Appendix \ref{appendix:bound-2}. \begin{proposition} \label{lem:bound-2} If the distribution of $e(t)$ has a monotone hazard rate (with randomness inherited from $t\sim F$), then we always have $\frac{RevSingle ^*}{Rev^*} \geq \frac{1}{e}\, $. \end{proposition} \section{Previous work} We can thus formulate the problem as the following convex program (CP). \begin{lp}\label{lp:OptScheme} \maxi{ \sum_{t} f(t) \cdot p_t} \mbox{subject to} \qcon{ \sum_{q} \pi_t(q,1) \cdot v(q,t) \geq 0 }{t} \qcon{ \sum_{q} \pi_t(q,0) \cdot v(q,t) \leq 0 }{t} \con{ \sum_{q} \pi_t(q,1) \cdot v(q,t)- p_t \geq} \qcon{ \quad \max \{ \sum_{q} \pi_{t'}(q,1) \cdot v(q,t) , 0 \} + \max \{ \sum_{q} \pi_{t'}(q,0) \cdot v(q,t), 0 \} - p_{t'} }{t, t'} \qcon{ \sum_{q} \pi_t(q,1) \cdot v(q,t) - p_t \geq \max\{ 0, \bar{v}_t \} }{t} \qcon{ \pi_t(q,1) + \pi_t(q,0) = g(q) }{q, t} \qcon{\pi_t(q,0), \pi_t(q,1) \geq 0}{q,t} \end{lp} where the first two are obedience constraints, enforcing that each recommendation is obedient (recall that $0$ is the utility of taking action $0$). The third constraint is the incentive compatibility (IC) constraint, enforcing that a buyer of type $t$ should not benefit from misreporting type $t'$ and then best-responding to each recommendation. The fourth constraint describes individual rationality: participating in the information selling mechanism should lead to utility at least the utility of no participation, $\max\{ 0, \bar{v}_t \} $. The last two constraints ensure feasibility of the information selling scheme. \subsection{Characterization of Optimal Mechanism} We start by simplifying CP \ref{lp:OptScheme} with a few claims. \begin{claim} There always exists an optimal solution to CP \ref{lp:OptScheme} such that $p_t \geq 0$ for all $t$. Therefore, the first constraint is dominated by the fourth constraint in this optimal solution. \end{claim} \begin{proof} Suppose $p_t < 0$ for some $t$ in some optimal solution. Then changing $p_t$ to $0$ (a higher payment) and $\pi_t$ to revealing no information (less information) can only increase revenue. Moreover, this does not violate any incentive compatibility constraint either, since any other type $t'$ deviating to $t$ would now pay more and receive less information. \end{proof} \begin{claim} The third constraint can be reduced to $\sum_{q} \pi_t(q,1) \cdot v(q,t) - p_t \geq \sum_{q} \pi_{t'}(q,1) \cdot v(q,t) - p_{t'}$. \end{claim} \begin{proof} Note that if both $\max \{ \sum_{q} \pi_{t'}(q,1) \cdot v(q,t), 0 \} $ and $ \max \{ \sum_{q} \pi_{t'}(q,0) \cdot v(q,t) , 0 \} $ achieve the maximum at the non-zero term, then the third constraint is implied by the fifth constraint. It can also be shown that it can never be the case that the first max is achieved at zero while the second is achieved at the non-zero term. When both maxima are zero, the third constraint is dominated by the fourth. So the only remaining situation is the one stated in the claim.
\end{proof} Utilizing the above two claims, CP \eqref{lp:OptScheme} can be reduced to the following LP: \begin{lp}\label{lp:OptScheme-simple} \maxi{ \sum_{t} f(t) \cdot p_t} \mbox{subject to} \qcon{ \sum_{q} \pi_t(q,0) \cdot v(q,t) \leq 0 }{t} \qcon{\sum_{q} \pi_t(q,1) \cdot v(q,t) - p_t \geq \sum_{q} \pi_{t'}(q,1) \cdot v(q,t) - p_{t'} }{t, t'} \qcon{ \sum_{q} \pi_t(q,1) \cdot v(q,t) - p_t \geq \max\{ 0, \bar{v}_t \} }{t} \qcon{ \pi_t(q,1) + \pi_t(q,0) = g(q) }{q, t} \qcon{\pi_t(q,0), \pi_t(q,1) \geq 0}{q,t} \end{lp} Slightly re-writing this LP: \begin{lp}\label{lp:OptScheme-variant} \maxi{ \sum_{t} f(t) \cdot p_t} \mbox{subject to} \qcon{ \sum_{q} \pi_t(q,0) \cdot v(q,t) \leq 0 }{t} \qcon{\sum_{q} \big[ \pi_{t'}(q,1) - \pi_{t}(q,1)\big] \cdot v(q,t) + p_t - p_{t'} \leq 0 }{t, t'} \qcon{ p_t - \sum_{q} \pi_t(q,1) \cdot v(q,t) \leq - \max\{ 0, \bar{v}_t \} }{t} \qcon{ \pi_t(q,1) + \pi_t(q,0) = g(q) }{q, t} \qcon{\pi_t(q,0), \pi_t(q,1) \geq 0}{q,t} \end{lp} Letting $y_t, \beta_{t,t'}, \lambda_t, w_{q,t}$ be the corresponding dual variables for the first four constraints, we obtain the following dual program. \begin{lp}\label{lp:OptScheme-dual} \mini{ \sum_{t} \sum_{q} g(q) \cdot w_{q,t} - \sum_{t} \lambda_t \cdot \max \{ 0, \bar{v}_t \} } \mbox{subject to} \qcon{ - v(q,t) \cdot \big[ \lambda_t + \sum_{t'} \beta_{t,t'} \big] + \sum_{t'} v(q,t') \cdot \beta_{t',t} + w_{q,t} \geq 0}{q, t} \qcon{ v(q,t) \cdot y_t + w_{q,t} \geq 0}{q, t} \qcon{ \sum_{t'} \beta_{t,t'} - \sum_{t'} \beta_{t',t} + \lambda_t = f(t) }{t} \qcon{y, \beta, \lambda \geq 0}{q,t} \end{lp} \begin{claim} LP \eqref{lp:OptScheme-dual} always has an optimal solution such that $y_t = \infty$ wh \end{claim} \todo{Want to argue that the above LP and its dual have the desired property as in Conjecture 1. One idea is to prove that for any fixed $t$, there is some $\theta_t$ such that the first constraint in the above dual is tight when $q > \theta_t$ and the second constraint is tight when $q < \theta_t$. } \section{Previous Random Thoughts} \subsection{Myersonian Approach for Continuous $Q,T$} Here we consider the case where both $t\in[0,\infty)$ and $q\in[0,\infty)$ are continuous. Assume that $v(q,0)=0,\forall q$, and $q\sim G(q)$ and $t\sim F(t)$. Incentive compatibility between any two types $t$ and $t'$ gives \begin{gather} \mathbf{E}_{q\sim G}\left[ \left(\pi_{t'}(q,1)-\pi_{t}(q,1)\right)v(q,t) \right]\le p_{t'}-p_t\le \mathbf{E}_{q\sim G}\left[ \left(\pi_{t'}(q,1)-\pi_{t}(q,1)\right)v(q,t') \right],\nonumber\\ \mathbf{E}_{q\sim G}\left[ \left(\pi_{t'}(q,1)-\pi_{t}(q,1)\right)\left(v(q,t')-v(q,t)\right) \right]\ge 0.\label{eq:allocation_monotonicity} \end{gather} Equation \eqref{eq:allocation_monotonicity} is called allocation monotonicity in the standard auction setting. Define \begin{gather*} u(t)= \mathbf{E}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t) \right]-p_t \end{gather*} With Equations \eqref{eq:ic1} and \eqref{eq:ic2}, we also get: \begin{align*} u(t)&\ge \mathbf{E}_{q\sim G}\left[ \pi_{t'}(q,1)v(q,t') \right]-p_{t'}+\mathbf{E}_{q\sim G}\left[ \pi_{t'}(q,1)\left( v(q,t)-v(q,t') \right) \right]\\ &=u(t')+\mathbf{E}_{q\sim G}\left[ \pi_{t'}(q,1)\left( v(q,t)-v(q,t') \right) \right].
\end{align*} Therefore, \begin{gather*} \mathbf{E}_{q\sim G}\left[ \pi_{t'}(q,1)\left( v(q,t)-v(q,t') \right) \right]\le u(t)-u(t')\le \mathbf{E}_{q\sim G}\left[ \pi_{t}(q,1)\left( v(q,t)-v(q,t') \right) \right]. \end{gather*} Assume $t'<t$, divide the above inequality by $t-t'$, and let $t'\to t$: \begin{gather*} \odv{u(t)}{t}=\mathbf{E}_{q\sim G}\left[ \pi_{t}(q,1)\pdv{v(q,t)}{t} \right]. \end{gather*} \begin{align*} REV&=\int_{0}^{+\infty}f(t)p_t\,\mathrm{d}t\\ &=\int_{0}^{+\infty}f(t)\left(\mathbf{E}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t)-u(t) \right]\right)\,\mathrm{d}t\\ &=\int_{0}^{+\infty}f(t)\left(\mathbf{E}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t)-\int_{0}^{t}\odv{u(s)}{s}\mathrm{d}s-u(0) \right]\right)\,\mathrm{d}t\\ &=\int_{0}^{+\infty}f(t)\left(\mathbf{E}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t) \right]\right)\,\mathrm{d}t-\int_{0}^{+\infty}\int_{s}^{+\infty}f(t)\odv{u(s)}{s}\,\mathrm{d}t\,\mathrm{d}s-u(0)\\ &=\int_{0}^{+\infty}f(t)\left(\mathbf{E}_{q\sim G}\left[ \pi_{t}(q,1)v(q,t) \right]\right)\,\mathrm{d}t-\int_{0}^{+\infty}(1-F(s))\odv{u(s)}{s}\,\mathrm{d}s-u(0)\\ &=\mathbf{E}_{q\sim G}\left[\int_{0}^{+\infty}f(t)\pi_{t}(q,1)v(q,t)\,\mathrm{d}t\right]-\int_{0}^{+\infty}(1-F(t))\mathbf{E}_{q\sim G}\left[ \pi_{t}(q,1)\pdv{v(q,t)}{t} \right]\,\mathrm{d}t-u(0)\\ &=\mathbf{E}_{q\sim G}\left[ \int_{0}^{+\infty}f(t)\pi_t(q,1)\left(v(q,t)-\pdv{v(q,t)}{t}\frac{1-F(t)}{f(t)}\right)\,\mathrm{d}t \right]-u(0) \end{align*} As $\odv{u(t)}{t}\ge0$, we can set $u(0)=0$. Let \begin{gather*} \varphi(t|q)=v(q,t)-\pdv{v(q,t)}{t}\frac{1-F(t)}{f(t)}. \end{gather*} We have \begin{gather*} REV=\mathbf{E}_{q\sim G}\left[ \int_{0}^{+\infty}f(t)\pi_t(q,1)\varphi(t|q)\,\mathrm{d}t \right]. \end{gather*} Similar to the Myerson auction, there is a threshold $\theta_q$ for each $q$. We may need the ``ironing'' trick: define \begin{gather*} R(\eta|q)=v\left(q, F^{-1}(1-\eta)\right)\cdot \eta, \end{gather*} where $\eta=1-F(t)$; we then have $\pdv{R(\eta|q)}{\eta}=\varphi(t|q)$. With Equation \eqref{eq:allocation_monotonicity}, any threshold testing mechanism must satisfy that $\theta_t$ is decreasing in $t$. If the optimal mechanism is a threshold testing mechanism, then $\theta_q$ must also be a decreasing function of $q$. With the regularity assumption, we have \begin{gather*} v(q,\theta_q)-\pdv{v(q,\theta_q)}{\theta_q}\frac{1-F(\theta_q)}{f(\theta_q)}=0.
\end{gather*} Therefore, the optimal mechanism is a threshold testing mechanism if and only if \begin{gather*} \odv{\theta_q}{q}\le 0, \end{gather*} where implicit differentiation of the first-order condition above yields \begin{gather*} \odv{\theta_q}{q}=-\frac{\pdv{v(q,\theta_q)}{q}-\pdv{v(q,\theta_q)}{q,\theta_q}\frac{1-F(\theta_q)}{f(\theta_q)}}{\pdv{v(q,\theta_q)}{\theta_q}-\pdv[2]{v(q,\theta_q)}{\theta_q}\frac{1-F(\theta_q)}{f(\theta_q)}+\pdv{v(q,\theta_q)}{\theta_q}\frac{f^2(\theta_q)+(1-F(\theta_q))f'(\theta_q)}{f^2(\theta_q)}} \end{gather*} \hf{ The feasible mechanism characterization with general $v(q,t)$ is the following, which I do not know how to prove. \begin{lemma}\label{lem:feasible} A mechanism $(\pi, p)$ is feasible if and only if it satisfies the following constraints: \begin{eqnarray} && \int_{q_1}^{q_2} g(q) [\pi(q,t) - \pi(q,t')][v(q,t) - v(q,t')] \, dq \geq 0 \\ && u(t) = u(t_1) + \int_{q_1}^{q_2}\int_{t_1}^{t} g(q) \pi(q,x) \frac{\partial v(q,x)}{\partial x} dx \, dq, \, \, \, \forall t \in [T] \\ & & u(t_2) \geq v(t_2) \\ & & p(t) \geq 0, \, u(t) \geq 0 \, \, \, \forall t \in [T] \end{eqnarray} \end{lemma} \begin{proof} An equivalent form of Condition (20) is the payment identity \begin{equation} p'(t) = \int_{q_1}^{q_2} g(q) \frac{\partial \pi(q,t)}{\partial t} v(q,t) dq, \, \, \, \forall t \in [T] \end{equation} This comes from the definition of $u(t):$ \begin{equation} u(t) = \int_{q_1}^{q_2} g(q) \pi(q,t) v(q,t) dq - p(t), \, \, \, \forall t \in [T] \end{equation} Therefore, the IC constraint is equivalent to \begin{eqnarray} && \int_{t'}^t \int_{q_1}^{q_2} \left[ g(q) \pi(q,s) \frac{\partial v(q,s)}{\partial s} \right] dq ds \geq \int_{q_1}^{q_2} \pi(q, t') \cdot g(q) [v(q,t) - v(q,t')] dq \\ &\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ \int_{t'}^t \left[ \pi(q,s) \frac{\partial v(q,s)}{\partial s} \right] ds - \pi(q, t') [v(q,t) - v(q,t')] \Bigg] dq \geq 0 \\ &\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ \int_{t'}^t \left[ \pi(q,s) \frac{\partial v(q,s)}{\partial s} \right] ds - \int_{t'}^t \pi(q, t') \frac{\partial v(q,s)}{ \partial s} ds \Bigg] dq \geq 0 \\ &\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ \int_{t'}^t \left[ [\pi(q,s) - \pi(q, t')] \frac{\partial v(q,s)}{\partial s} \right] ds \Bigg] dq \geq 0 \\ &\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ [\pi(q,s) - \pi(q, t')]v(q,s)|_{t'}^t - \int_{t'}^t \left[ \frac{\partial \pi(q,s)}{\partial s} v(q,s) \right] ds \Bigg] dq \geq 0 \\ &\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ [ \pi(q,t) - \pi(q, t')]v(q,t) - \int_{t'}^t \left[ \frac{\partial \pi(q,s)}{\partial s} v(q,s) \right] ds \Bigg] dq \geq 0 \\ &\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ \int_{t'}^t \left[ \frac{\partial \pi(q,s)}{\partial s} [v(q,t) - v(q,s)] \right] ds \Bigg] dq \geq 0 \end{eqnarray} \end{proof} } \subsection{Other Directions to Pursue } One reasonable simplification is to assume $v(q,t) = q \cdot t$ (this is the click-through-rate setting where $q$ is the CTR and $t$ is the advertiser's value per click). \begin{question} The above setting can be generalized to the case where there are multiple threshold-test principals in the market, and one can examine the equilibrium prices of threshold tests. \end{question} \begin{question} The above setting can be generalized to the case where there is also a product seller in the market who knows $q$. However, he is not trusted by the purchasing agent. He would like to turn to the principal to choose a threshold test for his product. In this case, what is the principal's optimal mechanism? What if the product seller also does not know the quality of his product, but may have a better distribution, i.e., an additional signal about $q$?
\end{question} \begin{question} Threshold tests are common in practice, e.g., used-car inspections, house inspections, and standardized exams such as the GRE and SAT. In an exam setting, the agent has a slightly different utility function: he gets utility $u(\theta)$ for passing a test with threshold $\theta$ and utility $0$ for not passing the test. What is the optimal threshold test pricing for such exams? \end{question} \subsection{Deriving the Optimal Mechanism for Case 3} \label{sec:case3} With the characterization of feasible mechanisms in Subsection \ref{sec:proof:characterize}, we are now ready to derive the optimal mechanism. This is where our proof starts to significantly deviate from standard approaches for classic mechanism design settings. To see the reasons, recall that Lemma \ref{lem:surplus-concave} shows that the buyer surplus $s(t)$ in our problem generally increases first and then decreases. In single-item auction design, however, the buyer's utility is always increasing in his type, and thus the optimal auction can always set the buyer's surplus to be $0$ at the lowest type \citep{myerson81}. In our case, by contrast, either $s(t_1)$ or $s(t_2)$ could be the point with the lowest surplus, and we have to determine which one is the lowest under which conditions. Moreover, the participation constraints require $u(t_2) \geq v(t_2)$ and $u(t_1) \geq 0$.\footnote{Generally, the IR constraints require $u(t) \geq \max \{ v(t), 0 \}, \forall t \in T$, but Lemma \ref{lem:feasible-M} reduces the IR constraints to $u(t_2) \geq v(t_2)$ and $u(t_1) \geq 0$.} To ensure these constraints hold, the format of the optimal mechanism and its derivation both become more involved. It turns out that whether the minimum buyer surplus is achieved at $t_1$, at $t_2$, or simultaneously at both depends on how large $v(t_1)$ and $v(t_2)$ are. Specifically, the optimal mechanism has different forms depending on whether $v(t_2)\le V_L$, $v(t_2)\ge V_H$, or $V_L< v(t_2)< V_H$, where $V_L$ and $V_H$ are defined in Equations \eqref{eq:upper-lower-bound1} and \eqref{eq:upper-lower-bound2}. To further illustrate these conditions, the following lemma shows that the conditions for the above three cases can be equivalently expressed in terms of $v(t_1)$ as well. \begin{lemma} \label{lem:case_condition_t1} Define \begin{gather*} V'_L=-\int_{t_1}^{t_2} \int_{q: \beta(q) \leq - \underline{\phi}^+(x)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x\\ V'_H=-\int_{t_1}^{t_2} \int_{q: \beta(q) \leq - \bar{\phi}^+(x)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x. \end{gather*} Then the three conditions $v(t_2)\le V_L$, $v(t_2)\ge V_H$, and $V_L< v(t_2)< V_H$ are equivalent to $v(t_1)\le V'_L$, $v(t_1)\ge V'_H$, and $V'_L< v(t_1)< V'_H$, respectively. \end{lemma} \begin{proof} We will only show that $v(t_2)\le V_L$ is equivalent to $v(t_1)\le V'_L$, as the other two cases follow from similar arguments. By definition, we have \begin{gather*} v(t_2)=\int_{ q \in Q}g(q)\alpha(q)[t_2+\beta(q)]\,\mathrm{d} q=v(t_1)+(t_2-t_1)\int_{ q \in Q}g(q)\alpha(q)\,\mathrm{d} q. \end{gather*} Thus $v(t_2)\le V_L$ can be written as: \begin{gather*} v(t_1)+(t_2-t_1)\int_{ q \in Q}g(q)\alpha(q)\,\mathrm{d} q\le \max \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \beta(q) \geq -\underline{\phi}^+(x)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x.
\end{gather*} Some re-arrangement yields: \begin{gather*} v(t_1)-\max \{v(t_1), 0 \}\le \int_{t_1}^{t_2} \int_{q: \beta(q) \geq -\underline{\phi}^+(x)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x-(t_2-t_1)\int_{ q \in Q}g(q)\alpha(q)\,\mathrm{d} q, \end{gather*} which is equivalent to: \begin{gather*} \min\{v(t_1),0\} \le - \int_{t_1}^{t_2} \int_{q: \beta(q) \leq -\underline{\phi}^+(x)} g(q) \alpha(q) \,\mathrm{d} q \mathrm{d} x=V'_L. \end{gather*} Note that the right-hand side is always non-positive, so the left-hand side must equal $v(t_1)$. Thus the condition $v(t_2)\le V_L$ is equivalent to $v(t_1)\le V'_L$, which also implies $v(t_1)\le 0$. \end{proof} In the remainder of this section, we will focus on the case $V_L< v(t_2)< V_H$. For convenience of reference, we re-state Case 3 of Theorem \ref{thm:opt-scheme} in the following proposition. \begin{proposition} \label{lem:case_3} If $V_L < v(t_2) < V_H$, let $c \in (0,1) $ be a constant that satisfies \begin{gather} \int_{t_1}^{t_2}\int_{q:\beta(q)\ge-\phi_c^+(t)} g(q) \alpha(q)\,\mathrm{d} q \mathrm{d} t = v(t_2), \label{eq:choice_c} \end{gather} where $\phi_c^+(t)$ is the ironed version of the mixed virtual value function $\phi_c(t)$. Then the threshold mechanism with threshold signaling function $\theta^*(t) =-\phi_c^+(t)$ and the following payment function represents an optimal mechanism: \begin{gather*} p^*(t) = \int_{q\in Q} \pi^*(q,t)g(q) v(q,t) \,\mathrm{d} q - \int_{t_1}^{t} \int_{q \in Q} \pi^*(q, x) g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} x. \end{gather*} Moreover, $p^*(t)$ is non-decreasing in $t$ when $F(t)\leq c$ and monotone non-increasing when $F(t)>c$. \end{proposition} Before proving the optimality of our mechanism, we first argue that the constant $c$ described in Proposition \ref{lem:case_3} actually exists and thus the mechanism is well-defined. \begin{lemma} \label{lem:existence_of_C} If $V_L < v(t_2) < V_H$, there exists a constant $c\in (0,1)$ that satisfies Equation \eqref{eq:choice_c}. \end{lemma} \begin{proof} Lemma \ref{lem:case_condition_t1} implies that the condition $v(t_2)<V_H$ is equivalent to the following: \begin{gather} v(t_1) < - \int_{t_1}^{t_2} \int_{q: \beta(q) \leq -\bar{\phi}^+(t)} g(q) \alpha(q) \,\mathrm{d} q \mathrm{d} t. \end{gather} The right-hand side of the above inequality is clearly non-positive. Thus $v(t_1)\le 0$ and $\max \{v(t_1), 0 \}=0$. The condition $V_L < v(t_2) < V_H$ can therefore be written as: \begin{gather*} \int_{t_1}^{t_2} \int_{q: \beta(q) \geq -\underline{\phi}^+(t)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} t < v(t_2) < \int_{t_1}^{t_2} \int_{q: \beta(q) \geq -\bar{\phi}^+(t)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} t. \end{gather*} Recall that $\phi_c = c\underline{\phi} + (1-c)\bar{\phi}$. When $c = 0$, we have $-\phi_c^+(t)=-\bar{\phi}^+(t)$ and \begin{gather} \int_{t_1}^{t_2}\int_{q:\beta(q)\ge-\phi_c^+(t)} g(q) \alpha(q)\,\mathrm{d} q \mathrm{d} t = \int_{t_1}^{t_2} \int_{q: \beta(q) \geq -\bar{\phi}^+(t)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} t > v(t_2). \label{eq:c=0} \end{gather} When $c = 1$, we have $-\phi_c^+(t)=-\underline{\phi}^+(t)$ and \begin{gather} \int_{t_1}^{t_2} \int_{q: \beta(q) \geq -\phi_c^+(t)} g(q) \alpha(q)\,\mathrm{d} q\mathrm{d} t = \int_{t_1}^{t_2} \int_{q: \beta(q) \geq -\underline{\phi}^+(t)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} t < v(t_2). \label{eq:c=1} \end{gather} Now we show that the following function is continuous in $c$: \begin{gather*} \int_{t_1}^{t_2} \int_{q: \beta(q) \geq - \phi^+_c(t)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} t.
\end{gather*} Specifically, we show that $c \mapsto \phi_c^+$ is continuous under the $l_1$ norm $\int_{t_1}^{t_2} |\phi_c^+(t)| \,\mathrm{d} t$. Note that this is not obvious, since the ironing procedure involves taking derivatives, which is not a continuous operator in general.\footnote{For example, the function sequence $\{ x^n \}_{n=1}^{\infty}$ tends to the constant function $0$ but their derivatives do not. } Fortunately, continuity turns out to hold in our specific problem. Let $h_c(z)=\phi_c(F^{-1}(z))$ be the corresponding function defined in the ironing procedure (Appendix \ref{append:ironing-procedure}), where $z=F(t)$; as in that procedure, let $H_c(z)=\int_0^z h_c(r)\,\mathrm{d} r$, let $L_c$ be the convex hull of $H_c$, and let $l_c = L_c'$, so that $\phi_c^+(t) = l_c(F(t))$. First, we observe that $h_c(z)$ is continuous in $c$ (all functions in this proof are compared under the $l_1$ norm), because \begin{align} \lim_{\epsilon \to 0} \int_{0}^{1} | h_{c+\epsilon}(z) - h_c(z)| \mathrm{d} z & = \lim_{\epsilon \to 0} \int_{t_1}^{t_2} f(t) |\phi_{c+\epsilon}(t) - \phi_c(t)| \mathrm{d} t\nonumber\\ &= \lim_{\epsilon \to 0} \int_{t_1}^{t_2} f(t) \left| t-\frac{c+\epsilon-F(t)}{f(t)} - t+\frac{c-F(t)}{f(t)}\right| \mathrm{d} t \nonumber \\ &=\lim_{\epsilon \to 0} \int_{t_1}^{t_2} |\epsilon| \mathrm{d} t \nonumber \\ &= 0 . \label{function-limit} \end{align} Next, we prove that $l_c(z)$ is continuous in $c$. By Lemma \ref{lem:virtual_value_order}, for any $0 \leq c < c' \leq 1$, ${\phi}_c^+ (t) \geq {\phi}_{c'}^+(t)$ for any $t$. Thus, for any $0 \leq c < c' \leq 1$, $l_c (z) \geq l_{c'}(z)$ for any $z$. Using this monotonicity and the fact that the ironing procedure satisfies $ \int_0^1 h_c(z) \mathrm{d} z = \int_0^1 l_c(z) \mathrm{d} z $, we have \begin{align*} \lim_{\epsilon \to 0^-} \int_{0}^{1} \left|l_{c+\epsilon}(z) - l_c(z)\right| \mathrm{d} z & = \lim_{\epsilon \to 0^-} \left[ \int_{0}^{1} l_{c+\epsilon}(z) \mathrm{d} z - \int_{0}^{1} l_c(z) \mathrm{d} z \right] \\ & =\lim_{\epsilon \to 0^-} \left[ \int_{0}^{1} h_{c+\epsilon}(z) \mathrm{d} z - \int_{0}^{1} h_c(z) \mathrm{d} z \right] \\ & =\lim_{\epsilon \to 0^-} \int_{0}^{1} \left[h_{c+\epsilon}(z) - h_c(z) \right]\mathrm{d} z \\ &= 0, \end{align*} where the last equation is due to the continuity of $h_c(z)$ in $c$ proved above. A similar derivation holds when $\epsilon \to 0^+$, so the function $l_c(z)$ is continuous in $c$. Finally, it is straightforward to see that $\phi_c^+(t) = l_c(F(t))$ is continuous in $c$ as well, because $f(t)$ has full support on $[t_1,t_2]$ and thus $ f_{\min}\, \mathrm{d} t \leq \mathrm{d} F(t) \leq f_{\max}\, \mathrm{d} t$, where $f_{\min}, f_{\max}$ are the smallest and largest values of $f(t)$ on the interval $[t_1,t_2]$. This concludes the argument that $\phi_c^+$ is continuous in $c$ under the $l_1$ norm. Thus, we can conclude that the function $\int_{t_1}^{t_2} \int_{\beta(q) \geq - \phi^+_c(t)} g(q) \alpha(q) \,\mathrm{d} q\mathrm{d} t$ is continuous in $c$. Combined with Equations \eqref{eq:c=0} and \eqref{eq:c=1}, the intermediate value theorem yields a $c\in(0,1)$ that satisfies Equation \eqref{eq:choice_c}. \end{proof} We remark that the proof of Lemma \ref{lem:existence_of_C} relies on the assumption that the distribution of $\beta(q)$ does not contain a point mass. If this non-degeneracy assumption does not hold, we can slightly adjust our analysis to still obtain an optimal threshold mechanism, but with randomized signals at the boundary of the threshold experiments. For completeness, we derive the optimal mechanism for this general case in Appendix \ref{appendix:partial_recommendation}.
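The existence proof above is effectively constructive: on a discrete grid one can iron $\phi_c$ by convexifying $H_c$ in quantile space and then bisect on $c$, as suggested in the remark following Theorem \ref{thm:opt-scheme}. The following sketch is our own toy illustration of this recipe; the instance ($t\sim U[0,1]$, $q\sim U[0,1]$, $\alpha\equiv 1$, $\beta(q)=q-\tfrac12$) and the synthetic target are assumptions chosen only so that a root exists.
\begin{verbatim}
# Toy sketch (ours): compute phi_c^+ by Myerson ironing on a grid, then
# bisect on c so that the expected recommendation mass hits a target.
import numpy as np

ts = np.linspace(0.0, 1.0, 4001)           # types: F(t) = t, f(t) = 1
F, f = ts, np.ones_like(ts)

def ironed_phi(c):
    """phi_c(t) = t - (c - F(t))/f(t); iron via the lower convex hull of
    H_c(z) = int_0^z h_c, where h_c(z) = phi_c(F^{-1}(z)), z = F(t)."""
    z, h = F, ts - (c - F) / f
    H = np.concatenate([[0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(z))])
    hull = [0, 1]                          # lower convex hull of (z, H)
    for i in range(2, len(z)):
        while len(hull) >= 2 and \
              (H[hull[-1]] - H[hull[-2]]) * (z[i] - z[hull[-1]]) >= \
              (H[i] - H[hull[-1]]) * (z[hull[-1]] - z[hull[-2]]):
            hull.pop()                     # pop points lying above the chord
        hull.append(i)
    slopes = np.diff(H[hull]) / np.diff(z[hull])   # l_c = L_c' per segment
    seg = np.clip(np.searchsorted(z[hull], z, side="right") - 1,
                  0, len(slopes) - 1)
    return slopes[seg]                     # phi_c^+(t) = l_c(F(t))

def mass(c):
    """int_t int_{q: beta(q) >= -phi_c^+(t)} g(q) alpha(q) dq dt for the
    toy instance alpha = 1, beta(q) = q - 1/2, q ~ U[0,1]."""
    return np.clip(0.5 + ironed_phi(c), 0.0, 1.0).mean()

# Pick a target strictly between the c = 1 and c = 0 endpoints (Case 3),
# then bisect: mass(c) is continuous and non-increasing in c.
target = 0.5 * (mass(1.0) + mass(0.0))
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mass(mid) > target else (lo, mid)
print(f"c = {0.5 * (lo + hi):.4f}")        # about 0.5625 on this toy instance
\end{verbatim}
Since the mass is only weakly decreasing in $c$, bisection returns some root; by the argument above, any such $c$ yields a well-defined threshold mechanism.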
Lemma \ref{lem:existence_of_C} implies that the mechanism proposed in Proposition \ref{lem:case_3} exists. Next we show that it is also feasible. \begin{lemma} \label{lem:case_3_feasible} The mechanism $({\pi}^*, {p}^*)$ defined according to $\phi_c^+(t)$ is feasible. Moreover, it satisfies: (1) $u(t_1) = 0, u(t_2) = v(t_2)$; (2) $p^*(t)$ is non-decreasing in $t$ when $F(t)\leq c$ and monotone non-increasing when $F(t)>c$. \end{lemma} \begin{proof} To prove Lemma \ref{lem:case_3_feasible}, it suffices to show that the mechanism $({\pi}^*, {p}^*)$ satisfies all the constraints \eqref{eq:signal-monotonicity}, \eqref{eq:buyer-utility-identify2}, \eqref{eq:ir-t2}, and \eqref{eq:non-negativity} in Lemma \ref{lem:feasible-M}. By definition, \begin{gather*} P_{{\pi^*}}(t) = \int_{q:\beta(q)\ge -\phi_c^+(t)} g(q) \alpha(q) \,\mathrm{d} q. \end{gather*} Since $\phi_c^+(t)$ is already ironed, it is non-decreasing in $t$. Thus the integration domain of $P_{{\pi}^*}(t)$ gets larger as $t$ increases. So $P_{{\pi}^*}(t)$ is non-decreasing since $g(q)\alpha(q)\ge 0$, and thus satisfies constraint \eqref{eq:signal-monotonicity}. To show that the mechanism satisfies constraint \eqref{eq:buyer-utility-identify2}, note that the payment function in Proposition \ref{lem:case_3} implies \begin{gather*} u(t)=\int_{q \in Q} g(q) \pi^*(q,t)v(q,t) \,\mathrm{d} q -p^*(t)=\int_{t_1}^{t} P_{{\pi}^*}(x)\,\mathrm{d} x. \end{gather*} We then have $u(t_1)=0$ and consequently $ u(t)=u(t_1)+\int_{t_1}^{t} P_{{\pi}^*}(x)\,\mathrm{d} x$ as in constraint \eqref{eq:buyer-utility-identify2}. As for constraint \eqref{eq:ir-t2}, we already have $u(t_1)=0$, and \begin{gather*} u(t_2) = \int_{t_1}^{t_2} P_{{\pi}^*}(x)\,\mathrm{d} x = \int_{t_1}^{t_2} \int_{q:\beta(q)\ge-\phi_c^+(t)} g(q) \alpha(q) \,\mathrm{d} q \mathrm{d} t = v(t_2), \end{gather*} where the last equality follows from the definition of the constant $c$. More involved is showing the stated properties of the payment function, which are intrinsically related to the obedience constraints. We now argue that $p^*(t)$ is monotone non-decreasing when $F(t)\le c$, and monotone non-increasing when $F(t)\ge c$ (recall that $c\in (0,1)$). Let $t_c$ be the buyer type such that $F(t_c) = c$. By Lemma \ref{lem:virtual_value_order}, we have $\phi_c^+(t)\leq t$ for all $t$ with $F(t)\leq c$, and $\phi_c^+(t)\geq t$ for all $t$ with $F(t)\geq c$. We first consider the case $F(t) \leq c$, i.e., $t\le t_c$. Let $t'$ be any number in the interval $[\phi_c^+(t), t]$. Thus $ \phi_c^+(t') \leq \phi_c^+(t)\le t$. And \begin{align*} &p^*(t)-p^*(t')\\ =&\int_{q\in Q} g(q) \pi^*(q,t)v(q,t)\,\mathrm{d} q -\int_{q\in Q} g(q) \pi^*(q,t')v(q,t')\,\mathrm{d} q - \int_{t'}^{t} P_{\pi^*}(x) \,\mathrm{d} x\\ =&\int_{q:\beta(q)\ge -\phi_c^+(t)}g(q)v(q,t)\,\mathrm{d} q-\int_{q:\beta(q)\ge -\phi_c^+(t')}g(q)v(q,t')\,\mathrm{d} q- \int_{t'}^{t} P_{\pi^*}(x) \,\mathrm{d} x. \end{align*} When $\beta(q)\ge -\phi_c^+(t)$, we have $v(q,t')=\alpha(q)[t'+\beta(q)]\ge \alpha(q)[t'-\phi_c^+(t)]\ge 0$, where the last inequality is due to the choice of $t'$. So the second term in the above equation satisfies: \begin{align*} &\int_{q:\beta(q)\ge -\phi_c^+(t')}g(q)v(q,t')\,\mathrm{d} q\\ =&\int_{q:\beta(q)\ge -\phi_c^+(t)}g(q)v(q,t')\,\mathrm{d} q-\int_{q:-\phi_c^+(t)\le\beta(q) < -\phi_c^+(t')}g(q)v(q,t')\,\mathrm{d} q\\ \le&\int_{q:\beta(q)\ge -\phi_c^+(t)}g(q)v(q,t')\,\mathrm{d} q.
\end{align*} Thus, \begin{align*} &p^*(t)-p^*(t')\\ \ge&\int_{q:\beta(q)\ge -\phi_c^+(t)}g(q)v(q,t)\,\mathrm{d} q-\int_{q:\beta(q)\ge -\phi_c^+(t)}g(q)v(q,t')\,\mathrm{d} q- \int_{t'}^{t} P_{\pi^*}(x) \,\mathrm{d} x\\ =&\int_{q:\beta(q)\ge -\phi_c^+(t)}g(q)\alpha(q)(t-t')\,\mathrm{d} q-\int_{t'}^{t} P_{\pi^*}(x) \,\mathrm{d} x\\ =&(t-t')P_{\pi^*}(t)-\int_{t'}^{t} P_{\pi^*}(x) \,\mathrm{d} x\\ \ge &0, \end{align*} where the last inequality is due to the monotonicity of $P_{\pi^*}(t)$. Therefore, the payment function $p^*(t)$ is monotone non-decreasing on the interval $[\phi_c^+(t),t]$. Since the set of intervals $\{[\phi_c^+(t),t]\mid t\in [t_1,t_c]\}$ covers $[t_1,t_c]$, we conclude that $p^*(t)$ is monotone non-decreasing on $[t_1,t_c]$. Using a similar analysis, we can show that $p^*(t)$ is monotone non-increasing on the interval $[t_c,t_2]$. Therefore, to prove that $p^*(t)\ge 0$ for all $t\in T$, it suffices to show that $p^*(t_1)\ge0$ and $p^*(t_2)\ge 0$. Indeed, we have \begin{align*} p^*(t_1) = \int_{q \in Q} \pi^*(q,t_1) g(q) v(q,t_1)\, \mathrm{d} q - u(t_1) = \int_{q:\beta(q)\geq-\phi_c^+(t_1)} g(q) v(q,t_1)\, \mathrm{d} q \geq 0. \end{align*} The last inequality is because when $\beta(q)\ge-\phi_c^+(t_1)\ge -t_1$, we have $v(q,t_1)=\alpha(q)[t_1+\beta(q)]\ge 0$. And \begin{align*} p^*(t_2) =& \int_{q \in Q} \pi^*(q,t_2) g(q) v(q,t_2)\, \mathrm{d} q - u(t_2)\\ =&\int_{q: \beta(q) \geq -\phi_c^+(t_2) } g(q) \alpha(q)[t_2+\beta(q)]\, \mathrm{d} q -\int_{q \in Q} g(q) \alpha(q)[t_2 + \beta(q)]\,\mathrm{d} q\\ =&-\int_{q: \beta(q) < -\phi_c^+(t_2) } g(q) \alpha(q)[t_2+\beta(q)]\, \mathrm{d} q\\ \ge& 0, \end{align*} where the last inequality is because $\beta(q) < -\phi_c^+(t_2)\le -t_2$ implies $t_2+\beta(q)<0$. \end{proof} Finally, we prove the optimality of the mechanism $(\pi^*,p^*)$. Since the optimal mechanism of Proposition \ref{lem:case_3} depends on the ironed mixed virtual value function $\phi_c^+(t)$, our derivation here has to employ the ironing trick for $\phi_c$ as well. We will first derive two equivalent representations of the revenue of any feasible mechanism, and then interpolate between them, which gives rise to the mixed virtual value. Finally, we use the Myersonian approach to argue that the mechanism $(\pi^*,p^*)$ defined in Proposition \ref{lem:case_3} maximizes all terms in the revenue function simultaneously. \begin{proof}[Proof of Proposition \ref{lem:case_3}] Let $(\pi, p)$ be any feasible mechanism. We can write the revenue of the seller as: \begin{gather*} REV (\pi, p)=\int_{t_1}^{t_2} f(t)p(t)\,\mathrm{d} t=\int_{t_1}^{t_2} f(t)\left[\int_{q \in Q} g(q) \pi(q,t)v(q,t) \,\mathrm{d} q -u(t) \right]\,\mathrm{d} t.
\end{gather*} Applying Equations \eqref{def-u(t)} and \eqref{eq:buyer-utility-identify2}, we get \begin{align*} REV (\pi, p)=&\int_{t_1}^{t_2} f(t)\left[\int_{q \in Q} g(q)\pi(q,t)v(q,t)\,\mathrm{d} q -\int_{t_1}^{t} P_{\pi}(x)\, \mathrm{d} x -u(t_1) \right]\,\mathrm{d} t\\ =&\int_{t_1}^{t_2} f(t)\left[\int_{q \in Q} g(q) \pi(q,t)v(q,t)\,\mathrm{d} q \right]\,\mathrm{d} t-\int_{t_1}^{t_2} \int_{t_1}^{t}f(t) P_{\pi}(x)\,\mathrm{d} x\mathrm{d} t -u(t_1)\\ =&\int_{t_1}^{t_2} f(t)\left[\int_{q \in Q} g(q) \pi(q,t)v(q,t)\,\mathrm{d} q \right]\,\mathrm{d} t-\int_{t_1}^{t_2} \int_{x}^{t_2}f(t) P_{\pi}(x)\,\mathrm{d} t\mathrm{d} x -u(t_1)\\ =&\int_{t_1}^{t_2} f(t)\left[\int_{q \in Q} g(q) \pi(q,t)v(q,t)\,\mathrm{d} q \right]\,\mathrm{d} t-\int_{t_1}^{t_2} [1-F(x)]P_{\pi}(x)\,\mathrm{d} x-u(t_1), \end{align*} where the third equality comes from switching the order of integration. Thus \begin{align} &REV (\pi, p)\nonumber \\ =&\int_{q \in Q} g(q)\left[\int_{t_1}^{t_2} f(t) \pi(q,t)v(q,t)\,\mathrm{d} t \right]\,\mathrm{d} q \nonumber\\ &-\int_{t_1}^{t_2} [1-F(t)]\int_{q \in Q} g(q) \pi(q,t)\alpha(q) \,\mathrm{d} q\mathrm{d} t-u(t_1) \nonumber \\ =&\int_{q \in Q} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t) \left(v(q,t)-\alpha(q)\frac{1-F(t)}{f(t)}\right)\,\mathrm{d} t \right]\,\mathrm{d} q -u(t_1) \nonumber \\ =&\int_{q \in Q} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t) \alpha(q) \left[\underline{\phi}(t) + \beta(q)\right]\,\mathrm{d} t \right]\, \mathrm{d} q -u(t_1). \label{eq:revenue-1} \end{align} The derived revenue function above uses $u(t_1)$ as the ``reference'' point. Similarly, using a variant of Equation \eqref{eq:buyer-utility-identify2}, namely $u(t) = u(t_2) - \int_{t}^{t_2} P_{\pi}(x)\,\mathrm{d} x $, we can derive an alternative form of the revenue with $u(t_2)$ as the reference point: \begin{gather} REV(\pi, p)=\int_{q \in Q} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t)\alpha(q) \left[ \bar{\phi}(t) + \beta(q)\right]\,\mathrm{d} t \right]\,\mathrm{d} q -u(t_2). \label{eq:revenue-2} \end{gather} Note that Equations \eqref{eq:revenue-1} and \eqref{eq:revenue-2} are just different representations of the (same) revenue of any feasible mechanism $(\pi,p)$. Thus any convex combination of them also represents the same revenue. Using the constant $c$ given in Proposition \ref{lem:case_3} as the convex coefficient, we have \begin{align*} REV(\pi, p)=&c \left[\int_{q \in Q} g(q)\int_{t_1}^{t_2} f(t)\pi(q,t) \alpha(q) \left[\underline{\phi}(t) + \beta(q)\right]\,\mathrm{d} t \, \mathrm{d} q -u(t_1) \right] \\ &+ (1-c) \left[\int_{q \in Q} g(q) \int_{t_1}^{t_2} f(t)\pi(q,t)\alpha(q) \left[ \bar{\phi}(t) + \beta(q)\right]\,\mathrm{d}t \mathrm{d}q -u(t_2) \right] \\ = & \int_{t_1}^{t_2}\int_{q\in Q} \left[\phi_c(t) + \beta(q)\right] \pi(q,t) f(t) g(q) \alpha(q)\,\mathrm{d} q\mathrm{d} t - c u(t_1) - (1-c)u(t_2). \end{align*} Next we employ the ironing trick. Define $h_c(z)=\phi_c(F^{-1}(z)), \forall z\in [0,1]$, where $F^{-1}(z)$ is the inverse of the CDF $F(t)$; let $ H_c(z)=\int_{0}^{z}h_c(r)\,\mathrm{d} r$, let $ L_c(z)$ be the convex hull of $H_c(z)$, and let $l_c(z)=L_c'(z)$. We then have $h_c(F(t))=\phi_c(t)$ and, after ironing, $l_c(F(t))=\phi_c^+(t)$.
So the first term on the right-hand side of the above equation can be written as \begin{align*} &\int_{t_1}^{t_2}\int_{q\in Q} \left[\phi_c(t) + \beta(q)\right] \pi(q,t) f(t) g(q) \alpha(q)\,\mathrm{d} q\mathrm{d} t\\ =&\int_{t_1}^{t_2}\int_{q\in Q} \left[\phi_c^+(t) + \beta(q)\right] \pi(q,t) f(t) g(q) \alpha(q)\,\mathrm{d} q\mathrm{d} t \\ &+\int_{t_1}^{t_2}\int_{q\in Q} \left[h_c(F(t)) - l_c(F(t))\right] \pi(q,t) f(t) g(q) \alpha(q)\,\mathrm{d} q\mathrm{d} t. \end{align*} Using integration by parts, we can simplify the second term as follows: \begin{align*} & \int_{t_1}^{t_2}\int_{q\in Q} \left[h_c(F(t)) - l_c(F(t))\right] \pi(q,t) f(t) g(q) \alpha(q)\,\mathrm{d} q\mathrm{d} t \\ =& \int_{t_1}^{t_2} \left[h_c(F(t))- l_c(F(t))\right] P_{\pi}(t) \,\mathrm{d} F(t) \\ =& \left.\left[H_c(F(t))- L_c(F(t))\right] P_{\pi}(t) \right|_{t_1}^{t_2} - \int_{t_1}^{t_2} \left[H_c(F(t))- L_c(F(t))\right] \,\mathrm{d} P_{\pi}(t). \end{align*} Because $L_c$ is the convex hull of $H_c$, we have $L_c(0) = H_c(0)$ and $L_c(1) = H_c(1)$, so the first term above is simply $0$. Therefore, we have \begin{align} REV(\pi, p) = &\int_{t_1}^{t_2}\int_{q\in Q} \left[\phi_c^+(t) + \beta(q)\right] \pi(q,t) f(t) g(q) \alpha(q)\,\mathrm{d} q\mathrm{d} t \nonumber\\ & - \int_{t_1}^{t_2} \left[H_c(F(t))- L_c(F(t))\right] \,\mathrm{d} P_{\pi}(t)- c u(t_1) - (1-c)u(t_2).\label{eq:new-rev-3} \end{align} We argue that our feasible mechanism $({\pi}^*, {p}^*)$ simultaneously maximizes all the terms in Equation \eqref{eq:new-rev-3}. Firstly, since ${\pi}^*(q, t)=1$ if and only if $\phi_c^+(t) + \beta(q)\ge 0$, $({\pi}^*, {p}^*)$ maximizes the first term. Secondly, $({\pi}^*, {p}^*)$ satisfies $u(t_1)=0$ and $u(t_2)=v(t_2)$, as shown in Lemma \ref{lem:case_3_feasible}. Since $u(t_1)\ge 0$ and $u(t_2)\ge v(t_2)$ hold for any feasible mechanism, as shown in Lemma \ref{lem:feasible-M}, $({\pi}^*, {p}^*)$ also maximizes the last two terms. Thirdly, for the second term, note that $H_c(F(t))- L_c(F(t))\ge 0$ by definition, and $\mathrm{d} P_{\pi}(t)\ge0$ for any feasible mechanism; hence the integral $\int_{t_1}^{t_2} \left[H_c(F(t))- L_c(F(t))\right] \,\mathrm{d} P_{\pi}(t)$ is always non-negative, and the second term is at most $0$. We claim that with the mechanism $({\pi}^*, {p}^*)$, this integral is actually $0$, i.e., the second term attains its maximum possible value. Clearly, the only interesting case is when $H_c(F(t))- L_c(F(t))> 0$. In this case $t$ must lie in an ironed interval $I$, on which the convex hull $L_c(z)$ of $H_c(z)$ is linear. This implies that $l_c(z) = \phi_c^+(t)$ (where $z=F(t)$) is constant, and thus $ P_{{\pi}^*}(t)=\int_{ q :\beta(q)\ge -\phi_c^+(t)}g(q)\alpha(q)\,\mathrm{d} q $ is also constant on the interval $I$, leading to $\mathrm{d} P_{{\pi}^*}(t) = 0$. To summarize, the mechanism $({\pi}^*, {p}^*)$ maximizes all four terms in Equation \eqref{eq:new-rev-3} simultaneously, and is thus an optimal feasible mechanism. \end{proof} \section{Generalizations} \subsection{Generalized Utility Function} \label{generalized-utility} So far we have derived the optimal mechanism and its properties for value functions that are linear and monotone non-decreasing in $t$, i.e., $ v(q,t)=\alpha(q)(t+\beta(q))$ for some $\alpha(q)\geq 0$. In this section we discuss how our analysis can be generalized to any value function that satisfies the following two assumptions: \begin{assumption}[\textbf{Convexity and Monotonicity}] For any $q$, $v(q,t)$ is convex and monotone non-decreasing in $t$.
\label{assum1:mono} \end{assumption} \begin{assumption}[\textbf{Monotone Virtual Values}] For any $q\in Q $ and $ c\in [0,1]$, $\phi_c(t) = \frac{v(q,t)}{v'_t(q,t)} -\frac{c-F(t)}{f(t)} $ is non-decreasing in $t$, where $v'_t(q,t)=\frac{\partial v(q,t)}{\partial t}$. \label{assum2:mono-ratio} \end{assumption} Our proof techniques can be applied in almost the same way under the above assumptions, and the threshold structure of the optimal mechanism remains similar. Assumption \ref{assum1:mono} is the primary assumption; it retains the monotonicity assumed before, but relaxes the linear-value assumption to the much weaker requirement of convex values. Convexity is needed to preserve the equivalence between the monotonicity of $P_{\pi}(t)$ and the IC constraints as in Lemma \ref{lem:feasible-M}, whereas the monotonicity of $v(q,t)$ in $t$ guarantees that the buyer's surplus first increases and then decreases, and thus the participation constraint binds only at type $t_1$ or $t_2$. Assumption \ref{assum2:mono-ratio} is a technical assumption which is only needed to avoid the ironing procedure, so that the point-wise maximizing threshold mechanism still satisfies the monotonicity of $P_{\pi}(t)$ required of any feasible mechanism. We remark that under the widely adopted log-concavity assumption on the type distribution $F(t)$, the ratio $\frac{c-F(t)}{f(t)} $ is non-increasing in $t$ for any $ c\in [0,1]$ \citep{Prkopa1971LogarithmicCM}. Therefore, to satisfy Assumption \ref{assum2:mono-ratio}, we only need the additional assumption that the ratio $\frac{v(q,t)}{v'_t(q,t)}$ is non-decreasing in $t$ for any $q$. We make a few remarks about the generalized analysis. First, the threshold of the optimal mechanism will now depend on a natural generalization of the previous virtual value functions: $\frac{v(q,t)}{ v'_t(q,t)} -\frac{1-F(t)}{f(t)}$ or $\frac{v(q,t)}{v'_t(q,t)} + \frac{F(t)}{f(t)}$ or their mixture. Second, it turns out that, with a general value function, the four constraints listed in Lemma \ref{lem:feasible-M} are only necessary conditions but are \emph{no longer sufficient} for feasible mechanisms. However, they become sufficient once augmented with an additional requirement, namely that the experiments are monotone in $t$ in the sense that $\pi(q,t)$ is non-decreasing in $t$ for every $q$. To resolve this issue, we relax the design space by considering all $(\pi, p)$ that satisfy the four necessary constraints of Lemma \ref{lem:feasible-M}. This guarantees that every feasible mechanism is under our consideration, since these constraints are necessary for feasibility; however, we run the risk of arriving at an infeasible mechanism. Notably, the optimal solution of this relaxed optimization problem under Assumptions \ref{assum1:mono} and \ref{assum2:mono-ratio} is a threshold mechanism which satisfies the monotone-experiment requirement, i.e., $\pi^*(q,t)$ is non-decreasing in $t$ for every $q$. This closes the gap between necessity and sufficiency, and shows that the mechanism we obtain is indeed a feasible mechanism. Since the detailed derivation for general value functions is almost exactly the same as for linear value functions, up to the above two major differences, we omit it in this paper. \subsection{Correlated State and Buyer Type } \label{section-dependence} Finally, we discuss how our results could be \emph{partially} generalized to the setting with correlated state $q$ and buyer type $t$.
This setting turns out to require much more careful treatment. First, within the general class of sequential mechanisms we consider in this work, \citet{Babaioff12} show a result similar to that of \citet{cremer1988full} for auction design: the optimal mechanism can extract full surplus. However, the full-surplus-extracting optimal mechanism has to use negative payments in order to guarantee that the payment from each buyer type is properly enforced even after the buyer sees the realized experiment outcome.\footnote{Specifically, since the buyer is free to leave the mechanism after seeing an experiment outcome, the full-surplus-extracting mechanism has to ask for a large upfront deposit at the beginning and then return the leftover of the deposit after deducting the buyer's payment for the realized experiment outcome.} Notably, this is in contrast to the independent case, for which our Lemma \ref{lem:positive-pay} shows that the optimal mechanism can always, without loss of generality, use non-negative payments. Second, when negative payments are explicitly forbidden under correlated state and type, prior works give examples showing that multiple rounds of information revelation can lead to strictly better revenue than any mechanism with a single round of information revelation, regardless of whether the experiment outcomes can be contracted upon \citep{Babaioff12} or not \citep{bergemann2018design}. However, the design of optimal sequential mechanisms turns out to be quite challenging and, to our knowledge, is unknown in general. \cite{bergemann2018design} restrict their analysis to the design space of one-round mechanisms. Towards this end, our Theorem \ref{thm:opt-scheme} can be generalized to a characterization of the optimal mechanism for correlated $q,t$, but only within the space of \emph{one-round mechanisms} with non-negative payments. Specifically, with correlated $q,t$, we will instead need to impose Assumptions \ref{assum1:mono} and \ref{assum2:mono-ratio} on the function $v(q,t)\mu(q,t)$, where $\mu(q,t)$ is the joint density of $q,t$, since these two always appear together in all derivations. All our derivations for Theorem \ref{thm:opt-scheme} can then be generalized in a straightforward way and are thus omitted here. \section{The Optimal Mechanism} Here we consider the case where both $t\in T = [t_1, t_2]$ and $q\in Q = [q_1, q_2]$ are continuous. Assume $q\sim G(q)$ and $t\sim F(t)$ are independent. As a useful notation, let $v(t) = \mathbf{E}_{q \sim G} v(q,t) = \int_{q \in Q} g(q) v(q,t) dq $ denote the expected utility of buyer type $t$ for taking action $1$ without any additional information. One special type of scheme of interest to us is the threshold testing scheme. \begin{definition}[Threshold Testing] An information selling scheme $\{ \pi_t \}_{t\in \mathcal{T}}$ is a \emph{threshold testing} scheme if for any buyer type $t$, there exists a threshold $\theta_t$ such that: (1) $\pi_t(q,1) = g(q)$ for any $q > \theta_t$; (2) $\pi_t(q,1) = 0$ (equivalently, $\pi_t(q,0) = g(q)$) for any $q < \theta_t$. A threshold testing scheme is \emph{monotone} if $\theta_t \geq \theta_{t'}$ whenever $t \geq t'$. \end{definition} That is, a threshold test always recommends action 1 (purchase) whenever the quality passes the threshold $\theta_t$, and recommends action 0 (not purchasing) if the quality does not pass. Note that if the quality $q = \theta_t$, the seller is allowed to recommend action 0 or 1 randomly. The threshold testing scheme is monotone if a higher type always has a higher threshold.
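As a quick illustration of this definition (our own; the helper name and the toy thresholds are hypothetical), a threshold test simply splits the mass $g(q)$ of each quality level between the two recommendations:
\begin{verbatim}
# Minimal sketch (ours) of a threshold testing scheme pi_t: each quality q
# carries mass g(q), split between signal 1 (purchase) and signal 0.
def make_threshold_test(theta_t):
    def pi_t(q, g_q):
        if q > theta_t:                      # quality passes: recommend purchase
            return {1: g_q, 0: 0.0}
        if q < theta_t:                      # quality fails: recommend not to buy
            return {0: g_q, 1: 0.0}
        return {1: 0.5 * g_q, 0: 0.5 * g_q}  # at the threshold: may randomize
    return pi_t

# A monotone family: higher types face (weakly) higher thresholds.
tests = {t: make_threshold_test(theta_t=0.1 * t) for t in (1, 2, 3)}
\end{verbatim}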
Note that such threshold tests are ubiquitous in reality, e.g., used car inspection, car smoke check, house inspection, medical tests, production inspection, etc. Our first main conjecture is as follows. \begin{conjecture} There always exists an optimal information selling mechanism that is a monotone threshold test. \end{conjecture} \subsection{Problem Formulation} An information selling scheme consists of a signaling scheme $\pi: Q \times T \to [0,1] $, where $\pi(q, t) \in [0,1]$ is the probability of recommending action $1$ at quality $q$ conditional on buyer type $t$, and a payment function $p(t)$. The information seller's objective is thus to maximize the following revenue objective: \begin{equation}\label{cons:obj} \text{ Seller's Objective: } \quad \max_{\pi, p} \int_{t\in T} f(t) p(t) dt \end{equation} The mechanism $(\pi, p)$ needs to satisfy the following sets of conditions. The first set of constraints is the obedience constraint for the signaling scheme. That is, when action $1$ is recommended to a buyer type $t$, the expected conditional buyer utility of taking action $1$ is $ \frac{ \mathbf{E}_{q\sim G}\left[ \pi(q,t)v(q,t) \right] }{ \mathbf{E}_{q\sim G} [\pi(q,t)] }$, which should indeed be at least $0$, i.e., the utility of action $0$. This constraint can be mathematically expressed as $\int_{q \in Q} \pi(q, t) \cdot g(q) v(q,t) dq \geq 0$. Conversely, when action $0$ is recommended to a buyer type $t$, the expected conditional buyer utility of taking, again, action $1$ is $ \frac{ \mathbf{E}_{q\sim G}\left[ (1 - \pi(q,t)) \cdot v(q,t) \right] }{ \mathbf{E}_{q\sim G} [1 - \pi(q,t)] }$, which should be at most $0$. This constraint can be mathematically expressed as $\int_{q \in Q} [1- \pi(q, t)] \cdot g(q) v(q,t) dq \leq 0$, or equivalently $\int_{q \in Q} \pi(q, t) \cdot g(q) v(q,t) dq \geq \int_{q \in Q} g(q) v(q,t) dq = v(t) $. As a consequence, the two obedience constraints can be summarized as the following single constraint: \begin{eqnarray}\label{cons:obedience} \text{Obedience Constraint: } \quad & & \int_{q \in Q} \pi(q, t) \cdot g(q) v(q,t) dq \geq \max \{ 0, v(t) \}, \, \, \, \forall t \in [T] \end{eqnarray} The second set of constraints is the incentive compatibility (IC) constraint. That is, each buyer type $t$ has no incentive to misreport any other type $t'$. Note that, given any obedient signaling scheme, the expected utility of buyer type $t$ from the scheme, under truthful reporting and before payment, is $\int_{q \in Q} \pi(q, t) \cdot g(q) v(q,t) \, dq$. Therefore, the IC constraint can be expressed as follows: \begin{equation}\label{cons:IC} \text{IC Constraint: } \quad \int_{q \in Q} \pi(q, t) \cdot g(q) v(q,t) dq - p(t) \geq \int_{q \in Q} \pi(q, t') \cdot g(q) v(q,t) dq - p(t'), \, \, \, \forall t, t' \in [T] \end{equation} Our last set of constraints is the individual rationality (IR) constraint, expressed as follows. \begin{equation}\label{cons:IR} \text{IR Constraint: } \quad \int_{q \in Q} \pi(q, t) \cdot g(q) v(q,t) dq - p(t) \geq \max \{ 0, v(t) \}, \, \, \, \forall t \in [T] \end{equation} Therefore, the seller's optimization problem is to find a feasible mechanism $(\pi, p)$ maximizing Objective \eqref{cons:obj} subject to Constraints \eqref{cons:obedience}, \eqref{cons:IC} and \eqref{cons:IR}.
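Since Objective \eqref{cons:obj} and Constraints \eqref{cons:obedience}, \eqref{cons:IC} and \eqref{cons:IR} are all linear in $(\pi, p)$, small discretized instances can be solved directly with an off-the-shelf LP solver, which is convenient for sanity-checking the conjecture on examples. The sketch below is our own illustration; the grid sizes and the example value function $v(q,t)=qt-2$ are assumptions, not part of the formulation above.
\begin{verbatim}
# Sketch (ours): discretize the seller's program and solve it as an LP.
import numpy as np
from scipy.optimize import linprog

nt, nq = 12, 12
t = np.linspace(2.0, 3.0, nt); f = np.full(nt, 1.0 / nt)   # t ~ U[2,3]
q = np.linspace(0.0, 1.0, nq); g = np.full(nq, 1.0 / nq)   # q ~ U[0,1]
v = np.outer(q, t) - 2.0                                   # v[iq,it] = q*t - 2
vbar = g @ v                                               # v(t) = E_q[v(q,t)]

npi = nq * nt                          # variables: pi (flattened), then p
def idx(iq, it): return iq * nt + it

A, b = [], []
for it in range(nt):
    row = np.zeros(npi + nt)
    for iq in range(nq):
        row[idx(iq, it)] = -g[iq] * v[iq, it]
    A.append(row); b.append(-max(vbar[it], 0.0))           # obedience
    ir = row.copy(); ir[npi + it] = 1.0
    A.append(ir); b.append(-max(vbar[it], 0.0))            # IR
for it in range(nt):                                       # IC: t vs. t'
    for jt in range(nt):
        if it == jt: continue
        row = np.zeros(npi + nt)
        for iq in range(nq):
            row[idx(iq, it)] -= g[iq] * v[iq, it]
            row[idx(iq, jt)] += g[iq] * v[iq, it]
        row[npi + it] += 1.0; row[npi + jt] -= 1.0
        A.append(row); b.append(0.0)

c_obj = np.zeros(npi + nt); c_obj[npi:] = -f               # max sum f(t)p(t)
bounds = [(0.0, 1.0)] * npi + [(None, None)] * nt          # p may be negative
res = linprog(c_obj, A_ub=np.array(A), b_ub=np.array(b),
              bounds=bounds, method="highs")
print("discretized optimal revenue:", -res.fun)
\end{verbatim}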
Note that the payment function $p(t)$ is allowed to be negative for now (\todo{cite previous papers}), though as we shall see later, at optimality it can always be taken to be non-negative. The key difference between selling information and classic single-item auction design is that here the buyer's utility is an expectation over the item quality, whereas the buyer's utility in a single-item auction is an expectation over the other bidders' values. Moreover, we have an additional constraint, i.e., the obedience constraint. \todo{add discussions here}. \subsection{Feasible Mechanisms} We now characterize the set of \emph{feasible mechanisms} that satisfy Constraints \eqref{cons:obedience}, \eqref{cons:IC} and \eqref{cons:IR}. Observe that the obedience constraint and the IR constraint differ only by the payment term. However, neither of them implies the other, because it is not clear whether the payment function is always non-negative. \todo{explain why this is not obviously true} Our first step is to get rid of the obedience constraints by proving that there always exists an optimal mechanism that has non-negative payment for every type $t$. \begin{lemma}\label{lem:positive-pay} There exists an optimal IC, IR and obedient mechanism in which $p(t) \geq 0$ for all $t \in T$. \end{lemma} \begin{proof} Let $(\bar{\pi}, \bar{p})$ be any IC, IR and obedient optimal mechanism. We construct a different mechanism $(\pi^*, p^*)$ which satisfies the same constraints and remains optimal, but with $p^*(t) \geq 0$ for every $t$. For convenience, we divide the buyer types into two sets: $T^+ = \{ t\in [T]: \bar{p}(t) \geq 0 \}$ is the set of types who have non-negative payments in mechanism $(\bar{\pi}, \bar{p})$, and $T^- = T \setminus T^+$ is the set of types who have negative payments. The mechanism $(\pi^*, p^*)$ is constructed from $(\bar{\pi}, \bar{p})$ as follows: \begin{enumerate} \item The mechanism for any $t \in T^+$ remains the same: for any $t\in T^+$, let $p^*(t) = \bar{p}(t)$ and $\pi^*(q,t) = \bar{\pi}(q, t)$ for all $q \in Q$; \item The mechanism for any $t \in T^-$ becomes no information and no payment: for any $t \in T^-$, let $p^*(t) = 0$, and let $\pi^*(q,t) = 1$ for all $q \in Q$ if $v(t) \geq 0$ and $\pi^*(q,t) = 0$ for all $q \in Q$ if $v(t) < 0$. \end{enumerate} So far, the constructed mechanism $(\pi^*, p^*)$ has three useful properties: (1) it yields revenue at least that of $(\bar{\pi}, \bar{p})$ by construction; (2) all the buyer types' payments are now non-negative; (3) the individual rationality constraint is satisfied for every buyer type. The third property follows from the construction: the utility of any buyer type $t \in T^+$ did not change, and any type $t \in T^-$ now pays $0$ and receives no information (i.e., is always recommended action 1 or 0, depending on whether $v(t) \geq 0$ or not), so the IR constraint is always satisfied. However, the major issue with the constructed mechanism $(\pi^*, p^*)$ above is that it may not be incentive compatible, i.e., buyer type $t$ may want to misreport $t'$. We first argue that the IC constraint for any $t \in T^+$ is already satisfied. First of all, any type $t \in T^+$ would not have incentive to deviate to another type $t' \in T^+$, due to the original IC constraint of $(\bar{\pi}, \bar{p})$ and the fact that the mechanism for types in $T^+$ remains the same. We claim that any type $t \in T^+$ would not have incentive to deviate to a type $t'$ in $T^-$ either.
This is because a type reporting $t' \in T^-$ now receives less information (in fact, none) and pays more (since $p^*(t') = 0 > \bar{p}(t')$). Therefore, if in mechanism $(\bar{\pi}, \bar{p})$ the buyer type $t$ has no incentive to deviate to $t'$, this remains true for $(\pi^*, p^*)$. However, a buyer type $t \in T^-$ may indeed have incentive to deviate to some type $t' \in T^+$, since they want to receive beneficial information under some amount of payment. Here comes the last step of the construction --- adjusting the above $(\pi^*, p^*)$ to make every type $t \in T^-$ also satisfy IC, without decreasing the revenue or violating the IR and obedience constraints. To do so, for any $t \in T^-$, let $t' \in T^+$ be the most profitable deviation of type $t$, i.e., the deviation that maximizes type $t$'s utility. We adjust $(\pi^*, p^*)$ simply by adopting the scheme of type $t'$ for type $t$ --- i.e., resetting $\pi^*(\cdot, t) = \bar{\pi}(\cdot, t')$ and $p^*(t) = \bar{p}(t')$. After this adjustment, the IC constraints for all types $t \in T^-$ are satisfied by construction, because each of these types indeed receives its most profitable menu entry. Meanwhile, this also maintains the IC constraints for all types $t \in T^+$, since the adjustment does not add new menu entries. Note that the IR constraints remain satisfied, since no type's utility decreases in this adjustment. The revenue does not decrease either, since no type's payment decreases in this adjustment. The only non-obvious part to verify is the obedience constraint. Indeed, the obedience constraints may be violated for a type $t \in T^-$ during this adjustment, since the recommended optimal action for type $t' \in T^+$ might not be optimal for $t$. To achieve obedience, we simply ``rename'' the recommended action for $t$ to indeed be his optimal action. This restores the obedience constraint for $t$. Note that this will either not change the revealed information or lead to less revealed information (when both signals of $\bar{\pi}(\cdot, t')$ induce the same optimal action for type $t$), and thus will not hurt the IC constraints. \end{proof} As a consequence of Lemma \ref{lem:positive-pay}, we will henceforth restrict our design space to mechanisms which use only non-negative payments. For these mechanisms, we can safely omit the obedience constraint \eqref{cons:obedience}, since it is implied by the IR constraint \eqref{cons:IR} once $p(t) \geq 0$. We start by analyzing the IC constraints. First, Constraint \eqref{cons:IC} can be re-arranged as follows: \begin{equation*} \int_{q \in Q} [\pi(q, t) - \pi(q, t')] \cdot g(q) v(q,t) dq\geq p(t) - p(t'), \end{equation*} Therefore, the IC constraint implies the following two inequalities for any two types $t, t'$: \begin{gather} \int_{q \in Q} [\pi(q, t) - \pi(q, t')] \cdot g(q) v(q,t) dq \geq p(t) - p(t'), \label{eq:ic1}\\ \int_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) v(q,t') dq \geq p(t') - p(t).\label{eq:ic2} \end{gather} Combining Inequalities \eqref{eq:ic1} and \eqref{eq:ic2}, we obtain the following constraint for any pair of types $t, t'$: \begin{gather*} \int_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) v(q,t) dq \leq p(t') - p(t) \leq \int_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) v(q,t') dq . \end{gather*} \hf{I think this approach only works when $v(q,t)$ is a linear function of $t$, i.e., $v(q,t) = v_1(q)t + v_0(q)$. In the following derivations, I assumed $v(q,t) = v_1(q)t + v_0(q)$. } Therefore, the right-hand side of the above inequality must be at least its left-hand side.
This implies the following \emph{necessary condition} for any IC information-selling mechanism $(\pi, p)$: for any $t, t' \in T$, we have
\begin{eqnarray*}
0 &\leq& \int_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) [v(q,t') - v(q, t)]dq \\
& = & \int_{q \in Q} [\pi(q, t') - \pi(q, t)] \cdot g(q) v_1(q)[ t' - t]dq .
\end{eqnarray*}
Define
$$ Q(t) = \int_{q \in Q} \pi(q, t) \cdot g(q) v_1(q)dq. $$
A simple case analysis for $t' > t$ and $t' < t$ shows that the above inequality is equivalent to $Q(t)$ being monotone non-decreasing in $t$. Note that $Q(t)$ can be interpreted as the expected \emph{weighted} probability of being recommended action $1$, where the weight $v_1(q)$ captures how strongly the buyer's type $t$ enters his value function. We thus term this condition \emph{signaling monotonicity}. It is reminiscent of Myerson's allocation monotonicity condition in auction design, but is not identical to it. \todo{cite and add discussions here}

We now derive a relation between the signaling scheme $\pi$ and the payment rule $p$ for any IC mechanism. We start by deriving the buyer's utility. Note that any buyer of type $t$ derives non-zero utility only from the recommendation of action $1$, since a recommendation of action $0$ leads to buyer utility $0$. Therefore, a buyer of type $t$ has the following utility:
\begin{gather*}
\text{Utility of Buyer Type }t: \quad u(t)= \int_{q \in Q} \left[ g(q) \pi(q,t)v(q,t) \right] dq -p(t).
\end{gather*}
Re-arranging Inequality \eqref{eq:ic1}, we have
\begin{eqnarray*}
u(t) &=& \int_{q \in Q} \left[ g(q) \pi(q,t)v(q,t) \right] dq -p(t) \\
&\geq& \int_{q \in Q} \left[ g(q) \pi(q,t')v(q,t) \right] dq -p(t') \qquad \mbox{by Ineq.\ \eqref{eq:ic1}} \\
&=& \int_{q \in Q} \left[ g(q) \pi(q,t')v(q,t) \right] dq + u(t') - \int_{q \in Q} \left[ g(q) \pi(q,t')v(q,t') \right] dq \qquad \mbox{by def.\ of $u(t')$}\\
&=& \int_{q \in Q} \left[ g(q) \pi(q,t')[ v(q,t) - v(q, t')] \right] dq + u(t') \qquad \mbox{algebraic manipulation}\\
&=& (t-t') Q(t') + u(t') \qquad \mbox{by def.\ of $Q(t)$ and $v(q,t)$}
\end{eqnarray*}
As a consequence, Inequality \eqref{eq:ic1} implies $u(t) -u(t') \geq (t-t')Q(t') $. Together with a similar derivation from Inequality \eqref{eq:ic2}, we thus have the following inequality
\begin{gather*}
(t-t') Q(t') \le u(t)-u(t')\le (t-t') Q(t).
\end{gather*}
Note that the above inequality holds for any $t, t'$. Therefore, by letting $t' \to t$ from the left and dividing all terms by $t - t'$, we obtain the following identity for $u(t)$: \todo{we made differentiability assumption of $u(t)$}
\begin{gather}\label{eq:buyer-utility-identify1}
\odv{u(t)}{t}= Q(t)
\end{gather}
Note that, when $t' \to t$ from the right, we obtain the same identity as \eqref{eq:buyer-utility-identify1}.

Note that both the signaling monotonicity and the payment identity are necessary consequences of the incentive compatibility constraints, more precisely, of Constraints \eqref{eq:ic1} and \eqref{eq:ic2}. Next, we show that these two conditions are also sufficient.

\begin{lemma}\label{lem:feasible-M}
A mechanism $(\pi, p)$ is feasible if and only if it satisfies the following constraints:
\begin{eqnarray}\label{eq:signal-monotonicity}
&& Q(t) \text{ is monotone non-decreasing in } t \\\label{eq:buyer-utility-identify2}
&& u(t) = u(t_1) + \int_{t_1}^{t} Q(x) dx, \, \, \, \forall t \in T \\\label{eq:ir-t2}
& & u(t_2) \geq v(t_2), \, \, \, u(t_1) \geq 0 \\ \label{eq:non-negativity}
& & p(t) \geq 0, \, \, \forall t \in T.
\end{eqnarray}
\end{lemma}

\begin{proof}
The necessity of these constraints comes from the derivation above and from the IR constraint for type $t_2$. We now prove that they are sufficient, i.e., that Constraints \eqref{eq:signal-monotonicity}--\eqref{eq:non-negativity} imply the obedience, IC and IR constraints \eqref{cons:obedience}, \eqref{cons:IC} and \eqref{cons:IR}.

The IC constraint \eqref{cons:IC} is equivalent to
\begin{equation*}
u(t) \geq u(t') + \int_{q_1}^{q_2} \pi(q, t') \cdot g(q) [v(q,t) - v(q,t')] dq = u(t') + (t-t') Q(t').
\end{equation*}
Therefore, Constraints \eqref{eq:signal-monotonicity} and \eqref{eq:buyer-utility-identify2} imply the IC constraint \eqref{cons:IC}, because if $t' < t$ we have
\begin{equation*}
u(t) - u(t') = \int_{t'}^{t} Q(x) dx \geq \int_{t'}^{t} Q(t') dx = (t-t')Q(t').
\end{equation*}
Similarly, when $t' > t$, we also have $ u(t) - u(t') \geq (t-t')Q(t'). $

The IR constraint \eqref{cons:IR} is equivalent to $u(t) \geq 0$ and $u(t) \geq v(t)$. Since $Q(x)\geq 0$, Constraint \eqref{eq:buyer-utility-identify2}, together with $u(t_1) \geq 0$, implies $u(t) \geq 0$ for any $t$. We now leverage $u(t_2) \geq v(t_2)$ to prove that $u(t) \geq v(t)$ also holds, as follows:
\begin{eqnarray*}
u(t) &=& u(t_1) + \int_{t_1}^{t} Q(x) dx \\
& = & u(t_2) - \int_{t}^{t_2} Q(x) dx \\
& \geq & v(t_2) - \int_{t}^{t_2} Q(x) dx \\
& = & \int_{q_1}^{q_2} g(q) [v_1(q)t_2 + v_0(q)]dq - \int_{t}^{t_2} \int_{q_1}^{q_2} \pi(q, x) \cdot g(q) v_1(q)dq dx \\
& \geq & \int_{q_1}^{q_2} g(q) [v_1(q)t_2 + v_0(q)]dq - \int_{t}^{t_2} \int_{q_1}^{q_2} g(q) v_1(q)dq dx \\
& = & \int_{q_1}^{q_2} g(q) [v_1(q)t + v_0(q)]dq = v(t)
\end{eqnarray*}
Finally, the obedience constraint \eqref{cons:obedience} follows from the IR constraint \eqref{cons:IR} and $p(t) \geq 0$.
\end{proof}

\hf{ For general $v(q,t)$, the lemma would be the following, which I do not know how to prove.
\begin{lemma}\label{lem:feasible-M-general}
A mechanism $(\pi, p)$ is feasible if and only if it satisfies the following constraints:
\begin{eqnarray}\label{eq:signal-monotonicity-general}
&& \int_{q_1}^{q_2} g(q) [\pi(q,t) - \pi(q,t')][v(q,t) - v(q,t')] dq \geq 0 \\\label{eq:buyer-utility-identify2-general}
&& u(t) = u(t_1) + \int_{q_1}^{q_2}\int_{t_1}^{t} g(q) \pi(q,x) \frac{\partial v(q,x)}{\partial x} dx \, dq, \, \, \, \forall t \in T \\\label{eq:ir-t2-general}
& & u(t_2) \geq v(t_2) \\ \label{eq:non-negativity-general}
& & p(t) \geq 0, \, u(t) \geq 0, \, \, \, \forall t \in T
\end{eqnarray}
\end{lemma}
\begin{proof}
An equivalent form of the utility identity \eqref{eq:buyer-utility-identify2-general} is the payment identity
\begin{equation}
p'(t) = \int_{q_1}^{q_2} g(q) \frac{\partial \pi(q,t)}{\partial t} v(q,t) dq, \, \, \, \forall t \in T,
\end{equation}
which comes from the definition of $u(t)$:
\begin{equation}
u(t) = \int_{q_1}^{q_2} g(q) \pi(q,t) v(q,t) dq - p(t), \, \, \, \forall t \in T.
\end{equation}
Therefore, the IC constraint is equivalent to
\begin{eqnarray}
&& \int_{t'}^t \int_{q_1}^{q_2} \left[ g(q) \pi(q,s) \frac{\partial v(q,s)}{\partial s} \right] dq ds \geq \int_{q_1}^{q_2} \pi(q, t') \cdot g(q) [v(q,t) - v(q,t')] dq \\
&\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ \int_{t'}^t \left[ \pi(q,s) \frac{\partial v(q,s)}{\partial s} \right] ds - \pi(q, t') [v(q,t) - v(q,t')] \Bigg] dq \geq 0 \\
&\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ \int_{t'}^t \left[ \pi(q,s) \frac{\partial v(q,s)}{\partial s} \right] ds - \int_{t'}^t \pi(q, t') \frac{\partial v(q,s)}{ \partial s} ds \Bigg] dq \geq 0 \\
&\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ \int_{t'}^t \left[ [\pi(q,s) - \pi(q, t')] \frac{\partial v(q,s)}{\partial s} \right] ds
\Bigg] dq \geq 0 \\
&\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ [\pi(q,s) - \pi(q, t')]v(q,s)\Big|_{t'}^t - \int_{t'}^t \left[ \frac{\partial \pi(q,s)}{\partial s} v(q,s) \right] ds \Bigg] dq \geq 0 \\
&\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ [ \pi(q,t) - \pi(q, t')]v(q,t) - \int_{t'}^t \left[ \frac{\partial \pi(q,s)}{\partial s} v(q,s) \right] ds \Bigg] dq \geq 0 \\
&\Longleftrightarrow& \int_{q_1}^{q_2} g(q) \Bigg[ \int_{t'}^t \left[ \frac{\partial \pi(q,s)}{\partial s} [v(q,t) - v(q,s)] \right] ds \Bigg] dq \geq 0
\end{eqnarray}
\end{proof}
}

\subsection{Optimizing Revenue }

Additionally, the definition of $u(t)$ gives us the following identity:
\begin{gather*}
\frac{ \partial u(t) }{\partial t}= \int_{q \in Q} g(q) \left[ \frac{\partial \pi(q,t) }{\partial t} v(q,t) + \pi(q,t) \frac{\partial v(q,t)}{\partial t} \right] dq -p'(t),
\end{gather*}
where $p'(t)$ is the derivative of the payment function $p(t)$. Plugging Identity \eqref{eq:buyer-utility-identify1} into the above equality gives us the relation between the payment function $p$ and the signaling scheme $\pi$:
\begin{equation}
\text{Payment Identity: } \quad p'(t) = \int_{q \in Q} g(q) \frac{\partial \pi(q,t) }{\partial t} v(q,t) dq, \, \, \, \forall t \in T.
\end{equation}
The revenue derivation with $t_2$ as the reference point can be expressed as follows:
\begin{align*}
REV (\pi, p)&=\int_{t_1}^{t_2} f(t)p(t)\,\mathrm{d}t\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \left[ \pi(q,t)v(q,t)\right] \mathrm{d}q -u(t) \right)\,\mathrm{d}t\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q)\pi(q,t)v(q,t) \mathrm{d}q -u(t_2) + \int_{t}^{t_2} Q(x) \mathrm{d}x \right)\,\mathrm{d}t\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t + \int_{t_1}^{t_2} \int_{t}^{t_2}f(t) Q(x)\,\mathrm{d}x\,\mathrm{d}t -u(t_2)\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t + \int_{t_1}^{t_2} \int_{t_1}^{x}f(t) Q(x)\,\mathrm{d}t\,\mathrm{d}x -u(t_2)\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t + \int_{t_1}^{t_2} F(x)Q(x)\,\mathrm{d}x-u(t_2)\\
&=\int_{q_1}^{q_2} g(q)\left[\int_{t_1}^{t_2} f(t) \pi(q,t)v(q,t) \,\mathrm{d}t \right]\mathrm{d}q + \int_{t_1}^{t_2} F(t)\int_{q_1}^{q_2} g(q)\left[ \pi (q,t)v_1(q) \right] \mathrm{d}q\,\mathrm{d}t-u(t_2)\\
&=\int_{q_1}^{q_2} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t) \left(v(q,t)+ v_1(q)\frac{F(t)}{f(t)}\right)\,\mathrm{d}t \right] \mathrm{d}q -u(t_2)\\
&=\int_{q_1}^{q_2} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t) [v_1(q) \varphi(t) + v_0(q)]\,\mathrm{d}t \right] \mathrm{d}q -u(t_2)
\end{align*}
Next is the revenue derivation with $\bar{t}$ as the reference point, where $\bar{t}$ satisfies $v(\bar{t}) = \int_{q_1}^{q_2} v(q, \bar{t}) g(q) \mathrm{d}q = 0$.
\begin{align*}
REV (\pi, p)&=\int_{t_1}^{t_2} f(t)p(t)\,\mathrm{d}t\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \left[ \pi(q,t)v(q,t)\right] \mathrm{d}q -u(t) \right)\,\mathrm{d}t\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q)\pi(q,t)v(q,t) \mathrm{d}q -\int_{\bar{t}}^{t} Q(x) \mathrm{d}x -u(\bar{t}) \right)\,\mathrm{d}t\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t-\int_{t_1}^{t_2} \int_{\bar{t}}^{t}f(t) Q(x)\,\mathrm{d}x\,\mathrm{d}t -u(\bar{t})\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t+ \int_{t_1}^{\bar{t}} F(x)Q(x)\,\mathrm{d}x - \int_{\bar{t}}^{t_2} (1-F(x))Q(x)\,\mathrm{d}x-u(\bar{t})
\end{align*}
{\color{red}
\begin{align*}
\int_{\bar{t}}^{t_2} f(t)p(t)\,\mathrm{d}t&=\int_{\bar{t}}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \left[ \pi(q,t)v(q,t)\right] \mathrm{d}q -u(t) \right)\,\mathrm{d}t\\
&=\int_{\bar{t}}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q)\pi(q,t)v(q,t) \mathrm{d}q -\int_{\bar{t}}^{t} Q(x) \mathrm{d}x -u(\bar{t}) \right)\,\mathrm{d}t\\
&=\int_{\bar{t}}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t-\int_{\bar{t}}^{t_2} \int_{\bar{t}}^{t}f(t) Q(x)\,\mathrm{d}x\,\mathrm{d}t -u(\bar{t})\\
&=\int_{\bar{t}}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t-\int_{\bar{t}}^{t_2} \int_{x}^{t_2}f(t) Q(x)\,\mathrm{d}t\,\mathrm{d}x -u(\bar{t})\\
&=\int_{\bar{t}}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t-\int_{\bar{t}}^{t_2} (1-F(x))Q(x)\,\mathrm{d}x-u(\bar{t})\\
&=\int_{q_1}^{q_2} g(q)\left[\int_{\bar{t}}^{t_2} f(t) \pi(q,t)v(q,t) \,\mathrm{d}t \right]\mathrm{d}q-\int_{\bar{t}}^{t_2} (1-F(t))\int_{q_1}^{q_2} g(q)\left[ \pi (q,t)v_1(q) \right] \mathrm{d}q\,\mathrm{d}t-u(\bar{t})\\
&=\int_{q_1}^{q_2} g(q)\left[ \int_{\bar{t}}^{t_2} f(t)\pi(q,t) \left(v(q,t)-v_1(q)\frac{1-F(t)}{f(t)}\right)\,\mathrm{d}t \right] \mathrm{d}q -u(\bar{t})\\
&=\int_{q_1}^{q_2} g(q)\left[ \int_{\bar{t}}^{t_2} f(t)\pi(q,t) [v_1(q) \underline{\varphi}(t) + v_0(q)]\,\mathrm{d}t \right] \mathrm{d}q -u(\bar{t})
\end{align*}
}
where $\varphi(t) = t + \frac{F(t)}{f(t)}$ is called the \emph{virtual value} for type $t$ when $t_2$ is the reference point, and $\underline{\varphi}(t) = t - \frac{1-F(t)}{f(t)}$ is the corresponding virtual value when the reference point is $\bar{t}$ or $t_1$ (these two quantities are denoted $\bar{\phi}$ and $\underline{\phi}$, respectively, in the instances below).
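As a sanity check on the algebra above, the following Python sketch (our illustration, not part of the formal analysis; the instance, the threshold schedule, and the value of $u(t_2)$ are arbitrary choices) numerically compares the revenue $\int f(t)p(t)\,\mathrm{d}t$, with $p(t)$ obtained from the utility identity, against the virtual-value form with $t_2$ as the reference point:
\begin{verbatim}
# Numerical check of the revenue identity (illustrative): the payments are
# derived from the envelope/utility identity, so they need not be feasible
# or optimal -- only the algebraic identity is being verified.
import numpy as np

t1, t2, n = 0.0, 2.0, 4001
ts = np.linspace(t1, t2, n); qs = np.linspace(0.0, 2.0, n)
dt, dq = ts[1] - ts[0], qs[1] - qs[0]
f = np.full(n, 0.5); F = 0.5 * ts          # t ~ U[0, 2]
g = np.full(n, 0.5)                        # q ~ U[0, 2]
v1, v0 = qs, -3.0                          # v(q, t) = q t - 3

theta = 2.0 - 0.5 * ts                     # arbitrary decreasing threshold
pi = (qs[None, :] >= theta[:, None]).astype(float)  # pi[j, i] = pi(q_i, t_j)

Q = (pi * (g * v1)[None, :]).sum(1) * dq            # Q(t)
u_t2 = 1.0                                          # arbitrary u(t_2) >= 0
u = u_t2 - np.cumsum(Q[::-1])[::-1] * dt            # u(t)=u(t2)-int_t^{t2} Q
vqt = v1[None, :] * ts[:, None] + v0                # v(q, t) on the grid
p = (pi * vqt * g[None, :]).sum(1) * dq - u         # p(t) from u(t) def.

rev_direct = (f * p).sum() * dt
phi = ts + F / f                                    # virtual value t + F/f
rev_virtual = (g[None, :] * f[:, None] * pi
               * (v1[None, :] * phi[:, None] + v0)).sum() * dq * dt - u_t2
print(rev_direct, rev_virtual)   # should agree up to O(dt) grid error
\end{verbatim}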
The revenue derivation with $t_1$ as the reference point:
\begin{align*}
REV (\pi, p)&=\int_{t_1}^{t_2} f(t)p(t)\,\mathrm{d}t\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \left[ \pi(q,t)v(q,t)\right] \mathrm{d}q -u(t) \right)\,\mathrm{d}t\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q)\pi(q,t)v(q,t) \mathrm{d}q -\int_{t_1}^{t} Q(x) \mathrm{d}x -u(t_1) \right)\,\mathrm{d}t\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t-\int_{t_1}^{t_2} \int_{t_1}^{t}f(t) Q(x)\,\mathrm{d}x\,\mathrm{d}t -u(t_1)\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t-\int_{t_1}^{t_2} \int_{x}^{t_2}f(t) Q(x)\,\mathrm{d}t\,\mathrm{d}x -u(t_1)\\
&=\int_{t_1}^{t_2} f(t)\left(\int_{q_1}^{q_2} g(q) \pi(q,t)v(q,t) \mathrm{d}q \right)\,\mathrm{d}t-\int_{t_1}^{t_2} (1-F(x))Q(x)\,\mathrm{d}x-u(t_1)\\
&=\int_{q_1}^{q_2} g(q)\left[\int_{t_1}^{t_2} f(t) \pi(q,t)v(q,t) \,\mathrm{d}t \right]\mathrm{d}q-\int_{t_1}^{t_2} (1-F(t))\int_{q_1}^{q_2} g(q)\left[ \pi (q,t)v_1(q) \right] \mathrm{d}q\,\mathrm{d}t-u(t_1)\\
&=\int_{q_1}^{q_2} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t) \left(v(q,t)-v_1(q)\frac{1-F(t)}{f(t)}\right)\,\mathrm{d}t \right] \mathrm{d}q -u(t_1)\\
&=\int_{q_1}^{q_2} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t) [v_1(q) \underline{\varphi}(t) + v_0(q)]\,\mathrm{d}t \right] \mathrm{d}q -u(t_1)
\end{align*}
This leads to the following mathematical program:
\begin{align*}
\max_{\pi} \quad & \int_{t_1}^{t_2} \int_{q_1}^{q_2} g(q) \pi(q,t) f(t)[v_1(q) \varphi(t) + v_0(q)]\,\mathrm{d}q \,\mathrm{d}t \\
\text{subject to} \quad & \int_{t_1}^{t_2} \int_{q_1}^{q_2} g(q) \pi(q,t) v_1(q) \,\mathrm{d}q \,\mathrm{d}t \geq \int_{q_1}^{q_2} g(q) v(q,t_2) \,\mathrm{d}q\\
& Q(t) \text{ monotone non-decreasing in } t
\end{align*}
\begin{lemma}
There exists a threshold mechanism that solves the above mathematical program optimally if $\frac{v_0(q)}{v_1(q)}$ is increasing in $q$.
\end{lemma}
\begin{proof}
To prove the lemma, we show that for any feasible solution to the above program, there exists a feasible threshold mechanism that weakly increases the objective function. Given a feasible mechanism $\pi(q, t)$, let $q^*(t)$ be
\begin{gather*}
q^*(t)=\operatorname*{argmin}_{q^*}\left\{q^*\,\left|\, \int_{q^*}^{q_2}g(q)v_1(q)\,\mathrm{d}q=Q(t)=\int_{q_1}^{q_2}g(q)\pi(q,t)v_1(q)\,\mathrm{d}q\right.\right\}.
\end{gather*}
The threshold $q^*(t)$ always exists, since the function $\int_{q^*}^{q_2}g(q)v_1(q)\,\mathrm{d}q$ is continuous and decreasing in $q^*$, and
\begin{gather*}
\int_{q_2}^{q_2}g(q)v_1(q)\,\mathrm{d}q=0\le Q(t)\le \int_{q_1}^{q_2}g(q)v_1(q)\,\mathrm{d}q.
\end{gather*}
Define a new allocation function as follows:
\begin{gather*}
\pi'(q,t)=
\begin{cases}
1 & \text{ if } q\ge q^*(t)\\
0 & \text{ otherwise}
\end{cases}.
\end{gather*}
It is clear that the corresponding $Q(t)$ function is the same as that of the original $\pi(q,t)$:
\begin{gather}
Q'(t)=\int_{q_1}^{q_2}g(q)\pi'(q,t)v_1(q)\,\mathrm{d}q=\int_{q^*(t)}^{q_2}g(q)v_1(q)\,\mathrm{d}q=Q(t). \label{eq:Q_equal}
\end{gather}
Moreover, the left-hand side of the first constraint,
\begin{gather*}
\int_{t_1}^{t_2} \int_{q_1}^{q_2} g(q) \pi(q,t) v_1(q)\, \mathrm{d}q \mathrm{d}t=\int_{t_1}^{t_2}Q(t)\,\mathrm{d}t,
\end{gather*}
also remains the same. The above two equations imply that $\pi'(q,t)$ is also feasible. Now it suffices to show that $\pi'(q,t)$ leads to a weakly higher revenue. The proof is immediate if we notice that the original program is very similar to the fractional knapsack problem. However, we still include a proof here to make the paper self-contained. Define
\begin{gather*}
\Phi(q, t) = f(t)\left[ \varphi(t)+\frac{v_0(q)}{v_1(q)} \right].
\end{gather*}
$\Phi(q,t)$ is monotone increasing in $q$, since $\frac{v_0(q)}{v_1(q)}$ is increasing and $f(t)$ is always non-negative. Thus if $q\ge q^*(t)$, then $\pi'(q,t)=1\ge \pi(q,t)$ and $\Phi(q,t)\ge \Phi(q^*(t),t)$; and if $q< q^*(t)$, then $\pi'(q,t)=0\le \pi(q,t)$ and $\Phi(q,t)\le \Phi(q^*(t),t)$. Therefore,
\begin{gather}
\left[ \pi'(q,t)-\pi(q,t) \right]\left[ \Phi(q,t)-\Phi(q^*(t),t) \right]\ge 0, \quad \forall q, t. \label{eq:monotone}
\end{gather}
The revenue difference of the two mechanisms $\pi(q,t)$ and $\pi'(q,t)$ is:
\begin{align*}
&\int_{t_1}^{t_2} \int_{q_1}^{q_2} g(q) \left[\pi'(q,t)-\pi(q,t)\right] f(t)[v_1(q) \varphi(t) + v_0(q)]\,\mathrm{d}q \mathrm{d}t\\
=&\int_{t_1}^{t_2} \int_{q_1}^{q_2} g(q) \left[\pi'(q,t)-\pi(q,t)\right] v_1(q)\Phi(q,t)\,\mathrm{d}q \mathrm{d}t\\
\ge&\int_{t_1}^{t_2} \int_{q_1}^{q_2} g(q) \left[\pi'(q,t)-\pi(q,t)\right] v_1(q)\Phi(q^*(t),t)\,\mathrm{d}q \mathrm{d}t\\
=&\int_{t_1}^{t_2} \Phi(q^*(t),t) \int_{q_1}^{q_2} g(q) \left[\pi'(q,t)-\pi(q,t)\right] v_1(q)\,\mathrm{d}q \mathrm{d}t\\
=&\int_{t_1}^{t_2} \Phi(q^*(t),t) \left[Q'(t)-Q(t)\right]\, \mathrm{d}t\\
=&0,
\end{align*}
where the inequality is due to Equation \eqref{eq:monotone} and the last equality is due to Equation \eqref{eq:Q_equal}.
\end{proof}

\subsection{The Case with Regular $F$ and Non-negative $v_1(q) $}
When $v_1(q) \geq 0$, we know that $Q(t) \geq 0$ for all $t$. The minimum value of $u(t_2)$ is $\max \{ 0, v(t_2) \}$. Therefore, the maximum possible value of $REV (\pi, p)$ would be achieved by setting $u(t_2) = \max \{ 0, v(t_2) \}$, and $\pi(q,t) = 1$ whenever $v_1(q) \varphi(t) + v_0(q) \geq 0$ and $\pi(q,t) = 0$ otherwise. What remains to verify is that this mechanism does satisfy all the conditions in Lemma \ref{lem:feasible-M}.

Since $v_1(q) \varphi(t) + v_0(q)$ is monotone increasing in $t$ when $F$ is regular and $v_1(q) \geq 0$, the $\pi$ above is a \emph{threshold signaling scheme} which signals $1$ to buyer type $t$ whenever \hf{need to assume $- \frac{ v_0(q)}{v_1(q) } $ is decreasing in $q$}
\begin{equation}
- \frac{ v_0(q)}{v_1(q) } \leq \varphi(t).
\end{equation}
Since $- \frac{ v_0(q)}{v_1(q) } $ is decreasing in $q$ and $\varphi(t)$ is increasing in $t$, there exists a threshold $\theta(t)$ such that $\pi(q,t) = 1$ for any $q \geq \theta(t)$ and $\pi(q,t)=0$ otherwise. Moreover, $\theta(t)$ is decreasing in $t$. This can equivalently be viewed as a threshold mechanism on $t$, where $\pi(q,t) = 1$ whenever $t \geq \beta(q)$ and $\beta(q)$ is now an increasing function of $q$. In fact, $\beta(q)$ minimizes the function $U(t;q) = F(t)v(q,t) = F(t)[v_1(q)t + v_0(q) ]$: we have $U'(t;q) = f(t)[v_1(q)t + v_0(q) ] + F(t) v_1(q) = f(t) [v_1(q) \varphi(t) + v_0(q)]$, which is positive for $t \geq \beta(q)$ and negative otherwise, so $\beta(q)$ minimizes $U(t;q)$.

It is easy to verify that such a threshold policy satisfies monotonicity. We only need to guarantee $u(t_1) \geq 0$, or equivalently $p(t_1) \leq \int_{q_1}^{q_2} g(q) \pi(q,t_1) v(q,t_1) dq = B_{t_1}$, and $u(t_2) \geq v(t_2)$, or equivalently $p(t_2) \leq \int_{q_1}^{q_2} g(q) \pi(q,t_2) v(q,t_2) dq - \int_{q_1}^{q_2} g(q) v(q,t_2) dq =B_{t_2}$. Note that $p(t_2) - p(t_1) = \int_{t_1}^{t_2} p'(t)dt$, which depends only on the signaling scheme $\pi$. Therefore, we would like to figure out which of the two upper bounds is tight.
Note that
\begin{eqnarray}
p(t) = \int_{q_1}^{q_2} g(q) \pi(q,t) v(q,t) dq - \int_{t_1}^{t} Q(x) dx - u(t_1).
\end{eqnarray}
Therefore,
\begin{eqnarray}
p(t_2) - p(t_1) = \int_{q_1}^{q_2} g(q) \pi(q,t_2) v(q,t_2) dq - \int_{q_1}^{q_2} g(q) \pi(q,t_1) v(q,t_1) dq - \int_{t_1}^{t_2} Q(t) dt,
\end{eqnarray}
while
\begin{eqnarray}
B_{t_2} - B_{t_1} = \int_{q_1}^{q_2} g(q) \pi(q,t_2) v(q,t_2) dq - \int_{q_1}^{q_2} g(q) \pi(q,t_1) v(q,t_1) dq - \int_{q_1}^{q_2} g(q) v(q,t_2) dq.
\end{eqnarray}
Therefore, it remains to compare $\int_{q_1}^{q_2} g(q) v(q,t_2) dq = v(t_2)$ and $\int_{t_1}^{t_2} Q(t) dt$.

The following claim illustrates the fundamental difficulty in this optimization. When using $t_2$ as the basis, we know $u(t_1) = 0$ is guaranteed, but $u(t_2)$ will change when we adjust $\pi$ to achieve obedience; the question is how to change $\pi$ to maximize the $REV$ function on page 9, which involves $u(t_2)$, while $u(t_2)$ itself also depends on $\pi$. Conversely, if we use $t_1$ as the basis, we know $u(t_2) = v(t_2)$, but $u(t_1)$ will change, and we face the same difficulty. Myerson does not have this problem, since in his setting $u(t_1) = 0$ always holds and he does not need to adjust $\pi$.

\begin{claim}
$p(t_2) - p(t_1) \geq B_{t_2} - B_{t_1} $ when using $t_1$ as the basis, and $p(t_2) - p(t_1) \leq B_{t_2} - B_{t_1} $ when using $t_2$ as the basis.
\end{claim}
\begin{proof}
Let $\beta(q)$ denote the threshold for $t$. We have
\begin{eqnarray}
& & p(t_2) - p(t_1) - [B_{t_2} - B_{t_1}] \\
& = & \int_{q_1}^{q_2} g(q) v(q,t_2) dq - \int_{t_1}^{t_2} Q(t) dt \\
&=& \int_{q_1}^{q_2} g(q) v(q,t_2) dq - \int_{t_1}^{t_2} \int_{q_1}^{q_2} g(q) \pi(q,t) v_1(q) dq dt \\
&=& \int_{q_1}^{q_2} g(q) v(q,t_2) dq - \int_{q_1}^{q_2} g(q) (t_2 - \beta(q)) v_1(q) dq \\
&=& \int_{q_1}^{q_2} g(q) [v_1(q)t_2 + v_0(q)] dq - \int_{q_1}^{q_2} g(q) (t_2 - \beta(q)) v_1(q) dq \\
&=& \int_{q_1}^{q_2} g(q) [v_1(q)\beta(q) + v_0(q)] dq,
\end{eqnarray}
and $v_1(q)\beta(q) + v_0(q) \leq 0$ for any $q$ when using $t_2$ as the basis on page 9, while $v_1(q)\beta(q) + v_0(q) \geq 0$ for any $q$ when using $t_1$ as the basis on page 9.
\end{proof}

Using $t_2$ as the reference point (so that $u(t_2) = \max\{0, v(t_2)\}$), we have
\begin{eqnarray}
p(t) &=& p(t_2) - \int_{t}^{t_2} p'(x)dx \\
&=& p(t_2) - \int_{t}^{t_2} \int_{q_1}^{q_2} g(q) \frac{\partial \pi(q,x)}{\partial x} v(q,x) dq dx \\
&=& p(t_2) - \int_{q_1}^{q_2} g(q) \pi(q,t_2) v(q,t_2) dq + \int_{q_1}^{q_2} g(q) \pi(q,t) v(q,t) dq + \int_{t}^{t_2} Q(x) dx \\
&=& \int_{q_1}^{q_2} g(q) \pi(q,t) v(q,t) dq + \int_{t}^{t_2} Q(x) dx - u(t_2) \\
&=& \int_{q_1}^{q_2} g(q) \pi(q,t) v(q,t) dq + \int_{t}^{t_2} Q(x) dx - \max\{0, v(t_2) \} \\
&=& \int_{q_1}^{q_2} g(q) \pi(q,t) v(q,t) dq + \int_{t}^{t_2} \int_{q_1}^{q_2} g(q) \pi(q,x) v_1(q) dq dx - \max\{0, v(t_2) \}
\end{eqnarray}
If $p(t_1) = \int_{q_1}^{q_2} g(q) \pi(q,t_1) v(q,t_1) dq$, then $u(t_1) = 0 $. We claim that $u(t_2) \geq v(t_2)$ is also satisfied, by the same derivation of $p(t)$ displayed above. It remains to verify $p(t) \geq 0$ and $u(t_1) \geq 0$.
\begin{eqnarray}
p(t_2) &=& \int_{q_1}^{q_2} g(q) \pi(q,t_2) v(q,t_2) dq - \max\{0, v(t_2) \},
\end{eqnarray}
where $\pi(q,t_2) = 1$ whenever $q \geq \theta(t_2) $, or equivalently $ v(q,t_2) + v_1(q) \frac{F(t_2)}{f(t_2)} \geq 0 $. What remains to verify is $p(t) \geq 0$, as follows:
\begin{eqnarray}
p(t) &=& \int_{q_1}^{q_2} g(q) \pi(q,t) v(q,t) dq + \int_{t_1}^t Q(x) dx \\
&=& \int_{q_1}^{q_2} g(q) \pi(q,t) v(q,t) dq + \int_{t_1}^t \int_{q_1}^{q_2} g(q) \pi(q,x) v_1(q) dq dx
\end{eqnarray}

As $\odv{u(t)}{t}\ge0$, we can set $u(t_1)=0$. Let
\begin{gather*}
\varphi(t|q)=v(q,t)-\pdv{v(q,t)}{t}\frac{1-F(t)}{f(t)}.
\end{gather*}
We have
\begin{gather*}
REV=\int_{q_1}^{q_2} g(q)\left[ \int_{t_1}^{t_2} f(t)\pi(q,t)\varphi(t|q)\,\mathrm{d}t \right] \mathrm{d}q.
\end{gather*}
Similar to the Myerson auction, there is a threshold $\theta_q$ for each $q$. We may need the ``ironing'' trick: define
\begin{gather*}
R(\eta|q)=v\cdot \eta,
\end{gather*}
where $\eta=1-F(t)$; we then have $\pdv{R(\eta|q)}{\eta}=\varphi(t|q)$. With Equation \eqref{eq:allocation_monotonicity}, any threshold testing mechanism must satisfy that $\theta_t$ is decreasing in $t$. If the optimal mechanism is a threshold testing mechanism, then $\theta_q$ must also be a decreasing function of $q$. With the regularity assumption, we have
\begin{gather*}
v(q,\theta_q)-\pdv{v(q,\theta_q)}{\theta_q}\frac{1-F(\theta_q)}{f(\theta_q)}=0.
\end{gather*}
Therefore, the optimal mechanism is a threshold testing mechanism if and only if
\begin{gather*}
\odv{\theta_q}{q}\le 0,
\end{gather*}
where, by implicit differentiation of the display above,
\begin{gather*}
\odv{\theta_q}{q}=-\frac{\pdv{v(q,\theta_q)}{q}-\pdv{v(q,\theta_q)}{q,\theta_q}\frac{1-F(\theta_q)}{f(\theta_q)}}{\pdv{v(q,\theta_q)}{\theta_q}-\pdv[2]{v(q,\theta_q)}{\theta_q}\frac{1-F(\theta_q)}{f(\theta_q)}+\pdv{v(q,\theta_q)}{\theta_q}\frac{f^2(\theta_q)+(1-F(\theta_q))f'(\theta_q)}{f^2(\theta_q)}}
\end{gather*}

\section{The Optimal Mechanism}

\textbf{An Instance in Case 1:} Define an instance where both the type $t$ of the buyer and the quality $q$ are uniformly distributed over $[0,2]$: \\
$t_1 = 0$, $t_2 = 2$, $\forall t \; f(t) = \frac{1}{2}$ \\
$q_1 = 0$, $q_2 = 2$, $\forall q \; g(q) = \frac{1}{2}$ \\
$v_1(q) = q $, $v_0(q) = -3$, so $\rho(q) = - \frac{3}{q}$ and $v(q,t) = qt - 3$.\\
Now we can do the computations to decide which case this instance belongs to. The a priori values of action $purchase$ for the extreme buyer types are: \\
$v(t_1) = \int_{q \in Q}g(q)v(q,t_1) dq = \int_{q \in Q} \frac{1}{2} (0 -3 ) dq = -3$ \\
$v(t_2) = \int_{q \in Q}g(q)v(q,t_2) dq = \int_{q \in Q} \frac{1}{2} (2q - 3) dq = -1$ \\
$\underline{\phi}(t) = t - \frac{1 - F(t)}{f(t)} = 2t - 2$ \\
$\int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\underline{\phi}(x)} g(q) v_1(q) dq dx = \int_{0}^{2} \int_{q: -\frac{3}{q} \geq - (2x-2)} \frac{1}{2} q \, dq dx = \frac{1}{16} $

Thus, this instance is in Case 1 because $v(t_2) \leq \max \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\underline{\phi}(x)} g(q) v_1(q) dq dx $. Thus, the optimal policy is to recommend $purchase$ whenever $\rho(q) \geq -\underline{\phi}(t) $ and $not \; purchase$ otherwise.
And for each type $t$, the information price in closed form is
\begin{align*}
p(t) &= \int_{q\in Q} [g(q) \pi(q,t)v(q,t)] dq - \int_{t_1}^{t} Q(x) dx \\
&= \int_{q\in Q} [ \mathbb{I}(\rho(q) \geq -\underline{\phi}(t)) \frac{1}{2} (qt - 3)] dq - \int_{0}^{t} \int_{q\in Q} [\mathbb{I}(\rho(q) \geq -\underline{\phi}(x)) \frac{1}{2} q] dq dx
\end{align*}

\textbf{An Instance in Case 2:} Define an instance where\\
$t_1 = 3$, $t_2 = 6$, $\forall t \; f(t) = \frac{1}{3}$ \\
$q_1 = 1$, $q_2 = 4$, $\forall q \; g(q) = \frac{1}{3}$ \\
$v_1(q) = q $, $v_0(q) = -6$, so $v(q,t) = qt - 6$ and $\rho(q) = - \frac{6}{q}$. \\
\hf{ refer to my comments at the beginning, we will not use $t_0$ any more. }
Now we can do the computations to decide which case this instance belongs to: \\
$v(t_1) = \int_{q \in Q}g(q)v(q,t_1) dq = \int_{q \in Q} \frac{1}{3} (3q -6 ) dq = 1.5$ \\
$v(t_2) = \int_{q \in Q}g(q)v(q,t_2) dq = \int_{q \in Q} \frac{1}{3} (6q - 6) dq = 9$ \\
$\bar{\phi}(t) = t + \frac{F(t)}{f(t)} = 2t - 3$ \\
$\int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\bar{\phi}(x)} g(q) v_1(q) dq dx = \int_{3}^{6} \int_{q: -\frac{6}{q} \geq - (2x-3)} \frac{1}{3} q \, dq dx = 7.25 $

Thus, this instance is in Case 2 because $v(t_2) \geq \max \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\bar{\phi}(x)} g(q) v_1(q) dq dx $. Thus, the optimal policy is to recommend $purchase$ whenever $\rho(q) \geq -\bar{\phi}(t) $ and $not \; purchase$ otherwise. And for each type $t$, the information price in closed form is
\begin{align*}
p(t) &= \int_{q\in Q} [g(q) \pi(q,t)v(q,t)] dq - v(t_2) + \int_{t}^{t_2} Q(x) dx \\
&= \int_{q\in Q} [ \mathbb{I}(\rho(q) \geq -\bar{\phi}(t)) \frac{1}{3} (qt - 6)] dq - 9 + \int_{t}^{6} \int_{q\in Q} [\mathbb{I}(\rho(q) \geq -\bar{\phi}(x)) \frac{1}{3} q] dq dx
\end{align*}

\textbf{An Instance in Case 3:} Define an instance where\\
$t_1 = 0$, $t_2 = 10$, $\forall t \; f(t) = \frac{1}{10}$ \\
$q_1 = 0$, $q_2 = 10$, $\forall q \; g(q) = \frac{1}{10}$ \\
$v_1(q) = q $, $v_0(q) = -30$, so $v(q,t) = qt - 30$ and $\rho(q) = -\frac{30}{q}$. \\
Now we can do the computations to decide which case this instance belongs to: \\
$v(t_1) = \int_{q \in Q}g(q)v(q,t_1) dq = \int_{q \in Q} \frac{1}{10} (-30) dq = -30$ \\
$v(t_2) = \int_{q \in Q}g(q)v(q,t_2) dq = \int_{q \in Q} \frac{1}{10} (10q - 30) dq = 20$ \\
$\underline{\phi}(t) = t - \frac{1-F(t)}{f(t)} = 2t - 10$ \\
$\int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\underline{\phi}(x)} g(q) v_1(q) dq dx = \int_{0}^{10} \int_{q: -\frac{30}{q} \geq - (2x -10)} \frac{1}{10} q \, dq dx = 12.25 $ \\
$\bar{\phi}(t) = t + \frac{F(t)}{f(t)} = 2t$ \\
$\int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\bar{\phi}(x)} g(q) v_1(q) dq dx = \int_{0}^{10} \int_{q: -\frac{30}{q} \geq - 2x} \frac{1}{10} q \, dq dx = 36.125 $

Thus, this is an instance of Case 3 because $\max \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\underline{\phi}(x)} g(q) v_1(q) dq dx \leq v(t_2) \leq \max \{v(t_1), 0 \} + \int_{t_1}^{t_2} \int_{q: \rho(q) \geq -\bar{\phi}(x)} g(q) v_1(q) dq dx $. So we can use binary search to find a constant $C \in [0,1) $ satisfying $\int_{(t,q): \widetilde{\phi}_q(t) \geq C } g(q) v_1(q) \, dq \, dt = v(t_2)$. Thus, the optimal policy is to recommend $purchase$ whenever $\rho(q) \geq \theta(t) =\frac{C - f(t)t - F(t)}{f(t)}$ and $not \; purchase$ otherwise.
And for each type $t$, the information price in closed form is
\begin{align*}
p(t) &= \int_{q\in Q} [g(q) \pi(q,t)v(q,t)] dq - \int_{t_1}^{t} Q(x) dx \\
&= \int_{q\in Q} [ \mathbb{I}(\rho(q) \geq \theta(t)) \frac{1}{10} (qt - 30)] dq - \int_{0}^{t} \int_{q\in Q} [\mathbb{I}(\rho(q) \geq \theta(x)) \frac{1}{10} q] dq dx
\end{align*}

\hf{add a discussion about correlated $t,q$. I think this correlated setting is not that well-motivated in our model, since there are not many cases where the buyer's value $t$ is correlated with some uncertainty $q$. Also discuss non-linear buyer utility functions, and the case when $v_1(q)$ is not always positive. }

\section{Model and Problem Formulation}
\label{sec:model}

\subsection{The Setup}
We study the following optimal information pricing problem between an information \emph{seller} (she) and an information \emph{buyer} (he). The buyer is a decision maker who faces a choice between two actions: a \emph{passive} action $0$ and an \emph{active} action $1$. The buyer obtains an uncertain payoff $v(q,t)$ from the active action $1$, where $q \in Q$ is a random \emph{state of nature} unknown to the buyer and $t\in T$ is the buyer's private type. Both $T$ and $Q$ are measurable sets. The buyer's utility for the passive action $0$ is always $0$, irrespective of his type and the state of nature. In other words, the passive action is a backup option for the buyer. For example, if the buyer is a potential purchaser of some goods (e.g., a house or a used car) with uncertain quality, the passive action $0$ corresponds to ``not purchase'', in which case the buyer has no gain or loss, whereas the active action $1$ corresponds to ``purchase'', under which the buyer's utility depends on the quality $q$ of the goods as well as on how much he values the goods (captured by his private type $t$).

Both $t$ and $q$ are random variables that are independently distributed according to the cumulative distribution functions (CDFs) $F(t)$ and $G(q)$, respectively. We assume throughout the paper that both $F(t)$ and $G(q)$ are continuously differentiable, with corresponding probability density functions (PDFs) $f(t)$ and $g(q)$. Both $F(t)$ and $G(q)$ are public knowledge. However, the realized $q$ can only be observed by the information seller. We study the seller's problem of designing a \emph{revenue}-maximizing pricing mechanism to sell her private observation of $q$ to the buyer. Notably, the buyer's private type $t$ is known only to himself --- had the seller known the buyer's type $t$, her optimal pricing mechanism would simply be to reveal full information and then charge the buyer the \emph{value of (full) information} \citep{bergemann2018design}: $ \int_{q \in Q} \max \{ 0, v(q,t) \} g(q)\mathrm{d} q - \max \{ 0, \int_{q \in Q} v(q,t) g(q) \mathrm{d} q \}$.

Throughout, we assume that the buyer's payoff function $v(q, t)$ is monotone non-decreasing in his type $t$ for any $q \in Q$. For expositional simplicity, we will assume $v(q, t)$ is linear in $t$, i.e., there exist real-valued functions $\alpha(q) \geq 0$ and $\beta(q)$ such that $v(q,t)=\alpha(q)(t+\beta(q))$. In Subsection \ref{generalized-utility}, we show how our results and analysis easily generalize to any convex (in $t$) function $v(q, t)$.
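To fix ideas, here is a small Python sketch (ours; the payoff $v(q,t)=qt-3$ and the uniform prior are toy choices, not from the model) that evaluates the buyer's a priori value of the active action and the value-of-full-information benchmark quoted above by numerical quadrature:
\begin{verbatim}
# Illustrative computation (toy numbers): the buyer's a priori value of the
# active action and the full-information price
#   int max{0, v(q,t)} g(q) dq - max{0, int v(q,t) g(q) dq}
# for v(q,t) = q t - 3 with q ~ U[0, 2].
import numpy as np

qs = np.linspace(0.0, 2.0, 100001)
dq = qs[1] - qs[0]
g = np.full(qs.size, 0.5)          # density of q ~ U[0, 2]

def v(q, t):
    return q * t - 3.0             # payoff of the active action

for t in (1.0, 1.6, 2.0):
    prior = (v(qs, t) * g).sum() * dq                    # a priori value
    full = (np.maximum(v(qs, t), 0.0) * g).sum() * dq    # with full info
    print(f"t={t}: prior={prior:.3f}, "
          f"full-info price={full - max(prior, 0.0):.3f}")
\end{verbatim}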
Linearity also implies that the buyer's type $t \in \mathbb{R}$ is a real value, which we assume is supported on a closed interval $T = [t_1, t_2]$.\footnote{ This implies that the type's density function satisfies $f(t)>0, \forall t\in T$.} However, the state $q$ is allowed to be supported on a general measurable set $Q$ and does \emph{not} need to be a real value. Such an abstract representation of $q$ is useful for accommodating applications where $q$ may include non-numerical features relevant to the buyer's decisions (e.g., the brand and production time of a used car). Since $q$ is a random variable, $\beta(q)$ also has a probability distribution. For ease of presentation, we make a mild technical assumption that the distribution of $\beta$ does not have any point mass. However, our analysis applies similarly to the general case in which $\beta(q)$ contains point masses, just with more complex notation (see Appendix \ref{appendix:partial_recommendation} for more details).

With slight abuse of notation, let $v(t)$ denote the buyer's expected utility for action $1$ under his prior belief about $q$, namely, when no information is purchased. That is,
\begin{equation}\label{def:buyer-expected-v}
\text{Buyer's a priori utility of action 1: } \quad v(t)=\int_{q \in Q} v(q,t) g(q) \mathrm{d} q. \quad
\end{equation}

\subsection{Mechanism Space and the Revelation Principle}
To maximize revenue, the seller can design arbitrary mechanisms with possibly multiple rounds of interaction with the buyer. The task of designing a revenue-maximizing mechanism can be intractable unless a well-defined and general mechanism space is specified. Prior work of \cite{bergemann2018design} restricts attention to the sale of experiments via only a single round of information revelation. In this work, we consider a richer design space of mechanisms, in which the seller is also allowed to contract on the realized experiment outcomes (i.e., signals) and, moreover, multiple rounds of information revelation and payments are allowed as well. Specifically, we consider the following set of \emph{sequential mechanisms}.\footnote{This general class of mechanisms was first introduced and studied by \cite{Babaioff12}, where they were called generic interactive protocols. }

\begin{definition}[\textbf{Sequential Mechanisms}]
A sequential mechanism is a mechanism that results in a finite extensive-form game between the seller and the buyer. Formally, let $C(n)$ be the set of all children nodes of node $n$. Then each non-leaf node $n$ in the game tree is one of the following three types:
\begin{itemize}
\item \emph{Transfer node}, which is associated with a (possibly negative) monetary transfer $p(n)$ to the seller and has a single child node.
\item \emph{Seller node}, which reveals information. A seller node associates each state of nature $q$ with a distribution over $C(n)$, prescribing the probabilities of moving to its children nodes. That is, there is a function $\psi_n:Q\times C(n)\mapsto [0,1]$ for each seller node $n$ with $\sum_{c\in C(n)}\psi_n(c,q)=1, \forall q\in Q$. Thus, a child node $c$ carries information about $q$.
\item \emph{Buyer node}, which corresponds to an arbitrary set of buyer choices, with every choice leading to a child node.
\end{itemize}
\end{definition}

The buyer's final decision of taking the active or passive action is made \emph{after} the information selling process, and thus is not modelled in the above sequential mechanisms.
Therefore, at each seller node, the seller's action is to choose a message to send to the buyer, which determines the child node the game moves to; a buyer node has a similar functionality. Note that the mechanism is voluntary, and the buyer is free to leave the mechanism at any stage.

When designing the revenue-optimal mechanism for selling physical goods, the celebrated revelation principle \citep{10.2307/1912346,10.2307/1914083} enables us to focus, without loss of generality, on truthful and direct mechanisms. When selling information, however, sequential mechanisms can in general bring strictly more revenue than one-round mechanisms. We show that our setting admits a stronger revelation principle that allows us to consider, w.l.o.g., the set of truthful, direct and one-round mechanisms.

To describe the space of \emph{one-round mechanisms}, we need the notion of \emph{experiments}, which formalize the way the seller reveals information. Given a set of possible signals $\Sigma$, an experiment $\pi: Q \to \Delta_{\Sigma}$ is a mapping from the state $q$ to a distribution over the signals in $\Sigma$. Such an experiment can be mathematically described by $\{ \pi(\sigma| q) \}_{q \in Q, \sigma \in\Sigma}$, where $\pi(\sigma|q)$ is the probability of sending signal $\sigma$ conditioned on state $q$. After observing signal $\sigma$, the buyer infers a posterior probability of any state $q$ via the standard Bayes update:
\begin{equation}
g(q|\sigma) = \frac{\pi(\sigma|q) \cdot g(q) }{ \int_{q'\in Q} \pi(\sigma|q') \cdot g(q') \mathrm{d} q'} = \frac{\pi(\sigma|q) \cdot g(q) }{ \mathbf{E}_{q' \sim G} [\pi(\sigma|q')] }.
\end{equation}
Consequently, conditioned on signal $\sigma$, if a buyer of type $t$ takes the active action, his expected utility is $\int_{q \in Q} v(q,t) g(q|\sigma) \mathrm{d} q $. Different experiments reveal different amounts of information to the buyer, and thus are of different values. A \emph{one-round mechanism} is a menu of experiments and prices that results in a single round of interaction between the seller and the buyer.

\begin{definition}[\textbf{One-round Mechanisms}]
A one-round mechanism $\mathcal{M}$, described by a menu $\{ (p_t, \pi_t) \}_{t \in T}$, proceeds as follows:
\begin{enumerate}
\item The buyer is asked to report (possibly untruthfully) his type $t$;
\item The seller charges the buyer $p_t$;
\item The seller reveals information about $q$ according to experiment $\pi_{t}$.
\end{enumerate}
\end{definition}

A one-round mechanism can clearly be represented as a special sequential mechanism, with the three steps corresponding to a buyer node, followed by a transfer node, followed by a seller node. Though sequential mechanisms can generally contract on experiment outcomes (when a seller node is followed by transfer nodes), a one-round mechanism only prices the experiment $\pi_t$ at price $p_t$ and does not contract on the experiment outcomes. Let $U(t';t)$ denote the expected utility of a buyer with type $t$ reporting type $t'$, defined as
\begin{gather*}
U(t';t)=\sum_{\sigma\in \Sigma}\max\bigg\{ \int_{q\in Q}v(q,t)\pi_{t'}(\sigma|q)g(q)\,\mathrm{d} q \, \, , \, \, 0 \bigg\}-p_{t'}.
\end{gather*}
A one-round mechanism is said to be \emph{incentive compatible} if it is in the buyer's best interest to report his type truthfully, i.e., $U(t;t)\ge U(t';t), \forall t, t' \in T$.
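The following Python sketch (our illustration; the menu entries, thresholds, and prices are arbitrary) makes the experiment formalism concrete: it builds two-signal threshold experiments over a discretized state space and evaluates the misreport utility $U(t';t)$ exactly as defined above, best-responding to each signal:
\begin{verbatim}
# Illustrative sketch (arbitrary toy menu): two-signal threshold
# experiments and the misreport utility U(t'; t) defined above.
import numpy as np

qs = np.linspace(0.0, 2.0, 10001); dq = qs[1] - qs[0]
g = np.full(qs.size, 0.5)              # q ~ U[0, 2]
def v(q, t): return q * t - 3.0        # running linear payoff example

def experiment(threshold):
    """pi(active | q) for a deterministic threshold experiment."""
    return (qs >= threshold).astype(float)

def U(t_report, t_true, menu):
    """Buyer of type t_true reporting t_report: best-respond to each
    signal (vs. the zero-utility passive fallback), pay the price."""
    pi_a = menu[t_report]["pi"]
    val_active = (pi_a * v(qs, t_true) * g).sum() * dq
    val_passive = ((1.0 - pi_a) * v(qs, t_true) * g).sum() * dq
    return max(val_active, 0.0) + max(val_passive, 0.0) - menu[t_report]["p"]

menu = {1.6: {"pi": experiment(1.95), "p": 0.003},  # arbitrary menu entries
        2.0: {"pi": experiment(1.50), "p": 0.100}}
for t in (1.6, 2.0):
    print(t, {r: round(U(r, t, menu), 4) for r in menu})
\end{verbatim}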
The following revelation principle shows that it is without loss of generality to consider direct, incentive-compatible, one-round mechanisms in our model.

\begin{lemma}[\textbf{Revelation Principle}]
\label{lem:revelation}
For any sequential mechanism $\mathcal{M}$, there exists a direct, incentive compatible and one-round mechanism that achieves the same expected revenue as $\mathcal{M}$.
\end{lemma}

A standard revelation-principle argument implies that the seller can w.l.o.g.\ incentivize truthful type reports at the beginning. To prove Lemma \ref{lem:revelation}, the non-trivial part is to argue that a single round of payment and information revelation suffices. This is a consequence of our independence assumption between the state $q$ and the buyer type $t$, which allows us to simply combine all steps of information revelation into a single experiment and all payments into a single upfront payment. A formal proof is deferred to Appendix \ref{append:revelation}. Notably, the proof of Lemma \ref{lem:revelation} relies crucially on the independence of the state $q$ and the buyer type $t$. Fundamentally, this is because under correlation between the buyer type and the state, a buyer of type $t$ who misreports $t'$ perceives a different expected payment from the $p_{t'}$ perceived by buyer type $t'$, since $t$ and $t'$ hold different beliefs about $q$ and thus about the expected payments associated with each signal realization (see the proof for more illustration).

Next, we further simplify the mechanism design space. First, we show in Lemma \ref{lem:positive-pay} that it is without loss of generality to consider mechanisms with non-negative payments. While this result is intuitive, we point out that it does not hold trivially. In fact, when $q$ and $t$ are correlated, the full-surplus-extracting sequential mechanism of \citep{Babaioff12} may have to use \emph{negative} payments. The proof of this lemma is deferred to Appendix \ref{appendix:positive-pay}.

\begin{lemma}[\textbf{Non-Negative Payments}]
There exists an optimal IC, IR and one-round mechanism in which $p_t \geq 0$ for all $t \in T$.
\label{lem:positive-pay}
\end{lemma}

Second, the following known result of \cite{bergemann2018design} shows that when pricing experiments, we can without loss of generality price \emph{responsive} experiments, in which each signal leads to a unique buyer best-response action. From this perspective, each signal in a responsive experiment can be viewed as an \emph{obedient} action recommendation.

\begin{lemma}[\cite{bergemann2018design}]
The outcome of any mechanism can be obtained by using responsive experiments.
\label{lem:signal-space}
\end{lemma}

\subsection{Formulating the Optimal Pricing Problem}
Based on the above simplification of the design space, we now formulate the mechanism design problem. We start by introducing (functional) variables to describe a one-round mechanism with responsive experiments. We will think of the payment in the menu $\mathcal{M}$ as a function $p(t)$ of the buyer type $t$. Since the buyer has two possible actions, any responsive experiment $\pi_t$ for buyer type $t$ only needs two signals. With slight abuse of notation, we use the function $\pi(q,t)\in [0,1]$ to denote the probability of sending signal \texttt{active} (interpreted as an obedient recommendation of the active action) conditioned on state realization $q$. Naturally, $[1- \pi(q,t)]$ is the probability of sending signal \texttt{passive} conditioned on state $q$.
Our goal is to derive a feasible menu --- represented by functions $\pi^*(q, t)$ and $p^*(t)$ --- that maximizes the seller's revenue:
\begin{gather*}
\text{Seller Revenue: } \quad \max_{\pi, p} \int_{t\in T} f(t) p(t) \,\mathrm{d} t.
\end{gather*}
Note that this is a \emph{functional optimization} problem, since both $\pi(q,t)$ and $p(t)$ are functional variables that depend on the continuous variable $t \in [t_1, t_2](=T)$ and the abstract variable $q$ from a measurable set $Q$. The remainder of this section is devoted to formulating constraints on $\pi(q,t), p(t)$ according to Lemmas \ref{lem:revelation}, \ref{lem:positive-pay} and \ref{lem:signal-space}.

\vspace{2mm}
\noindent {\bf Obedience constraints.} Lemma \ref{lem:signal-space} shows that any responsive experiment only needs two signals, which make obedient recommendations of the active and the passive action, respectively. This poses two constraints on the function $\pi(q,t)$: (1) $ \int_{q \in Q} \pi(q, t) v(q,t) g(q) \,\mathrm{d} q \geq 0, \forall t \in T$; (2) $ \int_{q \in Q} [1-\pi(q, t)] v(q,t) g(q) \,\mathrm{d} q \leq 0, \forall t \in T$. The first constraint ensures that when signal \texttt{active} is sent to buyer type $t$, the buyer's expected value $ \frac{ \mathbf{E}_{q\sim G}\left[ \pi(q,t)v(q,t) \right] }{ \mathbf{E}_{q\sim G} [\pi(q,t)] }$ of taking the active action is indeed at least $0$, the expected value of taking the passive action. Similarly, the second constraint ensures the obedience of the \texttt{passive} signal. Slightly manipulating the second constraint, we obtain $\int_{q \in Q} \pi(q, t) v(q,t) g(q) \,\mathrm{d} q \geq \int_{q \in Q} v(q,t) g(q) \,\mathrm{d} q = v(t)$, where $v(t)$ defined in Equation \eqref{def:buyer-expected-v} is the buyer's a priori expected value of the active action. Therefore, we can conveniently summarize the obedience constraint as follows:
\begin{gather}
\text{Obedience: } \quad \int_{q \in Q} \pi(q, t) v(q,t) g(q) \,\mathrm{d} q \geq \max \{ 0, v(t) \}, \quad \forall t \in T. \label{cons:obedience}
\end{gather}

\vspace{2mm}
\noindent {\bf Individual rationality (IR) constraints.} Since the buyer gets utility $0$ from the passive action, the expected utility of buyer type $t$, if he reports his type \emph{truthfully} and follows the seller's obedient recommendation, is
\begin{equation} \label{def-u(t)}
u(t)=\mathbf{E}_{q \sim G}[\pi(q,t)v(q,t)]-p(t)=\int_{q\in Q}\pi(q, t)v(q,t)g(q)\,\mathrm{d} q -p(t),
\end{equation}
where the first term is the value from his decision making assisted by the seller's information and the second term is the payment to the seller. To ensure the buyer's participation in the mechanism, the following individual rationality (IR) constraint is required:
\begin{gather}
\text{IR:} \quad \int_{q \in Q} \pi(q, t) v(q,t) g(q) \,\mathrm{d} q - p(t) \geq \max \{ 0, v(t) \}, \quad \forall t \in T, \label{cons:IR}
\end{gather}
where the right-hand side is the buyer's expected utility of not participating in the mechanism and simply taking the best action according to his prior belief about $q$. Interestingly, since the payment function is always non-negative according to Lemma \ref{lem:positive-pay}, the IR constraint \eqref{cons:IR} turns out to imply the obedience constraint \eqref{cons:obedience}.
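As a quick numerical illustration of Constraint \eqref{cons:obedience} (ours, with arbitrary toy numbers), the fully informative recommendation $\pi(q,t)=\mathbb{I}[v(q,t)\geq 0]$ satisfies obedience for every type on a grid:
\begin{verbatim}
# Obedience check (illustrative): for pi(q,t) = 1{v(q,t) >= 0}, verify
#   int pi(q,t) v(q,t) g(q) dq >= max{0, v(t)}   for types on a grid,
# with v(q,t) = q t - 3 and q ~ U[0, 2].
import numpy as np

qs = np.linspace(0.0, 2.0, 10001); dq = qs[1] - qs[0]
g = np.full(qs.size, 0.5)

for t in np.linspace(0.0, 2.0, 5):
    vq = qs * t - 3.0
    vbar = (vq * g).sum() * dq                  # prior value v(t)
    pi = (vq >= 0.0).astype(float)              # recommend active iff v >= 0
    lhs = (pi * vq * g).sum() * dq
    assert lhs >= max(0.0, vbar) - 1e-9
    print(f"t={t:.1f}: lhs={lhs:.4f} >= max(0, v(t))={max(0.0, vbar):.4f}")
\end{verbatim}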
The buyer surplus $s(t)$ --- the additional utility gained by participating in the mechanism --- as a function of the buyer type $t$ is defined as follows:
\begin{equation} \label{eq:buyer-surplus}
\text{Buyer surplus:} \quad s(t)= \int_{q \in Q} \pi(q, t) v(q,t) g(q) \,\mathrm{d} q - p(t)-\max\{v(t),0\}.
\end{equation}
The IR constraint \eqref{cons:IR} is equivalent to non-negative surplus.

\vspace{2mm}
\noindent {\bf Incentive compatibility (IC) constraints.} The derivation of the IC constraints turns out to be more involved. IC requires that, by reporting truthfully, a buyer of type $t$ obtains a higher utility than by misreporting any other type $t'$. This requires some analysis, since when a buyer of type $t$ misreports type $t'$, the resulting experiment $\{ \pi(q,t') \}_{q \in Q}$ may no longer be obedient for $t$, leading to non-linearity in the IC constraints. Specifically, upon receiving signal \texttt{active}, the expected value of the active action for a type-$t$ buyer misreporting $t'$ is
\begin{equation}\label{eq:IC-derivation1}
V_a(t';t) \vcentcolon = \int_{q \in Q} \pi(q, t') v (q,t) g(q) \,\mathrm{d} q = \int_{q \in Q} \pi(q, t') \alpha(q)[t+\beta(q)] g(q) \,\mathrm{d} q.
\end{equation}
Since $\pi(q,t')$ may not be obedient for buyer type $t$, he will choose between the active and the passive action, leading to true expected value $\max\{V_a(t';t), 0\}$ in this situation. Similarly, upon receiving signal \texttt{passive}, the buyer's value is the maximum of $0$ and the following:
\begin{equation}\label{eq:IC-derivation2}
\int_{q \in Q} [1-\pi(q, t')] v(q,t) g(q) \,\mathrm{d} q = v(t) - V_a(t';t).
\end{equation}
Combining both situations, the expected utility obtained by a buyer of type $t$ from misreporting type $t'$ is $\max\{V_a(t';t), 0 \} + \max \{v(t) - V_a(t';t), 0 \}-p(t')$. So the incentive compatibility constraint becomes the following:
\begin{gather}\label{eq:IC-original}
u(t) \geq \max\{V_a(t';t), 0 \} + \max \{v(t) - V_a(t';t), 0 \} - p(t').
\end{gather}
Such non-linear constraints are difficult to handle in general. Interestingly, it turns out that we can leverage the previous results to reduce Constraint \eqref{eq:IC-original} to linear constraints on $\pi$, with some careful case analysis:
\begin{enumerate}
\item When $t > t'$, we have $ V_a(t';t) \ge V_a(t';t') \ge 0$, where the first inequality is due to the assumption $\alpha(q)\ge 0$ and the second comes from the obedience constraint \eqref{cons:obedience} for $t'$. In this case, the right-hand side of Constraint \eqref{eq:IC-original} becomes $V_a(t';t) + \max \{v(t) - V_a(t';t), 0 \} - p(t')$, or equivalently $\max \{v(t), V_a(t';t) \} - p(t')$. Note that $u(t) \geq v(t) - p(t')$ is already implied by the IR constraint $u(t) \geq v(t)$ and the condition $p(t')\geq 0$. Therefore, the only non-redundant constraint in this case is $u(t) \geq V_a(t';t) - p(t')$.
\item When $t < t'$, we have $ v(t) - V_a(t';t)\le v(t') - V_a(t';t') \le 0 $ for similar reasons. In this case, the right-hand side of the above constraint becomes $\max \{ V_a(t';t), 0 \} - p(t')$. Again, $u(t) \geq - p(t')$ is already implied by the IR constraint $u(t) \geq 0$ and the condition $p(t')\geq 0$. Therefore, the only non-redundant constraint in this case is also $u(t) \geq V_a(t';t) - p(t')$.
\end{enumerate}
To summarize, given the IR and non-negative payment constraints, the IC constraint can finally be reduced to the following:
\begin{gather}
\text{IC:} \quad \int_{q \in Q} \pi(q, t) v(q,t) g(q) \,\mathrm{d} q - p(t) \geq \int_{q \in Q} \pi(q, t') v(q,t) g(q) \,\mathrm{d} q - p(t'), \quad \forall t, t' \in T. \label{cons:IC}
\end{gather}

\vspace{2mm}
\noindent {\bf Combined optimization problem.} The derivation and simplification above ultimately lead to the following optimization problem, with functional variables $ \pi(q,t), p(t)$:
\begin{lp}\label{lp:opt-formulation}
\maxi{ \int_{t\in T} f(t) p(t) \,\mathrm{d} t}
\mbox{subject to}
\qcon{\int_{q \in Q} \pi(q, t) v(q,t) g(q) \,\mathrm{d} q - p(t) \geq \max \{ 0, v(t) \} }{ t \in T}
\qcon{\int_{q \in Q} [\pi(q, t) - \pi(q,t')] v(q,t) g(q) \,\mathrm{d} q - p(t) + p(t') \geq 0}{t, t' \in T}
\con{p(t) \geq 0, \quad \pi(q,t) \in [0,1]}
\end{lp}
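Although the program has functional variables, a discretized version is a plain finite LP. The sketch below (our illustration; the instance and grid sizes are arbitrary, and it encodes exactly the IR, IC, and range constraints above) solves such a discretization with \texttt{scipy}:
\begin{verbatim}
# Discretized version of the functional program above, solved as a finite
# LP (illustrative; coarse grids, and the Case-1 instance v(q,t) = qt - 3
# with t, q ~ U[0,2]).
import numpy as np
from scipy.optimize import linprog

nT, nQ = 15, 15
ts = np.linspace(0.0, 2.0, nT); qs = np.linspace(0.0, 2.0, nQ)
dt, dq = ts[1] - ts[0], qs[1] - qs[0]
f = np.full(nT, 0.5); g = np.full(nQ, 0.5)
v = np.outer(ts, qs) - 3.0                # v[j, i] = q_i * t_j - 3
vbar = (v * g).sum(axis=1) * dq           # v(t_j)
w = g * dq                                # quadrature weights over q

nPi = nT * nQ                             # variables: pi (flattened), then p
c = np.zeros(nPi + nT); c[nPi:] = -f * dt # maximize int f(t) p(t) dt

A, b = [], []
for j in range(nT):
    row = np.zeros(nPi + nT)              # IR for type t_j
    row[j*nQ:(j+1)*nQ] = -v[j] * w; row[nPi + j] = 1.0
    A.append(row); b.append(-max(0.0, vbar[j]))
    for k in range(nT):                   # IC: t_j reporting t_k
        if k == j: continue
        row = np.zeros(nPi + nT)
        row[j*nQ:(j+1)*nQ] -= v[j] * w    # -int pi(., t_j) v(., t_j) g
        row[k*nQ:(k+1)*nQ] += v[j] * w    # +int pi(., t_k) v(., t_j) g
        row[nPi + j] += 1.0; row[nPi + k] -= 1.0
        A.append(row); b.append(0.0)

bounds = [(0.0, 1.0)] * nPi + [(0.0, None)] * nT   # pi in [0,1], p >= 0
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds,
              method="highs")
print("discretized optimal revenue:", -res.fun)
\end{verbatim}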
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}

There are a number of questions about blazars that we hope to answer through combined observational and theoretical efforts. How is the plasma in blazar jets accelerated to flow velocities near the speed of light and focused to within $\lesssim 1^\circ$ \cite[(Jorstad et al. 2005]{J05}; \cite[Clausen-Brown et al. 2013)]{CB13}? Where and how do extremely luminous outbursts and outrageously short flares of radiation occur? How are relativistic particles accelerated: by shocks, magnetic reconnections, turbulence, or some other process? Possible answers to these questions usually involve magnetic fields. The currently prevailing paradigm of jet launching, collimation, and acceleration requires a strong helical magnetic field, at least within the inner parsec \cite[(e.g., Komissarov et al. 2007]{kom07}; \cite[Tchekhovskoy et al. 2011)]{Sasha11}. Acceleration of particles is thought to depend critically on the geometry of the magnetic field \cite[(e.g., Summerlin \& Baring 2012)]{sb12}. And turbulent magnetic fields can lead to second-order Fermi acceleration of particles, magnetic reconnections \cite[(e.g., Kowal et al. 2012]{kow12}; \cite[Dexter et al. 2014)]{dex14}, and rapid flares \cite[(Marscher \& Jorstad 2010]{mj10}; \cite[Narayan \& Piran 2012]{np12}; \cite[Marscher 2014)]{M14}.

Fortunately, the magnetic field geometry directly affects an observable property of a blazar's radiation: its polarization. We can therefore use observations of time-variable polarization at millimeter to optical wavelengths and spatially resolved polarization on VLBI images to infer the geometry of the field and its relation to the emission properties of blazars.

\section{Linear Polarization for Different Magnetic Field Configurations}

A favorite assumption of emission modelers is that the magnetic field can be approximated as completely tangled on all scales of interest, except when it is compressed by a shock wave or some other phenomenon \cite[(e.g., Hughes, Aller, \& Aller 1989]{HAA89}; \cite[Cawthorne 2006)]{caw06}. This would lead to zero linear polarization except where such compression occurs, and essentially zero short-term fluctuations. Instead, the linear polarization of the synchrotron radiation tends to be low relative to its value in a uniform field --- a few to tens of percent --- but non-zero, and it often fluctuates rapidly.

A more realistic geometry consists of turbulent cells. Consider the case of $N$ cells, each with a uniform but randomly directed magnetic field of the same magnitude. The mean polarization is then $\langle \Pi \rangle = \Pi_{\rm max}N^{-1/2}$ \cite[(Burn 1966)]{Burn66}, where $\Pi_{\rm max}$ (a weak function of spectral index, usually 70--75\% in an optically thin source) is the value in a uniform field. If turbulent cells are constantly passing through the emission region, the degree of polarization fluctuates with a standard deviation $\sigma(\Pi) \sim \langle \Pi \rangle^{1/2}$, while the electric-vector position angle $\chi$ varies randomly, often executing apparent rotations that can exceed $180^\circ$. These rotations in $\chi$ are usually quite irregular, but can sometimes be surprisingly smooth \cite[(Jones 1988)]{jones88}.

Since a helical magnetic field is the main requirement of magnetic launching models of jets, it may be the case that this geometry persists out to parsec scales. \cite[[See Gabuzda (2013)]{gab13} and \cite[Gabuzda et al. (2014)]{gab14} for observational evidence in support of this.
On the other hand, current-driven instabilities may eventually disrupt the helical ordering at the end of the jet's acceleration/collimation zone (ACZ), where the kinetic energy density reaches equipartition with the magnetic energy density \cite[(e.g., Nalewajko \& Begelman 2012)]{NB12}.] In these models, the helical field propagates down the jet with the plasma. The degree of polarization depends on the viewing angle $\theta$ and the bulk Lorentz factor $\Gamma$ \cite[(see Lyutikov et al. 2005)]{lyut05}. If $\theta=0^\circ$, the net linear polarization is zero if the intensity is uniform across the jet, owing to symmetry. If the aberrated viewing angle $\theta^\prime=90^\circ$ (which occurs when $\sin\;\theta = \Gamma^{-1}$), $\chi$ is in the direction of the jet axis if $B_z^\prime > B_t^\prime$, and perpendicular to the axis if $B_t^\prime > B_z^\prime$. The degree of polarization $\Pi$ depends on $B_t^\prime/B_z^\prime$. Other viewing angles yield polarization properties that are qualitatively similar to the side-on case. Note that this dependence of $\chi$ on $\theta^\prime$ applies also to a field geometry that corresponds to any superposition of toroidal and longitudinal field, of which a helical field is a specific case. One could imagine, for example, that the longitudinal field consists of magnetic loops that are stretched parallel to the jet axis by cross-jet velocity gradients \cite[(e.g., Laing 1980)]{Laing80}.

Since nature tends to avoid ideal conditions, we should consider the case of a helical or toroidal magnetic field with a non-uniform intensity across a given cross-section of the jet. The polarization of the area with the highest intensity will then determine the net polarization position angle $\chi$, while the degree of polarization $\Pi$ can be quite low if the relative intensity enhancement is weak, and tens of percent if there is a particularly bright spot. Furthermore, if the bright spot --- which presumably just has a higher density of radiating particles than the rest of the cross-section --- is offset from the jet axis, the corresponding parcel of plasma can execute a spiral trajectory about the axis owing to rotation of the flow, which arises from rotation of the base anchored in the black hole's ergosphere or the inner accretion disk \cite[(Vlahakis 2006)]{vlah06}. If the viewing angle to the jet axis is $0^\circ$, the observer will see rotation of $\chi$ at a uniform rate \cite[(see Marscher 2013 for an illustration)]{M13}. In the more common case when the viewing angle $\theta < \Gamma^{-1}$ (so that $\theta^\prime\ll 90^\circ$), the rate of rotation of $\chi$ will vary smoothly and monotonically during each turn; see \cite[Marscher et al. (2008,]{M08} \cite[2010)]{M10}, where the model is applied to BL Lac and PKS~1510$-$089.

\section{Interpretation of Rotations of the Polarization Vector}

Rotations of the optical polarization vector in $\gamma$-ray bright blazars appear to be quite common \cite[(see above and, e.g., Larionov et al. 2008]{Lar08}, \cite[2013a]{Lar13a}, \cite[2013b]{Lar13b}; \cite[Abdo et al. 2010]{abdo10}; \cite[Kiehlmann et al. 2013]{K13}; \cite[Jorstad et al. 2013]{J13}; \cite[Aleksi\'c et al. 2014]{Alek14}; and \cite[Morozova et al. 2014)]{Mor14}. As discussed in the previous section, such events can be explained by (1) a flow that is rotating through a helical magnetic field, (2) random walks of a turbulently disordered field, or (3) a twisted jet. To this we add another possibility, proposed by \cite[Zhang et al.
To this we add another possibility, proposed by \cite[Zhang et al. (2014)]{Zhang14}: (4) the passage of a moving shock through a region with a highly disordered field. The compression of the shock partially orders the field, but this ordering is seen at different depths as time advances owing to light-travel delays, leading to an apparent rotation of the polarization by as much as $180^\circ$ per shock. When the position angle is not rotating, it generally fluctuates, often rapidly and sometimes wildly, about its mean value (see the above references for examples). The degree of polarization tends to do the same. This strongly implies that the magnetic field is at least partially disordered, which is consistent with turbulence. Although turbulence can also cause rotations of $\chi$, and therefore explain the main features of the time variability of the polarization vector, the observed rotations are often much smoother than expected from turbulence. In addition, the timing of the rotations often appears non-random, such as just before the peak of a flare, contrary to the behavior of a strictly stochastic process. The ultimate test of rotations caused by the geometry or rotation of the flow in the jet is that the rotation in a given blazar should always be in the same direction, clockwise or counterclockwise. This seems to be the case for PKS~1510$-$089 \cite[(cf. the rotations reported by Marscher et al. 2010 with those in]{M10} \cite[Aleksi\'c et al. 2014)]{Alek14}. \section{Turbulence in Blazar Jets} Since the rapid fluctuations in linear polarization suggest the presence of turbulence, the author \cite[(Marscher 2014)]{M14} has been developing a numerical model (TEMZ --- Turbulent Extreme Multi-Zone) that attempts to explain the multi-waveband flux and polarization variations of blazars. The key features of the model include:\\ 1. Turbulent ambient jet plasma, which accelerates electrons with a power-law energy distribution through the second-order Fermi process and possibly magnetic reconnections. The turbulence is realized in the model by dividing the jet into many cylindrical cells, the number of which is selected to match the degree of polarization.\\ 2. A conical standing shock that further accelerates electrons, with the amplification in energy depending on the angle between the magnetic field of the turbulent cell and the shock normal \cite[(e.g., Summerlin \& Baring 2012)]{sb12}. \cite[Cawthorne (2006)]{caw06} and \cite[Cawthorne et al. (2013)]{caw13} have found that the polarization pattern of the ``core'' of some blazars, observed with the VLBA at 43 GHz, matches the predictions of turbulent plasma compressed in a standing conical shock.\\ 3. The dependence of the particle acceleration on magnetic field direction reduces the volume filling factor of the emission at the highest frequencies for both synchrotron and inverse Compton radiation. This in turn causes higher amplitude, shorter time-scale variations at the higher frequencies. Since the number of turbulent cells $N(\nu)$ that radiate at higher frequencies $\nu$ is more limited, the mean polarization also increases with frequency.
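A toy two-dimensional version of a turbulent-cell model with a partially ordered field component can anticipate the helical-to-total field-ratio test discussed in the next section. The sketch below is only an illustration added here, not the TEMZ code: the cell count, churn rate, sky-projected geometry, and emissivity weight $\propto B_\perp^2$ are all simplifying assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
PI_MAX, N_CELLS, N_STEPS = 0.72, 100, 2000

def pol_series(f_ord, psi0_deg=85.0):
    """Pi(t) for an ordered field fraction f_ord mixed with turbulent cells."""
    psi0 = np.radians(psi0_deg)                  # sky angle of ordered component
    theta = rng.uniform(0.0, 2*np.pi, N_CELLS)   # turbulent field directions
    pol = np.empty(N_STEPS)
    for t in range(N_STEPS):
        idx = rng.integers(0, N_CELLS, 3)        # replace a few cells per step
        theta[idx] = rng.uniform(0.0, 2*np.pi, 3)
        # Sky-plane field per cell: ordered plus turbulent unit vectors.
        bx = f_ord*np.sin(psi0) + (1.0 - f_ord)*np.sin(theta)
        by = f_ord*np.cos(psi0) + (1.0 - f_ord)*np.cos(theta)
        w = bx**2 + by**2                        # emissivity weight ~ B_perp^2
        ang = np.arctan2(bx, by)   # (the 90-deg EVPA offset cancels in |Pi|)
        q = np.sum(w*np.cos(2*ang)) / np.sum(w)
        u = np.sum(w*np.sin(2*ang)) / np.sum(w)
        pol[t] = PI_MAX*np.hypot(q, u)
    return pol

for f in (0.0, 0.4, 0.8, 0.99):
    s = pol_series(f)
    print(f"ordered fraction {f:4.2f}:  <Pi> = {s.mean():.3f}  "
          f"sigma(Pi) = {s.std():.3f}")
\end{verbatim}

In this crude toy, the polarization fluctuations are suppressed only once the ordered component carries most of the field; the full TEMZ runs shown in Figure \ref{fig1} probe the same question with proper radiative transfer.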
\begin{figure}[b] \begin{center} \includegraphics[width=8cm]{marscher_f1a.ps} \vspace{8cm} \includegraphics[width=8cm]{marscher_f1b.ps} \vspace*{-8.0 cm} \caption{\textit{Top:} Optical and $\gamma$-ray light curves during a 25-day time interval from four runs of the TEMZ code with different ratios of helical (pitch angle of $85^\circ$, hence nearly toroidal) to total (helical + turbulent) magnetic field, as indicated (black \& white version --- solid: 0.99, dotted: 0.8, short-dashed: 0.4, long-dashed: 0). Parameters were selected to be similar to the physical parameters of BL Lacertae. In this run, the seed photons for inverse Compton scattering are synchrotron and synchrotron self-Compton (SSC) radiation emitted by relatively slowly moving plasma in a Mach disk. \textit{Bottom:} Polarization vs.\ time for the same runs.} \label{fig1} \end{center} \end{figure} \section{The Big Picture of a Blazar Jet} A rough sketch of a blazar jet might then consist of a helical magnetic field in the acceleration/collimation zone out to parsec scales, beyond which turbulence (plus, perhaps, magnetic reconnections) dominates, possibly alongside boundary layers where velocity shear stretches the magnetic field in the longitudinal direction. Both moving shocks from major disturbances in the input energy and/or velocity of the flow and standing shocks from pressure mismatches between the jet and external medium, encounters with dense external gas after changes in jet direction, or collisions with clouds, compress the magnetic field and further accelerate the radiating particles. This general picture might be capable of producing the emission features that we see, including rapid variations of flux and polarization out to parsec scales. Since there is evidence that either helical or toroidal-plus-longitudinal magnetic fields can be present on parsec scales, the question arises as to whether turbulent and helical fields can co-exist in the same location. Since an ordered field should decrease the level of variability below that observed, one might expect that the ordered component would need to be a small fraction of the total field in blazars with rapidly variable polarization. In order to test this, the author has run some TEMZ simulations with various ratios of helical to total (helical + turbulent) field. The resulting flux and polarization versus time curves are displayed in Figure \ref{fig1}. As can be seen, the quenching of the variability is not apparent until the helical component makes up considerably more than 50\% of the total field. The conclusion is that less than 50\% of the field needs to be disordered to explain --- qualitatively, at least --- the variability properties of blazars. The author plans to use statistical tests to make the comparison between the model and data more quantitative. \section{Conclusions} A combined international effort is now producing optical polarization data with sufficient time coverage to follow variations in dozens of blazars. Even more data would be better, since events such as rotations of the polarization vector are easy to miss when the sampling is sparse. We are now identifying patterns in the data --- some apparently systematic, others apparently random --- that we can interpret in terms of physical properties of the jets. Further development of existing and new theoretical models is needed to facilitate this. The author welcomes competition to his own TEMZ model!
This research is supported in part by NASA through Fermi Guest Investigator grants NNX11AQ03G, NNX12AO79G, NNX13AP06G, and NNX14AQ58G.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Fast and accurate processing of molecular information is essential for proper control, growth and regulation in the living cell. In carrying out essential tasks like protein synthesis, ribosomes are known to operate at error levels of $10^{-3}{-}10^{-4}$ incorrect peptide bindings per cycle~\cite{RibosomeErrEcoli,RibosomeTranscriptionErrRate,RibosomeErrors}, with even lower error rates for polymerases carrying out RNA transcription ($10^{-5}{-}10^{-6}$)~\cite{TranscriptionErrEukaryote,TranscriptionErrEcoli,TranscriptionErrEcoli2} and DNA replication ($10^{-7}{-}10^{-9}$)~\cite{PolymeraseErr,TranslationErrEColi,TranslationMutationRate}. This implies that the involved molecular machines must readily discriminate and preferentially bind the correct substrates over very similar, yet incorrect, competing substrates. However, simple energy-discrimination binding models imply prohibitively large binding-energy differences among the pool of analogous substrates, and fail to predict the high level of fidelity observed. Instead, as originally proposed by Hopfield~\cite{HopfieldProofReading} and Ninio~\cite{NinioProofReading}, high accuracy may be achieved through kinetic proofreading (KPR), a mechanism that couples an effectively irreversible process to an intermediate reaction step which can then preferentially react or discard substrates --- via kinetic discrimination --- further down in the reaction pathway. In this manner, the original discrimination step that relies on binding energy differences is amplified through a second round of substrate verification. This mechanism has been confirmed experimentally for a variety of polymerase and ribosome systems~\cite{ProofReadingPolyRev,ProofReadingExperimental,ProofReadingExpRibosome}, and later recognized in signal transduction~\cite{KineticProofSignaling,KineticProofReadingSignalingRev} and homologous recombination~\cite{KPRRecombination}. While KPR facilitates high fidelity synthesis, it imparts a significant energy cost to the overall process. Furthermore, the nanometric scale of these molecular systems renders them vulnerable to strong thermal and active fluctuations from the cellular environment, suggesting performance limits set by fundamental thermodynamic fluctuation-dissipation trade-offs~\cite{NonEqDynLivingSysRev,MolecularMachinesRev,ThermodynamicsNanoScale}. Indeed, recent work on generic KPR models linked the amount of energy dissipated to the loss of configurational entropy during accurate product formation, and found that the efficiency of this process decreased rapidly for increasing levels of accuracy~\cite{KineticProofReadingEfficiencyTradeoff}. More generally, the accuracy of the copying process was found to be fundamentally tied to the amount of excess work dissipated by the system, with higher dissipation corresponding to higher accuracy, independent of the underlying system topology~\cite{KineticProofReadingErrEntropy}. In analyzing KPR processes, one typically coarse-grains the full complex biochemical system into a discrete set of states connected by stochastic transitions approximated as a Markov process. However, even under these simplified dynamics, thermodynamics places an inherent energetic cost on the output precision of an observed quantity.
This seminal result has been dubbed the thermodynamic uncertainty relation (TUR), which is expressed in terms of the trade-off measure ${\mathcal{Q}}$ as \begin{equation} {\mathcal{Q}} \equiv \dot{Q} t \epsilon^2(t) \ge 2 {k_{\mathrm{B}}} T~, \label{eq:Q} \end{equation} where $T$ is the temperature, ${k_{\mathrm{B}}}$ is the Boltzmann constant, $t$ is time, $\dot{Q}$ is the energy dissipation rate, and $\epsilon^2(t) = \mathrm{Var}X /\langle X \rangle^2$ is the squared relative uncertainty (the variance over the squared mean) of a current observable $X$~\cite{ThermUncertainty}. In short, Eq. \ref{eq:Q} implies that a reduction in the uncertainty of an observable must be accompanied by a matching increase in energy consumption. Optimal trade-off is achieved in the limit of vanishing energy use ({\it{i.e.,~}} equilibrium) and with normally-distributed heat dissipation~\cite{ThermUncGaussianHeat}. The TUR was first shown to hold in the limit of linear response, and later proven to hold for any Markov jump process in the short or long time limits~\cite{ThermUncertaintyLongtProof,ThermUncertaintyShorttProof}. More recently, this relation has been shown to follow for currents from a generalized fluctuation theorem framework~\cite{ThermUncFlucTheorem}. In the context of enzymatic kinetics, the TUR has been used to infer performance boundaries in molecular systems such as motors~\cite{ThermUncFanoFactor,ThermUncMolMotors,StochThermRevMolMotorExp}. For instance, in a study by Hyeon and Hwang~\cite{KinesinThermUnc}, the transport efficiency of microtubule protein motors was defined in terms of ${\mathcal{Q}}$ and analyzed using experimental data, showing that ${\mathcal{Q}}$ is sub-optimized within physiological ATP fuel levels and cargo loadings. Notably, while the studied wild-type kinesin protein operates near a minimum in ${\mathcal{Q}}$, the defective mutant was about three times less efficient and did not display a minimum. Clearly, the TUR not only places fundamental constraints on system performance, but highlights the degree to which present-day molecular systems may have accommodated to this limitation.
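Before turning to the enzymatic networks, it is useful to see Eq. \ref{eq:Q} at work in the simplest driven Markov process, a biased random walk. The following minimal Python sketch (an illustration added here; the rates $p$ and $q$ are arbitrary choices) estimates ${\mathcal{Q}}$ from sampled trajectories and shows that it stays above the $2{k_{\mathrm{B}}} T$ bound:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p, q = 2.0, 0.5     # forward/backward jump rates; affinity ln(p/q) per jump

def net_jumps(t_max):
    """Net displacement of a driven random walker up to time t_max."""
    t, x = 0.0, 0
    while True:
        t += rng.exponential(1.0 / (p + q))
        if t > t_max:
            return x
        x += 1 if rng.random() < p / (p + q) else -1

t_max = 100.0
X = np.array([net_jumps(t_max) for _ in range(2000)], dtype=float)
eps2 = X.var() / X.mean() ** 2         # squared relative uncertainty
Qdot = (p - q) * np.log(p / q)         # dissipation rate, kB*T per unit time
print(f"Q = {Qdot * t_max * eps2:.2f} kBT   (TUR bound: 2 kBT)")
# Exact long-time value for this walk: ln(p/q)*(p+q)/(p-q) ~ 2.31 kBT.
\end{verbatim}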
While extensive kinetic analyses of copying fidelity and proofreading mechanisms have been advanced in various contexts~\cite{RibosomeRateAccTradeOff,KineticProofReadingGenerilizedMicroTubule,KineticProofReadingSensing2,ThermodynamicsMolecularCopying,HopfieldAsymtopicSpeedEntropyErrorLimit,KPRAssemblyRecA,KPRCascade}, direct TUR analysis of experimental KPR systems is surprisingly lacking. To this end, we consider a recent work by Banerjee, Kolomeisky and Igoshin~\cite{KineticProofBanerjee} on the KPR systems of the E. coli ribosome and the T7 bacteriophage polymerase, and analyze these circuits in the context of the TUR. In~\cite{KineticProofBanerjee}, the reaction networks were adapted to a standard Hopfield-Ninio KPR model using experimental values of the kinetic rate constants. Taking the physiological state as a reference point, they investigated the speed-accuracy trade-off as a function of the kinetic rates, finding that speed is prioritized over error rate. In a follow-up work on the same systems, they found that speed is also prioritized over energy dissipation and output noise~\cite{KineticProofMallory}. In this paper, we offer a complementary view on the existing body of analysis, focusing on the fundamental implications of the TUR on the \emph{synthesis process} in the KPR networks of the E. coli ribosome and the T7 DNA polymerase. We show that, in general, decreasing error rates and mean production times coincide with an underlying effective \emph{reduced network} of reaction steps that minimizes the TUR measure ${\mathcal{Q}}$ of production. Approaching this reduced network not only provides the best energetic trade-off between production precision and energy dissipation through ${\mathcal{Q}}$, but also decouples the operational speed from the dispersion of production. As a result, this regime minimizes trade-off constraints between the mean production time and the error rate for a given set of control parameters and a fixed energy budget. Together, we show that approaching the reduced network regime corresponds to enhanced global performance of the studied ribosome and polymerase systems. \section{Kinetic Proofreading Circuits} \label{sec:Methods} \begin{table*}[!tbh] \caption{Kinetic model parameters as originally reported by Banerjee et al.~\cite{KineticProofBanerjee}. Kinetic rate constants $k^{(-)}_{i,R}$ are reported in s$^{-1}$ and discrimination factors $f_{i}$ are dimensionless by definition. Rate constants $k^-_3$ and $k^-_p$ and discrimination factors $f^-_3$ and $f^-_p$ are derived from the constraint relations of Eqs.~\ref{eq:kconstraints} and \ref{eq:fconstraint}, respectively.} \label{tab:parameters} \begin{ruledtabular} \begin{tabular}{lcccc} Parameters & Ribosome WT & Ribosome Acc & Ribosome Err & T7 polymerase \\ \hline $k_{1,R}$ & $40 $ & $27$ & $37$ & $250$ \\ $k^-_{1,R}$ & $0.5$ & $0.41$ & $0.43$ & $1$ \\ $k_{2,R}$ & $25 $ & $14$ & $31$ & $0.2$ \\ $k^-_{2,R}$ & $10^{-3}$ & $10^{-3}$ & $10^{-3}$ & $700$ \\ $k_{3,R}$ & $8.5\times10^{-2}$ & $4.8\times 10^{-2}$ & $7.7\times 10^{-2}$ & $900$ \\ $k_{p,R}$ & $8.415$ & $4.752$ & $7.623$ & $250$ \\ $f_{1}$ & $0.675$ & $0.926$ & $0.973$ & $8\times10^{-6}$\\ $f^-_{1}$ & $94$ & $112.2$ & $9.3$ & $1\times10^{-4}\footnote{Value chosen from the same experimental limits to ensure positive values of $J_{pW}$.}$\\ $f_{2}$ & $4.8\times10^{-2} $ & $3.5\times10^{-2}$ & $0.126$ & $11.5$\\ $f^-_{2}$ & $1$ & $1$ & $1$ & $1$\\ $f_{3}$ & $7.9$ & $10.34$ & $7.65$ & $1$\\ $f_{p}$ & $4.2\times10^{-3}$ & $7.4\times10^{-4}$ & $4.1\times10^{-3}$ & $4.8\times10^{-5}$\\ \end{tabular} \end{ruledtabular} \end{table*} \begin{figure}[ht!] \includegraphics[scale=0.20]{networks.eps} \caption{Chemical reaction networks for (a) T7 DNA polymerase and (b) E. coli ribosome. Half arrows indicate reversible forward and backward paths for both the right (R) and wrong (W) cycles. Kinetic rates are labeled by $k^{(-)}_{i,W/R}$, where $i=1,2,3,p$ for each relevant path. Note that transitions through $p$, shown as green (curved gray) half-arrows, reset the enzyme to its initial state after product formation. Blue (straight gray) half-arrows indicate the proofreading transitions. Shown to the right of the black arrows are the reduced kinetic cycles (RC), where only the relevant paths and rates leading to correct product formation are included.} \label{fig:networks} \end{figure} We study the standard Hopfield-Ninio KPR model as adapted to the T7 DNA polymerase and the E. coli ribosome by Banerjee et al.~\cite{KineticProofBanerjee}, using measured values of the kinetic rate constants. These networks, shown in Figure \ref{fig:networks}, capture the main steps of nucleotide ligation in the polymerase and peptide elongation in the ribosome. In particular, these kinetic pathways model the steady-state processive action of the polymerase or ribosome and consist of the following three major steps.
For the T7 DNA polymerase, the cycle starts with the polymerase bound to the growing DNA strand in state \textbf{E}, where it adds either the correct (R) or incorrect (W) deoxy-NTP molecule to the growing strand and forms the \textbf{ER/W} state. The system can then ligate another dNTP molecule (path p) to restart the cycle, or shift the strand to the polymerase exo-site \textbf{ER*/W*}, where the dNTP is hydrolyzed and removed, returning the system to state \textbf{E}. Similarly for the ribosome cycle in (b), state \textbf{E} represents the mRNA template bound in the growing ribosome poly-peptide complex. Binding of the cognate (R) or non-cognate (W) aminoacyl-tRNA, the EF-Tu elongation factor and GTP leads to the second state \textbf{ER/W}. Hydrolysis of the GTP molecule brings the complex to state \textbf{ER*/W*}, where the amino acid can be added to the growing polypeptide strand (link p, green) or discarded (link 3, blue), which restarts the cycle. Note that the main difference in the topology of the cycles is that the KPR and production steps follow the first intermediate \textbf{ER/W} in the polymerase, whereas in the ribosome these occur only following the second, hydrolyzed intermediate \textbf{ER*/W*}. All underlying kinetic rates $k^{(-)}_{i,W/R}$ for these cycles are assumed to be reversible and first order ({\it{i.e.,~}} measured in s$^{-1}$ units) at constant substrate concentration under physiological conditions. Additionally, we maintain constant chemical potential differences $\Delta\mu$ of the underlying chemical reactions in both R and W cycles. This requirement constrains the rates as \begin{equation} \prod_{i=1}^{N} \frac{k_{i,W/R}}{k^-_{i,W/R}} = e^{\Delta\mu}~, \label{eq:kconstraints} \end{equation} where $\Delta\mu= {\Delta\mu_\mathsmaller{\mathrm{KPR}}}$ for the proofreading cycles ($N=3$) or $\Delta\mu_p$ for the production cycle ($N=p$), and chemical potentials are hereafter measured in ${k_{\mathrm{B}}} T$ units. We take the approximate physiological values of the chemical potential differences, $\Delta\mu_{p} \sim 26 {k_{\mathrm{B}}} T$ for poly-peptide elongation, $\Delta\mu_p \sim 11 {k_{\mathrm{B}}} T$ for nucleotide ligation and ${\Delta\mu_\mathsmaller{\mathrm{KPR}}} \sim 20 {k_{\mathrm{B}}} T$ for the hydrolysis KPR step in both systems~\cite{KineticProofMallory}. For convenience, we also define discrimination factors $f_i$, which relate the W and R rate constants as $f^{(-)}_i=k^{(-)}_{i,W}/k^{(-)}_{i,R}$ and represent the biased behavior of the enzyme when bound to either the right or the wrong substrate. These factors are similarly constrained through Eq. \ref{eq:kconstraints}, \begin{equation} \label{eq:fconstraint} \prod_{i=1}^{N} \frac{f_i}{f^{-}_i} = 1 ~. \end{equation} Values for all $k_{i,R/W}^{(-)}$ and $f^{(-)}_i$ are as adapted by Banerjee et al.~\cite{KineticProofBanerjee} from experimental sources~\cite{KineticRiboExperiment,KineticPoly1,KineticPoly2,KineticPoly3} and are listed in Table \ref{tab:parameters} above.
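As a concrete illustration of these constraints, the derived entries of Table \ref{tab:parameters} for the WT ribosome can be reproduced in a few lines. The sketch below is added here for illustration; the other parameter sets follow analogously once the cycle membership of each topology is accounted for.

\begin{verbatim}
import numpy as np

# WT ribosome inputs from Table I (rates in 1/s) and the chemical
# potential differences (in kB*T) quoted in the text.
k1, k1m, k2, k2m, k3, kp = 40.0, 0.5, 25.0, 1e-3, 8.5e-2, 8.415
f1, f1m, f2, f2m, f3, fp = 0.675, 94.0, 4.8e-2, 1.0, 7.9, 4.2e-3
dmu_kpr, dmu_p = 20.0, 26.0

# Cycle constraint (eq:kconstraints): prod_i k_i/k_i^- = exp(dmu).
k3m = k1 * k2 * k3 / (k1m * k2m * np.exp(dmu_kpr))   # proofreading cycle
kpm = k1 * k2 * kp / (k1m * k2m * np.exp(dmu_p))     # production cycle
# Discrimination constraint (eq:fconstraint): prod_i f_i/f_i^- = 1.
f3m = f1 * f2 * f3 / (f1m * f2m)
fpm = f1 * f2 * fp / (f1m * f2m)
print(f"k3- = {k3m:.2e} 1/s,  kp- = {kpm:.2e} 1/s")
print(f"f3- = {f3m:.2e},      fp- = {fpm:.2e}")
\end{verbatim}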
For the purpose of our thermodynamic uncertainty analysis, we calculate the TUR measure ${\mathcal{Q}}$ for \emph{correct product transitions} across the path $p$ in the ribosome and polymerase as follows. The relative uncertainty $\epsilon^2(t)$ across this production path can be found from the mean transition current $J_{p}^{R}= \langle X \rangle /t$ and its diffusion constant $D_{p}^{R}=\mathrm{Var}X/2t$, so that \begin{equation} \label{eq:epsilon} \epsilon^2(t) = \mathrm{Var}X/\langle X \rangle^2 = 2 D_{p}^{R} / (J_{p}^{R})^2 t~. \end{equation} Using the definition for ${\mathcal{Q}}$ in Eq.~\ref{eq:Q} and $\epsilon$ above, we thus get \begin{equation} \mathcal{Q} = 2 \dot{Q} D_{p}^{R}/(J_{p}^{R})^2~, \end{equation} where the energy dissipation $\dot{Q} = {k_{\mathrm{B}}} T \sigma$ is defined in terms of the entropy production rate $\sigma$ as \begin{equation} \label{eq:sigma} \sigma = {J_\mathsmaller{\mathrm{KPR}}} {\Delta\mu_\mathsmaller{\mathrm{KPR}}} + J_{p} \Delta\mu_p~. \end{equation} The currents that determine $\sigma$ in Eq.~\ref{eq:sigma} are $J_{p}=J_{p}^{R}+J_{p}^{W}$, the production current for both R and W cycles, and ${J_\mathsmaller{\mathrm{KPR}}}=J_{i}^{R}+J_{i}^{W}$, the discarded substrate current from kinetic proofreading ($i=2$ in the polymerase, $i=3$ in the ribosome). We also calculate the mean production time $\tau$, defined as the average time required to observe one net product addition onto the growing strand. It is given by the production rate as \begin{equation} \tau \equiv 1/J_{p}^{R}~. \end{equation} This definition of time is equivalent to a mean passage time in the limit of irreversible product formation and a vanishing incorrect production rate $k_{p,W}$. Similarly, the error $\eta$ is defined as the fraction of incorrect substrate units added onto a growing peptide chain or DNA strand, \begin{equation} \eta = \frac{J_{p}^{W}}{J_{p}^{W}+J_{p}^{R}}~, \end{equation} where $J_{p}^{W}$ is the current across the $p$ link in the wrong W cycle. The values of the currents, $J_{p}^{R}$ and $J_{p}^{W}$, and the diffusion constant $D_{p}^{R}$, are calculated using Koza's steady-state method~\cite{KozaMethod}, as demonstrated extensively in other studies~\cite{ThermUncertainty,KineticProofMallory,ThermUncFanoFactor}. In addition to the full reaction networks, it is instructive to consider idealized cycles consisting of only states and paths leading to correct product formation (shown on the right side in Figure \ref{fig:networks}). These reduced cycles (RC) represent, by construction, perfect performance of the underlying protein systems. The RC circuits allow direct comparison of current-dependent metrics like ${\mathcal{Q}}$ and $\tau$ between the actual and ideal system. Considering that $k_i \gg k^-_i$ for the systems studied here, the production current $J_{p}^{R}$ in these idealized cycles is a simplified function of the forward rate constants $k_i$ of the form $J_{p}^{R}\sim\prod_i^n k_i$. One can thus define the forward rate constants in terms of a single control parameter $k$ as $k_i \equiv a_i k$ where $a_i \equiv k^{\mathrm{phys}}_{i}/k^{\mathrm{phys}}_{1}$ are the ratios of the rate constants at physiological values. It follows that the current in the idealized circuit, $J_{p}^{R} \sim k^n \prod_i^n a_i$, is a monotonic function (a power) of $k$ that conserves the rate constant proportionality of the original system. The physiological state is matched when $k=k^{\mathrm{phys}}_{1}$. As we later show, operating near the regime of the ideal RC cycles implies minimal ${\mathcal{Q}}$ values, and affords enhanced accuracy/speed trade-off performance.
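As an independent cross-check on the Koza-method results quoted below, the production statistics can also be estimated by direct Gillespie simulation of the full network. The sketch below (added here for illustration; the simulated time span and number of trajectories are arbitrary choices, so the estimates carry sampling error) does this for the WT ribosome parameters of Table \ref{tab:parameters}:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# WT ribosome rates (1/s) from Table I; dependent reverse rates from the
# cycle constraints with dmu_KPR = 20 and dmu_p = 26 (in kB*T).
k1, k1m, k2, k2m, k3, kp = 40.0, 0.5, 25.0, 1e-3, 8.5e-2, 8.415
f1, f1m, f2, f2m, f3, fp = 0.675, 94.0, 4.8e-2, 1.0, 7.9, 4.2e-3
dmu_kpr, dmu_p = 20.0, 26.0
k3m = k1 * k2 * k3 / (k1m * k2m * np.exp(dmu_kpr))
kpm = k1 * k2 * kp / (k1m * k2m * np.exp(dmu_p))
f3m, fpm = f1 * f2 * f3 / f1m, f1 * f2 * fp / f1m    # using f2- = 1

# States: 0 = E, 1 = ER, 2 = ER*, 3 = EW, 4 = EW*.
# Edge format: (from, to, rate, counter, increment); counters accumulate
# net transitions over the production (p) and proofreading (3) links.
edges = [
    (0, 1, k1, None, 0), (1, 0, k1m, None, 0),
    (1, 2, k2, None, 0), (2, 1, k2m, None, 0),
    (2, 0, k3, "3R", 1), (0, 2, k3m, "3R", -1),
    (2, 0, kp, "pR", 1), (0, 2, kpm, "pR", -1),
    (0, 3, f1 * k1, None, 0), (3, 0, f1m * k1m, None, 0),
    (3, 4, f2 * k2, None, 0), (4, 3, f2m * k2m, None, 0),
    (4, 0, f3 * k3, "3W", 1), (0, 4, f3m * k3m, "3W", -1),
    (4, 0, fp * kp, "pW", 1), (0, 4, fpm * kpm, "pW", -1),
]
out = {s: [e for e in edges if e[0] == s] for s in range(5)}
tot = {s: sum(e[2] for e in out[s]) for s in range(5)}

def trajectory(t_max):
    counts, s, t = {"pR": 0, "pW": 0, "3R": 0, "3W": 0}, 0, 0.0
    while True:
        t += rng.exponential(1.0 / tot[s])
        if t > t_max:
            return counts
        r = rng.random() * tot[s]
        for e in out[s]:          # pick an outgoing edge with prob ~ rate
            r -= e[2]
            if r < 0.0:
                break
        s = e[1]
        if e[3] is not None:
            counts[e[3]] += e[4]

t_max, n = 100.0, 400
runs = [trajectory(t_max) for _ in range(n)]
XpR = np.array([c["pR"] for c in runs], dtype=float)
JpR, DpR = XpR.mean() / t_max, XpR.var() / (2.0 * t_max)
JpW = np.mean([c["pW"] for c in runs]) / t_max
Jkpr = np.mean([c["3R"] + c["3W"] for c in runs]) / t_max
sigma = Jkpr * dmu_kpr + (JpR + JpW) * dmu_p   # entropy production, kB/s
Q = 2.0 * sigma * DpR / JpR**2                 # TUR measure, in kB*T
print(f"tau = {1.0/JpR:.2f} s, eta = {JpW/(JpR + JpW):.1e}, Q = {Q:.0f} kBT")
\end{verbatim}

With longer runs, these estimates should converge toward the steady-state values reported below (${\mathcal{Q}} \approx 48\,{k_{\mathrm{B}}} T$ for the WT ribosome).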
\section{Results and Discussion} \label{sec:Results} \begin{table} \caption{Values of the calculated TUR measure ${\mathcal{Q}}$ (Eq.~\ref{eq:Q}) and score ratios relative to its lower bound ${\mathcal{Q}}_\mathrm{lh}$ for a given number of states $N$ and constant $\Delta\mu$, as defined in Eq.~\ref{eq:Qbound}. For the ribosomes, $N=3$ and $\Delta\mu=\Delta\mu_p=26 {k_{\mathrm{B}}} T$. For the T7 polymerase, $N=2$ and $\Delta\mu=\Delta\mu_p=11 {k_{\mathrm{B}}} T$.} \label{tab:Qscores} \begin{ruledtabular} \begin{tabular}{lcc} & ${\mathcal{Q}}$ (${k_{\mathrm{B}}} T$) & ${\mathcal{Q}}/{\mathcal{Q}}_\mathrm{lh}$ \\ \hline Err Ribosome & 137 & 16 \\ WT Ribosome & 48 & 5.6 \\ Acc Ribosome & 28 & 3.2 \\ T7 Polymerase & 7.1 & 1.3 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure*}[htbp!] \includegraphics[scale=0.26]{currents.eps} \caption{Product cycle current $J_{p}^{R}$ as a function of the generalized rate constant $k$, defined as $k_{1R}=a_1 k$, $k_{2R}=a_2 k$, $k_{pR}=a_p k$ with $a_1=1$, $a_2=k_{2R}^{\mathrm{phys}}/k_{1R}^{\mathrm{phys}}$ and $a_p=k_{p}^{\mathrm{phys}}/k_{1R}^{\mathrm{phys}}$, where $k_{i}^{\mathrm{phys}}$ are the physiological values. The physiological points are shown as dots. (a) The current of correct substrate production $J_{p}^{R}$ for the wild-type ribosome (WT, blue top), more erroneous (Err, red bottom) and more accurate (Acc, yellow middle) mutants, and the ideal RC current (dashed line). (b) Same as (a) but for the T7 polymerase (solid line). The difference between the actual T7 current and the RC current is of the order of $1\%$ and not noticeable at this scale.} \label{fig:currents} \end{figure*} \emph{The approach of KPR circuits to the TUR limit.---} We begin by reporting the physiological values of ${\mathcal{Q}}$ for product transitions in the ribosome and polymerase systems, as shown in Table \ref{tab:Qscores}. Here, the T7 polymerase achieves the lowest value of ${\mathcal{Q}}$, about seven times smaller than that of the native WT ribosome, with the more accurate mutant Acc closer to the limit than either WT or the less accurate Err. In order to compare these results more meaningfully, however, we must account for the underlying energy cost of cycle operation, which differs between the T7 DNAP and ribosome systems. This can be achieved by considering the reduced cycles (RCs) consisting of only states and transitions leading to correct product formation, as introduced in section \ref{sec:Methods} and Figure \ref{fig:networks}. These RCs were extracted from the full network of states and represent an idealized limit where only the correct substrate is processed in the absence of any competing paths. Operating at the RC limit therefore provides the best overall enzyme performance for a fixed energy budget. In particular, the RC limit implies a unicycle regime for which a lower bound for ${\mathcal{Q}}$ is known to be \begin{equation} {\mathcal{Q}} \ge {\mathcal{Q}}_\mathrm{lh} \equiv 2 {k_{\mathrm{B}}} T\left( \frac{\Delta\mu}{2 N} \coth \frac{\Delta\mu}{2 N} \right ) \label{eq:Qbound} \end{equation} where $N$ denotes the number of states in the network, and $\Delta\mu$ is the overall change in Gibbs free energy of the underlying chemical reactions per cycle (in ${k_{\mathrm{B}}} T$ units)~\cite{ThermUncMultiCycles,StochThermRevMolMotorExp}. This hyperbolic lower bound ${\mathcal{Q}}_\mathrm{lh}$ is achieved for a system with uniform forward and backward rate constants, and reduces to the minimal value of $2 {k_{\mathrm{B}}} T$ in the vanishing $\Delta\mu$ limit. Thus, the hyperbolic bound is pertinent for far-from-equilibrium driven processes such as KPR and represents the best efficiency attainable given an energy input.
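Evaluating Eq.~\ref{eq:Qbound} for the two energy budgets reproduces the ${\mathcal{Q}}_\mathrm{lh}$ normalization behind the score ratios of Table \ref{tab:Qscores} (a short illustrative check; the values agree up to rounding):

\begin{verbatim}
import numpy as np

def Q_lh(dmu, N):
    """Hyperbolic lower bound on Q, in kB*T (eq:Qbound)."""
    x = dmu / (2.0 * N)
    return 2.0 * x / np.tanh(x)

# (system, dmu_p in kB*T, states N, physiological Q from Table II)
for name, dmu, N, Q in [("WT ribosome", 26.0, 3, 48.0),
                        ("T7 polymerase", 11.0, 2, 7.1)]:
    print(f"{name}: Q_lh = {Q_lh(dmu, N):.1f} kBT, "
          f"Q/Q_lh = {Q / Q_lh(dmu, N):.1f}")
\end{verbatim}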
The ratios ${\mathcal{Q}}/{\mathcal{Q}}_\mathrm{lh}$ of the physiological values shown in Table \ref{tab:Qscores} are therefore normalized optimization scores, for a specific energetic constraint, of either the polymerase or ribosome systems. Markedly, the polymerase operates close to the TUR limit at ${\mathcal{Q}}/{\mathcal{Q}}_\mathrm{lh} \simeq 1.3$, while the ribosomes are $3-16$ times further away, even after accounting for the specific energy cost of the underlying chemical transitions. One can obtain a deeper appreciation of the score ratios ${\mathcal{Q}}/{\mathcal{Q}}_\mathrm{lh}$ by comparing the full enzymatic cycles to their respective RC limits. To achieve this, we define a collective rate constant $k$ which governs the product output current $J_{p}^{R} \sim k^n$ in RC networks, and from which ${\mathcal{Q}}$ and other performance metrics are calculated as detailed in section \ref{sec:Methods}. Consequently, $k$ serves as a control parameter that allows for direct performance comparison between actual and idealized RC systems. Figure \ref{fig:currents} presents $J_{p}^{R}$ as a function of $k$ for both RCs and the full ribosome and polymerase systems. As seen, the ideal current increases monotonically with $k$, while the actual current is non-monotonic for the ribosome systems and nearly indistinguishable from RC for the T7 polymerase. These results are a first indication that the polymerase is indeed working at virtually reduced-network conditions, hence the lower ${\mathcal{Q}}$ value, while the ribosomes only approach this limit at longer operation times (lower currents). \begin{figure*}[htbp!] \includegraphics[scale=0.23]{QvsTime.eps} \caption{Parametric plots of the normalized TUR measure $q \equiv {\mathcal{Q}}/\Delta\mu_p$ vs. mean production time $\tau$ ($s$) as a function of $k$ for decreasing values of the binding discrimination factor $f_1$ (a,c) and increasing values of the proofreading discrimination factor $f_3$ (b,d), for the wild-type ribosome and the DNA polymerase, respectively. Each corresponding factor $f_i$ is scaled from its physiological value $f^{\mathrm{phys}}_{i}$ as shown in the gradient scale on top. Dashed black lines indicate the ideal RC limit. Dotted gray lines indicate curves of constant $k$ scaled from its physiological value in powers of $2$.} \label{fig:QvTfs} \end{figure*} The proximity of these systems to their lower TUR limits motivates a further examination of the full circuits and their corresponding RC limits. To this end, we consider two operating cases that either reduce or preserve the full reaction network, respectively: (a) perfect binding discrimination, corresponding to $f_1 \to 0$, and (b) perfect proofreading discrimination, corresponding to $f_3 \to \infty$. In case (a) the full system gradually reduces to the RC limit by omitting the $W$ branch entirely. In contrast, case (b) preserves the overall system topology while minimizing the impact of incorrect synthesis in the W branch. As shown below, comparing ${\mathcal{Q}}$, the error rate $\eta$ and the mean production time $\tau$ for either case allows us to see how the approach to the RC limit governs the performance of the full systems. \emph{Approaching the RC limit decouples the TUR measure ${\mathcal{Q}}$ from the mean production time $\tau$.---} To allow for a normalized comparison of the actual systems with their RC cycles, independent of system topology and energetic cost, we define the normalized TUR measure $q \equiv {\mathcal{Q}}/\Delta\mu_p$.
Figure \ref{fig:QvTfs} shows $q$ for the WT ribosome against the mean production time $\tau$, as a function of $k$, while $f_1$ (case (a)) or $f_3$ (case (b)) is parametrically varied. As seen in (a), the WT ribosome displays a clear trade-off between $q$ and $\tau$ (red line), which quickly attenuates and decouples as the system approaches the RC limit (black dashed line). On the other hand, increasing $f_3$ (b) maintains the trade-off constraint between $q$ and $\tau$ at all points, even when approaching the RC limit. Similar trends are seen for the polymerase in Figure \ref{fig:QvTfs} (c) and (d). From these results we find that a system operating near the RC limit may more readily minimize both the product output noise and the mean production time without being constrained by a strong trade-off relation. \begin{figure*}[htbp!] \includegraphics[scale=0.27]{QvsErr.eps} \caption{(a) Parametric plot of $q \equiv {\mathcal{Q}}/\Delta\mu_p$ vs. error $\eta$ as a function of $f_1$. Lines indicate the T7 DNA polymerase (purple leftmost), WT ribosome (blue bottom), erroneous (red top), and more accurate (yellow middle) ribosome mutants. Thick points indicate physiological values. The dashed line is a guide to the eye showing the value of ${\mathcal{Q}}$ achieved at the ideal RC limit of the WT ribosome, which is approximately the same for all systems shown. (b) Same as (a) but for $f_3 \to \infty$. Inset: WT ribosome scaled from $k=k_{1R}^{\mathrm{phys}}$ to $k=10 k_{1R}^{\mathrm{phys}}$, illustrating that only $f_1 \to 0$ guarantees that ${\mathcal{Q}}$ goes to the RC limit.} \label{fig:QvErrf1} \end{figure*} \emph{The error rate $\eta$ decouples from the TUR measure ${\mathcal{Q}}$ in high fidelity regimes.---} Figure \ref{fig:QvErrf1} shows $q \equiv {\mathcal{Q}}/\Delta\mu_p$ against the error $\eta$ for decreasing $f_1$ (a) and increasing $f_3$ (b) from the measured physiological values. In both cases, $q$ decreases with decreasing $\eta$ and becomes decoupled in the low error regime. However, this asymptotic value of ${\mathcal{Q}}$ only matches the RC limit in the vanishing $f_1$ case (a), but not in case (b), where the Err mutant stays well above the RC value in the $f_3 \to \infty$ limit. The inset illustrates this asymptotic behavior more clearly: there, $k$ has been rescaled from its physiological value to $k=10 k_{1}^{\mathrm{phys}}$ for the WT ribosome, showing that the RC limit can be reached for $f_1\to 0$, but not for $f_3\to\infty$. \begin{figure*}[ht] \includegraphics[scale=0.23]{ErrorVsTime.eps} \caption{Parametric plots of the error $\eta$ versus mean production time $\tau$ for the wild-type ribosome as a function of $k$ for (a) decreasing values of the binding discrimination $f_1$, and (b) increasing values of the proofreading discrimination $f_3$. Each respective factor $f_i$ is scaled from its physiological value $f^{\mathrm{phys}}_{i}$ as shown in the gradient scale on top. Dotted gray lines indicate curves of constant $k$ scaled from its physiological value in powers of $2$.} \label{fig:ErrvsTau} \end{figure*} \emph{Approaching the RC limit relaxes the trade-off constraint between error rate and mean production time.---} It is also instructive to compare the error $\eta$ to the mean production time $\tau$ as a function of $k$ in the context of the idealized RC limit (Figure \ref{fig:ErrvsTau}).
While changing either $f_1$ or $f_3$ parametrically is not expected to decouple the trade-off between these measures, the curves highlight improved performance close to the RC limit, in addition to minimizing ${\mathcal{Q}}$. For instance, while increasing discriminant proofreading $f_3$ naturally improves the accuracy of the system, it ultimately approaches a best trade-off curve for this parameter variation (a Pareto front). In contrast, reducing $f_1$ weakens the trade-off relationship (smaller negative derivatives) while moving these trade-off curves arbitrarily close to the origin by construction (incorrect substrate is never bound). \begin{figure}[htbp!] \includegraphics[scale=0.25]{dissipation.eps} \caption{Parametric plots of the normalized dissipation differences between the actual $\dot{q}=\dot{Q}/\Delta\mu_p$ and the ideal RC limit $\dot{q}_{\mathsmaller{\mathrm{RC}}}=\dot{Q}_\mathsmaller{\mathrm{RC}}/\Delta\mu_p$ versus the mean production time $\tau$ as a function of $k$. Lines indicate the T7 DNA polymerase (purple left), WT ribosome (blue middle), erroneous (red top), and more accurate (yellow bottom) ribosome mutants. The Err mutant plot has been scaled down by a factor of $2$ to fit the figure. Points indicate physiological values for each line, respectively. The dashed line marks the RC difference, which is zero by definition.} \label{fig:dSigmas} \end{figure} \emph{The energy cost rate for faster speed of operation is minimized in the RC limit.---} Lastly, we consider energy dissipation in the limit of the RC cycle. While ${\mathcal{Q}}$ in general provides an efficiency measure of dissipation and product output precision, it is independent of time and hence agnostic to the cost of driving a cycle up to a required speed of operation. In this regard, Mallory et al.~\cite{KineticProofMallory} have shown that the ribosome and T7 polymerase prioritize speed over dissipation, and it is therefore interesting to see how dissipation and mean production time vary between the physiological systems and their corresponding RC limits. In particular, we calculate the difference in energy dissipation between the actual systems and their RC limits. Figure \ref{fig:dSigmas} shows the normalized dissipation rate difference, $\dot{q}-\dot{q}_{\mathsmaller{\mathrm{RC}}} \equiv {k_{\mathrm{B}}} T (\sigma - \sigma_{\mathsmaller{\mathrm{RC}}})/\Delta\mu_p$, against the mean production time $\tau$ as a parametric function of $k$ for the ribosomes and the T7 polymerase. The dissipation rate was normalized by the operating energy cost $\Delta\mu_p$ to allow comparison of different reaction networks. Evidently, while the ribosomes display absolute differences lower than the polymerase, they operate more slowly by two orders of magnitude and with steep energy costs for $\tau$ shorter than physiological values. On the other hand, the T7 polymerase maintains a relatively flat profile over many $\tau$ decades, ensuring that the energy dissipation rate does not deviate strongly from the ideal RC values, which achieve minimal ${\mathcal{Q}}$ by construction. In closing, by all metrics considered, operating near the RC limit confers considerable performance advantages to the KPR systems examined. By this measure, it is not surprising that the polymerase outperforms the ribosomes, given that its binding discrimination factor $f_1$ is about a million times more restrictive than that of the ribosomes ($f_{1,\mathrm{polymerase}}/f_{1,\mathrm{ribosome}} \sim 10^{-6}$) and places it significantly closer to the underlying RC limit.
Note that a low ${\mathcal{Q}}$ score \emph{does not} by itself imply operation at the RC limit; low values of ${\mathcal{Q}}$ are achieved for certain limiting values of $f_3$, and as discussed previously, this does not confer the same trade-off advantages as approaching the RC limit. For instance, the Acc mutant achieves a lower ${\mathcal{Q}}$ score due to its enhanced $f_3$, but must operate at slower production times than the WT due to steeply increasing energy demands, as seen in Figure \ref{fig:dSigmas}. As a result, operating near the RC limit not only achieves a low ${\mathcal{Q}}/{\mathcal{Q}}_{\mathrm{lh}}$ score by definition, but also improves the overall \emph{global} performance per production cycle in a reaction network given a fixed energy budget. \section{Conclusions} \label{sec:Conclusion} The ribosome and the DNA polymerase drive two essential production networks in the cell. The efficiency of these circuits is an important determinant of the organism's fitness, and therefore they must be tuned to prioritize product-forming transitions over competing incorrect substrate binding and proofreading cycles. In this work, we have analyzed these circuits in the light of the Thermodynamic Uncertainty Relation (TUR), and found that the TUR measure ${\mathcal{Q}}$ for the product current is closer to the lower bound in the polymerase than in the E. coli ribosome system. In particular, we considered a reduced cycle (RC) limit that retains only the paths leading to correct product formation, and showed that operating near this regime affords minimized values of ${\mathcal{Q}}$ for corresponding rate constants. Notably, the polymerase operates very near the RC regime and thereby achieves nearly-optimal performance, manifested by the proximity of its ${\mathcal{Q}}$ measure to the lower bound ${\mathcal{Q}}_\mathrm{lh}$ (Eq. \ref{eq:Qbound} and Table \ref{tab:Qscores}). Further, operating near RC relaxes the trade-off constraint between accuracy and speed, while decoupling both these measures from ${\mathcal{Q}}$. On the other hand, a similar analysis showed that E. coli ribosomes operate relatively farther away from the RC limit, resulting in stronger coupling across all performance measures and increased energy costs, manifested by larger values of ${\mathcal{Q}}$. That said, the ribosome is not more than one order of magnitude away from the TUR bound. The significant difference in the performance of the polymerase and the ribosome stems from the accuracy of substrate discrimination, which is higher by about six orders of magnitude in the polymerase. This binding selectivity difference, which is not directly addressed here, is linked to the different biochemical mechanisms employed by the polymerase and the ribosome~\cite{KineticPoly1,KineticRibo2,RibosomeDecoder}. As a result, the polymerase is more likely to operate in the regime of correct product cycles than the ribosome, close to the RC limit. The different regimes of performance may also reflect the much more deleterious impact of errors in replication, which are carried through genome heredity, relative to errors in translation, which vanish when the protein is degraded. For future studies, it would be interesting to study how distinct reaction pathways in other protein systems, {\it{e.g.,~}} in signal transduction, adjust to prioritize correct response cycles and whether these imply similar RC limits that optimize the underlying TUR constraint. \begin{acknowledgments} The authors thank Anatoly B. Kolomeisky, Oleg A.
Igoshin and Changbong Hyeon for helpful discussions. This work was supported by the taxpayers of South Korea through the Institute for Basic Science, Project Code IBS-R020-D1. \end{acknowledgments} \bibliographystyle{apsrev4-1}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The detailed and correct computation of big-bang nucleosynthesis (BBN) dates back 51 years to the seminal papers of Wagoner, Fowler and Hoyle \cite{WFH}. Since that time, the BBN code has been updated for numerous physics effects including finite-temperature Coulomb and radiative corrections \cite{RadFiniteT}, finite nucleon mass \cite{FiniteMass}, neutrino heating by $e^\pm$ annihilations \cite{NeutrinoHeating}, QED plasma effects \cite{QEDPlasma} and more accurate numerical integration techniques \cite{BBNCode,PublicBBN1,PublicBBN2}. In addition, the input reaction rates, including the neutron mean lifetime, have been more accurately and precisely measured and the uncertainties quantified (see e.g. \cite{BurlesNollett}). The BBN code is one of the avatars of precision cosmology. Moreover, the comparison of the predicted light-element abundances (D, $^4$He, $^3$He and $^7$Li) with their inferred primordial abundances is the earliest test of the hot big bang cosmology as well as a powerful cosmological probe of particle physics. Recently, Sasankan et al \cite{Sasankan2018} have called attention to an effect which they claim changes the predictions for the abundances of the light elements significantly. Specifically, they question the assumption that nuclei are in kinetic equilibrium with phase-space distributions that are well described by Maxwell-Boltzmann distributions. They argue that kinetic equilibrium for nuclei in the relevant temperature range ($T\sim 1\,$MeV to $T\sim 0.05\,$MeV) is maintained by scattering with the semi-relativistic $e^\pm$ plasma; based on numerical simulations, they suggest that this leads to a distorted kinetic distribution for nuclei, one that appears to be described by a MB distribution that is about 20\% hotter than the temperature of the relativistic plasma (see Fig.~1 in \cite{Sasankan2018}). If their work is correct, this leads to a large change in the predicted light-element abundances, which they compute. The importance of BBN in cosmology motivates our work. In this paper, we show that there is a deviation from kinetic equilibrium, but it is extremely small. In particular, using the relativistic Boltzmann equation, we explicitly show that any distortion that arises from nuclei being thermalized by the EM plasma is of the order of the expansion rate over the scattering rate, or about $10^{-17}$, not 20\%. We discuss and quantify two related, small effects: The lightest nuclei are slightly relativistic, $v^2 \sim 10^{-4} - 10^{-3}$, and so there are corrections to the MB distribution of order 0.1\%. Further, because the nuclei are non-relativistic, in the absence of interactions with the electromagnetic (EM) plasma, their temperature would decrease faster than that of the EM plasma, and the continued transfer of a small amount of energy from the EM plasma to nuclei quickens the cooling of the EM plasma. This is a very tiny effect because the thermal energy of the EM plasma is a billion times greater than the kinetic energy carried by nuclei.
\section{Relativistic Boltzmann Equation} The relativistic Boltzmann equation governing the phase space distribution of species $X$ in the RW expanding Universe is given by \cite{KolbTurner}: \begin{eqnarray} {\hat {\bf L}}[f_X] &=& {\hat {\bf C}}[f_X] \end{eqnarray} where the Liouville operator and collision term are given by \begin{eqnarray} {\hat {\bf L}}[f_X] & = & E {\partial f_X \over \partial t} - H |{\bf p}|^2 {\partial f_X \over \partial E} \\ {\hat {\bf C}}[f_X] & = & -{1\over 2} \int d\Pi_a d\Pi_i d\Pi_j |{\cal M}|^2_{a + X \leftrightarrow i + j } (2\pi )^4\delta^{(4)}(\dots ) \nonumber \\ &\times& \left[f_a f_X (1\pm f_i)(1\pm f_j) -f_if_j (1\pm f_a)(1\pm f_X) \right] \label{C-term} , \end{eqnarray} the $+$ is for bosons, the $-$ is for fermions, and $\hbar = k_B = c =1$ throughout. Anticipating the problem of interest, we have specialized to a single $2\leftrightarrow 2$ reaction. More generally, the collision term should be summed over all possible interactions. Next, we remind the reader that ${\hat {\bf C}}[f_X] = 0$ in the case that the particles (here $a$, $X$, $i$ and $j$) take on equilibrium distributions characterized by a temperature: \begin{equation}} \newcommand{\eeq}{\end{equation} \label{th-eq-dist} f_{\rm EQ} = {1\over e^{(E-\mu )/T } \mp 1}, \eeq with $\mu_a + \mu_X = \mu_i + \mu_j$. That is, in the absence of expansion, the stationary solution, i.e., thermal equilibrium, is given by the usual FD (or BE) distributions for each species. In the expanding Universe, the growing scale factor $a(t)$ shifts the particle distributions through the redshifting of particle momenta, $|{\bf p}| \propto a^{-1}$, irrespective of mass. Thus, in general, maintaining thermal distributions requires interactions that occur rapidly on the expansion timescale, $H^{-1}$. \subsection{Collisionless, Nonrelativistic Limit} In the nonrelativistic limit, the Liouville operator becomes $${\hat {\bf L}} = M\left( {\partial \over \partial t} - 2HE_K {\partial \over \partial E_K} \right) ,$$ where $E_K = p^2/2M$. Further, for any phase space distribution of the form $f = g(a^2 E_K)$, ${\hat {\bf L}}[f]$ vanishes. This means that in the collisionless limit, the phase space distribution function $f$ simply evolves due to the redshifting of particle kinetic energy as $a^{-2}$. If the initial phase-space distribution was thermal, then in the absence of collisions, the distribution remains thermal with a temperature that redshifts as $a^{-2}$. This is a standard result. More well known is that in the collisionless relativistic limit, where $${\hat {\bf L }} = |{\bf p}| \left( {\partial \over \partial t} - H |{\bf p}| {\partial \over \partial E} \right) ,$$ an initially thermal distribution will remain thermal with a temperature that redshifts as $T \propto a^{-1}$, even in the absence of interactions. The high precision to which the CMB is a blackbody spectrum today, nearly 14\,Gyr after photon decoupling, gives strong testimony to the correctness of this result.
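The nonrelativistic scaling is easy to verify numerically. The sketch below (an illustration added here, assuming a proton gas and an arbitrary growth factor $a$) draws momenta from a MB distribution at $T_0$, redshifts them by $|{\bf p}| \propto a^{-1}$, and checks that the moments match a MB distribution at $T_0/a^2$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, T0, a = 938.27, 0.1, 3.0   # proton mass and T0 in MeV; a = growth factor

# MB momenta at T0: each Cartesian component Gaussian with variance M*T0.
p = rng.normal(0.0, np.sqrt(M * T0), size=(10**6, 3))
p_red = p / a                 # collisionless redshift: |p| scales as 1/a

Ek = (p_red**2).sum(axis=1) / (2.0 * M)
print(f"<E_K> after redshift = {Ek.mean():.6f} MeV")
print(f"(3/2) T0 / a^2       = {1.5 * T0 / a**2:.6f} MeV")
# A higher moment also matches MB at T0/a^2 (exact value 5/3):
print(f"<E_K^2>/<E_K>^2 = {np.mean(Ek**2) / Ek.mean()**2:.3f}")
\end{verbatim}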
\subsection{Nucleon/nuclei heating by the EM plasma} Around the time of BBN, the two constituents of the EM plasma, photons and $e^\pm$ pairs, have comparable abundances. Since the $e^\pm$-scattering cross section with nucleons/nuclei is larger, that process is more important (see \cite{Sasankan2018} for a discussion of this point), and so we consider only the thermalizing reaction $e^\pm (p) + N (P) \leftrightarrow e^{\pm\prime }(p^\prime )+ N^\prime (P^\prime )$: \begin{eqnarray} & &{\partial f_N (P) \over \partial t} - 2HE_K {\partial f_N (P) \over \partial E_K} = -{1\over 2M} \int d\Pi_p d\Pi_{p^\prime}d\Pi_{P^\prime}\nonumber \\ & & \times |{\cal M}|^2_{e + N \leftrightarrow e^\prime+ N^\prime} (2\pi )^4\delta^{(4)}(\dots ) [f_e (p) f_N (P) (1 - f_{e^\prime}) \nonumber \\ & & - f_e(p^\prime )f_N (P^\prime ) (1 - f_e)] \label{B-eqn} \end{eqnarray} where $M$ is the mass of nuclide of interest, $d\Pi = d^3p/(2\pi )^3 2E$ is the usual LIPS, the matrix-element squared is \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray} |{\cal M}|^2_{e + N \leftrightarrow e^\prime+ N^\prime} = {16 Z^2 e^4 \over q^4} \left[ 4(p\cdot P)^2 + q^2 \left(m_e^2+M^2 + 2 p\cdot P \right) + \frac{q^4}2 \right] , \label{mel-sq} \eea $p^\mu$, $P^\mu$, $p'^\mu$, $P'^\mu$ are the four momenta of the particles, $e$, $N$, $e^\prime$, and $N^\prime$, and the momentum transfer is $q^\mu \equiv p^\mu-p'^\mu$. Because the quantum occupancy of nucleons/nuclei is small, we have neglected the Pauli blocking factors (for more about this, see Sec.~2.4). Here and in the following, we will often use $P,q$ to denote $|\vec P|, |\vec q|$. The dimensions of the two sides of Eq.~\ref{B-eqn} are $[E] = [{\rm time}]^{-1}$, that of a rate. By pulling out a factor of $\Gamma \equiv 4MZ^2e^4$ on the r.h.s., the remaining integral becomes dimensionless. $\Gamma$ characterizes the interaction rate, and if we compare it to the expansion rate of the Universe $H \simeq T^2/m_{\rm Pl}$, $$ {\Gamma \over H} \simeq {Z^2e^4 m_{\rm Pl}M \over T^2} \sim {10^{21} \over (T/{\rm MeV})^2 } , $$ we see that the scattering rate of nuclei with thermal $e^\pm$ pairs is expected to be very high, of order $10^{21}$ scatterings per Hubble time, and so we expect any departures in the phase space distribution of nuclei from equilibrium to be very small, with size set by $H/\Gamma$. We now explicitly show that this is indeed the case, though factors of $M/T$ raise the size of the departure slightly. In the ensuing section, we calculate the correct $M/T$ dependence. \subsection{Perturbative estimate of non-equilibrium} Because nuclei are non-relativistic, expansion cools their kinetic distribution faster than the EM plasma, driving a departure from kinetic equilibrium with the EM plasma ($1/a^2$ versus $1/a$). However, elastic scatterings with the EM plasma heat the nuclei, and drive their distribution toward kinetic equilibrium with the plasma, at a rate $\Gamma \gg H$. To calculate the size of the expected small deviation from equilibrium, we write the distribution function for a nuclide as the equilibrium distribution plus a small correction: $ f_N (P)= f_{\rm EQ}(P) + \delta f (P)$.
Applying the Liouville and collision operators to our {\it ansatz} and keeping the lowest-order terms in $\delta f$, we find: \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray} {\hat{\bf L}}[f_N] & = & MH(E_K/T)f_{\rm EQ} \nonumber \\ {\hat{\bf C}}[f_N] & = & -\frac12 \int d \Pi_e d \Pi_{e'} d \Pi_{P'} |\cM|^2 (2\pi)^4 \delta^{(4)} (P^\mu + p^\mu - P'^\mu -p'^\mu) \times \nonumber\\ && \qquad \times \cbL f_e(p) [1-f_e(p')] \delta f(P) - f_e(p')[1-f_e(p)]\delta f(P') \cbR \nonumber \\ & = & -\frac12 \int d \Pi_e d \Pi_{e'} d \Pi_{P'} |\cM|^2 (2\pi)^4 \delta^{(4)} (P^\mu + p^\mu - P'^\mu -p'^\mu) \times \nonumber\\ && \qquad \times f_e(p) [1-f_e(p')] \bL\delta f(P) - e^{(E_e -E_e')/T } \delta f(P') \bR \nonumber \\ & \simeq & -\frac12 \int d \Pi_e d \Pi_{e'} d \Pi_{P'} |\cM|^2 (2\pi)^4 \delta^{(4)} (\dots) f_e(p) [1-f_e(p')] \times \nonumber\\ && \qquad \times \cbL \delta f(P) (E_e - E'_e)/T - \left(} \newcommand{\pR}{\right)} \newcommand{\bL}{\left[} \newcommand{\bR}{\right]} \newcommand{\cbL}{\left\{} \newcommand{\cbR}{\right\}} \newcommand{\mL}{\left|} \newcommand{\mR}{\right| P- P' \pR \delta f'(P) \cbR \nonumber \eea where in the final expression we have Taylor-expanded $e^{(E_e -E_e')/T}$ and $\delta f(P')$ and kept the lowest-order terms. The symbol $\delta f'(P)$ represents the partial derivative of $\delta f$ with respect to $P$. The Boltzmann equation, ${\hat{\bf L}}[f_N] = {\hat{\bf C}}[f_N]$, now leads to an ordinary differential equation for $\delta f(P)$ whose coefficients are phase-space integrals that can be computed numerically. However, we can obtain a parametric estimate for $\delta f$ by using the fact that the momentum transferred $q \sim T$ and the energy transferred $q^2/M$ are small compared to the nuclide mass $M$, so that \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray} \delta f(P)(E_e - E'_e)/T &\sim& {\cal O}(T/M) \delta f \nonumber \\ \left(} \newcommand{\pR}{\right)} \newcommand{\bL}{\left[} \newcommand{\bR}{\right]} \newcommand{\cbL}{\left\{} \newcommand{\cbR}{\right\}} \newcommand{\mL}{\left|} \newcommand{\mR}{\right| P- P' \pR \delta f'(P) &\sim& {\cal O}[(T/M)^{1/2}] \delta f. \nonumber \eea The first term therefore enters at higher order in $T/M$. Using the matrix element from \Eq{mel-sq} and integrating over phase space, we find that the collision term is approximately \begin{equation}} \newcommand{\eeq}{\end{equation} {\hat{\bf C}}[f_N] \sim \frac{32 \alpha^2 M^2 T^3 \ln(\theta_D/2) \cI(T)}{\pi |\vec P|^3} \delta f, \eeq where we have defined a dimensionless integral over the electron phase space, $$\cI(T)=\int_{m_e/T}^\infty d \epsilon} \newcommand{\vp}{\varphi} \newcommand{\half}{\frac12 \, \epsilon} \newcommand{\vp}{\varphi} \newcommand{\half}{\frac12^2 \sqrt{1-\frac{m_e^2}{T^2 \epsilon} \newcommand{\vp}{\varphi} \newcommand{\half}{\frac12^2}} \frac{\exp\epsilon} \newcommand{\vp}{\varphi} \newcommand{\half}{\frac12}{(\exp\epsilon} \newcommand{\vp}{\varphi} \newcommand{\half}{\frac12+1)^2} \sim \cO(1),$$ and the Debye screening angle is $\theta_D\sim \alpha^{3/2}$. Dropping order one numbers, our parametric estimate for $\delta f$ is: \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray} {\delta f \over f_{\rm EQ}} \sim \frac H\Gamma \left( \frac MT\right)^{3/2} \frac1{\ln (\theta_D/2)} \sim {H \sqrt M \over \alpha^2 \ln(\theta_D/2) T^{3/2}} \sim 10^{-17} (T/\mev)^{1/2} .&& \eea Thus, we have explicitly shown that the departure from kinetic equilibrium is tiny and characterized by $H/\Gamma$.
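Plugging in numbers makes the smallness explicit. The short evaluation below (an illustration added here) applies the parametric estimate with order-one factors dropped, taking the magnitude of the Coulomb logarithm since $\theta_D \ll 1$, and assuming a proton:

\begin{verbatim}
import numpy as np

# Natural units, energies in MeV.
alpha = 1.0 / 137.036
m_pl = 1.221e22            # Planck mass
M = 938.27                 # proton mass
theta_D = alpha**1.5       # Debye screening angle estimate
log_D = abs(np.log(theta_D / 2.0))

for T in (1.0, 0.1, 0.05):
    H = T**2 / m_pl        # radiation-era expansion rate, H ~ T^2/m_Pl
    df_over_f = H * np.sqrt(M) / (alpha**2 * log_D * T**1.5)
    print(f"T = {T:4.2f} MeV:  delta f / f_EQ ~ {df_over_f:.1e}")
\end{verbatim}

The output is of order $10^{-18}{-}10^{-17}$ over the BBN temperature range, consistent with the $10^{-17}(T/\mev)^{1/2}$ scaling above.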
Furthermore, since the collision term is dominated by the first derivative with respect to $P$, we can approximately determine the $P$-dependence of $\delta f(P)$ by integrating once over $P$. Our analysis above gives $\delta f'(P) \propto P^4 f_{\rm EQ}$, leading to $$\delta f(P) \propto \left(} \newcommand{\pR}{\right)} \newcommand{\bL}{\left[} \newcommand{\bR}{\right]} \newcommand{\cbL}{\left\{} \newcommand{\cbR}{\right\}} \newcommand{\mL}{\left|} \newcommand{\mR}{\right| 1 + \frac{P^2}{3MT} \pR \frac{P}{\sqrt{2MT}} e^{- P^2/2MT} + \frac{\sqrt\pi}2 {\rm erfc}\left(} \newcommand{\pR}{\right)} \newcommand{\bL}{\left[} \newcommand{\bR}{\right]} \newcommand{\cbL}{\left\{} \newcommand{\cbR}{\right\}} \newcommand{\mL}{\left|} \newcommand{\mR}{\right| \frac{P}{\sqrt{2MT}} \pR.$$ This gives the correct $P$-dependence to order $\sim \sqrt{T/M}$. Sasankan et al.~\cite{Sasankan2018} describe in version 2 of their paper how they arrive at their result. Starting with the nonrelativistic version of the Langevin equation and using ``the principle of equipartition of KE,'' they numerically simulate the thermalization of nuclei, specifically protons, at an EM plasma temperature of $T=0.1\mev$. They find that protons have a kinetic distribution well described by a MB distribution at a temperature of $T\simeq 0.12\mev$ (see Fig.~3), about 20\% warmer than the EM plasma. This is consistent with the fact that at $T=0.1\mev$ the average KE of an electron or positron is 20\% greater than $1.5T$. Their simulation gives results that are consistent with their incorrect assumption about equipartition of KE. In the nonrelativistic limit, thermal equilibrium implies equal KE for all particle species independent of mass; in the relativistic limit (or mixed nonrelativistic/relativistic limit) this is not true, cf., the average energy per particle for a boson is $2.7T$ and for a fermion is $3.15T$, compared to $1.5T$ for a nonrelativistic species. While the Langevin and Fokker-Planck equations, which are used to describe the thermalization of heavy particles by lighter particles (e.g., Brownian motion), can be derived from the Boltzmann equations, the relativistic version of these equations is appropriate here. Had they done this, they would not have needed their ``equipartition assumption'' and we believe they would have arrived at results consistent with ours. \subsection{Relativistic correction to MB distribution} Nuclei are slightly relativistic at the time of BBN: $v^2 \sim T/M \sim 10^{-4} - 10^{-3}$, and thus the use of the MB distribution, $f (v) \propto v^2 \exp (-E_K/T)$, to describe their phase space distribution is not exact. The correction is easy to compute by starting with the exact FD (or BE) distribution: $${g_N \over e^{(E-\mu )/T}+1} \longrightarrow e^{-(E-\mu )/T} \propto e^{-E_K/T}, $$ where the first step follows from the fact that $(E-\mu )/T \sim \ln (\eta^{-1}) + 3\ln (M/T) /2 \sim 25 \gg 1$ (which implies small phase-space occupancy) and $E_K \equiv E-M = (\gamma -1)M$. Next, it is straightforward to show that $$p^2dp = M^3\gamma^5 v^2 dv .$$ Therefore, in the $E-\mu \gg T$ limit (relevant to cosmology), the fully relativistic phase space distribution is \begin{equation} {g_N \ p^2dp \over e^{(E-\mu )/T} +1 } \longrightarrow g_N M^3\gamma^5 v^2 dv e^{-E_K/T}.
\subsection{Nuclear kinetic drag}
Finally, we consider a distinct effect that has previously been ignored. As discussed above, once nucleons and eventually nuclei become non-relativistic, for $T \ll 1\,$GeV, their kinetic energies redshift as $a^{-2}$ and without heating their kinetic temperature would decrease as $a^{-2}$ as well. The interactions of nuclei with the relativistic plasma heat the nuclei and keep them in good thermal contact, as discussed above. However, this heating depletes energy from the relativistic plasma and causes it to cool moderately faster than $1/a(t)$. Using $dE = -p\,dV$ with
\begin{eqnarray}
E & = & a^3 \rho \nonumber\\
\rho & = & \rho_{\rm EM} + {3\over 2}nT \nonumber \\
p & = & {1\over 3}\rho_{\rm EM} + nT \nonumber \\
V & = & a^3 \nonumber \\
\varepsilon & \equiv & {3nT/2 \over \rho_{\rm EM}},
\end{eqnarray}
it is simple to show that
$$T \propto a^{-(1+\varepsilon/4)}.$$
Here $n$ is the number density of nucleons/nuclei and $\varepsilon$, the ratio of the kinetic energy in nucleons/nuclei to that of the EM plasma, is approximately constant and equal to one-eighth of the baryon-to-photon ratio $\eta$, or around $10^{-10}$. Clearly this is a very tiny effect. By comparison, the annihilation of $e^\pm$ pairs ($T \sim 0.3\,$MeV to $T \sim 0.03\,$MeV) heats the photons. The average slope of the temperature/scale-factor relationship during this period is $T \propto a^{-0.84}$ rather than $a^{-1}$. This much larger effect is incorporated into the standard BBN treatments.
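As a cross-check of this exponent (again a sketch for illustration; names are ours), one can solve the first law symbolically and expand the effective exponent $d\ln T/d\ln a$ in $\varepsilon$:
\begin{verbatim}
import sympy as sp

a, b, c, eps = sp.symbols('a b c epsilon', positive=True)
T = sp.Function('T', positive=True)

n    = c/a**3                                 # nuclei: n scales as a^-3
rho  = b*T(a)**4 + sp.Rational(3, 2)*n*T(a)   # EM plasma + nuclear KE
pres = b*T(a)**4/3 + n*T(a)                   # total pressure

# first law dE = -p dV with E = a^3 rho and V = a^3
cons = sp.Eq(sp.diff(a**3*rho, a), -pres*sp.diff(a**3, a))
dTda = sp.solve(cons, sp.diff(T(a), a))[0]

# effective exponent d(ln T)/d(ln a); substitute
# eps = (3/2) n T / rho_EM  =>  c = (2/3) eps b T^3 a^3
expo = sp.cancel(a*dTda/T(a)).subs(c, sp.Rational(2, 3)*eps*b*T(a)**3*a**3)
print(sp.series(sp.simplify(expo), eps, 0, 2))
# -> -1 - epsilon/4 + O(epsilon**2), i.e. T ~ a^-(1 + eps/4)
\end{verbatim}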
\section{Conclusions}
It is often said that we are in the era of precision cosmology; BBN and the CMB (last scattering and the evolution of its anisotropy) are exemplars of this. Both involve precision calculations based on well-understood physics, and both have a history that traces back to the discovery of the CMB more than 50 years ago. CMB anisotropies have been computed with a theoretical uncertainty of less than 0.1\% and have been measured to the cosmic variance limit for multipoles up to $\sim 2000$. The estimated theoretical uncertainty in the BBN code for computing $^4$He is less than 0.1\%, with the uncertainty in the neutron lifetime adding a similar amount to the error budget \cite{BBNCode}. The theoretical uncertainties for the other light-element abundances are at a similar level \cite{BurlesNollett}. Moreover, the precision determination of the baryon density links the two: BBN and the CMB each separately determine the baryon density to the percent level, and the two determinations agree \cite{BaryonDensity}.

This backdrop of precision cosmology made the claim of a 20\% correction to the kinetic distribution functions of nuclei \cite{Sasankan2018} of potentially great importance and motivated our work. To wit, we have solved the relativistic Boltzmann equation for the nuclear phase-space distribution and explicitly shown that any non-equilibrium effect arising from $e^\pm$ (and photon) scattering with nuclei is many orders of magnitude smaller than this, owing to the very large scattering rate compared to the expansion rate. We have not been able to identify the source of the discrepancy with \cite{Sasankan2018}. Finally, we identified two new small effects: relativistic corrections to the MB distributions for nuclei and a nuclear kinetic drag on the EM plasma which hastens its cooling. The latter of these effects is extremely tiny. The former, the relativistic corrections, are expected to be of the order of 0.1\%, and their effect on the light-element abundances is expected to be similar, but has yet to be computed.
\vskip 20pt
We thank Nikita Blinov for very helpful discussions and Susan Gardner and Grant Mathews for correspondence. This work was supported in part by the Kavli Institute for Cosmological Physics at the University of Chicago through grant NSF PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli. SDM is an employee of Fermilab, operated by Fermi Research Alliance, LLC under Contract No.~DE-AC02-07CH11359 with the United States Department of Energy.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Semantic segmentation aims at assigning each pixel a label from a predefined label set given a scene. For fully-supervised semantic segmentation \cite{long2015fully,chen2017deeplab,zhao2016pyramid,zheng2015conditional,lin2016refinenet}, the requirement of large-scale pixel-level annotations considerably limits its generality \cite{chaudhry2017discovering}. Some weakly-supervised works attempt to leverage relatively weak supervision, such as scribbles \cite{lin2016scribblesup}, bounding boxes \cite{qi2016augmented}, or points \cite{bearman2016s}, but they still require a large amount of manual labeling. Therefore, semantic segmentation with image-level supervision \cite{pathak2015constrained,kolesnikov2016seed,wei2016stc,hou2016mining,wei2017object} is becoming a promising way to relieve much of this human labor. In this paper, we are also interested in the problem of weakly-supervised semantic segmentation. As only image-level labels are available, most recent approaches \cite{kolesnikov2016seed,chaudhry2017discovering,hou2016mining,wei2017object,hong2017weakly}, more or less, rely on different attention models due to their ability to cover small but discriminative semantic regions. Therefore, how to generate high-quality attention maps is essential for offering reliable initial heuristic cues for training segmentation networks. Earlier weakly-supervised semantic segmentation methods \cite{kolesnikov2016seed,wei2017object} mostly adopt the original Class Activation Maps (CAM) model \cite{zhou2016learning} for object localization. CAM works well for small objects, but when encountering objects of large scale it can only localize small discriminative areas, which is harmful for training segmentation networks in that the undetected semantic regions will be judged to be background. Interestingly, the adversarial erasing strategy \cite{wei2017object,li2018tell} (\figref{fig:motivation}) has been proposed recently. Benefiting from the powerful localization ability of CNNs, this type of method is able to discover more object-related regions by erasing the already detected regions. However, a key problem with this type of method is that as more semantic regions are mined, the attentions may spread to the background, and the localization ability of the initial attention generator is further degraded. For example, trains often run on rails, and hence as trains are erased, rails may be classified as the train category, negatively influencing the learning of semantic segmentation networks. In this paper, we propose a promising way to overcome the above-mentioned drawback of the adversarial erasing strategy by introducing the concept of self-erasing. The background regions of common scenes often share some similarities, which motivates us to explicitly feed attention networks with a roughly accurate background prior to confine the observable regions to semantic areas. To do so, we present two self-erasing strategies, which leverage the background prior to purposefully suppress the spread of attentions to the background regions. Moreover, we design a new attention network that takes the above self-erasing strategies into account to discover more high-quality attentions from a potential zone instead of the whole image \cite{zhang2018adversarial}.
We apply our attention maps to weakly-supervised semantic segmentation, evaluate the segmentation results on the PASCAL VOC 2012 \cite{everingham2015pascal} benchmark, and show substantial improvements compared to existing methods. \newcommand{\addFig}[1]{} \newcommand{\addFigs}[1]{} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/motivation.pdf} \caption{(a) A typical adversarial erasing approach \cite{zhang2018adversarial}, which is composed of an initial attention generator and a complementary attention generator; (b-d) Attention maps produced by (a) as the training iterations increase; (e) The attention map generated by our approach. As can be seen, the attentions of (a) gradually appear in unexpected regions, while our results are properly confined to the bicycle region.} \label{fig:motivation} \end{figure} \section{Related Work} \subsection{Attention Networks} \paragraph{Earlier Work.} To date, a great number of attention networks have been developed, attempting to reveal the working mechanism of CNNs. At an earlier stage, error back-propagation based methods \cite{simonyan2013deep,zeiler2014visualizing} were proposed for visualizing CNNs. CAM \cite{zhou2016learning} adopted a global average pooling layer followed by a fully connected layer as a classifier. Later, Selvaraju \emph{et al.~} proposed Grad-CAM, which can be embedded into a variety of off-the-shelf networks for visualizing multiple tasks, such as image captioning and image classification. Zhang \emph{et al.~} \cite{zhang2016top}, motivated by the human visual system, used the winner-take-all strategy to back-propagate discriminative signals in a top-down manner. A property shared by the above methods is that they only attempt to produce a single attention map. \paragraph{Adversarial Erasing Strategy.} In \cite{wei2017object}, Wei \emph{et al.~} proposed the adversarial erasing strategy, which aims at discovering more unseen semantic objects. A CAM branch is used to determine an initial attention map, and then a threshold is used to selectively erase the discovered regions from the images. The erased images are then sent into another CNN to further mine more discriminative regions. In \cite{zhang2018adversarial,li2018tell}, Zhang \emph{et al.~} and Li \emph{et al.~} extended the initial adversarial erasing strategy in an end-to-end training manner. \subsection{Weakly-Supervised Semantic Segmentation} Because collecting pixel-level annotations is very expensive, more and more works have recently focused on weakly-supervised semantic segmentation. Besides some works relying on relatively strong supervision, such as scribbles \cite{lin2016scribblesup}, points \cite{bearman2016s}, and bounding boxes \cite{qi2016augmented}, most weakly-supervised methods are based on only image-level labels or even inaccurate keywords \cite{hou2018webseg}. Limited by keyword-level supervision, many works \cite{kolesnikov2016seed,wei2017object,hong2017weakly,hou2016mining,chaudhry2017discovering,roycombining,wei2018revisiting} harness attention models \cite{zhou2016learning,zhang2016top} for generating the initial seeds. Saliency cues \cite{ChengPAMI,SalObjBenchmark,WangDRFI2017,hou2016deeply,JointSalExist17} are also adopted by some methods as initial heuristic cues. Beyond that, there are also some works proposing different strategies to solve this problem, such as multiple instance learning \cite{pinheiro2015image} and the EM algorithm \cite{papandreou2015weakly}.
\section{Self-Erasing Network} In this section, we describe the details of the proposed \textbf{Se}lf-\textbf{E}rasing \textbf{N}etwork (SeeNet). An overview of our SeeNet can be found in \figref{fig:arch}. Before the formal description, we first introduce the intuition behind our proposed approach. \subsection{Observations} \label{sec:observations} As stated in \secref{sec:intro}, with the increase of training iterations, the adversarial erasing strategy tends to mine more and more areas that do not belong to any semantic objects at all. Thus, it is difficult to determine when the training phase should end. An illustration of this phenomenon is depicted in \figref{fig:motivation}. In fact, we humans always `deliberately' suppress the areas that we are not interested in so as to better focus our attention \cite{li2002rapid}. When looking at a large object, we often seek the most distinctive parts of the object first and then move our eyes to the other parts. In this process, humans are able to effortlessly neglect the distractions brought by the background. However, attention networks themselves do not possess such a capability when only image-level labels are given. Therefore, how to explicitly introduce a background prior into attention networks is essential. Inspired by this cognitive process of humans, rather than simply erasing the attention regions with higher confidence as done in existing works \cite{zhang2018adversarial,li2018tell,wei2017object}, we propose to explicitly tell CNNs where the background is so as to let attention networks better focus on discovering real semantic objects. \renewcommand{\addFig}[1]{\includegraphics[height=0.125\linewidth,width=0.162\linewidth]{figures/illu/#1}} \renewcommand{\addFigs}[2]{\addFig{#1.jpg} & \addFig{#1_sa.png} & \addFig{#1_tsa.png} & ~ & \addFig{#2.jpg}& \addFig{#2_sa.png} & \addFig{#2_tsa.png}} \begin{figure} \centering \footnotesize \setlength\tabcolsep{0.8pt} \begin{tabular}{ccccccc} \addFigs{2007_000250}{2007_001416} \\ (a) Image & (b) Attention & (c) Mask & & (a) Image & (b) Attention & (c) Mask \\ \end{tabular} \caption{Illustrations explaining how to generate ternary masks. (a) Source images; (b) Initial attention maps produced by an initial attention generator; (c) Ternary masks after thresholding. Given (b), we separate each map into three zones by setting two thresholds. The yellow zone in (c) corresponds to larger attention values in (b). The dark zone, corresponding to lower values, is explicitly defined as the background prior. The middle zone contains semantic objects with high probability. Note that the figures in (c) are used for explanation only; in practice they are ternary masks. } \label{fig:illu} \end{figure} \subsection{The Idea of Self-Erasing} \label{sec:idea_comp} To highlight the semantic regions and keep the detected attention areas from expanding to background areas, we propose the idea of self-erasing during training. Given an initial attention map (produced by $S_A$ in \figref{fig:arch}), we functionally separate the images into three zones in the spatial dimension: the internal ``attention zone'', the external ``background zone'', and the middle ``potential zone'' (\figref{fig:illu}c). By introducing the background prior, we aim to drive attention networks into a self-erasing state so that the observable regions can be restricted to non-background areas, avoiding the continuous spread of attention areas that are already nearly complete.
To achieve this goal, we need to solve the following two problems: (I) given only image-level labels, how to define and obtain the background zone; (II) how to introduce the idea of self-erasing into attention networks. \paragraph{Background priors.} Under weak supervision, it is quite difficult to obtain a precise background zone, so we settle for the more modest goal of obtaining relatively accurate background priors. Given the initial attention map $M_A$, besides thresholding $M_A$ with $\delta$ for a binary mask $B_A$ as in \cite{zhang2018adversarial}, we also consider using another constant, less than $\delta$, to get a ternary mask $T_A$. For notational convenience, we here use $\delta_h$ and $\delta_l$ ($\delta_h > \delta_l$) to denote the two thresholds. Regions with values less than $\delta_l$ in $M_A$ will all be treated as the background zone. Thus, we define our ternary mask $T_A$ as follows: $T_{A, (i,j)} = 0$ if $M_{A, (i,j)} \ge \delta_h$, $T_{A, (i,j)} = -1$ if $M_{A, (i,j)} < \delta_l$, and $T_{A, (i,j)} = 1$ otherwise. This means the background zone is associated with a value of $-1$ in $T_A$. We empirically found that the resulting background zone covers most of the real background areas for almost all input scenes. This is reasonable, as $S_A$ is already able to locate parts of the semantic objects. \paragraph{Conditionally Reversed Linear Units (C-ReLUs).} With the background priors, we introduce the self-erasing strategies, which reverse the signs of the feature-map units corresponding to the background (as output by the backbone) to make the potential zone stand out. To achieve this, we extend the ReLU layer \cite{nair2010rectified} to a more general case. Recall that the ReLU function, according to its definition, can be expressed as $\text{ReLU}(x) = \max (0, x).$ More generally, our C-ReLU function takes a binary mask into account and is defined as \begin{equation} \label{eqn:crelu} \text{C-ReLU}(x) = \max(x, 0) \times B(x), \end{equation} where $B$ is a binary mask, taking values from $\{-1, 1\}$. Unlike ReLUs, which output tensors with only non-negative values, our C-ReLUs conditionally flip the signs of some units according to a given mask. We expect that the attention networks can focus more on the regions with positive activations after C-ReLU and further discover more semantic objects from the potential zone because of the contrast between the potential zone and the background zone. \begin{figure*}[t] \begin{center} \includegraphics[width=1\linewidth]{figures/arch.pdf} \caption{Overview of the proposed approach.} \label{fig:arch} \end{center} \end{figure*} \subsection{Self-Erasing Network} Our architecture is composed of three branches after a shared backbone, denoted by $S_A, S_B,$ and $S_C$, respectively. \figref{fig:arch} illustrates the overview of our proposed approach. Our $S_A$ has a similar structure to that of \cite{zhang2018adversarial}; its goal is to determine an initial attention map. $S_B$ and $S_C$ have similar structures to $S_A$, except that a C-ReLU layer is inserted before each of them. \noindent\textbf{Self-erasing strategy I.} By adding the second branch $S_B$, we introduce the first self-erasing strategy. Given the attention map $M_A$ produced by $S_A$, we can obtain a ternary mask $T_A$ according to \secref{sec:idea_comp}.
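To make the ternary mask and the C-ReLU operation concrete, we include a minimal PyTorch-style sketch below (for illustration only: function names and tensor shapes are assumed, and the threshold factors anticipate the values reported in the experiments section):
\begin{verbatim}
import torch

def ternary_mask(M_A, delta_h, delta_l):
    # T_A = 0 on the attention zone, -1 on the background zone,
    # and 1 on the remaining potential zone
    T_A = torch.ones_like(M_A)
    T_A[M_A >= delta_h] = 0.0
    T_A[M_A < delta_l] = -1.0
    return T_A

def c_relu(x, B):
    # C-ReLU(x) = max(x, 0) * B, with B a broadcastable mask
    # taking values in {-1, 1} (or {-1, 0, 1} for the ternary case)
    return torch.clamp(x, min=0) * B

# toy example: backbone features x and an attention map M_A on one grid
x   = torch.randn(1, 512, 28, 28)
M_A = torch.rand(1, 1, 28, 28)
T_A = ternary_mask(M_A, 0.7*M_A.max(), 0.05*M_A.max())
y_B = c_relu(x, T_A)                      # input to the S_B branch
y_C = c_relu(x, (T_A == -1).float())      # input to the S_C branch
\end{verbatim}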
When sending $T_A$ to the C-ReLU layer of $S_B$, we can easily adjust $T_A$ to a binary mask by setting non-negative values to 1. When taking the erasing strategy into account, we can extend the binary mask in the C-ReLU function to a ternary case. Thus, \eqnref{eqn:crelu} can be rewritten as \begin{equation} \label{eqn:crelu2} \text{C-ReLU}(x) = \max(x, 0) \times T_A(x). \end{equation} A visual illustration of \eqnref{eqn:crelu2} is depicted in \figref{fig:illu}c. The zone highlighted in yellow corresponds to the attentions detected by $S_A$, which will be erased in the output of the backbone. Units with positive values in the background zone will be reversed to highlight the potential zone. During training, $S_B$ will fall into a state of self-erasing, deterring background stuff from being discovered and meanwhile ensuring that the potential zone is distinctive. \noindent\textbf{Self-erasing strategy II.} This strategy aims at further preventing attentions from appearing in the background zone by introducing another branch $S_C$. Specifically, we first transform $T_A$ into a binary mask by setting the regions corresponding to the background zone to 1 and the remaining regions to 0. In this way, only the background zone of the output of the C-ReLU layer has non-zero activations. During the training phase, we drive the predicted probability of the background zone belonging to any semantic class towards 0. Because of the background similarities among different images, this branch helps correct wrongly predicted attentions in the background zone and indirectly prevents the erroneous spread of attentions. The overall loss function of our approach can be written as: $\mathcal{L} = \mathcal{L}_{S_A} + \mathcal{L}_{S_B} + \mathcal{L}_{S_C}$. For all branches, we treat the multi-label classification problem as $M$ independent binary classification problems by using the cross-entropy loss, where $M$ is the number of semantic categories. Therefore, given an image $I$ and its semantic labels $\mathbf{y}$, the label vector for $S_A$ and $S_B$ has entries $\mathbf{l}_n = 1$ if $n \in \mathbf{y}$ and 0 otherwise, where $|\mathbf{l}| = M$. The label vector of $S_C$ is a zero vector, meaning that no semantic objects exist in the background zone. During the test phase, we discard the $S_C$ branch and obtain the final attention maps as follows. Let $M_B$ be the attention map produced by $S_B$. We first normalize both $M_A$ and $M_B$ to the range $[0,1]$ and denote the results as $\hat{M}_A$ and $\hat{M}_B$. Then, the fused attention map $M_F$ is calculated by $M_{F,i} = \max(\hat{M}_{A, i}, \hat{M}_{B, i})$. We also horizontally flip the input images and get another fused attention map $M_H$. Our final attention map $M_{final}$ is then computed by $M_{final,i} = \max(M_{F, i}, M_{H, i})$. \section{Weakly-Supervised Semantic Segmentation} \label{sec:weakly} To test the quality of our proposed attention network, we apply the generated attention maps to the recently popular weakly-supervised semantic segmentation task. To compare with existing state-of-the-art approaches, we follow a recent work \cite{chaudhry2017discovering}, which leverages both saliency maps and attention maps. Instead of applying an erasing strategy to mine more salient regions, we simply use a popular salient object detection model \cite{hou2016deeply} to extract the background prior by setting a hard threshold as in \cite{li2018tell}. Specifically, given an input image $I$, we first normalize its saliency map, obtaining $D$ with values in $[0, 1]$. Let $\mathbf{y}$ be the image-level label set of $I$, taking values from $\{1,2,\dots,M\}$, where $M$ is the number of semantic classes, and let $A_c$ be the attention map associated with label $c \in \mathbf{y}$. We can calculate our ``proxy ground-truth'' according to Algorithm~\ref{alg:gen_gt}. Following \cite{chaudhry2017discovering}, we harness the following harmonic mean function to compute the probability of pixel $I_i$ belonging to class $c$: \begin{equation} \label{eqn:harm_mean} \text{harm}(i) = \frac{w + 1}{w / A_c(i) + 1 / D(i)}. \end{equation} The parameter $w$ is used to control the importance of the attention maps. In our experiments, we set $w$ to 1. \begin{algorithm}[tb] \caption{``Proxy ground-truth'' for training semantic segmentation networks} \label{alg:gen_gt} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Image $I$ with $N$ pixels; Image labels $\mathbf{y}$;} \Output{Proxy ground-truth $G$} $Q = \text{zeros}(M+1, N)$, $N$ is the number of pixels and $M$ is the number of semantic classes\; $D = \text{Saliency}(I)$~; \hfill $\Leftarrow$ obtain the saliency map \\ \For {\text{each pixel} $i \in I$} { $A_\mathbf{y} = \text{SeeNet}(I, \mathbf{y})$~; \hfill $\Leftarrow$ generate attention maps \\ $Q(0, i) \leftarrow 1 - D(i)$~; \hfill $\Leftarrow$ probability of position $i$ being background \\ \For {\text{each label} $c \in \mathbf{y}$} { $Q(c, i) \leftarrow \text{harm}(D(i), A_c(i))$~; \hfill $\Leftarrow$ harmonic mean \\ } } $G \leftarrow \argmax_{l \in \{0, \mathbf{y}\}}{Q}$ \; \end{algorithm}
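For concreteness, the following NumPy sketch mirrors the test-time attention fusion and Algorithm~\ref{alg:gen_gt} (an illustration only: the function names, array conventions, and small constants guarding against division by zero are assumptions rather than our released implementation):
\begin{verbatim}
import numpy as np

def fuse_attention(M_A, M_B, M_A_flip, M_B_flip):
    # test-time fusion: normalize each map to [0, 1], take the
    # pixel-wise max of S_A and S_B, then max over the flipped pass
    norm = lambda m: (m - m.min())/(m.max() - m.min() + 1e-8)
    M_F = np.maximum(norm(M_A), norm(M_B))
    M_H = np.maximum(norm(M_A_flip), norm(M_B_flip))[:, ::-1]
    return np.maximum(M_F, M_H)

def proxy_ground_truth(D, attentions, labels, M, w=1.0):
    # D: saliency map in [0, 1]; attentions[c]: attention map A_c;
    # labels: image-level label set y; rows of Q for absent classes
    # stay zero, so the argmax ranges over {0} + y as in Algorithm 1
    Q = np.zeros((M + 1,) + D.shape)
    Q[0] = 1.0 - D                         # background probability
    for c in labels:
        A_c = attentions[c]
        Q[c] = (w + 1.0)/(w/(A_c + 1e-8) + 1.0/(D + 1e-8))
    return np.argmax(Q, axis=0)            # proxy ground-truth G
\end{verbatim}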
\section{Experiments} To verify the effectiveness of our proposed self-erasing strategies, we apply our attention network to the weakly-supervised semantic segmentation task as an example application. We show that by embedding our attention results into a simple approach, our semantic segmentation results outperform the existing state of the art. \subsection{Implementation Details} \noindent\textbf{Datasets and evaluation metrics.} We evaluate our approach on the PASCAL VOC 2012 image segmentation benchmark \cite{everingham2015pascal}, which contains 20 semantic classes plus the background category. As done in most previous works, we train our model for both the attention and segmentation tasks on the training set, which consists of 10,582 images, including the augmented training set provided by \cite{hariharan2011semantic}. We compare our results with other works on both the validation and test sets, which have 1,449 and 1,456 images, respectively. As in previous works, we use the mean intersection-over-union (mIoU) as our evaluation metric. \noindent\textbf{Network settings.} For our attention network, we use VGGNet \cite{simonyan2014very} as our base model as done in \cite{zhang2018adversarial,wei2017object}. We discard the last three fully-connected layers and connect three convolutional layers with 512 channels and kernel size 3 to the backbone as in \cite{zhang2018adversarial}. Then, a 20-channel convolutional layer, followed by a global average pooling layer, is used to predict the probability of each category as done in \cite{zhang2018adversarial}. We set the batch size to 16, weight decay to 0.0002, and learning rate to 0.001, divided by 10 after 15,000 iterations. We run our network for a total of 25,000 iterations. For data augmentation, we follow the strategy used in \cite{He2016}.
Thresholds $\delta_h$ and $\delta_l$ in $S_B$ are set to 0.7 and 0.05 times the maximum value of the attention map fed into the C-ReLU layer, respectively. For the threshold used in $S_C$, the factor is set to $(\delta_h + \delta_l)/2$. For the segmentation task, to fairly compare with other works, we adopt the standard Deeplab-LargeFOV architecture \cite{chen2017deeplab} as our segmentation network, which is based on the VGGNet \cite{simonyan2014very} pre-trained on the ImageNet dataset \cite{russakovsky2015imagenet}. Similarly to \cite{chaudhry2017discovering}, we also try the ResNet \cite{He2016} version of the Deeplab-LargeFOV architecture and report the results of both versions. The network and conditional random field (CRF) hyper-parameters are the same as in \cite{chen2017deeplab}. \renewcommand{\addFig}[1]{\includegraphics[width=0.120\linewidth]{figures/comps/#1}} \renewcommand{\addFigs}[2]{\addFig{#1.jpg} & \addFig{#1_DCN_new.png} & \addFig{#1_erase_bg.png} & \addFig{#1_mask.png} & ~~ & \addFig{#2.jpg} & \addFig{#2_DCN_new.png} & \addFig{#2_erase_bg.png} & \addFig{#2_mask.png} } \begin{figure} \centering \footnotesize \setlength\tabcolsep{0.8pt} \begin{tabular}{cccccccccc} \addFigs{6}{10} \\ \addFigs{13}{17} \\ \addFigs{1}{2} \\ \addFigs{4}{11} \\ (a) & (b)& (c) & (d) & & (a) & (b)& (c) & (d) \\ \end{tabular} \caption{Visual comparisons among results of different network settings. (a) Source images; (b) Attention maps produced by our SeeNet; (c) Attention maps produced by ACoL \cite{zhang2018adversarial}; (d) Attention maps produced by setting 2 in \secref{sec:role}. The top two rows show results with small objects while the bottom two rows show results with large objects. As can be seen, our approach is able to well suppress the expansion of attentions to background regions and meanwhile generates relatively integral results compared to the other two settings. } \label{fig:att_comps} \end{figure} \noindent\textbf{Inference.} For our attention network, we resize the input images to a fixed size of $224 \times 224$ and then resize the resulting attention map back to the original resolution. For the segmentation task, following \cite{lin2016refinenet}, we perform multi-scale testing. For CRFs, we adopt the same code as in \cite{chen2017deeplab}. \subsection{The Role of Self-Erasing} \label{sec:role} To show the importance of our self-erasing strategies, we perform several ablation experiments in this subsection. Besides showing the results of our standard SeeNet (\figref{fig:arch}), we also implement two other network architectures and report their results. First, we re-implement the simple erasing network (ACoL) proposed in \cite{zhang2018adversarial} (setting 1). The hyper-parameters are all the same as the default ones in \cite{zhang2018adversarial}. This architecture does not use our C-ReLU layer and does not have our $S_C$ branch either. Furthermore, to stress the importance of the conditional sign-flipping operation, we also try zeroing the feature units associated with the background regions while keeping all other settings unchanged (setting 2). \noindent\textbf{The quality of attention maps.} In \figref{fig:att_comps}, we sample some images from the PASCAL VOC 2012 dataset and show the results of the different experimental settings. When localizing small objects, as shown in the top two rows of \figref{fig:att_comps}, our attention network is able to better focus on the semantic objects compared to the other two settings.
This is due to the fact that our $S_C$ branch helps better recognize the background regions and hence improves the ability of our approach to keep the attentions from expanding to unexpected non-object regions. When encountering large objects, as shown in the bottom two rows of \figref{fig:att_comps}, besides discovering where the semantic objects are, our approach is also capable of mining relatively integral objects compared to the other settings. The conditional reversion operations also protect the attention areas from spreading to the background areas. This phenomenon is especially clear in the monitor image of \figref{fig:att_comps}. \begin{table}[t!] \centering \footnotesize \setlength\tabcolsep{8pt} \renewcommand{\arraystretch}{1.2} \begin{tabular}{lccc} \midrule[1pt] Settings & Training set & Supervision & mIoU (val) \\ \midrule[1pt] 1 (ACoL \cite{zhang2018adversarial}) & 10,582 VOC & weakly & 56.1\% \\ 2 (w/o sign-flipping in C-ReLU) & 10,582 VOC & weakly & 55.8\% \\ 3 (Ours) & 10,582 VOC & weakly & 57.3\% \\ \midrule[1pt] \end{tabular} \caption{Quantitative comparisons with the two other settings described in \secref{sec:role} on the validation set of the PASCAL VOC 2012 segmentation benchmark \cite{everingham2015pascal}. The segmentation maps in this table are directly generated by the segmentation networks without multi-scale testing for fair comparison. CRFs are not used here either.} \label{tab:ablation} \end{table} \noindent\textbf{Quantitative results on PASCAL VOC 2012.} Besides visual comparisons, we also report the results of applying our attention maps to the weakly-supervised semantic segmentation task. Given the attention maps, we first carry out the series of operations described in \secref{sec:weakly}, yielding the proxy ground truths of the training set. We utilize the resulting proxy ground truths as supervision to train the segmentation network. The quantitative results on the validation set are listed in \tabref{tab:ablation}. Note that the segmentation maps are all based on single-scale testing and no post-processing tools, such as CRFs, are used. According to \tabref{tab:ablation}, one can observe that with the same saliency maps as background priors, our approach achieves the best results. Compared to the approach proposed in \cite{zhang2018adversarial}, we obtain a performance gain of 1.2\% in terms of mIoU score, which reflects the high quality of the attention maps produced by our approach. \begin{table}[t!] \centering \footnotesize \renewcommand{\arraystretch}{1.1} \begin{tabular}{lccccc} \midrule[1pt] \multirow{2}{*}{Methods} & \multirow{2}{*}{Publication} & \multirow{2}{*}{Supervision} & \multicolumn{2}{c}{mIoU (val)} & \multicolumn{1}{c}{mIoU (test)} \\ \cmidrule(l){4-5}\cmidrule(l){6-6} & & & w/o CRF & w/ CRF & w/ CRF \\ \midrule[1pt] CCNN \cite{pathak2015constrained} & ICCV'15 & 10K weak & 33.3\%& 35.3\%& - \\ % EM-Adapt \cite{papandreou2015weakly} & ICCV'15 & 10K weak & - & 38.2\%& 39.6\%\\ % MIL \cite{pinheiro2015image} & CVPR'15 & 700K weak & 42.0\%& - & - \\ % DCSM \cite{shimoda2016distinct} & ECCV'16 & 10K weak & - & 44.1\%& 45.1\%\\ SEC \cite{kolesnikov2016seed} & ECCV'16 & 10K weak & 44.3\%& 50.7\%& 51.7\%\\ % AugFeed \cite{qi2016augmented} & ECCV'16 & 10K weak + bbox & 50.4\%& 54.3\%& 55.5\%\\ % STC \cite{wei2016stc} & PAMI'16 & 10K weak + sal & - & 49.8\%& 51.2\%\\ % Roy et al. \cite{roycombining} & CVPR'17 & 10K weak & - & 52.8\%& 53.7\%\\ % Oh et al.
\cite{oh2017exploiting} & CVPR'17 & 10K weak + sal & 51.2\%& 55.7\%& 56.7\% \\ % AE-PSL \cite{wei2017object} & CVPR'17 & 10K weak + sal & - & 55.0\%& 55.7\% \\ % Hong et al. \cite{hong2017weakly} & CVPR'17 & 10K + video weak & - & 58.1\% & 58.7\% \\ WebS-i2 \cite{jin2017webly} & CVPR'17 & 19K weak & - & 53.4\% & 55.3\% \\ % DCSP-VGG16 \cite{chaudhry2017discovering} & BMVC'17 & 10K weak + sal & 56.5\% & 58.6\% & 59.2\% \\ DCSP-ResNet101 \cite{chaudhry2017discovering} & BMVC'17 & 10K weak + sal & 59.5\% & 60.8\% & 61.9\% \\ TPL \cite{kim2017two} & ICCV'17 & 10K weak & - & 53.1\% & 53.8\% \\ GAIN \cite{zhang2018adversarial} & CVPR'18 & 10K weak + sal & - & 55.3\% & 56.8\% \\ \midrule[1pt] SeeNet (Ours, VGG16) & - & 10K weak + sal & 59.9\% & 61.1\% & 60.7\% \\ SeeNet (Ours, ResNet101) & - & 10K weak + sal & 62.6\% & 63.1\% & 62.8\% \\ \midrule[1pt] \end{tabular} \caption{Quantitative comparisons with the existing state-of-the-art approaches on both validation and test sets. The word `weak' here means supervision with only image-level labels. `bbox' and `sal' mean that either bounding boxes or saliency maps are used. Unless otherwise stated, the methods listed here are based on VGGNet \cite{simonyan2014very} and the Deeplab-LargeFOV framework.} \label{tab:comps} \end{table} \subsection{Comparison with the State of the Art} In this subsection, we compare our proposed approach with existing weakly-supervised semantic segmentation methods that are based on image-level supervision. Detailed information for each method is shown in \tabref{tab:comps}. We report the results of each method on both the validation and test sets. From \tabref{tab:comps}, we can observe that our approach greatly outperforms all other methods when the same base model, such as VGGNet \cite{simonyan2014very}, is used. Compared to DCSP \cite{chaudhry2017discovering}, which leverages the same procedures to produce the proxy ground-truths for the segmentation network, we achieve a performance gain of more than 2\% on the validation set. This method uses the original CAM \cite{zhou2016learning} as its attention map generator while our approach utilizes the attention maps produced by our SeeNet, which indirectly demonstrates the better performance of our attention network compared to CAM. Compared further with adversarial erasing methods, such as AE-PSL \cite{wei2017object} and GAIN \cite{li2018tell}, our segmentation results are also much better than theirs. This again reflects the high quality of our attention maps. \renewcommand{\addFig}[1]{\includegraphics[height=0.098\linewidth,width=0.122\linewidth]{figures/failure/#1}} \renewcommand{\addFigs}[4]{\addFig{#1.jpg} & \addFig{#1.png} & \addFig{#2.jpg} & \addFig{#2.png} & \addFig{#3.jpg} & \addFig{#3.png} & \addFig{#4.jpg} & \addFig{#4.png} } \begin{figure} \centering \footnotesize \setlength\tabcolsep{0.8pt} \begin{tabular}{cccccccccc} \addFigs{7}{8}{9}{10} \\ \addFigs{4}{5}{12}{13} \\ \end{tabular} \caption{More visual results produced by our approach.
} \vspace{-10pt} \label{fig:failure} \end{figure} \renewcommand{\addFig}[1]{\includegraphics[width=0.160\linewidth]{figures/seg_results/#1}} \renewcommand{\addFigs}[2]{\addFig{#1.jpg} & \addFig{#1.png} & \addFig{#1_resnet_crf.png} & ~~ & \addFig{#2.jpg} & \addFig{#2.png} & \addFig{#2_resnet_crf.png} } \begin{figure*}[h] \centering \setlength\tabcolsep{0.4mm} \begin{tabular*}{\linewidth}{ccccccc} \addFigs{2007_001311}{2007_006841}\\ \addFigs{2008_000391}{2008_002212}\\ \addFigs{2007_009691}{2009_000825}\\ \addFigs{2008_002929}{2009_002317}\\ (a) & (b) & (c) & & (a) & (b) & (c) \\ \end{tabular*} \caption{Segmentation results produced by our approach. (a) Source images. (b) Ground-truth annotations. (c) Our segmentation results. Besides good examples (the top three rows), we also show a couple of bad cases (the bottom row) to help readers better understand our work.} \label{fig:visualRes} \end{figure*} \subsection{Discussions} To better understand the proposed network, we show some visual results produced by our segmentation network in \figref{fig:visualRes}. As can be seen, our segmentation network works well thanks to the high-quality attention maps produced by our SeeNet. However, despite the good results, there are still a small number of failure cases, some of which are shown in the bottom row of \figref{fig:visualRes}. These bad cases are often caused by the fact that semantic objects with different labels are frequently tied together, making it difficult for the attention models to precisely separate them. Specifically, as attention models are trained with only image-level labels, it is hard to capture perfectly integral objects. In \figref{fig:failure}, we show more visual results sampled from the PASCAL VOC dataset. As can be seen, some scenes have complex backgrounds or low contrast between the semantic objects and the background. Although our approach involves background priors to help confine the attention regions, when processing these kinds of images it is hard to localize the whole objects, and the quality of the initial attention maps is also essential. In addition, it is still difficult to deal with images containing multiple semantic objects, as shown in the first row of \figref{fig:failure}. Attention networks may easily predict which categories exist in an image, but localizing all the semantic objects is not easy. A promising way to solve this problem might be to incorporate a small number of pixel-level annotations for each category during the training phase to provide attention networks with boundary information. The pixel-level information would tell attention networks where the boundaries of semantic objects are, as well as provide accurate background regions, which would help produce more integral results. We leave this for future work. \section{Conclusion} In this paper, we introduce the idea of self-erasing into attention networks. We propose to extract background information from the initial attention maps produced by the initial attention generator by thresholding the maps into three zones. Given the roughly accurate background priors, we design two self-erasing strategies, both of which aim at prohibiting attention regions from spreading to unexpected regions. Based on the two self-erasing strategies, we build a self-erasing attention network to confine the observable regions to a potential zone that contains semantic objects with high probability.
To evaluate the quality of the resulting attention maps, we apply them to the weakly-supervised semantic segmentation task by simply combining them with saliency maps. We show that the segmentation results based on our proxy ground-truths greatly outperform existing state-of-the-art results. \subsubsection*{Acknowledgments} This research was supported by NSFC (NO. 61620106008, 61572264), the national youth talent support program, the Tianjin Natural Science Foundation for Distinguished Young Scholars (NO. 17JCJQJC43700), and the Huawei Innovation Research Program. { \footnotesize \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Conclusions} \label{sec:ccl} In this paper, we have proposed a set of good practices for designing and training an efficient and effective image representation model for the task of person re-identification. We showed through extensive experiments that our model outperforms all state-of-the-art approaches for this task by large margins, across four datasets and three metrics. We believe that our proposed approach can serve as a useful baseline for future contributions to the field. \section{Experiments} \label{sec:exp} \newcolumntype{K}[1]{>{\centering\arraybackslash}p{#1}} \subsection{Experimental details} \label{sec:details} \myparagraph{Datasets} We consider four datasets for evaluation. \noindent The {\bf Market-1501} dataset~\cite{zheng15scalable} (Market) is a standard person re-ID benchmark with images from 6 cameras of different resolutions. DPM detections \cite{felzenszwalb2010object} were annotated as containing one of the 1,501 identities, among which 751 are used for training and 750 for testing. The training set contains 12,936 images with 3,368 query images. The gallery set is composed of images from the 750 test identities and of distractor images, 19,732 images in total. There are two possible evaluation scenarios for this database, one using a single query image and one with multiple query images. \noindent The {\bf MARS} dataset~\cite{zheng16mars} is an extension of Market that targets the retrieval of gallery tracklets (\ie sequences of images) rather than individual images. It contains 1,261 identities, divided into a training (631 IDs) and a test (630 IDs) set. The total number of images is 1,067,516, among which 518,000 are used for training and the remainder for testing. \noindent The {\bf DukeMTMC-reID} dataset~\cite{zheng17unlabeled} (Duke) was created by manually annotating pedestrian bounding boxes every 120 frames of the videos from 8 cameras of the original DukeMTMC dataset~\cite{ristani16performance}. It contains 16,522 images of 702 identities in the training set, and 702 identities, 2,228 query and 17,661 gallery images in the test set. \noindent The {\bf Person Search} dataset \cite{xiao2017joint} (PS) differs from the previous three as it was created from images collected by hand-held cameras and frames from movies and TV dramas. It can therefore be used to evaluate person re-identification in a setting that doesn't involve a known camera network. It contains 18,184 images of 8,432 identities, among which 5,532 identities and 11,206 images are used for training, and 2,900 identities and 6,978 images are used for testing. \begin{table}[t!] \small \centering \begin{tabular}{cccK{1.5cm}K{1.5cm}} \toprule flip & crop & cut-out & Market & Duke \\ \midrule - & - & - & 75.9 & 69.6 \\ \cmark & - & - & 77.2 & 69.7 \\ - & \cmark & - & 76.8 & 69.4 \\ - & - & \cmark & \textbf{81.2} & \textbf{72.9} \\ \cmark & \cmark & \cmark & \textbf{81.2} & \textbf{72.9} \\ \bottomrule \end{tabular} \caption{{\bf Impact of different data augmentation strategies}. We report mean average precision (mAP) on Market and Duke.\label{tab:dataaugmentation}} \end{table} \begin{table}[t!] \small \centering \begin{tabular}{cK{1.5cm}K{1.5cm}} \toprule Largest dimension & Market & Duke \\ \midrule 256 pixels & 78.2 & 69.2 \\ 416 pixels & 81.2 & 72.9 \\ 640 pixels & 81.2 & 73.1 \\ \bottomrule \end{tabular} \caption{\textbf{Impact of the input image size}. We report mean average precision (mAP) on Market and Duke. 
\label{tab:inputsize}} \end{table} \myparagraph{Evaluation} We follow standard procedure for all datasets and report the mean average precision over all queries (mAP) and the cumulative matching curve (CMC) at rank-1 and rank-5 using the evaluation codes provided. \myparagraph{Training details} As mentioned in Section~\ref{sec:architecture}, for the convolutional part of our network we evaluate different flavors of ResNet \cite{he16deep}, concretely ResNet-50, ResNet-101 and ResNet-152 (we study their impact in the following section). For all of them, we start with the publicly available pre-trained model on ImageNet, and fine-tune the weights of the convolutional layers for person identification in the training set of the specific dataset. To do this, we follow standard practice and extract random-sized crops and then resize them to $224 \times 224$ pixels. We train with stochastic gradient descent (SGD) with momentum of $0.9$, weight decay of $5 \cdot 10^{-5}$, a batch size of $128$, and an initial learning rate of $10^{-2}$, which we decrease to $10^{-4}$. We use the weights of this pre-trained network for the convolutional layers of our architecture and we randomly initialize the fully-connected layer, whose output we set to 2,048 dimensions. We then train the ranking network using our Siamese architecture with input images of variable size, while fixing the largest side to $M$ pixels (whose influence we also study in the following section). We use again SGD with a batch size of $64$ and an initial learning rate of $10^{-3}$, which we decrease using a logarithmic rate that halves the learning rate every $512$ iterations. We observe in all our experiments that the model converges after approximately 4,096 iterations. For the hard triplet mining we set the number of random examples to $N=5,000$ and the number of updates to $k=16$. We set the margin of the triplet loss to $m=0.1$. Exactly the same training settings were used across all four datasets. \input{magic_table} \subsection{Ablative study} \label{sec:ablative} In this section we evaluate key design choices in our architecture and training strategy that relate to the good practices we propose in Figure~\ref{fig:recipe}. \myparagraph{Image transformation} We first focus on data augmentation (\#2 in Figure~\ref{fig:method}). As discussed in Section~\ref{sec:method}, we apply different transformations to the images at training time, namely flips, crops and cut-outs. Here we study how each transformation impacts the final results, reported in Table~\ref{tab:dataaugmentation}. We observe that cut-out has a very strong impact on the performance and renders the other two data augmentation schemes superfluous. We believe that this is because cut-out makes our representation much more robust to occlusion, and also avoids over-fitting on such little training data. Second, we consider the impact of the size of the input image (\#1). Images from the Market dataset have a fixed size of $256 \times 128$, while images from Duke have a variable size, with $256 \times 128$ pixels on average. In our experiments, we rescale images so that the largest image dimension is either 256, 416, or 640 pixels, without distorting the aspect ratio. We report results in Table~\ref{tab:inputsize} and observe that using a sufficiently large resolution is key to achieving the best performance. Increasing the resolution from 256 to 416 improves mAP by 3\%, while increasing it further to 640 pixels shows negligible improvement. 
We set the input size to 416 pixels for the rest of this paper. \input{table_sota} \myparagraph{Pooling} Table~\ref{tab:magic} (a) compares two pooling strategies (\#4) over the feature map produced by the convolutional layers. Since max pooling performs better than average pooling on both datasets, we use it for the rest of this paper. \myparagraph{Backbone architecture} Table~\ref{tab:magic} (b) compares different architectures for the convolutional backbone of our network (\#3). Results show that using ResNet-101 significantly improves the results compared with using ResNet-50 (about +5 mAP for both datasets). The more memory-hungry ResNet-152 only marginally improves the results. \myparagraph{Fine-tuning for classification} Table~\ref{tab:magic} (c) shows the importance of fine-tuning the convolutional layers for the identity classification task before using the ranking loss to adjust the weights of the whole network (\#6). As discussed in Section~\ref{sec:curriculum}, training the model on tasks of increasing difficulty is highly beneficial. \subsection{Comparison with the state of the art}\label{sec:sota_exp} Table~\ref{tab:marsket_results} compares our approach to the state of the art. Our method consistently outperforms all methods by large margins on all 4 re-ID datasets and all metrics. In particular, we achieve a mAP of 81.2\% on Market, an 8.1\% absolute improvement compared with the best published results \cite{chen17dpfl}. We also outperform \cite{hermans17indefense} by 12.0\% mAP on MARS. On the Duke dataset, we achieve a mAP of 72.8\%, outperforming the previous best reported mAP \cite{chen17dpfl} by 12.2\%. It is also important to note that our approach using ResNet-50, reported in Table~\ref{tab:magic} (b), still outperforms prior art by a significant margin, showing that all of our design choices play a crucial role, not only the backbone architecture. We also report the performance of our method with standard re-ranking\footnote{We expand both the query and the dataset by averaging the representation of the first 5 and 10 closest neighbors, respectively.} and we again see large improvements with respect to prior art that uses re-ranking, across all datasets and metrics. For example, for Market, we achieve a mAP of 90\%, 8.9\% above the best previously-reported mAP from \cite{hermans17indefense}. Looking closely at the approaches that report results on these datasets, we first note that our approach outperforms all recent methods that also use a variant of the triplet loss and hard triplet mining \cite{hermans17indefense, zhao17deeply}. As we show in this section, combining these key principles with the others mentioned in Figure~\ref{fig:recipe} is crucial for effective training of our image representation for re-ID. It is also worth emphasizing that our approach even outperforms recent works that propose complex models for aligning images based on attributes \cite{lin17improving} or body parts via pose estimation \cite{zheng17pose}, part detection \cite{li17learning, zhao17spindle} or attention modules \cite{zhao17deeply}, most of which require extra resources such as annotations or pre-trained detectors. As we discuss in the next section, our model is able to discriminate body regions without such additional architectural modules. We also report results for the Person Search dataset in the last column of Table~\ref{tab:marsket_results}.
This dataset differs from traditional re-ID datasets in that the different views of each person do not correspond to different cameras in a network. Nevertheless, our approach performs quite well in this different scenario, achieving a mAP of 92.6\%, which is a 14.7\% absolute improvement over the previous best reported result \cite{xiao2017joint}. This shows the generality of our approach. \subsection{Qualitative analysis}\label{sec:qual} In this section we perform a detailed analysis of our trained model's performance and inductive biases. \myparagraph{Re-identification examples} In Figure~\ref{fig:good_bad}, we show good results (top) and failure cases (bottom) for several query images from the Market dataset. We see that our method is able to correctly re-identify persons despite pose changes or strong scale variations. We observe that failure cases are mostly due to confusions between two people that are extremely difficult to differentiate even for a human annotator, or to unusual settings (for instance the person holding a backpack in front of him as in e.). \myparagraph{Localized responses and clothing landmark detection} In Section~\ref{sec:method}, we argued that, using our proposed approach, we obtain an embedding that captures invariance properties useful for re-ID. To qualitatively analyze this invariance, we use Grad-CAM~\cite{selvaraju17gradcam}, a method for highlighting the discriminative regions that CNN-based models activate to predict visual concepts. This is done by using the gradients of these concepts flowing into the final convolutional layer. Similar to~\cite{gordo2017beyond}, given two images, we select the 5 dimensions that contribute the most to the dot-product similarity between their representations. Then, for each image, we propagate the gradients of these 5 dimensions individually, and visualize their activations in the last convolutional layer of our architecture. In Figure~\ref{fig:pairwise}, we show several image pairs and their respective activations for the top 5 dimensions. We first note that each of these output dimensions is activated by fairly \textit{localized image regions} and that the dimensions often reinforce one another in that image pairs are often activated by the same region. This suggests that the similarity score is strongly influenced by localized image content. Interestingly, these localized regions tend to contain body regions that can inform on the type of clothing being worn. Examples in the figure include focus on the hem of a pair of shorts, the collar of a shirt, and the edge of a sleeve. Therefore, rather than focusing on aligning human body joints, the model appears to make decisions based on \textit{attributes of clothing} such as the length of a pair of pants or of a shirt's sleeves. This type of information has been leveraged explicitly for retrieval using the idea of ``fashion landmarks'', as described in \cite{liu2016deepfashion}. Finally, we observe that some of the paired responses go \textit{beyond appearance similarity} and respond to each other at a more abstract and semantic level. For instance, in the top right pair the strong response of the first dimension to the bag in the first image seems to pair with the response to the strap of the bag in the second image, the bag itself being occluded (see also the backpack response of Figure~\ref{fig:front} as another example). \begin{figure}[t!]
\includegraphics[width=\linewidth]{good_failure.png} \caption{For several queries from Market, we show the first 10 retrieved images together with the mAP and the number of relevant images (in brackets) of that query. Green (resp. red) outlines images that are relevant (resp. non-relevant) to the query.} \label{fig:good_bad} \end{figure} \begin{figure*}[t!] \includegraphics[width=\linewidth]{pairwise2.png} \caption{{\bf Matching regions} For pairs of matching images, we show maps for the top 5 dimensions that contribute most to the similarity.} \label{fig:pairwise} \end{figure*} \myparagraph{Implicit attention} We now qualitatively examine which parts of the images are highly influential, independently of the images they are matched with. To do so, given an image and its embedding, we select the first 50 dimensions with the strongest activations. We then propagate and accumulate the gradients of these dimensions, again using Grad-CAM \cite{selvaraju17gradcam}, and visualize their activations in the last convolutional layer in our architecture. As a result, we obtain a visualization that highlights parts of the images that, \emph{a priori}, will have the most impact on the final results. This can be seen as a visualization of the implicit attention mechanism that is at play in our learned embedding. We show such \textit{implicit attention masks} in Figure~\ref{fig:attention} across several images of the same person, for three different persons. We first observe that the model attends to regions known to drive attention in human vision, such as high-resolution text \cite{cerf2009faces} (e.g. in rows 1 and 2). We also note that our model shows properties of contextual attention, particularly when image regions become occluded. For example, when the man in the second row faces the camera, text on his t-shirt and the hem of his pants are attended to. However, when his back or side is to the camera, the model focuses more intently on the straps of his backpack. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{single_attention2.png} \caption{{\bf Implicit attention} We highlight regions that correspond to the most highly-activated dimensions of the final descriptor. They focus on unique attributes, such as backpacks, bags, or shoes.\label{fig:attention}} \end{figure} \subsection{Re-ID in the presence of noise} To test the robustness of our model, we evaluate it in the presence of noise using Market+500K \cite{zheng15scalable}, an extension of the Market dataset that contains an additional set of 500K distractors. To generate these distractors, the authors first collected ground-truth bounding boxes for persons in the images. They then computed the IoU between each predicted bounding box and ground-truth bounding box for a given image. A detection was labeled a distractor if its IoU with all ground-truth annotations was lower than 20\%. We evaluate our ResNet-50- and ResNet-101-based models, trained on Market, on this expanded dataset, while increasing the number of distractors from 0 to 500K. We selected distractors by randomly choosing them from the distractor set and adding them to the gallery set. Both models significantly outperform the current state-of-the-art results in the presence of this noise, as presented in Figure \ref{fig:noise}. Note that our best model, with 500K added distractors, performs on par with \cite{hermans17indefense}'s performance with 0 added distractors. \begin{figure}[t!]
\centering \includegraphics[width=0.85\linewidth]{distractors.png} \caption{Performance comparison in the presence of distractors.\label{fig:noise}} \vspace{-4mm} \end{figure} \section{Introduction} \label{sec:intro} Person re-identification (re-ID) is the task of correctly identifying individuals across different images captured under varying conditions, such as different cameras within a network. This task is of high practical value in a wide range of applications, including surveillance and content-based image retrieval. Unlike in classification, there is no overlap between the persons seen at training time and at test time. The problem has been heavily studied for more than two decades \cite{bedagkar2014survey, karanam2016comprehensive}, and most works addressing it have sought to propose either a suitable image representation, often with hand-crafted rules, or a suitable image similarity metric. Following the great success of deep learning in a large number of computer vision tasks, including image classification \cite{he16deep}, object detection \cite{ren2015faster}, and semantic segmentation \cite{dai2016instance}, a dominant paradigm in person re-ID has emerged, where methods use or fine-tune successful deep architectures for this retrieval task \cite{chen17beyond, hermans17indefense, su16deep}. This paradigm leads to compact global image representations well-suited for person re-identification. However, within this general framework there remain many design choices, in particular those related to network architectures, training data, and model training, that have a large impact on the effectiveness of the final person re-ID model. In this paper, we focus on identifying which of these design choices matter. \begin{figure}[t!] \includegraphics[width=\linewidth]{person_reid_splash_v2.png} \caption{By careful design of our deep architecture and training strategy (Section~\ref{sec:method}), our approach builds global representations that capture the subtle details required for person re-identification by training the embedding dimensions to respond strongly to discriminative regions/concepts such as the backpack or the hem of the shorts. Heatmaps indicate image regions that strongly activate different dimensions of the embedding. } \label{fig:front} \end{figure} One potential limitation of using global representations designed for image classification is the absence of any explicit mechanism to tackle the misalignment inherent to human pose variations and person detection errors. Consequently, many recent works in the literature have explored strategies to alleviate this problem by explicitly aligning body parts between images \cite{su17pose,zhao17spindle}, for example by using pre-trained part or human joint detectors, or by enriching the training set with auxiliary data such as attributes \cite{su16deep}. In this work, we adopt a different approach that combines a simple deep network with an appropriate training strategy, both of whose design choices were carefully validated on several datasets. The result is a simple yet powerful architecture that produces global image representations that, when compared using a dot-product, outperform state-of-the-art person re-identification methods by large margins, including more sophisticated methods that rely on attention models, extra annotations, or explicit alignment. Our contribution is threefold.
First, we identify a set of key practices to adopt, both for representing images efficiently and for training such representations, when developing person re-ID models (Section \ref{sec:method}). Many of these principles have been adopted in isolation in various related works; however, we show that significant performance improvements result when they are applied jointly. We carefully evaluate the different modeling and learning choices that impact performance. A key conclusion is that curriculum learning is critical for successfully training the image representation, and several of our principles reflect this. Second, our method significantly improves over previously published results on four standard benchmark datasets for person re-identification (Section~\ref{sec:sota_exp}). For instance, we show an absolute improvement of 8.1\% mAP on the Market-1501 dataset compared with the current state of the art. Third, we provide a qualitative analysis of the information captured by the visual embedding produced by our architecture. Our analysis illustrates, in particular, the effectiveness of the model in localizing image regions that are critical for re-ID without the need for explicit attention or alignment mechanisms (Section~\ref{sec:qual}). We also show how individual dimensions of the embedding selectively respond to localized semantic regions, producing a high similarity between pairs of images of the same person. We believe that our approach, which is easy to reproduce\footnote{To aid reproducibility we will release trained models and the evaluation code upon acceptance.}, can serve as a baseline of choice for future improvements in this field of research. \section{Learning a global representation for re-ID} \label{sec:method} We now describe the design of our deep architecture and our strategy for effectively training it for person re-ID. \subsection{Architecture design} \label{sec:architecture} The architecture of our image representation model in most ways resembles that of standard deep image recognition models. However, it incorporates several important modifications that proved beneficial for image retrieval tasks \cite{gordo16deep,Radenovic2016}. The model contains a backbone convolutional network, pre-trained for image classification, which is used to extract local activation features from input images of arbitrary size and aspect ratio. These local features are then max-pooled into a single vector, fed to a fully-connected layer, and $\ell_2$-normalized, producing a compact vector whose dimension is independent of the image size. Figure~\ref{fig:method} illustrates these different components and identifies the design choices (\#1 to \#4) that we evaluate in the experimental section (Section~\ref{sec:ablative}). Different backbone convolutional neural networks, such as ResNet \cite{he16deep}, ResNeXt \cite{xie16resnext}, Inception \cite{szegedy16inception}, and DenseNet \cite{huang2017densely}, can be used interchangeably in our architecture. In Section~\ref{sec:ablative}, we present results using several flavors of ResNet, and show the influence of the number of convolutional layers on the accuracy of our trained model. \subsection{Architecture training} A key aspect of the previously described representation is that all the operations are differentiable. Therefore, all the network weights (\ie from both convolutional and fully-connected layers) can be learned in an end-to-end manner.
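To make the pipeline concrete, the following is a minimal PyTorch-style sketch of the representation model just described (backbone, max-pooling, fully-connected projection, $\ell_2$-normalization). The ResNet-50 backbone and the embedding dimension are illustrative choices, not a prescription of our exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision.models as models

class ReIDEmbedder(nn.Module):
    def __init__(self, dim=2048):
        super().__init__()
        resnet = models.resnet50(pretrained=True)
        # Keep the convolutional trunk; drop the avgpool/fc head.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.fc = nn.Linear(2048, dim)

    def forward(self, x):                    # x: (B, 3, H, W), any H and W
        f = self.backbone(x)                 # (B, 2048, h, w) local features
        f = torch.amax(f, dim=(2, 3))        # global max-pooling
        f = self.fc(f)                       # learned projection
        return nn.functional.normalize(f, p=2, dim=1)  # unit-norm descriptor
\end{verbatim}
Since the descriptors are unit-normalized, the dot-product similarity used at test time is equivalent to cosine similarity.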
\myparagraph{Three-stream Siamese architecture} To train our representation end-to-end we use a three-stream Siamese architecture in which the weights are shared between all streams. This learning approach has been successfully used for person re-identification \cite{ding15deep,su16deep,hermans17indefense} as well as for different retrieval tasks \cite{gordo16deep,Radenovic2016}. Since the weights of the convolutional layers and the fully-connected layer are independent of the size of the input image, this Siamese architecture can process images of any size and aspect ratio. The three-stream architecture takes image triplets as input, where each triplet contains a query image $I_q$, a positive image $I^+$ (\ie an image of the same person as in the query image), and a negative image $I^-$ (\ie an image of a different person). Each stream produces a compact representation for each image in the triplet, leading to the descriptors $q$, $d^+$ and $d^-$, respectively. We then define the ranking triplet loss as \begin{equation} L(I_q,I^+,I^-) = \max (0, m + q^Td^- - q^Td^+), \end{equation} where $m$ is a scalar that controls the margin. This loss ensures that the embedding of the positive image $I^+$ is closer to the embedding of the query image $I_q$ than that of the negative image $I^-$, by at least a margin $m$. We now discuss key practices for improved training of our model. \myparagraph{Image size} Typically, training images are processed in batches and therefore resized to a fixed input size, which leads to distortions. We argue that images should be upscaled to increase the input image size, and that they should not be distorted. To this end, we process triplets sequentially, allowing a different input size for each image and allowing the use of high-resolution images even in the most memory-hungry architectures (e.g. ResNet-152 or DenseNet). To account for the reduced batch size, we accumulate the gradients of the loss with respect to the parameters of the network for every triplet, and only update the network once we achieve the desired effective batch size. \myparagraph{Pretraining} We found it crucial to use pre-trained models with our architecture. First, we follow standard practice and use networks pre-trained on ImageNet \cite{deng09imagenet}. To achieve the highest performance, it was also quite important to perform an additional pre-training step by fine-tuning the model on the training set using a classification loss, \ie to train the model for person identification. We discuss this further in Section~\ref{sec:curriculum} and in the ablative study in Section~\ref{sec:ablative}. \begin{figure}[t!] \centering \fcolorbox{black}{silver}{ \begin{minipage}[c]{0.95\linewidth} \footnotesize Good practices for person re-ID \begin{itemize} \item Pre-training for identity classification \item Sufficiently large image resolution \item State-of-the-art base architecture \item Hard triplet mining \item Dataset augmentation with difficult examples \end{itemize} \end{minipage} } \caption{Summary of good practices for building a powerful representation for person re-identification. \label{fig:recipe}} \end{figure} \myparagraph{Data augmentation} To augment the dataset we adopt an image ``cut-out'' strategy, which consists of adding random noise to random-sized regions of the image. We progressively increase the maximum size of these regions during training, producing progressively more difficult examples.
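The following is a minimal sketch of such a progressive cut-out transform; the helper and its parameters are hypothetical, not our exact implementation.
\begin{verbatim}
import numpy as np

def cutout(image, progress, max_frac=0.4, rng=np.random):
    """Fill a random region with noise; image: HxWxC uint8 array,
    progress: training progress in [0, 1]."""
    h, w = image.shape[:2]
    # The maximum region size grows linearly with training progress.
    max_h = max(1, int(h * max_frac * progress))
    max_w = max(1, int(w * max_frac * progress))
    ch, cw = rng.randint(1, max_h + 1), rng.randint(1, max_w + 1)
    y, x = rng.randint(0, h - ch + 1), rng.randint(0, w - cw + 1)
    out = image.copy()
    out[y:y + ch, x:x + cw] = rng.randint(0, 256,
                                          size=(ch, cw) + image.shape[2:])
    return out
\end{verbatim}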
This strategy improves the results because it serves two purposes: it is a data augmentation scheme that directly targets robustness to occlusion, and it regularizes the model by acting as a ``dropout'' mechanism at the image level. As a result, it mitigates the over-fitting inherent to the small size of the training set. We also considered standard augmentation strategies such as image flipping and cropping \cite{simonyan16vgg} but found no additional improvement, as we show in Section~\ref{sec:ablative}. \myparagraph{Hard triplet mining} Finally, mining \emph{hard} triplets is crucial for learning. As already argued in \cite{hermans17indefense, harwood17mining, wu17sampling}, when applied naively, training with a triplet loss can lead to underwhelming results. Here we follow the hard triplet mining strategy introduced in \cite{gordo2017end}. First, we extract the features of a set of $N$ randomly selected examples using the current model and compute the loss of all possible triplets. Then, to select triplets, we randomly select an image as a query and randomly pick a triplet for that query from among the 25 triplets with the largest loss. To accelerate the process, we only extract a new set of random examples after the model has been updated $k$ times with the desired batch size $b$. This is a simple and effective strategy which yields good model convergence and final accuracy, although other hard triplet mining strategies \cite{harwood17mining, wu17sampling} could also be considered. \subsection{Curriculum learning for re-ID} \label{sec:curriculum} Just as humans learn a set of concepts more easily when the concepts are presented in increasing order of complexity, curriculum learning has been shown to have a positive impact on the speed and quality of the convergence of deep neural networks \cite{bengio09curriculum}. We adopt this learning strategy in our approach. In particular, three of the design principles described in this section aim to progressively increase the difficulty of the task being learned by our model. First, our hard-negative mining strategy samples triplets that increase in difficulty as learning continues. Second, our pre-training strategy first trains our model to solve the task of person ID classification (which requires the model to first recognize individuals within a closed set of possible IDs) before training it for the more challenging task of re-identifying persons. Third, we observed that, when training with cut-out, we achieve the best results when the percentage of the image that is replaced by noise progressively increases. We believe that this general training principle is crucial to our results (reported in Section~\ref{sec:sota_exp}). Figure~\ref{fig:recipe} summarizes the good practices that we propose for both designing and training a deep architecture for person re-identification. \section{Related Work} \label{sec:rw} A vast literature addresses the person re-identification problem (the reader may refer to \cite{karanam2016comprehensive} for a recent survey). Traditionally, works on person re-ID sought to improve similarity scores between images \cite{kostinger12large,pedagadi13local,paisitkriangkrai15learning}, usually through metric learning.
These methods typically used color-based histograms as input feature vectors~\cite{mignon12pcca,kostinger12large,pedagadi13local,xiong14person,paisitkriangkrai15learning}, due to their discriminative power, particularly with respect to clothing, and their small memory footprint. Recent research on person re-identification has mostly focused on end-to-end training of deep architectures. This research has taken two main directions: improving generic deep image representations using sophisticated learning objectives appropriate for person re-identification, or designing task-specific image representations. \myparagraph{Task-specific learning objectives} This line of research most often involves proposing loss functions suitable for the re-ID task, and in particular for learning effective similarity metrics. \cite{zhou17point} proposes a metric to learn similarities between an image and a set of images, as opposed to learning similarities between pairs of images as is typical. \cite{zhou17efficient} proposes a method to locally modify, in an online manner at test time using only negative examples, a global similarity metric that was trained offline. \cite{sun17svdnet} adds an orthogonality constraint on the final fully-connected layer of a deep network in order to improve the discriminability of the learned features. \cite{Zhu2017} proposes to train a re-ID model using as a similarity metric a hybrid of the Euclidean, cosine, and Mahalanobis distances. \cite{zhang16learning} learns an embedding that aims to project images of the same person onto the same point in the embedding space. \cite{bai17scalable} proposes to learn a method to modify image embeddings such that the learned similarity metric between images is smooth in the underlying image manifold. \cite{lin17improving} proposes to learn an image embedding for re-ID by training a network to predict both person IDs and attribute labels. Most recent works use cross-entropy or softmax loss functions for training their person re-identification models. Others treat person re-ID not as a recognition problem but rather as a ranking problem, and use losses that are more appropriate for ranking. For example, the contrastive loss \cite{varior16gated} and the triplet loss or variants thereof \cite{ding15deep, su16deep, hermans17indefense, chen17beyond} have been used to train Siamese architectures. \cite{ding15deep} proposes a scheme to limit the size of triplet batches while still obtaining informative samples, while \cite{chen17beyond} proposes a quadruplet loss, which adds to the triplet loss a term that enforces a margin constraint on the distance between image pairs that are unrelated. \cite{hermans17indefense} shows that, with appropriate training settings, the triplet loss can outperform more complicated objective functions. In this work, we propose several good practices that can be viewed as encouraging curriculum learning (\emph{cf.} Section~\ref{sec:curriculum}) and that, when combined with the standard triplet loss, lead to large improvements over previous methods that have used variants of the triplet loss. \myparagraph{Task-specific representations} Many works in this line have focused on addressing the alignment problem via the use of part detectors, pose estimation, or attention models. Spatial transformer networks have been used to globally align images \cite{zheng17pan} and to localize salient image regions for finding correspondences~\cite{rahman21person}.
In a similar vein, \cite{zhao17deeply,Liu2017,liu17hydra} use multiple parallel sub-branches which learn, in an unsupervised manner, to consistently attend to different human body parts. \cite{su17pose} uses a pre-trained pose estimation network to provide explicit part localization, while a similar approach \cite{zhao17spindle} integrates a pose estimation network into its deep re-ID model. \cite{zheng17pose} uses joint localization to create a new image that contains only the body parts. Rather than localizing parts, \cite{Lin2017} represents images with fixed grids and learns cell correspondences across camera views. Several works have proposed multi-scale architectures with mechanisms for automatic scale selection \cite{qian17multi} or scale fusion \cite{chen17dpfl}. \cite{li17learning} combines a multi-scale architecture with unsupervised body part localization using spatial transformer networks. In Section~\ref{sec:exp}, we compare to such works and show that our learned representation can address alignment and scale variations without using additional scale, human parsing, or attention models. Other relevant areas of research in re-ID are data scarcity, re-ranking, and end-to-end re-ID. \cite{zheng17unlabeled} uses GANs to synthesize crops of pedestrians, which are used to train a deep re-ID network in a semi-supervised manner. \cite{zhong17re-ranking} applies \emph{k}-reciprocal nearest neighbor re-ranking to the re-ID problem. \cite{liu2017neural, xiao2017joint} both tackle end-to-end re-ID by incorporating person detection into their proposed pipelines.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Is it possible for an AI model to receive as input the facial image of a person and predict, with some degree of accuracy, how many years of life that person has left? Predicting remaining lifespan (RL) is a complex task that requires taking into account a wide range of factors, many of which can be inferred from the face. For instance, a person's biological age, as determined by genetics, health, and lifestyle, is often reflected in their appearance. Additionally, certain genetic defects, diseases, and health issues may be visible on the face. There are also differences in life expectancy among different sexes and racial/ethnic groups, which can potentially be determined from facial features. If a human expert is asked to estimate the RL of a person solely from a photograph of their face, they will likely proceed as follows: first, guess the person's age from their facial features; second, find the average life expectancy of the sex and race cluster to which the person belongs; and finally, calculate the RL as the difference between that life expectancy and the current age. Similarly, a deep learning model can perform this process within its ``black box.'' Such models have been shown to estimate age with a high level of accuracy \citep{AgboAjala2021}. Furthermore, when trained on a large and diverse dataset, such models can learn to cluster individuals not only by sex and race, but also by more subtle and fine-grained facial features, and determine the average life expectancy for each cluster. Therefore, in theory, a deep learning model has the potential to exceed human expert performance in predicting RL from the face. In this study, our goal was to develop an AI model that could use facial features to predict RL. To create a dataset for this task, we used Wikidata and Wikipedia to collect images of individuals who died of natural causes between 1990 and 2022 (inclusive), along with captions indicating when each image was taken. By comparing the date of death with the date the image was taken, we were able to determine the number of years each person lived after the image was taken, which we used as the label for each image. The images were then subjected to face detection and alignment before being used for training. The final dataset consisted of more than 24,000 images, which were used to fine-tune deep convolutional neural network (CNN) models. In order to improve the accuracy of the model, we also applied data augmentation techniques such as flipping and cropping of images. We experimented with various CNN models, including FaceNet \citep{schroff2015facenet}, VGGFace \citep{parkhi2015deep}, and VGGFace2 \citep{cao2018vggface2}, all pretrained on large datasets of faces for the task of face recognition. These models map the face into a high-dimensional numerical vector (face embedding). We then fed these embeddings to a few fully-connected layers, culminating in a final layer that generates the output (RL). To generate the output, we considered classification (argmax of a softmax layer), regression (a fully-connected layer with 1 unit), and expected value (sum of values multiplied by their respective softmax probabilities), à la \citet{rothe2015dex}. The VGGFace model with two layers of 1024 fully-connected (FC) nodes followed by a regression output yielded the best performance, with a Mean Absolute Error (MAE) of 8.3 years. While an MAE of 8.3 may not seem impressive at first glance, it is important to note that RL is inherently highly uncertain.
In comparison, for age estimation, which involves less uncertainty, state-of-the-art models have MAEs of approximately 2--3 years \citep{AgboAjala2021}. Our error analysis revealed that the younger the person is in the image, and hence the longer the expected RL, the less accurate the predictions of the model become. This is both because predicting the RL of a younger person is inherently more difficult and because our data includes relatively fewer examples of images taken at a very young age. There are a variety of potential applications for RL prediction from the face. For example, life insurance companies can use such models alongside conventional methods (mortality tables and physical evaluations) to estimate an individual's lifespan and determine insurance premiums accordingly. Additionally, RL prediction models can be used to estimate the loss of life, in terms of years (as opposed to just the number of people who died), resulting from deadly events. For example, using our model, we show that for the 278 people in our dataset who died of COVID-19 (these examples were not used for training and validation), the average loss of life was 3.3 years, i.e., on average, they would have lived another 3.3 years had the pandemic not occurred. Furthermore, such models can be used to demonstrate how health interventions and lifestyle changes, such as weight loss, can impact an individual's RL. By applying the model to pictures of celebrities before and after a significant weight loss, we can see an increase in RL. However, it is important to note that these predictions must be used with caution and in conjunction with other forms of analysis. The remainder of this paper is organized as follows: in Section \ref{sec:data}, we describe the process of collecting and cleaning the data. In Section \ref{sec:method}, we present the process of developing and training the model. Section \ref{sec:error-analysis} performs error analyses on the validation set predictions. Section \ref{sec:apps} demonstrates a few examples of how and where the model can be applied. Section \ref{sec:ethics} discusses the ethical implications of RL prediction models. Section \ref{sec:conclusion} provides concluding remarks. \section{Data} \label{sec:data} To train a model capable of predicting remaining lifespan (RL) from a facial image, we need a dataset of images of individuals and the number of years each person lived after the image was taken. To create this dataset, we queried Wikidata and scraped Wikipedia, as outlined in the remainder of this section. We have made our dataset openly accessible for other researchers to utilize in their own studies and experimentation.\footnote{Link to download the remaining lifespan dataset (images and labels): \href{https://github.com/fekrazad/remaining-lifespan-ai}{github.com/fekrazad/remaining-lifespan-ai}} An alternative suggestion might be to use profiles of deceased individuals on social media websites like Facebook to collect such data. However, it is important to keep in mind that such websites have only been around for the past 20 years and, as a result, the RL variable collected from them would be truncated. Additionally, the data on social media websites may not be as reliable or accurate as data from sources like Wikidata and Wikipedia.
We first queried Wikidata, a searchable knowledge graph, to find all persons who have both a date of birth and a date of death specified at least to the year level, who died between 1990 and 2022 (inclusive), and whose manner of death is either ``natural causes'' or not specified. It is worth noting that while we do not require the date of birth in order to create the RL labels, it is useful for analyzing the sample and conducting error analysis. In some cases, multiple dates of birth or death may be recorded for an individual due to contradictory sources or data entry errors. If the years of the conflicting dates differ, we excluded these entries from our dataset. In Wikidata, there are two properties related to death: ``manner of death'', which is more general (e.g., natural causes, accident), and ``cause of death'', which is more specific (e.g., a particular disease, an airplane crash). We only considered entries for which the manner of death is either ``natural causes'' or not specified. We then created a separate dataset that maps each cause of death to the most relevant manner of death. If a person's manner of death is not specified but the cause of death is specified and related to an unnatural cause, that entry was excluded from the dataset. While dying from COVID-19 is technically death from a natural cause, a global pandemic of this proportion is a once-in-a-century phenomenon, so we excluded individuals whose cause of death was attributed to the virus. We were able to easily obtain images with associated ``point in time'' properties from Wikidata entries. For entries without this property, we attempted to extract the year the image was taken by searching the caption for a single number in the 19** or 20** format. We assumed this number to be the year the image was taken. Unfortunately, there is not a one-to-one correspondence between Wikidata entries and Wikipedia articles. To obtain images and their captions for Wikidata entries with an empty image field, we scraped the corresponding English Wikipedia page. If the page included an image and the image's caption included a single number in the 19** or 20** format, we assumed this to be the year the image was taken. If the image year falls before the individual's date of birth or after their date of death, we discarded the image, as the scraped year is evidently incorrect. \figurename~\ref{fig:wiki-example} shows an example of a Wikipedia page where the image and image year can be scraped. Combined with the year of death, this gives us the RL label for the image. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{imgs/wiki-example.png} \caption{An example of successful data extraction from Wikidata/Wikipedia. The difference between the year the image was taken (1945) and the year of death (2020) gives us an RL label of 75 for the image.} \label{fig:wiki-example} \end{figure} There are multiple potential sources of error in the data. First, since the death date and image date are measured in years, the RL variable for an image can be off by a year. For example, if the image was taken in January (December) 2000 and the person died in December (January) 2001, the actual RL is almost two (zero), while in our data it will be recorded as one. Additionally, since Wikidata/Wikipedia entries are created and edited by volunteers, they can be erroneous or incomplete. We assumed that if a person has neither a manner nor a cause of death specified, they died of natural causes.
Upon manual inspection of a sample of entries, this assumption turned out not to hold for those who died at younger ages. For this reason, we limited the data to those who were 50 years or older at the time of their death. Fewer than 5\% of the entries fell below this threshold. Another source of error in our data is that we collected the images over a 33-year period (1990--2022) and from all countries. Across years and countries, life expectancy varies as a result of improvements in medical technology and access to healthcare. However, as shown in \figurename~\ref{fig:age-at-death-dist}, the distribution of age at death in our data (spanning 33 years and multiple countries) and the distribution of age at death in the US in 2019 (according to Social Security Administration data) are very similar. The concern about significant changes in life expectancy is the reason we did not collect data for deaths before 1990. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{imgs/age-at-death-dist.png} \caption{Age at death distribution} \label{fig:age-at-death-dist} \end{figure} For some Wikidata/Wikipedia entries, the facial image may not be a photograph of the person. For example, the image could be a drawing or an illustration of the person's face. Furthermore, the scraped number from the image caption may not reflect the actual year the image was taken. For example, a person's Wikipedia page may include the picture of a commemorative stamp with the face of the person at a younger age, while the caption mentions the year the stamp was issued. \figurename~\ref{fig:bad-examples} displays three instances of entries that lead to erroneous data. Based on a manual inspection of 200 random entries, we found that in less than 2\% of cases the scraped images or their years were inaccurate. \begin{figure} \centering \begin{tabular}{ccc} \includegraphics[width=0.33\linewidth]{imgs/bad-img-1.png} & \includegraphics[width=0.33\linewidth]{imgs/bad-img-2.png} & \includegraphics[width=0.33\linewidth]{imgs/bad-img-3.png} \end{tabular} \caption{Examples of Wikidata/Wikipedia entries that lead to faulty data} \label{fig:bad-examples} \end{figure} After scraping Wikidata/Wikipedia, we obtained about 45,000 images and their respective labels (RL). We used the Multi-task Cascaded Convolutional Neural Network (MTCNN) \citep{zhang2016joint} to detect faces in the images. We excluded images that did not contain any faces, contained more than one face, or for which the face detection algorithm was less than 98\% confident that it found a human face. After detecting the faces in the images, we aligned them by rotating each face so that the line connecting the two eyes is parallel with the horizon. We cropped the face in such a way that the distance between the eyes spans the middle 28\% of the cropped image and the eyes are 43\% from the top of the image. We also applied a simple algorithm to ensure that the pose in the image is at least partially frontal (i.e., both eyes are fully visible). While convolutional neural networks are effective at handling different poses, we wanted to make it easier for our model to extract features useful for RL prediction. To avoid distorting important information in the image, we cropped the faces in the shape of a square to match the input shape of most models. Additionally, we filtered out face crops whose width was less than 64 pixels, as these images were of too low quality for our model to learn from. After all the filtering, we were left with 24,167 examples.
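As an illustration of the filtering and alignment steps above, the following is a hypothetical sketch built on the open-source \texttt{mtcnn} package; it is not our exact pipeline, although the thresholds mirror the values quoted in the text.
\begin{verbatim}
import math
import numpy as np
from mtcnn import MTCNN
from PIL import Image

detector = MTCNN()

def extract_aligned_face(path, min_width=64, min_conf=0.98):
    img = Image.open(path).convert("RGB")
    faces = detector.detect_faces(np.asarray(img))
    # Keep only images with exactly one sufficiently confident detection.
    if len(faces) != 1 or faces[0]["confidence"] < min_conf:
        return None
    le = faces[0]["keypoints"]["left_eye"]   # (x, y) of the image-left eye
    re = faces[0]["keypoints"]["right_eye"]
    # Rotate about the left eye so the inter-ocular line is horizontal.
    angle = math.degrees(math.atan2(re[1] - le[1], re[0] - le[0]))
    img = img.rotate(angle, center=le)
    # Square crop: eye distance spans 28% of the width, eyes 43% from top.
    d = math.hypot(re[0] - le[0], re[1] - le[1])
    side = d / 0.28
    cx, cy = le[0] + d / 2, le[1]            # eye midpoint after rotation
    left, top = cx - side / 2, cy - 0.43 * side
    crop = img.crop((int(left), int(top),
                     int(left + side), int(top + side)))
    return crop if crop.width >= min_width else None
\end{verbatim}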
\section{Methodology} \label{sec:method} To identify the most suitable model for RL prediction, we fine-tune CNNs that have been pre-trained on very large datasets for face recognition. These models map an image of a face into a high-dimensional numerical vector, with the aim that the vectors created for different images of the same person are close (as measured by cosine similarity or Euclidean distance), while those of two different people are far apart. This vector is referred to as a face embedding. We utilize these face embeddings as features in our RL prediction model. In particular, the flattened embedding is followed by several fully-connected layers with rectified linear unit (ReLU) activations. To avoid over-fitting, both the embedding and the output of each of the fully-connected layers are passed through dropout layers. The model culminates in an output layer. For the final (output) layer, we consider three options. The first option is classification, as defined by the argmax of a softmax layer. However, since RL is an ordinal variable and a classification task does not take the order of the labels into account, we believe this to be an improper way to model RL. The second option is to use the softmax probabilities ($p_i$) for labels ranging from 0 to 100 ($L_i$) and calculate the RL as the expected value ($RL = \sum_{i=0}^{100} L_i \, p_i$). Lastly, the third option is to use regression, that is, a fully-connected layer of size one that generates the output. Our experiments showed that the regression output works best with our data. To select the most appropriate model to fine-tune, we evaluated FaceNet, VGGFace (a version of VGG16 trained on a dataset of face images), and VGGFace2 (ResNet-50 trained on the VGGFace2 dataset). We conducted a simple and rapid experiment to determine which model's face embeddings were most suitable for our particular task. We froze the weights of these models and added two FC layers of 64 and 32 units, followed by an output layer of 1 unit. We trained each model for 10 epochs and compared the results. As shown in \figurename~\ref{fig:compare-models}, VGGFace had the best performance among the three models. VGGFace (VGG16) has been successfully used for similar tasks, such as age estimation, in the past. VGGFace2 did not perform well, as expected, since it was specifically designed and trained to be age-invariant (i.e., to recognize a person as they age). Therefore, the embeddings it generates do not change drastically with age, which is not beneficial for our purposes. As a result, we selected VGGFace as the basis for building our RL prediction model. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.50\linewidth]{imgs/compare-train.png} & \includegraphics[width=0.50\linewidth]{imgs/compare-valid.png} \end{tabular} \caption{Comparison of the performance of VGGFace, VGGFace2, and FaceNet embeddings for RL prediction, a preliminary step to select the model used for further experimentation.} \label{fig:compare-models} \end{figure} After choosing VGGFace as the embedding-generator model, we conducted experiments with different configurations. We experimented with various numbers of FC layers added on top of it (ranging from 1 to 3) before the output layer. We also tried different methods for generating the output (expected value, regression), different layer sizes (128, 512, 1024, and 4096), and various hyperparameters, such as the learning rate and batch size, to optimize the performance of the model.
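A minimal Keras sketch of the best-performing configuration (VGGFace base, two FC layers of 1024 units with dropout, and a 1-unit regression output) is shown below. It assumes the community \texttt{keras\_vggface} implementation of VGGFace (or an equivalent weight loader compatible with \texttt{tf.keras}); the dropout rate and learning rate are illustrative, not our exact values.
\begin{verbatim}
import tensorflow as tf
from keras_vggface.vggface import VGGFace  # assumed VGGFace weight loader

base = VGGFace(model="vgg16", include_top=False,
               input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # linear probing first; top layers unfrozen later

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),  # regression output: predicted RL in years
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss=tf.keras.losses.Huber(),
              metrics=["mae"])
\end{verbatim}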
To fine-tune the model, we first train only the added layers while keeping the transferred weights frozen (linear probing). Then, we incrementally unfreeze and train additional layers with lower learning rates. This approach, known as ``fine-tuning with freezing'', allows us to leverage the pre-trained weights in the initial layers and fine-tune the model on our specific task, while avoiding the risk of catastrophic forgetting. \citet{https://doi.org/10.48550/arxiv.2202.10054} study 10 distribution shift datasets and show that linear probing followed by fine-tuning outperforms (out of distribution) both linear probing alone and full fine-tuning alone. We divide the sample into training and validation sets with a 70/30 split. As seen in \figurename~\ref{fig:rl-dist}, RL is not uniformly distributed in the data. We have relatively more examples of short RL (RL < 10) and very few examples of long RL (RL > 70). To prevent the model from becoming biased towards the majority, we divide the sample into 15 bins, with the RL being in [0,5), [5, 10), ..., [70, $\infty$). When training the model (but not for validation), we oversample the bins whose number of examples is fewer than that of the bin with the highest number of examples. This way, in each epoch, we have a virtually equal number of examples for different RL levels. To clarify, these bins are only used for data balancing, not for classification. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{imgs/rl-dist.png} \caption{Distribution of RL in the data} \label{fig:rl-dist} \end{figure} Before being used for training, the images are preprocessed by being scaled (to floats between 0 and 1) and resized to (244, 244). The dataset includes both colored and grayscale images. While color is likely to be relevant for RL estimation, we convert all images to grayscale to make the dataset consistent. Since the model requires a 3-channel input, the single channel of grayscale images is repeated 3 times. To augment training images, we apply random light adjustments and random horizontal flips to the images. This is followed by a random crop of (224, 224), which is the required input size for the VGGFace model. \figurename~\ref{fig:augment} illustrates an original image (fetched from Wikipedia), the image resulting from cropping the face and alignment (as described in Section \ref{sec:data}), and a sample of augmented images that are used in training. Data augmentation increases the size of the dataset (especially for the bins with fewer examples) and introduces variations to the images, which can help the model generalize better to unseen data. Validation images are only converted to grayscale and resized to (224, 224). This is done to keep the validation set as similar as possible to the real-world scenarios in which the model will be applied. \begin{figure} \centering \begin{tabular}{ccc} \includegraphics[width=0.33\linewidth]{imgs/example-original.png} & \includegraphics[width=0.33\linewidth]{imgs/example-cropped.png} & \includegraphics[width=0.33\linewidth]{imgs/example-aug.png} \end{tabular} \caption{Original image, after cropping the face and alignment, and after converting to grayscale and augmentation.} \label{fig:augment} \end{figure} The models are trained using the Huber loss function. Instead of mean squared error, we use the Huber loss because it is less sensitive to outliers, meaning it will not put too much weight on the difficult cases, which may be the result of faulty data.
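For reference, with residual $r = y - \hat{y}$ and threshold $\delta$ (a tunable hyperparameter), the Huber loss takes the standard form
\begin{equation*}
L_\delta(r) =
\begin{cases}
\frac{1}{2} r^2 & \text{if } |r| \le \delta, \\
\delta \left( |r| - \frac{1}{2}\delta \right) & \text{otherwise,}
\end{cases}
\end{equation*}
which is quadratic for small residuals and only linear for large ones, so mislabeled images contribute less to the gradient than they would under mean squared error.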
In this way, the Huber loss combines the advantages of mean squared error and mean absolute error: it gives less weight to outliers and more weight to samples whose predictions are close to the target, which helps prevent the model from over-fitting to faulty examples and yields more robust predictions. The models were trained on Google Colab GPUs using the Keras/TensorFlow framework. After trying different architectures and hyper-parameters, we found that the VGGFace embeddings followed by 2 FC layers of 1024 units and ending with a regression layer of 1 unit achieved the best performance (MAE of 8.3 years on validation data). In Figures \ref{fig:epochs-loss} and \ref{fig:epochs-mae}, the loss and MAE history for the training and validation sets are provided. For epochs 1 to 10, only the layers added on top of the embeddings were trained. For epochs 11 to 20 (after the vertical line), two additional convolutional layers were unfrozen and trained. Unfreezing further layers beyond that did not improve the performance, so we stopped fine-tuning at that point. The final model can be used to predict the RL of unseen data. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{imgs/epochs-loss.png} \caption{The progression of training and validation loss over epochs. The vertical line indicates when the last 2 convolutional layers became unfrozen. } \label{fig:epochs-loss} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{imgs/epochs-mae.png} \caption{The progression of training and validation MAE over epochs. The vertical line indicates when the last 2 convolutional layers became unfrozen. } \label{fig:epochs-mae} \end{figure} When comparing the loss and MAE between the training and validation sets, keep in mind that, because of data balancing in the training set, large RLs, which are inherently difficult cases, are oversampled. This is why the training loss/MAE starts off higher than the validation loss/MAE. However, as a result of oversampling, the images with large RL in the training set are augmented repetitions of a small set of originals. Thus, the model quickly learns to perform well on them, but that learning does not generalize well to the large-RL cases in the validation set. This is why the training loss/MAE drops below that of the validation set after a few epochs. \section{Error Analysis} \label{sec:error-analysis} To understand for which cases the model has the most difficulty predicting RL, we predict the RL values for the validation set and analyze the errors. In \figurename~\ref{fig:err-dist}, the distribution of errors (true RL $-$ predicted RL) is depicted. The figure shows that the errors are concentrated around zero, suggesting that the model is not making systematic errors of the kind that would point to a problem in the data or the model architecture. The symmetrical distribution of errors indicates that the model makes errors of similar magnitude in both directions, over-predicting and under-predicting RL equally. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/err-dist.png} \caption{Distribution of prediction errors (true $-$ predicted) in the validation set} \label{fig:err-dist} \end{figure} In Figures \ref{fig:good-preds} and \ref{fig:bad-preds}, we provide examples of images from the validation set where the model performs very well and images where the model performs poorly.
Examination of the demonstrated ``success'' examples showed that they all had correct labels and that the pose in these pictures is almost completely frontal. Among the demonstrated ``failure'' examples, the first (top left) and last (bottom right) turned out to have incorrect labels. In the second image of the top row, the person died of HIV/AIDS (when it was still a pandemic, in the early 1990s) at a young age. In the second and third images of the bottom row, the faces have a three-quarter pose. We can conclude that by improving the data through manual removal of images with erroneous labels, and by increasing the number of examples with varied poses, we are likely to improve performance. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/best-preds.png} \caption{Examples of validation images where the model's predictions are accurate} \label{fig:good-preds} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/worst-preds.png} \caption{Examples of validation images where the model's predictions are far off} \label{fig:bad-preds} \end{figure} Figure \ref{fig:mae-against-vars} illustrates the relationship between MAE and several variables in the validation set, including: \begin{itemize} \item RL \item Age at image \item Age at death \item Original (before resizing) image width \end{itemize} The horizontal line in the figure represents the MAE for the entire validation set (8.3), providing a reference point for determining whether the model's performance is better or worse than average for specific variable values. The MAE vs. RL graph illustrates that the model exhibits its best performance when the person lived less than 20 years after the image was taken. Conversely, when the actual RL is very large (the person died more than 60 years after the image was taken), the model's predictions exhibit large errors. Similarly, the MAE vs. Age at Image graph indicates that when the person in the image is very young (less than 20 years old), the model performs poorly. This behavior is anticipated, as predicting the RL of a young person is intrinsically difficult. Moreover, the dataset contains relatively few images of young people, which exacerbates the problem. The MAE vs. Age at Death graph illustrates that the model's performance is suboptimal when the person in the image passed away at a very young or very old age. The model exhibits its best performance for individuals who died between the ages of 70 and 90. This suggests that the model may have difficulty predicting RL for individuals outside of this age range, which could be due to a lack of training examples in those age ranges or to the intrinsic difficulty of predicting RL for those age groups. Lastly, the MAE vs. Cropped Face Width graph indicates that there is no apparent correlation between the size of the images (before being resized to match the model's required input size) and MAE. This suggests that the model's performance is not significantly affected by the width of the face crops. However, it is important to note that this does not necessarily mean that image quality is unimportant, as width may not be a good metric of image quality.
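The per-variable breakdown shown in Figure~\ref{fig:mae-against-vars} can be reproduced with a few lines of pandas; the dataframe and its column names here are hypothetical stand-ins for our validation predictions.
\begin{verbatim}
import pandas as pd

df = pd.read_csv("validation_predictions.csv")
# Assumed columns: rl_true, rl_pred, age_at_image, age_at_death, width
df["abs_err"] = (df["rl_true"] - df["rl_pred"]).abs()

overall_mae = df["abs_err"].mean()   # reference line in the plots

# MAE within 10-year bins of a chosen variable, e.g. age at image.
bins = pd.cut(df["age_at_image"], bins=range(0, 101, 10))
print(df.groupby(bins)["abs_err"].mean())
\end{verbatim}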
\begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.50\linewidth]{imgs/err-rl.png} & \includegraphics[width=0.50\linewidth]{imgs/err-age-at-img.png} \\ \includegraphics[width=0.50\linewidth]{imgs/err-age-at-death.png} & \includegraphics[width=0.50\linewidth]{imgs/err-width.png} \end{tabular} \caption{MAE calculated for different levels of RL, age at image, age at death, and image width. The horizontal line represents the MAE for the entire validation set.} \label{fig:mae-against-vars} \end{figure} \section{Applications} \label{sec:apps} RL prediction using facial images has various potential applications. One example is that life insurance companies could use a facial RL model in conjunction with their traditional methods to obtain a more accurate understanding of a person's life expectancy and set premiums accordingly. As seen in Figure \ref{fig:prez}, the model can estimate the RL of living US presidents from recent photographs of them. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/test-presidents.png} \caption{RL predictions of the model for the living U.S. presidents using their year 2022 photographs} \label{fig:prez} \end{figure} The model can also be used to estimate the loss of life (in years) for victims of fatal accidents or pandemics. As an experiment, we applied the model to predict the RL of 278 individuals collected from Wikidata/Wikipedia whose sole cause of death was recorded as COVID-19 and whose image was taken after the year 2000. It is important to note that individuals whose cause of death included COVID-19 were excluded from the training and validation datasets. Figure \ref{fig:covid} shows the distribution of the actual RL (the years between when the image was taken and when they died of COVID-19) and their predicted RL (how much longer they would have lived had the pandemic not occurred). It is apparent that the predicted RL distribution lies to the right of the actual RL distribution. Our analysis found that, on average, these individuals would have lived 3.3 more years had the pandemic not occurred. This experiment illustrates the potential use of the model in assessing the impact of pandemics on human life expectancy. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/covid.png} \caption{Actual RL of a sample of individuals who died of COVID-19 vs. the RL prediction of the model had the pandemic not happened, using their photographs taken since 2000. These individuals would have lived, on average, 3.3 years longer.} \label{fig:covid} \end{figure} Another potential use of the model is to evaluate the impact of lifestyle changes and assess the effectiveness of different health interventions by measuring how much they increase the RL. To demonstrate this, we applied the model to the images of 5 celebrities before and after significant weight loss, obtained from an online article\footnote{``The craziest celebrity weight loss transformations of all time'' from Page Six: \href{https://pagesix.com/article/the-craziest-celebrity-weight-loss-transformations-of-all-time/}{link to the article}}. As seen in Figure \ref{fig:celeb}, the model suggests that these individuals increased their RL after the weight loss (after accounting for the time elapsed between the two images).
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/celeb.png} \caption{RL gain in celebrities as a result of weight loss} \label{fig:celeb} \end{figure} Additionally, the model could be used for demographic studies and research in aging and longevity, providing valuable insights into the aging process. \section{Ethical Considerations} \label{sec:ethics} Death is a complex and unpredictable process, and it is unlikely that we will ever be able to build models that can predict RL from the face with a very high degree of accuracy. Regardless, the ethical implications of making such predictions should be taken into consideration. There is a possibility that such predictions could cause harm or distress to the individuals concerned, and they raise potential privacy concerns. As mentioned previously, RL prediction models (although not ones using facial images) are already in use, explicitly or implicitly, in industries such as life insurance and reverse mortgages, which rely on making precise estimates of a client's RL in order to be profitable. The argument that having a rough estimate of one's RL causes distress is similar to the case made against 23andMe's ``health risk report'', which provided predictions on whether a person would develop diseases such as Alzheimer's or Parkinson's based on their DNA. In that case, the FDA granted authorization for the company to provide the report to clients who opt into it, after taking into account the potential benefits and risks of providing this information. Having a rough estimate of one's own RL could make a person more cognizant of their limited time and lead them to live a more fulfilling life or to change unhealthy habits in order to increase their RL. It could also be used to identify individuals at high risk of mortality, enabling early interventions and improved health outcomes. \section{Conclusion} \label{sec:conclusion} In summary, predicting RL using only a person's face is a novel task that has not been attempted before, primarily due to a lack of data. To overcome this challenge, we constructed a dataset that can be used for this purpose and made it publicly available for other researchers to experiment with. By fine-tuning the VGGFace model, we achieved an MAE of approximately 8.3 years, which we believe to be satisfactory given the highly unpredictable nature of death. However, it is worth noting that the model's accuracy decreases for images of young individuals. We also demonstrated a few potential applications of the model, including estimating the loss of life due to accidents or pandemics and estimating the gains in RL after positive lifestyle changes. Overall, using AI for RL prediction solely from the face is an inherently challenging task that has the potential to benefit businesses and individuals. Further research in this area, especially using more comprehensive and diverse data, can provide valuable insights into the aging process and help improve the longevity of humankind. \bibliographystyle{unsrtnat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Conclusion} In summary, we propose RLlib Flow\xspace, a hybrid actor-dataflow programming model for distributed RL. We designed RLlib Flow\xspace to simplify the understanding, debugging, and customization of the distributed RL algorithms that RL developers require. RLlib Flow\xspace provides comparable performance to reference algorithms implemented directly on low-level actor and RPC primitives, enables complex multi-agent and meta-learning use cases, and reduces the lines of code for distributed execution in a production RL library by 2--9$\times$. RLlib Flow\xspace is available as part of the open source RLlib project, and we hope it can also help inform the design of future RL libraries. \section{Acknowledgement} In addition to NSF CISE Expeditions Award CCF-1730628, this research is supported by gifts from Amazon Web Services, Ant Group, Ericsson, Facebook, Futurewei, Google, Intel, Microsoft, Scotiabank, and VMware. \section{Evaluation} \label{sec:evaluation} In our evaluation, we seek to answer the following questions: \begin{enumerate} [leftmargin=*] \item What is the quantitative improvement in code complexity with RLlib Flow\xspace? \item How does RLlib Flow\xspace compare to other systems in terms of flexibility and performance for RL tasks? \end{enumerate} \subsection{Code Complexity} \textbf{Lines of Code}: In \tab{loc-algo} we compare the original algorithm implementations in RLlib\xspace with their ports to RLlib Flow\xspace. No functionality was lost in the RLlib Flow\xspace re-implementations. We count all lines of code directly related to distributed execution, including comments and instrumentation code, but not including utility functions shared across all algorithms. For completeness, for RLlib Flow\xspace we include both a minimal (\texttt{RLlib Flow\xspace}) and a conservative (\texttt{+shared}) estimate of lines of code. The conservative estimate includes lines of code in shared operators. Overall, we observe between a 1.9--9.6$\times$ (optimistic) and 1.1--3.1$\times$ (conservative) reduction in lines of code with RLlib Flow\xspace. The most complex algorithm (IMPALA) shrank from 694 to 89--362 lines. \input{figTex/loc-algorithm} \textbf{Readability}: We believe RLlib Flow\xspace provides several key benefits for the readability of RL algorithms: \begin{enumerate}[leftmargin=*] \item The high-level dataflow of an algorithm is visible at a glance in very few lines of code, allowing readers to understand and modify the execution pattern without diving deep into the execution logic. \item Execution logic is organized into individual operators, each of which has a consistent input and output interface (i.e., it transforms an iterator into another iterator; see the sketch at the end of this subsection). In contrast to building on low-level RPC systems, developers can decompose their algorithms into reusable operators. \item Performance concerns are isolated in the lower-level parallel iterator library. Developers do not need to deal with low-level concepts such as batching or flow control. \end{enumerate} \textbf{Flexibility}: As evidence of RLlib Flow\xspace's flexibility, an undergraduate was able to implement several model-based (e.g., MB-MPO) and meta-learning algorithms (e.g., MAML), neither of which fit into the execution patterns previously available in RLlib\xspace. This was only possible due to the flexibility of RLlib Flow\xspace's model. RLlib Flow\xspace captures MAML in 139 lines compared to a baseline of $\approx$370 lines (\tab{loc-algo}). Detailed discussion can be found in~\sect{maml}.
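To make the operator interface concrete, the following is a schematic, framework-agnostic sketch using plain Python generators. It illustrates the iterator-in/iterator-out contract but is not the actual RLlib Flow\xspace API; \texttt{workers} and \texttt{learner} are assumed stand-ins for rollout actors and a training policy.
\begin{verbatim}
def parallel_rollouts(workers):
    """Yield an endless stream of experience batches from rollout workers."""
    while True:
        for w in workers:
            yield w.sample()

def train_one_step(batches, learner):
    """Transform a stream of batches into a stream of learner metrics."""
    for batch in batches:
        yield learner.update(batch)

def report_metrics(metrics, every=100):
    for i, m in enumerate(metrics):
        if i % every == 0:
            print(f"step {i}: {m}")
        yield m

# The high-level dataflow of a synchronous algorithm then reads in one line:
# for _ in report_metrics(train_one_step(parallel_rollouts(workers),
#                                        learner)): ...
\end{verbatim}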
\subsection{Microbenchmarks and Performance Comparisons} For all the experiments, we use a cluster with an AWS p3.16xlarge GPU head instance and additional m4.16xlarge worker instances. All machines have 64 vCPUs and are connected by a 25Gbps network. More experiments for different RL algorithms can be found at \url{https://github.com/ray-project/rl-experiments}. \input{figTex/benchmark} \textbf{Sampling Microbenchmark}: We evaluate the data throughput of RLlib Flow\xspace in isolation by running RL training with a dummy policy (with only one trainable scalar). \fig{sample-efficiency} shows that RLlib Flow\xspace achieves slightly better throughput due to small optimizations such as batched RPC wait, which are easy to implement in a common way across multiple algorithms in RLlib Flow\xspace. \textbf{IMPALA Throughput}: In \fig{throughput} we benchmark IMPALA, one of RLlib\xspace's high-throughput RL algorithms, and show that RLlib Flow\xspace achieves similar or better end-to-end performance. \textbf{Performance of Multi-Agent Multi-Policy Workflow}: In \fig{rllib-performance}, we show that the workflow of the two-trainer example (\fig{twotrainer}) achieves close to the best theoretical performance possible when combining the two workflows (calculated via Amdahl's law). This benchmark was run in a multi-agent Atari environment with four agents per policy, and shows that RLlib Flow\xspace can be practically used to compose complex training workflows. \textbf{Comparison to Spark Streaming}: \label{sec:spark-streaming} Distributed dataflow systems such as Spark Streaming~\cite{zaharia_spark_stream_2013} and Flink~\cite{Carbone2015ApacheFS} are designed for collecting and transforming live data streams from online applications (e.g., event streams, social media). Given the basic \textit{map} and \textit{reduce} operations, we can implement synchronous RL algorithms in any of these streaming frameworks. However, without consideration for the requirements of RL tasks (Section \ref{sec:streaming}), these frameworks can introduce significant overheads. In \fig{rllib-spark} we compare the performance of PPO implemented in Spark Streaming and in RLlib Flow\xspace. Implementation details are in \apdx{spark}. \section{Distributed Reinforcement Learning} \label{sec:challenges} We first discuss the relevant computational characteristics of distributed RL algorithms, starting with the common \textit{single-agent training} scenario, where the goal is to optimize a single agent's performance in an environment, and then discuss the computational needs of the emerging \textit{multi-agent}, \textit{model-based}, and \textit{meta-learning} training patterns. \subsection{RL Algorithm Basics} The goal of an RL algorithm is typically to improve the performance of a \textit{policy} with respect to an objective defined through an \textit{environment} (e.g., a simulator). The policy is usually defined as a deep neural network, which can range from several KB to several hundred MB in size. RL algorithms can generally be broken down into the following basic steps: \input{figTex/rl-basic} \textbf{Rollout}: To generate experiences, the policy, which outputs actions to take given environment observations, is run against the environment to collect batches of data.
The batch consists of observations, actions, rewards, and episode terminals, and can vary in size (10s to 10000s of steps). \textbf{Replay}: On-policy algorithms (e.g., PPO \cite{schulman2017proximal}, A3C \cite{mnih2016asynchronous}) learn from new experiences collected from the current policy. On the other hand, off-policy algorithms (e.g., DQN \cite{mnih2015humanlevel}, SAC \cite{softlearning}) can also leverage experiences from past versions of the policy. For these algorithms, a \textit{replay buffer} of past experiences can be used. The size of these buffers ranges from a few hundred to millions of steps. \textbf{Optimization}: Experiences, either freshly collected or replayed, can be used to improve the policy. Typically this is done by computing and applying a gradient update to the policy and value neural networks. While in many applications a single GPU suffices to compute gradient updates, it is sometimes desirable to leverage multiple GPUs within a single node, to compute gradients asynchronously on multiple CPUs \cite{mnih2016asynchronous}, or to use many GPUs spread across a cluster \cite{wijmans2020ddppo}. \subsection{RL Algorithm Variants} \textbf{Single-Agent Training.} Training a single RL agent---the most basic and common scenario---consists of applying the steps of rollout, replay, and optimization repeatedly until the policy reaches the desired performance. Synchronous algorithms such as A2C \cite{mnih2016asynchronous} and PPO apply the steps strictly sequentially; parallelism may be leveraged internally within each step. Asynchronous variations such as A3C \cite{mnih2016asynchronous}, Ape-X \cite{horgan2018distributed}, APPO \cite{luo2020impact}, and IMPALA \cite{espeholt2018impala} pipeline and overlap the rollout and optimization steps asynchronously to achieve higher data throughput. Rate limiting~\cite{hoffman2020acme} can be applied to control learning dynamics in the asynchronous setting. \textbf{Multi-Agent Training.} In multi-agent training, there are multiple acting entities in the environment (e.g., cooperating or competing agents). While there is a rich literature on multi-agent algorithms, we note that the \textit{dataflow structure} of multi-agent training is similar to that of single-agent training---as long as all entities are being trained with the same algorithm and compatible hyperparameters. However, problems arise when the training of any of the agents in the environment must be customized. For example, in a two-agent environment, one agent may need to be optimized at a higher frequency (i.e., with a smaller batch size). This fundamentally alters the training dataflow---there are now two iterative loops executing at different frequencies. Furthermore, if these agents are trained with entirely different algorithms, there is a need to compose two different distributed dataflows. \textbf{Model-Based and Meta-Learning Algorithms.} Model-based algorithms seek to learn the transition dynamics of the environment to improve the sample efficiency of training. This can be thought of as adding a supervised training step on top of standard distributed RL, where an ensemble of one or more dynamics models is trained from environment-generated data. Handling the data routing, replay, optimization, and stats collection for these models naturally adds complexity to the distributed dataflow graph, ``breaking the mold'' of standard model-free RL algorithms and making them hard to implement in low-level systems.
Using RLlib Flow\xspace, we have implemented two state-of-the-art model-based algorithms: MB-MPO \cite{mbmpo} and Dreamer \cite{hafner2020dream}. \subsection{A Case for a Higher Level Programming Model} Given that existing distributed RL algorithms are already implementable using low-level actor and RPC primitives, it is worth questioning the value of defining a higher-level computation model. Our experience is that RL is more like data analytics than supervised learning. Advanced users want to tweak or add various distributed components (i.e., they need to program), and there is no ``one size fits all'' abstraction (i.e., nothing like the Estimator interface from supervised learning). We believe that, beyond the ability to more concisely and cleanly capture \textit{single-agent} RL algorithms, the computational needs of more advanced RL training patterns motivate higher-level programming models like RLlib Flow\xspace. \section{Introduction} \label{introduction} The past few years have seen the rise of deep reinforcement learning (RL) as a new, powerful optimization method for solving sequential decision making problems. As with deep supervised learning, researchers and practitioners frequently leverage parallel computation, which has led to the development of numerous distributed RL algorithms and systems as the field rapidly evolves. However, despite the high level of abstraction at which RL algorithms are defined (i.e., as a couple dozen lines of update equations), their implementations have remained quite low level (i.e., at the level of message passing). This is particularly true for \textit{distributed} RL algorithms, which are typically implemented directly on low-level message passing systems or actor frameworks \cite{hewitt1973session}. Libraries such as Acme \cite{hoffman2020acme}, RLgraph \cite{Schaarschmidt2019rlgraph}, RLlib \cite{liang2018rllib}, and Coach \cite{coach} provide unified abstractions for defining single-agent RL algorithms, but their user-facing APIs only allow algorithms to execute within the bounds of predefined distributed execution patterns or ``templates''. While the aforementioned libraries have been highly successful at replicating a large number of novel RL algorithms introduced over the years, showing the generality of their underlying actor- or graph-based computation models, the needs of many researchers and practitioners are often not met by their abstractions. We have observed this firsthand from users of open source RL libraries: First, RL practitioners are typically not systems engineers. They are not well versed in code that mixes together the logical dataflow of the program and system concerns such as performance and bounding memory usage. This leads to a high barrier to entry for most RL users wishing to experiment with or debug existing distributed RL algorithms, or to author new distributed RL approaches. Second, even when an RL practitioner is happy with a particular algorithm, they may wish to \textit{customize} it in various ways. This is especially important given the diversity of RL tasks (e.g., single-agent, multi-agent, meta-learning). While many customizations within common RL environments can be anticipated and made available as configuration options (e.g., degree of parallelism, batch size), it is difficult for a library author to provide enough options to cover less common tasks that necessarily alter the distributed pattern of the algorithm (e.g., interleaved training of different distributed algorithms, different replay strategies).
\input{figTex/teasor-arch} \input{figTex/libraries} Our experience is that when considering the needs of users for novel RL applications and approaches, RL development requires a significant degree of programming flexibility. Advanced users want to tweak or add various distributed components (i.e., they need to write programs). In contrast to supervised learning, it is more difficult to provide a fixed set of abstractions for scaling RL training. As a result, it is very common for RL researchers or practitioners to eschew existing infrastructure, either sticking to non-parallel approaches, which are inherently easier to understand and customize \cite{dopamine, baselines}, or writing their own distributed framework that fits their needs. The large number of RL frameworks in existence today is evidence of this, especially considering how many of these frameworks aim to be ``simpler'' versions of other frameworks. In this paper, we re-examine the challenges posed by distributed RL in the light of these user requirements, drawing inspiration from prior work in the field of data processing and distributed dataflow. To meet these challenges, we propose RLlib Flow\xspace, a hybrid actor-dataflow model for distributed RL. Like streaming data systems, RLlib Flow\xspace provides a small set of operator-like primitives that can be composed to express distributed RL algorithms. Unlike data processing systems, RLlib Flow\xspace explicitly exposes references to actor processes participating in the dataflow, permitting limited message passing between them in order to more simply meet the requirements of RL algorithms. The interaction of dataflow and actor messages is managed via special sequencing and concurrency operators. The contributions of our paper are as follows: \begin{enumerate}[leftmargin=*] \item We examine the needs of distributed RL algorithms and RL practitioners from a dataflow perspective, identifying key challenges (Section \ref{sec:challenges} and \ref{sec:streaming}). \item We propose RLlib Flow\xspace, a hybrid actor-dataflow programming model that can simply and efficiently express distributed RL algorithms\revise{, and that enables compositions of multi-agent algorithms that were not previously possible for end users without writing low-level systems code} (Section \ref{sec:dataflow} and \ref{sec:implementation}). \item We port all the algorithms of a production RL library (RLlib\xspace) to RLlib Flow\xspace, providing 2-9$\times$ savings in distributed execution code, compare its performance with the original implementation, and show performance benefits over systems such as Spark Streaming (Section \ref{sec:evaluation}). \end{enumerate} \section{A Dataflow Programming Model for Distributed RL} \label{sec:dataflow} Here we formally define the RLlib Flow\xspace hybrid actor-dataflow programming model. RLlib Flow\xspace consists of a set of dataflow operators that produce and consume \textit{distributed iterators} \cite{volcano}. These distributed iterators can represent parallel streams of data items \texttt{T} sharded across many actors (\texttt{ParIter[T]}), or a single sequential stream of items (\texttt{Iter[T]}). It is important to note that these iterators are \textit{lazy}: they do not execute computation or produce items unless requested. This means that the entire RLlib Flow\xspace execution graph is driven by taking items from the output operator.
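To illustrate the laziness property, consider the following self-contained Python sketch (a toy model, not RLlib Flow\xspace's actual implementation): constructing the graph performs no work, and pulling items from the output iterator drives the entire pipeline.

\begin{lstlisting}
import itertools
from typing import Iterator

def rollouts() -> Iterator[str]:
    # Stand-in source: yields experience batches on demand (lazily).
    for step in itertools.count(1):
        print(f"collecting batch {step}")  # side effect shows when work happens
        yield f"batch-{step}"

def compute_gradients(batches: Iterator[str]) -> Iterator[str]:
    # Stand-in transformation; also lazy, nothing runs at construction time.
    return (f"grad({b})" for b in batches)

graph = compute_gradients(rollouts())  # builds the dataflow; no batch collected yet
for grad in itertools.islice(graph, 2):  # taking items drives the whole graph
    print(grad)
\end{lstlisting}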
\begin{figure}[h] \centering \begin{minipage}[b]{.48\linewidth} {\scriptsize \begin{lstlisting} create(Seq[SourceActor[T]]) $\rightarrow$ ParIter[T] send_msg(dest: Actor, msg: Any) $\rightarrow$ Reply \end{lstlisting} } \vspace{-4pt} \centering \includegraphics[width=0.8\linewidth]{figures/creation.pdf} \caption{Creation and Message Passing} \label{fig:creation} \end{minipage} ~ \begin{minipage}[b]{.48\linewidth} {\scriptsize \begin{lstlisting} for_each(ParIter[T], T => U) $\rightarrow$ ParIter[U] for_each(Iter[T], T => U) $\rightarrow$ Iter[U] \end{lstlisting} } \centering \includegraphics[width=0.92\linewidth]{figures/transformation.pdf} \vspace{-4pt} \caption{Transformation} \label{fig:transformation} \end{minipage} \end{figure} \textbf{Creation and Message Passing}: RLlib Flow\xspace iterators are always created from an existing set of actor processes. In \fig{a3c}, the iterator is created from a set of rollout workers that produce experience batches given their current policy. Also, any operator may send a message to any source actor (i.e., a rollout worker or replay buffer) during its execution. In the A3C example, the update-weights operation is one use of this facility. The ordering guarantees of these messages with respect to dataflow steps depend on the barrier semantics provided by \textit{sequencing operators}. The sender may optionally block and await the reply of sent messages. We show the operators in~\fig{creation}. \textbf{Transformation}: As in any data processing system, the basic operation of data transformation is supported. Both parallel and sequential iterators can be transformed with the \texttt{for\_each} operator. The transformation function can be stateful (i.e., in Python it can be a callable function class that holds state in class members, and in the case of sequential operators it can reference local variables via closure capture). In the A3C example, \texttt{for\_each} is used to compute gradients for each batch of experiences, which depends on the current policy state of the source actor. In the case of the \texttt{ComputeGradients} step, this state is available in the local process memory of the rollout worker, and is accessible because RLlib Flow\xspace schedules the execution of parallel operations onto the source actors. We show the operator in~\fig{transformation}. \begin{figure}[h] \begin{minipage}{.49\linewidth} {\scriptsize \begin{lstlisting} gather_async(ParIter[T], num_async: Int) $\rightarrow$ Iter[T] gather_sync(ParIter[T]) $\rightarrow$ Iter[List[T]] next(Iter[T]) $\rightarrow$ T \end{lstlisting} } \centering \includegraphics[width=0.8\linewidth]{figures/gather.pdf} \caption{Sequencing} \label{fig:seq} \end{minipage} \begin{minipage}{.49\linewidth} {\scriptsize \begin{lstlisting} split(Iter[T]) $\rightarrow$ (Iter[T], Iter[T]) union(List[Iter[T]], weights: List[float]) $\rightarrow$ Iter[T] union_async(List[Iter[T]]) $\rightarrow$ Iter[T] \end{lstlisting} } \centering \includegraphics[width=0.98\linewidth]{figures/union.pdf} \caption{Concurrency} \label{fig:concurrency} \end{minipage} \end{figure} \textbf{Sequencing}: To consume a parallel iterator, its items have to be serialized into some sequential order. This is the role of sequencing operators. Once converted into a sequential iterator, \texttt{next} can be called on the iterator to fetch a concrete item. The \texttt{gather\_async} operator is used in A3C, and gathers computed gradients as soon as they are available, for application to a central policy.
For a deterministic variation, we could instead have used \texttt{gather\_sync}, which waits for one gradient from each shard of the iterator before returning. The sync gather operator also has \textit{barrier semantics}: upstream operators connected by synchronous dependencies (black arrows) are fully halted between item fetches. This allows the source actors to be updated prior to the next item fetch. Barrier semantics do not apply across asynchronous dependencies, allowing synchronous and asynchronous dataflow fragments (separated by pink arrows in~\fig{seq}) to be mixed. \textbf{Concurrency}: Complex algorithms may involve multiple concurrently executing dataflow fragments. Concurrency (\texttt{union}) operators, shown in~\fig{concurrency}, govern how these concurrent iterators relate to each other. For example, one may wish two iterators to execute sequentially in a round-robin manner, to execute independently in parallel, or to rate-limit progress to a fixed ratio \cite{hoffman2020acme}. Additionally, one might wish to duplicate (\texttt{split}) an iterator, in which case buffers are automatically inserted to retain items until fully consumed. In this case, the RLlib Flow\xspace scheduler tries to bound memory usage by prioritizing the consumer that is falling behind. \section{Implementation} \label{sec:implementation} We implemented RLlib Flow\xspace on the Ray distributed actor framework \cite{moritz2018ray} as two separate modules: a general-purpose parallel iterator library (1241 lines of code) and a collection of RL-specific dataflow operators (1118 lines of code) (\fig{arch}). We then ported the full suite of 20+ RL algorithms in RLlib\xspace to RLlib Flow\xspace, replacing the original implementations built directly on top of low-level actor and RPC primitives. Only the portions of code in RLlib\xspace related to distributed execution were changed (the exact same numerical computations are run in our port), which allows us to fairly evaluate against it as a baseline. In this section we walk through two simple examples to illustrate RLlib Flow\xspace; a MAML case study can be found in~\sect{more_implementations}. \subsection{Asynchronous Optimization in RLlib Flow\xspace vs RLlib\xspace} As previously seen in \fig{a3c}, A3C is straightforward to express in RLlib Flow\xspace. \code{a3c-code} shows pseudocode for A3C in RLlib Flow\xspace (11 lines), which we compare to a simplified version of the RLlib\xspace implementation (originally 87 lines). RLlib Flow\xspace hides the low-level worker management and data communication behind its dataflow operators, providing more readable and flexible code. A more detailed comparison of the RLlib Flow\xspace and RLlib\xspace implementations can be found in \sect{code-comparison}. \input{figTex/a3c-compare} \subsection{Ape-X Prioritized Experience Replay in RLlib Flow\xspace} Ape-X \cite{apex} (\fig{apex}) is a high-throughput variation of DQN. It is notable since it involves multiple concurrent sub-flows (experience storage, experience replay), sets of actors (rollout actors, replay actors), and actor messages (updating model weights, updating replay buffer priorities). The sub-flows (\texttt{store\_op}, \texttt{replay\_op}) can be composed in RLlib Flow\xspace using the \texttt{Union} operator (\code{apex-code}); a toy sketch of the union semantics follows.
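As a minimal, self-contained illustration of round-robin union semantics (a toy Python sketch of what \texttt{Union} does conceptually, not RLlib Flow\xspace's actual implementation), consider two infinite sub-flows interleaved in lockstep:

\begin{lstlisting}
import itertools
from typing import Iterator

def union(*iters: Iterator[str]) -> Iterator[str]:
    # Round-robin over concurrent sub-flows (toy stand-in for Union).
    for items in zip(*iters):  # lockstep; a real union could also be async
        yield from items

# Two toy sub-flows standing in for store_op and replay_op.
store_op = (f"stored-batch-{i}" for i in itertools.count())
replay_op = (f"replayed-batch-{i}" for i in itertools.count())

train_op = union(store_op, replay_op)
for item in itertools.islice(train_op, 4):
    print(item)  # stored-batch-0, replayed-batch-0, stored-batch-1, ...
\end{lstlisting}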
\revise{The complete workflow can thus be implemented in a few lines, as shown in~\code{apex-code}.} \input{figTex/apex-dataflow-code} \subsection{Composing DQN and PPO in Multi-Agent Training} \label{sec:twotrainer} Multi-agent training can involve the composition of different training algorithms (e.g., PPO and DQN). \fig{twotrainer} shows the combined dataflow for an experiment that uses DQN to train certain policies in an environment and PPO to train others. \revise{The code can be found in~\code{twotrain-code}.} In an actor- or RPC-based programming model, this type of composition is difficult because dataflow and control flow logic are intermixed. However, it is easy to express in RLlib Flow\xspace using the \texttt{Union} operator \revise{(\fig{concurrency})}. In~\code{sub-flow}, we show the implementation of the two sub-flows, \texttt{ppo\_plan} and \texttt{dqn\_plan}, used in the multi-agent training (\code{twotrain-code}). \input{figTex/twotrainer-dataflow-code} \input{figTex/subflow-code} \section{Reinforcement Learning vs Data Streaming} \label{sec:streaming} The key observation behind RLlib Flow\xspace is that the dataflow graphs of RL algorithms are quite similar to those of data streaming applications. Indeed, RL algorithms can be captured in general-purpose dataflow programming models. However, due to several characteristics, they are not a perfect fit, even for dataflow programming models that support iterative computation. In this section we examine the dataflow of the A3C algorithm (Figure \ref{fig:a3c}) to compare and contrast RL with streaming dataflow. A3C starts with (1) parallel rollouts that collect experiences across many workers. Policy gradients are computed in parallel based on the rollouts in step (2). In step (3), the gradients are asynchronously gathered and applied to a central model, which is then used to update rollout worker weights. Importantly, each box or \textit{operator} in this dataflow may be \textit{stateful} (e.g., \texttt{ParallelRollouts} holds environment state as well as the current policy snapshot). \input{figTex/a3c} Similar to data processing topologies, A3C applies a transformation to a data stream (of rollouts) in parallel (to compute gradients). This is denoted by the black arrow between (1) and (2). There is also a non-parallel transformation to produce metrics from the computation, denoted by the black arrow between (3) and (4). However, zooming out to look at the entire dataflow graph, a few differences emerge: \textbf{Asynchronous Dependencies}: RL algorithms often leverage asynchronous computation to reduce update latencies and eliminate stragglers \cite{mnih2016asynchronous}. In RLlib Flow\xspace, we represent these with a pink arrow between a parallel and a sequential iterator. This means items will be fetched into the sequential iterator as soon as they are available, instead of in a deterministic ordering. The level of asynchrony can be configured to increase pipeline parallelism. \textbf{Message Passing}: RL algorithms, like all iterative algorithms, need to update upstream operator state during execution (e.g., update policy weights). Unlike other iterative algorithms, these updates may be fine-grained and asynchronous (i.e., update the parameters of a particular worker), as well as coarse-grained (i.e., update all workers at once after a global barrier). RLlib Flow\xspace allows method calls (messages) to be sent to any actor in the dataflow.
Ordering of messages in RLlib Flow\xspace with respect to dataflow steps is guaranteed if synchronous data dependencies (black arrows) fully connect the sender to the receiver, providing \textit{barrier semantics}. \textbf{Consistency and Durability}: Unlike data streaming, which has strict requirements such as exactly-once processing of data~\cite{zaharia_spark_stream_2013}, RL has less strict consistency and durability requirements. This is because, on a fault, the entire computation can be restarted from the last checkpoint with minimal loss of work. Message or data loss can generally be tolerated without adverse effect on training. Individual operators can be restarted on failure, discarding any temporary state. This motivates a programming model that minimizes overhead (e.g., avoids state serialization and logging costs). \section{Related Work} \label{Related Work} \textbf{Reinforcement Learning Systems}: RLlib Flow\xspace is implemented concretely in RLlib\xspace; however, we hope it can provide inspiration for a new generation of general-purpose RL systems. RL libraries available today range from single-threaded implementations \cite{duan2016benchmarking, baselines, castro18dopamine, tensorforce} to distributed ones \cite{Schaarschmidt2019rlgraph, loon2019slm, coach, liang2018rllib, falcon2019pytorch, hoffman2020acme}. These libraries often focus on providing common frameworks for the numerical concerns of RL algorithms (e.g., loss, exploration, and optimization steps). However, the aforementioned libraries rely on \textit{predefined} distributed execution patterns. For example, for the Ape-X dataflow in~\fig{apex}, RLlib defines this with a fixed ``AsyncReplayOptimizer''\footnote{\scriptsize\url{https://docs.ray.io/en/releases-0.7.7/\_modules/ray/rllib/optimizers/async\_replay\_optimizer.html}} class that implements the topology\revise{, intermixing the dataflow and the control flow}; RLGraph uses an adapted implementation\footnote{\scriptsize\url{https://github.com/rlgraph/rlgraph/blob/master/rlgraph/execution/ray/apex/apex\_executor.py}} from RLlib as part of their Ape-X algorithm meta-graph, while Coach does not support Ape-X\footnote{\scriptsize\url{https://github.com/IntelLabs/coach}}. These execution patterns are predefined because they are low-level, complex to implement, and cannot be modified using high-level end-user APIs. In contrast, RLlib Flow\xspace proposes a high-level distributed programming model for RL algorithm implementation, exposing this pattern in far fewer lines of code (\code{apex-code}), and allowing free composition of these patterns by users (\code{twotrain-code}). The ideas from RLlib Flow\xspace can be integrated with any RL library to enable flexibility in distributed execution. \input{figTex/rllib-performance-spark} \textbf{Distributed Computation Models}: RLlib Flow\xspace draws inspiration from both streaming dataflow and actor-based programming models. Popular open source implementations of streaming dataflow, including Apache Storm~\cite{toshniwal2014storm}, Apache Flink~\cite{Carbone2015ApacheFS}, and Apache Spark~\cite{Zaharia_apache_spark,zaharia_spark_stream_2013}, transparently distribute data to multiple processors in the background, hiding the scheduling and message passing for distribution from programmers. In~\apdx{spark}, we show how distributed PPO can be implemented in Apache Spark. Apache Flink's \texttt{Delta Iterate} operator can similarly support synchronous RL algorithms.
However, data processing frameworks have limited support for asynchronous iteration. The Volcano model~\cite{volcano}, commonly used for distributed data processing, pioneered the parallel iterator abstraction. RLlib Flow\xspace builds on the Volcano model to encapsulate not only parallelism but also the synchronization requirements between concurrent dataflow fragments, while enabling users to leverage actor message passing. Naiad~\cite{murray2013naiad} is a low-level distributed dataflow system that supports cyclic execution graphs and message passing. It is designed as a system for implementing higher-level programming models. In principle, it is possible to implement the RLlib Flow\xspace model in Naiad. Transformation operators can be placed on the stateful vertices of the execution graph. The message passing and concurrency (\texttt{Union}) operators can be represented by calling the \texttt{\textsc{SendBy}} and \texttt{\textsc{OnRecv}} interfaces on senders and receivers, which support asynchronous execution. RLlib Flow\xspace's barrier semantics can be expressed with \texttt{\textsc{OnNotify}} and \texttt{\textsc{NotifyAt}}, where the former indicates that all the required messages are ready, and the latter blocks execution until the notification has been received. We implemented RLlib Flow\xspace on Ray instead of Naiad for practical reasons (e.g., Python support). \section{RL in Spark Streaming} \label{append:spark} \textbf{PPO Implementation.} In \code{spark-code}, we show the high-level pseudocode of our port of the PPO algorithm to Spark Streaming. Similar to our port of RLlib\xspace to RLlib Flow\xspace, we only changed the parts of the PPO algorithm in RLlib\xspace that affect distributed execution, keeping the core algorithm implementation (e.g., the numerical definition of the policy loss and the neural networks in TensorFlow) as similar as possible for a fair comparison. We made a best attempt at working around the limitations listed below (e.g., using a \texttt{binaryRecordsStream} input source to efficiently handle looping, defining efficient serializers for neural network state, and adjusting the microbatching to emulate the RLlib\xspace configuration). \input{figTex/spark-code} \textbf{Experiment Setup.} We compare the performance of the two implementations. In the experiment, we adopt the PPO algorithm for the CartPole-v0 environment with a fixed sampling batch size $B$ of 100K. Each worker collects $B/(\text{\# workers})$ samples per iteration, and for simplicity, the learner updates the model on the CPU using minibatches of 128 samples drawn from the sampled batch. Experiments here are conducted on AWS m4.10xlarge instances. \textbf{Data Framework Limitations}: Spark Streaming is a data streaming framework designed for general-purpose data processing. We note several challenges we encountered when attempting to port RL algorithms to Spark Streaming: \setlist{nolistsep} \begin{enumerate}[leftmargin=*,noitemsep] \item Lack of support for asynchronous operations. Data processing systems like Spark Streaming do not support the asynchronous or non-deterministic operations that are needed for asynchronous RL algorithms. \item Poor support for looping operations. While many dataflow models in principle support iterative algorithms, we found it necessary to work around looping constructs due to the lack of language APIs (i.e., no Python API). \item Lack of support for non-serializable state. In the dataflow model, there is no way to persist arbitrary state (e.g., environments, or neural network models on the GPU).
While necessary for fault tolerance, the requirement for serializability impacts the performance and feasibility of many RL workloads. \item Lack of control over batching. We found that certain constructs, such as the data batch size for on-policy algorithms, are difficult to control in traditional streaming frameworks, since they are not part of the relational data processing model. \end{enumerate} For a single machine (the left three pairs), the breakdown of the running time indicates that initialization and I/O overheads slow down the training process for Spark compared to RLlib Flow\xspace. The initialization overhead stems from the fact that Spark transformation functions do not persist variables: we have to serialize both the sampling and training states and re-initialize the variables in the next iteration to maintain a continuously running process. The I/O overhead, on the other hand, comes from looping the states back to the input. As an event-driven streaming system, the stream engine detects changes to the saved states in the source directory and starts new stream processing; the resulting disk I/O leads to high overheads compared to RLlib Flow\xspace. For the distributed setting (the right three pairs), the improvement of RLlib Flow\xspace over Spark becomes more significant, up to 2.9$\times$. As the number of workers scales up, the sampling time decreases for both dataflow models. Still, the initialization and I/O overheads stay unchanged, leading to poorer scalability for Spark. \section{Implementation Examples} \label{sec:more_implementations} \subsection{Example: MAML} \label{sec:maml} \code{maml-code} concisely expresses MAML's dataflow (also shown in Figure \ref{fig:maml}) \cite{finn2017modelagnostic}. The MAML dataflow involves nested optimization loops; workers collect pre-adaptation data, perform inner adaptation (i.e., individual optimization calls to an ensemble of models spread across the workers), and collect post-adaptation data. Once inner adaptation is complete, the accumulated data is batched together to compute the meta-update step, which is broadcast to all workers. \input{figTex/maml-dataflow-code} \section{Comparison of Implementations in RLlib Flow\xspace and RLlib} \label{sec:code-comparison} In this section we report a detailed code comparison between RLlib Flow\xspace and the original RLlib\xspace. \lst{a3c-detailed} and \lst{a3c-detailed-origin} show the detailed implementations of A3C in RLlib Flow\xspace and RLlib\xspace, respectively. Note that the detailed implementation in~\lst{a3c-detailed} is exactly the same as shown before in~\code{a3c-code}, while the RLlib\xspace implementation is much more complicated due to the intermixing of control flow and dataflow. In~\lst{apex-detailed} and~\lst{apex-detailed-origin}, we also show the detailed implementations of the Ape-X algorithm in RLlib Flow\xspace and RLlib\xspace, respectively, which again illustrate the simplicity, readability, and flexibility of RLlib Flow\xspace. \input{figTex/a3c-detailed-compare} \input{figTex/apex-detailed-compare}
\section{Introduction} While Sagittarius A* (Sgr~A*) dominates the gravitational dynamics in the central parsecs of the Galaxy, many other components are required to explain the wealth of detailed observations of this busy region. Even as we focus on the inner 3 parsecs, it is clear that other classes of object contribute to the overall radiative emission from the Galactic center (for a recent review, see \citealp{MF01}; see also \citealp{Ma02}). For example, the medium within which Sgr~A* is embedded---bounded by the circumnuclear disk (CND, with inner radius $\sim 2$--$3$~parsecs)---has a temperature $\sim 1.3$~keV and emits a {\it Chandra}-detectable glow of diffuse X-rays \citep{Bag03}. \citet{Roc04} have shown that this emission may be understood as the result of mutual interactions between the winds of Wolf-Rayet and O/B stars within $\sim 1$~parsec of the supermassive black hole. The CND is perhaps the shredded remains of a giant molecular cloud that passed by Sgr~A*. The cavity within the inner $\sim 2$--$3$~parsecs of this structure may itself have been created by the ablative influence of the cumulative wind outflow, which has by now produced a bubble of hot, ionized plasma. To make matters even more complex, observations of the supernova (SN) remnant Sgr~A East suggest that its broadband emission arises from shock heating in a recent ($<10,000$-year-old) supernova \citep[or other explosive outburst; see][]{Me89} originating within $\sim 3$~parsecs of the black hole. Any comprehensive model of this region must therefore include the effects of all these components: the supermassive black hole, the Wolf-Rayet and O/B winds, the dense CND and the expanding supernova remnant, which we now see predominantly as a radio-emitting shell. Although interacting winds can explain the bulk of the diffuse X-ray flux from the inner $\sim 2$--$3$~parsecs, several other features seen in the {\it Chandra} X-ray image are not as easy to explain without the influence of some other interaction. In particular, a well-defined ridge of X-ray emission just outside the central region, to the NE of Sgr~A* (see Figure~\ref{fig:chandra}), may be evidence of an ongoing collision between the SN ejecta and the cumulative Wolf-Rayet and O/B winds emanating from within the cavity \citep{Ma02}. In this {\it Letter}, we model the expansion of the supernova, focusing on the effect this explosion has on the central few parsecs surrounding Sgr~A*. We then directly compare the X-ray emission arising from the interaction zone with the actual {\it Chandra} image. Interestingly, using the well-studied wind conditions at the Galactic center, we may also be able to place tighter constraints on the supernova explosion itself---both the released energy and the age of the remnant. With this knowledge, we can address several outstanding issues pertaining to the influence of this explosion on the morphology of the Galactic center: Did the supernova shock clear out the region surrounding the black hole, effectively shutting down what would otherwise have been a high accretion rate onto the black hole? Could the supernova have caused a brief increase in the accretion rate onto Sgr~A*, producing a spike in X-ray emissivity that irradiated the X-ray-fluorescing Sgr~B2 and other nearby molecular clouds some 300 years ago \citep[see, e.g.,][]{Sun98,From01}? 
\section{General Physical Principles} Our simulation uses the SNSPH smoothed particle hydrodynamics code \citep*{FRW05} to follow the supernova explosion as it crosses the Galactic center. The domain of solution is a cube, $6$ pc on a side, centered on the black hole. Particles that move beyond this domain, or within $1.9\times10^{17}$~cm of the origin (effectively onto the black hole), are removed to simulate outflow (or inflow) conditions. Our initial (pre-explosion) conditions are taken from the structure of the wind-swept medium at the end of the simulation by \citet{Roc04}. The initial particle distribution is constructed from a $\sim 1$ million particle supernova explosion placed 2~pc due east (in right ascension) of the central supermassive black hole within a $\sim 6$ million particle wind-filled Galactic center region. We assume that the density structure within this domain of solution at the time of the supernova explosion is dominated by matter lost by the Wolf-Rayet and O/B stars, plus the dense CND surrounding the central black hole. The CND is mimicked by 200 spherical clumps (totaling $10^4\,M_\odot$) in a torus with a low filling factor surrounding the black hole. The winds from these stars (which we assume have not changed noticeably in the past $10,000$ years) have blown a bubble in the Galactic center that probably extends to the edge of the 50~km~s$^{-1}$ molecular cloud \citep{Me89}. Note, however, that our initial conditions do not include the initial molecular cloud blown out by the stellar winds. There is evidence that the supernova shock has reached the boundary between the wind-blown bubble and this cloud \citep{Yusef99}. We cannot address these effects with the current set of simulations. We also do not include the effect of mass loss from the supernova progenitor itself. However, the density structure near the X-ray ridge and the central black hole is dominated by the $\sim 25$ stars we do include \citep{Roc04}, and not by any outer stars or the surrounding molecular cloud. The structure of the supernova ejecta is set using a spherically symmetric model from \citet*{Hun04}. This $15\,M_\odot$ star is exploded with an energy of $1.5\times10^{51}$\,erg; these properties are typical of a supernova explosion, both in composition and energy. We place the explosion into our domain of solution after the shock has moved out for 1 year, at which point the explosion material is still within $0.02$~pc of the supernova site. \section{Calculations and Results} The wind-swept initial conditions of the Galactic center lead to an aspherical progression of the supernova shock. Plowing through diffuse regions with density $\sim 1$~particle~cm$^{-3}$, across the dense CND, and through central regions with densities $>10^4$~particles~cm$^{-3}$, the supernova shock is far from symmetric. The ejecta flow around the dense regions, taking the path of least resistance and producing shocks where they collide with and are decelerated by the dense material. Figure~\ref{fig:2ddens} shows the density profile and position of a set of tracer supernova particles 650 years after the launch of the supernova explosion. This is roughly the time of deepest penetration of the supernova ejecta into the region surrounding the supermassive black hole. The actual supernova shock has now moved beyond the Galactic center and has already passed beyond the southern and eastern edges of our simulation grid.
The ejecta do not sweep through the central region, nor do they significantly alter the density of the inner 0.2\,pc surrounding the black hole. A more detailed discussion of the effects of the supernova shock on the central region, the implications for the black hole accretion rate, and the possible excitation of Sgr~B2 will be presented in \citet{Fry05}. \citet{Me89} estimated the age and energy of the supernova remnant by assuming the supernova was plowing through a $10^4$~particle~cm$^{-3}$ molecular cloud. They calculated a supernova remnant age of $7,500$ years and, to fit the observed shock temperature, they required an explosion energy in excess of $10^{52}$\,erg. However, using the lower mean density in our wind-swept initial conditions allows us to account for the observed remnant characteristics with a more typical supernova explosion ($\sim 10^{51}$\,erg). Our simulation also suggests that the supernova remnant is younger than the estimate of \citet{Me89} \citep[see][for more details on the fitting]{Fry05}. From the position of the shock, we can set a crude lower limit on the remnant's age of $\sim 1,000$ years, but a more precise answer may be determined by comparing our predicted X-ray ridge properties to the observations shown in Figure~\ref{fig:chandra}. The column-integrated X-ray emission from our models is shown in Figure~\ref{fig:xray} in a series of temporal snapshots. For very energetic shocks, the high compression ratio at the point of impact can lead to significant particle acceleration and consequent nonthermal emission, but by this time in the interaction, the dominant $2$--$10$ keV emission mechanism is expected to be optically thin bremsstrahlung \citep[see][for details]{Roc04,Fry05}. The supernova shock reaches the dense inner $0.2$--$0.3$~pc of the Galactic center in 160 years and sweeps around this area after 650 years. The X-ray emission is highest where the shock is strongest (a function of both the shock velocity and the density of the impacted region). By $1,740$ years after the explosion, the supernova shock has swept beyond our simulation grid. However, over 95\% of the mass in the supernova ejecta is moving at less than half the speed of the supernova shock front. This slow-moving material continues to impinge on the inner $0.2$--$0.3$~pc of the Galaxy. It is the stellar winds pushing against this slow-moving supernova ejecta that produce the X-ray ridge observed today (Fig.~\ref{fig:chandra}). For a direct comparison between our simulation and the properties evident in Figure~\ref{fig:chandra}, we restrict our attention to the region bounded by the $9\arcsec$ and $15\arcsec$ arcs in this image; this appears to be the scene of dominant interaction at the present time. The observed flux is accurately modeled with two components at different temperatures: a $5.6$~keV component with a 2--10~keV flux of $3.92 \times 10^{-13}$~erg~cm$^{-2}$~s$^{-1}$ and a $1$~keV component with a 2--10~keV flux of $5.8 \times 10^{-13}$~erg~cm$^{-2}$~s$^{-1}$; these correspond to 2--10~keV luminosities of $3.0 \times 10^{33}$~erg~s$^{-1}$ and $4.4 \times 10^{33}$~erg~s$^{-1}$, respectively, at a distance of 8~kpc from the Galactic center. In our simulation, both the location of, and the X-ray intensity from, this region vary with time, so we have essentially two important constraints on the comparison between theory and observation.
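As a quick consistency check (our own arithmetic, not part of the original analysis), the quoted fluxes and luminosities are related by the standard conversion $L = 4\pi d^2 F$. At $d = 8$~kpc $\approx 2.47\times10^{22}$~cm, $4\pi d^2 \approx 7.7\times10^{45}$~cm$^2$, so
\begin{equation}
L_{5.6\,{\rm keV}} \approx 7.7\times10^{45}~{\rm cm^2} \times 3.92\times10^{-13}~{\rm erg~cm^{-2}~s^{-1}} \approx 3.0\times10^{33}~{\rm erg~s^{-1}},
\end{equation}
and similarly $L_{1\,{\rm keV}} \approx 4.5\times10^{33}$~erg~s$^{-1}$, consistent with the values quoted above.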
We suspect that the 1~keV component arises from either foreground or background emission; this is supported by the fact that the luminosity-weighted temperature in this region of our simulation is $4.9$~keV after $1,740$~years. From our simulation, we find that the column-integrated 2--10~keV luminosity from the interaction within the swath highlighted in Figure~\ref{fig:chandra} is $3.6 \times 10^{33}$~erg~s$^{-1}$ after $1,619$~years and $2.7 \times 10^{33}$~erg~s$^{-1}$ after $1,740$~years. These values are within 20\% of the observed luminosity of the $5.6$~keV component, and the good match between theory and observation for both the morphology and the X-ray radiance of the ridge therefore provides compelling evidence that $\sim 1,700$ years is a reasonable estimate of the remnant's age. As the velocity of the impinging supernova ejecta decreases, the X-ray ridge moves out and dims. By $2,560$ years, the ridge will have moved out $30\arcsec$ and its flux should then be 3 times lower than its (current) $1,740$-year value. \section{Conclusion} Our simulation of the passage of Sgr~A East across the Galactic center has produced several new insights into the structure of the environment within $\sim 3$~pc of Sgr~A*. The front of the SNR flowed around the Galactic center $\sim 1,100$ years ago (650 years after the supernova explosion). The shock front pushed back the combined winds from the Wolf-Rayet and O/B stars, but did not get within $\sim 0.2$ pc of the accreting supermassive black hole. The collision between the supernova ejecta and the central winds produces a ridge of X-ray-emitting gas $\sim 9\arcsec$--$15\arcsec$ to the NE of the black hole. Comparing our simulations with the observations allows us to estimate the age of the supernova remnant to be $\sim 1,700$ years. We predict that this X-ray ridge will move out and dim with time: in $\sim 800$ years, it should be roughly twice its current distance from Sgr~A*, and a factor $\sim 3$ dimmer. The supernova did not significantly alter the accretion rate onto the black hole; we discuss this in greater detail in an upcoming paper \citep{Fry05}. Our simulations demonstrate how rich in information the X-ray observations of the Galactic center are. Although this region contains several complex flows that are difficult to simulate completely, the extensive body of data contains many features that we can employ to tightly constrain the calculations. As we continue to refine our models, we expect to uncover additional new features of the environment and physical processes at the Galactic center. This work has barely scratched the surface of what we can learn about the Galactic center by carefully studying the gas dynamics in this region. In \citet{Fry05}, we will discuss in more detail the hydrodynamic evolution of the shock and calculate the emission as the shock hits the large molecular cloud to form Sgr~A East. We will also report in detail the metallicity gradients expected within Sgr~A East as a function of time and distance from Sgr~A*, providing yet another observational signature that may be used to better constrain this remnant's age. {\bf Acknowledgments} This work was funded in part under the auspices of the U.S.\ Dept.\ of Energy, supported by its contract W-7405-ENG-36 to Los Alamos National Laboratory and by DOE SciDAC grant DE-FC02-01ER41176. At The University of Arizona, this research was supported by NSF grant AST-0402502, and has made use of NASA's Astrophysics Data System Abstract Service. F. M.
is grateful to the University of Melbourne for its support (through a Sir Thomas Lyle Fellowship and a Miegunyah Fellowship). The simulations were conducted on the Space Simulator at Los Alamos National Laboratory.
\section{Introduction and Theory} \label{sec:intro} Viscosity has attracted remarkable attention at RHIC during the first years of running. In particular, experimental observations of large elliptic flow have pointed to a small shear viscosity and inspired the term ``perfect liquid'' \cite{RHIC_Whitepapers}. In this talk, we review the general theoretical definition of viscosity, then show how five different physical effects can lead to non-zero viscous coefficients (Sec. 2). We focus on bulk viscosity, and show that one can find large, even singular, effects in the neighborhood of $T_c$ (Sec. 3). Although the present study focuses on understanding the behavior and physical explanation of the coefficients, we speculate on the experimental manifestations large viscosities might bring about. In non-viscous hydrodynamics the elements of the stress-energy tensor depend only on the energy density $\epsilon$ and the particle-number densities $\vec{n}$ when viewed in the rest frame of the matter, \begin{equation} \tilde{T}_{ij}^{\rm (non.visc.)}({\bf r},t) = \delta_{ij}P(\epsilon({\bf r},t),\vec{n}({\bf r},t)), \end{equation} where $\epsilon$ and $\vec{n}$ are implicitly functions of ${\bf r}$ and $t$. The tilde denotes that $T_{ij}$ is evaluated in a frame where the collective velocity ${\bf u}({\bf r})=0$. In Navier-Stokes hydrodynamics, viscosity is incorporated by altering $\tilde{T}$ so that it includes terms proportional to the velocity gradients $\partial u_i/\partial r_j$, \begin{equation} \label{eq:NS} \tilde{T}_{ij}^{\rm (N.S.)} = \delta_{ij}P(\epsilon,\vec{n}) +\eta(\epsilon,\vec{n})\left(\frac{\partial u_i}{\partial r_j}+ \frac{\partial u_j}{\partial r_i}-\frac{2}{3}\nabla\cdot{\bf u}\delta_{ij}\right) +B(\epsilon,\vec{n})(\nabla\cdot{\bf u})\delta_{ij}. \end{equation} Here, $\eta$ and $B$ are the shear and bulk viscosities. Since $\nabla\cdot {\bf u}=-(\partial\epsilon/\partial t)/(P+\epsilon)$, the bulk viscosity can be interpreted as a correction to the pressure due to the changing energy density, whereas the shear viscosity describes the asymmetry of $\tilde{T}_{ij}$ due to an anisotropic expansion. In non-viscous hydrodynamics accelerations are proportional to the gradient of the pressure, while in general, accelerations arise from derivatives of the stress-energy tensor, \begin{equation} (\epsilon\delta_{ij}+\tilde{T}_{ij})\frac{\partial u_j}{\partial t}= -\frac{\partial}{\partial x_j}\tilde{T}_{ij}. \end{equation} Thus, the components of the stress-energy tensor can be considered as representatives of the pressure in a given direction, and any reduction/rise of $\tilde{T}_{ij}$ from viscous effects will result in a slowing/acceleration of the expansion in that direction. Viscous coefficients can be expressed in terms of correlations of the stress-energy tensor through Kubo relations. These are derived by considering alterations of $T_{ij}$ due to a perturbation $V$ in linear response theory, \begin{equation} \langle\delta T_{ij}(r=0)\rangle =-(i/\hbar)\int_{r'_0<0} d^4r' \langle[T_{ij}(r=0),V(r')]\rangle,~~~ V(r')=r'_i(\partial_iu_j) T_{0j}(r'), \end{equation} where the perturbation represents the change to the Hamiltonian due to boosting according to a linear velocity gradient.
After integrating by parts and applying the conservation of the stress-energy tensor, $\partial_tT_{i0}=-\partial_jT_{ij}$, one can derive the Kubo relation, \begin{equation} \label{eq:deltij} \delta\langle T_{ij}(r=0)\rangle =(i/\hbar)\int_{r'_0<0} d^4r' r'_0 \langle [\Delta T_{ij}(r=0),\Delta T_{kl}(r')]\rangle \partial_k u_l. \end{equation} Here, $\Delta T_{kl}$ refers to the difference with respect to the average of $T_{kl}$ at $t=-\infty$. For $i\ne j$, symmetries constrain $k$ and $l$ to equal $i$ and $j$, which allows one to read off the shear viscosity from Eq. (\ref{eq:NS}), \begin{eqnarray} \label{eq:kuboshear} \eta&=&(i/\hbar)\int_{r'_0<0} d^4r' r'_0 \langle[\Delta T_{ij}(0),\Delta T_{ij}(r')]\rangle,~~~i\ne j\\ \nonumber &=&\lim_{\omega\rightarrow 0} \frac{-1}{2\omega\hbar}\int d^4r' e^{i\omega t'} \langle[\Delta T_{ij}(0),\Delta T_{ij}(r')]\rangle. \end{eqnarray} By considering the case where $\partial_iu_j=(1/3)\delta_{ij} \nabla\cdot{\bf u}$, one can inspect $T_{ii}$ in Eq. (\ref{eq:deltij}) to find the bulk viscosity, \begin{eqnarray} \label{eq:kuboB} B&=&(i/3\hbar)\sum_j \int_{r'_0<0} d^4r' r'_0 \langle [\Delta T_{ii}(0),\Delta T_{jj}(r')]\rangle\\ \nonumber &=&\lim_{\omega\rightarrow 0} \frac{-1}{6\omega\hbar}\sum_j \int d^4r' e^{i\omega t'} \langle [\Delta T_{ii}(0),\Delta T_{jj}(r')]\rangle. \end{eqnarray} The Kubo relations, Eqs. (\ref{eq:kuboshear}) and (\ref{eq:kuboB}), are fully consistent with quantum mechanics. The classical limit can be obtained by applying the identity \cite{foerster}, \begin{equation} e^{-\beta H}V(t)=e^{i\beta\hbar\partial_t}V(t)e^{-\beta H}, \end{equation} to one of the terms in the commutator in Eqs. (\ref{eq:kuboshear}) or (\ref{eq:kuboB}), then keeping the lowest-order term in $\hbar$, \begin{equation} {\rm Tr}~ e^{-\beta H}[\Delta T_{ij}(0),\Delta T_{kl}(r)] \approx -i\hbar\beta\,{\rm Tr}~ e^{-\beta H}\,\partial_t \left(\Delta T_{ij}(0)\Delta T_{kl}(r)\right), \end{equation} which after an integration by parts gives the classical limit of the Kubo relations, \begin{eqnarray} \eta&\approx&\beta\int_{r'_0<0} d^4r' \langle\Delta T_{ij}(0)\Delta T_{ij}(r')\rangle,~~~i\ne j\\ B&\approx&(\beta/3)\sum_j \int_{r'_0<0} d^4r' \langle \Delta T_{ii}(0)\Delta T_{jj}(r')\rangle. \end{eqnarray} Although the Kubo relations are difficult to interpret physically, they do make it clear that viscosity is related to the size and to the damping of fluctuations of the elements $T_{ij}$. If fluctuations in $T_{ij}$ (at fixed energy) are large, or if they are slow to relax, a large viscosity will ensue. \section{Five Sources of Viscosity} Viscous effects arise whenever the elements of the stress-energy tensor, $T_{ij}$, have difficulty maintaining their equilibrium values in a dynamically changing system, i.e., one with velocity gradients. In this section we briefly review five physical sources of viscosity, the first three of which have already been explained in the literature. \begin{enumerate} \item {\bf Viscosity from non-zero mean free paths}: This is the most commonly understood source of viscosity. It is straightforward to see how a non-zero collision time leads to an anisotropy of $T_{ij}$ by considering the velocity gradient of a Bjorken expansion, $u_z=z/\tau$, i.e., $\partial_zu_z=1/\tau$, $\partial_xu_x=\partial_yu_y=0$. We consider a particle whose momentum is $p'_z(\tau)$ when measured in the frame moving with the collective velocity at its position.
In the absence of collisions, $p'_z$ will fall with $\tau$, since the particle will asymptotically approach a region where its velocity equals the collective velocity, $p'_z(\tau+\delta\tau) =p'_z(\tau)\tau/(\tau+\delta\tau)$. Meanwhile, $p'_x$ and $p'_y$ are frozen. The resulting anisotropy in the stress-energy tensor is easy to derive, and yields the following expression for the shear viscosity \cite{weinberg}, \begin{equation} \eta=(4/3)P\tau_c, \end{equation} where $\tau_c$ is the collision time. It is also easy to see why such an expansion does not yield a bulk viscosity for either ultra-relativistic or non-relativistic gases. In those cases an isotropic expansion scales all three momenta proportionally to $1/\tau$, which maintains thermal equilibrium, and collisions do not play a role. This is not the case when $m\sim T$, or especially if the gas has a mixture of relativistic and non-relativistic particles. \item {\bf Viscosity from non-zero interaction range}: If the range of the interaction between two particles extends a distance $R$, interactions will share energy between particles from regions with different collective energies. A particle at $r=0$, where the collective energy is zero, will share energy with particles whose collective energy is $(1/2)m(R\partial_r u)^2$. For Boltzmann calculations, the viscosity will be proportional to $P R^2/\tau_c$ \cite{Cheng:2001dz}, with the constant of proportionality depending on the scattering kernel. Both bulk and shear terms result from a non-zero interaction range. In Boltzmann calculations, the range of the interaction can approach zero for fixed scattering rates if the over-sampling ratio is allowed to approach infinity. Although this solves causality problems \cite{Kortemeyer:1995di}, it simultaneously eliminates viscous terms arising from finite-range scattering kernels, which might or might not be desirable. This has profound effects on calculations of elliptic flow, which can vary by a factor of 2 depending on the range of the scattering kernel \cite{Cheng:2001dz}. \item {\bf Classical Electric Fields}: Color flux tubes form after the exchange of soft gluons between nucleons passing at high energy, and might also be formed during rapid hadronization. Additionally, longitudinal color electric fields might be created during the pre-thermalized stage of the collision (color-glass condensate). Since these fields tend to align with the velocity gradient, they can be a natural source of shear viscosity. In fact, if the fields are purely longitudinal, the elements of the stress-energy tensor become $T_{zz}=-\epsilon$, $T_{xx}=T_{yy}=\epsilon$. Thus, the transverse pressure becomes three times as stiff as that of a massless gas, $P=\epsilon/3$, which is usually considered a stiff equation of state. The negative longitudinal pressure signifies that the energy within a given unit of rapidity is increasing during the expansion, similar to the negative work associated with stretching a rubber band. \item {\bf Non-equilibrium chemistry}: Chemical abundances cannot keep up with the expansion if the rate at which equilibrium abundances change is not much smaller than the chemical equilibration rate, $1/\tau_{\rm chem}$, \begin{equation} \frac{dN}{dt}=-(1/\tau_{\rm chem})(N-N_{\rm eq}). \end{equation} If the equilibrium number is slowly changing, the abundances will vary from equilibrium by an amount \begin{equation} \delta N=-\tau_{\rm chem}\frac{dN_{\rm eq}}{dt}.
\end{equation} To interpret this departure from equilibrium as a viscosity, one must consider the corresponding change in the pressure, \begin{equation} \delta P=\left.\frac{\partial P}{\partial n}\right|_{{\rm fixed~}\epsilon}\delta n,~~~ \frac{dN_{\rm eq}}{dt}=-\frac{\partial n}{\partial s}s\nabla\cdot{\bf u}, \end{equation} where the second relation exploits the fact that entropy is conserved in a slow expansion. The bulk viscosity is then \begin{equation} B=\tau_{\rm chem}\left.\frac{\partial P}{\partial n}\right|_{{\rm fixed~}\epsilon} \frac{\partial n}{\partial s}s. \end{equation} The bulk viscosity will be large whenever the equilibrium number is rapidly changing, e.g., when the temperature falls below the masses, or when masses rise due to the restoration of chiral symmetry. If the hydrodynamic equations were to treat particle numbers explicitly as currents obeying chemical evolution equations, chemical non-equilibration would not need to be accounted for through viscous terms. \item {\bf Viscosity from dynamic mean fields}: Bosonic mean fields, such as the $\sigma$ field, obey the Klein-Gordon equation. For fluctuations of wave number $k\rightarrow 0$, \begin{eqnarray} \label{eq:kg} \frac{\partial^2}{\partial t^2}\Delta\sigma(t) &=&-(m_\sigma(T)^2+k^2)\Delta\sigma(t)-\Gamma\frac{\partial}{\partial t} \Delta\sigma(t),\\ \nonumber \Delta\sigma(t)&\equiv&\sigma(t)-\sigma_{\rm eq}(\epsilon), \end{eqnarray} where $\sigma_{\rm eq}(\epsilon)$ is the equilibrium value of the condensate, which is non-zero for $k=0$. The value of $\sigma$ is determined by minimizing the free energy, while the mass is related to the curvature of the free energy near the minimum, \begin{equation} \frac{\partial}{\partial \sigma}F(\sigma,T)=0,~~~ m_{\sigma,{\rm eq}}^2(T) =\frac{\partial^2}{\partial \sigma^2}F(\sigma_{\rm eq},T). \end{equation} One can see the equivalence of Eq. (\ref{eq:kg}) with the differential equation for the damped harmonic oscillator after performing the substitutions \begin{equation} k_{\rm h.o.}/m_{\rm h.o.}\rightarrow m^2_{\sigma},~~~~ \gamma_{\rm h.o.}/m_{\rm h.o.} \rightarrow \Gamma, \end{equation} where $\gamma_{\rm h.o.}$ is the drag coefficient for the harmonic oscillator, $k_{\rm h.o.}$ is the spring constant and $m_{\rm h.o.}$ is the particle mass. For the harmonic oscillator, the mean value of the position $x$ is altered if the equilibrium position is moving. The size of the change is set by the drag force $\gamma_{\rm h.o.}\,dx_{\rm eq}/dt$ being equal and opposite to the restoring force $k_{\rm h.o.}\,\delta x$. The corresponding result can be derived for the damped Klein-Gordon equation, \begin{equation} \delta x=-\frac{\gamma_{\rm h.o.}}{k_{\rm h.o.}}\frac{dx_{\rm eq}}{dt},~~~~ \delta \sigma=-\frac{\Gamma}{m^2_{\sigma}(T)}\frac{d\sigma_{\rm eq}}{dt}, \end{equation} where $\delta\sigma$ is the mean offset from the equilibrium value. Thus, $m^2_\sigma$ determines the restoring force, while $\Gamma$ describes the drag. Finite-size effects could be estimated by replacing $m^2_\sigma$ with $m^2_\sigma+k^2$, where $k$ would be set by the finite size, $k\sim 1/L$. The resulting bulk viscosity is \begin{equation} B=\left.\frac{\partial P}{\partial \sigma}\right|_{{\rm fixed~}\epsilon} \frac{\Gamma}{m_\sigma^2}\frac{\partial \sigma_{\rm eq}}{\partial s}s. \end{equation} The bulk viscosity is then large for energy densities where $\sigma$ is rapidly varying, or when $m_\sigma$ is small, i.e., in the critical region.
\end{enumerate} \section{Bulk Viscosity in the Linear Sigma Model} The last two sources of viscosity described in the previous section can be of special importance during the chiral transition. First, since masses change suddenly near $T_c$, chemical abundances should easily stray from equilibrium. Second, the mean field, which is zero above $T_c$, changes suddenly, and given the small masses in this region, large bulk viscosities are expected. \begin{figure}[htb] \centerline{\includegraphics[width=8cm]{figs/all4}} \caption{\label{fig:all4}For the linear sigma model, the sigma field and mass are shown as a function of the temperature in the left panels. Near $T_c$, the masses fall to zero and the mean value of the field changes rapidly, which gives rise to a sharp peak in the bulk viscosity. The pressure and temperature are displayed in the right-side panels as a function of energy density.} \end{figure} As an example, we consider a simple linear sigma model, where the coupling of the sigma field to the quarks provides the quark mass \cite{Paech:2005cx,Paech:2003fe}, \begin{equation} H=-\frac{1}{2}\sigma\nabla^2\sigma+\frac{\lambda^2}{4}\left( \sigma^2-f_\pi^2+m_\pi^2/\lambda^2\right)^2-h_q\sigma +H_{\rm quarks}(m=g\sigma), \end{equation} assuming only up and down flavored quarks. The resulting equation of state and values for $m_\sigma$ and $\sigma$ are displayed in Fig. \ref{fig:all4} for $\lambda^2=40$. For couplings $g<g_c=3.554$, the transition is a smooth cross-over, while for $g=g_c$ the transition is second order, and for $g>g_c$ a first-order phase transition ensues with $T_c=172$ MeV. From Fig. \ref{fig:all4}, one can see that $m_\sigma$ becomes small in the same region where the field rapidly changes; thus one expects a peak in the bulk viscosity. \begin{figure} \centerline{\includegraphics[width=8cm]{figs/visc_tau}} \caption{\label{fig:visctau}For a Bjorken expansion ($\nabla\cdot{\bf u}=1/\tau$), the pressure is plotted alongside $T_{ii}=P-B\nabla\cdot{\bf u}$, to demonstrate the significance of viscous terms near $T_c$. Viscous terms are larger and sharper for couplings close to the critical coupling.} \end{figure} The bulk viscosity was calculated according to the methods described in items (4) and (5) of the previous section, assuming a width $\Gamma=400$ MeV and a chemical equilibration time $\tau_{\rm chem}=1$ fm/$c$ at a density of one quark per fm$^3$, scaling inversely with the density. For a Bjorken expansion $\nabla\cdot{\bf u}=1/\tau$, and assuming an isentropic expansion starting with $\epsilon=8$ GeV/fm$^3$ at $\tau=1$ fm/$c$, we calculated both $P$ and $B$ as a function of $\tau$. To illustrate the size of the effect, we display both $P$ and the Navier-Stokes expression $T_{ii}=P-B\nabla\cdot{\bf u}$ as a function of the energy density for a Bjorken expansion in Fig. \ref{fig:visctau}. The effect is certainly dramatic, especially for $g\approx g_c$. However, the Navier-Stokes expression is only applicable for small expansion rates. We expect Israel-Stewart \cite{Israel:1979wp,Heinz:2005bw} equations for hydrodynamics to yield more moderate effects than Navier-Stokes, though they should give identical results for small velocity gradients. \section{Summary} The simplicity of the Kubo relations, Eqs. (\ref{eq:kuboshear}) and (\ref{eq:kuboB}), masks the wide variety of physical sources of viscosity.
The one aspect common to the various sources is that a non-zero equilibration time or a non-zero interaction range can always be identified. In this talk, we focused on bulk viscosities associated with the chiral transition. In general, one would expect such effects whenever a system needs to rapidly rearrange its basic structure. In this sense these effects have much in common with super-cooling or hysteresis. The implication for the dynamics should be that the matter accelerates more quickly, due to the higher gradients in $T_{xx}$, when the interior energy density is above the critical region. However, once the matter flows into the viscous region of energy densities, there should be a slowing down and a reduction of surface emission. This trend would be in the right direction to explain HBT measurements, which show a rapid expansion with a sudden disintegration \cite{Retiere:2003kf}, but the potential magnitude of the effects is not yet known. Finally, we re-emphasize that if one were to solve for the evolution of the mean fields or chemistry alongside solving the hydrodynamic evolution equations, one might be able to neglect some of these effects. If these effects are large, the proper conclusion may be that, rather than absorbing these effects into viscous hydrodynamics, one should treat non-equilibrated degrees of freedom more explicitly. \section*{Acknowledgments} Support was provided by the U.S. Department of Energy, Grant No. DE-FG02-03ER41259.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Full Paper and not an Extended Abstract} LREC2020 is moving to full paper submissions. Each paper should be submitted on white A4 paper. The fully justified text should be formatted in two parallel columns, each 8.25 cm wide, and separated by a space of 0.63 cm. Left, right, and bottom margins should be 1.9 cm and the top margin 2.5 cm. The font for the main body of the text should be Times New Roman 10 with interlinear spacing of 11 pt. \subsection{General Instructions for the Submitted Full Paper} Each submitted paper should be between \ul{a minimum of four and a maximum of eight pages including figures}. \section{Full Paper} Each manuscript should be submitted on white A4 paper. The fully justified text should be formatted in two parallel columns, each 8.25 cm wide, and separated by a space of 0.63 cm. Left, right, and bottom margins should be 1.9 cm and the top margin 2.5 cm. The font for the main body of the text should be Times New Roman 10 with interlinear spacing of 12 pt. Articles must be between 4 and 8 pages in length, regardless of the mode of presentation (oral or poster). \subsection{General Instructions for the Final Paper} Each paper is allocated between \ul{a minimum of four and a maximum of eight pages including figures}. The unprotected PDF files will appear in the on-line proceedings directly as received. Do not print the page number. \section{Page Numbering} \textbf{Please do not include page numbers in your article.} The definitive page numbering of articles published in the proceedings will be decided by the organising committee. \section{Headings / Level 1 Headings} Headings should be capitalised in the same way as the main title, and centred within the column. The font used is Times New Roman 12 bold. There should also be a space of 12 pt between the title and the preceding section, and a space of 3 pt between the title and the text that follows it. \subsection{Level 2 Headings} The format for level 2 headings is the same as for level 1 headings, with the font Times New Roman 11, and the heading is justified to the left of the column. There should also be a space of 6 pt between the title and the preceding section, and a space of 3 pt between the title and the text that follows it. \subsubsection{Level 3 Headings} The format for level 3 headings is the same as for level 2 headings, except that the font is Times New Roman 10, and there should be no space left between the heading and the text. There should also be a space of 6 pt between the title and the preceding section, and a space of 3 pt between the title and the text that follows it. \section{Citing References in the Text} \subsection{Bibliographical References} All bibliographical references within the text should be put between parentheses with the author's surname followed by a comma before the date of publication \cite{Martin-90}. If the sentence already includes the author's name, then it is only necessary to put the date in parentheses: \newcite{Martin-90}. When several authors are cited, those references should be separated with a semicolon: \cite{Martin-90,CastorPollux-92}. When the reference has more than three authors, only cite the name of the first author followed by ``et al.'' (e.g. \cite{Superman-Batman-Catwoman-Spiderman-00}). \subsection{Language Resource References} \subsubsection{When Citing Language Resources} When citing language resources, we recommend proceeding in the same way as for bibliographical references. Thus, a language resource should be cited as \cite{speecon}.
\section{Figures \& Tables} \subsection{Figures} All figures should be centred and clearly distinguishable. They should never be drawn by hand, and the lines must be very dark in order to ensure a high-quality printed version. Figures should be numbered in the text, and have a caption in Times New Roman 10 pt underneath. A space must be left between each figure and its respective caption. Example of a figure enclosed in a box: \begin{figure}[!h] \begin{center} \includegraphics[scale=0.5]{lrec2020W-image1.eps} \caption{The caption of the figure.} \label{fig.1} \end{center} \end{figure} Figure and caption should always appear together on the same page. Large figures can be centred, using a full page. \subsection{Tables} The instructions for tables are the same as for figures. \begin{table}[!h] \begin{center} \begin{tabularx}{\columnwidth}{|l|X|} \hline Level&Tools\\ \hline Morphology & Pitrat Analyser\\ \hline Syntax & LFG Analyser (C-Structure)\\ \hline Semantics & LFG F-Structures + Sowa's\\ & Conceptual Graphs\\ \hline \end{tabularx} \caption{The caption of the table} \end{center} \end{table} \section{Footnotes} Footnotes are indicated within the text by a number in superscript\footnote{Footnotes should be in Times New Roman 9 pt, and appear at the bottom of the same page as their corresponding number. Footnotes should also be separated from the rest of the text by a horizontal line 5 cm long.}. \section{Copyrights} The Language Resources and Evaluation Conference (LREC) proceedings are published by the European Language Resources Association (ELRA). They are available online from the conference website. ELRA's policy is to acquire copyright for all LREC contributions. In assigning your copyright, you are not forfeiting your right to use your contribution elsewhere. This you may do without seeking permission, subject only to normal acknowledgement of the LREC proceedings. The LREC 2020 Proceedings are licensed under CC-BY-NC, the Creative Commons Attribution-Non-Commercial 4.0 International License. \section{Conclusion} Your submission of a finalised contribution for inclusion in the LREC proceedings automatically assigns the above-mentioned copyright to ELRA. \section{Acknowledgements} Place all acknowledgements (including those concerning research grants and funding) in a separate section at the end of the article. \section{Providing References} \subsection{Bibliographical References} Bibliographical references should be listed in alphabetical order at the end of the article. The title of the section, ``Bibliographical References'', should be a level 1 heading. The first line of each bibliographical reference should be justified to the left of the column, and the rest of the entry should be indented by 0.35 cm. The examples provided in Section \secref{main:ref} (some of which are fictitious references) illustrate the basic format required for articles in conference proceedings, books, journal articles, PhD theses, and chapters of books. \subsection{Language Resource References} Language resource references should be listed in alphabetical order at the end of the article.
\section*{Appendix: How to Produce the \texttt{.pdf} Version} In order to generate a PDF file out of the LaTeX file herein, the following steps need to be performed: \begin{itemize} \item{Compile the \texttt{.tex} file once} \item{Invoke \texttt{bibtex} on the eponymous \texttt{.aux} file} \item{Compile the \texttt{.tex} file twice} \end{itemize} \section{Bibliographical References} \label{main:ref} \bibliographystyle{lrec} \section{Data acquisition} The first step when constructing a language classifier is to gather a data set. For this purpose I wrote a small script using the Wikipedia API for Python.\footnote{\url{https://pypi.org/project/wikipedia/}}\\ With the script I was able to download the summaries for randomly chosen Wikipedia articles in each of the languages, which are saved as raw text to six {\tt .txt} files of about 10MB each.\\ After the initial cleaning, which I will describe in the next section, the data set contains just over 50K sentences in each of the language categories. I chose to draw two data sets with exactly 10K and 50K sentences respectively from the raw data set. In this way the data sets we will work with are balanced, containing the same number of data points in each language category.\\ Throughout this report we split these data sets, reserving 80\% for the training set and 20\% for the test set we use when evaluating the models. \subsection{Data cleaning} In this section I will briefly describe how the data set is initially cleaned and how sentences are extracted from the raw data. \paragraph{Extracting sentences:} The first thing we want to do is to divide the text into sentences. This is generally a non-trivial task. My approach is to first split the raw string by line break. This roughly divides the text into paragraphs, with some noise which we filter out in the last step.\\ We then extract shorter sentences with the sentence tokenizer ({\tt sent\_tokenize}) function from the NLTK \cite{nltk} python package. This does a better job than just splitting by {\tt '.'} due to the fact that abbreviations, which can appear in a legitimate sentence, typically include a period symbol. \paragraph{Cleaning characters} The initial data set has a lot of characters that do not belong to the alphabets of the languages we work with. Often the Wikipedia pages for people or places contain the name in the original language. For example a summary might contain Chinese or Russian characters, which are arguably irrelevant for the purpose of discriminating between the Nordic languages.\\ To make the feature extraction simpler, and to reduce the size of the vocabulary, I chose to make the raw data lowercase and strip all characters which are not part of the standard alphabet of the six languages.\\ In this way we only accept the following character set \begin{verbatim} 'abcdefghijklmnopqrstuvwxyzáäåæéíðóöøúýþ ' \end{verbatim} and replace everything else with white space before continuing to extract the features. For example the raw sentence \begin{verbatim} 'Hesbjerg er dannet ved sammenlægning af de 2 gårde Store Hesbjerg og Lille Hesbjerg i 1822.'
\end{verbatim} will after this initial cleanup be reduced to \begin{verbatim} 'hesbjerg er dannet ved sammenlægning af de gårde store hesbjerg og lille hesbjerg i ', \end{verbatim} We thus make the assumption that capital letters, numbers and characters outside this character set do not contribute much information relevant for language classification. \section{Conclusion} In this section I will sum up the results of the paper and provide some suggestions for further work.\\ We have used the dimensionality reduction techniques PCA and t-SNE to make visualizations of feature vectors obtained by making a one-hot encoding with character bi-grams and with the two modes from FastText.\\ These unsupervised techniques were able to separate the sentences from Wikipedia into different clusters. Without any prior knowledge about the actual language of each sentence, these techniques indicated that the six languages can be divided into three main language categories: (1) Danish, Nynorsk and Bokmål, (2) Faroese and Icelandic, and (3) Swedish.\\ We then compared four different "classical" models: K nearest Neighbors, Logistic Regression, Naive Bayes and a linear support vector machine, with two neural network architectures: a multilayer perceptron and a convolutional neural network.\\ Generally the supervised models had the largest errors when discriminating between languages belonging to the same language group mentioned above.\\ For the "classical" models we saw that Logistic Regression and support vector machines achieved better performance than KNN and Naive Bayes, with the latter performing the worst. This was true in all cases irrespective of the method of feature extraction.\\ Additionally we saw that when we used feature vectors from the FastText skipgram model, the classification models achieved better results than when using either FastText cbow or character n-grams.\\ Generally we saw that increasing the number of data points led to better performance. When comparing the CNN with the "classical" models, however, the CNN performed better than any of the other models even when trained on fewer data points. In this way it seems that the CNN is able to learn more from less data compared to the other models.\\ We saw that when we tested the two best performing models, FastText supervised and the CNN, on the dataset from Tatoeba, the accuracies dropped quite a lot. We showed that using character n-grams as features instead of words increased the performance of the FastText supervised classifier. Training FastText on the Tatoeba dataset as well as the Wikipedia dataset resulted in an additional increase in performance. \subsection{Ideas for improvements} Here I will provide a couple of ideas for extensions of the work presented in this paper. \begin{itemize} \item \textbf{Hyperparameter optimization:} Especially for the neural networks it could be interesting to see if additional tuning of the hyperparameters could result in better performance. One obvious extension would be to try stacking convolutional layers on top of each other. I briefly looked at this but did not manage to make a rigorous investigation, as the training time increased substantially when adding more layers. \item \textbf{Implement a two-step classification approach.} The paper "Discriminating Similar Languages: Evaluations and Explorations" \cite{DSLEvaluation} mentions that in the 2014 edition of the DSL competition, the best result achieved an accuracy of 95.7\%.
The winning team used a two-step approach where their algorithm first classified the language group and then the actual language. The team used a linear support vector machine with words and characters as features.\\ Inspired by this result I propose to construct a classification algorithm which first discriminates between the major language groups before making a prediction about the actual language with a dedicated classifier. In this way one could treat discriminating between Danish and Bokmål or Faroese and Icelandic as a binary classification problem and see if performance would increase.\\ \end{itemize} \section{Feature extraction} In this section I will begin with a brief explanation of how the data is initially cleaned and then go on to explain how the features are extracted.\\ There are several ways of converting text into feature vectors suitable as input for the classification algorithms. In this paper we compare the performance of encoding the sentences using character-level n-grams with feature vectors obtained using FastText. \subsection{Bag of Words and one-hot encoding} According to \cite{BagOfTricks} a simple and efficient baseline for sentence classification is to represent sentences as a bag of words (BoW) and train a linear classifier.\footnote{The paper proposes logistic regression or an SVM} First, a brief explanation of the BoW model. \\ Using words as features, the first step of the BoW representation is to split each sentence into individual words. Continuing the example above, the sentence would be split into \begin{verbatim} ['hesbjerg','er','dannet','ved', 'sammenlægning','af','de','gårde', 'store','hesbjerg','og','lille', 'hesbjerg','i'] \end{verbatim} We proceed to make a dictionary over all the words in the corpus (every word in the complete dataset), counting the frequency by which each word appears. We can now sort the words by their frequency and in this way assign each word to a unique integer. \begin{verbatim} [2442, 3, 1513, 39, 4819, 17, 30, 8684, 189, 2442, 2, 617, 2442, 1] \end{verbatim} Here we see that {\tt 'i', 'og'} and {\tt 'er' } are the most frequent words in the corpus. Now for the one-hot encoding we construct a feature vector which has the dimension of the vocabulary and is zero everywhere except in the component corresponding to the integer which we have mapped the word onto, as seen above. \\ When using individual words, the vocabulary, and thus the dimension of the feature vector for each sentence, increases rapidly with the size of the dataset. In the example above, using only 1000 sentences per language the vocabulary consists of 26695 words, and if we use 50K sentences in each language the vocabulary grows to 368493! \subsection{n-grams} Apart from the large vocabulary, an obvious limitation of the bag of words model described above is that it does not take into account the relative position of each word.\\ A way to remedy this is to use character n-grams as features. An n-gram is constructed by regarding $n$ consecutive tokens (here either words or characters) as the features. An n-gram of size one is called a uni-gram, an n-gram of size 2 is called a bi-gram, and so forth.\\ Continuing the example sentence above, a bi-gram representation would look like \begin{verbatim} ['he','es','sb','bj','je','er','rg','g ', ' e','er','r ',' d','da','an','nn','ne', 'et','t ',' v','ve','ed','d ',' s','sa', 'am','mm','me','en','nl','læ','æg','gn', 'ni','in','ng','g ',' a',...]
\end{verbatim} An advantage of using a character-level representation instead of a word-level one is a drastic reduction in the vocabulary size. Since we only accept 40 characters, a one-hot encoding using character-level uni-grams would have at most 40 components, while bi-grams would have a vocabulary of $40^2 = 1600$, and for tri-grams the vocabulary would grow to $40^3 = 64000$.\\ In this project we will compare the performance of a bag of words encoding using character-level uni-grams and bi-grams with feature vectors obtained from using FastText. \subsection{Using FastText} The methods described above are quite simple, and in this project I will also compare them with FastText, which is a library for creating word embeddings developed by Facebook \cite{BagOfTricks}. \\ In the paper "Enriching Word Vectors with Subword Information" \cite{EnrichingWordVectors} the authors explain how FastText extracts feature vectors from raw text data. FastText makes word embeddings using one of two model architectures: continuous bag of words (cbow) or the continuous skipgram model.\\ The skipgram and cbow models were first proposed in \cite{EfficientWordRepresentations}, which is the paper introducing the word2vec model for word embeddings. FastText builds upon this work by proposing an extension to the skipgram model which takes into account subword information.\\ Both models use a neural network to learn word embeddings using a context window consisting of the words surrounding the current target word. The cbow architecture predicts the current word based on the context, and the skipgram predicts surrounding words given the current word \cite{EfficientWordRepresentations}. We see an illustration of this in Figure \ref{cbowskipgram}. \begin{figure}[h!] \centering \includegraphics[width = 225pt]{figs/cbowskipgram} \caption{Diagram of the cbow and skipgram models.} \label{cbowskipgram} \end{figure} \section{Introduction} Automatic language identification is a challenging problem, and especially discriminating between closely related languages is one of the main bottlenecks of state-of-the-art language identification systems \cite{DSL2014}.\\ The problem has been investigated in recent work \cite{DSLEvaluation,DSL2015}, which discuss the results from two editions of the "Discriminating between Similar Languages (DSL) shared task". Over the two editions of the DSL shared task, different teams competed to develop the best machine learning algorithms to discriminate between the languages in a corpus consisting of 20K sentences in each of the languages: Bosnian, Croatian, Serbian, Indonesian, Malaysian, Czech, Slovak, Brazilian Portuguese, European Portuguese, Argentine Spanish, Peninsular Spanish, Bulgarian and Macedonian.\\ In this project we will develop a machine learning based pipeline for automatic language identification for the Nordic languages.
Concretely we will focus on discriminating between the six Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål), Faroese and Icelandic.\\ We will explore different ways of extracting features from a corpus of raw text data consisting of summaries from Wikipedia in the respective languages and continue to evaluate the performance of a selection of machine learning models.\\ Concretely we will compare the performance of classic machine learning models such as Logistic Regression, Naive Bayes, Support Vector Machines, and K Nearest Neighbors with more contemporary neural network approaches such as Multilayer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs).\\ After evaluating these models on the Wikipedia dataset we will continue to evaluate the best models on a dataset from a different domain, in order to investigate how well the models generalize when classifying sentences from a different domain. \section{Models} Now that the feature vectors are extracted, we can begin to develop classification models. In this section I will briefly go through the theory of the classic machine learning models and continue to explain the core concepts of the neural network architectures. The problem of assigning one of several mutually exclusive languages to a sentence is an example of a multi-class classification problem. Since we have gathered a dataset with which we can train our models, we are dealing with supervised learning. Let us start by developing a bit of notation. We are provided with a set of data points $\mathcal{X}$ together with an associated set of labels $\mathcal{Y}$. We can define our \emph{dataset} $\mathcal{D}$ as follows \begin{align} \mathcal{D} = \{(\mathbf{x}_i,y_i)|\mathbf{x}_i\in\mathcal{X},y_i\in\mathcal{Y}\} \end{align} where we have that the data points and labels are defined by \begin{align} \mathcal{X}&=\{\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3, \cdots,\mathbf{x}_n\} \qquad \mathbf{x}_i \in \mathbf{R}^{m} \\ \mathcal{Y}&=\{y_1, y_2, y_3, \cdots,y_n\} \qquad y_i \in C \end{align} where $C$ is the set of possible language categories, in this case the set \begin{align} C = \{"dk", "sv", "nn", "nb", "fo", "is"\} \end{align} Now our aim is to construct a function $f$ of some parameters $\theta$ such that given a data point $\mathbf{x}$ the function returns the predicted label $\hat{y}$ \begin{align} \hat{y} = f_\theta (\mathbf{x}) \end{align} Now the predicted label might not be correct, so we need a measure of how wrong our model is over the whole dataset. One common error function is the mean squared error (MSE) cost function, \begin{align} E_{\text{MSE}} = \frac{1}{n}\sum_i^n (\hat{y}_i - y_i)^2 \end{align} Later, for the training of the neural networks with Keras, we will use the categorical cross entropy (CCE), which is given by \begin{align} E_{\text{CCE}} = -\frac{1}{n}\sum_i^n \sum_{c\in C} y_{i,c}\ln \hat{y}_{i,c} \end{align} \subsection{Baseline models} As mentioned in \cite{BagOfTricks}, simple baseline models for text classification are for example logistic regression and support vector machines. In this project I will also test the performance of the K Nearest Neighbors algorithm and a Naive Bayes classifier. \subsubsection{K Nearest Neighbors} Perhaps the simplest model to understand is the K Nearest Neighbors classification algorithm. Here we calculate the distance from a test example which we want to label to its $k$ "nearest neighbors".
The distance $d$ between the test example $\mathbf{x}$ and a labeled data point $\mathbf{x}_{\text{train}}$ in the training set is the conventional Euclidean distance, \begin{align} d=\sqrt{\sum_{i=1}^m (x_i-x_{\text{train}_i})^2} \end{align} The algorithm calculates the distance $d$ from a test point $\mathbf{x}$ to all other data points in the training set. Then we simply assign the most common label among its $k$ nearest neighbors to $\mathbf{x}$. In this project, and for the results to be presented later, we have set $k=3$. \subsubsection{Linear support vector machines} The support vector machine is about finding the maximal-margin hyperplane. Any hyperplane in the feature space can be written as the set of points satisfying \begin{align} \mathbf{w}\cdot\mathbf{x} + b = 0 \end{align} We want to find the two parallel supporting hyperplanes where the distance between them is as large as possible. The functional form of the linear support vector machine then becomes \begin{align} \hat{y} = \text{sign}(\mathbf{w}\cdot \mathbf{x} + b) \end{align} where we define \begin{align} \mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i \end{align} where $\alpha_i$ is only non-zero for the support vectors $\mathbf{x}_i$. The aim is thus to minimize $||\mathbf{w}||$, since the width between the two supporting hyperplanes is proportional to $1/||\mathbf{w}||$. \subsubsection{Logistic Regression} Contrary to the K nearest neighbors and support vector machine algorithms described above, in logistic regression we calculate the probability that $\mathbf{x}$ belongs to the language $C_k$ as \begin{align} p(C_k|\mathbf{x}) = f(\mathbf{x}) = \frac{1}{1+e^{-\theta^T \mathbf{x}}} \end{align} The predicted label of the logistic regression model is then given by \begin{align} \hat{y} = \text{argmax}_k p(C_k|\mathbf{x}) \end{align} The method for finding the parameters $\theta$ which minimize the error is gradient descent, which is given by \begin{align} \theta_{t+1} = \theta_t - \eta \nabla E \end{align} \subsubsection{Naive Bayes} Our goal is to find the probability of belonging to a language class $C_k$. We can use Bayes' theorem here to find the probability $p(C_k|\mathbf{x})$ of a language class $C_k$ given a feature vector $\mathbf{x}$. Using Bayes' theorem we get \begin{align} p(C_k|\mathbf{x}) = \frac{p(C_k)}{p(\mathbf{x})} p(\mathbf{x}|C_k) \label{bayes1} \end{align} Now the evidence $p(\mathbf{x})$ is a constant, and so is the prior $p(C_k)$ if the number of data points in each of the language categories is equal, as is the case here. We can rewrite $p(\mathbf{x}|C_k)$ using the assumption that the features $x_i$\footnote{This is the "naive" part of Naive Bayes} are independent: \begin{align} p(\mathbf{x}|C_k) = \prod_i p(x_i|C_k) \label{bayes2} \end{align} By using eq. \ref{bayes2} in eq. \ref{bayes1} and selecting the largest probability we get the prediction \begin{align} \hat{y} = \underset{k}{\text{argmax }} p(C_k) \prod_i p(x_i|C_k) \end{align} \subsection{Neural Networks} In this section I will go through the basic theory behind the two neural network architectures: multilayer perceptrons and convolutional neural networks. \subsubsection{Feed Forward Neural Networks} Arguably the simplest neural network is the Multilayer Perceptron (MLP). An MLP consists of at least 3 layers, where the first is the input layer, which has the same number of nodes as the feature vector we feed into the network.
The last layer is the output layer, which always has the same number of nodes as the number of different classes, six in this case. In the MLP we use dense layers as the hidden layers; this means that all the nodes in a layer are connected to all nodes in the next layer. In each layer we take the output from the previous layer, multiply each component with a set of weights, the values of which we optimize during training, sum them up and pass the result to a nonlinear activation function. The activation function in the hidden layers is the rectified linear unit, or ReLU, function \begin{align} \text{ReLU}(z)= \text{max}(0,z) \end{align} The activation function in the final layer is the softmax function \begin{align} \text{softmax}(z)_i= \frac{e^{z_i}}{\sum_{j=1}^k e^{z_j}} \end{align} where $k$ is the number of different classes. \subsubsection{Convolutional Neural Networks} While every layer in the MLP is densely connected, such that each of the nodes in a layer is connected to all nodes in the next layer, in a convolutional neural network we use one or more convolutional layers. Convolutional neural networks are very popular for image recognition, but they can also be used for text classification \cite{textcnn_google}. \begin{figure}[h!] \centering \includegraphics[width = 200pt]{figs/cnn_diagram} \caption{Diagram of a convolutional neural network.} \label{cnn} \end{figure} The basic premise of a convolutional layer is illustrated in Figure \ref{cnn}\footnote{Source: \url{https://realpython.com/python-keras-text-classification/}}. In a CNN you have a filter which slides over the input. The CNN then takes the dot product of the weights of the filter and the corresponding input features, before applying the activation function. The kernel size is a hyperparameter which can be optimized by experimentation, as we will also do later. \section{Results} In this section I will present the results from running the various classification models on the Wikipedia dataset and then proceed to investigate how well these models perform on a dataset from a different domain. \subsection{Baseline With langid.py} As a baseline against which to compare the performance of the models in this work, I would like to use an out-of-the-box language identification system. "langid.py: An Off-the-shelf Language Identification Tool" \cite{langID} is such a tool.\\ Out of the box langid.py comes with a pretrained model which covers 97 languages. The data for langid.py comes from five different domains: government documents, software documentation, newswire, online encyclopedia and an internet crawl.\\ I have written a program to test how well langid.py performs on the Wikipedia dataset using the Python API (a minimal sketch is shown below).\footnote{\url{https://pypi.org/project/langid/}}\\ Since langid.py returned the language id "no" (Norwegian) on some of the data points, I chose to restrict langid.py to only be able to return either "nn" (Nynorsk) or "nb" (Bokmål) as predictions. It is a quite peculiar feature of the Norwegian language that there exist two different written languages but three different language codes. \begin{figure}[h!] \centering \includegraphics[width = 200pt]{figs/langid} \caption{Confusion matrix with results from langid.py on the full dataset with 300K data points. } \label{langid_confusion_matrix} \end{figure} In Figure \ref{langid_confusion_matrix} we see the confusion matrix for the langid.py classifier. The largest errors are between Danish and Bokmål and between Faroese and Icelandic.
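The restriction of langid.py to the six target codes takes only a few lines; the following is a minimal sketch assuming the {\tt langid} package is installed (the example sentence is purely illustrative).
\begin{verbatim}
import langid

# Constrain langid.py to the six target language codes, so that the
# Norwegian macrolanguage code "no" can never be returned.
langid.set_languages(['da', 'sv', 'nn', 'nb', 'fo', 'is'])

# classify() returns a (language code, score) tuple for a sentence.
lang, score = langid.classify(
    'hesbjerg er dannet ved sammenlægning af de to gårde')
print(lang, score)
\end{verbatim}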
We see that langid.py was actually able to correctly classify most of the Danish data points; however, approximately a quarter of the Bokmål data points were incorrectly classified as Danish, and just under an eighth were classified as Nynorsk.\\ Furthermore, langid.py correctly classified most of the Icelandic data points; however, over half of the Faroese data points were incorrectly classified as Icelandic. \subsection{Baseline with linear models.} In Figure \ref{baseline-results-10k} we see the results of running the models on a dataset with 10K data points in each language category. We see that the models in general perform better if we use character-level bi-grams instead of uni-grams (individual characters).\\ Also we see that logistic regression and support vector machines outperform Naive Bayes and K nearest neighbors in all cases. Furthermore, for all models we get the best performance if we use the skipgram model from FastText.\\ If we compare the cbow mode from FastText with character-level bi-grams, we see that the cbow model is on par with bi-grams for the KNN and Naive Bayes classifiers, while bi-grams outperform cbow for Logistic Regression and support vector machines.\\ \begin{figure}[h!] \centering \begin{tabular}{ l | c | r } \hline Model & Encoding & Accuracy \\ \hline Knn & cbow & 0.780\\ Log-Reg & cbow & 0.819\\ Naive Bayes & cbow & 0.660\\ SVM & cbow & 0.843\\ Knn & skipgram & 0.918\\ Log-Reg & skipgram & \textbf{0.929}\\ Naive Bayes & skipgram & 0.840\\ SVM & skipgram & \textbf{0.928}\\ Knn & char bi-gram & 0.745\\ Log-Reg & char bi-gram & 0.907\\ Naive Bayes & char bi-gram & 0.653\\ SVM & char bi-gram & 0.905\\ Knn & char uni-gram & 0.620\\ Log-Reg & char uni-gram & 0.755\\ Naive Bayes & char uni-gram & 0.614\\ SVM & char uni-gram & 0.707\\ \hline \end{tabular} \caption{Overview of results for the dataset with 10K data points in each language.} \label{baseline-results-10k} \end{figure} \subsection{Results with neural networks.} In the following we evaluate the initial results for the neural network architectures, which can be seen in Figure \ref{keras-results}. Here we compare the results of using character-level uni- and bi-grams with the Multilayer Perceptron and the Convolutional Neural Network. We see that the CNN performs the best, achieving an accuracy of 95.6\% when using character bi-grams. Both models perform better using bi-grams than individual characters as features, while the relative increase in performance is greater for the MLP model. \begin{figure}[h!] \centering \begin{tabular}{ l | c | r } \hline Model & Encoding & Accuracy \\ \hline MLP & char bi-gram & 0.898 \\ CNN & char bi-gram & \textbf{0.956} \\ MLP & char uni-gram & 0.697\\ CNN & char uni-gram & 0.942 \\ \hline \end{tabular} \caption{Overview of results for the neural network models for the dataset with 10K data points in each language.} \label{keras-results} \end{figure} \subsection{Increasing the size of the dataset.} Usually the performance of supervised classification models increases with more training data. In this section I have increased the amount of training data to 50K sentences in each of the language categories. Due to much longer training times I chose to only rerun the "baseline models" with the skipgram encoding from FastText, which we saw achieved the highest accuracy; a sketch of this skipgram-plus-linear-model pipeline is shown below.
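The following minimal sketch shows how such a baseline can be wired together, assuming the {\tt fasttext} and {\tt scikit-learn} packages and a raw-text file {\tt wiki\_train\_raw.txt}; the file name and the tiny placeholder sentences are assumptions, not the actual training data.
\begin{verbatim}
import fasttext
from sklearn.linear_model import LogisticRegression

# Placeholder data: in the real pipeline these are the cleaned
# Wikipedia sentences and their language labels.
train_sents = ['hesbjerg er dannet ved sammenlægning af de gårde',
               'jag gillar inte ägg']
train_labels = ['da', 'sv']

# Learn unsupervised skipgram vectors from the raw training text.
emb = fasttext.train_unsupervised('wiki_train_raw.txt',
                                  model='skipgram')

# Embed each sentence and fit one of the baseline classifiers.
X_train = [emb.get_sentence_vector(s) for s in train_sents]
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

print(clf.predict([emb.get_sentence_vector(
    'vi fløj over atlanterhavet')]))
\end{verbatim}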
\begin{figure}[h!] \centering \begin{tabular}{ l c | r } \hline Model & Encoding & Accuracy \\ \hline Knn & skipgram & 0.931\\ Logistic Regression & skipgram & \textbf{0.933}\\ Naive Bayes & skipgram & 0.806\\ SVM & skipgram & 0.925\\ \hline \end{tabular} \caption{Overview of results for the dataset with 50K data points in each language.} \label{results-sklearn300k} \end{figure} As can be seen in Figure \ref{results-sklearn300k}, the accuracies for the logistic regression model and the K nearest neighbors algorithm improved when including more data, however not by much. Unexpectedly, the accuracies for the support vector machine and Naive Bayes actually dropped a bit when including more data.\\ Even when including five times the amount of data, the best result, logistic regression with an accuracy of 93.3\%, is still worse than the Convolutional Neural Network trained on 10K data points in each language.\\ In Figure \ref{results-keras-300k} we see the results of running the neural networks on the larger dataset. Both models improve with the increased amount of data, and the Convolutional Neural Network reached an accuracy of 97\%, which is the best so far. \begin{figure}[h!] \centering \begin{tabular}{ l c | r } \hline Model & Encoding & Accuracy \\ \hline MLP & char bi-gram & 0.918\\ CNN & char bi-gram & \textbf{0.970}\\ \hline \end{tabular} \caption{Overview of results for the dataset with 50K data points in each language.} \label{results-keras-300k} \end{figure} In Figure \ref{confusion_matrix-big-cnn} we see the confusion matrix for the Convolutional Neural Network trained on the full Wikipedia dataset with 50K data points per language (300K in total). We see that the largest classification errors still happen between Danish, Bokmål and Nynorsk, as well as between Icelandic and Faroese. \\ \begin{figure}[h!] \centering \includegraphics[width = 200pt]{figs/confusion_CNN} \caption{Confusion matrix with results from the convolutional neural network on the full dataset with 50K data points in each language.} \label{confusion_matrix-big-cnn} \end{figure} \subsection{Optimizing the kernel size.} In an attempt to optimize the CNN I looked at different kernel sizes (between 1 and 11) for character-level uni-, bi- and tri-grams. I tested on the dataset with 10K sentences in each language, since the training time for a single CNN was several hours and it would have taken several days to train on the full dataset, even using cloud resources in Google Colab. We see the results in Figure \ref{kernel_sizes}.\\ \begin{figure}[h!] \centering \includegraphics[width = 225pt]{figs/KernelSizes} \caption{Accuracy for different kernel sizes for the Convolutional Neural Network.} \label{kernel_sizes} \end{figure} \subsection{Using FastText supervised} FastText can also be trained in a supervised fashion and used for classification. In the paper "Bag of Tricks for Efficient Text Classification" \cite{BagOfTricks} the authors show that FastText can obtain performance on par with methods inspired by deep learning, while being much faster, on a selection of tasks in tag prediction and sentiment analysis. The confusion matrix from running the FastText supervised classifier can be seen in Figure \ref{fasttext_supervised}. We see that FastText is on par with the CNN.\\ \begin{figure}[h!]
\centering \includegraphics[width = 225pt]{figs/fasttext_supervised} \caption{Confusion matrix with results from a supervised FastText model on the full dataset with 300K data points.} \label{fasttext_supervised} \end{figure} \subsection{Reflections.} At this point it is beneficial to take a step back and look at what we have done so far. We have developed a selection of different models, and our results so far indicate that the FastText supervised model and the CNN both give quite good results on the Wikipedia dataset, with accuracies around 97\%. \\ In the rest of this section I would like to investigate the cases where these classifiers fail and see if I can find common patterns. Also I will test the models on a dataset which they have not been trained on, to investigate how well the models generalize to data from a different domain than Wikipedia. \\ \subsection{Looking at failure cases} Having achieved quite good results with FastText and the CNN, I now look a bit more closely at the sentences which were misclassified by both models. \\ As a first observation, the misclassified sentences tend to be shorter than the correctly classified ones. In Figure \ref{faliurelengthdist} I have plotted the distribution of sentence lengths, defined as the number of characters after the data cleaning procedure described in a previous section. The mean length of the sentences in the test set is 104.74 characters with a standard deviation of 65.39, while the mean length of the misclassified sentences is only 48.91 characters with a deviation of 37.27. \\ \begin{figure}[h!] \centering \includegraphics[width=200pt]{figs/faliurelengthdist} \caption{Distribution of sentence lengths.} \label{faliurelengthdist} \end{figure} Generally the misclassified sentences fall into the categories: very short sentences, names, mathematical content and sentences with many foreign words.\\ \textbf{Mathematical equations}: A substantial (and useful) part of Wikipedia is related to mathematics, and equations are common on these pages. My Wikipedia scraper did not filter out mathematical equations, which appear in the dataset as in the examples below. \\ \begin{verbatim} displaystyle n lambda eff cdot t displaystyle a frac a cot frac pi simeq a displaystyle a b c a cdot b a cdot c ab ac \end{verbatim} \textbf{Foreign words}: Another common type of misclassified sentence contains many foreign words. In the dataset this commonly happened with English words. Below are some examples of misclassified sentences where most or all words are from foreign languages. \\ \begin{verbatim} art garfunkel all i know skywriter watermark mémoires de la société des antiquaires du nord fjögur bindi avatar the last airbender prins zuko the fifth dimension up up and away \end{verbatim} \textbf{Names}: Another common failure case is sentences that contain only names of people or locations. Examples are \begin{verbatim} romain rolland henry morton stanley anna karin \end{verbatim} It is not surprising that these cases are hard to classify, since names are usually spelled the same way irrespective of language. \\ \textbf{Danish and Bokmål}: One of the largest error categories was sentences in Danish or Bokmål misclassified as the other language.\\ This is not surprising, since these two languages are very closely related, and often the difference between them is an alternative spelling of a single word in a sentence. \\ The examples below are in Bokmål but were classified as Danish.
The first sentence is indistinguishable from Danish except for the alternative spelling of the single word "magesekken", which would be "mavesækken" in Danish. \\ The second sentence would be hard or impossible even for a native speaker to classify, since for this sentence there is no difference between the two languages. Finally, for the third example the difference is only two characters in the word "sommerfuglhager", which in Danish would be spelled "sommerfuglehaver". \begin{verbatim} (1) hos mennesket har magesekken et volumet ca (2) klubben har hjemmebane på slettebakken hvor de også har et klubbhus (3) en dyrepark kan derfor også omfatte for eksempel akvarier terrarier sommerfuglhager og fuglehus \end{verbatim} \subsection{Evaluating the models on another dataset} It would be interesting to see how the two best performing models generalize by testing on a dataset different from the one they have been trained on (the Wikipedia dataset).\\ I downloaded an additional dataset from Tatoeba\footnote{{\tt tatoeba.org/}}, which is a large database of user-provided sentences and translations. In Figure \ref{tatoebasentprlang} we see the number of sentences in each language for all sentences in the Tatoeba dataset. Observe that we have very few samples in Nynorsk and Faroese.\\ \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width =\textwidth]{figs/tatoebasentprlang} \caption{Distribution of the number of sentences in each language in the Tatoeba dataset.} \label{tatoebasentprlang} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width =\textwidth]{figs/taboeba-faliurelengthdist} \caption{Distribution of the length of sentences in the Tatoeba dataset.} \label{tatoebalengths} \end{subfigure} \caption{Distribution of the lengths and language classes of Tatoeba sentences.} \end{figure} The language used in the Tatoeba dataset is different from the language used in Wikipedia. The Tatoeba dataset mainly consists of sentences written in everyday language. Below we see some examples from the Danish part of the Tatoeba dataset. \begin{verbatim} Hvordan har du det? På trods af al sin rigdom og berømmelse, er han ulykkelig. Vi fløj over Atlanterhavet. Jeg kan ikke lide æg. Folk som ikke synes at latin er det smukkeste sprog, har intet forstået. \end{verbatim} The confusion matrices for the FastText supervised model and the CNN, both trained on the full Wikipedia dataset and then evaluated on the Tatoeba dataset, can be seen in Figure \ref{tatoeba-confuss}. Both models use the same settings that produced good results on the Wikipedia dataset. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figs/tatoeba-langid} \caption{langid.py} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figs/tatoeba-fasttext} \caption{FastText classifier} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figs/tatoeba-cnn} \caption{Convolutional neural network} \end{subfigure} \caption{Confusion matrices for the Tatoeba dataset.} \label{tatoeba-confuss} \end{figure} We see that the performance drops quite a lot when shifting to another domain.
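The FastText supervised experiments discussed here, including the character n-gram variant introduced below, can be set up as in the following minimal sketch. The file names are placeholders, and each line of the input files is assumed to be of the form {\tt \_\_label\_\_da <sentence>}, the format expected by the {\tt fasttext} package.
\begin{verbatim}
import fasttext

# Train a supervised FastText classifier on the Wikipedia sentences.
# Setting minn/maxn makes FastText use character n-grams (here 1-5)
# as subword features, the variant discussed below; with the default
# settings (minn=0, maxn=0) only whole words are used.
model = fasttext.train_supervised('wiki_train.txt', minn=1, maxn=5)

# Evaluate on the out-of-domain Tatoeba test file: test() returns
# (number of samples, precision@1, recall@1).
n, p, r = model.test('tatoeba_test.txt')
print(n, p, r)

# Predict the language of a single sentence.
print(model.predict('jeg kan ikke lide æg'))
\end{verbatim}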
For reference, the accuracy of langid.py on this dataset is 80.9\%, so FastText actually performs worse than the baseline with an accuracy of 75.5\%, while the CNN is a bit better than the baseline with an accuracy of 83.8\%. \\ One explanation for the drop in performance is that the sentences in the Tatoeba dataset are significantly shorter than the sentences in the Wikipedia dataset, as seen in Figure \ref{tatoebalengths}. As we saw in the previous section, both models tend to misclassify shorter sentences more often than longer sentences. This, and the fact that the "genre" of the sentences is different, might explain why the models trained on the Wikipedia dataset do not generalize to the Tatoeba dataset without a drop in performance.\\ The CNN uses character bi-grams as features while, with the standard settings, FastText uses only individual words to train. The better performance of the CNN might indicate that character-level n-grams are more useful features for language identification than words alone.\\ To test this I changed the settings of FastText to train using only character-level n-grams in the range 1-5 instead of individual words. In Figure \ref{fasttextcharngram} we see the confusion matrix for this version of the FastText model. This version still achieved 97.8\% on the Wikipedia test set while improving the accuracy on the Tatoeba dataset from 75.4\% to 85.8\%, which is a substantial increase.\\ Thus, using character-level features seems to improve the FastText model's ability to generalize to sentences belonging to a domain different from the one it has been trained on. \begin{figure}[h!] \centering \includegraphics[width=200pt]{figs/fasttextcharngram} \caption{Confusion matrix for FastText trained using only character-level n-grams on the Wikipedia dataset and evaluated on the Tatoeba dataset.} \label{fasttextcharngram} \end{figure} \subsection{Retraining on the combined dataset} To improve the accuracy on the Tatoeba dataset I decided to retrain the FastText model on a combined dataset consisting of data points from both the Wikipedia and Tatoeba datasets.\\ The FastText model achieved an accuracy of 97.2\% on this combined dataset and an accuracy of 93.2\% when evaluating this model on the Tatoeba test set alone; the confusion matrix can be seen in Figure \ref{retrain-confuss}.\\ As was the case with the Wikipedia dataset, the misclassified sentences tend to be shorter than the average sentence in the dataset. In Figure \ref{retrain-lengths} we see the distribution of sentence lengths for the Tatoeba test set along with the misclassified sentences.\\ In the Tatoeba test set the mean length of sentences is 37.66 characters with a standard deviation of 17.91, while the mean length is only 29.70 characters for the misclassified sentences, with a standard deviation of 9.65. This again supports the conclusion that shorter sentences are harder to classify. \begin{figure}[h!]
\begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=200pt]{figs/retrain} \caption{Confusion matrix for FastText trained using only character-level n-grams on the combined Wikipedia/Tatoeba dataset and evaluated on the Tatoeba dataset.} \label{retrain-confuss} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figs/faliurelengthdist_tatoeba} \caption{Distribution of sentence lengths in the Tatoeba test set along with the misclassified sentences.} \label{retrain-lengths} \end{subfigure} \end{figure} \section{Visualization} In this section we will try to develop some intuition about the dataset by making different visualizations. \subsection{Character distributions} One obvious thing to look at is how frequently each character appears in each language. In Figure \ref{chardist}\footnote{Some of the characters are missing diacritics; I could not find a way to make matplotlib render these characters.} I have plotted the frequency with which each character appears for each language in the corpus. In Figure \ref{chardistnorm} I have normalized the distribution by dividing each occurrence of a character by the total number of times that character appears in the dataset.\\ \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figs/chardist} \caption{Character frequencies} \label{chardist} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figs/chardistnorm} \caption{Normalized character frequencies} \label{chardistnorm} \end{subfigure} \caption{Frequency of each character in each language. The characters are ordered along the x-axis according to the frequency with which they appear in the dataset. } \end{figure} Interestingly, the most frequent character {\tt e} is much less frequent in Faroese and Icelandic than in the other languages. Not surprisingly, the character {\tt ð} only appears in Faroese and Icelandic, while it is much more frequent in Icelandic. The character {\tt þ} is exclusive to Icelandic, and {\tt ö} almost only appears in Swedish. We could potentially use the information about the relative frequencies of characters in the models. \subsection{Principal Component Analysis and t-SNE} To gain additional insight into how the different word embeddings capture important information about each of the language classes, I thought it would be interesting to try to visualize the embeddings using two different techniques for dimensionality reduction.\\ No matter which way we choose to extract the feature vectors, they belong to a high-dimensional feature space, and in order to do visualization we need to project the feature vectors down to 2D space.\\ To do this I have implemented two different methods: Principal Component Analysis (PCA), which I will compare with t-distributed Stochastic Neighbor Embedding (t-SNE). Here we will begin with a brief explanation of the two techniques and proceed with an analysis of the results. \paragraph{Principal Component Analysis} The first step is to calculate the covariance matrix of the dataset. The components of the covariance matrix are given by \begin{align} K_{X_i,X_j} = E[(X_i - \mu_i )(X_j - \mu_j)] \end{align} where $X_{i}$ is the $i$th component of the feature vector and $\mu_{i}$ is the mean of that component.
In matrix form we can thus write the covariance matrix as \begin{align} K(\mathbf{x},\mathbf{z}) = \begin{bmatrix} \text{cov}(x_1,z_1) & \dots & \text{cov}(x_1,z_n) \\ \vdots & \ddots & \vdots \\ \text{cov}(x_n,z_1) & \dots & \text{cov}(x_n,z_n) \\ \end{bmatrix} \end{align} The next step is to calculate the eigenvectors and eigenvalues of the covariance matrix by solving the eigenvalue equation \begin{align} \det (K -\lambda I) = 0 \end{align} The eigenvalues are the variances along the directions of the eigenvectors, or "principal components". To project our dataset onto 2D space we select the two eigenvectors with the largest associated eigenvalues and project our dataset onto this subspace.\\ In Figure \ref{pca} we see the result of running the PCA algorithm on the Wikipedia dataset, where we have used character-level bi-grams as features as well as the cbow and skipgram models from FastText.\\ \begin{figure}[h!] \centering \begin{subfigure}[b]{0.47\textwidth} \includegraphics[width=\textwidth]{figs/pcachar2} \caption{Character bigram} \end{subfigure} ~ \begin{subfigure}[b]{0.47\textwidth} \includegraphics[width=\textwidth]{figs/pcacbow1} \caption{FastText cbow} \end{subfigure} ~ \begin{subfigure}[b]{0.47\textwidth} \includegraphics[width=\textwidth]{figs/pcaskipgram1} \caption{FastText skipgram} \end{subfigure} \caption{Dimensionality reduction using PCA} \label{pca} \end{figure} In the figure for the encoding with character-level bi-grams, the PCA algorithm resulted in two elongated clusters. Without being given any prior information about the language of each sentence, PCA is apparently able to discriminate between Danish, Swedish, Nynorsk and Bokmål on one side and Faroese and Icelandic on the other, since the majority of the sentences in each language belong to either of these two clusters. With the FastText implementations we observe three clusters.\\ For both cbow and skipgram we see a distinct cluster of Swedish sentences. When comparing the two FastText models, we see that the skipgram features seem able to separate the Faroese and Icelandic data points to a higher degree than the cbow model. Also, for the cluster identified with the Danish, Bokmål and Nynorsk sentences, the skipgram model seems to give a better separation, however to a lesser degree than for the two former languages. \paragraph{t-SNE} The t-distributed Stochastic Neighbor Embedding method was first proposed in 2008 in the paper "Visualizing Data using t-SNE" \cite{tsne}. In the paper the authors explain the theory behind the algorithm, which I will briefly summarize here. Suppose you pick a data point $x_i$; then the probability of picking another data point $x_j$ as a neighbor of $x_i$ is given by \begin{align} p_{ji}= \frac{\exp (-|| x_i - x_j ||^2/2\sigma_i^2 )}{\sum_{k\neq i} \exp (-|| x_i - x_k ||^2/2\sigma_i^2 )} \end{align} Having this probability distribution, the goal is to find a low-dimensional mapping of the data points $x_i$, which we denote $y_i$, that follows a similar distribution.
\section{Introduction}
Automatic language identification is a challenging problem, and discriminating between closely related languages in particular is one of the main bottlenecks of state-of-the-art language identification systems~\cite{DSL2014}. Language technology for Scandinavian languages is in a nascent phase (e.g.~\newcite{kirkedal2019lacunae}). One problem is acquiring enough text with which to train e.g. large language models. Good quality language ID is critical to this data sourcing, though leading models often confuse similar Nordic languages. This paper presents a machine learning approach for automatic language identification between six closely related Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål), Faroese and Icelandic. The paper explores different ways of extracting features from a corpus of raw text data consisting of Wikipedia summaries in the respective languages and evaluates the performance of a selection of machine learning models.
Concretely, we compare the performance of classic machine learning models such as logistic regression, Naive Bayes, support vector machines, and K-nearest neighbors with more contemporary neural network approaches such as multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). After evaluating these models on the Wikipedia data set, we evaluate the best models on a data set from a different domain in order to investigate how well the models generalize when classifying sentences from another domain.
\section{Related Work}
The problem of discriminating between similar languages has been investigated in recent work \cite{DSLEvaluation,DSL2015}, which discusses the results from two editions of the ``Discriminating between Similar Languages (DSL) shared task''. Over the two editions of the DSL shared task, different teams competed to develop the best machine learning algorithms to discriminate between the languages in a corpus consisting of 20K sentences in each of the languages: Bosnian, Croatian, Serbian, Indonesian, Malaysian, Czech, Slovak, Brazilian Portuguese, European Portuguese, Argentine Spanish, Peninsular Spanish, Bulgarian and Macedonian.
\section{The Nordic DSL Dataset}
This section describes the construction of the Nordic DSL (Distinguishing Similar Languages) dataset. Data was scraped from Wikipedia. We downloaded summaries for randomly chosen Wikipedia articles in each of the languages, saved as raw text to six {\tt .txt} files of about 10MB each. While Bornholmsk would be a welcome addition~\cite{derczynski2019bornholmsk}, exhibiting some similarity to Faroese and Danish, there is not yet enough digital text. After the initial cleaning (described in the next section) the dataset contained just over 50K sentences in each of the language categories. From this raw dataset, two stratified datasets were drawn, with exactly 10K and 50K sentences per language respectively, so that each dataset contains the same number of sentences for each language. We split these datasets, reserving 80\% for the training set and 20\% for the test set.
\subsection{Data Cleaning}
This section describes how the dataset is initially cleaned and how sentences are extracted from the raw data.
\paragraph{Extracting Sentences} The first pass in sentence tokenisation is splitting by line breaks. We then extract shorter sentences with the sentence tokenizer ({\tt sent\_tokenize}) function from the NLTK~\cite{nltk} Python package. This does a better job than simply splitting on {\tt '.'}, since abbreviations, which can appear in a legitimate sentence, typically include a period symbol.
\paragraph{Cleaning characters} The initial dataset has many characters that do not belong to the alphabets of the languages we work with. Often the Wikipedia pages for people or places contain names in foreign languages. For example, a summary might contain Chinese or Russian characters, which are not strong signals for the purpose of discriminating between the target languages. Further, some characters in the target languages may be mis-encoded; these mis-encodings are also not likely to be intrinsically strong or stable signals. To simplify feature extraction, and to reduce the size of the vocabulary, the raw data is converted to lowercase and stripped of all characters which are not part of the standard alphabet of the six languages.
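A minimal sketch of this cleaning step (our reconstruction, using Python's {\tt re} module; the exact implementation is not given here):
\begin{verbatim}
import re

# Whitelist: the standard alphabets of the six languages, plus space.
ALPHABET = "abcdefghijklmnopqrstuvwxyzáäåæéíðóöøúýþ "

def clean(sentence):
    # Lowercase, then replace every character outside the
    # whitelist with white space.
    return re.sub("[^" + ALPHABET + "]", " ", sentence.lower())
\end{verbatim}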
In this way we only accept the following character set
\begin{verbatim}
'abcdefghijklmnopqr
stuvwxyzáäåæéíðóöøúýþ '
\end{verbatim}
and replace everything else with white space before continuing to extract the features. For example, the raw sentence
\begin{verbatim}
'Hesbjerg er dannet ved sammenlægning af de 2
gårde Store Hesbjerg og Lille Hesbjerg i 1822.'
\end{verbatim}
will be reduced to
\begin{verbatim}
'hesbjerg er dannet ved sammenlægning af de
gårde store hesbjerg og lille hesbjerg i '
\end{verbatim}
We thus make the assumption that capitalisation, numbers and characters outside this character set do not contribute much information relevant for language classification.
\section{Baselines}
\subsection{Baseline with langid.py}
As a baseline against which to compare the performance of our models, we use an off-the-shelf language identification system: ``langid.py: An Off-the-shelf Language Identification Tool''~\cite{langID}. {\tt langid.py} comes with a pretrained model which covers 97 languages. The training data for langid.py comes from five different domains: government documents, software documentation, newswire, online encyclopedia and an internet crawl. Features are selected for cross-domain stability using the LD heuristic~\cite{lui2011cross}. We evaluated how well langid.py performs on the Nordic DSL data set. It is a peculiar feature of the Norwegian language that there exist two different written languages but three different language codes. Since langid.py also returned the language id ``no'' (Norwegian) on some of the data points, we restrict langid.py to return only either ``nn'' (Nynorsk) or ``nb'' (Bokmål) as predictions.
\begin{figure}
\centering
\includegraphics[width = 200pt]{figs/langid.png}
\caption{Confusion matrix with results from langid.py on the full dataset, 300K instances.}
\label{langid_confusion_matrix}
\end{figure}
Figure~\ref{langid_confusion_matrix} shows the confusion matrix for the langid.py classifier. The largest confusions were between Danish and Bokmål, and between Faroese and Icelandic. We see that langid.py was able to correctly classify most of the Danish instances; however, approximately a quarter of the instances in Bokmål were incorrectly classified as Danish and just under an eighth were misclassified as Nynorsk. Furthermore, langid.py correctly classified most of the Icelandic data points; however, over half of the data points in Faroese were incorrectly classified as Icelandic.
\begin{table}
\centering
\begin{tabular}{ l | c | r }
\hline
Model & Encoding & Accuracy \\ \hline
KNN & CBOW & 0.780\\
Log-Reg & CBOW & 0.819\\
Naive Bayes & CBOW & 0.660\\
SVM & CBOW & 0.843\\
KNN & skip-gram & 0.918\\
Log-Reg & skip-gram & \textbf{0.929}\\
Naive Bayes & skip-gram & 0.840\\
SVM & skip-gram & \textbf{0.928}\\
KNN & char bi-gram & 0.745\\
Log-Reg & char bi-gram & 0.907\\
Naive Bayes & char bi-gram & 0.653\\
SVM & char bi-gram & 0.905\\
KNN & char uni-gram & 0.620\\
Log-Reg & char uni-gram & 0.755\\
Naive Bayes & char uni-gram & 0.614\\
SVM & char uni-gram & 0.707\\
\hline
\end{tabular}
\caption{Overview of results for the data set with 10K data points in each language.}
\label{baseline-results-10k}
\end{table}
\subsection{Baseline with linear models}
Table~\ref{baseline-results-10k} shows results for running the models on a data set with 10K sentences in each language category. We see that the models tend to perform better if we use character bi-grams instead of single characters.
Also, we see that logistic regression and support vector machines outperform Naive Bayes and K-nearest neighbors in all cases. Furthermore, for all models, we get the best performance if we use the skip-gram model from FastText. Comparing the CBOW model from FastText with character bi-grams, we see that CBOW is on par with bi-grams for the KNN and Naive Bayes classifiers, while bi-grams outperform CBOW for logistic regression and support vector machines.
\section{Our Approach}
\subsection{Using FastText}
The methods described above are quite simple. We also compared them with FastText, a library for creating word embeddings developed by Facebook~\cite{BagOfTricks}. \newcite{EnrichingWordVectors} explain how FastText extracts feature vectors from raw text data. FastText makes word embeddings using one of two model architectures: continuous bag of words (CBOW) or the continuous skip-gram model. The skip-gram and CBOW models were first proposed in~\cite{EfficientWordRepresentations}, the paper introducing the word2vec model for word embeddings. FastText builds upon this work by proposing an extension to the skip-gram model which takes into account sub-word information. Both models use a neural network to learn word embeddings from a context window consisting of the words surrounding the current target word. The CBOW architecture predicts the current word based on the context, and the skip-gram model predicts surrounding words given the current word~\cite{EfficientWordRepresentations}.
\begin{figure}
\centering
\includegraphics[width = 200pt]{figs/cnn_diagram}
\caption{Diagram of a convolutional neural network.}
\label{cnn}
\end{figure}
\subsection{Using A Convolutional Neural Network}
While every layer in a classic multilayer perceptron is densely connected, such that each node in a layer is connected to all nodes in the next layer, in a convolutional neural network we use one or more convolutional layers. Convolutional neural networks have an established use for text classification~\cite{textcnn_google}. The basic premise of a convolutional layer is illustrated in Figure~\ref{cnn}.\footnote{Source: \url{https://realpython.com/python-keras-text-classification/}} In a CNN a filter ``slides'' over the input. The CNN then takes e.g. the dot product of the weights of the filter and the corresponding input features, before applying a further function.
\section{Results}
\subsection{Results with neural networks}
Results for the neural network architectures are in Table~\ref{keras-results}. Here we compare the results of using character-level uni- and bi-grams with a multilayer perceptron and with a convolutional neural network. We see that the CNN performs best, achieving an accuracy of 95.6\% when using character bi-grams. Both models perform better using bi-grams rather than individual characters as features, and the relative increase in performance is greater for the MLP model.
\subsection{Increasing the size of the data set}
Often the performance of supervised classification models increases with more training data. To measure this effect we increase the amount of training data to 50K sentences in each of the language categories. Due to longer training times, only the baseline models were included, with the skip-gram encoding from FastText, which, as we saw, achieved the highest accuracy.
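As an illustration of this pipeline, a minimal sketch (our reconstruction, assuming the {\tt fasttext} and {\tt scikit-learn} Python packages, a hypothetical file {\tt train.txt} of raw sentences, and already-prepared lists of sentences and labels) might look like:
\begin{verbatim}
import fasttext
from sklearn.linear_model import LogisticRegression

# Learn unsupervised skip-gram embeddings from raw text
# (one sentence per line in the hypothetical file train.txt).
ft = fasttext.train_unsupervised("train.txt",
                                 model="skipgram", dim=100)

# Encode each sentence as its FastText sentence vector.
X_train = [ft.get_sentence_vector(s) for s in train_sentences]
X_test = [ft.get_sentence_vector(s) for s in test_sentences]

# Fit one of the baseline classifiers on the embedded sentences.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
\end{verbatim}
Swapping {\tt model="skipgram"} for {\tt model="cbow"} gives the CBOW encoding used in the comparison above.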
\begin{table}
\centering
\begin{tabular}{ l c | r }
\hline
Model & Encoding & Accuracy \\ \hline
KNN & skip-gram & 0.931\\
Logistic Regression & skip-gram & \textbf{0.933}\\
Naive Bayes & skip-gram & 0.806\\
SVM & skip-gram & 0.925\\
\hline
\end{tabular}
\caption{Overview of results for the baseline models on the data set with 50K data points in each language.}
\label{results-sklearn300k}
\end{table}
\begin{table}
\centering
\begin{tabular}{ l c | r }
\hline
Model & Encoding & Accuracy \\ \hline
MLP & char bi-gram & 0.918\\
CNN & char bi-gram & \textbf{0.970}\\
\hline
\end{tabular}
\caption{Overview of results for the neural network models on the data set with 50K data points in each language.}
\label{results-keras-300k}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs/confusion_CNN}
\caption{Confusion matrix with results from the convolutional neural network on the full data set with 50K data points in each language.}
\label{confusion_matrix-big-cnn}
\end{figure}
Table~\ref{results-sklearn300k} shows that the performance of the logistic regression model and the K-nearest neighbors algorithm improved slightly by including more data. Unexpectedly, the performance of the support vector machine and Naive Bayes dropped slightly with the extra data. Even when including five times the amount of data, the best result, logistic regression with an accuracy of 93.3\%, is still worse than the convolutional neural network trained on 10K data points in each language. In Table~\ref{results-keras-300k} we see the results of running the neural networks on the larger data set. Both models improve with the increased amount of data, and the convolutional neural network reaches an accuracy of 97\%, the best so far. In Figure~\ref{confusion_matrix-big-cnn} we see the confusion matrix for the convolutional neural network trained on the full Wikipedia data set with 50K data points per language. We see that the largest classification errors still happen between Danish, Bokmål and Nynorsk, as well as between Icelandic and Faroese.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs/fasttext_supervised}
\caption{Confusion matrix with results from a supervised FastText model on the full data set with 300K data points.}
\label{fasttext_supervised}
\end{figure}
\subsection{Using FastText supervised}
FastText can also be used for supervised classification. In~\newcite{BagOfTricks} the authors show that FastText can obtain performance on par with methods inspired by deep learning, while being much faster, on a selection of tasks such as tag prediction and sentiment analysis. We apply FastText classification to the Nordic DSL task. The confusion matrix from running the FastText supervised classifier can be seen in Figure~\ref{fasttext_supervised}. We see that FastText's performance is similar to that of the CNN.
\subsection{Cross-domain evaluation}
Training on single-domain data can lead to classifiers that only work well on that single domain. To see how the two best performing models generalize, we tested them on a non-Wikipedia data set. For this, we used Tatoeba,\footnote{{\tt tatoeba.org/}} a large database of user-provided sentences and translations. The language style used in the Tatoeba data set is different from the language used in Wikipedia; the Tatoeba data set mainly consists of sentences written in everyday language. Below we see some examples from the Danish part of the Tatoeba data set.
\begin{quote}
Hvordan har du det? (How are you?)

På trods af al sin rigdom og berømmelse, er han ulykkelig.
(Despite all his riches and renown, he is unhappy.)

Vi fløj over Atlanterhavet. (We flew over the Atlantic Ocean.)

Jeg kan ikke lide æg. (I don't like eggs.)

Folk som ikke synes at latin er det smukkeste sprog, har intet forstået. (People who don't think Latin is the most beautiful language have understood nothing.)
\end{quote}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[scale=0.5]{figs/tatoebasentprlang}
\caption{Distribution of the number of sentences in each language in the Tatoeba data set.}
\label{tatoebasentprlang}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[scale=0.5]{figs/taboeba-faliurelengthdist}
\caption{Distribution of the length of sentences in the Tatoeba data set.}
\label{tatoebalengths}
\end{subfigure}
\caption{Distribution of the lengths and language classes of Tatoeba sentences.}
\end{figure}
In Figure~\ref{tatoebasentprlang} we see the number of sentences in each language for all sentences in the Tatoeba data set. Observe that we have very few samples in Nynorsk and Faroese. We see that the performance drops when shifting to the Tatoeba data. For reference, the accuracy of langid.py on this data set is 80.9\%; FastText actually performs worse than this baseline, with an accuracy of 75.5\%, while the CNN beats the baseline with an accuracy of 83.8\%.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs/fasttextcharngram}
\caption{Confusion matrix for FastText trained using only character-level n-grams on the Wikipedia data set and evaluated on the Tatoeba data set.}
\label{fasttextcharngram}
\end{figure}
One explanation for the drop in performance is that the sentences in the Tatoeba data are significantly shorter than the sentences in the Wikipedia data set, as seen in Figure~\ref{tatoebalengths}. As we saw in the previous section, both models tend to mis-classify shorter sentences more often than longer sentences. This, together with the fact that the text genre is different, might explain why the models trained on the Wikipedia data set do not generalise to the Tatoeba data set without a drop in performance. The CNN uses character bi-grams as features while, with the standard settings, FastText uses only individual words to train. The better performance of the CNN might indicate that character-level n-grams are more useful features for language identification than words alone. To test this we changed the settings of FastText to train using only character-level n-grams in the range 1--5 instead of individual words. In Figure~\ref{fasttextcharngram} we see the confusion matrix for this version of the FastText model. This version still achieved 97.8\% on the Wikipedia test set while improving the accuracy on the Tatoeba data set from 75.5\% to 85.8\%, a substantial increase. Thus, using character-level features seems to improve the FastText models' ability to generalize to sentences belonging to a domain different from the one they were trained on.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs/retrain}
\caption{Results for FastText trained with
character n-grams on the combined Wikipedia and Tatoeba data and evaluated on Tatoeba.}
\label{retrain-confuss}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs/faliurelengthdist_tatoeba}
\caption{Distribution of sentence lengths in the Tatoeba test set, along with those of the mis-classified sentences.}
\label{retrain-lengths}
\end{figure}
\subsection{Retraining on the combined data set}
To improve the accuracy on the Tatoeba data set, we retrained the FastText model on a combined data set consisting of data points from both the Wikipedia and Tatoeba data sets. The FastText model achieved an accuracy of 97.2\% on this combined data set and an accuracy of 93.2\% when evaluated on the Tatoeba test set alone; the confusion matrix can be seen in Figure~\ref{retrain-confuss}. As was the case with the Wikipedia data set, the mis-classified sentences tend to be shorter than the average sentence in the data set. In Figure~\ref{retrain-lengths} we see the distribution of sentence lengths for the Tatoeba test set along with that of the mis-classified sentences. In the Tatoeba test set the mean length of sentences is 37.66 characters with a standard deviation of 17.91, while the mean length is only 29.70 characters for the mis-classified sentences, with a standard deviation of 9.65. This again supports the conclusion that shorter sentences are harder to classify.
\section{Analysis}
\subsection{Principal Component Analysis and t-SNE}
To gain additional insight into how the different word embeddings capture important information about each of the language classes, we visualized the embeddings using two different techniques for dimensionality reduction: Principal Component Analysis (PCA) and T-distributed Stochastic Neighbor Embedding (t-SNE). We begin with a brief explanation of the two techniques and proceed with an analysis of the results.
\paragraph{Principal Component Analysis} The first step is to calculate the covariance matrix of the dataset. The components of the covariance matrix are given by
\begin{equation}
K_{X_i,X_j} = E[(X_i - \mu_i )(X_j - \mu_j)]
\end{equation}
where $X_{i}$ is the $i$th component of the feature vector and $\mu_{i}$ is the mean of that component. In matrix form we can thus write the covariance matrix as
\begin{equation}
K(\mathbf{x},\mathbf{z}) =
\begin{bmatrix}
\text{cov}(x_1,z_1) & \dots & \text{cov}(x_1,z_n) \\
\vdots & \ddots & \vdots \\
\text{cov}(x_n,z_1) & \dots & \text{cov}(x_n,z_n) \\
\end{bmatrix}
\end{equation}
The next step is to calculate the eigenvectors and eigenvalues of the covariance matrix by solving the eigenvalue equation
\begin{equation}
\det (K - \lambda I) = 0 .
\end{equation}
The eigenvalues are the variances along the directions of the eigenvectors, or ``Principal Components''. To project our data set onto 2D space, we select the two eigenvectors with the largest associated eigenvalues and project our data set onto this subspace. In Figure~\ref{pca} we see the result of running PCA on the Wikipedia data set, where we have used character-level bi-grams as features, as well as the CBOW and skip-gram models from FastText.
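A minimal sketch of this procedure (our illustration, using {\tt numpy}; {\tt X} is assumed to be a matrix of feature vectors, one row per sentence):
\begin{verbatim}
import numpy as np

def pca_2d(X):
    # Center each feature, then form the covariance matrix.
    Xc = X - X.mean(axis=0)
    K = np.cov(Xc, rowvar=False)
    # eigh handles the symmetric matrix K and returns
    # eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(K)
    # Keep the two eigenvectors with the largest eigenvalues.
    top2 = eigvecs[:, np.argsort(eigvals)[-2:]]
    # Project onto the 2D subspace they span.
    return Xc @ top2
\end{verbatim}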
\begin{figure}
\centering
\begin{subfigure}[b]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/pcachar2}
\caption{Character bi-gram}
\end{subfigure}
~
\begin{subfigure}[b]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/pcacbow1}
\caption{FastText CBOW}
\end{subfigure}
~
\begin{subfigure}[b]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/pcaskipgram1}
\caption{FastText skip-gram}
\end{subfigure}
\caption{Dimensionality reduction using PCA}
\label{pca}
\end{figure}
In the figure for the encoding with character-level bi-grams, the PCA algorithm resulted in two elongated clusters. Without being given any prior information about the language of each sentence, PCA is apparently able to discriminate between Danish, Swedish, Nynorsk and Bokmål on one side, and Faroese and Icelandic on the other, since the majority of the sentences in each language belong to one of these two clusters.

With the FastText implementations we observe three clusters. For both CBOW and skip-gram we see a distinct cluster of Swedish sentences. When comparing the two FastText models, the projection of the skip-gram features seems to separate the Faroese and Icelandic data points to a higher degree than that of the CBOW model. Also, for the cluster identified with the Danish, Bokmål and Nynorsk sentences, the skip-gram model seems to give a better separation, though to a lesser degree than for the two former languages.
\paragraph{t-SNE} The T-distributed Stochastic Neighbor Embedding method was first proposed in 2008~\cite{tsne}; it favours retaining local spatial relationships over remote ones. In t-SNE, for a given data point $x_i$, the probability of picking another data point $x_j$ as a neighbor to $x_i$ is given by:
\begin{equation}
p_{ji}= \frac{\exp (-|| x_i - x_j ||^2/2\sigma_i^2 )}{\sum_{k\neq i} \exp (-|| x_i - x_k ||^2/2\sigma_i^2 )}
\end{equation}
Given this probability distribution, the goal is to find a low-dimensional mapping $y_i$ of the data points $x_i$ that follows a similar distribution. To solve what is referred to as the ``crowding problem'', t-SNE uses the Student t-distribution, which gives:
\begin{equation}
q_{ij}= \frac{ (1+|| y_i - y_j ||^2 )^{-1}}{\sum_{k\neq l} (1+|| y_k - y_l ||^2 )^{-1}}
\end{equation}
The mapping is optimized using gradient descent on the Kullback-Leibler divergence $C$ between the two distributions, whose gradient is given by:
\begin{equation}
\frac{\delta C}{\delta y_i}= 4 \sum_j (p_{ij} - q_{ij})(y_i-y_j)(1+ || y_i - y_j ||^2 )^{-1}
\end{equation}
The result of running the t-SNE algorithm on the Wikipedia data set can be seen in Figure~\ref{tsne}. As was the case with PCA, the encoding with FastText seems to capture the most relevant information for discriminating between the languages; especially the skip-gram mode does well at capturing information relevant to this task.
\begin{figure}
\centering
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{figs/tsnechar2}
\caption{Character bi-gram}
\end{subfigure}
~
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{figs/tsnecbow1}
\caption{FastText CBOW}
\end{subfigure}
~
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{figs/tsneskipgram1}
\caption{FastText skip-gram}
\end{subfigure}
\caption{Dimensionality reduction using t-SNE}
\label{tsne}
\end{figure}
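In practice one rarely implements t-SNE by hand; a minimal sketch using {\tt scikit-learn} (our illustration, reusing the feature matrix {\tt X} from the PCA sketch above):
\begin{verbatim}
from sklearn.manifold import TSNE

# perplexity plays a role analogous to sigma_i above;
# 30 is a common default choice.
Y = TSNE(n_components=2, perplexity=30).fit_transform(X)
# Y has shape (n_samples, 2) and can be drawn as a
# scatter plot, colored by language.
\end{verbatim}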
Here we recover some interesting information about the similarity of the languages. The data points in Bokmål lie in between those in Danish and Nynorsk, while Icelandic and Faroese form their own two clusters, separated from the three former languages. This is in good agreement with what we already know about the languages. Interestingly, the Swedish data points are quite scattered, and t-SNE is not able to form a coherent Swedish cluster. This does not, however, mean that the Swedish data points are not close in the original space. Some care is needed when interpreting the plot, since t-SNE groups data points such that neighboring points in the input space tend to be neighbors in the low-dimensional space. If points are separated in the input space, t-SNE would like to separate them in the low-dimensional space, but it does not care how far apart they end up. So clusters that are far away from each other in the low-dimensional space are not necessarily far away from each other in the input space.
\subsection{Discussion}
We used the dimensionality reduction techniques PCA and t-SNE to visualize the feature vectors obtained from a one-hot encoding of character bi-grams and from the two FastText modes. These unsupervised techniques were able to separate the sentences from Wikipedia into different clusters. Without any prior knowledge about the actual language of each sentence, these techniques indicated that the six languages can be divided into three main language groups: (1) Danish, Nynorsk and Bokmål, (2) Faroese and Icelandic, and (3) Swedish. Generally, the supervised models had the largest errors when discriminating between languages belonging to the same group. For the ``classical'' models we saw that logistic regression and support vector machines achieved better performance than KNN and Naive Bayes, with the latter performing the worst. This was true in all cases, irrespective of the method of feature extraction. Additionally, we saw that the classification models achieved better results with feature vectors from the FastText skip-gram model than with either FastText CBOW or character n-grams. Generally, increasing the number of data points led to better performance. When comparing the CNN with the ``classical'' models, however, the CNN performed better than any of the other models even when trained on fewer data points. In this way it seems that the CNN achieves higher sample efficiency than the other models.
\section{Conclusion}
This paper presented research on the difficult task of distinguishing similar languages, applied for the first time to the Scandinavian context. We describe and release a dataset, and detail baseline approaches and problem analysis. The dataset and code are available at \url{https://github.com/renhaa/NordicDSL}. We compared four different classical models (K-nearest neighbors, logistic regression, Naive Bayes and a linear support vector machine) with two neural network architectures (a multilayer perceptron and a convolutional neural network). The two best performing models, FastText supervised and the CNN, saw a considerable drop in performance when going off-domain. Using character n-grams as features instead of words increased the performance of the FastText supervised classifier. Training FastText on the Tatoeba data set as well as the Wikipedia data set resulted in an additional increase in performance.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
Although Bose-Einstein condensation (BEC) has been an active topic of research in condensed matter physics for decades \cite{Bose24,Einstein24,Huang,Pathria}, the recent revival of interest in this field is mainly credited to achievements in atomic and molecular optics; recent laser cooling and evaporative cooling techniques in magnetic and optical atom traps enable us to realize BEC of a weakly interacting gas in a controlled way \cite{AEMWC95,DMADDKK95,MADKDK96}. The weakly interacting nature of the gas allows us to handle the phenomena and make theoretical predictions with high accuracy. The rapid progress in this field also stimulates other areas in physics such as high energy physics and astrophysics \cite{BEC95}. In atom trap experiments, since atoms are trapped in a finite geometry, finite size as well as finite number effects play a significant role in the condensation process. The conventional phase transition picture defined in the thermodynamic limit has to be reexamined or modified \cite{Krueger68,Sonin69,Osborne49,BagKle91,GroHol95b,KirTom96,GroHol95c,IngoldLambrecht98}. One significant change due to finiteness is the existence of BEC in low-dimensional systems. Recently, quasi low-dimensional systems prepared by optical or magnetic trapping devices have been actively discussed \cite{GHSSPM98,VKCC99}. The study of such systems provides an ideal test for the theory of finite size, low-dimensional systems in a controlled environment.

The critical behavior of a finite size system \cite{Fisher71,BarFis73,Barber83} is characterized by the effective infrared dimension (EIRD) of the system \cite{HuOco84,OSH88}: near the critical point, when the contribution of the lowest modes of the system dominates, its symmetry properties can be shown to be equivalent to those of a lower-dimensional system. The system, in such a case, is said to possess an EIRD. Varying the relative size (or shape) of the potential changes the infrared behavior and hence the EIRD of the system. The dimensionless parameters $\eta_i = \beta \hbar \omega_i$ ($\beta= 1 / k_{B} T$) for a harmonic oscillator potential with natural frequencies $\omega_i ~(i = 1,2,3)$ characterize the degree of anisotropy and finite size effects. With respect to $\eta_i$, we can classify the dynamical behavior of the system into the following four cases, dependent on the degree of anisotropy:
\begin{itemize}
\item Case 1: $\eta_1, \eta_2, \eta_3 > 1 \rightarrow$ EIRD = 0;
\item Case 2: $\eta_1, \eta_2 > 1 > \eta_3 \rightarrow$ EIRD = 1;
\item Case 3: $\eta_1 > 1 > \eta_2, \eta_3 \rightarrow$ EIRD = 2;
\item Case 4: $1 > \eta_1, \eta_2, \eta_3 \rightarrow$ EIRD = 3.
\end{itemize}
As the temperature is lowered, a dynamical dimensional reduction of the system, characterized by the decrease of the effective dimension, can be observed. In particular, in the presence of maximal anisotropy $ 1 >> \eta_1 >> \eta_2 >> \eta_3$, the EIRD decreases one by one from three to zero at the temperatures $k_{B} T = \hbar \omega_1, \hbar \omega_2$, and $\hbar \omega_3$. While the picture described above is generic to any quantum system, for particles that obey Bose statistics the dimension of the dynamics of excited particles can also change in the form of condensation. BEC can also occur in separate steps in the presence of anisotropy, reducing the effective dimension attributed to the excited modes of the system by one, two, or three at a time \cite{Sonin69}.
In this sense, the dimensional crossover associated with both the multistep condensation and the reduction of the effective dimension due to frozen degrees of freedom can manifest itself in a similar way in a finite system, in spite of the different origins of the two effects. This is the main subject discussed in this paper.

For a system with a finite size and number of atoms, the reduced chemical potential $\phi \equiv \beta (E_0 - \mu)$ does not strictly vanish at the critical temperature. Only in the thermodynamic limit does $\phi$ vanish at the critical temperature, with the specific heat developing a discontinuity in its derivative at the critical point \cite{BarFis73}. In an isotropic system, the thermodynamic limit can be uniquely defined, as discussed in many textbooks \cite{Pathria}. However, if we allow anisotropy in the system, there are three different ways of taking a thermodynamic limit:
\begin{itemize}
\item (Three-dimensional limit) $\omega_1, \omega_2, \omega_3 \rightarrow 0$ while keeping $N \omega_1 \omega_2 \omega_3$ fixed. In this case, the system shows three-dimensional critical behavior and the corresponding three-dimensional critical temperature $T_{3D}$ can be defined.
\item (Two-dimensional limit) $\omega_2, \omega_3 \rightarrow 0$ while $N \omega_2 \omega_3$ is kept fixed. The system shows two-dimensional critical behavior and the corresponding critical temperature $T_{2D}$ is the one for the two-dimensional system.
\item (One-dimensional limit) If we simply take $\omega_3 \rightarrow 0$ while $N \omega_3$ is kept fixed, the critical temperature vanishes. However, if we tune $\omega_3$ a little more slowly, such that $\omega_3 \sim \log(2N) / N \rightarrow 0$, the one-dimensional critical temperature $T_{1D}$ can still be defined in this modified sense. We will discuss this issue again in Sec. 3.1.
\end{itemize}
In the presence of strong anisotropy, the whole particle spectrum naturally splits into zero-, one-, two-, and three-dimensional excitations. The ground state is viewed as a zero-dimensional excitation. Let us denote the number of particles excited in the corresponding dimensions as $N_0$, $N_1$, $N_2$, and $N_3$, respectively. An $n$-dimensional condensation temperature $T_{nD}$ ($n=1,2,3$) can be defined as the temperature at which all the $n$-dimensionally excited modes are saturated:
\begin{eqnarray}
\mbox{3-dimensional;} ~N&=&N_3(T_{3D}), \label{def3d} \\
\mbox{2-dimensional;} ~N&=&N_3(T_{2D}) + N_2(T_{2D}), \label{def2d} \\
\mbox{1-dimensional;} ~N&=&N_3(T_{1D}) + N_2(T_{1D}) + N_1(T_{1D}). \label{def1d}
\end{eqnarray}
One can see that the condensation temperatures defined above are equivalent to the critical temperatures if the appropriate $n$-dimensional thermodynamic limit is taken. This splitting of the excitation spectrum gives the basis of the rest of our analysis. A similar splitting was proposed in \cite{Sonin69} for liquid helium. In a liquid, however, the occupation number of particles excited in a particular direction is extremely difficult to observe. Furthermore, the validity of this splitting is by no means obvious for a strongly interacting system such as liquid helium. On the other hand, such a quantity is directly observable in atom trap experiments and therefore deserves careful study. Moreover, as shown in Section 3, the occupation numbers $N_1$, $N_2$, and $N_3$ behave as if they were independent quantities and show similar behavior when the multistep crossover occurs. This result indicates the independent nature of each $N_i$ for strongly anisotropic systems.
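As a rough numerical illustration of these definitions, a minimal sketch (our own, in the paper's units $\hbar = k_B = 1$ with $\omega_3 = 1$) evaluates the bulk condensation temperatures from the closed forms derived in Sec. 3.1 below, for the prolate-trap parameters used later in Figure 4 ($N = 10^4$, $k_3 = 10^3$):
\begin{verbatim}
import math

# Trap parameters as in Figure 4: N = 1e4 atoms, prolate trap with
# omega_1 = omega_2 = k_3 * omega_3, k_3 = 1e3 (hbar = k_B = omega_3 = 1).
N = 1.0e4
w3 = 1.0
w1 = w2 = 1.0e3 * w3

zeta2 = math.pi**2 / 6          # Riemann zeta(2)
zeta3 = 1.2020569031595943      # Riemann zeta(3), Apery's constant

T3 = (N * w1 * w2 * w3 / zeta3) ** (1.0 / 3.0)  # Eq. (T3d2)
T2 = (N * w2 * w3 / zeta2) ** 0.5               # Eq. (T2d2)
T1 = N * w3 / math.log(2 * N)                   # Eq. (f101b)

print(T1, T2, T3)
# roughly 1.0e3, 2.5e3, 2.0e3: T_1D < T_3D < T_2D, so only the
# one-dimensional step separates from the 3D condensation here
# (two-step condensation), with no separate two-dimensional step.
\end{verbatim}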
For a realistic system, where $N$ and $\omega_1, \omega_2, \omega_3$ are all finite and fixed, the physically observable temperature of interest is the crossover temperature at which the deviation from the bulk critical behavior sets in. The crossover temperature is reached when the correlation length reaches the size of the system, since further ordering in this direction will be suppressed at this point. In a strongly anisotropic system in which $\omega_1, \omega_2 >> \omega_3$ holds, $T_{1D}, T_{2D} << T_{3D}$ gives the necessary (but not sufficient) condition for multistep condensation: the condensation into two- and one-dimensional modes and into the ground state can occur in separate steps. Multistep condensation was discussed for a nonrelativistic ideal gas in a cavity \cite{Sonin69}, in a harmonic trap \cite{DruKet97}, and for a relativistic ideal gas in a cavity \cite{HuShi98}.

In Section 2, we study the excitation spectrum of anisotropic harmonic oscillators. We focus our attention on three different cases, in which the equipotential surface has a prolate, oblate, or maximally anisotropic ellipsoidal shape. In Section 3, we show that each case exhibits qualitatively different condensation behavior. After introducing the condensation temperatures defined in the bulk limit, we focus on the multistep crossover behavior of excited modes between different effective dimensions, through BEC or through the dynamical reduction of the EIRD. In particular, in a maximally anisotropic potential, the dimensional reduction can occur in three steps. In such a case, we show that each dimensional component behaves in a similar manner as a function of the reduced temperature defined near the corresponding crossover temperature. The possible effect of interactions is also discussed. This work deals with a nonrelativistic ideal Bose gas in anisotropic magnetic traps; a companion paper deals with a relativistic ideal Bose gas in rectangular cavities \cite{HuShi98}. Our calculations are focused on the occupation number and condensation temperature for each dimension.

The effect of interactions on the condensation process has been discussed extensively in the literature. In principle, interactions can affect the dynamics of condensation considerably. For a weakly interacting gas, however, averaged quantities such as condensation fractions and critical temperatures are relatively insensitive to the presence of interactions, and the corrections to the bulk ideal-gas values are well explained by the finite number correction \cite{BPK87,EJMWC96}. On the other hand, interaction effects are known to affect higher moments such as the specific heat significantly, and are considered essential to explain the observed specific heat data. Throughout the rest of the paper, we use units such that $k_B = \hbar = 1$ for brevity. The results in ordinary units can easily be reproduced by replacing $\omega \rightarrow \hbar \omega$ and $T \rightarrow k_B T$.

\section{Anisotropic Harmonic Oscillator and Excitation Spectrum}
For an anisotropic harmonic oscillator with oscillator frequencies $\omega_{i}~(i=1,2,3)$, the Hamiltonian has the form
\begin{equation}
H = \frac{1}{2} \sum_{i=1}^{3} (p_i^2 + \omega_i^2 x_i^2) .
\label{f0}
\end{equation}
In this paper, we study cases where the frequencies $\omega_{i}$ are rationally related.\footnote{This assumption of rationality is made for technical convenience and will not affect the physical results of this paper.
Other methods can be found, for example, in \cite{KirTom96}, \cite{DruKet97}, and \cite{HHR97}.} Whence there exist integers $k_{i} ~(i=1,2,3)$ such that $\omega_{i} k_{i} = \omega ~(i=1,2,3)$ where $ \omega = \Omega (k_1 k_2 k_3)^{1/3}$ and $ \omega_1 \omega_2 \omega_3 = \Omega^3$ . The energy level of an anisotropic harmonic oscillator is given by \begin{equation} E_{n} = \sum_{i=1}^{3} \omega_{i} (n_i + 1/2) . \label{f1} \end{equation} We can also define the energy level modulo $k_i$ as $ n_i = k_i \nu_i + \lambda_i $ where $ \nu_i \equiv [ n_i / k_i ]$, and $[~]$ denotes the integer part of the number inside the bracket. Then Eq. (\ref{f1}) can be written as \begin{equation} E_{n} = \omega M + \sum_{i=1}^{3} \omega_i \lambda_i + E_0, \label{f2} \end{equation} where $M = \nu_1 + \nu_2 + \nu_3$. The first term in Eq. (\ref{f2}) corresponds to the isotropic harmonic oscillator Hamiltonian (see also Appendix A). The ground state energy $E_0$ has the familiar form \begin{equation} E_0 = \frac{ \omega_1 + \omega_2 + \omega_3 }{ 2 } . \label{c3f7} \end{equation} \subsection{Excitation spectrum} \subsubsection{Prolate shape potential} First we discuss the case of anisotropy corresponding to $\omega_1 = \omega_2 > \omega_3$ (we simply choose here $k_1 = k_2 = 1 < k_3$). In such a case, the equipotential surface has a prolate shape. For a strong anisotropy $k_3 >> 1$, two-step condensation can occur. In such a case, $\omega = \omega_1 = \omega_2$ and the energy eigenvalue is \begin{equation} E_n = \omega M + \omega_3 \lambda_3 + E_0 . \label{c3f} \end{equation} For sufficiently large values of $k_3$, the whole energy spectrum can be split into the energy level of the ground state ( $\bar{E}_{n} = 0$), one-dimensionally excited states ( $\bar{E}_{n} = n_3 \omega_3 $; $n_3 = 1, 2, \cdots$), two-dimensionally excited states ( $\bar{E}_{n} = n_1 \omega_1 + n_3 \omega_3 $; $n_1 = 1, 2, \cdots, n_3 = 0,1,\cdots$ and $n_2 \omega_2 + n_3 \omega_3 $, $n_2 = 1, 2, \cdots, n_3 = 0,1,\cdots$), and three-dimensionally excited states ( $\bar{E}_{n} = n_1 \omega_1 + n_2 \omega_2 + n_3 \omega_3 $; $n_1, n_2 = 1, 2, \cdots, n_3 = 0, 1, \cdots$ ), where $\bar{E}_n \equiv E_n - E_0 = \omega M + \omega_3 \lambda_3 $ is the energy measured from the ground state. The number of particles excited in these dimensions are given respectively by \begin{eqnarray} N_0 &=& \frac{z}{1-z}, \\ \label{f90} N_1 &=& \sum_{n_3=1}^{\infty} \frac{z}{e^{ n_3 \eta_3 } - z}, \label{f91} \\ N_2 &=& \sum_{n_1=1,n_3=0}^{\infty} \frac{2 z}{e^{ n_1 \eta_1 + n_3 \eta_3 } - z} \nonumber \\ &=& \sum_{\lambda_3=0}^{k_3 - 1} \sum_{M=1}^{\infty} \frac{2 M z}{e^{ M \eta_1 + \lambda_3 \eta_3 } - z}, \label{f92} \\ N_3 &=& \sum_{n_1=n_2=1,n_3=0}^{\infty} \frac{z}{ e^{ n_1 \eta_1 + n_2 \eta_2 + n_3 \eta_3 } - z} \nonumber \\ &=& \sum_{\lambda_3=0}^{k_3 - 1} \sum_{M=2}^{\infty} \frac{(M-1)M}{2} \frac{z}{ e^{ M \eta_1 + \lambda_3 \eta_3 } - z}, \label{f93} \end{eqnarray} where $z \equiv e^{\beta(\mu - E_0)}$ is the (reduced) fugacity. The factor $2$ in Eq. (\ref{f92}) accounts for the symmetry between the first and second axis. These expressions can be further simplified in the following manner. 
For one dimension, \begin{eqnarray} N_1 &=& \sum_{n_3=1}^{\infty} \frac{ z e^{ - n_3 \eta_3 } }{ 1 - z e^{ - n_3 \eta_3 } } = \sum_{l=1}^{\infty} \frac{ z^l e^{ - l \eta_3 } }{ 1 - e^{ - l \eta_3 } } \\ &=& \frac{ g_1(z e^{ - \eta_3 / 2 } ) }{ \eta_3 } + \cdots, \label{f10} \end{eqnarray} where $ g_p(z) = \sum_{ n=1}^{\infty} \frac{z^n}{n^p} $ is the Bose-Einstein function. We used Eq. (\ref{App0}) to obtain the second line from the first line. For two-dimensional excitations, making use of Eq. (\ref{App2}), Eq. (\ref{f92}) can be written as \begin{eqnarray} N_2 &=& 2 \sum_{\lambda_3=0}^{k_3 - 1} \sum_{M=1}^{\infty} \sum_{l=1}^{\infty} M z^l e^{-l (M \eta_1 + \lambda_3 \eta_3) } \nonumber \\ &=& 2 \sum_{l=1}^{\infty} \frac{ z^l e^{-l \eta_1}} { ( 1 - e^{-l \eta_1} ) ( 1 - e^{-l \eta_3} ) } \nonumber \\ &=& 2 \sum_{l=1}^{\infty} \frac{ z^l e^{- l (\eta_1 - \eta_3) / 2 }} { l^2 \eta_1 \eta_3 } - ( k_3 + \frac{1}{k_3} ) \sum_{l=1}^{\infty} \frac{ z^l e^{ - l (\eta_1 - \eta_3) / 2 }} { 12 } + \cdots \nonumber \\ &=& \frac{ 2 g_2( z e^{- (\eta_1 - \eta_3) / 2 } )}{ \eta_1 \eta_3 } - \frac{ k_3 g_0( z e^{- (\eta_1 - \eta_3) / 2 } )}{ 12 } + \cdots. \label{f15} \end{eqnarray} To obtain the third line from the second line, we used Eq. (\ref{App3}). For three dimensional excitations, Eq. (\ref{f93}) gives \begin{eqnarray} N_3 &=& \sum_{\lambda_3=0}^{k_3 - 1} \sum_{M=2}^{\infty} \sum_{l=1}^{\infty} \frac{(M-1)M}{2} z^l e^{-l (M \eta_1 + \lambda_3 \eta_3) } \nonumber \\ &=& \sum_{l=1}^{\infty} z^l \frac{ e^{-2 l \eta_1} } { ( 1 - e^{-l \eta_1} )^2 (1 - e^{-l \eta_3} ) } \nonumber \\ &=& \sum_{l=1}^{\infty} \frac{ z^l e^{- l (\eta_1 - \eta_3 / 2) }} { l^3 \eta_1^{~2} \eta_3 } - \sum_{l=1}^{\infty} \frac{ z^l e^{- l (\eta_1 - \eta_3 / 2) }} { 12 l \eta_3 } + \cdots \nonumber \\ &=& \frac{ g_3( z e^{- (\eta_1 - \eta_3 / 2) } ) }{ \eta_1^{~2} \eta_3 } - \frac{ g_1( z e^{- (\eta_1 - \eta_3 / 2) } ) }{ 12 \eta_3 } + \cdots. \label{300} \end{eqnarray} \subsubsection{Oblate shape potential} Next we discuss the case of anisotropy corresponding to an oblate shape potential $\omega_1 > \omega_2 = \omega_3$ ( $k_1 = 1 < k_2 = k_3$). In this case, $\omega = \omega_1$ and \begin{equation} \bar{E}_n =\omega M + \omega_2 (\lambda_2 + \lambda_3). \label{Oc3f} \end{equation} The number of particles excited in these dimensions are given respectively by \begin{eqnarray} N_0 &=& \frac{z}{1-z}, \\ \label{Of90} N_1 &=& \sum_{n_2=1}^{\infty} \frac{2 z}{e^{ n_2 \eta_2 } - z}, \label{Of91} \\ N_2 &=& \sum_{n_2=1,n_3=1}^{\infty} \frac{z}{e^{ n_2 \eta_2 + n_3 \eta_3 } - z} \nonumber \\ &=& \sum_{M=2}^{\infty} \frac{(M - 1) z}{e^{ M \eta_2 } - z}, \label{Of92} \\ N_3 &=& \sum_{n_1=1,n_2=0,n_3=0}^{\infty} \frac{z}{ e^{ n_1 \eta_1 + n_2 \eta_2 + n_3 \eta_3 } - z} \nonumber \\ &=& \sum_{n_1=1}^{\infty} \sum_{M=0}^{\infty} \frac{(M+1) z}{ e^{ n_1 \eta_1 + M \eta_2 } - z}, \label{Of93} \end{eqnarray} where the factor $2$ in Eq. (\ref{Of91}) is due to the symmetry between the second and third axis. Compared to Eq. (\ref{f10}), we obtain \begin{eqnarray} N_1 &=& \frac{ 2 g_1(z e^{ - \eta_2 / 2 } ) }{ \eta_2 } + \cdots \end{eqnarray} in the present case. The number of particles excited two-dimensionally on $x_2-x_3$ plane can be written as \begin{eqnarray} N_2 &=& \sum_{l=1}^{\infty} \sum_{M=2}^{\infty} (M-1) z^l e^{-l M \eta_2} \nonumber \\ &=& \sum_{l=1}^{\infty} \frac{ z^l e^{-2 l \eta_2}} { ( 1 - e^{-l \eta_2} )^2 } \nonumber \\ &=& \frac{ g_2( z e^{- \eta_2}) }{ \eta_2^{~2} } + \cdots. 
\label{Of15} \end{eqnarray} For three dimensional excitations, Eq. (\ref{Of93}) gives \begin{eqnarray} N_3 &=& \sum_{l=1}^{\infty} z^l \frac{ e^{- l \eta_1} } { ( 1 - e^{-l \eta_1} ) (1 - e^{-l \eta_2} )^2 } \nonumber \\ &=& \frac{ g_3( z e^{- ( \eta_1/2 - \eta_2) } ) } { \eta_1 \eta_2^{~2} } - \frac{ g_1( z e^{- ( \eta_1/2 - \eta_2) } ) }{ 24 } ( \frac{ \eta_1^{~2} + 2 \eta_2^{~2} } { \eta_1 \eta_2^{~2} } ) + \cdots. \label{O300} \end{eqnarray} \subsubsection{Maximally anisotropic potential} For anisotropies $\omega_1 > \omega_2 > \omega_3$ with $k_1 =1 << k_2 << k_3$, $\omega = \omega_1$ and \begin{equation} \bar{E}_n = \omega M + \omega_2 \lambda_2 + \omega_3 \lambda_3 . \label{c3f61} \end{equation} The number of excited modes in the corresponding dimensions can be defined by \begin{eqnarray} N_1 &=& \sum_{n_3=1}^{\infty} \frac{z}{e^{ n_3 \eta_3 } - z}, \\ \label{f910} N_2 &=& \sum_{n_2=1, n_3=0}^{\infty} \frac{z}{e^{ n_2 \eta_2 + n_3 \eta_3 } - z}, \label{f920} \\ N_3 &=& \sum_{n_1=1,n_2=n_3=0}^{\infty} \frac{z}{ e^{ n_1 \eta_1 + n_2 \eta_2 + n_3 \eta_3 } - z} \nonumber \\ &=& \sum_{\lambda_2=0}^{k_2 - 1} \sum_{\lambda_3=0}^{k_3 - 1} \sum_{M=1}^{\infty} \frac{M(M+1)}{2} \frac{z}{ e^{ M \eta_1 + \lambda_2 \eta_2 + \lambda_3 \eta_3 } - z}. \label{f930} \end{eqnarray} For the two dimensional case, following Eq. (\ref{f15}), \begin{eqnarray} N_2 &=& \sum_{\lambda_3=0}^{\kappa - 1} \sum_{M=1}^{\infty} \sum_{l=1}^{\infty} M z^l e^{-l (M \eta_2 + \lambda_3 \eta_3) } \nonumber \\ &=& \sum_{l=1}^{\infty} \frac{ z^l e^{-l \eta_2}} { ( 1 - e^{-l \eta_2} ) ( 1 - e^{-l \eta_3} ) } \nonumber \\ &=& \sum_{l=1}^{\infty} \frac{ z^l e^{- \eta_2 / 2 }} { l^2 \eta_2 \eta_3 } - ( \kappa + \frac{1}{\kappa} ) \sum_{l=1}^{\infty} \frac{ z^l e^{ - \eta_2 / 2 }} { 24 } + \cdots \nonumber \\ &=& \frac{ g_2( z e^{- \eta_2 / 2 } )}{ \eta_2 \eta_3 } - \frac{ \kappa g_0( z e^{- \eta_2 / 2 } )}{ 24 } + \cdots, \label{f150} \end{eqnarray} where $\kappa \equiv k_3/k_2$. For three dimensional excitations, Eq. (\ref{f930}) gives \begin{eqnarray} N_3 &=& \sum_{l=1}^{\infty} z^l \frac{ e^{- l \eta_1} } { ( 1 - e^{-l \eta_1} ) ( 1 - e^{-l \eta_2} ) (1 - e^{-l \eta_3} ) } \nonumber \\ &=& \sum_{l=1}^{\infty} \frac{ z^l e^{- l ( \eta_1 - \eta_2 - \eta_3) / 2} } { l^3 \eta_1 \eta_2 \eta_3 } - \sum_{l=1}^{\infty} \frac{ z^l e^{- l ( \eta_1 - \eta_2 - \eta_3) / 2 } } { 24 l } ( \frac{ \eta_1^{~2} + \eta_2^{~2} + \eta_3^{~2} } { \eta_1 \eta_2 \eta_3 } ) + \cdots \nonumber \\ &=& \frac{ g_3( z e^{- ( \eta_1 - \eta_2 - \eta_3) / 2 } ) } { \eta_1 \eta_2 \eta_3 } - \frac{ g_1( z e^{- ( \eta_1 - \eta_2 - \eta_3) / 2} ) }{ 24 } ( \frac{ \eta_1^{~2} + \eta_2^{~2} + \eta_3^{~2} } { \eta_1 \eta_2 \eta_3 } ) + \cdots. \label{3002} \end{eqnarray} \section{Finite Size Effects and Dimensional Crossover Behavior} \subsection{Bulk behavior} The bulk three-dimensional condensation temperature is defined in the thermodynamic limit $\eta_i \rightarrow 0 ~(i=1,2,3)$ and $N \rightarrow \infty$ while $\eta_1 / \eta_3$ and $\eta_2 / \eta_3$ fixed. The dominant term is given by the first term in $N_3$ and the critical temperature satisfies \begin{eqnarray} N = \frac{ T_{3D}^{~3} }{ \omega_1 \omega_2 \omega_3 } \zeta(3). \label{T3d} \end{eqnarray} Therefore \begin{eqnarray} T_{3D} = \left( \frac{N \omega_1 \omega_2 \omega_3 }{ \zeta(3) } \right)^{1/3}. \label{T3d2} \end{eqnarray} Two-dimensional limit is given by $\eta_2,\eta_3 \rightarrow 0$, $N \rightarrow \infty$, but $\eta_1 >> 1$. 
In such a case, the dominant term in the particle number is $N_2$ and the critical temperature $T_{2D}$ is defined by
\begin{eqnarray}
N = \frac{T_{2D}^{~2}}{ \omega_2 \omega_3 } g_2(e^{ - (\omega_2 + \omega_3) / 2 T_{2D}} ) = \frac{T_{2D}^{~2}}{ \omega_2 \omega_3 } \zeta(2) + \cdots.
\label{T2d}
\end{eqnarray}
Thus we have
\begin{eqnarray}
T_{2D} = \left( \frac{N \omega_2 \omega_3}{ \zeta(2) } \right)^{1/2}.
\label{T2d2}
\end{eqnarray}
For the one-dimensional limit $\eta_3 \rightarrow 0$, $N \rightarrow \infty$, but $\eta_1,\eta_2 >> 1$, the dominant term in the particle number is $N_1$. The condensation temperature is defined by
\begin{eqnarray}
N = \frac{T_{1D}}{ \omega_3 } g_1(e^{ - \omega_3 / 2 T_{1D}} ).
\label{T1d}
\end{eqnarray}
To leading order in $\eta_3^{~-1}$, Eq. (\ref{f10}) can be approximated as $ N_1 \sim \frac{T}{ \omega_3 } \log \frac{2 T}{ \omega_3 } $. Thus the one-dimensional condensation temperature $T_{1D}$ is defined by
\begin{eqnarray}
N = \frac{T_{1D}}{ \omega_3 } \log \frac{2 T_{1D}}{ \omega_3 },
\label{f101a}
\end{eqnarray}
which gives
\begin{eqnarray}
T_{1D} = \frac{N \omega_3}{ \log(2N)}
\label{f101b}
\end{eqnarray}
for large $N$. Note that in the thermodynamic limit $N \rightarrow \infty$ with $N \omega_3$ kept fixed, $T_{1D}$ vanishes. In Figure 1, we plot $T_{1D}$, $T_{2D}$, and $T_{3D}$ as functions of the anisotropy parameter $k_3$. In an isotropic or weakly anisotropic case ($k_3 \sim 1$), $T_{3D} < T_{1D},T_{2D}$ and the condensation is directly into the ground state. As the anisotropy is increased ($k_3 >> 10^3$), $T_{1D}, T_{2D} < T_{3D}$ is achieved. This is the regime where various multistep behaviors can take place. In terms of the bulk condensation temperatures obtained above, $T_{1D} = N \omega_3 / \log(2 N) $, $T_{2D} = ( N \omega_2 \omega_3 / \zeta(2) )^{1/2} $, and $T_{3D} = ( N \omega_1 \omega_2 \omega_3 / \zeta(3) )^{1/3} $, the conditions (A) $T_{1D} << T_{2D}$, (B) $T_{2D} << T_{3D}$, and (C) $T_{1D} << T_{3D}$ give constraints on $k_3$ and $\kappa$ as
\begin{eqnarray}
(A)& \kappa &>> \frac{ N \zeta(2) }{ (\log(2N))^2 }, \nonumber \\
(B)& k_3^{~2} / \kappa &>> \frac{ N \zeta(3)^2 }{ \zeta(2)^3 }, \nonumber \\
(C)& k_3 \kappa &>> \frac{ N^2 }{ (\log(2 N))^3 }.
\label{f18}
\end{eqnarray}
Since (B) $T_{2D} << T_{3D}$ implies $T_{3D} << \omega_1$, three-step BEC never occurs in harmonic traps. In Figure 2, different condensation behaviors corresponding to various anisotropy parameters $\omega_1 / \omega_2$ and $\omega_2 / \omega_3$ are shown. The vertical axis corresponds to the prolate-shape potential studied in Section 3.2.1 (this case was studied in \cite{DruKet97}). In such a potential, two-step BEC can be seen. The horizontal axis corresponds to the oblate-shape potential discussed in Section 3.2.2, where we show that there is no multistep condensation in this case. The more general anisotropic case will be discussed in Section 3.2.3. The combined effect of dynamical dimensional reduction and two-step BEC in such a potential can appear in three steps.
\subsection{Dimensional crossover and condensation}
For a highly anisotropic trap, the three-dimensional crossover temperature $T_{3D}^{*}$ should be reached when the correlation length is of the order of the size of the ground state wave function in the most confining direction. The spreading of the wave function can be characterized by $L_i \equiv \sqrt{ \hbar / m \omega_i }$ (for $i=1,2,3$) \cite{BayPet96,CYY96}.
Hence the above condition is equivalent to $\xi(T_{3D}^{*}) \sim \lambda_{\theta dB} / \sqrt{t_3} \sim L_1$, where $\lambda_{\theta dB} \equiv h / \sqrt{2\pi m k T}$ is the thermal de Broglie wavelength and $t_3 \equiv | T_{3D}^{*} - T_{3D} | / T_{3D} $. This gives a crude estimate of $T_{3D}^{*}$:
\begin{eqnarray}
| T_{3D}^{*} - T_{3D} | \sim \left( \frac{ k_3 \zeta(3)}{ N } \right)^{1/3} T_{3D}.
\label{t3d}
\end{eqnarray}
\subsubsection{Two-step condensation}
For the prolate shape potential discussed in Section 2.1.1, we expand the whole particle spectrum with respect to $\eta_1$ and $\eta_3$ and obtain
\begin{eqnarray}
N &=& N_0 + N_1 + N_2 + N_3 \nonumber \\
&=& g_0( z ) + \frac{ g_1( z e^{- \eta_3 / 2} ) }{ \eta_3 } + \frac{ 2 g_2( z e^{- \eta_1 / 2} ) }{ \eta_1 \eta_3 } + \frac{ g_3( z e^{- \eta_1 } ) }{ \eta_1^{~2} \eta_3 } + \cdots \nonumber \\
&=& g_0( z ) + \frac{ g_1( z ) }{ \eta_3 } + \frac{ 2 g_2( z ) }{ \eta_1 \eta_3 } + \frac{ g_3( z ) }{ \eta_1^{~2} \eta_3 } \nonumber \\
&-& \frac{ g_0( z ) }{ \eta_3 } \frac{ \eta_3 }{ 2 } - \frac{ 2 g_1( z ) }{ \eta_1 \eta_3 } \frac{ \eta_1 }{ 2 } - \frac{ g_2( z ) }{ \eta_1^{~2} \eta_3 } \eta_1 \nonumber \\
&+& \frac{ g_0( z ) }{ \eta_1 \eta_3 } \left( \frac{ \eta_1 }{ 2 } \right)^2 + \frac{ g_1( z ) }{ \eta_1^{~2} \eta_3 } \frac{ \eta_1 ^2 }{ 2 } - \frac{ g_0( z ) }{ \eta_1^{~2} \eta_3 } \frac{ \eta_1^{~3} }{ 6 } + \cdots.
\label{Ntotal}
\end{eqnarray}
This expression can be simplified to give
\begin{eqnarray}
N \eta_3 = \frac{ g_3( z ) }{ \eta_1^{~2} } + \frac{ g_2( z ) }{ \eta_1 } + \frac{ g_1( z ) }{ 2 } + \cdots.
\label{Ntotal2}
\end{eqnarray}
Writing $z=e^{-\phi}$ and expanding Eq. (\ref{Ntotal2}) with respect to $\phi$ gives
\begin{eqnarray}
N \eta_3 = \frac{ \zeta(3) }{ \eta_1^{~2} } + \frac{ \zeta(2) }{ \eta_1 } - \frac{ \zeta(2) \phi }{ \eta_1^{~2} } + \cdots,
\label{Ntotal3}
\end{eqnarray}
where we used the asymptotic expansions of the Bose-Einstein function $ g_3( e^{- \alpha } ) \sim \zeta(3) - \zeta(2) \alpha + \frac{1}{2} ( 3/2 - \log \alpha) \alpha^2 + \cdots $ and $ g_2( e^{- \alpha } ) \sim \zeta(2) + ( \log \alpha - 1) \alpha + \cdots$ for small $\alpha $ \cite{Pathria}. The correlation length $\xi$ of an ideal Bose gas is given by $\xi = \lambda_{\theta dB} / 2 \sqrt{\pi \phi}$ \cite{GunBuc68}. In terms of the scaling parameters $x_i(T) \equiv \phi(T) / \eta_i ~(i = 1,2,3)$, the above argument implies that $T_{3D}^{*}$ is reached when $x_1(T_{3D}^{*}) \equiv c_1$, where $c_1$ is some constant of the order of unity. Inserting this into Eq. (\ref{Ntotal3}) gives
\begin{eqnarray}
N = \frac{ \zeta(3) }{ \eta_1^{~2} \eta_3 } + \frac{ \zeta(2) (1 - c_1) }{ \eta_1 \eta_3 } + \cdots \quad \mbox{at} \quad T = T_{3D}^{*}.
\label{Ntotal4}
\end{eqnarray}
Thus we have
\begin{eqnarray}
\frac{ T_{3D}^{*} }{ T_{3D} } = 1 + \frac{ c_1 - 1 }{ 3 } \frac{ \zeta(2) }{ \zeta(3)^{~2/3} } \left( \frac{ k_3 }{ N } \right)^{1/3}.
\label{Ntotal5}
\end{eqnarray}
This result gives the same correction term, proportional to $\left( k_3 / N \right)^{1/3}$, as in Eq. (\ref{t3d}) obtained by heuristic arguments. The one-dimensional condensation temperature $T_{1D}$ is defined in Eq. (\ref{T1d}) in the limit of small $\eta_3$ and vanishing reduced chemical potential $\phi$; finite size effects on $T_{1D}$ originate in the finiteness of both $\eta_3$ and $\phi$. At the one-dimensional crossover temperature $T_{1D}^{*}$, the correlation length reaches the size of the ground state wave function in the least confining direction, namely, along the third axis in the present case.
Thus $\xi(T_{1D}^{*}) \sim L_3$ or, equivalently, $x_3(T_{1D}^{*}) \equiv c_3 = O(1)$. Then from the second line in Eq. (\ref{Ntotal}), we obtain \begin{eqnarray} N &=& \frac{ g_1( e^{- (1 + 2 c_3) \eta_3 / 2} ) }{ \eta_3 } + \frac{ 2 g_2( e^{- (k_3 + 2 c_3) \eta_3 / 2} ) }{ \eta_1 \eta_3 } + \frac{ g_3( e^{- (k_3 + c_3) \eta_3} ) }{ \eta_1^{~2} \eta_3 } + \cdots. \label{Ntotal1d} \end{eqnarray} In the limit $\eta_3 \rightarrow 0$, only the first term dominates and we obtain the crossover temperature from \begin{eqnarray} N &=& - \frac{ T_{1D}^{*} }{ \omega_3 } \log( 1 - e^{- (1 + 2 c_3) \omega_3 / 2 T_{1D}^{*} } ) \nonumber \\ &=& \frac{ T_{1D}^{*} }{ \omega_3 } \log \frac{2 T_{1D}^{*} }{ (1 + 2 c_3) \omega_3 }. \label{T1dcvr1} \end{eqnarray} For large $N$, this gives \begin{eqnarray} T_{1D}^{*} = \frac{N \omega_3}{ \log[2N/(1 + 2 c_3)]}. \label{T1dcvr2} \end{eqnarray} Note that $ T_{1D}^{*} > T_{1D} $ holds. Condition (C) in Eq. (\ref{f18}) and $\omega_1 < T_{1D} $ are both satisfied if \begin{eqnarray} \frac{ N }{ (\log(2 N))^{3/2} } < k_3 < \frac{ N }{ (\log(2 N)) }. \label{2step} \end{eqnarray} In such a case, a two-step condensation ending in the ground state can be seen while the system is still effectively three-dimensional. In Figures 3--6, the condensation fractions $ N_i / N $ $(i=0,1,2,3)$ are plotted as functions of temperature. At high temperature, three-dimensionally excited states dominate, as expected from the density of states, which grows as $M^2$, where $M$ is the degeneracy of the isotropic harmonic oscillator appearing in Eq. (\ref{f2}). In the isotropic case (Figure 3), condensation is only into the ground state. Due to finite size effects, condensation already starts before the critical temperature is reached. In a strongly anisotropic case, as in Figure 4, two-step condensation can be seen. $T_{3D}^{*}$ determines the onset of condensation into one-dimensionally excited states. At $T_{3D}^{*}$ the ground state fraction is negligibly small. Condensation into the ground state will not start until $T_{1D}^{*}$ is reached. In the multistep process peculiar to highly anisotropic systems, when the correlation length reaches the size of the system, the dynamics crosses over to the low-dimensional one before the actual phase transition occurs. In this sense, the critical temperature is never observed in such a process, and the quantity directly relevant to observation is the crossover temperature, the temperature at which the finite size correction sets in. For practical purposes, this is often replaced by including the finite size correction as a term proportional to a power of $1/N$ while the chemical potential is set to the ground state energy \cite{KetDru96}. Strictly speaking, however, since the chemical potential never reaches the ground state energy in a finite system, the meaning of this correction has some ambiguity. The difference between the crossover temperature and the finite size-corrected critical temperature in the present axially symmetric trap case is given by \begin{eqnarray} \frac{ \Delta T }{ T_{3D} } = \frac{ c_1 }{ 3 } \frac{ \zeta(2) }{ \zeta(3)^{~2/3} } \left( \frac{ k_3 }{ N } \right)^{1/3}. \label{DIFF} \end{eqnarray} While this is fairly small in the isotropic or weakly anisotropic case ($\Delta T / T_{3D} \sim 0.024 $ for $k_3=1$), it is no longer so in a strongly anisotropic case ($\Delta T / T_{3D} \sim 0.24 $ for $k_3=10^3$, as used in Figure 4); here $c_1=1$ and $N=10^4$ in both cases.
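The two values just quoted follow from Eq. (\ref{DIFF}); a minimal check, assuming nothing beyond that formula:
\begin{verbatim}
from math import pi

ZETA2, ZETA3 = pi**2 / 6.0, 1.2020569031595943

def dT_over_T3D(k3, N, c1=1.0):
    # Eq. (DIFF): gap between the crossover temperature and the
    # finite-size-corrected critical temperature, relative to T3D.
    return (c1 / 3.0) * ZETA2 / ZETA3 ** (2.0 / 3.0) * (k3 / N) ** (1.0 / 3.0)

for k3 in (1.0, 1.0e3):
    print("k3 = %g : dT/T3D = %.3f" % (k3, dT_over_T3D(k3, 10**4)))
\end{verbatim}
This prints $0.023$ and $0.225$, of the same size as the rounded $\sim$-estimates quoted above.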
The ordinary finite size correction thus significantly underestimates the shift in the latter case.\footnote{Since the boundary condition in the harmonic potential corresponds to a Neumann boundary condition, the surface correction increases the density of states and hence decreases the critical temperature.} For these reasons, we focus our discussion on crossover temperatures in the present work. We should also note that there is some ambiguity in the choice of $c_i$ $(i=1,2,3)$. In general, the correlation length $\xi$ is a complicated function of the temperature away from the critical value, and a reliable choice is obtained by numerical fitting. For brevity, we simply put $c_i = 1$ $(i=1,2,3)$ in our comparison with numerical data. \subsubsection{Two-dimensional condensation} For the oblate-shape potential discussed in Section 2.1.2, assuming $\eta_1 >> 1$, we obtain \begin{eqnarray} N &=& N_0 + N_1 + N_2 + N_3 \nonumber \\ &=& g_0( z ) + \frac{2 g_1( z e^{- \eta_2 / 2} ) }{ \eta_2 } + \frac{ g_2( z e^{- \eta_2} ) }{ \eta_2^{~2} } + \cdots. \label{Ntotal2dcond} \end{eqnarray} The two-dimensional crossover temperature $T_{2D}^{*}$ is defined as the temperature at which the correlation length reaches the size of the ground state wave function along the second axis. Thus we have $\xi(T_{2D}^{*}) \sim L_2$ or, equivalently, $x_2(T_{2D}^{*}) \equiv c_2 = O(1)$. Then from Eq. (\ref{Ntotal2dcond}) we obtain \begin{eqnarray} N &=& \frac{ g_2( e^{- ( 1 + c_2 ) \eta_2 } ) }{ \eta_2^{~2} } + \frac{ 2 g_1( e^{- ( 1 + 2 c_2 ) \eta_2 / 2 } ) }{ \eta_2 } + \cdots \label{Ntotal2d22} \end{eqnarray} at $T = T_{2D}^{*}$. Expanding in terms of $\eta_2$, we obtain $T_{2D}^{*}$ from \begin{eqnarray} N &=& \frac{ \zeta(2) }{ \eta_2^{~2}} - \frac{ 1 + c_2 }{ \eta_2 } + \frac{ (1 + c_2) \log [\eta_2 (1 + c_2)] }{ \eta_2 } - \frac{ 2 \log (\eta_2 c_2) }{ \eta_2 } \nonumber \\ &=& \frac{ T_{2D}^{*2} \zeta(2) }{ \omega_2^{~2}} - \frac{ T_{2D}^{*} }{ \omega_2 } \left[ (1 + c_2) \right. \nonumber \\ &+& \left. (1 + c_2) \log \left( \frac{ T_{2D}^{*} }{\omega_2 (1 + c_2)} \right) + 2 \log \left( \frac{ T_{2D}^{*} }{\omega_2 c_2} \right) \right] + \cdots \label{T2d2cvr} \end{eqnarray} for $\eta_2, \eta_3 << 1$. For large $N$ and $c_2=1$, this gives \begin{eqnarray} \frac{ T_{2D}^{*} }{ T_{2D} } &=& 1 + \left( \frac{ \kappa }{ N \zeta(2) } \right)^{1/2} \log \left( \frac{ N }{ \kappa \zeta(2) } \right). \label{Ntotal2d52} \end{eqnarray} As explained in Section 3.1, condensation into $N_2$ does not occur in harmonic traps. For an oblate-shape potential, the system dynamics freezes out along the first axis at $T \sim \omega_1$. Therefore the dynamics of the system at $T < \omega_1$ is two-dimensional. Ordinary two-dimensional BEC can still be observed as long as $T_{2D} < \omega_1$. This condition requires $k_3 > (N / \zeta(2))^{1/2}$. In Figure 5, two-dimensional BEC in this parameter regime is shown. \subsubsection{Three-step dimensional reduction} For the general class of anisotropic potentials discussed in Section 2.1.3, a three-step process can be observed.
We expand each $N_i$ with respect to $\eta_1,\eta_2,\eta_3$ and obtain \begin{eqnarray} N &=& N_0 + N_1 + N_2 + N_3 \nonumber \\ &=& g_0( z ) + \frac{ g_1( z e^{- \eta_3 / 2} ) }{ \eta_3 } + \frac{ g_2( z e^{- \eta_2 / 2} ) }{ \eta_2 \eta_3 } - \frac{\kappa}{24} g_0( z e^{- \eta_2 / 2} ) + \frac{ g_3( z e^{- (\eta_1 - \eta_2 - \eta_3) / 2} ) } { \eta_1 \eta_2 \eta_3 } \nonumber \\ &-& \frac{ g_1( z e^{- (\eta_1 - \eta_2 - \eta_3) / 2} ) }{ 24 } \left( \frac{ \eta_1^{~2} + \eta_2^{~2} + \eta_3^{~2} } { \eta_1 \eta_2 \eta_3 } \right) + \cdots \nonumber \\ &=& \frac{ g_3( z ) }{ \eta_1 \eta_2 \eta_3 } + \frac{ g_2( z ) }{ \eta_2 \eta_3 } - \frac{ g_2( z ) }{ \eta_1 \eta_2 \eta_3 } \frac{ \eta_1 - \eta_2 - \eta_3 }{ 2 } + \cdots. \label{Ntotal3d} \end{eqnarray} Expanding Eq. (\ref{Ntotal3d}) with respect to $\phi = - \log z$ gives \begin{eqnarray} N &=& \frac{ \zeta(3) }{ \eta_1 \eta_2 \eta_3 } + \frac{ \zeta(2) }{ 2 } \left( \frac{ 1 }{ \eta_1 \eta_2 } + \frac{ 1 }{ \eta_2 \eta_3 } + \frac{ 1 }{ \eta_3 \eta_1 } \right) - \frac{ \zeta(2) \phi }{ \eta_1 \eta_2 \eta_3 } + \cdots. \label{Ntotal3d2} \end{eqnarray} As discussed in Section 3.2.1, $x_1(T_{3D}^{*}) \equiv c_1 = O(1)$ holds at $T = T_{3D}^{*}$. Inserting this into Eq. (\ref{Ntotal3d2}) gives \begin{eqnarray} N = \frac{ \zeta(3) }{ \eta_1 \eta_2 \eta_3 } + \frac{ \zeta(2) }{ 2 } \left[ \frac{ 1 }{ \eta_1 \eta_2 } + \frac{ 1 }{ \eta_2 \eta_3 } \left( \frac{ 1 }{ 2 } - c_1 \right) + \frac{ 1 }{ \eta_3 \eta_1 } \right] + \cdots \quad \mbox{at} \quad T = T_{3D}^{*}. \label{Ntotal3d4} \end{eqnarray} Thus we have \begin{eqnarray} \frac{ T_{3D}^{*} }{ T_{3D} } = 1 + \frac{ c_1 - 1/2 }{ 3 } \frac{ \zeta(2) }{ \zeta(3)^{~2/3} } \left( \frac{ k_2 k_3 }{ N } \right)^{1/3}. \label{Ntotal3d5} \end{eqnarray} The crossover temperature $T_{2D}^{*}$ associated with two-dimensional BEC can be defined if $T_{2D}^{*} < \omega_1$.\footnote{ Thus the system behaves as effectively two-dimensional (EIRD=2) at $T=T_{2D}^{*}$.} Then from Eq. (\ref{Ntotal3d}) we obtain \begin{eqnarray} N &=& \frac{ g_2( e^{- \eta_2 ( 1 / 2 + c_2 ) } ) }{ \eta_2 \eta_3 } + \frac{ g_1( e^{- \eta_2 ( 1 / 2 + c_2 ) } ) }{ \eta_3 } + \frac{ e^{- \eta_1 / 2 } }{ \eta_1 \eta_2 \eta_3 } \nonumber \\ &+& \frac{ \log( 1 - e^{- \eta_1 / 2} ) \eta_1 }{ 24 \eta_2 \eta_3} + \cdots \label{Ntotal2d} \end{eqnarray} at $T = T_{2D}^{*}$. When $\eta_1$ becomes large, only the first two terms dominate and we obtain the crossover temperature from \begin{eqnarray} N &=& \frac{ \zeta(2) }{ \eta_2 \eta_3 } - \frac{ 1/2 + c_2 }{ \eta_3 } + \frac{ (1/2 + c_2) \log [\eta_2 (1/2 + c_2)] }{ \eta_3 } - \frac{ \log (\eta_2 c_2) }{ \eta_3 } + \cdots \nonumber \\ &=& \frac{ T_{2D}^{*2} \zeta(2) }{ \omega_2 \omega_3 } - \frac{ T_{2D}^{*} }{ \omega_3 } \left[ (1/2 + c_2) \right. \nonumber \\ &+& \left. (1/2 + c_2) \log \left( \frac{ T_{2D}^{*} }{\omega_2 (1/2 + c_2)} \right) + \log \left( \frac{ T_{2D}^{*} }{\omega_2 c_2} \right) \right] + \cdots \label{T2dcvr} \end{eqnarray} for $\eta_2, \eta_3 << 1$. For large $N$ and $c_2=1$, we simply have \begin{eqnarray} \frac{ T_{2D}^{*} }{ T_{2D} } &=& 1 + \frac{ 5 }{ 8 } \left( \frac{ \kappa }{ N \zeta(2) } \right)^{1/2} \log \left( \frac{ N }{ \kappa \zeta(2) } \right). \label{Ntotal3d52} \end{eqnarray} The one-dimensional crossover temperature $T_{1D}^{*}$ has the same form as in the prolate-shape potential case. The conditions (A) and (B) in Eq.
(\ref{f18}) give the constraints on the anisotropy parameters for the three-step behavior to be observed: \begin{eqnarray} & \kappa &> \frac{ N \zeta(2) }{ (\log(2N))^2 }, \nonumber \\ & k_3 &> \frac{ \zeta(3)}{ \zeta(2) } \frac{ N }{ \log(2N) }. \label{3step} \end{eqnarray} If $T_{2D} << T_{3D}$ holds, since it also implies $T_{2D} << \omega_1$, excitations along the first axis will be dynamically suppressed at $T < \omega_1$, making the system effectively two-dimensional before BEC sets in. Furthermore, if $T_{1D} << T_{2D}$ is also satisfied, BEC occurs in two steps, one at $T_{2D}$ and the other at $T_{1D}$. In Figure 6, the above scenario of three-step dimensional reduction is realized numerically. Three-dimensionally excited modes, dominant at higher temperatures, are dynamically suppressed at $T < \omega_1$, followed by two-step BEC. When the condensation into the ground state sets in, the effective dimension of the system is still two. From the form of the crossover temperatures in Eqs. (\ref{Ntotal5}) and (\ref{Ntotal2d52}), we see that high anisotropy and a small number of atoms have similar consequences. It is useful to see how a weak interaction modifies the above arguments.\footnote { We recover ordinary units in this discussion for quantitative comparison with the literature. } The shift of the critical temperature due to the interaction effect is given in \cite{GPS96} as $\Delta T_{3D}^{int}/T_{3D} \sim a N^{1/6} / L$, where $a$ is the $s$-wave scattering length and $L \equiv \sqrt{ \hbar / m \Omega}$. The shift due to the finite size is given in Eq. (\ref{Ntotal3d5}) as $\Delta T_{3D}^{*}/T_{3D} \sim \left( k_3 / N \right)^{1/3}$. For a reasonable choice of parameters, $N=5000$, $k_3=10$, $a/L=0.001$, we obtain $\Delta T_{3D}^{int}/T_{3D} \sim 0.003$ and $\Delta T_{3D}^{*}/T_{3D} \sim 0.13$. Thus the interaction effect is negligible compared to the finite size correction. Comparing the zero-point energy in the most confining direction with the interaction energy: if $\hbar \omega_1 > n_0 U$, where $n_0$ is the ground state density and $U \equiv 4 \pi \hbar^2 a / m$, the criterion for the system to behave as effectively two-dimensional is still given by $\hbar \omega_1 > k_B T$. For the value $n_0 U / k_B \sim 110\,$nK appropriate for sodium atoms, this gives $\omega_1 > n_0 U / \hbar \sim 10^4\,$s$^{-1}$. We should note that even in such a case the collisions cannot be strictly two-dimensional \cite{KSS87}; for high-energy atoms, the interaction vertex retains its three-dimensional form. Nevertheless, these effects will still be suppressed for $a << L_1$ and a small number of particles, whereas the finite size corrections behave as $\Delta T_{2D}^{*}/T_{2D} \sim \left( \kappa / N \right)^{1/2}$ and $\Delta T_{1D}^{*}/T_{1D} \sim 1 / \log N$. Therefore the effect of interactions on the particle number and the critical temperature remains small, and the system is still expected to show the multistep behavior under these conditions.
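The order-of-magnitude comparison above can be reproduced in a few lines (a rough sketch; prefactors of order unity are dropped, exactly as in the estimates of the text):
\begin{verbatim}
# Interaction shift dT_int/T3D ~ a N^(1/6)/L (Ref. [GPS96]) versus the
# finite-size crossover shift dT*/T3D ~ (k3/N)^(1/3) of Eq. (Ntotal3d5),
# for the parameter choice quoted above.
N, k3, a_over_L = 5000, 10.0, 1.0e-3

shift_int = a_over_L * N ** (1.0 / 6.0)   # a few times 10^-3
shift_fsz = (k3 / N) ** (1.0 / 3.0)       # ~ 0.13
print("interaction: %.4f   finite size: %.2f" % (shift_int, shift_fsz))
\end{verbatim}
The finite size correction dominates by more than an order of magnitude, confirming the estimate above.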
At $T < \omega_1$, the most notable difference would be the absence of long-range order due to interactions in this effectively two-dimensional system. In this case, the system can show quasi-condensation, i.e., condensation with a fluctuating phase near $T_{2D}$. The true condensation with a constant phase will be achieved at a lower temperature. One example of a quasi-two-dimensional system, a gas of spin-polarized hydrogen on a liquid-helium surface, is known to exhibit the Kosterlitz-Thouless transition \cite{Silvera95,SVYLJ98}. The manifestation of the crossover from BEC to the Kosterlitz-Thouless transition \cite{PHS99} during the multistep process is of particular interest. But even in the presence of such a transition, we expect that the particle occupation numbers, which are insensitive to the phase information, still behave similarly to those of the ideal gas. The interaction contributions used above are calculated within the framework of mean field theory. Thus we conclude that, apart from the critical regime where mean field theory fails, interactions do not essentially alter the multistep process for a small number of atoms in highly anisotropic traps. From the arguments given in this section, the sum of the most relevant terms in Eq. (\ref{Ntotal3d}) around each crossover temperature gives a simple expression for the total number of atoms, \begin{eqnarray} N &=& N_0 + N_1 + N_2 + N_3 \nonumber \\ &\sim& g_0( e^{- \phi} ) + \frac{ g_1( e^{- \phi} ) }{ \eta_3 } + \frac{ g_2( e^{- \phi} ) }{ \eta_2 \eta_3 } + \frac{ g_3( e^{- \phi} ) }{ \eta_1 \eta_2 \eta_3 }. \label{Sim1} \end{eqnarray} Making use of the fact that $\phi$ varies as a nonvanishing function of the temperature for the finite system, we define $N_3(\lambda) \equiv g_3( e^{\lambda \phi} ) / \eta_1 \eta_2 \eta_3 $. The relation for the Bose function $\partial_{\phi} g_n( e^{- \phi} ) = - g_{n-1}( e^{- \phi} )$ \cite{Pathria} allows us to write \begin{eqnarray} N_2(\lambda) &\equiv& \frac{ g_2( e^{\lambda \phi} ) }{ \eta_2 \eta_3 } = \frac{1}{x_1} \frac{d N_3(\lambda)}{d \lambda}, \nonumber \\ N_1(\lambda) &\equiv& \frac{ g_1( e^{\lambda \phi} ) }{ \eta_3 } = \frac{1}{x_1 x_2} \frac{d^2 N_3(\lambda)}{d \lambda^2}, \nonumber \\ N_0(\lambda) &\equiv& g_0( e^{\lambda \phi} ) = \frac{1}{x_1 x_2 x_3} \frac{d^3 N_3(\lambda)}{d \lambda^3}. \label{Sim2} \end{eqnarray} Thus the total number of atoms is given by \begin{eqnarray} N &=& \frac{1}{x_1 x_2 x_3} \frac{d^3 N_3(-1)}{d \lambda^3} + \frac{1}{x_1 x_2} \frac{d^2 N_3(-1)}{d \lambda^2} + \frac{1}{x_1} \frac{d N_3(-1)}{d \lambda} + N_3(-1) . \label{Sim3} \end{eqnarray} At $T \sim T_{3D}^{*}$, $x_1 \sim 1$ and $x_2, x_3 >> 1$, so we have \begin{eqnarray} N &\sim& \frac{d}{d \lambda} N_3(-1) + N_3(-1) . \label{Sim4} \end{eqnarray} At $T \sim T_{2D}^{*}$, $x_2 \sim 1$, $x_1 << 1$, and $x_3 >> 1$, we have \begin{eqnarray} N &\sim& \frac{d}{d \lambda} N_2(-1) + N_2(-1) . \label{Sim5} \end{eqnarray} At $T \sim T_{1D}^{*}$, $x_3 \sim 1$ and $x_1, x_2 << 1$, we have \begin{eqnarray} N &\sim& \frac{d}{d \lambda} N_1(-1) + N_1(-1) . \label{Sim6} \end{eqnarray} These results suggest that each condensation fraction $N_i$ behaves similarly as a function of the temperature in the neighborhood of its characteristic temperature $T_{iD}^{*}$. We write this fact simply as $N_i(T) \sim F(T / T_{iD}^{*})$ for $i=1,2,3$, where $F$ is a function independent of $i$. It also implies that each component has a similar shape on a logarithmic $T$ scale. The occupation number of each dimension is plotted on a logarithmic $T$ scale in Figure 7. Note that this derivation relies on the special property of the Bose function, which determines the density of states of an ideal gas trapped in the harmonic potential. Whether the same result holds in other systems with a different density of states is not obvious. In conclusion, finite size effects on the Bose-Einstein condensation of an ideal gas in a strongly anisotropic trap give rise to various types of multistep behavior depending on the degree of anisotropy.
In an isotropic trap, BEC into the ground state always begins while the system is effectively three-dimensional, i.e., $T_{3D} > \omega_1=\omega_2=\omega_3$. In an anisotropic trap, in addition to the BEC, which may occur in multiple steps, the EIRD will also decrease in steps as the temperature is lowered. The combined effect of these leads to the apparent multistep behavior. The existence of the intermediate condensation into the one-dimensionally excited states can be traced back to the logarithmic divergence of the one-dimensional occupation number in Eq. (\ref{f101a}). This means that, when the trap is loosened in one direction, the particles tend to occupy quantum states along this direction with higher likelihood than along the other directions. Thus one-dimensionally excited modes in this direction will dominate multidimensional excitations spread over the other directions ($N_1 >> N_2, N_3$), and the thermodynamic behavior of such a system is characterized by $T_{1D}$ even though the effective dimension of the system is still three. Note that the same mechanism is responsible for the {\it nonexistence} of BEC in a one-dimensional harmonic trap in the ordinary thermodynamic limit. For the same reason, the intermediate condensation into two-dimensionally excited modes can be observed in a rectangular cavity \cite{Sonin69,DruKet97}, where $T_{2D}$ does not exist in the naive thermodynamic limit. Three-step BEC can take place only in such a system. Away from the thermodynamic limit, the temperature dependence of the chemical potential around $T_{1D}$, $T_{2D}$, and $T_{3D}$ causes similar crossover behaviors in the condensation fractions $N_1$, $N_2$, and $N_3$ as functions of the reduced temperature. Atom trap experiments probing the two-step BEC are realizable in Ioffe-Pritchard type magnetic traps or in optical dipole traps \cite{DruKet97}. A similar type of device can be used to study the multistep behavior discussed here, although it is difficult to achieve BEC in a maximally anisotropic trap with current cooling techniques. Further progress in trapping devices may be required. The basic mechanism of the multistep dimensional crossovers discussed here can be applied to many other bosonic systems and should be amenable to future experimental verification. Quasi-low-dimensional systems realized in optical lattices or waveguides are promising options for testing such processes \cite{GHSSPM98,VKCC99}. Also of interest is the kinetics of the multistep behavior, where the correlation length and the thermal de Broglie wavelength are related in a nontrivial manner. This is currently under investigation. \noindent {\bf Acknowledgement} The author thanks Prof. B. Hu for the hospitality of the Center for Nonlinear Studies at Hong Kong Baptist University during his visit from March to September 1998. He also thanks Prof. B. L. Hu for various discussions, Prof. J. Weiner for useful comments, particularly of experimental relevance, and Dr. K. Kirsten for useful references.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} A large amount of Cherenkov light is produced in extensive air showers~\cite{galbraith_light_1953}, and several experimental techniques have been proposed to exploit this signal for astroparticle physics studies. The generation of light in the cascade is strongly dominated by electrons. The emission of Cherenkov light by relativistic electrons, including its geometry, intensity, and wavelength, is explained by classical electrodynamics~\cite{PhysRev.52.378}, which has inspired the development of robust detection techniques. The total signal produced by all particles in the air shower evolves as the cascade deepens in the atmosphere. The correct description of this evolution is mandatory to extract physical results from measurements. This problem is common to all collaborations running ground-based detectors, including Imaging Atmospheric Cherenkov Telescopes (IACT) and Fluorescence Detectors (FD), and also to proposed space experiments. In particular, to reconstruct the properties of the primary particle, it is necessary to understand the properties of Cherenkov-light production in air showers, including the longitudinal distribution, the lateral distribution, and the angular distribution. In this paper, special attention is given to the description of the angular distribution of Cherenkov photons in air showers. IACTs are of fundamental importance for Very High Energy (VHE) gamma-ray astronomy ($E_0 > 100\,$GeV). The identification and the reconstruction of the primary gamma-ray are done by interpreting the Cherenkov light detected by telescopes at ground. Current observatories~\cite{bib:hess,bib:magic,bib:veritas} are equipped with a few ($\leq \,$5) telescopes with fields of view of a few degrees ($< \,$5\deg) installed about one hundred meters apart from each other. The Cherenkov Telescope Array (CTA)~\cite{bib:cta} is the next-generation IACT system presently under development. The CTA baseline design calls for 118 telescopes to be installed at two sites covering areas of 0.6$\,$km$^2$ in La Palma, Spain and 4$\,$km$^2$ in Paranal, Chile. The angular distribution of Cherenkov photons in an air shower determines the image shape detected by IACTs and is therefore a key aspect in many reconstruction techniques~\cite{bib:3d:rec,bib:performance:hess,bib:performance:magic}. FDs have long been used to study Ultra-High Energy Cosmic Rays (UHECR)~\cite{bib:flys:eye}. These telescopes have been optimized to measure the isotropic fluorescence light emitted by nitrogen molecules due to the passage of charged particles in the atmosphere. The telescopes in operation~\cite{bib:auger:fd,bib:ta:xmax} have large apertures ($\approx\,$30\deg) and cover detection areas of thousands of km$^2$. The fluorescence and Cherenkov emissions produce signals in the telescopes in the overlapping wavelength band of 300--450$\,$nm, making it impossible to separate the two components. Traditionally, Cherenkov light was considered an unwanted background in FD measurements~\cite{bib:ta:xmax}, but recently the Cherenkov light seen by FDs has been used as a signal to detect showers with energies down to 2$\,$PeV~\cite{Unger:2008:reco,Novotny:spectrum,bib:ta:tale}. Direct Cherenkov light is also used to study UHECR with ground detectors~\cite{budnev_tunka-25_2013,Ivanov_2009} and is proposed as an important signal source in future space experiments~\cite{bib:poemma}.
The angular distribution of Cherenkov photons in an air shower is an important feature for all UHECR experiments because it determines the lateral spread of light and the balance between fluorescence and Cherenkov-light signals measured by the FDs, including at large angles ($>\,$10\deg) and great distances (several km) from the shower axis. The number of Cherenkov photons produced in an air shower reaching a detector at a given distance from the shower axis can be calculated only if the angular distribution of photons is known. Conversely, the reconstruction of the primary particle properties is only possible if the measured amount of light in each detector is converted into the amount of light emitted by the particles in the shower. The angular distribution of Cherenkov photons is determined by the convolution of the longitudinal development of electrons\footnote{The term \textit{electrons} here refers to both electrons and positrons.}, the energy distribution of electrons, the angular distribution of electrons, the scattering of electrons, the refractive index, the geomagnetic field effects, and the scattering of photons~\cite{bib:stanev,Hillas_1982,bib:elbert,Giller200497,NERLING2006421}. Influenced by the two main techniques detecting Cherenkov light (IACT and FD), the study of the angular distribution of Cherenkov photons has traditionally been divided into two regimes: (a) gamma-ray primaries, small angles $<$ 10\deg, and TeV energies and (b) cosmic-ray primaries, large angles $>$ 10\deg, and the highest energies ($10^{17}\,$eV). Experiments have measured the angular distribution of Cherenkov photons~\cite{Baltrusaitis_1987} in regime (b). Since the pioneering work~\cite{Hillas_1982}, the angular distribution has been simulated for regimes (a)~\cite{bib:3d:rec} and (b)~\cite{NERLING2006421,Giller:2009zz}. In this paper, the angular distribution of Cherenkov photons is simulated using the most up-to-date simulation software and a new parametrization based on shower physics is proposed. The parametrization presented here improves the precision in the description of the angular distribution of Cherenkov light in comparison to models found in previous publications~\cite{bib:3d:rec,NERLING2006421,Giller:2009zz}. Besides the needed update of the parametrizations to the new shower models made available after the previous works, this paper aims at the improved precision required by the new generation of experiments~\cite{bib:cta,bib:poemma} and at the refinement demanded by the new uses of Cherenkov light as the main signal source in FD analyses~\cite{Novotny:spectrum,bib:ta:tale}. Moreover, a unified view of the two regimes is presented for the first time. This paper is organized as follows. In section \ref{sec:model:exa}, an exact model to compute the angular distribution of Cherenkov photons is derived. This model is simplified in section \ref{sec:model:app} to obtain a simple form in terms of free parameters. The parameters of the model are constrained by Monte Carlo simulations in section \ref{sec:parametrization}. A discussion of the results and a comparison to previous works are presented in section \ref{sec:results} and some final remarks are given in section \ref{sec:conclusion}.
\section{Exact model for the Cherenkov light angular distribution} \label{sec:model:exa} A mathematical description of the number of Cherenkov photons emitted in a given angular interval as a function of the shower development in the atmosphere, $\text{d}^2 N_\gamma / \text{d} \theta \, \text{d} X$, is presented in this section. Each physical quantity relevant to this description is identified and explained below. Electrons are responsible for over 98\% of the Cherenkov-photon content in a shower~\cite{NERLING2006421}. Therefore it is assumed in this study that all photons are emitted by electrons. Figure \ref{fig:angles} depicts the composition of angles determining the final angular distribution of Cherenkov photons. It illustrates how an electron emitted during the shower development is subject to scattering in the atmosphere, so that its trajectory forms an angle $\theta_\text{p}$ with the shower axis. Such an electron will emit Cherenkov photons in a cone of half-aperture angle $\theta_\text{em}$ around its propagation path. The two angles $\theta_\text{em}$ and $\phi_\text{em}$ (measured in a plane perpendicular to the moving electron track) determine the direction of the emitted photon. Finally, the emitted photon forms an angle $\theta$ with the shower axis. It is the interplay between the Cherenkov emission angle $\theta_\text{em}$ and the scattering angle of electrons $\theta_\text{p}$ that determines the distribution of the resulting Cherenkov-photon angle $\theta$. At the beginning of the shower, most electrons move parallel to the shower axis, therefore $\theta_\text{p} \approx 0 \Rightarrow \theta \approx \theta_\text{em}$. The angle $\theta_\text{em}$, in turn, is an increasing function of the atmospheric depth and reaches a maximum value of about $1.5\deg$ at sea level. As the cascade develops further, electrons scatter multiple times, increasing the fraction of particles with large $\theta_\text{p}$ values. Indeed, the effect of multiple scattering generates electrons with $\theta_\text{p} > \theta_\text{em}$ and therefore $\theta > \theta_\text{em}$. As a consequence, for $\theta_\text{p} \gg \theta_\text{em}$ one has $\theta \approx \theta_\text{p}$, so that the angular distribution of Cherenkov photons approximately reproduces the angular distribution of electrons in the shower. The number of Cherenkov photons emitted by electrons with energy $E$ and angle $\theta_\text{p}$ in a shower per interval of depth $\text{d}X$ is given by\footnote{\setstretch{0.85}The dependency on the primary particle energy ($E_0$) is omitted here for brevity and discussed in terms of simulated showers in the following sections. For the purpose of this section, $E_0$ may be regarded as fixed.} \begin{equation} \text{d}N_\gamma = N_e(s) \, \frac{\text{d}N_e}{\text{d}E}(E,s) \, \text{d}E \, \frac{\text{d}N_e}{\text{d}\theta_\text{p}}(\theta_\text{p},E) \, \text{d}\theta_\text{p} \, Y_\gamma(E,h) \, \sec \theta_\text{p} \, \text{d}X \, \frac{\text{d}\phi_\text{em}}{2\pi} \, , \label{eq:differential1} \end{equation} where $s$ is the shower age\footnote{$s = 3X/(X + 2X_\text{max})$, where $X_\text{max}$ is the depth at which the shower reaches the maximum number of particles.} and $h$ is the emission height above sea level. $N_e(s)$ is the total number of electrons, $\text{d}N_e/\text{d}E$ is the energy distribution of electrons, and $\text{d}N_e/\text{d}\theta_\text{p}$ is the angular distribution of electrons.
The function $Y_\gamma(E,h)$ represents the number of photons emitted by one electron per depth interval (yield) and the factor of $\sec\theta_\text{p}$ takes into account the correction to the length of the electron track due to its inclined trajectory. Photons are uniformly distributed in $\phi_\text{em}$ (factor of $1/2\pi$). According to reference~\cite{NERLING2006421}, $Y_\gamma(E,h)$ is given by \begin{equation} \begin{split} Y_\gamma(E,h) &\approx 4\pi \, \alpha \, \frac{ n(h)-1}{\rho(h)} \left( \frac{1}{\lambda_1} - \frac{1}{\lambda_2}\right) \left( 1 - \frac{E^2_\mathrm{thr}(h)}{E^2} \right) \, , \label{eq:cer-yield} \end{split} \end{equation} in which $\alpha \approx \nicefrac{1}{137}$ is the fine-structure constant, $n(h)$ is the refractive index of the medium, $\rho(h)$ is the atmospheric density, and $\lambda_1$ and $\lambda_2$ delimit the wavelength interval of the emitted photons. The threshold energy $E_\mathrm{thr}$ for an electron to produce Cherenkov light is $E_\mathrm{thr}(h) = m_{e}c^2/\sqrt{1-n^{-2}(h)}$, where $m_e$ is the electron rest mass. The dependency of $\text{d}N_\gamma$ on the angle between the Cherenkov photon and the shower axis directions, $\theta$, is found after a change of variable from $\phi_\text{em}$ to $\theta$ (see Figure \ref{fig:angles}) \begin{equation} \cos\theta = \cos\theta_\text{p} \cos\theta_\text{em} - \sin\theta_\text{p} \sin\theta_\text{em} \cos\phi_\text{em} \, , \label{eq:angle} \end{equation} which leads to \begin{equation} \begin{split} \text{d}\phi_\text{em} &= 2 \left| \frac{\text{d}\phi_\text{em}}{\text{d}\theta} \right| \text{d}\theta \\ &= \frac{2\; \sin\theta\;\text{d}\theta}{\sqrt{\sin^2\theta_\text{p} \sin^2\theta_\text{em} - (\cos \theta_\text{p} \cos \theta_\text{em} -\cos \theta)^2}} \, , \label{eq:change-of-variable} \end{split} \end{equation} in which a factor of $2$ was added to account for the fact that there are always two values of $\phi_\text{em}$ resulting in the same value of $\theta$ (see Figure \ref{fig:sphere}). The half-aperture angle of the Cherenkov radiation cone, $\theta_\text{em}$, relates to the particle velocity $\beta$ by the usual expression \begin{equation} \cos\theta_\text{em} = \frac{1}{\beta\, n} \; . \label{eq:cone-angle} \end{equation} Substitution of Equation \eqref{eq:change-of-variable} into Equation \eqref{eq:differential1} gives \begin{equation} \begin{split} \text{d}N_\gamma = & \, N_e(s) \, \frac{\text{d}N_e}{\text{d}E}(E,s) \, \text{d}E \, \frac{\text{d}N_e}{\text{d}\theta_\text{p}}(\theta_\text{p},E) \, \text{d}\theta_\text{p} \, Y_\gamma(E,h) \, \sec \theta_\text{p} \, \text{d}X \\ & \times \, \frac{1}{\pi} \frac{\sin\theta \, \text{d}\theta}{\sqrt{\sin^2\theta_\text{p} \sin^2\theta_\text{em} - (\cos \theta_\text{p} \cos \theta_\text{em} -\cos \theta)^2}} \, . \label{eq:differential2} \end{split} \end{equation} Finally, to obtain the desired angular distribution of Cherenkov photons, $\text{d}^2 N_\gamma / \text{d} \theta \, \text{d} X$, it is necessary to integrate Equation \eqref{eq:differential2} over all possible values of electron energies $E$ and angles $\theta_\text{p}$. Integration over $E$ must ensure that relation \eqref{eq:cone-angle} is satisfied, therefore $E$ takes only values for which $E > E_\text{thr}(h)$. The limits of the integral over the electron angle $\theta_\text{p}$ should include only values that contribute to $\theta$. From Figure \ref{fig:sphere} and Equation \eqref{eq:angle}, it is found that this interval is $|\theta-\theta_\text{em}| < \theta_\text{p} < \theta+\theta_\text{em}$.
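The two-to-one mapping in $\phi_\text{em}$ and the limits just derived can be verified with a quick Monte Carlo sketch; the two angles below are arbitrary illustrative values, and only the kinematics of Equation \eqref{eq:angle} is assumed:
\begin{verbatim}
import numpy as np

# Sample phi_em uniformly for fixed theta_p and theta_em and compare the
# resulting theta distribution with the Jacobian of Eq. (change-of-variable).
tp, te = np.radians(3.0), np.radians(1.2)
phi = np.random.uniform(0.0, 2.0 * np.pi, 10**6)
theta = np.arccos(np.cos(tp) * np.cos(te)
                  - np.sin(tp) * np.sin(te) * np.cos(phi))

hist, edges = np.histogram(theta, bins=200, range=(tp - te, tp + te),
                           density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
pdf = np.sin(centers) / np.sqrt(
    np.sin(tp)**2 * np.sin(te)**2
    - (np.cos(tp) * np.cos(te) - np.cos(centers))**2) / np.pi

inner = slice(5, -5)   # skip the integrable edge singularities
print("max rel. deviation:", np.max(np.abs(hist - pdf)[inner] / pdf[inner]))
\end{verbatim}
The sampled distribution is supported exactly on $|\theta_\text{p}-\theta_\text{em}| < \theta < \theta_\text{p}+\theta_\text{em}$ and agrees with the analytic density at the percent level expected from the Monte Carlo statistics.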
Combining these elements, the exact angular distribution of Cherenkov photons is given by \begin{equation} \begin{split} \frac{\text{d}^2N_\gamma }{\text{d}\theta \, \text{d}X} (\theta,s,h) = & \, \frac{1}{\pi} \, N_\text{e}(s) \, \sin \theta \int_{E_\mathrm{thr}(h)}^\infty \text{d}E \, Y_\gamma(E,h) \, \frac{\text{d}N_\text{e}}{\text{d}E}(E,s) \\ & \times \int_{|\theta - \theta_\text{em}|}^{\theta+\theta_\text{em}} \frac{\text{d}N_e}{\text{d}\theta_\text{p}}(\theta_\text{p},E) \, \frac{\text{d}\theta_\text{p}}{\cos\theta_\mathrm{p} \sqrt{\sin^2 \theta_\text{p} \sin^2 \theta_\text{em} - (\cos \theta_\text{p} \cos \theta_\text{em} - \cos \theta)^2}} \, . \end{split} \label{eq:dedthetadx} \end{equation} \section{Approximated model for the Cherenkov light angular distribution} \label{sec:model:app} In this section, an approximation of the above equation is proposed to obtain a simpler yet meaningful description of the angular distributions of Cherenkov light. The idea is to reduce the angular distribution to a minimal set of parameters, allowing its parametrization. First, note that the integration in $\theta_\text{p}$ is done in a very narrow interval, given that $\theta_\mathrm{em}<1.5\degree$. Therefore it is possible to consider that $\sec\theta_\text{p} \, \text{d}N_e / \text{d}\theta_\text{p}$ varies little within the integration limits and, in a first approximation, can be taken as constant and evaluated at the mean angle $\langle\theta_\text{p}\rangle$ of the integration range \begin{equation} \begin{split} \frac{\text{d}^2N_\gamma }{\text{d}\theta \, \text{d}X} (\theta,s,h) \approx & \, \frac{1}{\pi} \, N_\text{e}(s) \, \sin \theta \int_{E_\mathrm{thr}(h)}^\infty \text{d}E \; Y_\gamma(E,h) \, \frac{\text{d}N_\text{e}}{\text{d}E}(E,s) \\ & \times \frac{1}{\cos{\langle\theta_\text{p}\rangle}} \frac{\text{d}N_e}{\text{d}\theta_\text{p}}({\langle\theta_\text{p}\rangle},E) \\ &\times \int_{|\theta - \theta_\text{em}|}^{\theta+\theta_\text{em}}\, \frac{\text{d}\theta_\text{p}}{ \sqrt{\sin^2 \theta_\text{p} \sin^2 \theta_\text{em} - (\cos \theta_\text{p} \cos \theta_\text{em} - \cos \theta)^2}} \, , \end{split} \label{eq:app:1} \end{equation} where \begin{equation} \langle\theta_\text{p}\rangle = \begin{cases} \theta_\text{em} \,, \; \; &\text{if} \; \; \; \theta \leq \theta_\text{em} \\ \theta \,, \; \; &\text{if} \; \; \; \theta > \theta_\text{em} \end{cases} \, . \end{equation} The remaining integral over $\theta_\text{p}$ is a complete elliptic integral of the first kind and can be approximated by a logarithmic function \begin{equation} \begin{split} \int_{|\theta - \theta_\text{em}|}^{\theta+\theta_\text{em}} \, \frac {\text{d}\theta_\text{p}} { \sqrt{\sin^2 \theta_\text{p} \sin^2 \theta_\text{em} - (\cos \theta_\text{p} \cos \theta_\text{em} - \cos \theta)^2}} \approx \\ \frac{1}{\sin\langle\theta_\text{p}\rangle} \begin{cases} \pi - \log\big(1 - \frac{\theta}{\theta_\mathrm{em}}\big) \;, \; \; &\text{if} \; \; \; \theta \leq \theta_\mathrm{em} \\ \pi - \log\big(1 - \frac{\theta_\mathrm{em}}{\theta}\big) \;, \; \; &\text{if} \; \; \; \theta > \theta_\mathrm{em} \end{cases} \, .
\end{split} \end{equation} The abbreviation below is introduced \begin{equation} I(\theta,\theta_\mathrm{em},E) = \frac{1}{\sin\langle\theta_\text{p}\rangle} \begin{cases} \pi - \log\big(1 - \frac{\theta}{\theta_\mathrm{em}}\big) \;, \; \; &\text{if} \; \; \; \theta \leq \theta_\mathrm{em} \\ \pi - \log\big(1 - \frac{\theta_\mathrm{em}}{\theta}\big) \;, \; \; &\text{if} \; \; \; \theta > \theta_\mathrm{em} \end{cases} \, , \label{eq:I:def} \end{equation} and by noting that $\cos\theta_\text{em} = 1/\beta n$ rapidly converges to $1/n$ as the electron energy increases, it is reasonable to assume that $\cos\theta_\text{em} = 1/n$ for all electrons. With this assumption the function $I(\theta,\theta_\text{em},E)\sim I(\theta,\theta_\text{em}) =I(\theta,h)$ becomes independent of the electron energy\footnote{From now on $\theta_\text{em} = \arccos(1/n)$.} \begin{equation} \begin{split} \frac{\text{d}^2N_\gamma }{\text{d}\theta \, \text{d}X} (\theta,s,h) \approx & \, \frac{1}{\pi} \, N_\text{e}(s) \, \sin\theta \, I(\theta,h) \\ & \times \int_{E_\mathrm{thr}(h)}^\infty \text{d}E \; Y_\gamma(E,h) \; \frac{\text{d}N_\text{e}}{\text{d}E}(E,s) \; \frac{1}{\cos \langle \theta_\text{p} \rangle}\frac{\text{d}N_e}{\text{d}\theta_\text{p}}(\langle \theta_\text{p} \rangle, E) \; . \end{split} \label{eq:app:2} \end{equation} The validity of the approximations made up to this point was tested using Monte Carlo simulations of air showers, and the results are shown in Appendix~\ref{app:tests}. The remaining integral over electron energies, \begin{equation} \int_{E_\mathrm{thr}(h)}^\infty \text{d}E \; Y_\gamma(E,h) \; \frac{\text{d}N_\text{e}}{\text{d}E}(E,s) \frac{1}{\cos \langle \theta_\text{p} \rangle}\frac{\text{d}N_e}{\text{d}\theta_\text{p}}(\langle \theta_\text{p} \rangle, E) \; , \label{eq:app:3} \end{equation} has been studied before in references~\cite{Giller:2009zz,NERLING2006421}. A parametric form to describe this quantity is proposed here: \begin{equation} K(\theta,s,h) = C \, {\langle\theta_\text{p}\rangle}^{\nu - 1} \mathrm{e}^{-{\langle\theta_\text{p}\rangle}/\theta_1} \left( 1 + \epsilon \, \text{e}^{{\langle\theta_\text{p}\rangle}/\theta_2} \right) \, , \label{eq:k:def} \end{equation} where $\nu$, $\theta_1$, $\theta_2$, and $\epsilon$ are parameters varying with shower age, height (or refractive index), and, possibly, the primary energy. The constant $C$ is intended to normalize Equation~\eqref{eq:k:def} according to Equation~\eqref{eq:app:3}. In the next section, the parameters of this function are studied and the quality of the description is tested. The approximated model is summarized as \begin{equation} \frac{\text{d}^2N_\gamma }{\text{d}\theta \; \text{d}X} (\theta,s,h) = \frac{1}{\pi} \, N_\text{e}(s) \, \sin \theta \, I(\theta,h) \, K(\theta,s,h). \label{eq:app:final} \end{equation} \section{Parametrization of the Cherenkov light angular distribution} \label{sec:parametrization} Monte Carlo simulations of air showers are carried out using the CORSIKA$\,$7.6900 package~\cite{Heck:1998vt}. Gamma-ray and proton showers are simulated with energies between $100\,$GeV and $1\,$EeV in intervals of 1 in $\log_{10}(E_0/\mathrm{eV})$. For each combination of primary type and energy, at least 120 showers are simulated. Simulations are performed for vertical showers and showers inclined at 20\degree. {\scshape QGSJetII.04}~\cite{Ostapchenko:2010vb} and {\scshape urqmd}~\cite{Bleicher:1999xi} are used as high- and low-energy hadronic interaction models, respectively. The U.S.
standard atmosphere model is used in the simulations and the refractive index is considered to be independent of the wavelength ($180\,\text{nm} \leq \lambda \leq 700\,\text{nm}$) of the emitted photons. Cherenkov photons are produced in bunches of at most five photons. The COAST option is used to store the angle between the Cherenkov photons and the shower axis directions, $\theta$. $X_\mathrm{max}$, which is used to compute the shower age, is extracted from the longitudinal development of charged particles by fitting a Gaisser-Hillas function~\cite{gaisser-hillas}. The approximated model summarized in Equation~\eqref{eq:app:final} suggests that the angular distribution of Cherenkov photons should vary with shower age and atmospheric height. Both dependencies are made clear in the upper plots of Figure~\ref{fig:sim:dist}, where the angular distributions of Cherenkov photons of five randomly chosen gamma-ray-induced air showers at different values of $s$ and $h$ are compared. From cascade theory~\cite{rossi_cr,kamata_nishimura,greisen_review}, the angular distributions of Cherenkov light in gamma-ray showers are expected to be independent of the primary particle energy and this is confirmed in the bottom-left plot of Figure~\ref{fig:sim:dist}. In the case of proton showers, some dependency on the primary energy is observed in the bottom-right plot of the same figure. These plots also reiterate the fact that distributions with common age, height, primary type, and primary energy are similar. Taking this dependency into account, the angular distribution of Cherenkov photons in a given interval with mean age $\bar{s}$ and height $\bar{h}$ in a shower of energy $E_0$ can be described by \begin{equation} \frac{\text{d}N_\gamma }{\text{d}\theta}(\theta,\bar{s},\bar{h},E_0) = N \, \sin\theta \, I(\theta,\bar{h}) \, K(\theta,\bar{s},\bar{h}, E_0) \, , \label{eq:app:fit} \end{equation} in which $N$ (different from $N_e(s)$) is a normalization constant which depends on the parameters of $K(\theta, \bar{s},\bar{h}, E_0)$. The parameters of $K(\theta, \bar{s},\bar{h}, E_0)$ are considered to be \begin{equation} \begin{split} \nu (s,n) &= p_{0,\nu} \left(n - 1\right)^{p_{1,\nu}} + p_{2,\nu} \log(s) \, , \\ \theta_1(s,n,E_0) &= p_{0,\theta_1} \left(n - 1\right)^{p_{1,\theta_1}} \left(E_0/\text{TeV}\right)^{p_{2,\theta_1}} + p_{3,\theta_1} \log(s)\, , \\ \theta_2(s,n) &= \theta_1(s,n) \, \left( p_{0,\theta_2} + p_{1,\theta_2} s \right) \, , \\ \epsilon(E_0) &= p_{0,\epsilon} + p_{1,\epsilon} \left(E_0/\text{TeV}\right)^{p_{2,\epsilon}} \; . \end{split} \end{equation} The coefficients $p_{i,\mu}$ are the parameters of the model to be fitted. In these equations, the dependence on the height, $h$, was replaced by a dependence on the refractive index, $n$, to make the parametrization independent of the atmospheric model used in the simulations.
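For illustration, the sketch below assembles the shape of Equation \eqref{eq:app:fit} from the two factors $I$ and $K$. The coefficient values are placeholders, not the fitted $p_{i,\mu}$ of Tables~\ref{tab:par:gamma} and \ref{tab:par:proton}, and the refractive index is an arbitrary illustrative value:
\begin{verbatim}
import numpy as np

def I_term(theta, theta_em):
    # Eq. (I:def) with <theta_p> = theta_em for theta <= theta_em,
    # <theta_p> = theta otherwise.
    tp = np.maximum(theta, theta_em)
    ratio = np.where(theta <= theta_em, theta / theta_em, theta_em / theta)
    return (np.pi - np.log1p(-ratio)) / np.sin(tp)

def K_term(theta, theta_em, nu, t1, t2, eps):
    # Eq. (k:def) up to the normalization constant C.
    tp = np.maximum(theta, theta_em)
    return tp**(nu - 1.0) * np.exp(-tp / t1) * (1.0 + eps * np.exp(tp / t2))

n = 1.00022                              # illustrative refractive index
theta_em = np.arccos(1.0 / n)            # ~ 1.2 deg
theta = np.radians(np.linspace(0.05, 30.0, 2000))
nu, t1, t2, eps = 0.35, np.radians(2.0), np.radians(6.0), 0.05  # placeholders

shape = np.sin(theta) * I_term(theta, theta_em) \
      * K_term(theta, theta_em, nu, t1, t2, eps)
pdf = shape / np.trapz(shape, theta)     # normalized dN/dtheta
\end{verbatim}
The integrable logarithmic peak at $\theta = \theta_\text{em}$ and the roughly exponential fall-off at large angles are both visible in this construction, whatever the placeholder values.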
The simulated angular distributions of Cherenkov photons are fitted with this model. For that, a multinomial likelihood function ($L_\text{MLE}$) is built taking into account every simulated distribution from shower ages in the interval $0.8 \leq s \leq 1.2$. A single value of the refractive index $n$ is associated with each distribution according to the emission height. Histograms are weighted by the inverse of the primary energy in TeV, so the contributions from showers of distinct energies to $L_\text{MLE}$ are of the same order of magnitude. Gamma-ray and proton showers are fitted separately, as the distributions strongly depend on the primary particle type at lower energies. All coefficients $p_{i,\mu}$ are allowed to vary in the fit procedure. In the case of gamma-ray showers, however, the energy dependency is dropped ($p_{2,\theta_1}$, $p_{1,\epsilon}$, $p_{2,\epsilon}=0$). Fitted values of $p_{i,\mu}$ and their associated confidence intervals are found in Tables~\ref{tab:par:gamma} and \ref{tab:par:proton}. \section{Results} \label{sec:results} In this section, the parametrization proposed in the previous section is compared to the Monte Carlo distributions and to previous works. Figure~\ref{fig:fit:ex} shows the simulated angular distribution of Cherenkov photons in comparison to four models for a single gamma-ray shower (upper panel) and a single proton shower (lower panel). The ability of the presented parametrization to describe the simulated data both around the peak of the distributions and in the small and large $\theta$ regions is evident in this figure. Predictions from the models presented in Refs. \cite{NERLING2006421,Giller:2009zz} are shown in the region $\theta > 5\deg$ only, for which they are defined. Figure~\ref{fig:fit:dev} shows the overall quality of the models by comparing their average relative deviation from the simulated distributions for four combinations of primary type and energy. Three shower ages ($s=0.8$, $1.0$, and $1.2$) are shown. It is clear that the model proposed here has many advantages. The model presents the smallest deviation from the simulations over a large angular range. For gamma-ray showers, the deviation is very small ($< 5\%$) for angles smaller than $25^\circ$. For proton showers, the deviation of the model developed here improves with energy. Results presented in this work are optimized with simulated showers having energies from 100$\,$GeV (1$\,$TeV in the case of protons) to 1$\,$EeV. The quality of the model measured with respect to shower energy can be assessed in Figure~\ref{fig:fit:dev:energy}, where the average relative deviation is shown at $s=1.0$ for all studied energies. For both primaries, this deviation is smaller than 10\% in a wide angular and energy interval. In the case of gamma-ray showers (left box), the large deviation seen for 100$\,$GeV gamma-ray showers above $\theta = 30\deg$ is related to the small number of photons in this region. On average, $99.8\%$ of the Cherenkov photon content of 100$\,$GeV gamma-ray showers falls in the region $\theta < 30\deg$. This figure again confirms the quality of the proposed model and ensures its adequacy to be employed in both the aforementioned regimes (a) and (b). \section{Conclusion} \label{sec:conclusion} Understanding and describing the angular distribution of Cherenkov photons in air showers is of great importance in current experimental astrophysics. An exact model has been derived in Section~\ref{sec:model:exa} to describe the angular distribution of photons in terms of the unknown energy and angular distributions of electrons. In Section~\ref{sec:model:app}, successive approximations to this exact model have led to a factorized form for the angular distributions of Cherenkov photons: a first term, $I$, depending on the maximum Cherenkov emission angle, and a second term, $K$, depending on the energy and angular distributions of electrons. A simple parametric form has been proposed to describe this second term, overcoming the necessity of describing the two-dimensional energy and angular distribution of electrons.
Parameters of this model have been obtained by fitting Monte Carlo simulations in Section~\ref{sec:parametrization}. The direct comparison of the parametrization and Monte Carlo simulations in Section~\ref{sec:results} has shown the excellent capability of the model to describe the angular distributions of Cherenkov photons. The use of this model has many advantages, as it is able to: 1) cover both small and large angular regions, including the peak around $\theta_\text{em}$, and 2) cover a large energy interval, from hundreds of GeV to EeV energies. The parametrization presented here is therefore adequate to be employed both in the reconstruction of gamma rays and cosmic rays with IACT systems and in the study of extensive air showers with fluorescence detectors. \section*{Acknowledgments} LBA and VdS acknowledge FAPESP Project 2015/15897-1. This study was financed in part by the Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior - Brasil (CAPES) - Finance Code 001. The authors acknowledge the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing HPC resources of the SDumont supercomputer (http://sdumont.lncc.br). VdS acknowledges CNPq.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{The numerical result} I have evaluated up to 1100 digits of precision the mass-independent contribution to the electron $g$-$2$ anomaly of all the 891 diagrams in 4-loop QED, thus finalizing a twenty-year effort \cite{Laporta:2001dd,Laporta:2000dc,Laporta:2001rc,Laporta:2003jz,Laporta:2003xa,Laporta:2008sx,Laporta:2009cna} begun after the completion of the calculation of the 3-loop QED contribution \cite{Laporta:1996mq}. Having extracted the power of the fine structure constant $\alpha$ \begin{equation} a_e(\textrm{4-loop})=\aql{4}{\left(\frac{\alpha}{\pi}\right)}^4\ , \end{equation} the first digits of the result are \begin{equation}\label{ae4loop} \aql{4}=-1.912245764926445574152647167439830054060873390658725345171329848{\ldots}\ . \end{equation} The full-precision result is shown in table \ref{tab:a1100}. The result \itref{ae4loop} is in excellent agreement ($0.9\sigma$) with the numerical value \begin{equation} \aql{4}(\textrm{Ref.\cite{Aoyama:2014sxa}})=-1.91298(84)\ , \end{equation} the latest result of a really impressive pluridecennial effort\cite{Kinoshita:1979ej,Kinoshita:1979ei,Kinoshita:1981wx,Kinoshita:1981ww,Kinoshita:1981wm,Kinoshita:2005zr,Aoyama:2007dv,Aoyama:2007mn,Aoyama:2012wj,Aoyama:2014sxa}. By using the best numerical value of $a_e(\textrm{5-loop})=7.795(336) {\left(\frac{\alpha}{\pi}\right)}^5$ (Ref.\cite{Aoyama:2014sxa}), the measurement of the fine structure constant\cite{Bouchendira:2010es} \begin{equation*} \alpha^{-1}=137.035\;999\;040 (90)\ , \end{equation*} and the values of the mass-dependent QED, hadronic and electroweak contributions (see Ref.\cite{Aoyama:2014sxa} and references therein), one finds \begin{equation}\label{aeth} a_e^{\textrm{th}}=1\;159\;652\;181.664(23)(16)(763) \times 10^{-12} \ , \end{equation} where the first error comes from $\aql{5}$, the second one from the hadronic and electroweak corrections, and the last one from $\alpha$. Conversely, using the experimental measurement of $a_e$\cite{Hanneke:2010au} \begin{equation*} a_e^{\textrm{exp}}=1\;159\;652\;180.73(0.28) \times 10^{-12} \ , \end{equation*} one finds \begin{equation*} \alpha^{-1}(a_e) = 137.035\;999\;1596(27)(18)(331) \ , \end{equation*} where the errors come respectively from $a_e(\textrm{5-loop})$, the hadronic and electroweak corrections, and $a_e$. The 891 vertex diagrams contributing to $\aql{4}$ are not shown for reasons of space. They can be obtained by inserting an external photon in each possible electron line of the 104 4-loop self-mass diagrams shown in Fig.\ref{self104}, excluding the vertex diagrams with closed electron loops with an odd number of vertices, which give a null contribution because of Furry's theorem. The vertex diagrams can be arranged in 25 gauge-invariant sets (Fig.\ref{figuragau}) by classifying them according to the number of photon corrections on the same side of the main electron line and the insertions of electron loops (see Ref.\cite{Cvitanovic:1977dp} for more details on the 3-loop classification). The numerical contributions of each set, truncated to 40 digits, are listed in table \ref{tableset}. Adding separately the contributions of the diagrams without and with closed electron loops, one finds \begin{align} \aql{4}(\textrm{no closed electron loops})&= -2.176866027739540077443259355895893938670 \ ,\\ \aql{4}(\textrm{closed electron loops only}) &= \phantom{+}0.264620262813094503290612188456063884609 \ .
\end{align} The contributions of the sets 17 and 18, the sum of contributions of the sets 11 and 12, and the sum of the contributions of the sets 15 and 16 are in perfect agreement with the analytical results of Ref.\cite{Caffo:1984nm}. The contributions of all diagrams can be expressed by means of 334 master integrals belonging to 220 topologies. I have fit analytical expressions to the high-precision numerical values of all master integrals and diagram contributions by using the PSLQ algorithm\cite{PSLQ,Bailey:1999nv}. The analytical expression of $\aql{4}$ contains values of harmonic polylogarithms\cite{Remiddi:1999ew} with argument $1$, $\frac{1}{2}$, $e^{\frac{i\pi}{3}}$ , $e^{\frac{2i\pi}{3}}$, $e^{\frac{i\pi}{2}}$, a family of one-dimensional integrals of products of elliptic integrals, and the finite terms of the $\epsilon-$expansions of six master integrals belonging to the topologies 81 and 83 of Fig.\ref{self104}. Work is still in progress to fit analytically these six unknown elliptical constants. The result of the analytical fit can be written as follows: \begin{align}\label{Tall} \aql{4}=&T_0+T_2+T_3+T_4+T_5+T_6+T_7 +\sqrt{3} \left( V_{4a}+V_{6a}\right) +V_{6b}+V_{7b} +W_{6b} +W_{7b} \cr& +\sqrt{3} \left(E_{4a}+E_{5a}+E_{6a}+E_{7a}\right) +E_{6b}+E_{7b} +U\ . \end{align} The terms have been arranged in blocks with equal transcendental weight. The index number is the weight. The terms containing the ``usual'' transcendental constants are: \begin{equation} \label{T023} T_0+T_2+T_3 = \frac{1243127611}{130636800} +\frac{30180451}{25920} \Z2 -\frac{255842141}{2721600} \Z3 -\frac{8873}{3} \Z2 \ln2 \ , \end{equation} \begin{equation}\label{T4} T_4 = \dfrac{6768227}{2160} \Z4 +\frac{19063}{360} \Z2 \ln^2 2 +\frac{12097}{90}\left( \A4 +\frac{1}{24} \ln^4 2 \right) \ , \end{equation} \begin{align}\label{T5} T_5 =& -\frac{2862857}{6480} \Z5 -\frac{12720907}{64800} \Z3 \Z2 -\frac{221581}{2160} \Z4 \ln2 \nonumber \\\cr& +\frac{9656}{27} \left(\A5 +\frac{1}{12} \Z2 \ln^3 2 -\frac{1}{120} \ln^5 2\right) \ , \end{align} \begin{align}\label{T6} T_6 =& \frac{191490607}{46656} \Z6 +\frac{10358551}{43200} \zeta^2(3) -\frac{40136}{27} \A6 +\frac{26404}{27} \B6 \nonumber \\\cr& -\frac{700706}{675} \A4 \Z2 -\frac{26404}{27} \A5 \ln2 +\frac{26404}{27} \Z5 \ln2 -\frac{63749}{50} \Z3 \Z2 \ln2 \\\cr& -\frac{40723}{135}\Z4 \ln^2 2 +\frac{13202}{81} \Z3 \ln^3 2 -\frac{253201}{2700} \Z2 \ln^4 2 +\frac{7657}{1620} \ln^6 2 \ , \nonumber \end{align} \begin{align}\label{T7} T_7 =& \frac{2895304273}{435456} \Z7 +\frac{670276309}{193536} \Z4 \Z3 +\frac{85933}{63} \A4 \Z3 +\frac{7121162687}{967680} \Z5 \Z2 \nonumber \\\cr& -\frac{142793}{18} \A5 \Z2 -\frac{195848}{21} \A7 +\frac{195848}{63} \B7 -\frac{116506}{189} \D7 \nonumber \\\cr& -\frac{4136495}{384} \Z6 \ln2 -\frac{1053568}{189} \A6 \ln2 +\frac{233012}{189} \B6 \ln2 +\frac{407771}{432} \zeta^2(3) \ln2 \\\cr& -\frac{8937}{2} \A4 \Z2 \ln2 +\frac{833683}{3024} \Z5 \ln^2 2 -\frac{3995099}{6048} \Z3 \Z2 \ln^2 2 -\frac{233012}{189} \A5 \ln^2 2 \nonumber \\\cr& +\frac{1705273}{1512} \Z4 \ln^3 2 +\frac{602303}{4536} \Z3 \ln^4 2 -\frac{1650461}{11340}\Z2 \ln^5 2 +\frac{52177}{15876} \ln^7 2 \ . 
\nonumber \end{align} The terms containing harmonic polylogarithms of $e^{\frac{i\pi}{3}}$, $e^{\frac{2i\pi}{3}}$: \begin{align} V_{4a}=\label{V4a} -\frac{14101}{480} \Cl4 -\frac{169703}{1440} \Z2 \Cl2\ , \end{align} \begin{align}\label{V6a} V_{6a}=& \frac{494}{27} \iha{0,0,0,1,-1,-1} +\frac{494}{27} \ihb{0,0,0,1,-1,1} +\frac{494}{27} \ihb{0,0,0,1,1,-1} \nonumber \\\cr& + {19} \ihb{0,0,1,0,1,1} +\frac{437}{12} \ihb{0,0,0,1,1,1} +\frac{29812}{297} \Cl6 \nonumber \\\cr& +\frac{4940}{81} \A4 \Cl2 -\frac{520847}{69984} \Z5 \pi -\frac{129251}{81} \Z4 \Cl2 \nonumber \\\cr& -\frac{892}{15} \ihb{0,1,1,-1} \Z2 -\frac{1784}{45} \iha{0,1,1,-1} \Z2 +\frac{1729}{54} \Z3 \iha{0,1,-1} \\\cr& +\frac{1729}{36} \Z3 \ihb{0,1,1} +\frac{837190}{729} \Cl4 \Z2 +\frac{25937}{4860} \Z3 \Z2 \pi \nonumber \\\cr& -\frac{223}{243} \Z4 \pi \ln2 +\frac{892}{9} \iha{0,1,-1} \Z2 \ln2 +\frac{446}{3} \ihb{0,1,1} \Z2 \ln2 \nonumber \\\cr& -\frac{7925}{81} \Cl2 \Z2 \ln^2 2 +\frac{1235}{486} \Cl2 \ln^4 2\ , \nonumber \end{align} \begin{align}\label{V6b} V_{6b}= \frac{13487}{60} \rha{0,0,0,1,0,1} +\frac{13487}{60} \Cl4 \Cl2 +\frac{136781}{360} {\mathrm{Cl^2_2}\left(\frac{\pi}{3}\right)} \Z2 \ , \end{align} \begin{align}\label{V7b} V_{7b}&= \frac{651}{4} \rha{0,0,0,1,0,1,-1} + {651} \rha{0,0,0,0,1,1,-1} -\frac{17577}{32} \rhb{0,0,1,0,0,1,1} \nonumber \\\cr& -\frac{87885}{64} \rhb{0,0,0,1,0,1,1} -\frac{17577}{8} \rhb{0,0,0,0,1,1,1} +\frac{651}{4} \Cl4 \iha{0,1,-1} \nonumber \\\cr& +\frac{1953}{8} \Cl4 \ihb{0,1,1} +\frac{31465}{176}\Cl6 \pi +\frac{211}{4} \rha{0,1,0,1,-1} \Z2 \\\cr& +\frac{211}{2} \rha{0,0,1,1,-1} \Z2 +\frac{1899}{16} \rhb{0,1,0,1,1} \Z2 +\frac{1899}{8} \rhb{0,0,1,1,1} \Z2 \nonumber \\\cr& +\frac{211}{4} \iha{0,1,-1} \Cl2 \Z2 +\frac{633}{8} \ihb{0,1,1} \Cl2 \Z2 \ . \nonumber \end{align} The terms containing harmonic polylogarithms of $e^{\frac{i\pi}{2}}$: \begin{align}\label{W6b} W_{6b} =& -\frac{28276}{25} \Z2 \cat2^2\ , \\\cr W_{7b} =&104\biggl( {4} \rhe{0,1,0,1,1} \Z2 +{4} \ihe{0,1,1} \cat2 \Z2 -{2} \cat4 \Z2 \pi \\\cr& + {\mathrm{Cl}^2_{2}\left(\frac{\pi}{2}\right)} \Z2 \ln2\ \biggr)\ . 
\nonumber \end{align} The terms containing elliptic constants: \begin{equation}\label{E4a} E_{4a} =\pi\left( -\frac{28458503}{691200} B_3 +\frac{250077961}{18662400} C_3 \right)\ , \end{equation} \begin{equation}\label{E5I} E_{5a} = \frac{483913}{77760} \pi \intaxonetwos{0,0,1} \ , \end{equation} \begin{align}\label{E6a} E_{6a}=&\pi\biggl( \frac{4715}{1944} \ln2\; \intaxonetwos{0,0,1} +\frac{270433}{10935} \intaxonetwos{0,2,0} -\frac{188147}{4860} \intaxonetwos{0,1,1} +\frac{188147}{12960} \intaxonetwos{0,0,2} \biggr)\ , \end{align} \begin{equation}\label{E6b} E_{6b}= -\frac{4715}{1458} \Z2 \intaxoneones{0,0,1}\ , \end{equation} \begin{align}\label{E7I} E_{7a}=&\pi\biggl( \frac{826595}{248832} \Z2 \intaxonetwos{0,0,1} -\frac{5525}{432} \ln2\; \intaxonetwos{0,0,2} +\frac{5525}{162} \ln2\; \intaxonetwos{0,1,1} \nonumber \\\cr& -\frac{5525}{243} \ln2\; \intaxonetwos{0,2,0} +\frac{526015}{248832} \intaxonetwos{0,0,3} -\frac{4675}{768} \intaxonetwos{0,1,2} +\frac{1805965}{248832} \intaxonetwos{0,2,1} \nonumber \\\cr& -\frac{3710675}{1119744} \intaxonetwos{0,3,0} -\frac{75145}{124416} \intaxonetwos{1,0,2} -\frac{213635}{124416} \intaxonetwos{1,1,1} +\frac{168455}{62208} \intaxonetwos{1,2,0} \\\cr& +\frac{75145}{248832} \intaxonetwos{2,0,1} +\frac{69245}{124416} \intaxonetwos{2,1,0} \biggr)\ , \nonumber \end{align} \begin{align}\label{E7b} E_{7b}=&\Z2\left( \frac{2541575}{82944} \intaxoneones{0,0,2} -\frac{556445}{6912} \intaxoneones{0,1,1} +\frac{54515}{972} \intaxoneones{0,2,0} -\frac{75145}{20736} \intaxoneones{1,0,1} \right)\ . \end{align} The term containing the $\epsilon^0$ coefficients of the $\epsilon$-expansion of six master integrals (see $f,f',f'',g,g',g''$ of Fig.\ref{unkmas}): \begin{equation} U =\label{UU} -\frac{541}{300} C_{81a} -\frac{629}{60} C_{81b} +\frac{49}{3} C_{81c} -\frac{327}{160} C_{83a} +\frac{49}{36} C_{83b} +\frac{37}{6} C_{83c}\ . \end{equation} The numerical values of \eqrefb{T023}{UU} are listed in Table \ref{tableter}. In the above expressions $\zeta(n)=\sum_{i=1}^\infty i^{-n}$, $a_n=\sum_{i=1}^\infty 2^{-i}\;i^{-n}$, $b_6=H_{0,0,0,0,1,1}\left(\frac{1}{2}\right)$, $b_7=H_{0,0,0,0,0,1,1}\left(\frac{1}{2}\right)$, $d_7=H_{0,0,0,0,1,-1,-1}(1)$, and $\cl{n}{\theta}=\mathrm{Im} {\mathrm{Li}}_n (e^{i\theta})$. $H_{i_1,i_2,{\ldots} }(x)$ are the harmonic polylogarithms. The integrals $f_j$ are defined as follows: \begin{eqnarray}\label{fdef} f_1(i,j,k)&=&\int_1^9 ds \; D_1^2(s) \left(s-\frac{9}{5}\right) \ln^i\left(9-s\right) \ln^j\left(s-1\right) \ln^k\left(s\right) \ , \nonumber \\\cr f_2(i,j,k)&=&\int_1^9 ds\; D_1(s) \mathrm{Re}\left(\sqrt{3} D_2(s)\right) \left(s-\frac{9}{5}\right) \ln^i\left(9-s\right) \ln^j\left(s-1\right) \ln^k\left(s\right) \ , \\\cr D_1(s)&=&\frac{2}{\sqrt{(\sqrt{s}+3)(\sqrt{s}-1)^3}}K\left(\frac{(\sqrt{s}-3)(\sqrt{s}+1)^3}{(\sqrt{s}+3)(\sqrt{s}-1)^3}\right)\ , \nonumber \\\cr D_2(s)&=&\frac{2}{\sqrt{(\sqrt{s}+3)(\sqrt{s}-1)^3}}K\left(1-\frac{(\sqrt{s}-3)(\sqrt{s}+1)^3}{(\sqrt{s}+3)(\sqrt{s}-1)^3}\right)\ ; \nonumber \end{eqnarray} $K(x)$ is the complete elliptic integral of the first kind. Note that $D_1(s)=2J_2^{(1,9)}(s)$, with $J_2^{(1,9)}$ defined in Eq.(A.12) of Ref.\cite{Laporta:2004rb}. The integrals $f_1(0,0,0)$ and $f_2(0,0,0)$ were studied in Ref.\cite{Laporta:2008sx}.
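As a minimal illustration of these ingredients (not the production setup, which relies on the dedicated multiprecision machinery described below), the integral $f_1(0,0,0)$ can be evaluated and a toy PSLQ fit reproduced with Python/mpmath; the sketch assumes that the argument of $K$ is the parameter $m$, matching the convention of mpmath's \texttt{ellipk}:
\begin{verbatim}
from mpmath import mp, mpf, sqrt, ellipk, quad, pslq, zeta, pi

mp.dps = 30  # modest working precision, for illustration only

def D1(s):
    # D_1(s) as defined above; ellipk(m) is the complete elliptic
    # integral of the first kind in the parameter convention
    r = sqrt(s)
    num = (r - 3) * (r + 1)**3
    den = (r + 3) * (r - 1)**3
    return 2 / sqrt(den) * ellipk(num / den)

# f_1(0,0,0); the log^2 endpoint singularity at s = 1 is integrable
f1 = quad(lambda s: D1(s)**2 * (s - mpf(9)/5), [1, 9])
print(f1)

# toy integer-relation "fit": recover 6 zeta(2) - pi^2 = 0 via PSLQ
print(pslq([zeta(2), pi**2]))  # [6, -1], up to an overall sign
\end{verbatim}
At 30 digits this only reproduces low-precision values; the fits reported in this paper require runs with thousands of digits and bases of hundreds of constants.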
The constants $A_3$, $B_3$ and $C_3$, defined in Ref.\cite{Laporta:2008sx}, admit the hypergeometric representations: \begin{align}\label{cdef} A_3=\int_0^1 dx &\dfrac{K_c(x) K_c(1-x)}{\sqrt{1-x}}= \dfrac{2\pi^\frac{3}{2}}{3}\left( \dfrac{{\Gamma^2(\frac{7}{6})}{\Gamma(\frac{1}{3})}}{{\Gamma^2(\frac{2}{3})}{\Gamma(\frac{5}{6})}} {}_4F_3\left(\begin{smallmatrix} {{\frac{1}{6}\;\frac{1}{3}\;\frac{1}{3}\;\frac{1}{2}}}\\ {{\frac{5}{6}\;\frac{5}{6}\;\frac{2}{3}}}\end{smallmatrix}; 1\right) - \dfrac{{\Gamma^2(\frac{5}{6})}{\Gamma(-\frac{1}{3})}}{{\Gamma^2(\frac{1}{3})}{\Gamma(\frac{1}{6})}} {}_4F_3\left(\begin{smallmatrix} {{\frac{1}{2}\;\frac{2}{3}\;\frac{2}{3}\;\frac{5}{6}}}\\ {{\frac{7}{6}\;\frac{7}{6}\;\frac{4}{3}}}\end{smallmatrix}; 1\right) \right)\ ,\\\cr B_3=\int_0^1 dx &\dfrac{K_c^2(x)}{\sqrt{1-x}} = \dfrac{4\pi^\frac{3}{2}}{3}\left( \dfrac{{\Gamma^2(\frac{7}{6})}{\Gamma(\frac{1}{3})}}{{\Gamma^2(\frac{2}{3})}{\Gamma(\frac{5}{6})}} {}_4F_3\left(\begin{smallmatrix} {{\frac{1}{6}\;\frac{1}{3}\;\frac{1}{3}\;\frac{1}{2}}}\\ {{\frac{5}{6}\;\frac{5}{6}\;\frac{2}{3}}}\end{smallmatrix}; 1\right) + \dfrac{{\Gamma^2(\frac{5}{6})}{\Gamma(-\frac{1}{3})}}{{\Gamma^2(\frac{1}{3})}{\Gamma(\frac{1}{6})}} {}_4F_3\left(\begin{smallmatrix} {{\frac{1}{2}\;\frac{2}{3}\;\frac{2}{3}\;\frac{5}{6}}}\\ {{\frac{7}{6}\;\frac{7}{6}\;\frac{4}{3}}}\end{smallmatrix}; 1\right) \right) \ , \\\cr C_3=\int_0^1 dx &\dfrac{E_c^2(x)}{\sqrt{1-x}}= \frac{486\pi^2}{1925}\; {}_7F_6\left(\begin{smallmatrix} {{\frac{7}{4}\;-\frac{1}{3}\;\frac{1}{3}\;\frac{2}{3}\;\frac{4}{3}\;\frac{3}{2}\;\frac{3}{2}}}\\ {{\frac{3}{4}\;1\;\frac{7}{6}\;\frac{11}{6}\;\frac{13}{6}\;\frac{17}{6}}} \end{smallmatrix}; 1\right) \ , \\\cr K_c(x)&=\frac{2\pi}{\sqrt{27}} \Fcub{x} \ , \qquad E_c(x)=\frac{2\pi}{\sqrt{27}} \Ecub{x} \ . \end{align} $A_3$ appears only in the coefficients of the $\epsilon$-expansion of master integrals, and cancels out in the diagram contributions. Fig.\ref{unkmas} shows the fundamental elliptic master integrals which contain irreducible combinations of $B_3$, $C_3$ and $f_m(i,j,k)$. The analytical fits of $V_{6b}$, $V_{6a}$, $V_{7b}$, $V_{7i}$ and the master integrals involved needed PSLQ runs with bases of $\sim 500$ elements calculated with $9600$ digits of precision. The multi-pair parallel version\cite{Bailey:1999nv} of the PSLQ algorithm has been essential to work out these difficult analytical fits in reasonable times. The method used for the computation of the master integrals with precisions up to 9600 digits is essentially based on the difference equation method\cite{Laporta:2001dd,Laporta:2000dc} and the differential equation method\cite{Kotikov:1990kg,Remiddi:1997ny,Gehrmann:1999as}. This method and the procedures used for the extraction of the $g$-$2$ contribution, renormalization, reduction to master integrals, and generation and numerical solution of systems of difference and differential equations (all based on upgrades of the program {\tt SYS} of Ref.\cite{Laporta:2001dd}) will be thoroughly described elsewhere. \setcounter{secnumdepth}{0} \section{Acknowledgments} The author wants to thank Antonino Zichichi and Luca Trentadue for having provided support to this work, and Ettore Remiddi for continuous support and encouragement. The main part of the calculations was performed on the cluster ZBOX2 of the Institute for Theoretical Physics of Zurich and on the Schr\"odinger supercomputer of the University of Zurich. The author is deeply indebted to Thomas Gehrmann for having allowed him to use these facilities.
Some parts of the calculations were done on computers of the Department of Physics and INFN in Bologna. The author thanks Michele Caffo, Franco Martelli, Sandro Turrini and Vincenzo Vagnoni for providing suitable desktop computers in Bologna.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Crowd counting~\cite{kang2018beyond,gao2020cnn} is a fundamental computer vision task that aims to automatically estimate the number of people in unconstrained scenes. Over the past decade, this task has attracted a lot of research interest due to its huge application potential (e.g., traffic management \cite{zhang2017understanding,liu2020dynamic} and video surveillance \cite{xiong2017spatiotemporal}). During the recent COVID-19 pandemic \cite{velavan2020covid}, crowd counting has also been employed widely for social distancing monitoring \cite{ghodgaonkar2020analyzing}. \begin{figure*} \centering \includegraphics[width=1.945\columnwidth]{RGBT_sample2.pdf} \vspace{0mm} \caption{Visualization of RGB-thermal images in our RGBT-CC benchmark. When only using the optical information of RGB images, we cannot effectively recognize pedestrians in poor illumination conditions, as shown in (a) and (b). When only utilizing thermal images, some heated negative objects are hard to distinguish, as shown in (c) and (d). } \label{fig:RGBT_sample} \vspace{-2mm} \end{figure*} In the literature, numerous models \cite{zhang2016single,sindagi2017generating,liu2018crowd,zhang2019attentional,bai2020adaptive, li2018csrnet,liu2019crowd,ma2019bayesian,liu2019context,liu2020semi} have been proposed for crowd counting. Despite substantial progress, it remains a very challenging problem that demands rich information to generate pixel-wise crowd density maps. However, most previous methods only utilized the optical information extracted from RGB images and may fail to accurately recognize the semantic objects in unconstrained scenarios. For instance, as shown in Fig.~\ref{fig:RGBT_sample}-(a,b), pedestrians are almost invisible in poor illumination conditions (such as backlight and night) and are hard to detect directly in RGB images. Moreover, some human-shaped objects (e.g., tiny pillars and blurry traffic lights) have similar appearances to pedestrians \cite{zhang2016faster} and are easily mistaken for people when relying solely on optical features. In general, RGB images alone cannot guarantee high-quality density maps, and more comprehensive information should be explored for crowd counting. Fortunately, we observe that thermal images can greatly facilitate distinguishing potential pedestrians from cluttered backgrounds. Recently, thermal cameras have been extensively popularized due to the COVID-19 pandemic, which increases the feasibility of thermal-based crowd counting. However, thermal images are not perfect. As shown in Fig.~\ref{fig:RGBT_sample}-(c,d), some hard negative objects (e.g., heated walls and lamps) are also highlighted in thermal images, but they can be eliminated effectively with the aid of optical information. Overall, RGB images and thermal images are highly complementary. To the best of our knowledge, no attempts have been made to simultaneously explore RGB and thermal images for estimating crowd counts. In this work, to promote further research in this field, we propose a large-scale benchmark ``RGBT Crowd Counting ({\bf{RGBT-CC}})'', which contains 2,030 pairs of RGB-thermal images and 138,389 annotated pedestrians. Moreover, our benchmark makes significant advances in terms of diversity and difficulty, as these RGBT images were captured from unconstrained scenes (e.g., malls, streets, train stations, etc.) with various illumination (e.g., day and night).
Nevertheless, capturing the complementarities of multimodal data (i.e., RGB and thermal images) is non-trivial. Conventional methods \cite{lian2019density,zhou2020cascaded,piao2019depth,jiang2020emph,zhai2020bifurcated,sun2019leveraging} either feed the combination of multimodal data into deep neural networks or directly fuse their features, which cannot fully exploit the complementary information. In this work, to facilitate multimodal crowd counting, we introduce a cross-modal collaborative representation learning framework, which incorporates multiple modality-specific branches, a modality-shared branch, and an Information Aggregation-Distribution Module (IADM) to fully capture the complementarities among different modalities. Specifically, our IADM is integrated with two collaborative components, including {\color{red}\bf{i)}} an Information Aggregation Transfer that dynamically aggregates the contextual information of all modality-specific features to enhance the modality-shared feature and {\color{red}\bf{ii)}} an Information Distribution Transfer that propagates the modality-shared information to symmetrically refine every modality-specific feature for further representation learning. Furthermore, the tailor-designed IADM is embedded in different layers to learn the cross-modal representation hierarchically. Consequently, the proposed framework can generate knowledgeable features with comprehensive information, thereby yielding high-quality crowd density maps. It is worth noting that our method has three appealing properties. {\bf{First}}, thanks to the dual information propagation mechanism, IADM can effectively capture the multimodal complementarities to facilitate the crowd counting task. {\bf{Second}}, as a plug-and-play module, IADM can be easily incorporated into various backbone networks for end-to-end optimization. {\bf{Third}}, our framework is universal for multimodal crowd counting. Besides RGBT counting, the proposed method can also be directly applied to RGB-Depth counting. In summary, the major contributions of this work are three-fold: \begin{itemize} \vspace{-2mm} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \setlength{\parskip}{0pt} \item We introduce a large-scale RGBT benchmark to promote the research of crowd counting, in which 138,389 pedestrians are annotated in 2,030 pairs of RGB-thermal images captured in unconstrained scenarios. \item We develop a cross-modal collaborative representation learning framework, which is capable of fully learning the complementarities among different modalities with an Information Aggregation-Distribution Module. \item Extensive experiments conducted on two multimodal benchmarks (i.e., RGBT-CC and ShanghaiTechRGBD \cite{lian2019density}) demonstrate that the proposed method is effective and universal for multimodal crowd counting. \end{itemize} \section{Related Works} {\bf{Crowd Counting Benchmarks:}} In recent years, we have witnessed the rapid evolution of crowd counting benchmarks. UCSD \cite{chan2008privacy} and WorldExpo \cite{zhang2015cross} are two early datasets that respectively contain 2,000 and 3,980 video frames with low diversities and low-medium densities. To alleviate the limitations of the aforementioned datasets, Zhang \textit{et al.}~\cite{zhang2016single} collected 1,198 images with 330,165 annotated heads, which are of better diversity in terms of scenes and density levels. Subsequently, three large-scale datasets were proposed in succession.
For instance, UCF-QNRF \cite{idrees2018composition} is composed of 1,535 high-density images with a total of 1.25 million pedestrians. JHU-CROWD++ \cite{sindagi2020jhu} contains 4,372 images with 1.51 million annotated heads, while NWPU-Crowd \cite{gao2020nwpu} consists of 2.13 million annotations in 5,109 images. Nevertheless, all the above benchmarks are based on RGB optical images, in which almost all previous methods fail to recognize the invisible pedestrians in poor illumination conditions. Recently, Lian \textit{et al.}~\cite{lian2019density} utilized a stereo camera to capture 2,193 depth images that are insensitive to illumination. However, these images are coarse in outdoor scenes due to the limited depth ranges (0{\small{$\sim$}}20 meters), which seriously restricts their deployment scopes. Fortunately, we find that thermal images are robust to illumination and have a large perception distance, and thus can help to recognize pedestrians in various scenarios. Therefore, we propose the first RGBT crowd counting dataset in this work, hoping that it would greatly promote the future development of this field. {\bf{Crowd Counting Approaches:}} As a classic problem in computer vision, crowd counting has been studied extensively. Early works \cite{Chan2009bayesian,chen2012feature,idrees2013multi} directly predict the crowd count with regression models, while subsequent methods usually generate crowd density maps and then accumulate all pixels' values to obtain the final counts. Specifically, a large number of deep neural networks with various architectures \cite{fu2015fast,zhang2015cross,wang2015deep,walach2016learning,sam2017switching,kang2017incorporating,sindagi2017generating,li2018csrnet,zhang2019relational,liu2019adcrowdnet,qiu2019crowd,jiang2019crowd,yuan2020crowd} and loss functions \cite{cao2018scale,idrees2018composition,ma2019bayesian,liu2019crowd} are developed for still image-based crowd counting. Meanwhile, some methods \cite{zhang2019wide,xiong2017spatiotemporal,ren2018fusing,liu2020estimating} perform crowd estimation from multi-view images or surveillance videos. However, all aforementioned methods estimate crowd counts with the optical information of RGB images/videos and are not effective when working in poor illumination conditions. Recently, depth images are used as auxiliary information to count and locate human heads~\cite{lian2019density}. Nevertheless, depth images are coarse in outdoor scenarios, thus depth-based methods have relatively limited deployment scopes. {\bf{Multi-Modal Representation Learning:}} Multi-modal representation learning aims at comprehending and representing cross-modal data through machine learning. There are many strategies for cross-modal feature fusion. Some simple fusion methods~\cite{kiela2014learning,lian2019density,sun2019leveraging,fu2020jl} obtain a fused feature with the operations of element-wise multiplication/addition or concatenation in an ``Early Fusion'' or ``Late Fusion'' manner. To exploit the advantages of both early and late fusion, various two-stream-based models~\cite{wu2020deepdualmapper,piao2020a2dele,zhao2019contrast,zhang2019attend} propose to fuse hierarchical cross-modal features, achieving fully representative shared features. Besides, a few approaches~\cite{lu2020cross} explore the use of a shared branch, mapping the shared information to common feature spaces.
Furthermore, some recent works~\cite{fan2020bbsnet,HDFNet-ECCV2020,zhang2020uc} are presented to address RGBD saliency detection, which is also a cross-modal dense prediction task like RGBT crowd counting. However, most of these works perform one-way information transfer, using the depth modality only as auxiliary information to help the representation learning of the RGB modality. In this work, we propose a symmetric dynamic enhancement mechanism that can take full advantage of the modal complementarities in crowd counting. \begin{figure}[t] \centering \includegraphics[width=0.375\textwidth]{RBGT-CC5.pdf} \vspace{-1mm} \caption{The histogram of the people distribution in the proposed RGBT Crowd Counting benchmark.} \vspace{0mm} \label{fig:distribution} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=1.9\columnwidth]{framework4.pdf} \vspace{-1mm} \caption{The architecture of the proposed cross-modal collaborative representation learning framework for multimodal crowd counting. Specifically, our framework consists of multiple modality-specific branches, a modality-shared branch, and an Information Aggregation-Distribution Module (IADM). } \vspace{-3mm} \label{fig:framework} \end{figure*} \begin{table} \caption{The training, validation and testing sets of our RGBT-CC benchmark. In each cell, the first value is the number of images, while the second value denotes the average count per image.} \vspace{0mm} \centering \begin{tabular}{c|c|c|c} \hline & {{Training}} & {{Validation}} & {{Testing}} \\ \hline \hline \#Bright & {~~}510 / 65.66 & {~~}97 / 63.02 & 406 / 73.39 \\ \#Dark{~~~} & {~~}520 / 62.52 & 103 / 67.74 & 394 / 74.88 \\ \#Total{~~} & 1030 / 64.07 & 200 / 65.45 & 800 / 74.12 \\ \hline Scene & \multicolumn{3}{c}{malls, streets, train/metro stations, etc} \\ \hline \end{tabular} \label{tab:data_split} \vspace{-1mm} \end{table} \section{RGBT Crowd Counting Benchmark} To the best of our knowledge, there is currently no public RGBT dataset for crowd counting. To promote the future research of this task, we propose a large-scale RGBT Crowd Counting ({\bf{RGBT-CC}}) benchmark. Specifically, we first use an optical-thermal camera to take a large number of RGB-thermal images in various scenarios (e.g., malls, streets, playgrounds, train stations, metro stations, etc.). Due to the different types of electronic sensors, the original RGB images have a high resolution of 2,048$\times$1,536 with a wider field of view, while the thermal images have a standard resolution of 640$\times$480 with a smaller field of view. On the basis of the coordinate mapping relation, we crop the corresponding RGB regions and resize them to 640$\times$480. We then choose 2,030 pairs of representative RGB-thermal images for manual annotation. Among these samples, 1,013 pairs were captured in bright illumination and 1,017 pairs in darkness. A total of 138,389 pedestrians are marked with point annotations, with an average of 68 people per image. The detailed distribution of people is shown in Fig.~\ref{fig:distribution}. Finally, the proposed RGBT-CC benchmark is randomly divided into three parts. As shown in Table~\ref{tab:data_split}, 1,030 pairs are used for training, 200 for validation, and 800 for testing. Compared with those Internet-based datasets \cite{idrees2018composition,gao2020nwpu,sindagi2020jhu} with serious bias, our RGBT-CC dataset has a crowd density distribution closer to that of real-world cities, since our images are captured in urban scenes with various densities.
Therefore, our dataset has wider applications for urban crowd analysis. \section{Method} In this work, we propose a cross-modal collaborative representation learning framework for multimodal crowd counting. Specifically, multiple modality-specific branches, a modality-shared branch, and an Information Aggregation-Distribution Module are incorporated to fully capture the complementarities among different modalities with a dual information propagation paradigm. In this section, we adopt the representative CSRNet~\cite{li2018csrnet} as a backbone network to develop our framework for RGBT crowd counting. It is worth noting that our framework can be implemented with various backbone networks (e.g., MCNN \cite{zhang2016single}, SANet \cite{cao2018scale}, and BL \cite{ma2019bayesian}), and is also universal for multimodal crowd counting, as verified in Section~\ref{RGBD-counting-exp} by directly applying it to the ShanghaiTechRGBD~\cite{lian2019density} dataset. \subsection{Overview} As shown in Fig.~\ref{fig:framework}, the proposed RGBT crowd counting framework is composed of three parallel backbones and an Information Aggregation-Distribution Module (IADM). Specifically, the top and bottom backbones are developed for modality-specific (i.e. RGB images and thermal images) representation learning, while the middle backbone is designed for modality-shared representation learning. To fully exploit the multimodal complementarities, our IADM dynamically transfers the specific-shared information to collaboratively enhance the modality-specific and modality-shared representations. Consequently, the final modality-shared feature contains comprehensive information and facilitates generating high-quality crowd density maps. Given an RGB image $R$ and a thermal image $T$, we first feed them into different branches to extract modality-specific features, which maintain the specific information of individual modality. The modality-shared branch takes a zero-tensor as input\footnote{When the input of modality-shared branch is set to 0, Eq.\ref{eq:aggregation} at Conv$1$\_$2$ is simplified as $\hat{F}_s^{1,2} = I_{r}^{1,2}{\odot}{Conv_{1*1}}(I_{r}^{1,2}) + I_{t}^{1,2}{\odot}{Conv_{1*1}}(I_{t}^{1,2})$. In other words, the initial modality-shared feature is generated by directly aggregating the information of RGB and thermal features.} and aggregates the information of modality-specific features hierarchically. As mentioned above, each branch is implemented with CSRNet, which consists of (1) a front-end block with the first ten convolutional layers of VGG16 \cite{simonyan2014very} and (2) a back-end block with six dilated convolutional layers. More specifically, the modality-specific branches are based on the CSRNet front-end block, while the modality-shared branch is based on the last 14 convolutional layers of CSRNet. In our work, the $j$-th dilated convolutional layer of back-end block is renamed as ``Conv$5$\_$j$''. For convenience, the RGB, thermal, and modality-shared features at Conv$i$\_$j$ layer are denoted as $F_r^{i,j}$, $F_t^{i,j}$, and $F_s^{i,j}$, respectively. \begin{figure*} \centering \includegraphics[width=1.70\columnwidth]{IAD6.pdf} \vspace{-1mm} \caption{{\bf{(a) Information Aggregation Transfer:}} we first extract the contextual information $I_r$/$I_t$ from modality-specific features $F_r$/$F_t$, and then propagate them dynamically to enhance the modality-shared feature ${F}_s$. 
{\bf{(b) Information Distribution Transfer:}} the contextual information $\hat{I}_s$ of the enhanced feature $\hat{F}_s$ is distributed adaptively to each modality-specific feature for feedback refinement. ``+'' denotes element-wise addition and ``-'' refers to element-wise subtraction. } \vspace{-2mm} \label{fig:IAD} \end{figure*} After feature extraction, we employ the Information Aggregation-Distribution Module described in Section \ref{IADM} to learn cross-modal collaborative representation. To exploit the multimodal information hierarchically, the proposed IADM is embedded after different layers, such as Conv$1$\_$2$, Conv$2$\_$2$, Conv$3$\_$3$, and Conv$4$\_$3$. Specifically, after Conv$i$\_$j$, IADM dynamically transfers complementary information among modality-specific and modality-shared features for mutual enhancement. This process can be formulated as follows: { \setlength\abovedisplayskip{5pt} \setlength\belowdisplayskip{5pt} \begin{equation} \hat{F}_s^{i,j}, \hat{F}_r^{i,j}, \hat{F}_t^{i,j} = \text{IADM}(F_s^{i,j}, F_r^{i,j}, F_t^{i,j}), \end{equation} }% where $\hat{F}_s^{i,j}$, $\hat{F}_r^{i,j}$, and $\hat{F}_t^{i,j}$ are the enhanced features of ${F}_s^{i,j}$, ${F}_r^{i,j}$, and ${F}_t^{i,j}$ respectively. These features are then fed into the next layer of each branch to further learn high-level multimodal representations. Thanks to the tailor-designed IADM, the complementary information of the input RGB image and the thermal image is progressively transferred into the modality-shared representations. The final modality-shared feature ${F}_s^{5,6}$ contains rich information. Finally, we directly feed ${F}_s^{5,6}$ into a 1*1 convolutional layer to predict the crowd density map $M$. \subsection{Collaborative Representation Learning}\label{IADM} As analyzed in Section \ref{intro}, RGB images and thermal images are highly complementary. To fully capture their complementarities, we propose an Information Aggregation and Distribution Module (IADM) to collaboratively learn cross-modal representation with a dual information propagation mechanism. Specifically, our IADM is integrated with two collaborative transfers, which dynamically propagate the contextual information to mutually enhance the modality-specific and modality-shared representations. {\bf{1) Contextual Information Extraction: }} In this module, we propagate the contextual information rather than the original features, because the latter would cause excessive mixing of the specific and shared features. To this end, we employ an $L$-level pyramid pooling layer to extract the contextual information for a given feature $F^{i,j} \in R^{h{\times}w{\times}c}$. Specifically, at the $l^{th}$ level ($l$=1,...,$L$), we apply a $2^{l-1}{\times}2^{l-1}$ max-pooling layer to generate a $\frac{h}{2^{l-1}}{\times}\frac{w}{2^{l-1}}$ feature, which is then upsampled to $h{\times}w$ with nearest neighbor interpolation. For convenience, the upsampled feature is denoted as $F^{i,j,l}$. Finally, the contextual information $I^{i,j} \in R^{h{\times}w{\times}c}$ of feature $F^{i,j}$ is computed as: { \setlength\abovedisplayskip{4pt} \setlength\belowdisplayskip{4pt} \begin{equation} I^{i,j} = {Conv}_{1*1}(F^{i,j,1} \oplus F^{i,j,2} \oplus ... \oplus F^{i,j,L}), \label{eq:infromation} \end{equation} }% where $\oplus$ denotes an operation of feature concatenation and ${Conv}_{1*1}$ is a 1*1 convolutional layer. This extraction has two advantages. First, with a larger receptive field, each position at $I^{i,j}$ contains more context.
Second, captured by different sensors, RGB images and thermal images are not strictly aligned, as shown in Figure \ref{fig:RGBT_sample}. Thanks to the translation invariance of max-pooling layers, we can eliminate the misalignment of RGB-thermal images to some extent. {\bf{2) Information Aggregation Transfer (IAT): }} In our work, IAT is proposed to aggregate the contextual information of all modality-specific features to enhance the modality-shared feature. As shown in Fig.~\ref{fig:IAD}-(a), instead of directly absorbing all information, our IAT transfers the complementary information dynamically with a gating mechanism that adaptively filters useful information. Specifically, given features $F_r^{i,j}$, $F_t^{i,j}$ and $F_s^{i,j}$, we first extract their contextual information $I_r^{i,j}$, $I_t^{i,j}$, and $I_s^{i,j}$ with Eq.~\ref{eq:infromation}. Similar to \cite{zhang2019residual,zhao2019spatiotemporal}, we then obtain two residual information terms $I_{r2s}^{i,j}$ and $I_{t2s}^{i,j}$ by computing the differences between $I_r^{i,j}$/$I_t^{i,j}$ and $I_s^{i,j}$. Finally, we apply two gating functions to adaptively propagate the complementary information for refining the modality-shared feature ${F}_s^{i,j}$. The enhanced feature $\hat{F}_s^{i,j}$ is formulated as follows: { \setlength\abovedisplayskip{4pt} \setlength\belowdisplayskip{4pt} \begin{equation} \begin{split} I_{r2s}^{i,j} & = I_r^{i,j} - I_s^{i,j}, {~~} w_{r2s}^{i,j} = {Conv_{1*1}}(I_{r2s}^{i,j}), \\ I_{t2s}^{i,j} & = I_t^{i,j} - I_s^{i,j}, {~~} w_{t2s}^{i,j} = {Conv_{1*1}}(I_{t2s}^{i,j}), \\ \hat{F}_s^{i,j} &= F_s^{i,j} + I_{r2s}^{i,j}{\odot}w_{r2s}^{i,j} + I_{t2s}^{i,j}{\odot}w_{t2s}^{i,j}, \end{split} \label{eq:aggregation} \end{equation} }% where the gating functions are implemented by convolutional layers, and $w_{r2s}^{i,j}$ and $w_{t2s}^{i,j}$ are the gating weights. $\odot$ refers to an operation of element-wise multiplication. With such a mechanism, the complementary information is effectively embedded into the modality-shared representation; thus, our method can better exploit the multimodal data.
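To make the module concrete, a minimal PyTorch sketch of the contextual information extraction (Eq.~\ref{eq:infromation}) and of this gated aggregation (Eq.~\ref{eq:aggregation}) is given below. It is an illustration under our own simplifying assumptions (e.g., a pyramid extractor shared across branches), not the reference implementation:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextExtractor(nn.Module):
    """L-level max-pool pyramid + 1x1 conv (Eq. eq:infromation)."""
    def __init__(self, channels, levels=3):
        super().__init__()
        self.levels = levels
        self.fuse = nn.Conv2d(channels * levels, channels, 1)

    def forward(self, feat):
        h, w = feat.shape[-2:]
        pyramid = []
        for l in range(1, self.levels + 1):
            s = 2 ** (l - 1)   # level 1 is the identity
            p = feat if s == 1 else F.max_pool2d(feat, s, stride=s)
            pyramid.append(F.interpolate(p, size=(h, w), mode='nearest'))
        return self.fuse(torch.cat(pyramid, dim=1))

class InformationAggregation(nn.Module):
    """Gated aggregation of RGB/thermal context into the shared
    feature (Eq. eq:aggregation); the gates are plain 1x1 convs."""
    def __init__(self, channels, levels=3):
        super().__init__()
        self.ctx = ContextExtractor(channels, levels)  # shared here (assumption)
        self.gate_r = nn.Conv2d(channels, channels, 1)
        self.gate_t = nn.Conv2d(channels, channels, 1)

    def forward(self, f_r, f_t, f_s):
        i_r, i_t, i_s = self.ctx(f_r), self.ctx(f_t), self.ctx(f_s)
        res_r, res_t = i_r - i_s, i_t - i_s    # residual information
        return f_s + res_r * self.gate_r(res_r) + res_t * self.gate_t(res_t)

# usage: f_s_hat = InformationAggregation(64)(f_r, f_t, f_s)
\end{verbatim}
The information distribution transfer described below is symmetric: it subtracts each modality-specific context from the context of the enhanced shared feature and feeds the residual back through its own gate.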
\begin{table*}[t] \centering \caption{The performance of different inputs and different representation learning approaches on our RGBT-CC benchmark.} \label{tab:ab_study} \vspace{0.5mm} \resizebox{0.675\textheight}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Input Data & Representation Learning & GAME(0) $\downarrow$ & GAME(1) $\downarrow$ & GAME(2) $\downarrow$ & GAME(3) $\downarrow$ & RMSE $\downarrow$ \\ \hline \hline RGB & - & 33.94 & 40.76 & 47.31 & 57.20 & 69.59\\ \hline T & - & 21.64 & 26.22 & 31.65 & 38.66 & 37.38 \\ \hline \multirow{6}{*}{RGBT} & Early Fusion & 20.40 & 23.58 & 28.03 & 35.51 & 35.26 \\ & Late Fusion & 19.87 & 25.60 & 31.93 & 41.60 & 35.09 \\ \cline{2-7} & W/O Gating Mechanism & 19.76 & 23.60 & 28.66 & 36.21 & 33.61 \\ & W/O Modality-Shared Feature & 18.67 & 22.67 & 27.95 & 36.04 & 33.73 \\ & W/O Information Distribution & 18.59 & 23.08 & 28.73 & 36.74 & 32.91 \\ & IADM & {\bf\color{red}17.94} & {\bf\color{red}21.44} & {\bf\color{red}26.17} & {\bf\color{red}33.33} & {\bf\color{red}30.91} \\ \hline \end{tabular} } \vspace{-1mm} \end{table*}% \begin{table*}[t] \centering \caption{The performance under different illumination conditions on our RGBT-CC benchmark. The unimodal data is directly fed into CSRNet, while the multimodal data is fed into our proposed framework based on CSRNet.} \vspace{0.5mm} \label{tab:illumination} \resizebox{0.60\textheight}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Illumination & Input Data & GAME(0) $\downarrow$ & GAME(1) $\downarrow$ & GAME(2) $\downarrow$ & GAME(3) $\downarrow$ & RMSE $\downarrow$ \\ \hline \hline \multirow{3}{*}{Brightness} & RGB & 23.49 & 30.14 & 37.47 & 48.46 & 45.40\\ & T & 25.21 & 28.98 & 34.82 & 42.25 & 40.60 \\ & RGBT & {\bf\color{red}20.36} & {\bf\color{red}23.57} & {\bf\color{red}28.49} & {\bf\color{red}36.29} & {\bf\color{red}32.57} \\ \hline \multirow{3}{*}{Darkness} & RGB & 44.72 & 51.70 & 57.45 & 66.21 & 87.81 \\ & T & 17.97 & 23.38 & 28.39 & 34.95 & 33.74 \\ & RGBT & {\bf\color{red}15.44} & {\bf\color{red}19.23} & {\bf\color{red}23.79} & {\bf\color{red}30.28} & {\bf\color{red}29.11} \\ \hline \end{tabular} } \vspace{-2mm} \end{table*}% {\bf{3) Information Distribution Transfer (IDT): }} After information aggregation, we distribute the information of the new modality-shared feature to refine each modality-specific feature. As shown in Fig.~\ref{fig:IAD}-(b), with the enhanced feature $\hat{F}_s^{i,j}$, we first extract its contextual information $\hat{I}_s^{i,j}$, which is then dynamically propagated to $F_{r}^{i,j}$ and $F_{t}^{i,j}$. Similar to IAT, two gating functions are used for information filtering. Specifically, the enhanced modality-specific features are computed as follows: { \setlength\abovedisplayskip{4pt} \setlength\belowdisplayskip{4pt} \begin{equation}\nonumber \begin{split} I_{s2r}^{i,j} &= \hat{I}_s^{i,j} - I_r^{i,j}, {~~~~~~~~~~~~~~~~~} I_{s2t}^{i,j} = \hat{I}_s^{i,j} - I_t^{i,j}, \\ w_{s2r}^{i,j} &= {Conv_{1*1}}(I_{s2r}^{i,j}), {~~~~~~~~} w_{s2t}^{i,j} = {Conv_{1*1}}(I_{s2t}^{i,j}), \\ \hat{F}_r^{i,j} &= F_r^{i,j} + I_{s2r}^{i,j}{\odot}w_{s2r}^{i,j}, {~~~}\hat{F}_t^{i,j} = F_t^{i,j} + I_{s2t}^{i,j}{\odot}w_{s2t}^{i,j}. \end{split} \end{equation} }% Finally, all enhanced features $\hat{F}_r^{i,j}$, $\hat{F}_t^{i,j}$, and $\hat{F}_s^{i,j}$ are fed into the following layers of the individual branch for further representation learning. \section{Experiments}\label{experiment} \subsection{Implementation Details \& Evaluation Metrics} In this work, the proposed method is implemented with PyTorch~\cite{paszke2019pytorch}. Here we take various models (e.g., CSRNet~\cite{li2018csrnet}, MCNN \cite{zhang2016single}, SANet \cite{cao2018scale}, and BL \cite{ma2019bayesian}) as backbones to develop multiple instances of our framework. To maintain a similar number of parameters to the original models for fair comparisons, the channel number of these backbones in our framework is respectively set to 70\%, 60\%, 60\%, and 60\% of their original values. The kernel parameters are initialized from a Gaussian distribution with zero mean and a standard deviation of 1e-2. At each iteration, a pair of 640$\times$480 RGBT images is fed into the network. The ground-truth density map is generated with geometry-adaptive Gaussian kernels \cite{zhang2016single}.
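A minimal NumPy/SciPy sketch of this geometry-adaptive generation is given below; the bandwidth rule $\sigma_i = \beta \bar{d}_i$ with $\beta=0.3$ follows \cite{zhang2016single}, while the neighbour count and the fallback bandwidth for nearly empty images are our own choices:
\begin{verbatim}
import numpy as np
from scipy.spatial import KDTree
from scipy.ndimage import gaussian_filter

def adaptive_density(points, shape, beta=0.3, k=3, fallback=15.0):
    """Sum of unit-mass Gaussians, one per annotated head.
    points: (n, 2) array of (x, y) coordinates; shape: (h, w).
    The bandwidth of each Gaussian is beta times the mean
    distance to the k nearest neighbouring heads."""
    h, w = shape
    density = np.zeros((h, w))
    n = len(points)
    if n == 0:
        return density
    if n > 1:
        k_eff = min(k + 1, n)   # +1: the query returns the point itself
        dists, _ = KDTree(points).query(points, k=k_eff)
    for i, (x, y) in enumerate(points):
        impulse = np.zeros((h, w))
        impulse[min(int(y), h - 1), min(int(x), w - 1)] = 1.0
        sigma = beta * dists[i, 1:].mean() if n > 1 else fallback
        density += gaussian_filter(impulse, sigma, mode='constant')
    return density   # sums to ~n, up to border truncation
\end{verbatim}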
The learning rate is set to 1e-5 and Adam \cite{kingma2014adam} is used to optimize our framework. Notice that the loss function of our framework is the same as that of the adopted backbone network. Following~\cite{liu2020weighing,sindagi2019multi,liu2020efficient}, we adopt the Root Mean Square Error (RMSE) as an evaluation metric. Moreover, Grid Average Mean Absolute Error (GAME \cite{guerrero2015extremely}) is utilized to evaluate the performance in different regions. Specifically, at a given level $l$, we divide the given images into $4^l$ non-overlapping regions and measure the counting error in each region. Finally, the GAME at level $l$ is computed as: { \setlength\abovedisplayskip{0.25mm} \setlength\belowdisplayskip{0.25mm} \begin{equation} GAME(l) = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{4^l} |\hat{P}_i^j - P_{i}^j|, \end{equation} }% where $N$ is the total number of testing samples, $\hat{P}_i^j$ and $P_{i}^j$ are the estimated count and the corresponding ground-truth count in the $j^{th}$ region of the $i^{th}$ image. Note that GAME(0) is equivalent to Mean Absolute Error (MAE).
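A minimal NumPy sketch of both metrics (ours, not the official evaluation code of the benchmark) is given below:
\begin{verbatim}
import numpy as np

def game(preds, gts, level):
    """GAME(l): per-image sum of absolute count errors over the
    4**l regions of a 2**l x 2**l grid, averaged over images.
    GAME(0) reduces to the MAE."""
    cells, errors = 2 ** level, []
    for pred, gt in zip(preds, gts):
        h, w = gt.shape
        err = 0.0
        for i in range(cells):
            for j in range(cells):
                rs, re = i * h // cells, (i + 1) * h // cells
                cs, ce = j * w // cells, (j + 1) * w // cells
                err += abs(pred[rs:re, cs:ce].sum() - gt[rs:re, cs:ce].sum())
        errors.append(err)
    return float(np.mean(errors))

def rmse(preds, gts):
    d = [p.sum() - g.sum() for p, g in zip(preds, gts)]
    return float(np.sqrt(np.mean(np.square(d))))
\end{verbatim}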
\subsection{Ablation Studies} We perform extensive ablation studies to verify the effectiveness of each component in our framework. In this subsection, CSRNet is utilized as the backbone network to implement our proposed method. \begin{figure*} \centering \includegraphics[width=2.1\columnwidth]{RGBT_result2.pdf} \vspace{-5mm} \caption{Visualization of the crowd density maps generated in different illumination conditions. (a) and (b) show the input RGB images and thermal images. (c) and (d) are the results of RGB-based CSRNet and thermal-based CSRNet. (e) shows the results of CSRNet that takes the concatenation of RGB and thermal images as input. (f) refers to the results of our CSRNet+IADM. The ground truths are shown in (g). We can observe that our density maps and estimated counts are more accurate than those of other methods. (\textit{Best viewed by zooming in.}) } \label{fig:RGBT_results} \vspace{0mm} \end{figure*} \begin{table*}[t] \centering \caption{Performance of different methods on the proposed RGBT-CC benchmark. All the methods in this table utilize both RGB images and thermal images to estimate the crowd counts.} \resizebox{0.525\textheight}{!}{% \begin{tabular}{c|c|c|c|c|c} \hline Backbone & GAME(0) $\downarrow$ & GAME(1) $\downarrow$ & GAME(2) $\downarrow$ & GAME(3) $\downarrow$ & RMSE $\downarrow$\\ \hline\hline UCNet \cite{zhang2020uc} & 33.96 & 42.42 & 53.06 & 65.07 & 56.31 \\ HDFNet \cite{HDFNet-ECCV2020} & 22.36 & 27.79 & 33.68 & 42.48 & 33.93 \\ BBSNet \cite{fan2020bbsnet} & 19.56 & 25.07 & 31.25 & 39.24 & 32.48 \\ MVMS \cite{zhang2019wide} & 19.97 & 25.10 & 31.02 & 38.91 & 33.97 \\ \hline {~~}MCNN{~~~~~~~~~~~~~~~} & 21.89 & 25.70 & 30.22 & 37.19 & 37.44 \\ {~~}MCNN + IADM & {\bf\color{red}19.77} & {\bf\color{red}23.80} & {\bf\color{red}28.58} & {\bf\color{red}35.11} & {\bf\color{red}30.34} \\ \hline {~~}SANet{~~~~~~~~~~~~~~~} & 21.99 & 24.76 & 28.52 & 34.25 & 41.60 \\ {~~}SANet + IADM & {\bf\color{red}18.18} & {\bf\color{red}21.84} & {\bf\color{red}26.27} & {\bf\color{red}32.95} & {\bf\color{red}33.72}\\ \hline CSRNet{~~~~~~~~~~~~~~~} & 20.40 & 23.58 & 28.03 & 35.51 & 35.26 \\ CSRNet + IADM & {\bf\color{red}17.94} & {\bf\color{red}21.44} & {\bf\color{red}26.17} & {\bf\color{red}33.33} & {\bf\color{red}30.91} \\ \hline {~~~~~~~~}BL{~~~~~~~~~~~~~~~} & 18.70 & 22.55 & 26.83 & 34.62 & 32.67 \\ {~~~~~~~~}BL + IADM & {\bf\color{red}15.61} & {\bf\color{red}19.95} & {\bf\color{red}24.69} & {\bf\color{red}32.89} & {\bf\color{red}28.18} \\ \hline \end{tabular} } \vspace{-2mm} \label{tab:RGBT_result} \end{table*} {\bf{1) Effectiveness of Multimodal Data:}} We first explore whether the multimodal data (i.e., RGB images and thermal images) is effective for crowd counting. As shown in Table~\ref{tab:ab_study}, when only feeding RGB images into CSRNet, we obtain less impressive performance (e.g., GAME(0) is 33.94 and RMSE is 69.59), because we cannot effectively recognize people in dark environments. When utilizing thermal images, GAME(0) and RMSE are sharply reduced to 21.64 and 37.38, which demonstrates that thermal images are more useful than RGB images. In contrast, various models in the bottom six rows of Table \ref{tab:ab_study} achieve better performance when considering RGB and thermal images simultaneously. In particular, our CSRNet+IADM has a relative performance improvement of 17.3\% on RMSE, compared with the thermal-based CSRNet. To further verify the complementarities of multimodal data, the testing set is divided into two parts to measure the performance in different illumination conditions separately. As shown in Table~\ref{tab:illumination}, using both RGB and thermal images, our CSRNet+IADM consistently outperforms the unimodal CSRNet in both bright and dark scenarios. This is attributed to the thermal information that greatly helps to distinguish potential pedestrians from the cluttered background, while optical information is beneficial to eliminate heated negative objects in thermal images. Moreover, we also visualize some crowd density maps generated with different modal data in Fig.~\ref{fig:RGBT_results}. We can observe that the density maps and estimated counts of our CSRNet+IADM are more accurate. These quantitative and qualitative experiments show that RGBT images are greatly effective for crowd counting. \begin{table}[t] \centering \caption{Performance of different level numbers of the pyramid pooling layer in IADM.} \label{tab:level_number} \resizebox{8.3cm}{!}{% \begin{tabular}{c|c|c|c|c|c} \hline \#Level & GAME(0) & GAME(1) & GAME(2) & GAME(3) & RMSE \\ \hline\hline $L$=1 & 18.94 & 23.05 & 28.03 & 35.88 & 33.01 \\ $L$=2 & 18.35 & 22.56 & 27.84 & 35.90 & 31.94 \\ $L$=3 & {\bf\color{red}17.94} & {\bf\color{red}21.44} & {\bf\color{red}26.17} & {\bf\color{red}33.33} & {\bf\color{red}30.91} \\ $L$=4 & 17.80 & 21.39 & 25.91 & 33.20 & 31.48 \\ \hline \end{tabular} } \vspace{-4mm} \end{table} \begin{table*}[t] \centering \caption{Performance of different methods on the ShanghaiTechRGBD benchmark.
All the methods in this table utilize both RGB images and depth images to estimate the crowd counts.} \resizebox{0.525\textheight}{!}{% \begin{tabular}{c|c|c|c|c|c} \hline Method & GAME(0) $\downarrow$ & GAME(1) $\downarrow$ & GAME(2) $\downarrow$ & GAME(3) $\downarrow$ & RMSE $\downarrow$\\ \hline\hline UCNet \cite{zhang2020uc} & 10.81 & 15.24 & 22.04 & 32.98 & 15.70 \\ HDFNet \cite{HDFNet-ECCV2020} & 8.32 & 13.93 & 17.97 & 22.62 & 13.01 \\ BBSNet \cite{fan2020bbsnet} & 6.26 & 8.53 & 11.80 & 16.46 & 9.26 \\ \hline DetNet \cite{liu2018decidenet} & 9.74 & - & - & - & 13.14 \\ CL \cite{idrees2018composition} & 7.32 & - & - & - & 10.48 \\ RDNet \cite{lian2019density} & 4.96 & - & - & - & 7.22 \\ \hline {~~}MCNN{~~~~~~~~~~~~~~~} & 11.12 & 14.53 & 18.68 & 24.49 & 16.49 \\ {~~}MCNN + IADM & {\bf\color{red}9.61} & {\bf\color{red}11.89} & {\bf\color{red}15.44} & {\bf\color{red}20.69} & {\bf\color{red}14.52} \\ \hline {~~~~~~~~}BL{~~~~~~~~~~~~~~~} & 8.94 & 11.57 & 15.68 & 22.49 & 12.49 \\ {~~~~~~~~}BL + IADM & {\bf\color{red}7.13} & {\bf\color{red}9.28} & {\bf\color{red}13.00} & {\bf\color{red}19.53} & {\bf\color{red}10.27} \\ \hline {~~}SANet{~~~~~~~~~~~~~~~} & 5.74 & 7.84 & 10.47 & 14.30 & 8.66 \\ {~~}SANet + IADM & {\bf\color{red}4.71} & {\bf\color{red}6.49} & {\bf\color{red}9.02} & {\bf\color{red}12.41} & {\bf\color{red}7.35}\\ \hline CSRNet{~~~~~~~~~~~~~~~} & 4.92 & 6.78 & 9.47 & 13.06 & 7.41 \\ CSRNet + IADM & {\bf\color{red}4.38} & {\bf\color{red}5.95} & {\bf\color{red}8.02} & {\bf\color{red}11.02} & {\bf\color{red}7.06} \\ \hline \end{tabular} } \vspace{-3mm} \label{tab:RGBD_result} \end{table*} {\bf{2) Which Representation Learning Method is Better?}} We implement six methods for multimodal representation learning. Specifically, ``Early Fusion'' feeds the concatenation of RGB and thermal images into CSRNet. ``Late Fusion'' extracts the RGB and thermal features respectively with two CSRNets and then combines their features to generate density maps. As shown in Table~\ref{tab:ab_study}, these two models are better than the unimodal models, but their performance still lags far behind various variants of our IADM. For instance, without gating functions, the variant ``W/O Gating Mechanism'' directly propagates information among different features and obtains an RMSE of 33.61. The variant ``W/O Modality-Shared Feature'' obtains a GAME(0) of 18.67 and an RMSE of 33.73, when removing the modality-shared branch and directly refining the modality-specific features. When using the modality-shared branch but only aggregating multimodal information, the variant ``W/O Information Distribution'' obtains a better RMSE of 32.91. When using the full IADM, our method achieves the best performance on all evaluation metrics. This is attributed to our tailor-designed architecture (i.e., specific-shared branches and dual information propagation) that can effectively learn the multimodal collaborative representation and fully capture the complementary information of RGB and thermal images. These experiments demonstrate the effectiveness of the proposed IADM for multimodal representation learning. {\bf{3) The Effect of the Level Number of the Pyramid Pooling Layer: }} In the proposed IADM, an $L$-level pyramid pooling layer is utilized to extract contextual information. In this section, we explore the effect of the level number $L$. As shown in Table \ref{tab:level_number}, when $L$ is set to 1, the GAME(3) and RMSE are 35.88 and 33.01, respectively.
As the level number increases, our performance gradually becomes better, and we achieve very competitive results when the pyramid pooling layer has three levels. Using more than three levels does not bring additional performance gains. Thus, the level number $L$ is consistently set to 3 in our work. \subsection{Comparison with State-of-the-Art Methods} We compare the proposed method with state-of-the-art methods on the large-scale RGBT-CC benchmark. The compared methods can be divided into two categories. The first category consists of models specially designed for crowd counting, including MCNN \cite{zhang2016single}, SANet \cite{cao2018scale}, CSRNet~\cite{li2018csrnet}, and BL \cite{ma2019bayesian}. These methods are reimplemented to take the concatenation of RGB and thermal images as input in an ``Early Fusion'' way. Moreover, MVMS \cite{zhang2019wide} is also reimplemented on RGBT-CC and a pixel-wise attention map \cite{chen2016attention} is utilized to fuse the features of the optical view and the thermal view. The second category comprises several best-performing models for multimodal learning, including UCNet \cite{zhang2020uc}, HDFNet \cite{HDFNet-ECCV2020}, and BBSNet \cite{fan2020bbsnet}. Based on their official codes, these methods are reimplemented to estimate crowd counts on our RGBT-CC dataset. As mentioned above, our IADM can be incorporated into various networks, thus here we take CSRNet, MCNN, SANet, and BL as backbones to develop multiple instances of our framework. The performance of all compared methods is shown in Table \ref{tab:RGBT_result}. As can be observed, all instances of our method outperform the corresponding backbone networks consistently. For instance, both MCNN+IADM and SANet+IADM have a relative performance improvement of 18.9\% on RMSE, compared with their ``Early Fusion'' models. Moreover, our CSRNet+IADM and BL+IADM achieve better performance on all evaluation metrics, compared with other advanced methods (i.e., UCNet, HDFNet, and BBSNet). This is because our method learns specific-shared representations explicitly and enhances them mutually, while the others simply fuse multimodal features without mutual enhancement. Thus our method can better capture the complementarities of RGB images and thermal images. This comparison demonstrates the effectiveness of our method for RGBT crowd counting. \subsection{Apply to RGBD Crowd Counting}\label{RGBD-counting-exp} We apply the proposed method to estimate crowd counts from RGB images and depth images. In this subsection, we also take various crowd counting models as backbones to develop our framework on the ShanghaiTechRGBD \cite{lian2019density} benchmark. The implementation details of the compared methods are similar to those in the previous subsection. As shown in Table~\ref{tab:RGBD_result}, all instances of our framework are superior to their corresponding backbone networks by clear margins. Moreover, our SANet+IADM and CSRNet+IADM outperform three advanced models (i.e., UCNet, HDFNet, and BBSNet) on all evaluation metrics. More importantly, our CSRNet+IADM achieves the lowest GAME(0) of 4.38 and RMSE of 7.06, and becomes the new state-of-the-art method on the ShanghaiTechRGBD benchmark. This experiment shows that our approach is universal and effective for RGBD crowd counting. \section{Conclusion} In this work, we propose to incorporate optical and thermal information to estimate crowd counts in unconstrained scenarios.
To this end, we introduce the first RGBT crowd counting benchmark with 2,030 pairs of RGB-thermal images and 138,389 annotated people. Moreover, we develop a cross-modal collaborative representation learning framework, which utilizes a tailor-designed Information Aggregation-Distribution Module to fully capture the complementary information of different modalities. Extensive experiments on two real-world benchmarks show the effectiveness and universality of the proposed method for multimodal (e.g., RGBT and RGBD) crowd counting. \section*{Acknowledgments} This work was supported in part by National Natural Science Foundation of China under Grant No.U1811463, in part by National Key R\&D Program of China under Grant No.2020AAA0109700, in part by National Natural Science Foundation of China under Grant No.61976250 and 61876045, in part by the Guangdong Basic and Applied Basic Research Foundation under Grant No.2017A030312006 and 2020B1515020048, in part by Major Project of Guangzhou Science and Technology of Collaborative Innovation and Industry under Grant 201605122151511. {\small \bibliographystyle{ieee_fullname}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Multiple-input multiple-output (MIMO) systems refer to systems with multiple antennas implemented at the transceiver nodes. They exploit spatial diversity to provide high data rate and link reliability \cite{MIMO}. In conventional MIMO systems, the number of antennas is usually moderate (e.g., the LTE standard allows for up to 8 antenna ports). Recently, large-scale MIMO systems, or massive MIMO, where hundreds of antennas are implemented at the transceiver nodes, have attracted a lot of attention \cite{massive,massive-1}. It has been shown that, due to the large scale, the antennas can form sharp beams toward desired terminals, thus providing high spectral and energy efficiencies. Besides, the effects of small-scale fading and interference can be significantly reduced with linear signal processing, such as maximal-ratio-combining (MRC), maximal-ratio-transmission (MRT), and zero-forcing (ZF) \cite{massive}. The performance of massive MIMO systems has been widely studied in the literature \cite{efficiency, MRCvsZF, SINR_distri, composite_fading, Chi, scaling-mimo-qi}. In \cite{efficiency}, for the uplink of massive MIMO systems with MRC or ZF, the deterministic equivalence of the achievable sum-rate is derived by using the law of large numbers. The following power scaling laws are shown. With perfect channel state information (CSI), the user and/or relay power can be scaled down linearly with the number of antennas while maintaining the same signal-to-interference-plus-noise-ratio (SINR); when there is CSI error (where minimum mean-squared error (MMSE) estimation is used) and the training power equals the data transmit power, the power can only be scaled down by the square root of the number of antennas. Another work on the energy efficiency and power efficiency of a single-cell multi-user massive MIMO network is reported in \cite{MRCvsZF}, where a Bayesian approach is used to obtain the capacity lower bounds for both MRT and ZF precodings in the downlink. It is shown that for high spectral efficiency and low energy efficiency, ZF outperforms MRT, while at low spectral efficiency and high energy efficiency the opposite holds. While the channel models used in \cite{efficiency,MRCvsZF} are Rayleigh fading, Ricean fading channels are considered in \cite{scaling-mimo-qi} in the massive MIMO uplink, where the CSI is also obtained with the MMSE estimator. Sum-rate approximations for the MRC and ZF receivers are obtained using the mean values of the components in the SINR formula. The derived power scaling law is that when the CSI is perfect or the Ricean factor is non-zero, the user transmit power can be scaled down inversely proportionally to the number of antennas while maintaining the same SINR level. Otherwise, the transmit power can only be scaled down inversely proportionally to the square root of the antenna number. While the aforementioned works analyse the sum-rate and power scaling laws, there are also some works on the SINR distribution and outage probability. In \cite{SINR_distri}, the SINR probability density function (PDF) of MRT precoding is derived in closed-form in the downlink of a single-cell multi-user massive MIMO network. Besides, the asymptotic SINR performance is analysed when the number of users remains constant or scales linearly with the number of antennas. For the same network, in \cite{Chi}, the outage probability of MRT precoding is derived in closed-form.
The authors first obtain the distribution of the interference power, based on which the outage probability is derived in closed-form. While only small-scale fading is considered in \cite{SINR_distri,Chi}, both small-scale (Rayleigh) fading and large-scale (log-normal) fading are considered in \cite{composite_fading}. In this work, the PDF of the SINR of the MRC receiver is approximated by a log-normal distribution, and the outage probability is derived in closed-form. The analysis shows that the shadowing effect cannot be eliminated by the use of a large number of antennas. Current results on massive MIMO show the remarkable advantages of utilizing a large number of antennas in communications. A natural extension of single-hop massive MIMO systems is two-hop massive MIMO relay networks, where the relay station is equipped with a large number of transmit and receive antennas to help the communications of multiple source-destination pairs. Relaying technology has been integrated into various wireless communication standards (e.g., LTE-Advanced and WIMAX Release 2) as it can improve the coverage and throughput of wireless communications \cite{relay-book}. Early studies focus on single-user relay networks, and various relaying schemes, such as amplify-and-forward (AF) and decode-and-forward (DF), have been proposed \cite{relay-book}. With ever-increasing demands for higher performance, multi-user relay networks have recently gained considerable attention \cite{M3,Qian1,Qian2,our-work1}. An important issue in multi-user relaying is how to deal with inter-user interference \cite{7of1}. By utilizing massive MIMO, the interference is expected to be greatly reduced and the network performance significantly improved. Research activities on massive MIMO relay networks have been increasing in recent years \cite{3,4,5,sum-rate-analysis,multi-cell, channel_aging,1,2,efficiency-zf,efficiency-twoway-mrcmrt-csi,efficiency-full-duplex-AF,rate-CSI}. In \cite{3,4}, for a single-user massive MIMO relay network with co-channel interference at the relay, the ergodic capacity and outage probability of MRC/MRT and ZF relaying schemes are derived in closed form. The more general multi-user massive MIMO relay networks are analysed in \cite{5,sum-rate-analysis,multi-cell, channel_aging,1,2,efficiency-zf,efficiency-twoway-mrcmrt-csi,efficiency-full-duplex-AF,rate-CSI}. Depending on the structure of the network model, these works can be divided into the following two categories. In \cite{sum-rate-analysis, multi-cell, channel_aging}, a network with multiple single-antenna users, one massive MIMO relay station, and one massive MIMO destination is considered. This model applies to the relay-assisted uplink multiple-access network. In \cite{sum-rate-analysis}, it is shown that with perfect CSI and infinite relay and destination antennas, the relay or user transmit power can scale inversely proportionally to the number of antennas without affecting the performance. When there is CSI error, the user or relay power can only scale down with the square root of the number of antennas, given that the training power equals the transmit power. The same network is also considered in \cite{multi-cell, channel_aging}, while the co-channel interference and pilot contamination are considered in \cite{multi-cell}, and the channel aging effect is considered in \cite{channel_aging}. The effects of these factors on the power scaling are shown therein.
Another type of network is the relay-assisted multi-pair transmission network, where multiple single-antenna sources communicate with their own destinations with the help of a massive MIMO relay \cite{5,1,2,efficiency-zf,efficiency-twoway-mrcmrt-csi,efficiency-full-duplex-AF,rate-CSI}. In \cite{1,2}, the sum-rates of multi-pair massive MIMO relay networks with MRC/MRT and ZF relaying under perfect CSI are analysed for one-way and two-way relaying, respectively. In both works, with the deterministic equivalence analysis, it is shown that the sum-rate can remain constant when the transmit power of each source and/or the relay scales inversely proportionally to the number of relay antennas. In \cite{5}, the same network model as in \cite{2} is considered for MRC/MRT relaying, where the number of relay antennas is assumed to be large but finite. The analysis shows that, when the transmit powers of the relay and sources are much larger than the noise power, the achievable rate per source-destination pair is proportional to the logarithm of the number of relay antennas, and is also proportional to the logarithm of the reciprocal of the interferer number. In \cite{efficiency-full-duplex-AF}, the full-duplex model is considered for one-way MRC/MRT relaying and a sum-rate lower bound is derived with Jensen's inequality. While the above works assume perfect CSI at the relay, recent studies have turned to networks with CSI error \cite{efficiency-twoway-mrcmrt-csi, efficiency-zf, rate-CSI}, which are more practical and challenging to analyse. In \cite{efficiency-zf,rate-CSI}, a one-way massive MIMO relay network model is considered, where MMSE estimation is used to obtain the CSI. While \cite{efficiency-zf} uses ZF relaying and assumes that the CSI error exists in both hops, \cite{rate-CSI} uses MRC/MRT relaying and assumes that the CSI error only exists in the relay-destination hop. In both works, the power scalings of the sources and relay for non-vanishing SINR are discussed under the assumption that the training power equals the data transmission power. Compared with previous power scaling law results, the analyses in \cite{efficiency-zf,rate-CSI} are more comprehensive, since they allow the power scaling to be anywhere between constant and linearly increasing with the number of relay antennas. \cite{efficiency-twoway-mrcmrt-csi} considers a two-way MRC/MRT relaying network with CSI error. With deterministic equivalence analysis, it is shown that when the source or relay power scales inversely proportionally to the number of relay antennas, the effects of small-scale fading, self-interference, and noise caused by CSI error all diminish. In this work, the performance of MRC/MRT relaying in a one-way massive MIMO relay network with CSI error is investigated. Our major differences from existing works are summarized below. \begin{itemize} \item Our system model is different from all the aforementioned existing works in relaying scheme, CSI assumption, or communication protocol. The work with the closest model is \cite{rate-CSI}, where the CSI error is assumed to exist in the relay-destination hop only. We use a more general model where the CSI error exists in both hops. \item In our scaling law analysis, a general model for the network parameters, including the number of source-destination pairs, the CSI quality parameter, and the transmit powers of the sources and the relay, is proposed. In this model, the scale exponent with respect to the relay antenna number can take continuous values from $0$ to $1$.
In most existing works, only a few discrete values for the power scaling, e.g., $0,1,1/2$, are allowed. Although \cite{efficiency-zf,rate-CSI} allow continuous exponent values, they constrain the number of sources to be constant and require the training power to equal the transmit power.
\item While in existing works the deterministic equivalence analysis is based on the law of large numbers, we use a quantitative measure, the squared coefficient of variation (SCV), to examine this property. As the law of large numbers only applies to sums of independent and identically distributed random variables, the SCV allows us to discuss the asymptotically deterministic property of random variables with more complex structures.
\end{itemize}
Based on these features that distinguish our work from existing ones, our unique contributions are listed as follows.
\begin{enumerate}
\item Firstly, by deriving a lower bound on the sum-rate, we investigate the performance scaling law with respect to the number of relay antennas for a general setting of the network parameter scalings. The law provides comprehensive insights and reveals quantitatively the tradeoff among different system parameters.
\item Deterministic equivalence is an important framework for the performance analysis of massive MIMO systems. We derive a sufficient condition on the parameter scalings for the SINR to be asymptotically deterministic. Compared with existing works, where only specific asymptotic cases are discussed, our sufficient condition is more comprehensive: it covers all cases in existing works and reveals additional scenarios with asymptotically deterministic SINR. Besides, for the SINR to be asymptotically deterministic, the tradeoff between different parameter scalings is also discussed.
\item Through the scaling law results, we show that for practical network scenarios, the average SINR increases at most linearly with the number of relay antennas. We prove that the necessary and sufficient condition for this is that all other network parameters remain constant. Furthermore, our work shows that in this case the interference power does not diminish and dominates the statistical performance of the SINR. By deriving the PDF of the interference power in closed form, expressions for the outage probability and average bit error rate (ABER) are obtained. While existing works mainly focus on the constant-SINR case, this linearly increasing SINR case, which suits applications with high quality-of-service requirements, has not been studied before.
\end{enumerate}
The remainder of the paper is organized as follows. In the next section, the system model, including both the channel estimation and the data transmission under MRC/MRT relaying, is introduced. The performance scaling law is then analysed in Section \ref{sec:scaling}. In Section \ref{sec:deter}, the asymptotically deterministic SINR case is discussed. The linearly increasing SINR case is investigated in Section \ref{sec:linear}. Section \ref{sec:simu} shows the simulation results and Section \ref{sec:con} concludes the paper.

\section{System Model and Preliminaries for Scaling Law Analysis}
\label{sec:model}
We consider a multi-pair relay network with $K$ single-antenna sources ($S_1,\cdots,S_K$), each transmitting to its own destination. That is, $S_i$ sends information to Destination $i$, $D_i$. We assume that the sources are far away from the destinations so that no direct links exist. To help the communications, a relay station is deployed \cite{relay-book}.
The number of antennas at the relay station, $M$, is assumed to be large, e.g., a few hundred \cite{3,4,5,sum-rate-analysis,multi-cell, channel_aging,1,2,efficiency-zf,efficiency-twoway-mrcmrt-csi,efficiency-full-duplex-AF,rate-CSI}. In addition, we assume $M\gg K$ because under this condition, simple linear relay processing, e.g., MRC/MRT, can achieve near-optimal performance in massive MIMO systems \cite{non-cooperative}. Denote the $M\times K$ and $K\times M$ channel matrices of the source-relay and relay-destination links as ${\bf F}$ and ${\bf G}$, respectively. The channels are assumed to be independent and identically distributed (i.i.d.) Rayleigh fading, i.e., the entries of ${\bf F}$ and ${\bf G}$ are mutually independent and follow the circular symmetric complex Gaussian (CSCG) distribution with zero mean and unit variance, denoted as $\mathcal{CN}(0,1)$. The assumption that the channels are mutually independent is valid when the relay antennas are well separated. The information on ${\bf F}$ and ${\bf G}$ is called the channel state information (CSI), which is essential for the relay network. In practice, the CSI is obtained through channel training. Due to the existence of noise and interference, the channel estimation cannot be perfect and always contains error. The CSI error is an important issue for massive MIMO systems \cite{efficiency,scaling-mimo-qi,efficiency-twoway-mrcmrt-csi, efficiency-zf, rate-CSI}. In what follows, we first describe the channel estimation model; then the data transmission and the MRC/MRT relaying scheme are introduced.

\subsection{Channel Estimation}
To combine the received signals from the sources and precode the signals for the destinations, the relay must acquire CSI. ${\bf F}$, the uplink channel from the sources to the relay, can be estimated by letting the sources send pilots to the relay. In small-scale MIMO systems, ${\bf G}$ can be estimated by sending pilots from the relay to the destinations, which then feed back the CSI to the relay \cite{MIMO,relay-book}. However, this strategy is not viable for massive MIMO systems, as the required training length grows linearly with the number of relay antennas $M$ and may exceed the channel coherence interval. Consequently, to estimate $\bf G$, we assume a time-division-duplexing (TDD) system with channel reciprocity \cite{massive}, so that pilots are sent from the destinations and the relay-destination channels are estimated at the relay station. Without loss of generality, we elaborate on the estimation of $\bf F$; the estimation of $\bf G$ is similar. Since the channel estimation is the same as that in single-hop MIMO systems, we only review it briefly; more details can be found in \cite{MIMO,relay-book} and references therein.

Denote the length of the pilot sequences as $\tau$. For effective estimation, $\tau$ must be no less than the number of sources $K$ \cite{efficiency,MRCvsZF}. Assume that all nodes use the same transmit power for training, denoted as $P_t$. The pilot sequences from all $K$ sources can then be represented by a $\tau\times K$ matrix $\sqrt{\tau P_t}{\bf \Phi}$, which satisfies ${\bf \Phi}^H{\bf \Phi}={\bf I}_K$. The $M\times \tau$ received pilot matrix at the relay is
\begin{equation*}
{\bf Y}_{train}=\sqrt{\tau P_t}{\bf F}{\bf \Phi}^T+{\bf N},
\end{equation*}
where ${\bf N}$ is the $M\times \tau$ noise matrix with i.i.d. $\mathcal{CN}(0,1)$ elements.
The MMSE channel estimation is considered, which is widely used in the channel estimation of massive MIMO networks \cite{efficiency, sum-rate-analysis,efficiency-zf, scaling-mimo-qi}. The MMSE estimate of ${\bf F}$ given ${\bf Y}_{train}$ is
\begin{equation*}
\hat{{\bf F}}=\frac{1}{\sqrt{\tau P_t}}{\bf Y}_{train}{\bf \Phi}^*\frac{\tau P_t}{1+\tau P_t}=\frac{\tau P_t}{1+\tau P_t}\left({\bf F}+\frac{1}{\sqrt{\tau P_t}} {\bf N}_F\right),
\end{equation*}
where ${\bf N}_F\triangleq {\bf N}{\bf \Phi}^*$, which has i.i.d. $\mathcal{CN}(0,1)$ elements. Similarly, the MMSE estimate of ${\bf G}$ is
\begin{equation*}
\hat{{\bf G}}=\frac{\tau P_t}{1+\tau P_t}\left({\bf G}+\frac{1}{\sqrt{\tau P_t}} {\bf N}_G\right).
\end{equation*}
Define ${\bf E}_f\triangleq \hat{\bf F}-{\bf F}$ and ${\bf E}_g\triangleq \hat{\bf G}-{\bf G}$ as the estimation error matrices. By the properties of MMSE estimation, $\hat{\bf F}$ is independent of ${\bf E}_f$, and $\hat{\bf G}$ is independent of ${\bf E}_g$. Elements of $\hat{\bf F}$ and $\hat{\bf G}$ are distributed as $\mathcal{CN}(0,\frac{\tau P_t}{\tau P_t +1})$. Elements of ${\bf E}_f$ and ${\bf E}_g$ are distributed as $\mathcal{CN}(0,\frac{1}{\tau P_t +1})$. Define
\begin{equation}
E_t\triangleq\tau P_t \text { and } P_c\triangleq \frac{\tau P_t}{\tau P_t+1}.
\label{eq:pc}
\end{equation}
Thus $E_t$ is the total energy spent on training. $P_c$ is the power of each estimated channel element, representing the quality of the estimated CSI, while $1-P_c$ is the power of the CSI error. It is straightforward to see that $0\le P_c\le 1$. When $P_c\rightarrow 1$, the channel estimation is nearly perfect. When $P_c\rightarrow 0$, the quality of the channel estimation is very poor. Note that different combinations of $\tau$ and $P_t$ can result in the same $P_c$. For the majority of this paper, $P_c$ is used in the performance analysis instead of $\tau$ and $P_t$. This allows us to isolate the training design and focus on the effects of CSI error on the system performance. When we consider special cases with popular training settings, e.g., $\tau=K$ and the same training and data transmission power, $\tau$ and $P_t$ will be used instead of $P_c$ in modelling the CSI error.

\subsection{Data Transmissions}
With the estimated CSI, the next step is the data transmission. Various relaying technologies have been proposed \cite{relay-book}. For massive MIMO systems, MRC/MRT relaying is a popular choice due to its computational simplicity, robustness, and high asymptotic performance \cite{1,2,3,4,5,efficiency-twoway-mrcmrt-csi, rate-CSI}. In the rest of this section, the data transmission with MRC/MRT relaying is introduced. Denote the data symbol of $S_i$ as $s_i$ and the vector of symbols from all sources as $\bf{s}$. With the normalization ${\mathbb E}(|s_i|^2)=1$, we have ${\mathbb E}({\bf s}^H{\bf s})=K$, where $(\cdot)^H$ represents the Hermitian transpose of a matrix or a vector. Let $P$ be the average transmit power of each source. The received signal vector at the relay is
\begin{equation}
{\bf x}=\sqrt{P}{\bf F}{\bf s}+{\bf n_r},
\label{eq:x}
\end{equation}
where ${\bf n_r}$ is the noise vector at the relay with i.i.d.~entries each following $\mathcal{CN}(0,1)$. With MRC/MRT relaying, the retransmitted signal vector from the relay is $a_e\hat{\bf G}^H\hat{\bf F}^H {\bf x}$, where $a_e$ normalizes the average transmit power of the relay to $Q$.
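As a concrete illustration of the channel estimation model above, the following minimal Python sketch (ours, for illustration only; it assumes DFT-based orthonormal pilots and the parameter values shown) simulates the training phase, forms the MMSE estimate of ${\bf F}$, and checks that the empirical per-element powers of $\hat{\bf F}$ and ${\bf E}_f$ match $P_c$ and $1-P_c$ in (\ref{eq:pc}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, K, tau, Pt = 200, 10, 10, 10.0        # illustrative values, tau >= K

def cn(*shape):
    # i.i.d. CN(0,1) samples
    return (rng.standard_normal(shape)
            + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Orthonormal pilots: first K columns of the unitary DFT matrix,
# so that Phi^H Phi = I_K.
n = np.arange(tau)
Phi = (np.exp(-2j * np.pi * np.outer(n, n) / tau) / np.sqrt(tau))[:, :K]

F = cn(M, K)                              # true source-relay channel
N = cn(M, tau)                            # training noise at the relay
Y = np.sqrt(tau * Pt) * F @ Phi.T + N     # received pilot matrix

# MMSE estimate of F (the first displayed equation above)
F_hat = (np.sqrt(tau * Pt) / (1 + tau * Pt)) * Y @ Phi.conj()
E_f = F_hat - F

Pc = tau * Pt / (tau * Pt + 1)
print(np.var(F_hat), Pc)                  # approx P_c
print(np.var(E_f), 1 - Pc)                # approx 1 - P_c
\end{verbatim}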
For the normalization factor $a_e$, straightforward calculations give
\begin{eqnarray}
a_e^2= \frac{Q}{{\mathbb E}\{{\rm tr}\left((\hat{{\bf G}}^H\hat{{\bf F}}^H {\bf x})(\hat{{\bf G}}^H\hat{{\bf F}}^H {\bf x})^H\right)\}}\approx\frac{Q}{PKP_c^3M^3(1+\frac{K}{MP_c}+\frac{1}{PP_cM})},
\label{eq:a_e}
\end{eqnarray}
where the approximation is made by ignoring the lower-order terms in $M$. Denote ${\bf f}_i$, $\hat{{\bf f}}_i$, and ${\boldsymbol \epsilon}_{f,i}$ as the $i$th columns of ${\bf F}$, $\hat{\bf F}$, and ${\bf E}_f$, respectively; and ${\bf g}_i$, $\hat{\bf g}_i$, and ${{\boldsymbol \epsilon}}_{g,i}$ as the $i$th rows of ${\bf G}$, $\hat{\bf G}$, and ${\bf E}_g$, respectively. The received signal at $D_i$ can be written as follows:
\begin{eqnarray}
y_i&=&a_e\sqrt{P}{\bf g}_i\hat{{\bf G}}^H\hat{{\bf F}}^H{\bf F}{\bf s}+a_e{\bf g}_i\hat{{\bf G}}^H\hat{{\bf F}}^H{\bf n_r}+n_{d,i},\nonumber\\
&=&\underbrace{a_e\sqrt{P}\hat{\bf g}_i\hat{\bf G}^H\hat{\bf F}^H\hat{\bf f}_i s_i}_{\text{desired signal}} +\underbrace{a_e\sqrt{P}\sum_{k=1,k\neq i}^{K}{\bf g}_i\hat{\bf G}^H\hat{\bf F}^H{\bf f}_k s_k}_{\text{multi-user interference}} +\underbrace{a_e{\bf g}_i\hat{\bf G}^H\hat{\bf F}^H{\bf n_r}}_{\text{forwarded relay noise}}+\nonumber\\
&&\underbrace{a_e\sqrt{P}{\boldsymbol \epsilon}_{g,i}\hat{\bf G}^H\hat{\bf F}^H{{\boldsymbol \epsilon}}_{f,i} s_i-a_e\sqrt{P}\hat{\bf g}_{i}\hat{\bf G}^H\hat{\bf F}^H{{\boldsymbol \epsilon}}_{f,i} s_i-a_e\sqrt{P}{\boldsymbol \epsilon}_{g,i}\hat{\bf G}^H\hat{\bf F}^H\hat{\bf f}_{i} s_i}_{\text{noise due to CSI error}} +n_{d,i},\label{eq:recei_sig_e}
\end{eqnarray}
where $n_{d,i}$ is the noise at the $i$th destination following $\mathcal{CN}(0,1)$. Equation (\ref{eq:recei_sig_e}) shows that the received signal is composed of five parts: the desired signal, the multi-user interference, the forwarded relay noise, the noise due to CSI error, and the noise at $D_i$. Define
\begin{eqnarray}
&&P_{s,e}\triangleq \frac{|\hat{\bf g}_i\hat{\bf G}^H\hat{\bf F}^H\hat{\bf f}_i|^2}{M^4}, \quad \quad \hspace{10mm}P_{i,e}\triangleq \frac{1}{K-1}\sum_{k=1,k\neq i}^{K}\frac{|{\bf g}_i\hat{\bf G}^H\hat{\bf F}^H{\bf f}_k|^2}{M^3},\label{eq:component1}\\
&&P_{n,e}\triangleq\frac{||{\bf g}_i\hat{\bf G}^H\hat{\bf F}^H||_F^2}{M^3}, \quad \quad \hspace{10mm} P_{e,1}\triangleq \frac{(1-P_c)^2}{M^3}\sum_{n=1}^{K}\sum_{m=1}^{K}\hat{\bf f}_n^H\hat{\bf f}_m\hat{\bf g}_m\hat{\bf g}_n^H,\label{pse}\\
&&P_{e,2}\triangleq(1-P_c)\frac{\|\hat{\bf g}_{i}\hat{\bf G}^H\hat{\bf F}^H\|_F^2}{M^3}, \quad P_{e,3}\triangleq(1-P_c)\frac{\|\hat{\bf G}^H\hat{\bf F}^H\hat{\bf f}_{i}\|_F^2}{M^3}.\label{eq:component2}
\end{eqnarray}
From (\ref{eq:recei_sig_e}), we know that $P_{s,e}$, $P_{i,e}$, $P_{n,e}$, and $P_{e,1}+P_{e,2}+P_{e,3}$ are the normalized powers of the signal, the interference, the forwarded relay noise, and the noise due to CSI error, respectively. With these definitions, the SINR of the $i$th source-destination pair can be written as
\begin{eqnarray}
{\rm SINR}_{i}= M\frac{P_{s,e} }{(K-1)P_{i,e}+\frac{1}{P} P_{n,e}+P_{e,1}+P_{e,2}+P_{e,3}+\frac{KP_c^3(1+\frac{K}{MP_c}+\frac{1}{PP_cM})}{Q}}.
\label{eq:SINR_e}
\end{eqnarray}
The achievable rate of the $i$th source-destination pair is
\begin{equation}
C_{i}={\mathbb E}\left\{\frac{1}{2}\log_2(1+{\rm SINR}_{i})\right\}.
\end{equation}

\subsection{Preliminaries for Scaling Law Analysis}
This paper studies the performance behaviour and the asymptotic performance scaling law of the massive MIMO relay network.
It is assumed throughout the paper that the number of relay antennas $M$ is very large, and the scaling law is obtained by studying the highest-order term with respect to $M$. Due to the complexity of the network, it is intractable to rigorously obtain insightful expressions for the SINR and the achievable rate for general $M$. Instead, we find the asymptotic performance properties for very large $M$ with the help of the Lindeberg--L\'evy central limit theorem (CLT). The CLT states that, for two length-$M$ independent column vectors ${\bf v}_1$ and ${\bf v}_2$, whose elements are i.i.d. zero-mean random variables with variances $\sigma_1^2$ and $\sigma_2^2$,
$$\frac{1}{\sqrt{M}}{\bf v}_1^H{\bf v}_2\xrightarrow[]{d}\mathcal{CN}(0,\sigma_1^2\sigma_2^2),$$
where $\xrightarrow[]{d}$ means convergence in distribution as $M\rightarrow \infty$.

Another important concept in the performance analysis of massive MIMO systems is that of being \textit{asymptotically deterministic}. In much of the existing literature on massive MIMO, a random variable sequence $X_M$ is said to be asymptotically deterministic if it converges almost surely (a.s.) to a deterministic value $x$, i.e.,
\begin{equation*}
X_M \overset{a.s.}{\longrightarrow}x \text{ when } M\rightarrow \infty.
\end{equation*}
The strong law of large numbers is usually used to derive the deterministic equivalence. Almost sure convergence implies convergence in probability \cite{random_book}. Another type of convergence that implies convergence in probability is convergence in mean square \cite{random_book}. A random variable sequence $X_M$ whose mean converges to a deterministic value $x$ converges in mean square to $x$, i.e., $X_M\overset{m.s.}{\longrightarrow}x$, if
\[
\lim_{M\rightarrow \infty}{\rm Var}\{X_M\}=0.
\]
Convergence in mean square thus requires the variances of the random variable sequence to approach zero. It has been used to define the channel hardening effect for massive MIMO \cite{massive-1,hardening}, where convergence in mean square means that the effect of small-scale fading is negligible when the number of antennas is large. Besides, compared with almost sure convergence, convergence in mean square is more tractable for analysis. We adopt convergence in mean square for the asymptotic scaling-law analysis of the massive MIMO relay network.

However, the direct use of the variance may cause inconvenience and sometimes confusion in the performance analysis of massive MIMO systems. One can always scale $X_M$ by $1/M^n$ with large enough $n$ so that the scaled random variable converges in mean square to 0; but this does not help the performance analysis once the scaling factor $M^n$ is put back into the SINR formula. To avoid this scaling ambiguity, we use the squared coefficient of variation (SCV), defined as the square of the ratio of the standard deviation to the mean of the random variable \cite{scv}. It is noteworthy that the bounded mean condition is important; without it, the convergence as $M\rightarrow\infty$ may not be well defined. Thus, in this work, a random variable sequence $X_M$ with bounded mean is said to be asymptotically deterministic if
\begin{equation}
\lim_{M\rightarrow \infty}{\rm SCV}\{X_M\}=0.
\end{equation}

\section{Analysis on the Achievable Rate Scaling Law}
\label{sec:scaling}
The general performance scaling law of the massive MIMO relay network will be studied in this section.
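Before doing so, the SCV criterion just introduced can be illustrated with a toy example (a minimal numerical sketch of ours, with illustrative values): for vectors with i.i.d. $\mathcal{CN}(0,1)$ entries, $\|{\bf v}\|^2/M$ has SCV $1/M$ and is asymptotically deterministic, whereas $|{\bf v}_1^H{\bf v}_2|^2/M$ has SCV approaching $1$ and is not, even though both have bounded means.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def cn(*shape):
    return (rng.standard_normal(shape)
            + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def scv(x):
    # squared coefficient of variation: Var{x} / (E{x})^2
    return np.var(x) / np.mean(x) ** 2

trials = 20000
for M in (50, 200, 800):
    v = cn(trials, M)
    v1, v2 = cn(trials, M), cn(trials, M)
    x = np.sum(np.abs(v) ** 2, axis=1) / M               # hardens
    y = np.abs(np.sum(v1.conj() * v2, axis=1)) ** 2 / M  # does not
    print(M, scv(x), scv(y))   # scv(x) ~ 1/M -> 0, scv(y) -> 1
\end{verbatim}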
We start with analysing the components of the received SINR to obtain a large-scale approximation. A lower bound on the sum-rate is then derived via Jensen's inequality, after which the performance scaling law and the conditions for a favourable SINR (an SINR that is non-decreasing in $M$) are derived, and typical network scenarios are discussed. Our analysis will show the relationship between the SINR scaling and the parameter scalings, as well as the tradeoff between different parameter scalings.

\subsection{Sum-Rate Lower Bound and Asymptotically Equivalent SINR}
For the SINR analysis, we first derive the means and SCVs of the components of the SINR, i.e., $P_{s,e}$, $P_{i,e}$, $P_{n,e}$, $P_{e,1}$, $P_{e,2}$, and $P_{e,3}$. With the help of the CLT and some tedious derivations, the following results can be obtained:
\begin{eqnarray}
&&{\mathbb E}\{P_{s,e}\}\approx P_c^4, \quad \hspace{3.2cm}{\rm SCV}\{P_{s,e}\}\approx\frac{8}{M},\label{eq:scv1}\\
&&{\mathbb E}\{P_{i,e}\}\hspace{1mm}\approx \hspace{1mm}P_c^3\left(2+\frac{K}{MP_c}\right),\hspace{1.1cm} {\rm SCV}\{P_{i,e}\}\hspace{1mm}\approx \hspace{1mm}\frac{\frac{4}{K-1}+ \frac{8+10P_c}{P_cM}+\frac{K^2+18(K-2)P_c}{(K-1)P_c^2M^2}}{4+\frac{K^2}{M^2P_c^2}+\frac{4K}{MP_c}},\\
&&{\mathbb E}\{P_{n,e}\}\approx P_c^3+\frac{K}{M}P_c^2,\quad \hspace{1.6cm} {\rm SCV}\{P_{n,e}\}\approx\frac{2+5P_c-2P_c^2}{MP_c+\frac{K^2}{MP_c}+2K},\\
&&{\mathbb E}\{P_{e,1}\}\approx\frac{K}{M}P_c^2(1-P_c)^2,\quad \hspace{1.1cm}{\rm SCV}\{P_{e,1}\}\approx\frac{3}{K},\\
&&{\mathbb E}\{P_{e,2}\}={\mathbb E}\{P_{e,3}\}\approx P_c^3(1-P_c), \hspace{0.35cm} {\rm SCV}\{P_{e,2}\}={\rm SCV}\{P_{e,3}\}\approx 1,
\end{eqnarray}
where the approximations are made by keeping only the dominant terms in $M$. Due to the space limit, we only show the derivations of ${\mathbb E}\{P_{s,e}\}$ and ${\rm SCV}\{P_{s,e}\}$ in Appendix A. The rest can be derived similarly.

With our definitions in (\ref{eq:component1})-(\ref{eq:component2}) and by noticing that $P_c\in[0,1]$, the random variables $P_{s,e}$, $P_{i,e}$, $P_{n,e}$, $P_{e,1}$, $P_{e,2}$, $P_{e,3}$ all have bounded means. From (\ref{eq:scv1}), we know that $P_{s,e}$ is asymptotically deterministic since its SCV approaches 0 as $M\rightarrow \infty$. Furthermore, its SCV decays as $1/M$, showing a fast convergence rate. Thus, for large $M$, we can approximate $P_{s,e}$ by its mean value. For the remaining components of the SINR, the SCVs depend on the scalings of the network parameters (such as $K$ and $P_c$) and do not necessarily converge to $0$; we therefore cannot yet assume that they are asymptotically deterministic. With the aforementioned approximation, the SINR expression becomes
\begin{eqnarray}
{\rm SINR}_{i}\approx \frac{MP_c^4 }{(K-1)P_{i,e}+\frac{1}{P} P_{n,e}+P_{e,1}+P_{e,2}+P_{e,3}+\frac{KP_c^3(1+\frac{K}{MP_c}+\frac{1}{PP_cM})}{Q}}.
\label{eq:SINR_e_app}
\end{eqnarray}
With this simplification, the following result on the sum-rate can be obtained.
\begin{lemma}
The achievable rate of Source $i$ in the massive MIMO relay network has the following lower bound:
\begin{eqnarray}
C_{i}\ge C_{i,LB} \triangleq \frac{1}{2}\log_2\left(1+\widetilde{\rm SINR}_{i}\right),\label{rate-LB}
\end{eqnarray}
where
\begin{equation}
\widetilde{\rm SINR}_{i}\triangleq \frac{1}{\frac{2K}{MP_c}+\frac{K^2}{M^2P_c^2}+\frac{1}{MPP_c}+\frac{K}{M^2PP_c^2}+\frac{K}{MP_cQ}+\frac{K^2}{M^2P_c^2Q}+\frac{K}{M^2PP_c^2Q} }.\label{eq:sinr_exp}
\end{equation}
\label{lemma-rate}
\end{lemma}
\begin{proof}
As $\log_2(1+1/x)$ is a convex function of $x$ \cite{convex}, according to Jensen's inequality, we have
\begin{eqnarray}
C_{i}\ge \frac{1}{2}\log_2\left(1+\frac{1}{{\mathbb E}\left\{\frac{1}{{\rm SINR}_i}\right\}}\right).\nonumber
\end{eqnarray}
By applying the SINR approximation in (\ref{eq:SINR_e_app}), we have
\setlength{\arraycolsep}{1pt}
\begin{eqnarray*}
\frac{1}{{\mathbb E}\left\{\frac{1}{{\rm SINR}_i}\right\}}&=&\frac{MP_c^4 }{{\mathbb E}\left\lbrace(K-1)P_{i,e}+\frac{1}{P} P_{n,e}+P_{e,1}+P_{e,2}+P_{e,3}+\frac{KP_c^3(1+\frac{K}{MP_c}+\frac{1}{PP_cM})}{Q}\right\rbrace}\\
&=& \frac{1}{\frac{K-1}{M}\left[\frac{2}{P_c}+\frac{K}{MP_c^2}\right]+\frac{1}{MPP_c}+\frac{K}{M^2PP_c^2}+\frac{K}{M^2}(\frac{1}{P_c}-1)^2+\frac{2(1-P_c)}{MP_c}+\frac{K(1+\frac{K}{MP_c}+\frac{1}{PP_cM})}{MP_cQ}},\nonumber\\
&\approx & \frac{1}{\frac{2K}{MP_c}+\frac{K^2}{M^2P_c^2}+\frac{1}{MPP_c}+\frac{K}{M^2PP_c^2}+\frac{K}{MP_cQ}+\frac{K^2}{M^2P_c^2Q}+\frac{K}{M^2PP_c^2Q} }=\widetilde{\rm SINR}_{i},
\end{eqnarray*}
\setlength{\arraycolsep}{5pt}where the approximation is made by ignoring the lower-order terms in $M$ when $M\gg 1$. Thus the lower bound in (\ref{rate-LB}) is obtained.
\end{proof}
From (\ref{rate-LB}) and (\ref{eq:sinr_exp}), we can see that the achievable rate lower bound increases logarithmically with $M$ and $P_c$, while its growth with $P$, $Q$, and $1/K$ is slower than logarithmic. Note that, by using the method in Lemma 1 of \cite{scaling-mimo-qi}, the sum-rate expression in (\ref{rate-LB}) can also be obtained. But with the method in \cite{scaling-mimo-qi}, the derived expression is an approximation, while our derivations show that it is a lower bound for large $M$. On the other hand, from Lemma 1 of \cite{scaling-mimo-qi}, we know that the lower bound becomes tighter when the number of relay antennas $M$ or the number of sources $K$ increases. The parameter $\widetilde{\rm SINR}_{i}$ has the physical meaning of an asymptotic effective SINR corresponding to the achievable rate lower bound. Due to the monotonic relationship in (\ref{rate-LB}), understanding the scaling law of the achievable rate is equivalent to understanding the scaling law of $\widetilde{\rm SINR}_{i}$.

\subsection{Scaling-Law Results}
Now, the scaling law of the asymptotic effective SINR, $\widetilde{\rm SINR}_{i}$, will be analysed to show how the system performance is affected by the size of the relay antenna array and the other network parameters. To have a comprehensive coverage of network setups and applications, for all system parameters, including the number of source-destination pairs $K$, the source transmit power $P$, the relay transmit power $Q$, and the CSI quality parameter $P_c$, a general scaling model with respect to $M$ is used.
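Before introducing that model, note that the lower bound of Lemma \ref{lemma-rate} is straightforward to evaluate numerically. The following helper (ours, a direct transcription of (\ref{eq:sinr_exp}) and (\ref{rate-LB}), with illustrative parameter values) can be used to explore how $C_{i,LB}$ behaves as $M$, $K$, $P$, $Q$, and $P_c$ vary:
\begin{verbatim}
import numpy as np

def sinr_tilde(M, K, P, Q, Pc):
    # asymptotic effective SINR, eq. (eq:sinr_exp)
    d = (2*K/(M*Pc) + K**2/(M**2*Pc**2) + 1/(M*P*Pc)
         + K/(M**2*P*Pc**2) + K/(M*Pc*Q)
         + K**2/(M**2*Pc**2*Q) + K/(M**2*P*Pc**2*Q))
    return 1.0 / d

def rate_lower_bound(M, K, P, Q, Pc):
    # achievable-rate lower bound C_{i,LB}, eq. (rate-LB), bits/s/Hz
    return 0.5 * np.log2(1.0 + sinr_tilde(M, K, P, Q, Pc))

# At high SINR, doubling M adds roughly half a bit, consistent with
# the logarithmic growth in M noted above.
for M in (100, 200, 400):
    print(M, rate_lower_bound(M, K=10, P=1.0, Q=1.0, Pc=0.9))
\end{verbatim}
With this in hand, we now introduce the general parameter scaling model.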
Assume that
\begin{equation}
K={\mathcal O}(M^{r_k}),\quad \frac{1}{P}={\mathcal O}(M^{r_p}), \quad \frac{1}{Q}={\mathcal O}(M^{r_q}), \quad \frac{1}{P_c}={\mathcal O}(M^{r_c}),
\label{exponents-def}
\end{equation}
where the notation $f(M)={\mathcal O}\left(g(M)\right)$ means that, as $M\rightarrow\infty$, $f(M)$ and $g(M)$ have the same scaling with respect to $M$; in other words, there exist positive constants $C_1$, $C_2$ and a natural number $m$ such that $C_1|g(M)|\le|f(M)|\le C_2|g(M)|$ for all $M\ge m$. Thus the exponents $r_k$, $r_p$, $r_q$, and $r_c$ represent the relative scalings of $K$, $1/P$, $1/Q$, and $1/P_c$ with respect to $M$. For practical ranges of the system parameters, we assume that $0\le r_k,r_p,r_q,r_c\le 1$, for the following reasons.
\begin{itemize}
\item The scaling of $K$. Following typical applications of massive MIMO, the number of users should increase with, or remain constant in, the number of relay antennas; thus $r_k\ge 0$. On the other hand, the number of users $K$ cannot exceed $M$, since the maximum multiplexing gain provided by the relay antennas is $M$; thus $r_k\le 1$.
\item The scalings of $P$ and $Q$. Following the high energy efficiency and low power consumption requirements of massive MIMO, the source and relay transmit powers should not increase with the number of relay antennas. They can, however, decrease as the number of relay antennas increases, provided that their decreasing rates do not exceed the increasing rate of the antenna number. This is because the maximum array gain achievable from $M$ antennas is $M$: a faster-than-linear decrease would certainly make the received SINR a decreasing function of $M$, which contradicts the promise of massive MIMO communications. Thus $0\le r_p,r_q\le 1$.
\item The scaling of $P_c$. From the definition of $P_c$ in (\ref{eq:pc}), we have $1/P_c=1+1/E_t$, thus $r_c\ge 0$. This is consistent with the understanding that the CSI quality will not improve as the number of relay antennas increases, as the training process cannot benefit from extra antennas \cite{massive}. On the other hand, since, similarly to the data transmission, the total training energy should not scale lower than $1/M$, we conclude that $1/P_c$ should not scale higher than $M$. Thus $r_c\le 1$.
\end{itemize}
In our parameter modelling, the exponents can take any value in the continuous range $[0,1]$. This is different from most existing works, where only one or two special values are assumed for the parameters; widely used values are 0, 0.5, and 1, which mean that the parameter remains constant, scales as the square root of $M$, and scales linearly with $M$, respectively. Our model covers existing works as special cases.

For the scaling law of $\widetilde{\rm SINR}_{i}$, denote its scaling with respect to $M$ as
\begin{equation}
\widetilde{\rm SINR}_{i}=\mathcal{O}(M^{r_s}) \text{, or equivalently, } r_s=\lim_{M\rightarrow\infty} \frac{\log \widetilde{\rm SINR}_{i}}{\log M}.
\label{exp-SINR}
\end{equation}
The exponent $r_s$ thus characterizes the scaling of $\widetilde{\rm SINR}_{i}$.
\begin{theorem}
For the massive MIMO relay network with MRC/MRT relaying and CSI error, with the model in (\ref{exponents-def}) and (\ref{exp-SINR}), we have the following performance scaling law:
\begin{equation}
r_s=1-r_c-\max(r_p,r_k+r_q).
\label{SNR-scaling}
\end{equation}
\label{thm-1}
\end{theorem}
\vspace{-1cm}
\begin{proof}
From (\ref{eq:sinr_exp}) we can see that the maximal scaling exponent of the terms in the denominator determines the scaling exponent of $\widetilde{\rm SINR}_{i}$ with respect to $M$. After some tedious calculations, we find that the term with the highest scaling exponent is either $\frac{1}{MPP_c}$ or $\frac{K}{MP_cQ}$. By using the parameter models in (\ref{exponents-def}), the result in (\ref{SNR-scaling}) is obtained.
\end{proof}
A sensible massive MIMO system should have $r_s\ge0$, i.e., the asymptotic effective SINR and the sum-rate should scale as ${\mathcal O}(1)$ or higher. Otherwise, the system performance decreases with $M$, which contradicts the motivation of massive MIMO systems. To help the presentation, we refer to the case where $r_s\ge0$ as the \textit{favourable-SINR scenario}. The condition for a favourable SINR is presented in the following corollary.
\begin{corollary}
The necessary and sufficient condition for the massive MIMO relay network with MRC/MRT relaying and CSI error to have a favourable SINR is
\begin{equation}
r_c+\max(r_p,r_k+r_q)\le 1, \quad r_c,r_p,r_q,r_k \in [0,1].
\label{cond-scaling}
\end{equation}
\label{coro-1}
\end{corollary}
\vspace{-1cm}
\begin{proof}
This is a straightforward extension of (\ref{SNR-scaling}) in Theorem \ref{thm-1}.
\end{proof}
The scaling law in (\ref{SNR-scaling}) quantitatively illustrates how the scalings of the different parameters combine to affect the network performance. The condition in (\ref{cond-scaling}) defines a region of $r_k$, $r_p$, $r_q$, $r_c$ within which the SINR is favourable. Together they provide guidelines for the design of the massive MIMO relay network. Next, we discuss the physical meanings of (\ref{SNR-scaling}) and (\ref{cond-scaling}), and several popular network setups.

Firstly, in (\ref{SNR-scaling}), $r_k$ and $r_q$ appear as a sum. According to their definitions in (\ref{exponents-def}), this sum is the scaling exponent of $K/Q$. Then, $-\max(r_p,r_k+r_q)$, which equals $\min(-r_p,-r_k-r_q)$, is the minimum of the power scaling exponents of $P$ and $Q/K$. Recall that $P$ is the per-source transmit power and $Q/K$ is the average relay power allocated to each source. Thus, from (\ref{SNR-scaling}), we can see that the performance scaling of the SINR is determined by two factors: 1) $\max(r_p,r_k+r_q)$, the worse per-source power scaling of the two hops, and 2) $r_c$, the scaling of the CSI quality. Further, (\ref{SNR-scaling}) shows that $r_s$, which represents the scaling of the system SINR, is a decreasing function of both $\max(r_p,r_k+r_q)$ and $r_c$. Thus higher transmit power and better CSI quality result in improved performance. There is a natural tradeoff between the worse per-source power and the channel training (i.e., between the data transmission phase and the training phase), and one can compensate for the other in the performance scaling. For the two-hop communication, the worse hop dominates the overall performance. The condition in (\ref{cond-scaling}) implies $r_k+r_q\le 1$, which means that for the SINR to be favourable, the per-source-destination-pair relay power should scale no lower than $1/M$. This also shows a tradeoff between $r_k$ and $r_q$. Recall that $0\le r_k,r_q\le 1$. That is, with extra relay antennas, we can serve more users or use less relay power for the same level of performance, but the improvement in the two aspects has a total limit.
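The scaling law and the favourable-SINR condition are easy to codify. The following sketch (ours, for exploring parameter regions only) evaluates $r_s$ from (\ref{SNR-scaling}) and checks the condition (\ref{cond-scaling}):
\begin{verbatim}
def sinr_exponent(r_k, r_p, r_q, r_c):
    # SINR scaling exponent r_s from Theorem 1
    assert all(0.0 <= r <= 1.0 for r in (r_k, r_p, r_q, r_c))
    return 1.0 - r_c - max(r_p, r_k + r_q)

def favourable(r_k, r_p, r_q, r_c):
    # favourable-SINR condition of Corollary 1 (r_s >= 0)
    return sinr_exponent(r_k, r_p, r_q, r_c) >= 0.0

print(sinr_exponent(0.0, 0.5, 0.5, 0.0))  # 0.5: SINR grows as sqrt(M)
print(favourable(1.0, 0.0, 0.0, 0.0))     # True: K ~ M, constant powers/CSI
print(favourable(0.0, 1.0, 0.0, 0.5))     # False: r_c + r_p > 1
\end{verbatim}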
For example, two boundary cases satisfying $r_k+r_q\le 1$ are 1) $r_k=1$, $r_q=0$; and 2) $r_q=1$, $r_k=0$. The first case means that when the number of users increases linearly with the number of relay antennas (i.e., $r_k=1$), the relay power must remain constant (i.e., $r_q=0$), and thus the goal of saving relay power cannot be achieved. The second case is the opposite: when the relay power is scaled down inversely with the number of relay antennas, the goal of serving more users cannot be achieved.

\subsection{Discussions on Several Popular Network Settings}
In this subsection, we further elaborate on the scaling law in (\ref{SNR-scaling}) and the condition in (\ref{cond-scaling}) for popular network settings.
\begin{enumerate}
\item First, we consider the case of $r_c=0$, corresponding to the case of perfect or constant CSI quality (e.g., $E_t$ increases linearly in $M$). From (\ref{SNR-scaling}) and (\ref{cond-scaling}), the SINR scaling exponent is $r_s=1-\max(r_p,r_k+r_q)$ and the necessary and sufficient condition for a favourable SINR is $r_k+r_q\le 1$. The physical meaning is that, when the CSI is perfect and the SINR is to remain favourable, the most power-saving design is to let the per-source powers of both hops decrease inversely with the number of antennas. Thus, when the CSI quality is good, we can design the network to serve more users and/or save power consumption while maintaining a certain quality-of-service.
\item Next, we consider the case of $r_c=1$, which is equivalent to $E_t= {\mathcal O}(1/M)$. This means that the total energy used in training and the CSI quality are inversely proportional to the relay antenna number. In this case, the SINR scaling exponent is $r_s=-\max(r_p,r_k+r_q).$ To have a favourable SINR, from (\ref{cond-scaling}), we need $r_p=r_k=r_q=0$. That is, the source transmit power, the per-source relay power, and the number of users should all remain constant. This shows that the CSI quality is key to the performance of the massive MIMO relay network: with low CSI quality, all the promising features of the massive MIMO network are lost.
\item For a general $r_c\in(0,1)$, a favourable SINR requires $\max(r_p,r_k+r_q)\le1-r_c$. That is, the worse per-source transmit power of the two hops cannot scale lower than ${\mathcal O}(1/M^{1-r_c})$. This shows the tradeoff between the training phase and the data transmission phase.
\item For the most power-saving setting where $r_p=1$ or $r_k+r_q=1$, the per-source transmit power of at least one of the two hops scales as $1/M$. To have a favourable SINR, $r_c=0$ is needed. Thus, the per-source transmit power of either or both hops can be made inversely proportional to the number of relay antennas, but at the same time the CSI quality must remain at least constant, not a decreasing function of $M$. If furthermore $r_k=0$ (e.g., the number of source-destination pairs $K$ remains constant), then in this setting $P$ or $Q$ scales as $1/M$, which is the major power scaling scenario considered in the literature. Our results cover this case and show more insights by considering the scalings of $K$ and $P_c$.
\item While in the previous discussions $r_c$ is treated as a free parameter, we next consider the special case of $P_t=P$ and $\tau=K$. The condition $P_t=P$ corresponds to the practical scenario in which user devices always use the same transmit power, for both training and data transmission. It is a common assumption in the literature \cite{efficiency-twoway-mrcmrt-csi, efficiency-zf, rate-CSI}.
$\tau=K$ is the minimum training length for effective communications \cite{efficiency}. It is shown in \cite{MRCvsZF} that, for maximal-ratio processing, this setting achieves the maximal spectral efficiency. In this case, $r_c=\max\{0, r_p-r_k\}$. Consequently, the SINR scaling exponent is $r_s=1-\max\{0, r_p-r_k\}-\max(r_p,r_k+r_q).$ For the SINR to be favourable, we need $\max(r_k+r_q,2r_p-r_k,r_p+r_q)\le 1$. If further $r_k=0$, i.e., the number of source-destination pairs is constant, a favourable SINR requires $r_p\le 1/2$, i.e., the source transmit power can be reduced at most as $1/\sqrt{M}$. This is the same conclusion as in \cite{efficiency-twoway-mrcmrt-csi, efficiency-zf, rate-CSI}, but note that our model is different from \cite{efficiency-twoway-mrcmrt-csi, efficiency-zf, rate-CSI} and is more general.
\item Another popular setting is to have the number of source-destination pairs increase linearly with $M$, i.e., $r_k=1$. One example is to keep $K/M$ constant as $M$ increases. From (\ref{SNR-scaling}) and (\ref{cond-scaling}), in this case the SINR scaling exponent is $r_s=-r_c-r_{q}$, and to have a favourable SINR we need $r_c=r_{q}=0$. Thus, to support such a number of source-destination pairs, the CSI quality must be high and at the same time the relay power cannot decrease with $M$.
\end{enumerate}

\section{Systems with Asymptotically Deterministic SINR}
\label{sec:deter}
One important concept in massive MIMO systems is the asymptotically deterministic property. For example, with receiver combining and/or pre-coding at the base station or relay station, quantities such as the signal power and the interference power, which are random in the finite-dimensional case, converge to deterministic values as the number of relay antennas grows large \cite{1,2}. This effect is also called channel hardening \cite{massive,massive-1}. With channel hardening, the small-scale fading effect is negligible, and so is the channel variation in the frequency domain. This not only simplifies many design issues but also enables performance analysis via the deterministic equivalences of the random variables, e.g., \cite{efficiency,1,2}. One important question is thus when the massive MIMO system has an asymptotically deterministic SINR, so that the corresponding performance analysis is valid. In this section, we derive a sufficient condition for an asymptotically deterministic SINR and discuss typical scenarios. The result is summarized in the following proposition.
\begin{proposition}
\label{pro:condition}
When $M\gg 1$, a sufficient condition for the SINR to be asymptotically deterministic is
\begin{eqnarray*}
&&1)\text{ } r_s+r_c+\max\{r_p,r_k+r_q\} = 1,\quad 2)\text{ } 2r_s+2r_c+r_k\le 1, \quad \\
&&3)\text{ } 2r_s+3r_c+2r_p \le 2, \quad 4)\text{ } r_c,r_p,r_q,r_k \in [0,1].
\label{eq:suff-con}
\end{eqnarray*}
\end{proposition}
\begin{proof}
Please see Appendix B.
\end{proof}
From Constraint 2), we can see that $r_s\le 1/2$, meaning that under the sufficient condition the highest possible SINR scaling is $\sqrt{M}$. In addition, $r_c\le 1/2$, meaning that to make the SINR asymptotically deterministic, the CSI quality should scale no lower than $1/\sqrt{M}$. By the definition of $P_c$ in (\ref{eq:pc}), the lowest scaling the training energy $E_t$ can have is $1/\sqrt{M}$. Note that, for a favourable SINR, the scaling of the CSI quality parameter can be as low as $1/M$; for an asymptotically deterministic SINR, the constraint on the CSI quality is therefore stricter.
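Analogously to the sketch given for Theorem \ref{thm-1}, the sufficient condition of Proposition \ref{pro:condition} can be checked programmatically (our helper, for exploration only):
\begin{verbatim}
def deterministic_sufficient(r_k, r_p, r_q, r_c):
    # sufficient condition of Proposition 1 for an
    # asymptotically deterministic SINR
    if not all(0.0 <= r <= 1.0 for r in (r_k, r_p, r_q, r_c)):
        return False                                  # constraint 4)
    r_s = 1.0 - r_c - max(r_p, r_k + r_q)             # constraint 1)
    return (2*r_s + 2*r_c + r_k <= 1.0                # constraint 2)
            and 2*r_s + 3*r_c + 2*r_p <= 2.0)         # constraint 3)

# r_s = 1/2 with constant K and CSI quality: covered
print(deterministic_sufficient(0.0, 0.5, 0.5, 0.0))   # True
# all exponents zero gives r_s = 1, which violates constraint 2):
# the linearly increasing SINR case is not covered
print(deterministic_sufficient(0.0, 0.0, 0.0, 0.0))   # False
\end{verbatim}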
Next, we investigate typical scenarios for the SINR scaling, which include all possible cases if $r_s$ and $r_c$ are allowed to take values from $\{0,1/2,1\}$ only. The tradeoff between the parameters will be revealed.
\begin{enumerate}
\item To achieve both $r_s=1/2$ (the SINR increases as $\sqrt{M}$) and an asymptotically deterministic SINR, the sufficient condition reduces to $r_k=0$, $r_c=0$, and $\max\{r_p,r_q\}=1/2$. It means that when the number of users and the CSI quality remain constant, the lower of the source power and the relay power must scale as $1/\sqrt{M}$. While in existing works only the constant-SINR case ($r_s=0$) has been considered \cite{efficiency, 1,2}, our result shows that the SINR can scale as $\sqrt{M}$ while remaining asymptotically deterministic.
\item To achieve $r_s=0$ (a constant SINR level) and an asymptotically deterministic SINR, two cases are possible: a) $r_c=0$ and $\max\{r_p,r_k+r_q\}=1$; and b) $r_c=1/2$, $r_k=0$, $r_p\le 1/4$, and $r_q= 1/2$. For Case a), when the CSI quality has constant scaling (e.g., perfect CSI or high-quality channel estimation), the lower per-source transmit power of the two hops should scale as $1/M$. This is the case considered in \cite{1,2}. Similar scenarios for massive MIMO systems without relays have also been reported in \cite{efficiency}. Case b) indicates that when the CSI quality scales as $1/\sqrt{M}$ (e.g., the training power scales as $1/\sqrt{M}$ with fixed training length), the number of source-destination pairs should remain constant, the relay power should scale as $1/M$, and the source power can decrease no faster than $1/\sqrt[4]{M}$.
\end{enumerate}

\section{Systems with Linearly Increasing SINR}
\label{sec:linear}
In our analysis of the asymptotically deterministic SINR, the scaling of the SINR is no larger than ${\mathcal O}(\sqrt{M})$. Meanwhile, it can be seen from (\ref{SNR-scaling}) that the maximum scaling of the SINR with respect to the number of relay antennas $M$ is ${\mathcal O}(M)$, i.e., linearly increasing with $M$. This is a very attractive scenario for massive MIMO relay networks, in the sense that for $M\gg 1$ a significant improvement in the network throughput and communication quality can be achieved. Possible applications for such a scenario are networks with high reliability and throughput requirements, such as industrial wireless networks and high-definition video transmission. In this section, we study networks with linearly increasing SINR. First, the condition on the parameter scalings for the SINR to be linearly increasing is investigated. We then show that in this case the interference power is not asymptotically deterministic but has a non-diminishing SCV as $M\rightarrow \infty$. Thus deterministic equivalence analysis does not apply, and the small-scale fading effect needs to be considered in analysing the performance. We first derive a closed-form PDF of the interference power, then obtain expressions for the outage probability and ABER, and reveal their scalings with the network parameters.
\begin{proposition}
\label{pro:linearSINR}
When $M\gg 1$, the necessary and sufficient condition for the average SINR to scale as ${\mathcal O}(M)$ is $r_c=r_q=r_p=r_k=0$, i.e., the CSI quality, the source transmit power, the relay power, and the number of users all remain constant.
In this case, the SINR can be approximated as
\begin{equation}
{\rm SINR}_{i,e}\approx \frac{M}{P_{i,e}\frac{(K-1)}{P_c^4}+\left(\frac{1}{P}+\frac{K}{Q}\right)(\frac{1}{P_c}+\frac{K}{MP_c^2})+2\left(\frac{1}{P_c}-1\right)+\frac{K}{M}\left(\frac{1}{P_c}-1\right)^2},
\label{eq:SINR_app_high}
\end{equation}
where ${\rm SCV}\{P_{i,e}\}\approx \frac{1}{K-1}$.
\end{proposition}
\begin{proof}
Please see Appendix C.
\end{proof}
Proposition \ref{pro:linearSINR} shows that for a linearly increasing SINR, the interference power is not asymptotically deterministic and does not diminish as $M$ increases. In addition, the randomness of the interference power is the dominant contributor to the random behaviour of the SINR. With this result, to analyse the outage probability and ABER performance, the distribution of the interference needs to be derived.
\begin{proposition}
\label{pro:pdf}
Define
\begin{eqnarray}
&&\rho_e=\frac{1}{\sqrt{M}}\frac{\sqrt{\frac{4}{P_c}+10}}{2+\frac{K}{MP_c}},\label{eq:miu}\\
&&b_e=(K-1)\rho_e, \quad c_e=1-\rho_e ,\quad d_e=\frac{P_c^3}{K-1}\left(2+\frac{K}{MP_c}\right).
\end{eqnarray}
When $M\gg 1$, the PDF of $P_{i,e}$ has the following approximation:
\begin{equation}\label{equ4}
f_{P_{i,e}}(y)=\frac{c_e}{b_e+c_e}\sum\limits_{i=0}^\infty \left(\frac{b_e}{b_e+c_e}\right)^i\phi\hspace{-0.5mm}\left(y;K+i-1,d_ec_e\right),
\end{equation}
where $\phi(y;\alpha,\beta)=\frac{y^{\alpha-1}e^{-y/\beta}}{\beta^\alpha(\alpha-1)!}$ is the PDF of the Gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$. It can also be rewritten in the following closed-form expression:
\begin{equation}
\label{cf-pdf-e}
f_{P_{i,e}}(y)\approx\frac{(b_e+c_e)^{K-3}}{d_e b_e^{K-2}} \left[e^{-\frac{y}{d_e(b_e+c_e)}}-e^{-\frac{y}{d_ec_e}} \hspace{-1mm}\sum_{n=0}^{K-3}\hspace{-0.5mm}\frac{1}{n!}\left(\hspace{-0.5mm}\frac{b_e}{d_ec_e(b_e+c_e)}y\right)^{n}\hspace{-0.5mm}\right].\hspace{1cm}
\end{equation}
\end{proposition}
\begin{proof}
Please see Appendix D.
\end{proof}
From (\ref{equ4}), it can be seen that the interference power follows an infinite mixture of Gamma distributions with the same scale parameter $d_ec_e$ but different shape parameters. As (\ref{equ4}) is in the form of an infinite summation, it is manipulated into (\ref{cf-pdf-e}) for further analysis. Besides, when the CSI quality is high, i.e., $P_c \approx 1$, we have $K/(MP_c)\ll 1$ and thus $\rho_e$ and $d_e$ can be simplified by ignoring the term $K/(MP_c)$. Compared with the perfect-CSI case where $P_c=1$, the CSI error makes $d_ec_e$ smaller.

\subsection{Outage Probability Analysis}
The outage probability is the probability that the SINR falls below a certain threshold. Due to the complexity of relay communications, the inter-user interference, and the large scale, an outage probability analysis of multi-user massive MIMO relay networks is not available in the literature. The derived approximate PDF of the interference power in (\ref{cf-pdf-e}) and the simplified SINR approximation in (\ref{eq:SINR_app_high}) for the linearly increasing SINR case allow the following outage probability derivation. Let $\gamma_{th}$ be the SINR threshold and define
\[\xi\triangleq \left(\frac{1}{P}+\frac{K}{Q}\right)\left(\frac{1}{P_c}+\frac{K}{MP_c^2}\right)+2\left(\frac{1}{P_c}-1\right)+\frac{K}{M}\left(\frac{1}{P_c}-1\right)^2.
\]
The outage probability of User $i$ can be approximated as
\setlength\arraycolsep{1pt}
\begin{eqnarray*}
P_{out}(\gamma_{th})&=&{\mathbb{P}}({\rm SINR}_{i,e}<\gamma_{th})\nonumber \\
&\approx& {\mathbb{P}}\left(\frac{M}{P_{i,e}\frac{K-1}{P_c^4}+\xi}<\gamma_{th}\right)={\mathbb{P}}\left(P_{i,e}>\left(\frac{M}{\gamma_{th}}-\xi\right) \frac{P_c^4}{K-1}\right)\\
&=&\left\{\begin{array}{ll} 1 & \mbox{\ \ if $\gamma_{th}\ge\frac{M}{\xi}$}\\ {\mathbb{P}}\left(P_{i,e}>\left(\frac{M}{\gamma_{th}}-\xi\right) \frac{P_c^4}{K-1} \right)& \mbox{\ \ otherwise}\end{array}\right..\label{outage_expression}
\end{eqnarray*}
When $\gamma_{th}< \frac{M}{\xi}$, from (\ref{cf-pdf-e}), we have
\begin{eqnarray}
&&\hspace{-2mm} P_{out}(\gamma_{th}) \approx \left(\frac{b_e}{b_e+c_e}\right)^{\hspace{-1mm} 2-K}\hspace{-2mm} e^{-\frac{\left(\frac{M}{\gamma_{th}}-\xi\right)P_c^4}{(K-1)d_e(b_e+c_e)}}\nonumber \\
&&\qquad-\frac{c_e}{b_e+c_e} \hspace{-1mm}\sum_{n=0}^{K-3} \frac{1}{\Gamma(n+1)} \left(\frac{b_e}{b_e+c_e}\right)^{n-K+2}\hspace{-2mm}\Gamma\hspace{-1mm}\left(n+1,\frac{\left(\frac{M}{\gamma_{th}}-\xi\right)P_c^4 }{(K-1)d_ec_e}\right)\hspace{-1mm}, \label{outageprob}
\end{eqnarray}
where $\Gamma(s,x)\triangleq\int_x^\infty t^{s-1}e^{-t}\mathrm{d}t$ is the upper incomplete gamma function \cite{Int}. This outage probability expression is too complex to offer useful insights. A simplified expression for systems with high CSI quality is derived in the following proposition.
\begin{proposition}
\label{pro:outage_app}
Define
$$D\triangleq \frac{\left(2\left(1-P_c\right)+\frac{1}{P}+\frac{K}{Q}\right)P_c^3}{(K-1)d_e(b_e+c_e)}.$$
When $E_t\gg1$ and $M\gg \gamma_{th}\left(2d_ec_e(1+\frac{c_e}{{b_e}}) K(K-1)+\frac{1}{P}+\frac{K}{Q}\right)$, we have
\begin{equation}
P_{out}(\gamma_{th}) \approx \left(\frac{b_e}{b_e+c_e}\right)^{2-K}e^{D-\frac{MP_c^4}{\gamma_{th}(K-1)d_e(b_e+c_e)}}.
\label{outate_app}
\end{equation}
\end{proposition}
\begin{proof}
By the definitions of $P_c$ and $E_t$ in (\ref{eq:pc}), when $E_t\gg1$, we have $P_c\approx 1$ and thus $\xi\approx 1/P+K/Q$. Further define
$$a\triangleq\frac{b_e\left(\frac{M}{\gamma_{th}}-\xi\right)P_c^4}{(K-1)d_ec_e(b_e+c_e)}.$$
When $M\gg \gamma_{th}\left(2d_ec_e(1+\frac{c_e}{{b_e}}) K(K-1)+\frac{1}{P}+\frac{K}{Q}\right)$, we have $a\gg 2K>1$ and therefore
$$\frac{\left(\frac{M}{\gamma_{th}}-\xi\right)P_c^4}{(K-1)d_ec_e} \gg 1.$$
Then, from \cite[8.357]{Int}, we know that
$$\Gamma\left(n+1,\frac{\left(\frac{M}{\gamma_{th}}-\xi\right)P_c^4 }{(K-1)d_ec_e}\right)\approx \left(\frac{\left(\frac{M}{\gamma_{th}}-\xi \right)P_c^4}{(K-1)d_ec_e}\right)^n e^{-\frac{\left(\frac{M}{\gamma_{th}}-\xi\right)P_c^4}{(K-1)d_ec_e}}.$$
With this approximation, the outage probability expression in (\ref{outageprob}) can be reformulated as
\begin{eqnarray*}
\hspace{-2mm} P_{out}(\gamma_{th}) &\approx & \left(\hspace{-1mm}\frac{b_e}{b_e+c_e}\hspace{-1mm}\right)^{\hspace{-1mm}2-K}\hspace{-4mm} e^{-\frac{\left(\frac{M}{\gamma_{th}}-\xi\right)P_c^4}{(K-1)d_e(b_e+c_e)}}\hspace{-2mm}\left[1-\frac{c_e}{b_e+c_e}e^{-a}\sum_{n=0}^{K-3}\frac{a^n}{\Gamma(n+1)}\right].
\end{eqnarray*}
Notice that since $a\gg 2K>1$, we have $e^{a}\gg\sum_{n=0}^{K-3}\frac{a^n}{\Gamma(n+1)}$. Thus the second term in the bracket can be ignored, and the approximation in (\ref{outate_app}) is obtained.
\end{proof}
We can see that the outage probability approximation in (\ref{outate_app}) is tight when the number of relay antennas is much larger than the number of source-destination pairs and the training and transmit powers are high. These conditions result in a high received SINR; the approximation in (\ref{outate_app}) therefore applies to the high-SINR case. Note that (\ref{outate_app}) can also be obtained by deleting the second summation term in the PDF formula in (\ref{cf-pdf-e}) and then integrating the approximated PDF. This is because, for the high-SINR case, the outage probability is determined by the SINR distribution in the small-SINR region, which is equivalently the high-interference-power region, corresponding to the tail of the PDF of the interference power. It can be seen from the PDF in (\ref{cf-pdf-e}) that the first term has a heavier tail and thus dominates the outage probability.

Now, we explore insights from (\ref{outate_app}). As $b_e,c_e,d_e$ are independent of $P$ and $Q$, the outage probability scales as $e^{\frac{P_c^3}{P(K-1)d_e(b_e+c_e)}}$ with $P$ and as $e^{\frac{KP_c^3}{Q(K-1)d_e(b_e+c_e)}}$ with $Q$. Firstly, this shows the natural phenomenon that increasing $P$ or $Q$ decreases the outage probability. Also, we can see that the outage probability curve with respect to $Q$ has a sharper slope than that with respect to $P$. For example, let $P=Q=\alpha$; doubling $P$ alone will shrink the outage probability by a factor of $e^{\frac{P_c^3}{2(K-1)d_e(b_e+c_e)\alpha}}$, while doubling $Q$ alone will shrink the outage probability by a factor of $e^{\frac{KP_c^3}{2(K-1)d_e(b_e+c_e)\alpha}}$, which is the $K$th power of the shrinkage in the doubling-$P$ case. Furthermore, the outage probability will not diminish to zero as the user and relay transmit powers increase: an error floor exists due to the inter-user interference. On the other hand, increasing the number of relay antennas leads to a faster decrease in the outage probability and drives it to zero. Note that in our analysis we assume $M\gg 1$ but finite, so terms with $1/\sqrt{M}$ are not treated as asymptotically small and thus are not ignored. If $M\rightarrow \infty$ and $P_c\rightarrow 1$, the $1/\sqrt{M}$ terms can be taken as $0$ and we have $P_{out}(\gamma_{th})\approx\left(\frac{(K-1)\sqrt{3.5}}{\sqrt{M}}\right)^{2-K}e^{-\frac{M}{2\gamma_{th}}}.$ However, this asymptotic result is of limited practical value because the number of massive MIMO antennas is usually a few hundred in practice, so that $\sqrt{M}$ may not be much larger than other parameters such as $K$, $P$, and $Q$.

\subsection{ABER Analysis}
The ABER is another important performance metric. Due to the complexity of the SINR distribution, an ABER analysis of the massive MIMO relay network is not available in the literature. For the linearly increasing SINR case, the ABER can be analysed as follows. Denote the ABER as $P_b(e)$. It is given by
\begin{equation}
P_b(e)=\int_0^{\infty}P_b(e|r)f_{\rm SINR}(r){\rm d}r,\label{eq:defpb}
\end{equation}
where $P_b(e|r)$ is the conditional error probability and $f_{\rm SINR}(r)$ is the PDF of the SINR. For channels with additive white Gaussian noise, $P_b(e|r)=A {\rm erfc}\left(\sqrt{B r}\right)$ for several Gray bit-mapped constellations employed in practical systems, where ${\rm erfc}(x)$ is the complementary error function and $A$ and $B$ are constants that depend on the modulation. For example, for BPSK, $A=0.5$ and $B=1$.
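Both metrics can be estimated by Monte Carlo once samples of the interference power are available. The sketch below (ours, with illustrative parameter values) draws $P_{i,e}$ from the Gamma-mixture form (\ref{equ4}), which is convenient to sample because the mixture index follows a geometric distribution, maps the samples to SINR values through (\ref{eq:SINR_app_high}), and estimates the outage probability and the BPSK ABER:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
M, K, P, Q, Pc = 200, 10, 10.0, 10.0, 0.95  # illustrative values
A, B, gamma_th = 0.5, 1.0, 10**0.8          # BPSK, threshold 8 dB

# mixture parameters from Proposition 3
rho = (1/np.sqrt(M)) * np.sqrt(4/Pc + 10) / (2 + K/(M*Pc))
b, c = (K - 1)*rho, 1 - rho
d = Pc**3 / (K - 1) * (2 + K/(M*Pc))

# mixture index i with P(i) = (c/(b+c)) * (b/(b+c))^i, i = 0,1,...
n_samp = 200000
i = rng.geometric(c/(b + c), size=n_samp) - 1
P_ie = rng.gamma(shape=K + i - 1, scale=d*c)  # Gamma(K+i-1, d_e c_e)

# SINR approximation (eq:SINR_app_high)
xi = ((1/P + K/Q)*(1/Pc + K/(M*Pc**2))
      + 2*(1/Pc - 1) + (K/M)*(1/Pc - 1)**2)
sinr = M / (P_ie*(K - 1)/Pc**4 + xi)

print("outage:", np.mean(sinr < gamma_th))
print("ABER:  ", np.mean(A * erfc(np.sqrt(B * sinr))))
\end{verbatim}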
For the linearly increasing SINR case, with the PDF of the interference power in (\ref{cf-pdf-e}) and the SINR approximation in (\ref{eq:SINR_app_high}), the PDF of the SINR can be derived as
\begin{eqnarray}
&&f_{\rm SINR}(r)=\frac{\left(b_e+c_e\right)^{K-3}MP_c^4}{r^2 (K-1)d_eb_e^{K-2}}e^{-\frac{\left(\frac{M}{r}-\xi\right)P_c^4}{(K-1)d_e(b_e+c_e)}}\nonumber\\
&&\hspace{1.9cm}-\sum_{n=0}^{K-3}\frac{(b_e+c_e)^{K-n-3}MP_c^{4n+4}}{\Gamma(n+1)((K-1)d_e)^{n+1}c_e^nb_e^{K-n-2}}\frac{\left(\frac{M}{r}-\xi\right)^n}{r^2}e^{-\frac{\left(\frac{M}{r}-\xi\right)P_c^4}{(K-1)d_ec_e}}, r\in\left(0,\frac{M}{\xi}\right).
\label{pdf_SINR}
\end{eqnarray}
By using (\ref{pdf_SINR}) in (\ref{eq:defpb}), an approximation of the ABER is derived in the following proposition.
\begin{proposition}
\label{pro:BER}
When $E_t\gg1$ and $M\gg 2d_ec_e(1+c_e/b_e)K(K-1)+\frac{1}{P}+\frac{K}{Q}$, the ABER can be approximated as
\begin{eqnarray}
&&P_b(e)\approx A\left(\frac{b_e}{b_e+c_e}\right)^{2-K}e^{D-2P_c^2\sqrt{\frac{BM}{(K-1)d_e(b_e+c_e)}}}.
\label{eq:BER_2}
\end{eqnarray}
\end{proposition}
\begin{proof}
The PDF of the SINR in (\ref{pdf_SINR}) can be rewritten as
\begin{eqnarray}
&&\hspace{-2cm} f_{\rm SINR}(r)=\frac{\left(b_e+c_e\right)^{K-3}MP_c^4}{r^2 (K-1)d_eb_e^{K-2}}e^{-\frac{\left(\frac{M}{r}-\xi\right)P_c^4}{(K-1)d_e(b_e+c_e)}}\left[1-\frac{\sum_{n=0}^{K-3}\frac{\left(\frac{b_e\left(\frac{M}{r}-\xi\right)P_c^4}{(K-1)d_ec_e(b_e+c_e)}\right)^n}{\Gamma(n+1)}}{e^{\frac{b_e\left(\frac{M}{r}-\xi\right)P_c^4}{(K-1)d_ec_e(b_e+c_e)}}}\right].\label{pdf_SINR_1}
\end{eqnarray}
As the ABER is determined by the PDF when $r$ is small \cite{giannakis}, we consider the range $r<1$. With $E_t\gg1$ and $M\gg 2d_ec_e(1+c_e/b_e)K(K-1)+1/P+K/Q$, similarly to the proof of Proposition \ref{pro:outage_app}, we can show that ${\sum_{n=0}^{K-3}{\left(\frac{b_e\left(\frac{M}{r}-\xi\right)P_c^4}{(K-1)d_ec_e(b_e+c_e)}\right)^n}/{\Gamma(n+1)}}{/}{e^{\frac{b_e\left(\frac{M}{r}-\xi\right)P_c^4}{(K-1)d_ec_e(b_e+c_e)}}}\ll 1$, and thus this term can be ignored. The ABER can then be derived by solving $\int_{r=0}^{M/\xi}A {\rm erfc}(\sqrt{Br})f_{\rm SINR}(r) {\rm d}r$. As the ABER is determined by the region where $r$ is small, we extend the integration range to $(0,\infty)$ for a tractable approximation. By using ${\rm erfc}(x)=\Gamma(\frac{1}{2},x^2)/\sqrt{\pi}$, the integration formula $\int_{0}^{\infty}e^{-\mu x}\Gamma(v,\frac{a}{x}){\rm d}x=2a^{v/2}\mu^{v/2-1}K_v(2\sqrt{\mu a})$ \cite{Int}, and $K_{\frac{1}{2}}(x)=\sqrt{\frac{\pi}{2x}}e^{-x}$ \cite{Handbook}, the ABER approximation in (\ref{eq:BER_2}) is obtained.
\end{proof}
We can see from (\ref{eq:BER_2}) that increasing $M$ makes the ABER decrease and approach zero. Moreover, for very large $M$ the ABER behaves as $Ce^{-C'\sqrt{M}}$. As is known, the ABER of a traditional MIMO system with $M$ transmit antennas and one receive antenna under Rayleigh fading behaves as $C_{MIMO}{\rm SINR}^{-C_{MIMO}'M}$. This shows a different ABER behaviour in the massive MIMO relay network, where the ABER decreases exponentially with respect to $\sqrt{M}$. If the diversity gain definition of traditional MIMO systems is used \cite{MIMO}, the massive relay network has infinite diversity gain. Comparing (\ref{eq:BER_2}) with (\ref{outate_app}), we see that the ABER and the outage probability have the same scalings with $P$ and $Q$, respectively. Thus the $P$ and $Q$ scaling analysis for the outage probability also applies to the ABER.
In addition, if the threshold is set as $\gamma_{th}=\sqrt{\frac{MP_c^4}{4B(K-1)d_e(b_e+c_e)}}$, the ABER equals $A$ times the outage probability. Thus, there is a simple transformation between the two metrics.

\section{Simulation Results}
\label{sec:simu}
In this section, simulation results are shown to verify the analytical results.
\begin{table}
\caption{The network settings for Fig.~\ref{fig:scaling}}
\center
\begin{tabular}{|c|c|c|c|c|c|}
\hline
 & $P_c$ & $P$ & $Q$ & $K$ & $r_s$\\
\hline Case 1& 0.8 & 10 & 10 & $M/10$ & 0\\
\hline Case 2& $100/M$& 10 & 10& 10 & 0\\
\hline Case 3& 0.8 & 10 & $1/\sqrt{M}$& $\lfloor\sqrt{M}\rfloor$ &0\\
\hline Case 4& 0.8 & 1 & 1& 20 & 1\\
\hline Case 5& $10/\sqrt{M}$& 10 &10 &20& $1/2$\\
\hline
\end{tabular}
\label{table}
\end{table}
\begin{figure}
\center
\includegraphics[width=5in]{scaling.eps}\vspace{-5mm}
\caption{Average SINR vs.~the number of relay antennas $M$ for different network scenarios.}
\label{fig:scaling}
\end{figure}
In Fig.~\ref{fig:scaling}, the simulated average SINR is shown as a function of the number of relay antennas $M$ for the five network settings given in Table \ref{table}, to verify the SINR scaling result in Theorem \ref{thm-1}. In the table, $\lfloor \cdot \rfloor$ is the floor function. For the different settings of the network parameters, the SINR scalings ($r_s$ values) calculated from the scaling law in (\ref{SNR-scaling}) are shown in the table. The first three cases have constant scaling; in Case 4 and Case 5, the average SINR scales as $\mathcal{O}(M)$ and $\mathcal{O}(\sqrt{M})$, respectively. The figure confirms these scaling law results.
\begin{figure}[t]
\center
\includegraphics[width=5in]{capacity_lowbound.eps} \vspace{-5mm}
\caption{Achievable rate for different numbers of sources. $M=200$ and $100$, $P=Q=0$ dB, $P_c=1/2$.}
\label{fig-rate}
\end{figure}
In Fig.~\ref{fig-rate}, the average achievable rate per source-destination pair is simulated for different numbers of sources with $200$ or $100$ relay antennas. The source and relay powers are set to $0$ dB and the CSI quality is set as $P_c=1/2$. We can see that the lower bound in (\ref{rate-LB}) is very tight. For a given number of relay antennas, the achievable rate per source-destination pair decreases as there are more pairs.
\begin{figure}[t]
\center
\includegraphics[width=5in]{PDF_inter_error_new.eps} \vspace{-5mm}
\caption{PDF of the interference power. $K=20$ or $10$, $P_c=0.8$, $M=200$.}
\label{PDF_inter}
\end{figure}
In Fig.~\ref{PDF_inter}, for a relay network with $20$ or $10$ source-destination pairs and $200$ relay antennas, the simulated PDF of $P_{i,e}$ is shown. The CSI quality parameter is set as $P_c=0.8$. The analytical expression in (\ref{cf-pdf-e}) is compared with the simulated values. We can see from Fig.~\ref{PDF_inter} that the PDF approximation is tight over the whole parameter range. In particular, the approximation matches tightly at the tail, where the interference power is large, which is the dominant range for the outage probability and ABER.
\begin{figure}[t]
\center
\includegraphics[width=5in]{outageVsM_draft.eps} \vspace{-5mm}
\caption{Outage probability for different numbers of relay antennas. $K=8 \text{ or } 12$, $P=Q=10$ dB, $\gamma_{th}=8$ dB, $P_c=0.95$.}
\end{figure}
Fig.~4 shows the outage probability for different numbers of relay antennas. The analytical expressions in (\ref{outageprob}) and (\ref{outate_app}) are compared with the simulated values. The transmit powers of the users and the relay are set as $10$ dB. The CSI quality parameter is set as $P_c=0.95$.
The number of sources is $8$ or $12$ and the SINR threshold is $8$ dB. We can see that our analytical result in (\ref{outageprob}) and the further approximation in (\ref{outate_app}) are both tight for all the simulated parameter ranges. Besides, the approximations become tighter as the number of relay antennas increases. \begin{figure}[t] \center \includegraphics[width=5in]{ABER_M_draft_simple.eps} \vspace{-5mm} \caption{Average bit error rate of BPSK for different numbers of relay antennas $M$. $K=8 \text{ or } 12$, $P=Q=10$ dB, $P_c=0.95$.} \label{fig:aber_M_Ksmall} \end{figure} In Fig.~\ref{fig:aber_M_Ksmall}, the ABER for BPSK is simulated for different numbers of relay antennas with $K=8$ or $12$, $P=Q=10$ dB and $P_c=0.95$. The analytical approximation in (\ref{eq:BER_2}) is compared with the simulated values. From the figure, we can see that the analytical result in (\ref{eq:BER_2}) is tight for the simulated values, and is tighter when the number of source-destination pairs is smaller. \section{Conclusion} \label{sec:con} In this work, we analysed the performance of a massive MIMO relay network with multiple source-destination pairs under MRC/MRT relaying with imperfect CSI. Firstly, the performance scaling law is analysed, which shows that the scaling of the SINR is determined by the sum of the scaling of the CSI quality and the scaling of the larger of the per-source transmission powers of the two hops. With this result, typical scenarios and trade-offs between parameters are shown. Our scaling law is comprehensive as it takes into consideration many network parameters, including the number of relay antennas, the number of source-destination pairs, the source transmit power and the relay transmit power. Then, a sufficient condition for an asymptotically deterministic SINR is derived, based on which new network scenarios for systems with the asymptotically deterministic property are found and trade-offs between the parameters are analysed. Finally, we specify the necessary and sufficient condition for networks whose SINR increases linearly with the number of relay antennas. In addition, our work shows that in this case the interference power does not become asymptotically deterministic, and we derive the PDF of the interference power in closed form. The outage probability and ABER expressions for the relay network are then obtained and their behaviour with respect to the network parameters is analysed. Simulations show that the analytical results are tight. \section*{Appendix} \subsection{Derivations of ${\mathbb E}\{P_{s,e}\}$ and ${\rm SCV}\{P_{s,e}\}$} \label{sec-app1} Firstly, we have \begin{eqnarray} {\mathbb E}\{P_{s,e}\}&=&{\mathbb E}\left\lbrace\frac{|\hat{\bf g}_i\hat{\bf g}_i^H\hat{\bf f}_i^H\hat{\bf f}_i+\sum_{k=1,k\neq i}^{K}\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i|^2}{M^4}\right\rbrace \nonumber\\ &=&{\mathbb E}\left\lbrace\frac{\left(\|\hat{\bf g}_i\|_F^2\|\hat{\bf f}_i\|_F^2+\sum_{k=1,k\neq i}^{K}\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i\right)\left(\|\hat{\bf g}_i\|_F^2\|\hat{\bf f}_i\|_F^2+\sum_{k=1,k\neq i}^{K}\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i\right)^H}{M^4}\right\rbrace\nonumber\\ &=&{\mathbb E}\left\lbrace\frac{\|\hat{\bf g}_i\|_F^4\|\hat{\bf f}_i\|_F^4}{M^4}\right\rbrace+ \sum_{k=1,k\neq i}^{K}{\mathbb E}\left\lbrace\frac{|\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i|^2}{M^4}\right\rbrace,\label{appen_1} \end{eqnarray} where the last step is obtained because the means of the cross terms are zero.
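(As a cross-check of the final results of this appendix, namely ${\mathbb E}\{P_{s,e}\}\approx P_c^4$ and ${\rm SCV}\{P_{s,e}\}\approx 8/M$ derived below, the following Monte Carlo sketch can be used; the values of $M$, $K$, $P_c$ and the sample size are arbitrary test choices.)
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, K, Pc, trials = 100, 10, 0.8, 5000
Ps = np.empty(trials)

for t in range(trials):
    # hat{g}_k, hat{f}_k with i.i.d. CN(0, Pc) entries (rows k = 1..K).
    g = np.sqrt(Pc / 2) * (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M)))
    f = np.sqrt(Pc / 2) * (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M)))
    # P_{s,e} = | sum_k (g_i . g_k^*) (f_k^* . f_i) |^2 / M^4, with i = 0.
    a = g.conj() @ g[0]      # entries g_i g_k^H
    b = f.conj() @ f[0]      # entries f_k^H f_i
    Ps[t] = np.abs(np.sum(a * b)) ** 2 / M ** 4

mean, var = Ps.mean(), Ps.var()
print('E{P_se}/Pc^4 :', mean / Pc**4)       # ~ 1 (up to O(1/M) terms)
print('M * SCV      :', M * var / mean**2)  # ~ 8
\end{verbatim}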
In the first term of (\ref{appen_1}), as the entries of $\hat{\bf g}_i$ and $\hat{\bf f}_i$ are i.i.d. random variables following $\mathcal{CN}(0, P_c)$, $\|\hat{\bf g}_i\|_F^2$ and $\|\hat{\bf f}_i\|_F^2$ have a gamma distribution with shape parameter $M$ and scale parameter $P_c$. Thus, $${\mathbb E}\left\lbrace \frac{\|\hat{\bf g}_i\|_F^4\|\hat{\bf f}_i\|_F^4}{M^4}\right\rbrace=P_c^4\left(1+\frac{2}{M}+\frac{1}{M^2}\right)\approx P_c^4,$$ where the approximation is obtained by ignoring lower-order terms in $M$ when $M\gg 1$. For the remaining terms, $$\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i=\sum_{m_g=1}^M\sum_{m_f=1}^M\hat{ g}_{i,m_g}\hat{ g}_{k,m_g}^*\hat{ f}_{k,m_f}^*\hat{ f}_{i,m_f},$$ where $\hat{ g}_{i,m_g}$ is the $(i,m_g)$th entry of $\hat{\bf G}$, and $\hat{ f}_{i,m_f}$ is the $(i, m_f)$th entry of $\hat{\bf F}$. Thus $\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i$ can be seen as the summation of $M^2$ i.i.d. random variables, each with mean $0$ and variance $P_c^2$. According to the CLT, the distribution of $\frac{\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i}{M}$ converges to $\mathcal{CN}(0,P_c^4)$ when $M\rightarrow\infty$. Then $\frac{|\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i|^2}{M^4}$ has a gamma distribution with shape parameter $1$ and scale parameter $P_c^4/M^2$. Thus, we can obtain $$\sum_{k=1,k\neq i}^{K}{\mathbb E}\left\lbrace\frac{|\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i|^2}{M^4}\right\rbrace=\frac{(K-1)P_c^4}{M^2}.$$ As $M\gg K$, we have $\frac{(K-1)P_c^4}{M^2}\ll P_c^4$. Thus the mean of $P_{s,e}$ is approximately $P_c^4$. Similarly, we can derive the variance of $P_{s,e}$ as follows. \begin{eqnarray*} {\rm Var}\{P_{s,e}\}&=&{\mathbb E}\left\lbrace \frac{|\hat{\bf g}_i\hat{\bf g}_i^H\hat{\bf f}_i^H\hat{\bf f}_i+\sum_{k=1,k\neq i}^{K}\hat{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H\hat{\bf f}_i|^4}{M^8} \right\rbrace-{\mathbb E}\{P_{s,e}\}^2\\ &\approx & {\mathbb E}\left\lbrace \frac{\|\hat{\bf g}_i\|_F^8\|\hat{\bf f}_i\|_F^8}{M^8}\right\rbrace-P_c^8\left(1+\frac{2}{M}+\frac{1}{M^2} \right)^2\\ &=& P_c^8\frac{(M+3)^2(M+2)^2(M+1)^2}{M^6}-P_c^8\left(1+\frac{2}{M}+\frac{1}{M^2} \right)^2\approx \frac{8P_c^8}{M}. \end{eqnarray*} Then, we have ${\rm SCV}\{P_{s,e}\}={\rm Var}\{P_{s,e}\}/({\mathbb E}\{P_{s,e}\})^2\approx 8/M$. \subsection{Proof of Proposition \ref{pro:condition}} \label{app-B} The SINR expression in (\ref{eq:SINR_e}) can be reformulated as \begin{equation} {\rm SINR}_{i,e}= M^{r_s}\frac{P_{s,e}/P_c^4 }{P_{i,e}\frac{K-1}{P_c^4M^{1-r_s}}+\frac{1}{PP_c^4M^{1-r_s}}P_{n,e}+\frac{1}{P_c^4M^{1-r_s}}( P_{e,1}+P_{e,2}+P_{e,3})+\frac{K(1+\frac{K}{MP_c}+\frac{1}{PP_cM})}{QP_cM^{1-r_s}}}.\label{eq:asym} \end{equation} The received SINR is asymptotically deterministic when its SCV approaches zero as $M\rightarrow \infty$. However, due to the complex structure of the SINR expression, it is highly challenging to obtain its SCV directly. Alternatively, as shown in Section III, $P_{s,e}/P_c^4$ is asymptotically deterministic; thus, for the SINR to be asymptotically deterministic, the necessary and sufficient condition is that the denominator of the formula in (\ref{eq:asym}) is asymptotically deterministic. One sufficient condition is that the SCV of the denominator, denoted as ${\rm SCV}_d$, is no larger than $E/M$ for some constant $E$\footnote{Note that, when $M\rightarrow\infty$, given any positive number $\alpha$, $1/M^{\alpha}\rightarrow 0$.
But for practical applications of the deterministic equivalence analysis in large but finite-dimension systems, we consider the scenario where the SCV decreases linearly with the number of antennas, or faster. The derived condition is thus sufficient but not necessary.}. This can be expressed as \begin{eqnarray} {\rm SCV}_{d}&=&\frac{{\rm Var}\left\lbrace P_{i,e}\frac{K-1}{P_c^4M^{1-r_s}}+\frac{1}{PP_c^4M^{1-r_s}}P_{n,e}+\frac{1}{P_c^4M^{1-r_s}}( P_{e,1}+P_{e,2}+P_{e,3})\right\rbrace}{\left({\mathbb E}\left\lbrace P_{i,e}\frac{K-1}{P_c^4M^{1-r_s}}+\frac{1}{PP_c^4M^{1-r_s}}P_{n,e}+\frac{1}{P_c^4M^{1-r_s}}( P_{e,1}+P_{e,2}+P_{e,3})\right\rbrace\right)^2}\le\frac{E}{M}. \label{eq:scv} \end{eqnarray} From (\ref{eq:asym}), we have $$\frac{P_{s,e}/P_c^4 }{P_{i,e}\frac{K-1}{P_c^4M^{1-r_s}}+\frac{1}{PP_c^4M^{1-r_s}}P_{n,e}+\frac{1}{P_c^4M^{1-r_s}}( P_{e,1}+P_{e,2}+P_{e,3})+\frac{K(1+\frac{K}{MP_c}+\frac{1}{PP_cM})}{QP_cM^{1-r_s}}}={\mathcal O}(1)$$ and since $P_{s,e}/P_c^4 \overset{m.s.}{\longrightarrow} 1$, we have $${\mathbb E}\left\lbrace P_{i,e}\frac{K-1}{P_c^4M^{1-r_s}}+\frac{1}{PP_c^4M^{1-r_s}}P_{n,e}+\frac{1}{P_c^4M^{1-r_s}}( P_{e,1}+P_{e,2}+P_{e,3})\right\rbrace={\mathcal O}(1).$$ Thus (\ref{eq:scv}) is equivalent to requiring that \begin{equation} {\rm Var}\left\lbrace P_{i,e}\frac{K-1}{P_c^4M^{1-r_s}}+\frac{1}{PP_c^4M^{1-r_s}}P_{n,e}+\frac{1}{P_c^4M^{1-r_s}}( P_{e,1}+P_{e,2}+P_{e,3})\right\rbrace \le \frac{E'}{M} \label{eq:var_scv} \end{equation} for some constant $E'$. \begin{lemma} \label{lemma:var} A sufficient condition for (\ref{eq:var_scv}) is that the variance of each term in (\ref{eq:var_scv}) scales no larger than $1/M$, i.e., the maximum scaling order of $ {\rm Var}\left\lbrace P_{i,e}\frac{K-1}{P_c^4M^{1-r_s}}\right\rbrace$, ${\rm Var}\left\lbrace\frac{1}{PP_c^4M^{1-r_s}}P_{n,e}\right\rbrace$, ${\rm Var}\left\lbrace \frac{1}{P_c^4M^{1-r_s}}P_{e,1}\right\rbrace$, ${\rm Var}\left\lbrace \frac{1}{P_c^4M^{1-r_s}}P_{e,2}\right\rbrace$, and ${\rm Var}\left\lbrace \frac{1}{P_c^4M^{1-r_s}}P_{e,3}\right\rbrace$ is no larger than $1/M$. \end{lemma} \begin{proof} The variance of $P_{i,e}\frac{K-1}{P_c^4M^{1-r_s}}+\frac{1}{PP_c^4M^{1-r_s}}P_{n,e}+\frac{1}{P_c^4M^{1-r_s}}( P_{e,1}+P_{e,2}+P_{e,3})$ is the summation of two parts: the variances of the individual terms, and the covariances of every pair of terms. Now, we will prove that if the variance of each term scales no larger than $1/M$, their covariances also scale no larger than $1/M$. To make it general and clear, we define $Y=\sum_{n=1}^N X_n$, where $N$ is a finite integer and the $X_n$'s are random variables. Without loss of generality, we assume that ${\rm Var}\{X_1\}$ has the highest scale among all the ${\rm Var}\{X_n\}$'s and ${\rm Var}\{X_1\}={\mathcal O}(1/M^\alpha)$, where $\alpha\ge 1$. The variance of $Y$ is $${\rm Var}\{Y\}=\sum_{n=1}^N{\rm Var}\{X_n\}+\sum_{i\neq j}{\rm Cov}\{X_i,X_j\}.$$ By the definition of covariance, $\sum_{i\neq j}{\rm Cov}\{X_i,X_j\}$ takes its maximum value when the $X_n$'s are linearly correlated, i.e., $X_1=X_2/a_2=X_3/a_3\dots=X_N/a_N$. In this case, we can obtain that $$\sum_{i\neq j}{\rm Cov}\{X_i,X_j\}={\rm Var} \{X_1\}\sum_{i\neq j}a_ia_j,$$ where we have defined $a_1=1$. As ${\rm Var}\{X_1\}$ has the highest scale, each $a_n$ scales no higher than ${\mathcal O}(1)$; that is, there exist constants $c_n$ such that $a_n \le c_n$. Thus $\sum_{i\neq j}{\rm Cov}\{X_i,X_j\}={\mathcal O}(1/M^{\alpha})$, and consequently ${\rm Var}\{Y\}$ scales no higher than $1/M^\alpha$.
\end{proof} Given Lemma \ref{lemma:var}, we only need to find the condition for the variances of $(K-1)P_{i,e}/(P_c^4M^{1-r_s})$, $P_{n,e}/(PP_c^4M^{1-r_s})$, $P_{e,1}/(P_c^4M^{1-r_s})$, $P_{e,2}/(P_c^4M^{1-r_s})$, and $P_{e,3}/(P_c^4M^{1-r_s})$ to scale no larger than $1/M$. Using the results on the variances of the SINR components, the variances of these terms can be obtained as \begin{eqnarray*} &&{\rm Var}\{\frac{K-1}{P_c^4M^{1-r_s}}P_{i,e}\}=\frac{(K-1)^2}{P_c^2M^{2-2r_s}}\left(\hspace{-1mm}\frac{4}{K-1}\hspace{-1mm}+\hspace{-1mm} \frac{8+10P_c}{P_cM}\hspace{-1mm}+\hspace{-1mm}\frac{K^2+18(K-2)P_c}{(K-1)P_c^2M^2}\hspace{-1mm}\right)\sim {\mathcal O}\left(M^{-(2-2r_s-2r_c-r_k)}\right), \\ &&{\rm Var}\{\frac{1}{PP_c^4M^{1-r_s}}P_{n,e}\}=\frac{\frac{2}{P_c^3}+\frac{5}{P_c^2}-\frac{2}{P_c}}{M^{3-2r_s}P^2}\sim {\mathcal O}\left(M^{-(3-2r_s-3r_c-2r_p)}\right), \\ &&{\rm Var}\{\frac{1}{P_c^4M^{1-r_s}}P_{e,1}\}=\frac{3K}{M^{4-2r_s}}(\frac{1}{P_c}-1)^4 \sim {\mathcal O}\left(M^{-(4-2r_s-4r_c-r_k)}\right), \\ &&{\rm Var}\{\frac{1}{P_c^4M^{1-r_s}}P_{e,2}\}={\rm Var}\{\frac{1}{P_c^4M^{1-r_s}}P_{e,3}\}=\frac{1}{M^{2-2r_s}}(\frac{1}{P_c}-1)^2 \sim {\mathcal O}\left(M^{-(2-2r_s-2r_c)}\right), \end{eqnarray*} where the scaling behaviour at the end of each line is obtained from the definitions of the scaling exponents in (\ref{exponents-def}) and by considering the constraints in (\ref{cond-scaling}). Then, we can see that the condition for the scaling order of each term to be no higher than $1/M$ is that both of the following constraints are satisfied: \begin{equation} r_k+2r_c+2r_s\le 1,\quad 2r_p+3r_c+2r_s \le 2. \label{eq:cond_2} \end{equation} Combining (\ref{cond-scaling}) and (\ref{eq:cond_2}), we get the sufficient condition in (\ref{eq:suff-con}) for the SINR to be asymptotically deterministic. \subsection{Proof of Proposition \ref{pro:linearSINR}} \label{app-C} A linearly increasing SINR means that the SINR scaling exponent is 1, i.e., $r_s=1$. Thus the SINR can be formulated as \begin{equation*} {\rm SINR}_{i,e}= M\frac{P_{s,e}/P_c^4 }{P_{i,e}\frac{K-1}{P_c^4}+\frac{1}{PP_c^4}P_{n,e}+\frac{1}{P_c^4}( P_{e,1}+P_{e,2}+P_{e,3})+\frac{K(1+\frac{K}{MP_c}+\frac{1}{PP_cM})}{QP_c}}. \end{equation*} From the SINR scaling law in (\ref{SNR-scaling}), we can see that the sufficient and necessary condition for $r_s=1$ is $r_c=r_p=r_k=r_q=0$ (note that $r_c,r_p,r_q,r_k \in [0,1]$). With these parameter values, we can calculate that the SCVs of $P_{s,e}/P_c^4$ and $P_{n,e}/(PP_c^4)$ scale as $1/M$. Therefore, they are asymptotically deterministic and can be approximated by their mean values. On the other hand, the SCVs of $(K-1)P_{i,e}/P_c^4$, $P_{e,1}/P_c^4$, $P_{e,2}/P_c^4$, and $P_{e,3}/P_c^4$ are constant. We analyse their behaviour next. $${\rm Var}\left\{\frac{K-1}{P_c^4}P_{i,e}\right\}\ge \frac{4(K-1)}{P_c^2}\ge 4(K-1) \left(\frac{1}{P_c}-1\right)^2=4(K-1){\rm Var}\left\{\frac{P_{e,2}}{P_c^4}\right\}.$$ Also, $P_{e,2}$ and $P_{e,3}$ have the same distribution. As we mainly consider the non-trivial case $K\ge 3$, we have ${\rm Var}\{(K-1)P_{i,e}/P_c^4\}\gg {\rm Var}\{P_{e,2}/P_c^4\},\,{\rm Var}\{P_{e,3}/P_c^4\}$, especially when the CSI quality $P_c$ is high. Besides, the mean of $P_{e,1}/P_c^4$ scales as $1/M$, and its variance scales as $1/M^2$. Thus the variance of this term is also much smaller than that of $(K-1)P_{i,e}/P_c^4$. Therefore, $(K-1)P_{i,e}/P_c^4$ dominates the random behaviour of the SINR and the other terms can be approximated by their mean values.
Thus the SINR approximation in (\ref{eq:SINR_app_high}) is obtained, where only the dominant terms in $M$ are kept. \subsection{Proof of Proposition \ref{pro:pdf}} \label{app-D} When $K=2$, $P_{i,e} =\left|\frac{{\bf g}_i\hat{\bf g}_i^H\hat{\bf f}_i^H{\bf f}_k}{\sqrt{M^3}}+\frac{{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H{\bf f}_k}{\sqrt{M^3}}\right|^2/(K-1).$ Then, using the CLT, $P_{i,e}$ has an exponential distribution with parameter $1/d_e$, and its PDF can be approximated as $f_{P_{i,e}}(y)\approx e^{-y/d_e}/d_e$, which is the same as (\ref{cf-pdf-e}) for $K=2$. Now, we work on the more complicated case of $K\ge 3$. Firstly, \begin{eqnarray*} &&\hspace{-5mm}\frac{|{\bf g}_i\hat{\bf G}^{\hspace{-0.5mm}H}\hspace{-0.5mm}\hat{\bf F}^{\hspace{-0.5mm}H}{\bf f}_k|^2}{M^3}\hspace{-1mm} =\left|\frac{{\bf g}_i\hat{\bf g}_i^H\hat{\bf f}_i^H{\bf f}_k}{\sqrt{M^3}}\hspace{-1mm}+\hspace{-1mm}\frac{{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H{\bf f}_k}{\sqrt{M^3}}\hspace{-1mm}+\hspace{-2mm}\sum_{n=1,n\neq i,k}^{K}\hspace{-2mm}\frac{{\bf g}_i\hat{\bf g}_n^H\hat{\bf f}_n^H{\bf f}_k}{\sqrt{M^3}}\right|^2. \end{eqnarray*} With the help of the CLT, as $M\gg1$, $\frac{{\bf g}_i\hat{\bf g}_i^H\hat{\bf f}_i^H{\bf f}_k}{\sqrt{M^3}}$ is approximately distributed as $\mathcal{CN}(0,P_c^3+\frac{P_c^2}{M})$, and $\frac{{\bf g}_i\hat{\bf g}_n^H\hat{\bf f}_n^H{\bf f}_k}{\sqrt{M^3}}$ is approximately distributed as $\mathcal{CN}(0,\frac{P_c^2}{M})$. We can further show that the covariances between $\frac{{\bf g}_i\hat{\bf g}_i^H\hat{\bf f}_i^H{\bf f}_k}{\sqrt{M^3}}$, $\frac{{\bf g}_i\hat{\bf g}_n^H\hat{\bf f}_n^H{\bf f}_k}{\sqrt{M^3}}$, and $\frac{{\bf g}_i\hat{\bf g}_k^H\hat{\bf f}_k^H{\bf f}_k}{\sqrt{M^3}}$ are zero; thus they are uncorrelated. For tractable analysis, we assume independence, as they are Gaussian distributed. We can now conclude that $\frac{|{\bf g}_i\hat{\bf G}^{H}\hat{\bf F}^{H}{\bf f}_k|^2}{(K-1)M^3}$ has a gamma distribution with shape parameter $1$ and scale parameter $\frac{P_c^3}{K-1}\left(2+\frac{K}{MP_c}\right)$, which is also defined as $d_e$. Using the CLT, the covariance between $\frac{|{\bf g}_i\hat{\bf G}^{H}\hat{\bf F}^{H}{\bf f}_k|^2}{(K-1)M^3}$ and $\frac{|{\bf g}_i\hat{\bf G}^{H}\hat{\bf F}^{H}{\bf f}_l|^2}{(K-1)M^3}$ ($k\neq l$) can be derived as \begin{equation} {\rm Cov}=\frac{4P_c^5+10P_c^6}{(K-1)^2M}+\frac{18P_c^5+(2K-4)P_c^6}{(K-1)^2M^2},\label{eq:Inter_cov-e} \end{equation} where the proof is omitted to save space. The correlation coefficient between the two is then \begin{eqnarray} \rho_{jl}&=&\hspace{-1mm}\frac{{\rm Cov}\left\{\frac{|{\bf g}_i\hat{\bf G}^{H}\hat{\bf F}^{H}{\bf f}_k|^2}{(K-1)M^3},\frac{|{\bf g}_i\hat{\bf G}^{H}\hat{\bf F}^{H}{\bf f}_l|^2}{(K-1)M^3}\hspace{-1mm}\right\}}{\sqrt{\mathrm{Var} \Big\{\frac{|{\bf g}_i\hat{\bf G}^{H}\hat{\bf F}^{H}{\bf f}_k|^2}{(K-1)M^3}\Big\}\mathrm{Var}\Big\{\frac{|{\bf g}_i\hat{\bf G}^{H}\hat{\bf F}^{H}{\bf f}_l|^2}{(K-1)M^3}\Big\}}}\hspace{-1mm}\approx \frac{1}{M}\frac{\frac{4}{P_c}+10}{(2+\frac{K}{MP_c})^2}.\label{eq:correlation-e} \end{eqnarray} It equals $\rho_e^2$ based on the definition in (\ref{eq:miu}). Thus $P_{i,e}$ is a summation of $K-1$ correlated random variables following the same gamma distribution.
From Corollary 1 of \cite{refe2}, the PDF of $P_{i,e}$ is \begin{equation}\label{pdfproof} f_{P_{i,e}}(y)=\prod_{i=1}^{K-1}\Big(\frac{\sigma_1}{\sigma_i}\Big)\sum_{j=0}^\infty \frac{\delta_jy^{K+j-2}e^{-y/\sigma_1}}{\sigma_1^{K+j-1}\Gamma(K+j-1)}, \end{equation} where $\sigma_1\le\sigma_2\le\cdots\le \sigma_{K-1} $ are the ordered eigenvalues of the $(K-1)\times (K-1)$ matrix $\bf A$, whose diagonal entries are $d_e$ and off-diagonal entries are $d_e\rho_e$, and the $\delta_j$'s are defined iteratively as \begin{eqnarray} &&\delta_0 \triangleq1, \quad \delta_{j+1} \triangleq \frac{1}{j+1}\sum\limits_{m=1}^{j+1}\left[\sum\limits_{n=1}^{K-1} \Bigg(1-\frac{\sigma_1}{\sigma_n}\Bigg)^m\right]\delta_{j+1-m}.\label{deltai+1} \end{eqnarray} As $\bf A$ is a circulant matrix whose off-diagonal entries are all equal, its eigenvalues can be calculated as \begin{eqnarray} &&\sigma_1=\cdots =\sigma_{K-2}=d_e-d_e\rho_e,\label{lamda1} \hspace{1cm}\sigma_{K-1}\hspace{-1mm}=d_e+(K-2)d_e\rho_e. \label{lamdaK} \end{eqnarray} Then we can show that \begin{equation}\label{deltai} \delta_j = \Bigg(\frac{(K-1)\rho_e}{1+(K-2)\rho_e}\Bigg)^j=\left(\frac{b_e}{b_e+c_e}\right)^j. \end{equation} Substituting \eqref{lamda1} and \eqref{deltai} into \eqref{pdfproof}, we can get the PDF of $P_{i,e}$ as in (\ref{equ4}) in Proposition \ref{pro:pdf}. Notice that \setlength{\arraycolsep}{1pt} \begin{eqnarray*} &&\hspace{-2mm}\sum_{i=0}^{\infty}\left(\frac{b_e}{b_e+c_e}\right)^i\phi\hspace{-0.5mm}\left(y;K\hspace{-0.5mm}+\hspace{-0.5mm}i\hspace{-0.5mm}-\hspace{-0.5mm}1,d_ec_e\right)=\left(\frac{b_e}{b_e+c_e}\right)^{\hspace{-0.5mm}-\hspace{-0.5mm}(K\hspace{-0.5mm}-\hspace{-0.5mm}2)}\hspace{-2mm}\frac{e^{-\frac{y}{d_ec_e}}}{d_ec_e} \hspace{-1mm}\left(\hspace{-1mm}\sum_{n=0}^\infty \hspace{-2mm}-\hspace{-2mm} \sum_{n=0}^{K-3}\right)\hspace{-1mm} \left(\frac{b_e}{d_ec_e(b_e+c_e)}\right)^{\hspace{-1mm} n}\hspace{-2mm}\frac{y^n}{n!}. \end{eqnarray*} By straightforward calculations, we can then obtain the closed-form PDF of $P_{i,e}$ in (\ref{cf-pdf-e}).
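(The eigenvalues in \eqref{lamda1} and the closed form \eqref{deltai} for the recursion \eqref{deltai+1} can be cross-checked numerically; the following Python sketch uses arbitrary test values of $K$, $d_e$ and $\rho_e$.)
\begin{verbatim}
import numpy as np

K, d_e, rho_e = 6, 1.3, 0.15   # arbitrary test values

# The (K-1)x(K-1) matrix A: d_e on the diagonal, d_e*rho_e off-diagonal.
n = K - 1
A = d_e * (rho_e * np.ones((n, n)) + (1 - rho_e) * np.eye(n))
sigma = np.sort(np.linalg.eigvalsh(A))
print(sigma)   # K-2 copies of d_e(1-rho_e) and one d_e(1+(K-2)rho_e)

# delta_j recursion vs. the closed form ((K-1)rho_e/(1+(K-2)rho_e))^j.
s1 = sigma[0]
delta = [1.0]
for j in range(10):
    delta.append(sum(np.sum((1 - s1 / sigma) ** m) * delta[j + 1 - m]
                     for m in range(1, j + 2)) / (j + 1))
q = (K - 1) * rho_e / (1 + (K - 2) * rho_e)
print(np.allclose(delta, [q ** j for j in range(11)]))   # True
\end{verbatim}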
{ "redpajama_set_name": "RedPajamaArXiv" }
\chapter{Introduction} The principal chiral, or ${\cal G}\times{\cal G}$, sigma models are two-dimensional quantum field theories that are integrable at the quantum level. The fact that the theories are integrable means that their scattering matrices are factorizable. Such $S$-matrices have been conjectured for all the theories corresponding to the classical Lie algebras [\Ref{ORW}]. Expressions for the complete $S$-matrices for SU($N$) can be found in [\Ref{ORW}] and for Sp($2N$) in [\Ref{TH}]. Not all the $S$-matrix elements are known in the case of SO($N$). It is an outstanding problem to prove from first principles that the $S$-matrices actually do describe the lagrangian field theories. This is especially important because a given factorizable $S$-matrix is ambiguous since it may always be multiplied by CDD factors. Connecting the $S$-matrix picture with the lagrangian picture is highly non-trivial since the masses are generated dynamically and the theories are asymptotically free. In a series of papers such non-trivial tests have been applied to various integrable models: the Gross-Neveu model [\Ref{FNW}], the O($N$) sigma model [\Ref{HMN},\Ref{HN}] and the SU($N$) principal chiral sigma model [\Ref{BNNW},\Ref{W}] (the SU(2) case was also considered in [\Ref{PW},\Ref{Y}]), using a technique known as the Thermodynamic Bethe Ansatz (TBA). The central idea is to couple the theory to a particular conserved current and then compute the response of the free-energy for large values of the source in the regime where conventional perturbation theory is valid. The same quantity can then be computed directly from the $S$-matrix using the TBA equations at zero temperature, in which the coupling to the source appears as a chemical potential. By comparing the two expressions a non-trivial test of the $S$-matrix is obtained, as well as an exact expression for the mass-gap (the ratio of the physical mass to the $\Lambda$-parameter). In general the solution of the TBA equations at zero temperature coupled to an arbitrary chemical potential would be a formidable problem, even in the ultra-violet limit, since the equations are a set of coupled integral equations. However, by a judicious choice of the source the state of the system contains just one species of particle, which undergoes elastic scattering (the particle is the highest weight state of a multiplet). The TBA equations then reduce to a single integral equation which can be solved in the ultra-violet limit using generalized Wiener-Hopf techniques [\Ref{HMN},\Ref{JNW}] (for a summary see the appendix of [\Ref{FNW}]). In this paper we extend the results of [\Ref{BNNW}] to the principal chiral models for all the classical Lie algebras and arrive at a universal formula for the exact mass-gap. We also show that by tuning the source we can force the system into inequivalent ground-states which each consist of a single type of particle. The fact that the ground-states are pure for particular values of the source is presented as a conjecture whose ultimate justification comes from the agreement with perturbation theory; however, it should be possible to prove this fact directly from the full TBA equations of the models. The principal chiral models are described by a lagrangian density $$ {\cal L}_0=-{1\over\lambda^2}{\fam0\eightrm Tr}\left(g^{-1}\partial_\mu g\cdot g^{-1}\partial^\mu g\right), \efr where $g$ is a group-valued field.
The theory is invariant under a global symmetry corresponding to left and right multiplication by the group $g\mapsto h_{\fam0\eightrm L}gh_{\fam0\eightrm R}^{-1}$, $h_{\fam0\eightrm L,R}\in{\cal G}$. $\lambda$ is a dimensionless coupling constant. The $S$-matrices that have been conjectured for the model describe the scattering of $r={\fam0\eightrm rank}({\cal G})$ particles which are associated to the fundamental representations of ${\cal G}$. The masses of the particles, except for those associated to the spinors of SO($N$), can be described by the universal formula [\Ref{ORW}] $$ m_a=m{\sin(\pi a/g)\over\sin(\pi/g)},\qquad a=1,2,\ldots, \nameformula{MASS} where $g$ is the dual Coxeter number of the Lie algebra associated to the group: for ${\fam0\eightrm A}_r$, ${\fam0\eightrm B}_r$, ${\fam0\eightrm C}_r$ and ${\fam0\eightrm D}_r$ it is $r+1$, $2r-1$, $2(r+1)$ and $2(r-1)$, respectively. The masses of the spinors of ${\fam0\eightrm B}_r$ and ${\fam0\eightrm D}_r$ are $$ {\fam0\eightrm B}_r:\quad m_r={m\over2\sin(\pi/g)},\quad {\fam0\eightrm D}_r:\quad m_{r-1}=m_r={m\over2\sin(\pi/g)}. \efr The particle with mass $m_a$ transforms in the following representation of ${\cal G}\times{\cal G}$ [\Ref{ORW}]: $$\eqalign{ {\fam0\eightrm A}_r:\qquad &W_a=V_a\otimes V_a,\quad a=1,2,\ldots,r,\cr {\fam0\eightrm B}_r:\qquad &W_a=\sum_{k=0}^{a-2k\geq0}V_{a-2k}\otimes\sum_{j=0}^{a-2j\geq0}V_{a-2j},\quad a=1,2,\ldots,r-1,\cr &W_r=V_r\otimes V_r,\cr {\fam0\eightrm C}_r:\qquad &W_a=V_a\otimes V_a,\quad a=1,2,\ldots,r,\cr {\fam0\eightrm D}_r:\qquad &W_a=\sum_{k=0}^{a-2k\geq0}V_{a-2k}\otimes \sum_{j=0}^{a-2j\geq0}V_{a-2j},\quad a=1,2,\ldots,r-2,\cr &W_{r-1}=V_{r-1}\otimes V_{r-1},\qquad W_r=V_r\otimes V_r,\cr} \efr where $V_a$ is the $a^{\fam0\eightrm th}$ fundamental representation of ${\cal G}$ with the standard labelling of the Dynkin diagram [\Ref{ORW}]. Notice that although the particles are associated to the fundamental representations, the representations $W_a$ are sometimes reducible in the case of SO($N$). Fortunately, we shall not require the expression for the complete $S$-matrices but only those elements for the particles of the highest weight in each multiplet (so with quantum numbers $|\omega_a,\omega_a\rangle$ where the $\omega_a$'s are the fundamental weights). The $S$-matrix amongst these states is purely elastic and its elements can be extracted from [\Ref{ORW}]: $$ S_{ab}(\theta)=\exp\left\{i\pi\delta_{ab}+ 2i\int_0^\infty{dx\over x}\sin(\theta x)\left[R_{ab} (x)-\delta_{ab}\right]\right\}, \nameformula{SM} where $\theta$ is the rapidity difference of the incoming particles and the kernel $R_{ab}(x)$ has the following form for all the particles except the spinors: $$\eqalign{ {\fam0\eightrm A}_r:\qquad&R_{ab}(x)={2\sinh\left({{\fam0\eightrm min}(a,b)\over r+1}\pi x\right) \sinh\left({r+1-{\fam0\eightrm max}(a,b)\over r+1}\pi x\right)\over\sinh\left({1\over r+1}\pi x\right)},\cr {\fam0\eightrm B}_r,{\fam0\eightrm C}_r,{\fam0\eightrm D}_r:\qquad&R_{ab}(x)={2\sinh\left( {{\fam0\eightrm min}(a,b)\over g}\pi x\right)\cosh\left( {g-2{\fam0\eightrm max}(a,b)\over2g}\pi x\right)\over\cosh\left({1\over2}\pi x\right)}.\cr} \efr The $S$-matrix elements involving the spinors can also be deduced from the formulas of [\Ref{ORW}] but we shall not require them.
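For illustration, the mass spectrum above is easily tabulated; the following Python sketch (the ranks chosen are arbitrary examples) implements \MASS\ together with the spinor masses of ${\fam0\eightrm B}_r$ and ${\fam0\eightrm D}_r$.
\begin{verbatim}
import numpy as np

def dual_coxeter(algebra, r):
    # Dual Coxeter numbers g for the classical algebras.
    return {'A': r + 1, 'B': 2 * r - 1, 'C': 2 * (r + 1), 'D': 2 * (r - 1)}[algebra]

def masses(algebra, r, m=1.0):
    # m_a = m sin(pi a/g) / sin(pi/g); spinors of B_r, D_r treated separately.
    g = dual_coxeter(algebra, r)
    spec = {a: m * np.sin(np.pi * a / g) / np.sin(np.pi / g) for a in range(1, r + 1)}
    if algebra == 'B':
        spec[r] = m / (2 * np.sin(np.pi / g))
    elif algebra == 'D':
        spec[r - 1] = spec[r] = m / (2 * np.sin(np.pi / g))
    return spec

print(masses('A', 4))
print(masses('D', 5))
\end{verbatim}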
In the following two sections we calculate the free-energy in the presence of a source coupling to the conserved charge of the ${\cal G}\times{\cal G}$ symmetry in two ways: from the lagrangian using perturbation theory and from the $S$-matrix using the thermodynamic Bethe Ansatz. \chapter{Free-energy in perturbation theory} The conserved currents of the left and right symmetry are $J^{\fam0\eightrm L}_\mu=g^{-1}\partial_\mu g$ and $J^{\fam0\eightrm R}_\mu=(\partial_\mu g)g^{-1}$. We wish to modify the hamiltonian of the theory by introducing a coupling to the conserved charge of the diagonal action of the symmetry. At the lagrangian level this is described by introducing the ``covariant derivative'' [\Ref{BNNW}]: $$ D_\mu g=\partial_\mu g-ih\delta_{\mu0}(Qg+gQ), \efr where $Q$ is a constant element of the Lie algebra. The lagrangian density in the presence of the source is $$ {\cal L}={\cal L}_0-{2hi\over\lambda^2}{\fam0\eightrm Tr}\left((g^{-1}Q+Qg^{-1})\partial_0g\right)-{2h^2\over\lambda^2}{\fam0\eightrm Tr}\left(Q^2+g^{-1}QgQ\right). \nameformula{LAGS} The quantity we will calculate is $\delta f(h)=f(h)-f(0)$ where $f(h)$ is the free-energy per unit volume in the presence of the source. We shall perform a perturbative calculation in the running coupling $\lambda(h)$, which runs to zero in the ultra-violet regime (large $h$); this is hence the regime where perturbation theory is reliable. We shall only perform the computation to one loop; however, this will be sufficient to provide a non-trivial check of the $S$-matrix and allow for the evaluation of the exact mass-gap. An explicit basis for $g$ is provided by $$ g=\exp\left\{i\sum_{\alpha}n^{(\alpha)}E_\alpha+in\cdot H\right\}, \efr where the fields satisfy the reality condition $n^{(\alpha)*}=n^{(-\alpha)}$ and $n^*=n$, and the sum is over all the roots of the algebra. In the above $E_\alpha$ is the usual step generator associated to a root $\alpha$ and $H$ is the generator of the Cartan subalgebra. In what follows we choose a normalization in which the roots of the simply-laced algebras have length-squared 2 and the long roots of ${\fam0\eightrm B}_r$ and ${\fam0\eightrm C}_r$ have length-squared $2$ and $4$, respectively. Without loss of generality we take $Q$ to be in the Cartan subalgebra so $Q=q\cdot H$, where $q$ is some $r$-dimensional vector. The quadratic part of the (euclidean) lagrangian density \LAGS\ is simply $$ {\cal L}=-{4h^2\over\lambda^2}q^2+ {1\over\lambda^2}\sum_{\alpha>0}\left\{\partial_\mu n^{(\alpha)} \partial^\mu n^{(-\alpha)}+h^2(\alpha\cdot q)^2n^{(\alpha)}n^{(-\alpha)}\right\}, \efr where the sum is over the positive roots and for simplicity we have changed the normalization of some of the $n^{(\alpha)}$'s. Notice that the Cartan subalgebra fields $n$ are completely decoupled to this order in the loop expansion. The tree level contribution to $\delta f(h)$ is simply $$ \delta f(h)_0=-{4h^2\over\lambda^2}q^2. \efr To evaluate the one-loop contribution we use dimensional regularization. Using standard methods one finds $$ \delta f(h)_1=-{h^2g\over2\pi\epsilon}q^2 +{h^2\over4\pi}\sum_{\alpha>0}(\alpha\cdot q)^2\left\{1-\gamma_{\fam0\eightrm E}+\ln4\pi-\ln\left(h^2(\alpha\cdot q)^2/\mu^2\right)\right\}+\cdots, \efr where $\epsilon=d-2$, $\mu$ is the usual mass parameter of dimensional regularization and $g$ is the dual Coxeter number as before.
To cancel the divergence in the $\overline{\fam0\eightrm MS}$-scheme we add to the lagrangian a counter-term $$ \delta{\cal L}= {h^2g\over2\pi\epsilon}q^2+{h^2g\over4\pi} q^2(\gamma_{\fam0\eightrm E}-\ln4\pi). \efr The quantity $\delta f(h)$ is renormalization group invariant when $\lambda$ runs with $\mu$. We can use this freedom to set $\mu=h$. The way that the coupling constant runs with $h$ is determined from the form of the counter-term. One finds $$ h{\partial\over\partial h}\lambda^2=-{g\over8\pi}\lambda^4-\beta_1 \lambda^6-{\cal O}(\lambda^8), \nameformula{BFE} although the second universal coefficient of the beta-function $\beta_1$ is not determined at the one-loop level. The expression for the first coefficient of the beta-function $\beta_0=g/8\pi$ agrees with [\Ref{MS}]. The expression for the free-energy is then $$ \delta f(h)=-{4h^2\over\lambda^2(h)}q^2-{h^2\over4\pi}\sum_{\alpha>0} (q\cdot\alpha)^2\left[\ln(q\cdot\alpha)^2-1\right]+{\cal O}(\lambda^2), \nameformula{HH} where the explicit $h$ dependence is obtained by expressing the running coupling in terms of the $\Lambda$-parameter by solving \BFE: $$ {1\over\lambda^2(h)}=\beta_0\ln{h\over\Lambda_{\overline{\fam0\eightrm MS}}}+{\beta_1\over\beta_0} \ln\ln{h\over\Lambda_{\overline{\fam0\eightrm MS}}}+{\cal O}\left(1\over\ln{h\over\Lambda_ {\overline{\fam0\eightrm MS}}}\right), \efr where $\beta_0=g/8\pi$. Equation \HH\ is the generalization to all the classical Lie algebras of equation (17) of [\Ref{BNNW}] for ${\fam0\eightrm A}_r$. To compare with the expression for the free-energy from the TBA calculation we set $q=\omega_a/(2\omega_a^2)$ (excluding the spinors of SO($N$)), and write $$ \delta f(h)=-{h^2\over4}k^2_a\left[\ln{h\over\Lambda_{\overline{\fam0\eightrm MS}}}+A_a+{\beta_1\over\beta_0^2}\ln\ln{h\over\Lambda_{\overline{\fam0\eightrm MS}}}+{\cal O}\left({1\over\ln{h\over\Lambda_{\overline{\fam0\eightrm MS}}}}\right) \right]. \nameformula{PFE} By explicit computation we find for ${\fam0\eightrm A}_r$ that $$ k_a^2={(r+1)^2\over 2\pi a(r+1-a)},\qquad A_a=\ln\left({r+1\over2a(r+1-a)}\right)-{1\over2}, \nameformula{KA} and for the other algebras a universal form applies: $$ k^2_a={g\over2\pi a},\qquad A_a=-\ln a-{1\over2}-{d_1-2a\over g}\ln2, \nameformula{KR} where the quantity $d_1$ is the dimension of the vector representation of the algebra, i.e. $r+1$, $2r+1$, $2r$ and $2r$ for ${\fam0\eightrm A}_r$, ${\fam0\eightrm B}_r$, ${\fam0\eightrm C}_r$ and ${\fam0\eightrm D}_r$, respectively. \chapter{Free-energy from the $S$-matrix} In this section we will calculate $\delta f(h)$ in the ultra-violet limit $h\gg m$, directly from the $S$-matrix. The technique is known as the Thermodynamic Bethe Ansatz (TBA) and in its most general form it allows one to calculate the dependence of the free-energy of a one-dimensional gas of particles, described by a factorizable $S$-matrix, on the temperature and on a chemical potential. The free-energy is given in terms of a set of functions---in general infinite in number---which satisfy a set of coupled integral equations (the TBA equations). For our application we are working on the plane and hence at zero temperature. The coupling of the theory to the source in \LAGS\ leads to a particular form for the chemical potential.
The one-particle states are labelled by two weight vectors $|\mu,\nu\rangle$ (as well as the rapidity) and they can be chosen to be eigenstates of the charge $Q$ with $Q|\mu,\nu\rangle=q\cdot(\mu+\nu)|\mu,\nu\rangle$. The full TBA equations for the principal chiral models are known [\Ref{ORW}]; however, for particular choices of $q$, extending the philosophy of [\Ref{FNW}-\Ref{BNNW}], we conjecture that only one particle contributes to the ground-state and the infinite set of TBA equations reduces to a single equation. The precise formulation of our conjecture is that when $q=\omega_a/(2\omega_a^2)$ only the unique particle with the highest charge/mass ratio contributes to the ground-state, i.e. the particle $|\omega_a,\omega_a\rangle$ which is the highest weight state of the multiplet $W_a$. This particle has $Q$ eigenvalue 1. However, we exclude the spinor particles of SO($N$) from this conjecture. We shall find that this proposal leads to a result which is perfectly consistent with the perturbative calculation. Notwithstanding this, it should be possible to prove the conjecture directly from the full TBA equations. The expression for the free-energy with $q=\omega_a/(2\omega_a^2)$ is then given in terms of a quantity $\epsilon(\theta)$ which satisfies the integral equation: $$ \epsilon(\theta)-\int_{-B}^Bd\theta'\phi_a(\theta-\theta')\epsilon(\theta')=m_a\cosh\theta-h. \nameformula{TBA} The parameter $B$ is determined by the boundary condition $\epsilon(\pm B)=0$ and the kernel is given by $$ \phi_a(\theta)={1\over2\pi i}{d\over d\theta}\ln S_{aa}(\theta)=\delta(\theta)- \int_0^\infty{dx\over\pi}\cos(x\theta)R_{aa}(x), \efr where $S_{aa}(\theta)$ is the $S$-matrix element of the particle $|\omega_a,\omega_a\rangle$ with itself \SM. Once $\epsilon(\theta)$ is known the expression for the free-energy per unit volume is $$ \delta f(h)={m_a\over2\pi}\int_{-B}^Bd\theta\,\epsilon(\theta)\cosh\theta. \efr Our problem is to solve the integral equation \TBA. In general it is not possible to find the solution of such an equation in closed form; however, to compare with the perturbative result we only need to compute the free-energy in the ultra-violet regime $h\gg m$. In this limit a series solution can be found using generalized Wiener-Hopf techniques [\Ref{FNW},\Ref{HMN},\Ref{JNW}]. The first problem is to decompose the kernel $R_{aa}(x)$: $$ R_{aa}(x)={1\over G_+^{(a)}(x)G_-^{(a)}(x)}, \efr where $G_\pm^{(a)}(x)$ are analytic in the upper/lower half-planes, respectively, and $G_-^{(a)}(x)=G_+^{(a)}(-x)$. The next step in the solution technique depends upon the form of $G_+^{(a)}(x)$. For all the principal chiral models $$ G_+^{(a)}(i\xi)={k'_a\over\sqrt\xi}\left\{1-b_a\xi+{\cal O}(\xi^2)\right\}, \nameformula{GF} for constants $k'_a$ and $b_a$. So in this respect these models are of similar type to the O($N$) sigma model rather than the fermion models.
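Although we only need the ultra-violet expansion below, the boundary-value problem \TBA\ can also be solved numerically at finite $h/m_a$. The following Python sketch illustrates the standard Nystr\"om discretization combined with a bisection on $B$; the smooth kernel used here is a hypothetical stand-in for the actual $\phi_a$ (which would be computed from the Fourier integral above), so the numbers are purely illustrative.
\begin{verbatim}
import numpy as np

def phi(t):
    # Hypothetical smooth kernel standing in for phi_a(theta).
    return 0.2 * np.exp(-t**2) / np.sqrt(np.pi)

def solve_tba(B, h, m=1.0, n=400):
    # Nystrom (trapezoidal) discretization of
    # eps(th) - int_{-B}^{B} phi(th-th') eps(th') dth' = m cosh(th) - h.
    th = np.linspace(-B, B, n)
    step = th[1] - th[0]
    w = np.full(n, step); w[[0, -1]] = step / 2
    A = np.eye(n) - phi(th[:, None] - th[None, :]) * w[None, :]
    eps = np.linalg.solve(A, m * np.cosh(th) - h)
    return th, eps

def find_B(h, lo=1e-3, hi=20.0, tol=1e-10):
    # Bisection on B enforcing the boundary condition eps(+-B) = 0.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        _, eps = solve_tba(mid, h)
        lo, hi = (mid, hi) if eps[-1] < 0 else (lo, mid)
    return 0.5 * (lo + hi)

h = 5.0
B = find_B(h)
th, eps = solve_tba(B, h)
g = eps * np.cosh(th)                     # integrand of delta f, with m_a = 1
step = th[1] - th[0]
delta_f = step * (g.sum() - 0.5 * (g[0] + g[-1])) / (2 * np.pi)
print(B, delta_f)                         # delta f < 0 since eps < 0 inside (-B, B)
\end{verbatim}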
With $G_+^{(a)}(i\xi)$ of the form \GF, [\Ref{BNNW}] gives a formula for the first few terms in the expansion of the free-energy $$\eqalign{ &\delta f(h)=\cr&-{h^2\over4}k^{\prime2}_a\left[\ln{h\over m_a} +\ln\left({\sqrt{2\pi}k'_ae^{-b_a}\over G_+^{(a)}(i)}\right)-1+ {1\over2}\ln\ln{h\over m_a}+{\cal O}\left({1\over\ln{h\over\Lambda_{\overline{\fam0\eightrm MS}}}}\right) \right].\cr} \nameformula{SMFE} The explicit expressions for the decompositions are for ${\fam0\eightrm A}_r$ $$\eqalign{ G_+^{(a)}(i\xi)=&{r+1\over\sqrt{2\pi a(r+1-a)\xi}} {\Gamma\left(1+{a\over r+1}\xi\right) \Gamma\left(1+{r+1-a\over r+1}\xi\right)\over\Gamma(1+\xi)}\cr &\qquad\qquad\times\exp\left\{-\xi\left({r+1-a\over r+1}\ln {r+1-a\over r+1}+{a\over r+1}\ln{a\over r+1}\right)\right\}.\cr }\efr For the other algebras one finds the universal form $$\eqalign{ G_+^{(a)}(i\xi)=&\sqrt{g\over2\pi a\xi}{\Gamma\left(1+{a\over g}\xi\right)\Gamma\left({1\over2}+{ g-2a\over2g}\xi\right)\over\Gamma\left({1\over2}+{1\over2}\xi\right)}\cr &\qquad\qquad\times\exp\left\{ -\xi\left({a\over g}\ln{a\over g}+{g-2a\over2g}\ln{g-2a\over2 g}-{1\over2}\ln{1\over2}\right)\right\}.\cr} \efr {}From these expressions we find that $k'_a$ equals $k_a$ in \KA\ and \KR\ and for ${\fam0\eightrm A}_r$ $$ b_a={r+1-a\over r+1}\ln {r+1-a\over r+1}+{a\over r+1}\ln{a\over r+1}, \efr whilst for the other algebras $$ b_a={a\over g}\ln{a\over g}+{g-2a\over2g}\ln{g-2a\over 2g}-{1\over2}\ln{1\over2}-{2a\over g}\ln2. \efr Comparing the expression \SMFE\ with the result of the perturbative calculation \PFE\ we see that they are in perfect agreement if $m_a\propto\sin(\pi a/g)$, which is true for all the particles excluding the spinors of SO($N$) \MASS, and furthermore the expression for the mass-gap has a universal form: $$ {m\over\Lambda_{\overline{\fam0\eightrm MS}}}={g\over \sqrt{\pi e}}\exp\left\{\left({2d_1+g\over 2g}\right)\ln2\right\}\sin\left({\pi\over g}\right), \nameformula{MG} where $m$ is the mass of the vector particle, which is the lightest particle in the theory (since without loss of generality it is only necessary to consider ${\fam0\eightrm B}_r$ for $r\geq3$ and ${\fam0\eightrm D}_r$ for $r\geq4$). In addition the $S$-matrix calculation implies the universal ratio $\beta_1/\beta_0^2=1/2$, in exact agreement with the perturbative calculation of [\Ref{MS}], a fact first pointed out for the SU($r+1$) theories in [\Ref{W},\Ref{PW}]. The expression for the mass-gap \MG\ reduces to that of [\Ref{BNNW}] for ${\fam0\eightrm A}_r$. The explicit expressions for each group/algebra are $$\eqalign{ {\fam0\eightrm SU}(r+1),{\fam0\eightrm A}_r:\qquad &{m\over\Lambda_{\overline{\fam0\eightrm MS}}}= {r+1\over\sqrt{\pi e}}2^{3/2}\sin\left({\pi\over r+1}\right),\cr {\fam0\eightrm SO}(2r+1),{\fam0\eightrm B}_r:\qquad &{m\over\Lambda_{\overline{\fam0\eightrm MS}}}={2r-1\over\sqrt{\pi e}}2^{(6r+1)/(4r-2)} \sin\left({\pi\over2r-1}\right),\cr {\fam0\eightrm Sp}(2r),{\fam0\eightrm C}_r:\qquad &{m\over\Lambda_{\overline{\fam0\eightrm MS}}}={2r+2\over\sqrt{\pi e}}2^{(3r+1)/(2r+2)} \sin\left({\pi\over 2r+2}\right),\cr {\fam0\eightrm SO}(2r),{\fam0\eightrm D}_r:\qquad &{m\over\Lambda_{\overline{\fam0\eightrm MS}}}={2r-2\over\sqrt{\pi e}}2^{(3r-1)/(2r-2)} \sin\left({\pi\over2r-2}\right).\cr }\efr The fact that the perturbative result and the $S$-matrix result are consistent provides strong grounds for believing that our conjecture about the structure of the ground-states is correct.
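For completeness, a short Python sketch (with arbitrary ranks) confirms that the universal formula \MG\ reproduces each of the explicit expressions just listed.
\begin{verbatim}
import numpy as np

def mass_gap(algebra, r):
    # Universal formula: m/Lambda = (g/sqrt(pi e)) 2^{(2 d1 + g)/(2g)} sin(pi/g).
    g  = {'A': r + 1, 'B': 2 * r - 1, 'C': 2 * r + 2, 'D': 2 * r - 2}[algebra]
    d1 = {'A': r + 1, 'B': 2 * r + 1, 'C': 2 * r,     'D': 2 * r}[algebra]
    return g / np.sqrt(np.pi * np.e) * 2 ** ((2 * d1 + g) / (2 * g)) * np.sin(np.pi / g)

def mass_gap_explicit(algebra, r):
    # The per-algebra expressions quoted in the text.
    if algebra == 'A':
        return (r + 1) / np.sqrt(np.pi * np.e) * 2 ** 1.5 * np.sin(np.pi / (r + 1))
    if algebra == 'B':
        return (2 * r - 1) / np.sqrt(np.pi * np.e) * 2 ** ((6 * r + 1) / (4 * r - 2)) * np.sin(np.pi / (2 * r - 1))
    if algebra == 'C':
        return (2 * r + 2) / np.sqrt(np.pi * np.e) * 2 ** ((3 * r + 1) / (2 * r + 2)) * np.sin(np.pi / (2 * r + 2))
    return (2 * r - 2) / np.sqrt(np.pi * np.e) * 2 ** ((3 * r - 1) / (2 * r - 2)) * np.sin(np.pi / (2 * r - 2))

for alg, r in [('A', 5), ('B', 4), ('C', 3), ('D', 6)]:
    print(alg, r, np.isclose(mass_gap(alg, r), mass_gap_explicit(alg, r)))  # all True
\end{verbatim}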
As has been pointed out in [\Ref{BNNW}], the fact that the TBA calculation reproduces the universal ratio $\beta_1/\beta_0^2$ of the beta-function is highly non-trivial. In addition, if the $S$-matrix were modified with CDD factors then the thermodynamics would be drastically altered and the perfect agreement with the perturbative result would be destroyed. It would be interesting to compare these results with lattice simulations. I would like to thank Tim Morris and Michel Bauer for very useful discussions. \references \beginref \Rref{ORW}{E. Ogievetsky, N. Reshetikhin and P. Wiegmann, Nucl. Phys. {\fam\bffam\eightbf B280} (1987) 45} \Rref{FNW}{P. Forg\'acs, F. Niedermayer and P. Weisz, Nucl. Phys. {\fam\bffam\eightbf B367} (1991) 123} \Rref{MS}{A. McKane and M. Stone, Nucl. Phys. {\fam\bffam\eightbf B163} (1980) 169} \Rref{Y}{S.-K. Yang, Nucl. Phys. {\fam\bffam\eightbf B267} (1986) 290} \Rref{HN}{P. Hasenfratz and F. Niedermayer, Phys. Lett. {\fam\bffam\eightbf B245} (1990) 529} \Rref{BNNW}{J. Balog, S. Naik, F. Niedermayer and P. Weisz, Amsterdam Lattice (1992) 232} \Rref{HMN}{P. Hasenfratz, M. Maggiore and F. Niedermayer, Phys. Lett. {\fam\bffam\eightbf B245} (1990) 522} \Rref{JNW}{G. Japaridze, A. Nersesyan and P. Wiegmann, Nucl. Phys. {\fam\bffam\eightbf B230} (1984) 511} \Rref{W}{P.B. Wiegmann, Phys. Lett. {\fam\bffam\eightbf B141} (1984) 217} \Rref{PW}{A. Polyakov and P.B. Wiegmann, Phys. Lett. {\fam\bffam\eightbf B131} (1983) 121} \Rref{TH}{T.J. Hollowood, ``{\fam\slfam\eightsl The analytic structure of trigonometric $S$-matrices\/}'', CERN preprint CERN-TH.6888/93, {\fam\ttfam\eighttt hep-th/9305042}, {\fam\itfam\eightit to appear in\/}: Nucl. Phys. {\fam\bffam\eightbf B}} \endref \ciao
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Hereafter, we are interested in the explicit rate at which a system of $N$ interacting stochastic particles $(X^{1,N},X^{2,N},\dots,X^{N,N})$ satisfying \begin{equation} \label{eq:particle_intro} \left\{ \begin{aligned} &X^{i,N}_t=X^i_0+\int_{0}^{t}c(s,(X^{i,N}_r)_{0\leq r\leq s})\,ds\\ &\hspace{1cm}+\int_0^t A(s,(X^{i,N}_r)_{0\leq r\leq s})\Big(B(s,(X^{i,N}_r)_{0\leq r\leq s};\overline{\mu}^{N,N}_s)\,ds+\,dW^i_s\Big),\,i=1,\cdots,N,\,0\leq t\leq T,\\ &\overline{\mu}^{N,N}_t=\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,N}_r)_{0\leq r\leq t}\}},\,X^i_0\sim \mu^0,\,(X^1_0,X^2_0,\dots,X^N_0)\,\text{independent}, \end{aligned} \right. \end{equation} propagates chaos. The particle system is defined up to some finite time horizon $0<T<\infty$, with a given initial distribution on $\er^d$ and $W^1,\dots,W^N$ a sequence of independent $m$-dimensional standard Brownian motions ($m\geq 1$). The system of SDEs \eqref{eq:particle_intro} mainly involves a non-anticipative diffusion component $A$ and two non-anticipative drift components $c$ and $A B$ (resulting from the product of $A$ and $B$), issued from the given progressively measurable mappings: \[ c:(t,x)\in[0,T]\times\Cc([0,T];\er^d) \mapsto c(t,x)=c\big(t,(\omega_{\theta\wedge t}(x))_{0\leq \theta\leq T}\big)\in \er^{d}, \] \[ A:(t,x)\in[0,T]\times\Cc([0,T];\er^d) \mapsto A(t,x)=A\big(t,(\omega_{\theta\wedge t}(x))_{0\leq \theta\leq T}\big)\in\er^{d\times m}, \] \[ B:(t,x,P)\in[0,T]\times\Cc([0,T];\er^d)\times \Pp(\Cc([0,T];\er^d))\mapsto B(t,x;P)=B\big(t,(\omega_{\theta\wedge t}(x))_{0\leq \theta\leq T},P\circ((\omega_{\theta\wedge t})_{0\leq \theta\leq T})^{-1}\big)\in\er^m, \] for $(\omega_t)_{0\leq t\leq T}$ the canonical process on $\Cc([0,T];\er^d)$. In particular, the interaction between particles is described by the component $B$, whose values lie in the same space as the Brownian motion driving each element of \eqref{eq:particle_intro}. The propagation of chaos property will here be understood mainly at the level of the law of the paths of \eqref{eq:particle_intro}; namely, in the sense where, for a fixed group of particles $X^{1,N},\dots,X^{k,N}$, as the overall number $N$ of interacting particles increases, the chaos (independence) of the initial inputs $X^1_0,\dots,X^N_0$ and of the diffusive inputs $(W^1_t)_{t\geq 0},\,\dots,(W^N_t)_{t\geq 0}$ of the system is restored in the dynamics of this group of particles, yielding the generic dynamic: \begin{equation} \label{eq:McKeanVlasov_intro} \left\{ \begin{aligned} &X^{\infty}_t=X_0+ \int_{0}^{t}c(s,(X^{\infty}_r)_{0\leq r\leq s})\,ds\\ &\hspace{1cm}+\int_0^t A(s,(X^{\infty}_r)_{0\leq r\leq s})\Big(B(s,(X^{\infty}_r)_{0\leq r\leq s};\Ll((X^{\infty}_r)_{0\leq r\leq s}))\,ds+\,dW_s\Big),\,0\leq t\leq T,\\ &\Ll((X^{\infty}_r)_{0\leq r\leq t})=\text{Law of }((X^\infty_r)_{0\leq r\leq t}),\,X_0\sim \mu^0, \end{aligned} \right. \end{equation} and the weak limit behaviour: \[ \Ll((X^{1,N}_t,\dots,X^{k,N}_t)_{0\leq t\leq T})\underset{N\rightarrow \infty}{\longrightarrow} \Ll((X^{\infty}_t)_{0\leq t\leq T})\otimes\dots\otimes \Ll((X^{\infty}_t)_{0\leq t\leq T}). \] Due to the exchangeability of the particle system, this property is further equivalent to \[ \Ll\Big(\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,N}_t)_{0\leq t\leq T} \}}\Big)\underset{N\longrightarrow \infty}{\longrightarrow}\Ll((X^{\infty}_t)_{0\leq t\leq T})\,\text{in the weak sense on }\Pp(\Cc([0,T];\er^d)), \] whenever $k\geq 2$, [Sznitman \cite{Sznitman-89}, Proposition 2.2].
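For intuition only, here is a minimal Euler-Maruyama sketch of the particle system \eqref{eq:particle_intro} in the simplest Markovian case $A\equiv{\rm Id}$, $c\equiv 0$, with the illustrative bounded kernel $b(x,y)=\tanh(y-x)$ (this kernel and all numerical values are hypothetical choices, not taken from the models studied below):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, T, steps, dim = 500, 1.0, 200, 1
dt = T / steps

def b(x, y):
    # Illustrative bounded interaction kernel (not from the paper).
    return np.tanh(y - x)

X = rng.standard_normal((N, dim))          # X_0^i ~ mu^0 = N(0, Id)
for _ in range(steps):
    # Empirical drift: B(t, X^i; empirical measure) = (1/N) sum_j b(X^i, X^j)
    drift = b(X[:, None, :], X[None, :, :]).mean(axis=1)
    X += drift * dt + np.sqrt(dt) * rng.standard_normal((N, dim))

print(X.mean(), X.std())   # statistics of the empirical measure at time T
\end{verbatim}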
Particular cases of interest for \eqref{eq:McKeanVlasov_intro} that will be discussed later are the situations where the interaction kernel is of the form \[ \int b(t,x,\tilde{x})\,\nu(d\tilde{x}),\,t\geq 0,\,x\in\er^d,\,\nu\in\Pp(\er^d),\,b:[0,\infty)\times\er^d\times\er^d\rightarrow \er^m\,\text{bounded}, \] and where the diffusion component $A$ is either a $d\times d$-valued ($m=d$) bounded and uniformly elliptic matrix or, for $d=2m$, is of the form: \begin{equation}\label{eq:LangevinMcKean} A=\begin{pmatrix} 0 & 0\\ 0 & \sigma\\ \end{pmatrix}. \end{equation} More precisely, the former case corresponds to the prototypical McKean-Vlasov dynamic: \begin{equation}\label{eq:ProtoMcKeanVlasovintro} dX_t=\Big(\int b(t,X_t,x)\,\mu(t,dx)\Big)\,dt+\sigma(t,X_t)\,dW_t,\,\mu(t)=\Ll(X_t),\,X_0\sim \mu_0, \end{equation} while the latter case can be further particularized into a Langevin dynamic $X_t=(Y_t,V_t)\in\er^m\times\er^m$ satisfying: \begin{equation}\label{eq:LangevinMcKeanintro} \left\{ \begin{aligned} &dY_t=V_t\,dt,\,\,(Y_0,V_0)\sim\mu_0\\ &dV_t=\Big(\int b(t,(Y_t,V_t),(y,v))\,\mu(t,dy,dv)\Big)\,dt+\sigma(t,X_t)\,dW_t,\,\mu(t)=\Ll(Y_t,V_t). \end{aligned} \right. \end{equation} The propagation of chaos property of stochastic interacting particle systems has received over the years a tremendous amount of attention since its initial introduction in statistical physics (Kac \cite{Kac-56}), owing to its applications to the probabilistic interpretation of nonlinear pdes (McKean \cite{McKean-66}, \cite{McKean-67}; see the surveys Bossy \cite{Bossy-03}, Jabin and Wang \cite{JabinWang-17} for two global overviews on the theoretical and practical aspects related to McKean-Vlasov or McKean SDEs and their related particle approximations) and, more recently, to the description of models of interacting economic agents and game theory (see e.g. Kolokolstov \cite{Kolokolstov-10}, Carmona and Delarue \cite{CarDel-18a}, \cite{CarDel-18b} and references therein). The central result of the present paper (Theorem \ref{mainthm1:LinearCaseTV}) establishes an explicit (and optimal) rate of convergence for the propagation of chaos property between \eqref{eq:particle_intro} and \eqref{eq:McKeanVlasov_intro} in terms of the total variation distance: \[ \Vert\mu-\nu\Vert_{TV}=\sup_{A\in\Bb(\Cc([0,T];\er^d))}\left|\int \1_{\{x\in A\}}\mu(dx)-\int \1_{\{x\in A\}}\nu(dx)\right|,\,\mu,\nu\in\Pp(\Cc([0,T];\er^d)). \] This result mainly rests on a generic criterion (see the condition $(\mathbf{C})$ below) which does not directly rely on regularity properties of $B$ but rather ensures the control of some moments of the Dol\'eans-Dade exponential martingale related to the Girsanov transformation which maps the $N$-system of McKean SDEs \eqref{eq:McKeanVlasovParticle} into the $N$-interacting particle system \eqref{eq:Nparticles}. The core idea of the main result of the present paper is based on a probabilistic interpretation of the proof techniques introduced in Jabin and Wang \cite{JabinWang-16} for the propagation of chaos in entropy (and by extension in total variation) of the one time-marginal distributions of McKean-Vlasov dynamics of the form \eqref{eq:LangevinMcKeanintro} with bounded interaction.
More generally, the authors designed a guideline for establishing a sharp quantitative estimate of the propagation of chaos, in terms of a vanishing initial chaos (the particles being initially correlated) and (possibly) vanishing diffusion, through a powerful combination of pde analysis, entropy estimates and combinatorics. This guideline, combined with large deviations principles, was extended to the instance of McKean-Vlasov dynamics \eqref{eq:McKeanVlasov_intro} endowed with singular interaction kernels of the form $b\in W^{-1,\infty}$ (i.e. $b^{(k)}(x)=\sum_{l}\partial_{x_l} G^{k,l}(x),\,G\in L^\infty$). Linked to the probabilistic interpretation of the proof techniques of \cite{JabinWang-16}, let us mention that a (non-explicit) propagation of chaos property in entropy and in total variation distance was recently considered in Lacker \cite{Lacker-18} for the McKean SDE: \begin{equation} \label{eq:McKeanVlasovPastDepend} z_t=Z_0+\int_0^t B\big(s,(z_r)_{0\leq r\leq s},\Ll((z_r)_{0\leq r\leq s})\big)\,ds+\int_{0}^{t}\sigma(s,(z_r)_{0\leq r\leq s})\,dW_s,\,0\leq t\leq T, \end{equation} and its related particle approximation: \begin{equation} \label{eq:ParticlePastDepend} z^{i,N}_t=Z^i_0+\int_0^t B\big(s,(z^{i,N}_r)_{0\leq r\leq s},\frac{1}{N}\sum_{j=1}^N\delta_{\{(z^{j,N}_r)_{0\leq r\leq s}\}}\big)\,ds+\int_{0}^{t}\sigma(s,(z^{i,N}_r)_{0\leq r\leq s})\,dW^i_s,\,0\leq t\leq T, \end{equation} assuming the uniform ellipticity of $\sigma$, the boundedness and Lipschitz continuity (in terms of the total variation distance) of $\sigma^{-1}B$ and the continuity of \[ \nu\in\Pp(\Cc([0,T];\er^d))\mapsto \int_{\Cc([0,T];\er^d)}\int_{0}^{T} \left|\sigma^{-1}(t,z)\left( B(t,z,\mu)- B(t,z,\nu)\right)\right|^2\,dt\,\nu(dz). \] The core idea of \cite{Lacker-18} is closely connected to the original idea introduced in Mishura and Veretennikov \cite{MisVer-16} (from which the present paper also draws its initial step), linking the total variation distance between two It\^o diffusion processes to the Girsanov transformation between the two processes, and to its applications to the weak uniqueness problem for the McKean SDE \eqref{eq:ProtoMcKeanVlasovintro}. (It should also be noticed that the idea of establishing propagation of chaos through the Girsanov transformation was already hinted at in the preprint of Veretennikov \cite{Veretennikov-18}, almost at the same time as \cite{Lacker-18}.) The dynamics \eqref{eq:particle_intro} and \eqref{eq:McKeanVlasov_intro} considered hereafter present an extended version of \eqref{eq:McKeanVlasovPastDepend} and \eqref{eq:ParticlePastDepend} which enables us to relax the ellipticity assumption on the diffusion coefficient and to embed the case \eqref{eq:LangevinMcKeanintro}. Let us also mention that, compared to \cite{Lacker-18}, the wellposedness problems related to \eqref{eq:McKeanVlasov_intro} and \eqref{eq:particle_intro} will not be addressed hereafter (see the assumptions \hypi and \hypii); we rather focus on quantifying explicitly the related propagation of chaos property. The main result of this paper (Theorem \ref{mainthm1:LinearCaseTV}) is stated in Section \ref{sec:MainResults} and proved in Section \ref{sec:Proof}.
Section \ref{sec:SufficientConditions} is dedicated to applications of this main result in the particular cases \eqref{eq:ProtoMcKeanVlasovintro} and \eqref{eq:LangevinMcKeanintro} (see corollaries \ref{coro:BoundedCase} and \ref{coro:KineticBoundedCase} respectively) and to exhibiting a sufficient condition for the condition $(\mathbf{C})$ in terms of the second-order differentiability of $\nu\mapsto B(t,x,\nu)$ (Proposition \ref{prop:DifferentiabilityCondition}). Although \eqref{eq:ProtoMcKeanVlasovintro} and \eqref{eq:LangevinMcKeanintro} only present applications of Theorem \ref{mainthm1:LinearCaseTV} where the interaction is bounded, more singular situations should be handled by cut-smoothing techniques. The particular case of conditional McKean Lagrangian models (see Bossy, Jabir and Talay \cite{jabir-11a}), which initially motivated the present work, will be discussed in \cite{JabMen-19}. \textbf{Assumptions}: (As before, $(A B)$ denotes the functional on $[0,\infty)\times\Cc([0,\infty);\er^d)\times \Pp(\Cc([0,\infty);\er^d))$ resulting from the product between the diffusion component $A$ and the drift component $B$ in \eqref{eq:McKeanVlasovParticle} and \eqref{eq:McKeanVlasov_intro}.) \noindent \hypo For any $\mu_0$ on $\er^d$, $0\leq T<\infty$, there exists a unique weak solution $(\Xx_t)_{0\leq t\leq T}$ satisfying the SDE: \begin{equation}\label{eq:IntermediateSDE} \left\{ \begin{aligned} &\Xx_t=X_0+ \int_{0}^{t}c(s,(\Xx_r)_{0\leq r\leq s})\,ds+\int_0^t A(s,(\Xx_r)_{0\leq r\leq s})\,dW_s,\,0\leq t\leq T,\\ &\Xx_0\sim \mu^0, \end{aligned} \right. \end{equation} and for $(\Xx^{1}_t)_{0\leq t\leq T},\,\dots,(\Xx^{N}_t)_{0\leq t\leq T}$ a family of $N$ independent copies of $(\Xx_t)_{0\leq t\leq T}$, it holds, for all $1\leq i\leq N$, that \[ \int_0^T\left| \big (A B\big)\big(s,(\Xx^{i}_r)_{0\leq r\leq s};\nu^N_s\big)\right|^2\,ds<\infty, \] where $\nu^{N}_t=\frac{1}{N}\sum_{j=1}^{N}\delta_{\{(\Xx^j_r)_{0\leq r\leq t}\}}$. \noindent \hypi For any $\mu_0$, $0<T<\infty$, the SDE \eqref{eq:McKeanVlasov_intro} admits a unique weak solution $(X_t)_{t\geq 0}$ such that, almost surely, \[ \int_0^T\left|\big(A B\big)(s,(X_r)_{0\leq r\leq s};\Ll((X_r)_{0\leq r\leq s}))\right|^2\,ds<\infty. \] \noindent \hypii For any $\mu_0$, $0<T<\infty$, $N\geq 1$, the system of SDEs \eqref{eq:particle_intro} admits a unique weak solution $\{(X^{i,N}_t)_{t\geq 0};\,1\leq i\leq N\}$ such that, a.s. \[ \forall\,1\leq i\leq N,\,\int_0^T\left|\big(A B\big)(s,(X^{i,N}_r)_{0\leq r\leq s};\overline{\mu}^{N,N}_s)\right|^2\,ds<\infty, \] where $(\overline{\mu}^{N,N}_t)_{0\leq t\leq T}$ is the flow of (random) empirical measures given as in \eqref{eq:particle_intro}. \begin{remark} With the assumptions \hypi and \hypii, we deliberately leave aside the wellposedness problems of a weak solution to the $N$-interacting particle system \eqref{eq:particle_intro} and to the McKean SDE \eqref{eq:McKeanVlasov_intro}, and rather focus on quantifying explicitly the related propagation of chaos property. Although not necessary, the assumption \hypo is used to ensure, in a simple way, the equivalence in law between \eqref{eq:particle_intro} and \eqref{eq:McKeanVlasov_intro}. Let us also mention that the assumptions on the weak uniqueness of \eqref{eq:particle_intro} and \eqref{eq:McKeanVlasov_intro} can be relaxed as long as there exist a solution to \eqref{eq:particle_intro} and a solution to \eqref{eq:McKeanVlasov_intro} for which \eqref{proofstp:i} holds.
\end{remark} \textbf{Notation:} For any integer $m\geq 1$ and any finite positive time horizon $T$, $\Cc([0,T];\er^{m})$ (respectively $\Cc([0,\infty);\er^{m})$) will denote the space of continuous functions defined on $[0,T]$ (resp. $[0,\infty)$) with values in $\er^m$ equipped with the uniform norm $\Vert x\Vert_{\Cc([0,T];\er^m)}=\max_{0\leq t\leq T}|x(t)|$ (resp. $\Vert x\Vert_{\Cc([0,\infty);\er^m)}=\sup_{t\geq 0}(|x(t)|\wedge 1)$). $\Pp(\Cc([0,T];\er^{m}))$ and $\Pp(\Cc([0,\infty);\er^{m}))$ will denote respectively the space of probability measures defined on $\Cc([0,T];\er^m)$ and on $\Cc([0,\infty);\er^m)$. Finally, $\Vert~\Vert_{TV,(0,T)}$ will denote the total variation norm on $\Pp(\Cc([0,T];\er^{m}))$, that is (see e.g. Equation $(3.2.13)$ in Rachev \cite{Rachev-91}): for all $P_1,P_2$ on $\Pp(\Cc([0,T];\er^{m}))$ \[ \Vert P_1-P_2\Vert_{TV,(0,T)}=\sup_{A \in \Bb(\Cc([0,T];\er^m))}\left|P_1(A)-P_2(A)\right|, \] where $\Bb(\Cc([0,T];\er^m))$ denotes the Borel $\sigma$-algebra of $\Cc([0,T];\er^m)$. Whenever $P_1, P_2\in \Pp(\Cc([0,\infty);\er^{m}))$ and $0<T<\infty$ is a finite time horizon, $\Vert P_1-P_2\Vert_{TV,(0,T)}$ will simply correspond to the total variation distance between the probability measures restricted to the sample space $(\Cc([0,T];\er^d),\Bb(\Cc([0,T];\er^d)))$. \section{Main result}\label{sec:MainResults} Let $(\Omega,\Ff,(\Ff_t;\,0\leq t\leq T),\PP)$ and $(\widetilde{\Omega},\widetilde{\Ff},(\widetilde{\Ff}_t;\,0\leq t\leq T),\widetilde{\PP})$ be two (possibly different) filtered probability spaces, on which are defined, respectively, the collections $(X^i_0,(W^i_t)_{0\leq t\leq T})_{1\leq i\leq N}$ and $(\widetilde{X}^i_0,(\widetilde{W}^i_t)_{0\leq t\leq T})_{1\leq i\leq N}$ of independent copies of $(X_0,(W_t)_{0\leq t\leq T})$. Then, under \hypi and \hypii, consider a version of the particle system \eqref{eq:particle_intro} defined on $(\widetilde{\Omega},\widetilde{\Ff},(\widetilde{\Ff}_t;\,0\leq t\leq T),\widetilde{\PP})$ as \begin{equation} \label{eq:Nparticles} \left\{ \begin{aligned} &X^{i,N}_t=\widetilde{X}^i_0+\int_{0}^{t}c(s,(X^{i,N}_r)_{0\leq r\leq s})\,ds\\ &\hspace{1cm}+\int_0^t A(s,(X^{i,N}_r)_{0\leq r\leq s})\Big(B(s,(X^{i,N}_r)_{0\leq r\leq s};\overline{\mu}^{N,N}_s)\,ds+d\widetilde{W}^i_s\Big),\,0\leq t\leq T,\,i=1,\cdots,N,\\ &\overline{\mu}^{N,N}_t=\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,N}_r)_{0\leq r\leq t}\}},\,\widetilde{X}^i_0\sim \mu^0, \end{aligned} \right. \end{equation} and a system of $N$ independent copies of \eqref{eq:McKeanVlasov_intro} defined on $(\Omega,\Ff,(\Ff_t;\,0\leq t\leq T),\PP)$ as \begin{equation} \label{eq:McKeanVlasovParticle} \left\{ \begin{aligned} &X^{i,\infty}_t=X^i_0+\int_{0}^{t}c(s,(X^{i,\infty}_r)_{0\leq r\leq s})\,ds\\ &\hspace{1cm}+\int_0^t A(s,(X^{i,\infty}_r)_{0\leq r\leq s})\Big(B(s,(X^{i,\infty}_r)_{0\leq r\leq s};\Ll((X^{i,\infty}_r)_{0\leq r\leq s}))\,ds+\,dW^i_s\Big),\\ &\mu^{i,\infty}(t)=\Ll((X^{i,\infty}_r)_{0\leq r\leq t}),\,X^i_0\sim \mu^0. \end{aligned} \right. \end{equation} As the assumption \hypi ensures the uniqueness of each component of the system \eqref{eq:McKeanVlasovParticle}, the distribution $\Ll((X^{i,\infty}_t)_{0\leq t\leq T})$ is common to all the components and equal to that of \eqref{eq:McKeanVlasov_intro}; the index $i$ may thus be dropped. The superscript $\infty$ in \eqref{eq:McKeanVlasovParticle} will be used as a reminder that \eqref{eq:McKeanVlasovParticle} is (at least heuristically) the suitable limit system of \eqref{eq:Nparticles}.
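Before stating the main result, let us illustrate the quantity it controls: the deviation between the empirical drift computed along $N$ independent copies and its mean-field limit. The following Monte Carlo sketch (with the hypothetical bounded kernel $b(x,y)=\sin(x-y)$ and standard Gaussian samples standing in for the copies of \eqref{eq:McKeanVlasov_intro}, so that the mean-field drift is known in closed form) exhibits the $O(1/N)$ decay of the second moment that the condition $(\mathbf{C})$ below quantifies at all orders:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def b(x, y):
    # Illustrative bounded interaction kernel (not from the paper).
    return np.sin(x - y)

# With X^{j,infty} i.i.d. N(0,1), the mean-field drift int b(x,y) mu(dy)
# is known in closed form: E[sin(x - Y)] = exp(-1/2) sin(x).
for N in [100, 400, 1600, 6400]:
    reps = 1000
    X = rng.standard_normal((reps, N))   # i.i.d. copies standing in for the McKean SDE
    dB = b(X[:, :1], X).mean(axis=1) - np.exp(-0.5) * np.sin(X[:, 0])
    print(N, N * np.mean(dB ** 2))       # N * E|Delta B|^2 stays bounded: O(1/N) decay
\end{verbatim}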
Our main result is given by the following theorem: \begin{theorem}\label{mainthm1:LinearCaseTV} Assume that \hypi and \hypii hold. Assume also that the following condition $\mathbf{(C)}$ holds: \begin{equation*} \mathbf{(C)}\,\,\left\|\,\, \begin{aligned} &\text{There exists a constant }0<\beta<\infty\text{ such that for any }0\leq T_0<T<\infty,\,0<\delta<\infty,\text{ and for all integers }p\geq 1,\\ &\hspace{4cm}\EE_{\PP}\left[\left(\int_{T_0}^{(T_0+\delta)\wedge T}\left| \triangle B^{i,N,\infty}_t\right|^{2}\,dt\right)^p\right]\leq \frac{p! \beta^{p}\delta^p}{N^p},\\ &\text{where }\triangle B^{i,N,\infty}_t=B(t,(X^{i,\infty}_r)_{0\leq r\leq t};\overline{\mu}^{N,\infty}_t)- B(t,(X^{i,\infty}_r)_{0\leq r\leq t};\Ll((X^{i,\infty}_r)_{0\leq r\leq t})),\\ &\overline{\mu}^{N,\infty}_t=\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,\infty}_r)_{0\leq r\leq t}\}}. \end{aligned} \right. \end{equation*} Then, for all $1\leq k\leq N$, \[ \Vert \Ll\big((X^{1,N}_t,X^{2,N}_t,\dots,X^{k,N}_t)_{0\leq t\leq T}\big)- \Ll\big((X^{1,\infty}_t,X^{2,\infty}_t,\dots,X^{k,\infty}_t)_{0\leq t\leq T}\big)\Vert_{TV,(0,T)}\leq C(1+\beta T)\sqrt{\frac{k}{N}}, \] where $C$ is a universal positive constant (in particular independent of $k$, $N$, $T$ and $\beta$). \end{theorem} The condition $\mathbf{(C)}$ can be understood as a local Novikov condition, in the spirit of the key argument in the proof of Khasminskii's lemma (see e.g. [Simon \cite{Simon-82}, Lemma B.1.2]). Alternatively the condition $\mathbf{(C)}$ in Theorem \ref{mainthm1:LinearCaseTV} can be viewed as a (non-asymptotic) large deviation principle or a sub-Gaussian concentration property for the deviation between the ``empirical'' drift of \eqref{eq:Nparticles} evaluated along the $N$-system of McKean SDEs \eqref{eq:McKeanVlasovParticle}: \[ B\big(t,((X^{i,\infty}_r)_{0\leq r\leq t});\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,\infty}_r)_{0\leq r\leq t}\}}\big), \] and its mean-field limit: \[ B\big(t,((X^{i,\infty}_r)_{0\leq r\leq t});\Ll((X^{i,\infty}_r)_{0\leq r\leq t})\big). \] In the situations \eqref{eq:ProtoMcKeanVlasovintro} and \eqref{eq:LangevinMcKeanintro}, $\mathbf{(C)}$ is a direct consequence of the boundedness of the interaction kernel $b$. In more general situations, the condition may result from a Lipschitz property of $\nu\in\Pp(\Cc([0,T];\er^d))\mapsto B\big(t,x;\nu\big)$ combined with a centering property (see Lemma \ref{lem:RatePathDependent}), or from a higher order regularity property in terms of the linear functional derivative of $\nu\in\Pp(\Cc([0,T];\er^d))\mapsto B\big(t,x;\nu\big)$ (see Definition \ref{def:FlatDerivative} and Proposition \ref{prop:DifferentiabilityCondition}). \section{Proof of Theorem \ref{mainthm1:LinearCaseTV}}\label{sec:Proof} \subsection{Preliminaries on propagation of chaos for the total variation distance and control of the Girsanov transformation between $\Ll(X^{1,\infty},\dots,X^{N,\infty})$ and $\Ll(X^{1,N},\dots,X^{N,N})$} For notational convenience, define \[ P^{k,N}=\Ll((X^{1,N}_t,X^{2,N}_t,\cdots,X^{k,N}_t)_{0\leq t\leq T})\in \Pp(\Cc([0,T];\er^{dk})), \] the joint law of the first $k$ particles of \eqref{eq:Nparticles}, and \[ P^{k,\infty}=\Ll((X^{1,\infty}_t,X^{2,\infty}_t,\cdots,X^{k,\infty}_t)_{0\leq t\leq T})\in \Pp(\Cc([0,T];\er^{dk})), \] the joint law of the first $k$ independent copies of \eqref{eq:McKeanVlasov_intro}.
The latter factorizes as \[ P^{k,\infty}=\underbrace{P^{\infty}\otimes P^{\infty}\otimes \cdots \otimes P^{\infty}}_{\text{k times}},\,\,\, P^{\infty}=\Ll\big((X^{\infty}_t)_{0\leq t\leq T}\big), \] as the assumption \hypi ensures the weak uniqueness of \eqref{eq:McKeanVlasov_intro}. The combination of the assumptions \hypo, \hypi and \hypii ensures that, for all $1\leq k\leq N<\infty$, the measures $P^{k,N}$ and $P^{k,\infty}$ are equivalent and that the Radon--Nikodym derivative is given by the Dol\'eans-Dade exponential martingale\footnote{The proof of \eqref{proofstp:i} under the sole assumptions \hypo, \hypi and \hypii is detailed in the appendix section.}: \begin{equation} \label{proofstp:i} \begin{aligned} &Z^{N}_T:=\frac{dP^{N,N}}{dP^{N,\infty}}\\ &=\exp\left\{-\sum_{i=1}^N\int_0^T \left(B\big(t,(X^{i,\infty}_r)_{0\leq r\leq t},\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,\infty}_r)_{0\leq r\leq t}\} }\big)- B\big(t,(X^{i,\infty}_r)_{0\leq r\leq t},\Ll((X^{i,\infty}_r)_{0\leq r\leq t})\big)\right)\cdot \,dW^{i}_t\right.\\ &\quad \left.-\frac{1}{2}\int_0^T \sum_{i=1}^N \left|B\big(t,(X^{i,\infty}_r)_{0\leq r\leq t},\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,\infty}_r)_{0\leq r\leq t}\}}\big)- B\big(t,(X^{i,\infty}_r)_{0\leq r\leq t},\Ll((X^{i,\infty}_r)_{0\leq r\leq t})\big) \right|^2\,dt\right\}\\ &=\exp\left\{-\sum_{i=1}^N\int_0^T \triangle B^{i,N}_t\cdot \,dW^{i}_t-\frac{1}{2}\sum_{i=1}^N\int_0^T \left|\triangle B^{i,N}_t\right|^2\,dt\right\}, \end{aligned} \end{equation} where $(\triangle B^{i,N}_t)_{0\leq t\leq T},\,i=1,\dots,N$ are given as in $(\mathbf{C})$. By Csisz\'ar-Pinsker-Kullback's inequality, \begin{equation}\label{proofstp:g} \Vert P^{k,N}-P^{k,\infty}\Vert_{TV,(0,T)} \leq \sqrt{2 H(P^{k,N}\,|\,P^{k,\infty})}, \end{equation} where the relative entropy $H(P^{k,N}\,|\,P^{k,\infty})$ between $P^{k,N}$ and $P^{k,\infty}$ is given by \[ H(P^{k,N}\,|\,P^{k,\infty})=\int_{\mathbf{\omega}^k\in\Cc([0,T];\er^{dk})} \log(dP^{k,N}/dP^{k,\infty})(\mathbf{\omega}^{k})P^{k,N}(d\mathbf{\omega}^k), \] with $dP^{k,N}/dP^{k,\infty}$ being explicitly given by the conditional expectation $\EE_{\PP}\left[Z^N_T \,|\,(X^{1,\infty},\dots,X^{k,\infty})\right]$, namely the average value of $Z^{N}_T$ given the paths on $[0,T]$ of the first $k$ components of \eqref{eq:McKeanVlasovParticle}, $(X^{1,\infty}_t,\dots,X^{k,\infty}_t)_{0\leq t\leq T}$. At this stage, for $1\leq k<N$, decomposing the empirical measure $\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,\infty}_r)_{0\leq r\leq t}\}}$ into \[ \frac{1}{N}\sum_{j=1}^k\delta_{\{(X^{j,\infty}_r)_{0\leq r\leq t}\}}+\frac{N-k}{N}\left(\frac{1}{N-k}\sum_{j=k+1}^N\delta_{\{(X^{j,\infty}_r)_{0\leq r\leq t}\}}\right), \] and owing to the l.s.c. property of $H$ and to the fact that $(X^{k+1,\infty},\dots,X^{N,\infty})$ are i.i.d., a natural propagation of chaos property can be derived provided some boundedness and continuity properties hold for $\nu\mapsto B(t,x;\nu)$. (In \cite{Lacker-18}, an alternative route was proposed, proving that $\lim_{N\rightarrow\infty}H(P^{k,\infty}\,|\,P^{k,N})=0$. This result was derived from a preliminary propagation of chaos result, $\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,N}_r)_{0\leq r\leq T}\}}\rightarrow \Ll((X^\infty_r)_{0\leq r\leq T})$, itself obtained from a large deviation principle.) An explicit estimate of the propagation of chaos can further be deduced from the super-additivity property of the renormalized relative entropy (see e.g.
[Hauray and Mischler 2014, Lemma 3.3-iv]), \[ \frac{1}{k}H\big(P^{k,N}\,|\,P^{k,\infty}\big)\leq \frac{1}{N}H\big(P^{N,N}\,|\,P^{N,\infty}\big). \] Plugging this into \eqref{proofstp:g} yields \begin{equation*} \Vert P^{k,N}-P^{k,\infty}\Vert_{TV,(0,T)} \leq \sqrt{\frac{2k}{N} H(P^{N,N}\,|\,P^{N,\infty})}=\sqrt{\frac{2k}{N} \EE_{\PP}\left[Z^{N}_T\log(Z^{N}_T)\right]}, \end{equation*} from which the optimal rate $1/\sqrt{N}$ emerges, provided $\sup_N\EE[(Z^N_T)^{1+\delta}]<\infty$ for some $\delta>0$. The necessity of a uniform control of a moment of order greater than $1$ of $(Z^N_t)_{0\leq t\leq T}$ can be observed more directly in the case of $P^{k,N}$ and $P^{k,\infty}$: owing to \eqref{proofstp:i}, for all $A\in\Bb(\Cc([0,T];\er^{kd}))$, we have \begin{align*} \widetilde{\PP}((X^{1,N},\dots,X^{k,N})\in A)=P^{k,N}(A)=\EE_\PP\left[Z^{N}_T\1_{\{(X^{1,\infty},\dots,X^{k,\infty})\in A\}}\right], \end{align*} from which we deduce that \begin{align*} \Vert P^{k,N}-P^{k,\infty} \Vert_{TV,(0,T)} &=\sup_{A \in\Bb(\Cc([0,T];\er^{kd}))} \left|\EE_\PP\left[\left(Z^{N}_T-1\right)\1_{\{(X^{1,\infty},\dots,X^{k,\infty})\in A\}}\right]\right|\\ &\leq\EE_\PP\left[\left|\EE_\PP\left[Z^{N}_T-1\,|\,(X^{1,\infty},\dots,X^{k,\infty})\right]\right|\right]. \end{align*} Since \[ Z^N_T=1-\sum_{i=1}^{N}\int_0^T Z^N_t\triangle B^{i,N}_t\cdot dW^{i}_t=1-\sum_{i=1}^{N} \sum_{l=1}^m \int_0^T Z^N_t\triangle B^{i,N,(l)}_t\,dW^{i,(l)}_t, \] for \[ Z^N_t=\frac{dP^{N,N}}{dP^{N,\infty}}\Big{|}_{\Bb(\Cc([0,t];\er^{dN}))}=\exp\left\{-\sum_{i=1}^N\int_0^t \triangle B^{i,N}_r\cdot \,dW^{i}_r-\frac{1}{2}\sum_{i=1}^N\int_0^t \left|\triangle B^{i,N}_r\right|^2\,dr\right\}, \] and since $(W^{k+1},\dots,W^{N})$ are independent of $(X^{1,\infty},\dots,X^{k,\infty})$, the conditional expectation \[ \EE_\PP\left[Z^{N}_T-1\,|\,(X^{1,\infty},\dots,X^{k,\infty})\right] \] reduces to \[ -\EE_\PP\left[\sum_{i=1}^k\int_0^T Z^N_t\triangle B^{i,N}_t\cdot dW^{i}_t\,\Big|\,(X^{1,\infty},\dots,X^{k,\infty})\right]. \] This gives: \begin{equation}\label{eq:BoundTVa} \Vert P^{k,N}-P^{k,\infty} \Vert_{TV,(0,T)}\leq \EE_{\PP}\left[\left|\EE_{\PP}\left[\sum_{i=1}^k\int_0^T Z^{N}_t \triangle B^{i,N}_t\cdot \,dW^{i}_t\,\Big{|}\,(X^{1,\infty}_r,\dots,X^{k,\infty}_r)_{0\leq r\leq T}\right]\right|\right]. \end{equation} Using successively Burkholder--Davis--Gundy's inequality, Jensen's inequality, the exchangeability of $(X^{1,\infty},\dots,X^{N,\infty})$ and H\"older's inequality for an arbitrary $1<p< \infty$, it follows that: \begin{align*} \Vert P^{k,N}-P^{k,\infty} \Vert_{TV,(0,T)}&\leq \EE_{\PP}\left[\left(\int_0^T (Z^{N}_t)^2 \sum_{i=1}^k\left|\triangle B^{i,N}_t\right|^2\,dt\right)^{1/2}\right]\leq \sqrt{k}\EE_{\PP}\left[\left(\int_0^T (Z^{N}_t)^2\left|\triangle B^{i,N}_t\right|^2\,dt\right)^{1/2}\right]\\ &\leq \sqrt{k}\EE_{\PP}\left[\max_{0\leq t\leq T}(Z^{N}_t)\left(\int_0^T\left|\triangle B^{i,N}_t\right|^2\,dt\right)^{1/2}\right]\\ &\leq \sqrt{k}\left(\EE_{\PP}\left[\max_{0\leq t\leq T}(Z^{N}_t)^p\right]\right)^{1/p}\left(\EE_{\PP}\left[\left(\int_0^T\left|\triangle B^{i,N}_t\right|^2\,dt\right)^{p/(2(p-1))}\right]\right)^{(p-1)/p}.
\end{align*} Applying Doob's inequality, we get \begin{equation}\label{eq:BoundTVb} \Vert P^{k,N}-P^{k,\infty} \Vert_{TV,(0,T)}\leq \sqrt{k}\frac{p}{p-1}\left(\EE_{\PP}\left[(Z^{N}_T)^p\right]\right)^{1/p}\left(\EE_{\PP}\left[\left(\int_0^T\left|\triangle B^{i,N}_t\right|^2\,dt\right)^{p/(2(p-1))}\right]\right)^{(p-1)/p}. \end{equation} Obtaining the rate $1/\sqrt{N}$ is then directly related to the technical difficulty of controlling, uniformly in $N$, a $(1+\delta)$-moment of $Z^N_T$: such a uniform control requires the finiteness of the moments $\EE_{\PP}[(\sum_{i=1}^N\int_0^T|\triangle B^{i,N}_t|^2\,dt)^{k}]$ which, owing to the exchangeability of $(X^{1,\infty},\dots,X^{N,\infty})$, amounts to establishing that $\EE_{\PP}[(\int_0^T|\triangle B^{i,N}_t|^2\,dt)^{k}]$ is of order $1/N^k$. The proof of Theorem \ref{mainthm1:LinearCaseTV} below proceeds by first establishing a local-in-time control of an arbitrary moment of $(Z^N_t)_{0\leq t\leq T}$ which, combined with \eqref{eq:BoundTVb} and a careful splitting of the transformation from $(X^{1,\infty},\dots,X^{N,\infty})$ to $(X^{1,N},\dots,X^{N,N})$ into small time intervals, enables us to conclude. \subsection{Proof of Theorem \ref{mainthm1:LinearCaseTV}} \begin{prop}\label{prop:ControlExpMart} Let $\{(X^{i,\infty}_t)_{0\leq t\leq T};\,1\leq i\leq N\}$ be given as in \eqref{eq:McKeanVlasovParticle} and assume that $\mathbf{(C)}$ holds true. Then, for all $0\leq T_0<T<\infty$, $0<\delta<\infty$ and $0<\kappa<\infty$, \[ \sup_N \EE_\PP\left[(Z^N_{T_0+\delta}/Z^N_{T_0})^\kappa\right]=\sup_N\EE_\PP\left[\exp\left\{-\kappa\sum_{i=1}^N \int_{T_0}^{T_0+\delta} \triangle B^{i,N}_t\cdot \,dW^{i}_t-\frac{\kappa}{2}\sum_{i=1}^N\int_{T_0}^{T_0+\delta} \left|\triangle B^{i,N}_t\right|^2 \,dt\right\}\right] \] is bounded from above by $1+\exp\{\kappa^2\}+\frac{2}{1-8 \kappa^2\delta \beta}$, provided that $\delta< (8\kappa^2 \beta)^{-1}$. \end{prop} \begin{proof}[Proof of Proposition \ref{prop:ControlExpMart}] For the moment, let $\delta$ be an arbitrary positive real number and let us show that \[ \sup_N\EE_\PP\left[\exp\left\{\kappa\sum_{i=1}^N \int_{T_0}^{T_0+\delta} \triangle B^{i,N}_t\cdot \,dW^{i}_t\right\}\right]<\infty. \] Using the Taylor expansion of the exponential function, \begin{align*} \EE_\PP\left[ \exp\left\{\kappa\sum_{i=1}^N\int_{T_0}^{T_0+\delta}\triangle B^{i,N}_t\cdot \,dW^{i}_t\right\}\right] \leq \sum_{k\geq 0}\frac{\kappa^k}{k!}\EE_\PP\left[\left(\sum_{i=1}^N\int_{T_0}^{T_0+\delta} \triangle B^{i,N}_t\cdot \,dW^{i}_t\right)^k\right]. \end{align*} Splitting this sum into its even and odd components, and since, for all $r\in\er$, $r^{2p+1}\leq 1+r^{2p+2}$, we have \begin{equation} \label{proofstp:a} \begin{aligned} &\EE_\PP\left[ \exp\left\{\kappa\sum_{i=1}^N\int_{T_0}^{T_0+\delta}\triangle B^{i,N}_t\cdot \,dW^{i}_t\right\}\right]\\ &\leq \sum_{p\geq 0}\frac{\kappa^{2p+1}}{(2p+1)!}\EE_\PP\left[\left(\sum_{i=1}^N\int_{T_0}^{T_0+\delta} \triangle B^{i,N}_t\cdot \,dW^{i}_t\right)^{2p+1}\right] +\sum_{p\geq 0}\frac{\kappa^{2p}}{(2p)!}\EE_\PP\left[\left(\sum_{i=1}^N\int_{T_0}^{T_0+\delta} \triangle B^{i,N}_t\cdot \,dW^{i}_t\right)^{2p}\right]\\ &\leq 1+\sum_{p\geq 0}\frac{\kappa^{2p+1}}{(2p+1)!}+2\sum_{p\geq 0}\frac{|\kappa|^{2p}}{(2p)!}\EE_\PP\left[\left(\sum_{i=1}^N\int_{T_0}^{T_0+\delta} \triangle B^{i,N}_t\cdot \,dW^{i}_t\right)^{2p}\right].
\end{aligned} \end{equation} Applying the martingale moment control of Carlen-Kr\'ee \cite{CarKre-91} (see Theorem \ref{thm:CarlenKree}, Appendix section, for a reminder), we have \begin{align*} \EE_\PP\left[\left(\sum_{i=1}^N\int_{T_0}^{T_0+\delta} \triangle B^{i,N}_t\cdot \,dW^{i}_t\right)^{2p}\right] \leq 2^{2p} (2p)^{p} \EE_\PP\left[\left(\sum_{i=1}^N\int_{T_0}^{T_0+\delta} \left|\triangle B^{i,N}_t\right|^2\,dt\right)^{p}\right]. \end{align*} Then, by Jensen's inequality and the exchangeability of the $N$-system of McKean-Vlasov dynamics, we get that \begin{align*} \EE_\PP\left[\left(\sum_{i=1}^N\int_{T_0}^{T_0+\delta} \left|\triangle B^{i,N}_t\right|^2\,dt\right)^{p}\right] & \leq N^{p}\EE_\PP\left[\left( \int_{T_0}^{T_0+\delta} \left|\triangle B^{i,N}_t\right|^2\,dt\right)^{p}\right]. \end{align*} Plugging in the estimate of the condition $\mathbf{(C)}$ then ensures the upper bound \begin{equation} \label{proofstp:c} \begin{aligned} \EE_\PP\left[ \exp\left\{\kappa\sum_{i=1}^N\int_{T_0}^{T_0+\delta}\triangle B^{i,N}_t\cdot \,dW^{i}_t\right\}\right]\leq 1+\exp\{\kappa^2\}+2\sum_{p\geq 0}\frac{p!\,p^p\,2^{3p}\,\delta^p\beta^{p}\kappa^{2p}}{(2p)!}. \end{aligned} \end{equation} Since $\sup_{p}\big(p!\,p^p/(2p)! \big)\leq 1$, the sum is essentially geometric and the condition $8\kappa^2\delta\beta<1$ ensures its finiteness, with \[ \sup_N \EE_\PP\left[(Z^N_{T_0+\delta}/Z^N_{T_0})^\kappa\right]\leq 1+\exp\{\kappa^2\}+\frac{2}{1-8 \kappa^2\delta \beta} . \] \end{proof} Coming back to the proof of Theorem \ref{mainthm1:LinearCaseTV}, for an arbitrary integer $1<p<\infty$ and for $\overline{\delta}(p):=(8\beta p^2)^{-1}$, choose an arbitrary real number $\delta$ in $(0,\overline{\delta}(p))$ (this number will be specified at the end of the proof). For $M:=\lfloor T/\delta\rfloor$, we define the partition $[0,T]=\cup_{m=0}^M[t_m,t_{m+1})$ with \[ t_0=0,\,t_{M+1}=T,\,t_{m+1}-t_{m}=\delta\,\mbox{ for }\,0\leq m< M. \] Next, for each $m$, define the family of $N$ processes $(Y^{1,N,m,\infty}_t)_{0\leq t\leq T},\dots,(Y^{N,N,m,\infty}_t)_{0\leq t\leq T}$ as follows: for each $1\leq i\leq N$, \noindent $\bullet$ Whenever $0\leq t\leq m\delta$, the path $Y^{i,N,m,\infty}_t$ is given as a weak solution to \begin{align*} Y^{i,N,m,\infty}_t&=Y^{i,N,m,\infty}_0+\int_{0}^{t}c(s,(Y^{i,N,m,\infty}_r)_{r\leq s})\,ds\\ &\quad+\int_0^t A(s,(Y^{i,N,m,\infty}_r)_{0\leq r\leq s})\big(B(s,(Y^{i,N,m,\infty}_r)_{0\leq r\leq s};\Ll((Y^{i,N,m,\infty}_r)_{0\leq r\leq s}))\,ds+\,dW^i_s\big); \end{align*} $\bullet$ Whenever $m\delta <t\leq T$, \begin{align*} Y^{i,N,m,\infty}_t&=Y^{i,N,m,\infty}_{m\delta}+\int_{m\delta}^{t}c(s,(Y^{i,N,m,\infty}_r)_{r\leq s})\,ds\\ &\quad +\int_{m\delta}^t A(s,(Y^{i,N,m,\infty}_r)_{0\leq r\leq s})\big(B(s,(Y^{i,N,m,\infty}_r)_{0\leq r\leq s};\overline{\nu}^{N,N}_s)\,ds+\,dW^i_s\big), \end{align*} for \[ \overline{\nu}^{N,N}_t=\Ll((Y^{i,N,m,\infty}_r)_{0\leq r\leq m\delta})+\frac{1}{N}\sum_{j=1}^N\delta_{\{(Y^{j,N,m,\infty}_r)_{m\delta< r\leq t} \}}. \] By construction, the sequence $\{(Y^{i,N,0,\infty}_t)_{0\leq t\leq T};\,i=1,\dots,N\}$, ..., $\{(Y^{i,N,M+1,\infty}_t)_{0\leq t\leq T};\,i=1,\dots,N\}$ defines a family of partially interacting particle systems which, for any fixed $m$, follow the McKean SDE \eqref{eq:McKeanVlasov_intro} up to the time $m\delta$, and integrate a mean-field interaction from $t=m\delta$ to $t=T$.
Owing to the uniqueness properties granted by \hypi and \hypii, for $m=0$, $(Y^{1,N,0,\infty}_t,\dots,Y^{N,N,0,\infty}_t)_{0\leq t\leq T}$ corresponds to the interacting particle system \eqref{eq:Nparticles} and, for $m=M+1$, $(Y^{1,N,M+1,\infty}_t,\dots,Y^{N,N,M+1,\infty}_t)_{0\leq t\leq T}$ to the system of McKean-Vlasov SDEs \eqref{eq:McKeanVlasovParticle}. Denoting by $P^{k,m,N}$ the probability measure generated by $(Y^{1,N,m,\infty}_t)_{0\leq t\leq T},\dots,(Y^{k,N,m,\infty}_t)_{0\leq t\leq T}$ on $(\Cc([0,T];\er^{dk}),\Bb(\Cc([0,T];\er^{dk})))$, by the triangle inequality, \begin{equation} \label{proofstp:d} \Vert P^{k,\infty}-P^{k,N}\Vert_{TV,(0,T)}=\Vert P^{k,M+1,N}-P^{k,0,N}\Vert_{TV,(0,T)}\leq \sum_{m=0}^{M}\Vert P^{k,m+1,N}-P^{k,m,N}\Vert_{TV,(0,T)}. \end{equation} By definition, the cost of passing from one of these measures to the next is expressed in terms of an exponential martingale: for $0\leq m\leq M-1$, using Corollary \ref{coro:DensityTwoDiff}, \begin{align*} \frac{dP^{N,m,N}}{dP^{N,m+1,N}} &=\exp\left\{-\sum_{i=1}^N\int_{m\delta}^{(m+1)\delta} \triangle B^{i,N}_t\cdot \,dW^{i}_t-\frac{1}{2}\int_{m\delta}^{(m+1)\delta} \sum_{i=1}^N \left| \triangle B^{i,N}_t \right|^2\,dt\right\}=Z^N_{(m+1)\delta}/Z^N_{m\delta}, \end{align*} and \begin{equation}\label{proofstp:e} \begin{aligned} \frac{dP^{N,M,N}}{dP^{N,M+1,N}} &=\exp\left\{-\sum_{i=1}^N\int_{M\delta}^{T} \triangle B^{i,N}_t\cdot \,dW^{i}_t-\frac{1}{2}\int_{M\delta}^{T} \sum_{i=1}^N \left| \triangle B^{i,N}_t \right|^2\,dt\right\}=Z^{N}_T/Z^{N}_{M\delta}. \end{aligned} \end{equation} Replicating the preceding calculations from \eqref{eq:BoundTVa} to \eqref{eq:BoundTVb}, we immediately get, for any $0\leq m\leq M-1$ and $p^*=p/(p-1)$ the conjugate exponent of $p$, \begin{equation}\label{proofstp:f} \begin{aligned} &\Vert P^{k,m+1,N}-P^{k,m,N}\Vert_{TV,(0,T)}\\ &\leq\sqrt{k}\, p^*\left(\EE_{\PP}\left[\left(Z^N_{(m+1)\delta}/Z^N_{m\delta}\right)^p\right]\right)^{1/p}\left(\EE_{\PP}\left[ \left(\int_{m\delta}^{(m+1)\delta} \left|\triangle B^{i,N}_t\right|^2\,dt\right)^{p^*/2}\right]\right)^{1/p^*}. \end{aligned} \end{equation} Using Jensen's inequality and $(\mathbf{C})$, with $\lfloor p^*/2\rfloor$ the integer part of $p^*/2$, \begin{align*} \EE_{\PP}\left[\left(\int_{m\delta}^{(m+1)\delta} \left|\triangle B^{i,N}_t\right|^2\,dt\right)^{p^*/2}\right] &=\EE_{\PP}\left[\left(\left(\int_{m\delta}^{(m+1)\delta} \left|\triangle B^{i,N}_t\right|^2\,dt\right)^{\lfloor p^*/2\rfloor +1}\right)^{p^*/(2(\lfloor p^*/2\rfloor+1))}\right]\\ &\leq \left(\EE_{\PP}\left[\left(\int_{m\delta}^{(m+1)\delta} \left|\triangle B^{i,N}_t\right|^2\,dt\right)^{\lfloor p^*/2\rfloor +1}\right]\right)^{p^*/(2(\lfloor p^*/2\rfloor+1))}\\ &\leq \frac{((\lfloor p^*/2\rfloor+1)!)^{p^*/(2(\lfloor p^*/2\rfloor+1))}(\delta\beta)^{p^*/2}}{N^{p^*/2}}. \end{align*} Finally, coming back to \eqref{proofstp:f}, Proposition \ref{prop:ControlExpMart} gives: \begin{align*} &\Vert P^{k,m+1,N}-P^{k,m,N}\Vert_{TV,(0,T)}\leq\sqrt{k} \left(1+\exp\{p^2\}+\frac{2}{1-8 p^2\delta \beta}\right)\left(\frac{\overline{C}(p)\sqrt{\delta\beta}}{\sqrt{N}}\right),\,m=0,\dots,M-1,\\ &\overline{C}(p):=\frac{p}{p-1}\big((\lfloor p/(2(p-1))\rfloor+1)!\big)^{1/(2(\lfloor p/(2(p-1))\rfloor+1))}. \end{align*} In the same way, we get \begin{equation}\label{proofstp:h} \begin{aligned} \Vert P^{k,M+1,N}-P^{k,M,N}\Vert_{TV,(0,T)}&\leq \sqrt{k} \left(1+\exp\{p^2\}+\frac{2}{1-8 p^2(T-M\delta) \beta}\right)\left(\frac{\overline{C}(p)\sqrt{(T-\delta M)\beta}}{\sqrt{N}}\right).
\end{aligned} \end{equation} Coming back to $\Vert P^{k,N}-P^{k,\infty}\Vert_{TV,(0,T)}$, we get \begin{align*} &\Vert P^{k,N}-P^{k,\infty}\Vert_{TV,(0,T)}\\ &\leq \frac{\sqrt{k}}{\sqrt{N}}\overline{C}(p)\times\left(\left(1+\exp\{p^2\}+\frac{2}{1-8 p^2\delta \beta}\right)\sqrt{\delta\beta}M +\left(1+\exp\{p^2\}+\frac{2}{1-8 p^2(T-M\delta) \beta}\right)\sqrt{(T-M\delta)\beta}\right)\\ &\leq \frac{\sqrt{k}}{\sqrt{N}}\overline{C}(p)\left(1+\exp\{p^2\}+\frac{2}{1-8 p^2\delta \beta}\right)\times\left(\sqrt{\frac{\beta}{\delta}}T+\sqrt{\delta\beta}\right). \end{align*} Then, choosing for instance $\delta=1/((8+\epsilon)p^2\beta)$ for some $\epsilon>0$, we conclude that \begin{align*} &\Vert P^{k,N}-P^{k,\infty}\Vert_{TV,(0,T)}\leq C\frac{\sqrt{k}}{\sqrt{N}}(1+T\beta),\\ &C:=\inf_{p>1,\,\epsilon>0}\left\{ \overline{C}(p)\left(1+\exp\{p^2\}+\frac{2(8+\epsilon)}{\epsilon}\right) p\sqrt{8+\epsilon}\right\}. \end{align*} \section{Some applications and a sufficient condition for Theorem \ref{mainthm1:LinearCaseTV}}\label{sec:SufficientConditions} \subsection{Applications to McKean-Vlasov dynamics with bounded interaction kernel} As an immediate consequence of Theorem \ref{mainthm1:LinearCaseTV}, we have the following propagation of chaos result for McKean's toy model: \begin{equation*} dX_t=\int b(t,X_t,y)\mu(t,dy)\,dt+\sigma(t,X_t) dW_t,\,\mu(t,dy)=\Ll(X_t)(dy). \end{equation*} \begin{corollary}\label{coro:BoundedCase} Let $b:(0,\infty)\times\er^d\times\er^d\rightarrow \er^d$ be a Borel bounded function and let $\sigma=\sigma(t,x)$ be a uniformly bounded, continuous, positive definite matrix-valued function in the sense that there exist $0<\lambda<\Lambda<\infty$ such that \[ \lambda|\xi|^2\leq \xi\cdot \sigma\sigma^*(t,x)\xi\leq\Lambda|\xi|^2,\,\forall\,t\geq 0,x\in\er^d,\xi\in\er^d. \] Let $(X^{1,N}_t,X^{2,N}_t,\dots,X^{N,N}_t)_{t\geq 0}$ and $(X^{1,\infty}_t,X^{2,\infty}_t,\dots,X^{N,\infty}_t)_{t\geq 0}$ satisfy \begin{align} &dX^{i,N}_t=\frac{1}{N}\sum_{j=1}^Nb(t,X^{i,N}_t,X^{j,N}_t)\,dt+\sigma(t,X^{i,N}_t) d\widetilde{W}^i_t,\label{eq:ProtoMcParticleSys}\\ &dX^{i,\infty}_t=\int b(t,X^{i,\infty}_t,y)\mu(t,dy)\,dt+\sigma(t,X^{i,\infty}_t) dW^i_t,\,\mu(t,dy)=\Ll(X^{i,\infty}_t)(dy),\label{eq:ProtoMcKeanVlasov} \end{align} where $(X^1_0,\,(W^{1}_t)_{t\geq 0}),\,\dots,(X^N_0,\,(W^{N}_t)_{t\geq 0})$ and $(\widetilde{X}^{1,N}_0,\,(\widetilde{W}^{1}_t)_{t\geq 0}),\,\dots,(\widetilde{X}^{N,N}_0,\,(\widetilde{W}^{N}_t)_{t\geq 0})$ are independent copies of $(X_0,(W_t)_{t\geq 0})$, $X_0\sim\mu_0$. Then, for any arbitrary $0<T<\infty$ and $1\leq k\leq N$, we have \[ \Vert \Ll\big((X^{1,N}_t,X^{2,N}_t,\dots,X^{k,N}_t)_{0\leq t\leq T}\big)- \Ll\big((X^{1,\infty}_t,X^{2,\infty}_t,\dots,X^{k,\infty}_t)_{0\leq t\leq T}\big)\Vert_{TV,(0,T)}\leq C(1+2\Vert \sigma^{-1} b\Vert^2_{L^{\infty}}T)\sqrt{\frac{k}{N}}, \] where $C$ is given as in Theorem \ref{mainthm1:LinearCaseTV} and $\Vert \sigma^{-1} b\Vert_{L^{\infty}}:={\rm ess\,sup}_{0\leq t\leq T,\,x,y\in\er^d}\big(\sum_{l=1}^{d}|(\sigma^{-1}b)^{(l)}(t,x,y)|^2\big)^{1/2}$. \end{corollary} (Owing to the boundedness of the interaction kernel $b$, the well-posedness of the SDEs \eqref{eq:ProtoMcParticleSys} is immediately granted by a Girsanov transformation. For \eqref{eq:ProtoMcKeanVlasov}, the weak uniqueness property is immediately granted by [Jourdain \cite{Jourdain-97}, Theorem 3.2].) As a preliminary step for the proof, let us recall the following moment inequality for sums of i.i.d.
real random variables, which is a simple consequence of the moment estimates for sub-Gaussian random variables (see e.g. Boucheron, Lugosi and Massart \cite{BoLuMa-16}, Theorem 2.1) and of Hoeffding's inequality (see e.g. \cite{BoLuMa-16}, Theorem 2.8): \begin{proposition}\label{prop:SubGaussianMoment} Let $X_1,X_2,\cdots,X_n$ be a sequence of i.i.d. random variables such that a.s. $|X_1|\leq \overline{m}<\infty$. Then, for all integers $q\geq 1$, \[ \EE\left[\left(\sum_{i=1}^n\left(X_i-\EE[X_i]\right)\right)^{2q}\right]\leq q!(2n\overline{m}^2)^q. \] \end{proposition} \begin{proof}[Proof of Corollary \ref{coro:BoundedCase}] Owing to the uniform ellipticity of $\sigma$, \eqref{eq:ProtoMcParticleSys} and \eqref{eq:ProtoMcKeanVlasov} can be rewritten as \begin{align*} &dX^{i,N}_t=\sigma(t,X^{i,N}_t) \big(\frac{1}{N}\sum_{j=1}^N(\sigma^{-1}b)(t,X^{i,N}_t,X^{j,N}_t)\,dt+d\widetilde{W}^i_t\big),\\ &dX^{i,\infty}_t=\sigma(t,X^{i,\infty}_t) \big(\int (\sigma^{-1}b)(t,X^{i,\infty}_t,y)\mu(t,dy)\,dt+ dW^i_t\big),\,\mu(t,dy)=\Ll(X^{i,\infty}_t)(dy). \end{align*} Owing to the boundedness of $(t,x,y)\mapsto (\sigma^{-1}b)(t,x,y)$, applying Proposition \ref{prop:SubGaussianMoment} componentwise yields \begin{align*} \EE_\PP\left[\left|\sum_{j=2}^N\left( \sigma^{-1}(t,X^{1,\infty}_t)\left(b(t,X^{1,\infty}_t,X^{j,\infty}_t)-\int b(t,X^{1,\infty}_t,y)\,\mu(t,dy)\right)\right) \right|^{2p}\right]\leq p!\left(2(N-1)\Vert \sigma^{-1}b\Vert^2_{L^{\infty}}\right)^p. \end{align*} Set \[ \triangle (\sigma^{-1}b)^{i,j,N}_t:= \sigma^{-1}(t,X^{i,\infty}_t)\left(b(t,X^{i,\infty}_t,X^{j,\infty}_t)-\int b(t,X^{i,\infty}_t,y)\,\mu(t,dy)\right). \] Jensen's inequality then yields \begin{align*} &\EE_\PP\left[\left(\int_{T_0}^{T_0+\delta}\left| \frac{1}{N}\sum_{j=1}^N\triangle (\sigma^{-1}b)^{i,j,N}_t\right|^{2}\,dt\right)^p\right] \leq \frac{\delta^{p-1}}{N^{2p}}\int_{T_0}^{T_0+\delta} \EE_\PP\left[\left|\sum_{j=1}^N\triangle (\sigma^{-1}b)^{i,j,N}_t\right|^{2p}\right]\,dt\\ &\leq \frac{\delta^{p-1}}{N^{2p}}\int_{T_0}^{T_0+\delta} \EE_\PP\left[\left|\sum_{j=1,j\neq i}^N\triangle (\sigma^{-1}b)^{i,j,N}_t\right|^{2p}\right]\,dt +\frac{\delta^{p-1}}{N^{2p}}\int_{T_0}^{T_0+\delta} \EE_\PP\left[\left|\triangle (\sigma^{-1}b)^{i,i,N}_t\right|^{2p}\right]\,dt\\ &\leq \frac{\delta^{p}\,p!\,\big(2(N-1)\big)^p}{N^{2p}}\Vert \sigma^{-1}b\Vert^{2p}_{L^\infty} +\frac{\delta^{p}}{N^{2p}}\Vert \sigma^{-1}b\Vert^{2p}_{L^\infty}\leq p!\,\frac{\big(2\delta \Vert \sigma^{-1}b\Vert^{2}_{L^\infty}\big)^p}{N^{p}}. \end{align*} The condition $(\mathbf{C})$ is then satisfied for $\beta=2\Vert \sigma^{-1}b\Vert^2_{L^{\infty}}$ and the estimate on the total variation distance then follows from Theorem \ref{mainthm1:LinearCaseTV}.
\end{proof} The proof of Corollary \ref{coro:BoundedCase} extends easily to the case of Langevin dynamics, yielding the following propagation of chaos result: \begin{corollary}\label{coro:KineticBoundedCase} Given $b:(0,\infty)\times(\er^d\times\er^d)\times(\er^d\times\er^d)\rightarrow \er^d$ a Borel bounded function and $\sigma:(0,\infty)\times\er^d\times\er^d\rightarrow \er^{d\times d}$ a uniformly bounded positive definite matrix-valued function, let $((Y^{1,N}_t,V^{1,N}_t),\dots,(Y^{N,N}_t,V^{N,N}_t))_{t\geq 0}$ and $((Y^{1,\infty}_t,V^{1,\infty}_t),\dots,(Y^{N,\infty}_t,V^{N,\infty}_t))_{t\geq 0}$ satisfy \begin{equation*} \left\{ \begin{aligned} &dY^{i,N}_t=V^{i,N}_t\,dt,\,\,(Y^{i,N}_0,V^{i,N}_0)=(\widetilde{Y}^i_0,\widetilde{V}^i_0),\label{eq:ProtoLangevinParticle}\\ &dV^{i,N}_t=\frac{1}{N}\sum_{j=1}^N b(t,(Y^{i,N}_t,V^{i,N}_t),(Y^{j,N}_t,V^{j,N}_t))\,dt+\sigma(t,Y^{i,N}_t,V^{i,N}_t)d\widetilde{W}^i_t, \end{aligned} \right. \end{equation*} \begin{equation*} \left\{ \begin{aligned} &dY^{i,\infty}_t=V^{i,\infty}_t\,dt,\,\,(Y^{i,\infty}_0,V^{i,\infty}_0)=(Y^i_0,V^i_0),\label{eq:ProtoLangevinMcKean}\\ &dV^{i,\infty}_t=\Big(\int b(t,(Y^{i,\infty}_t,V^{i,\infty}_t),(y,v))\,\mu(t,dy,dv)\Big)\,dt+\sigma(t,Y^{i,\infty}_t,V^{i,\infty}_t)dW^i_t,\,\mu(t)=\Ll(Y^{i,\infty}_t,V^{i,\infty}_t), \end{aligned} \right. \end{equation*} where $((Y^1_0,V^1_0),\,(W^{1}_t)_{t\geq 0}),\dots,((Y^N_0,V^N_0),\,(W^{N}_t)_{t\geq 0})$ and $((\tilde{Y}^1_0,\tilde{V}^1_0),\,(\tilde{W}^{1}_t)_{t\geq 0}),\dots,((\tilde{Y}^N_0,\tilde{V}^N_0),\,(\tilde{W}^{N}_t)_{t\geq 0})$ are two collections of independent copies of $(Y_0,V_0)\sim\mu_0$ and $(W_t)_{t\geq 0}$. Then, for any arbitrary $0<T<\infty$ and $1\leq k\leq N$, we have \[ \Vert \Ll\big(((Y^{1,N}_t,V^{1,N}_t),\dots,(Y^{k,N}_t,V^{k,N}_t))_{0\leq t\leq T}\big)- \Ll\big(((Y^{1,\infty}_t,V^{1,\infty}_t),\dots,(Y^{k,\infty}_t,V^{k,\infty}_t))_{0\leq t\leq T}\big)\Vert_{TV,(0,T)}\leq C(1+\beta T)\sqrt{\frac{k}{N}}, \] where $C$ is given as in Theorem \ref{mainthm1:LinearCaseTV} and $\beta=2\,{\rm ess\,sup}_{0\leq t\leq T,\,(y,v),(y',v')\in\er^d\times\er^d}\sum_{l=1}^{d}|(\sigma^{-1}b)^{(l)}(t,(y,v),(y',v'))|^2$. \end{corollary} (We refer to \cite{JabMen-19} for a detailed discussion on the well-posedness, in the weak and strong sense, of \eqref{eq:ProtoLangevinMcKean}.) \subsection{A sufficient condition for Theorem \ref{mainthm1:LinearCaseTV}} In this section, we present a sufficient condition for the application of Theorem \ref{mainthm1:LinearCaseTV} which covers Corollaries \ref{coro:BoundedCase} and \ref{coro:KineticBoundedCase} as particular cases. As a warm-up, let us consider the following lemma: \begin{lemma}\label{lem:RatePathDependent} Assume that \hypi and \hypii hold. Assume also that, for all $0\leq t<\infty$ and $x\in\Cc([0,\infty);\er^d)$, the map $\nu\in\Pp(\Cc([0,\infty);\er^d))\mapsto B(t,x;\nu)$ is Lipschitz continuous w.r.t. the total variation distance; that is, there exists $0<K<\infty$ such that, for all $P,Q\in\Pp(\Cc([0,\infty);\er^d))$, $0\leq t<\infty$, $x\in\Cc([0,\infty);\er^d)$, \begin{equation}\label{cond:TVLip} \left|B(t,x;P)-B(t,x;Q)\right|\leq K\Vert P-Q\Vert_{TV,(0,t)}. \end{equation} Assume finally that the following (conditional) centering property holds: \[ \EE_{\PP}\left[B\big(t,X^{i,\infty};\frac{1}{N-1}\sum_{j=1,j\neq i}^{N}\delta_{\{X^{j,\infty}\}}\big)\,\Big{|}\,X^{i,\infty}\right]=B(t,X^{i,\infty};\Ll(X^{i,\infty})). \] Then the condition $(\mathbf{C})$ is satisfied for $\beta= 4mK^2$. \end{lemma} Prior to the proof, let us recall the notion of functions with the bounded difference property and a related concentration inequality: \begin{definition} Let $E$ be some measurable space.
A function $f:E^n\rightarrow \er$ is said to have the bounded difference property if there exist $c_1,c_2,\cdots,c_n>0$ such that for all $(x_1,x_2,\cdots,x_n)\in E^n$, $(y_1,y_2,\cdots,y_n)\in E^n$ and all $1\leq i\leq n$, \[ \left|f(x_1,\cdots,x_{i-1},x_i,x_{i+1},\cdots,x_n)-f(x_1,\cdots,x_{i-1},y_i,x_{i+1},\cdots,x_n)\right|\leq c_i. \] \end{definition} \begin{theorem}[Bounded Difference Inequality, \cite{BoLuMa-16}, Theorem $6.2$]\label{thm:BoundedDifferenceIneq} Let $E$ be some measurable space, $(Y_1,\cdots,Y_n)$ be a family of $E$-valued i.i.d. random variables and let $f:E^n\rightarrow \er$ be some function satisfying the bounded difference property. Then \[ \mathbf{Y}=f(Y_1,\cdots,Y_n) \] satisfies: for all $t\geq 0$, \[ \max\left(\PP\left(\mathbf{Y}-\EE[\mathbf{Y}]\geq t\right),\PP\left(\mathbf{Y}-\EE[\mathbf{Y}]\leq -t\right)\right)\leq \exp\{-\frac{t^2}{2\nu}\}, \] for $\nu=\sum_{i=1}^n (c_i)^2/4$. \end{theorem} In particular, the above ensures the following moment estimates: for all integers $k\geq 1$, \begin{equation}\label{proofstp:j} \EE\left[\left(\mathbf{Y}-\EE[\mathbf{Y}]\right)^{2k}\right]\leq k!(4\nu)^k. \end{equation} \begin{proof}[Proof of Lemma \ref{lem:RatePathDependent}] Fix $t\geq 0$ and $\nu$ an arbitrary probability measure on $\Cc([0,\infty);\er^d)$, and define the family of mappings \[ f^{(l)}_i:\mathbf{x}^N=(x_1,x_2,\cdots,x_N)\in \big(\Cc([0,\infty);\er^d)\big)^N\mapsto f^{(l)}_i(\mathbf{x}^N)= B^{(l)}(t,x_i,\mu^{-i,N}(\mathbf{x}^N))-B^{(l)}(t,x_i,\nu)\in\er,\,1\leq l\leq m, \] for $1\leq i\leq N$, where $\mu^{-i,N}(\mathbf{x}^N)=\frac{1}{N}\sum_{j=1,j\neq i}^N\delta_{\{x_j\}}$ is the empirical measure related to $\mathbf{x}^N$ deprived of $x_i$. For any $i$, $l$, observe that the Lipschitz condition \eqref{cond:TVLip} implies that, for $k\neq i$: \begin{align*} &\left|f^{(l)}_i(x_1,\cdots,x_{k-1},x,x_{k+1},\cdots,x_N)-f^{(l)}_i(x_1,\cdots,x_{k-1},y,x_{k+1},\cdots,x_N)\right|\\ &=\left|B^{(l)}\big(t,x_i,\frac{1}{N}\sum_{j=1,j\neq i,k}^N\delta_{\{x_j\}}+ \frac{1}{N}\delta_{\{x\}}\big)- B^{(l)}\big(t,x_i,\frac{1}{N}\sum_{j=1,j\neq i,k}^N\delta_{\{x_j\}}+ \frac{1}{N}\delta_{\{y\}}\big) \right|\\ &\leq K\Vert \frac{1}{N}\delta_{\{x\}}-\frac{1}{N}\delta_{\{y\}}\Vert_{TV,(0,t)}\leq\frac{K}{N}, \end{align*} so that each of the $f^{(l)}_i$'s satisfies the bounded difference property in the coordinates $k\neq i$ (the $i$-th coordinate being frozen), with constants $c_k:=K/N$. Applying \eqref{proofstp:j} conditionally on $X^{i,\infty}$, with $\nu=\sum_{k\neq i}(c_k)^2/4\leq K^2/(4N)$ and with the centering property, it follows that \begin{align*} \EE_\PP\left[\left|B^{(l)}\Big(t,X^{i,\infty},\frac{1}{N}\sum_{j=1,j\neq i}^N\delta_{\{X^{j,\infty} \}}\Big)-B^{(l)}\Big(t,X^{i,\infty},\Ll(X^{i,\infty})\Big)\right|^{2p}\right]\leq p! \frac{K^{2p}}{N^p}, \end{align*} from which we deduce that \begin{align*} &\EE_\PP\left[\left|B^{(l)}\Big(t,X^{i,\infty},\frac{1}{N}\sum_{j=1}^N\delta_{\{X^{j,\infty} \}}\Big)-B^{(l)}\Big(t,X^{i,\infty},\Ll(X^{i,\infty})\Big)\right|^{2p}\right]\\ &\leq 2^{2p-1}\EE_\PP\left[\left|B^{(l)}\Big(t,X^{i,\infty},\frac{1}{N}\sum_{j=1,j\neq i}^N\delta_{\{X^{j,\infty}\}}\Big) -B^{(l)}\Big(t,X^{i,\infty},\Ll(X^{i,\infty})\Big)\right|^{2p}\right]\\ &\quad +2^{2p-1} \EE_\PP\left[\left|B^{(l)}\Big(t,X^{i,\infty},\frac{1}{N}\sum_{j=1}^N\delta_{\{X^{j,\infty}\}}\Big)-B^{(l)}\Big(t,X^{i,\infty},\frac{1}{N}\sum_{j=1,j\neq i}^N\delta_{\{X^{j,\infty}\}}\Big)\right|^{2p}\right]\\ &\leq p! \frac{2^{2p-1}K^{2p}}{N^p}+\frac{2^{2p-1}K^{2p}}{N^{2p}}\leq p! \frac{4^{p}K^{2p}}{N^p}.
\end{align*} Therefore, \begin{equation*} \EE_\PP\left[\left(\int_{T_0}^{T_0+\delta}\left|B(t,X^{i,\infty},\overline{\mu}^{N,\infty}_t)-B(t,X^{i,\infty},\Ll(X^{i,\infty}))\right|^{2}\,dt\right)^p\right]\leq p!\,\frac{(4\delta m K^2)^p}{N^p}. \end{equation*} \end{proof} The core argument of Lemma \ref{lem:RatePathDependent} relies mostly on the centering property and on the regularity of the drift component $(A B)$ in its measure argument, formulated in terms of an analog of the linear functional derivative (see e.g. [\cite{Kolokolstov-10}, Appendix $F$], [\cite{CarDel-18a}, Section 5.4]), here set on the sample space $\Cc([0,T];\er^d)$: \begin{definition}\label{def:FlatDerivative} The $\er^m$-valued functional $B=\left(B^{(1)},B^{(2)},\dots,B^{(m)}\right)$ is said to admit a bounded second order flat derivative if, for all $1\leq l\leq m$, there exist two measurable bounded functionals: \[ \frac{d B^{(l)}}{dm}:(t,x,m;\omega)\in [0,T]\times\Cc([0,T];\er^d)\times \Pp(\Cc([0,T];\er^d))\times \Cc([0,T];\er^d)\rightarrow \er, \] \[ \frac{d^2 B^{(l)}}{dm^2}:(t,x,m;\omega_1,\omega_2)\in [0,T]\times\Cc([0,T];\er^d)\times \Pp(\Cc([0,T];\er^d))\times \Cc([0,T];\er^d)\times \Cc([0,T];\er^d)\rightarrow \er, \] such that, for all $0<T<\infty$, $0\leq t\leq T$, $x\in\Cc([0,T];\er^d)$, $P,Q\in\Pp(\Cc([0,T];\er^d))$, \[ B^{(l)}(t,x,Q)-B^{(l)}(t,x,P) =\int_{0}^{1}\int_{\omega\in\Cc([0,T];\er^d)}\frac{d B^{(l)}}{dm}(t,x,(1-\alpha)P+\alpha Q;\omega)\left(Q(d\omega)-P(d\omega)\right)\,d\alpha, \] and, for all $0<T<\infty$, $0\leq t\leq T$, $x\in\Cc([0,T];\er^d)$, $P,Q\in\Pp(\Cc([0,T];\er^d))$, $\omega\in\Cc([0,T];\er^d)$, \begin{align*} &\frac{d B^{(l)}}{dm}(t,x,Q;\omega)-\frac{d B^{(l)}}{dm}(t,x,P;\omega)\\ &=\int_{0}^{1} \int_{\tilde{\omega}\in\Cc([0,T];\er^d)}\frac{d^2 B^{(l)}}{d m^2}(t,x,(1-\alpha)P+\alpha Q;\omega,\tilde{\omega})\left(Q(d\tilde{\omega})-P(d\tilde{\omega})\right)\,d\alpha, \end{align*} where $\{(1-\alpha)P+\alpha Q;\,0\leq \alpha\leq 1\}$ is the set of probability measures given by the convex interpolation between $P$ and $Q$. \end{definition} \begin{proposition}\label{prop:DifferentiabilityCondition} Assume that \hypi and \hypii hold and that, for all $0\leq t\leq T$ and $x\in\Cc([0,T];\er^d)$, $\mu\in\Pp(\Cc([0,T];\er^d))\mapsto B(t,x,\mu)$ admits a uniformly bounded second order flat derivative in the sense of Definition \ref{def:FlatDerivative}. Then the condition $\mathbf{(C)}$ in Theorem \ref{mainthm1:LinearCaseTV} holds. \end{proposition} \begin{proof} For any $1\leq l\leq m$, using the first order derivative $\frac{d B^{(l)}}{d m}$, we have \begin{align*} \triangle B^{i,N,(l)}_t&:=B^{(l)}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^N_t)-B^{(l)}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\Ll((X^{i,\infty}_r)_{0\leq r\leq t}))\\ &=\frac{1}{N}\sum_{j=1}^N\int_{0}^{1} \frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{\alpha,N}_t ;(X^{j,\infty}_r)_{0\leq r\leq t})\,d\alpha\\ &\quad-\int_{0}^{1}\int_{\omega\in\Cc([0,T];\er^d)}\frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{\alpha,N}_t ;\omega)\,\Ll((X^{i,\infty}_r)_{0\leq r\leq t})(d\omega)\,d\alpha, \end{align*} for $\overline{\nu}^{N}_t=\frac{1}{N}\sum_{j=1}^N\delta_{\{(X^{j,\infty}_r)_{0\leq r\leq t}\}}$ and $\overline{\nu}^{\alpha,N}_t=(1-\alpha)\overline{\nu}^{N}_t+\alpha\Ll((X^{i,\infty}_r)_{0\leq r\leq t})$. In the first sum, for fixed $j$, define the (partial) empirical measure $\overline{\nu}^{-j,N}_t=\frac{1}{N-1}\sum_{l=1,l\neq j}^{N}\delta_{\{(X^{l,\infty}_r)_{0\leq r\leq t}\}}$.
Adding and subtracting the quantity \begin{align*} &\frac{1}{N}\sum_{j=1}^N\int_{0}^{1} \frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{-j,\alpha,N}_t ;(X^{j,\infty}_r)_{0\leq r\leq t})\,d\alpha\\ &\quad-\frac{1}{N}\sum_{j=1}^N\int_{0}^{1}\int_{\omega\in\Cc([0,T];\er^d)}\frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{-j,\alpha,N}_t ;\omega)\,\Ll((X^{i,\infty}_r)_{0\leq r\leq t})(d\omega)\,d\alpha, \end{align*} for \[ \overline{\nu}^{-j,\alpha,N}_t=(1-\alpha)\overline{\nu}^{-j,N}_t+\alpha\Ll((X^{i,\infty}_r)_{0\leq r\leq t}), \] we obtain the decomposition \[ \triangle B^{i,N,(l)}_t=I^{i,N,(l)}_t+J^{i,N,(l)}_t+K^{i,N,(l)}_t, \] where \begin{align*} I^{i,N,(l)}_t&:=\frac{1}{N}\sum_{j=1}^N\int_{0}^{1}\left( \frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{\alpha,N}_t ;(X^{j,\infty}_r)_{0\leq r\leq t})-\frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{-j,\alpha,N}_t ;(X^{j,\infty}_r)_{0\leq r\leq t})\right)\,d\alpha, \end{align*} \begin{align*} J^{i,N,(l)}_t&:=\frac{1}{N}\sum_{j=1}^N\int_{0}^{1}\left( \frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{-j,\alpha,N}_t ;(X^{j,\infty}_r)_{0\leq r\leq t})\right.\\ &\quad\left.-\int_{\omega\in\Cc([0,T];\er^d)}\frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{-j,\alpha,N}_t ;\omega)\,\Ll((X^{i,\infty}_r)_{0\leq r\leq t})(d\omega)\right)\,d\alpha, \end{align*} \begin{align*} K^{i,N,(l)}_t&:=\frac{1}{N}\sum_{j=1}^N\int_{0}^{1}\int_{\omega\in\Cc([0,T];\er^d)}\left(\frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{-j,\alpha,N}_t ;\omega)-\frac{dB^{(l)}}{dm}(t,(X^{i,\infty}_r)_{0\leq r\leq t},\overline{\nu}^{\alpha,N}_t ;\omega)\right)\Ll((X^{i,\infty}_r)_{0\leq r\leq t})(d\omega)\,d\alpha. \end{align*} Using the second order derivative $d^2B/dm^2$ and since \[ \overline{\nu}^{\alpha,N}_t(d\omega)-\overline{\nu}^{-j,\alpha,N}_t(d\omega)=(1-\alpha)\left(\frac{1}{N}\delta_{\{(X^{j,\infty}_r)_{0\leq r\leq t}\}}(d\omega)-\frac{1}{N(N-1)} \sum_{l=1,l\neq j}^N\delta_{\{(X^{l,\infty}_r)_{0\leq r\leq t}\}}(d\omega)\right), \] we immediately get for $I^{i,N,(l)}_t$: \begin{align*} I^{i,N,(l)}_t&=\frac{1}{N}\sum_{j=1}^N\int_{0}^{1}\int_{0}^1\int_{\tilde{\omega}\in\Cc([0,T];\er^d)} \frac{d^2B^{(l)}}{dm^2}(t,(X^{i,\infty}_r)_{0\leq r\leq t},(1-r)\overline{\nu}^{-j,\alpha,N}_t+r\overline{\nu}^{\alpha,N}_t ;(X^{j,\infty}_r)_{0\leq r\leq t},\tilde{\omega}) \left(\overline{\nu}^{\alpha,N}_t(d\tilde{\omega})-\overline{\nu}^{-j,\alpha,N}_t(d\tilde{\omega})\right)\,d\alpha\,dr\\ &=\frac{1}{N^2}\sum_{j=1}^N\int_{0}^{1}\int_{0}^1(1-\alpha) \frac{d^2B^{(l)}}{dm^2}(t,(X^{i,\infty}_r)_{0\leq r\leq t},(1-r)\overline{\nu}^{-j,\alpha,N}_t+r\overline{\nu}^{\alpha,N}_t ;(X^{j,\infty}_r)_{0\leq r\leq t},(X^{j,\infty}_r)_{0\leq r\leq t})\,d\alpha\,dr\\ &\quad - \frac{1}{N^2(N-1)}\sum_{j=1}^N\sum_{l=1,l\neq j}^N\int_{0}^{1}\int_{0}^1(1-\alpha) \frac{d^2B^{(l)}}{dm^2}(t,(X^{i,\infty}_r)_{0\leq r\leq t},(1-r)\overline{\nu}^{-j,\alpha,N}_t+r\overline{\nu}^{\alpha,N}_t ;(X^{j,\infty}_r)_{0\leq r\leq t},(X^{l,\infty}_r)_{0\leq r\leq t})\,d\alpha\,dr.
\end{align*} In the same way, \begin{align*} K^{i,N,(l)}_t &=-\frac{1}{N}\sum_{j=1}^N\int_{0}^{1}\int_{0}^1\int_{\omega\in\Cc([0,T];\er^d)}\int_{\tilde{\omega}\in\Cc([0,T];\er^d)} \frac{d^2B^{(l)}}{dm^2}(t,(X^{i,\infty}_r)_{0\leq r\leq t},(1-r)\overline{\nu}^{-j,\alpha,N}_t+r\overline{\nu}^{\alpha,N}_t ;\omega,\tilde{\omega}) \left(\overline{\nu}^{\alpha,N}_t(d\tilde{\omega})-\overline{\nu}^{-j,\alpha,N}_t(d\tilde{\omega})\right)\Ll((X^{i,\infty}_r)_{0\leq r\leq t})(d\omega)\,d\alpha\,dr\\ &=-\frac{1}{N^2}\sum_{j=1}^N\int_{0}^{1}\int_{0}^1\int_{\omega\in\Cc([0,T];\er^d)}(1-\alpha) \frac{d^2B^{(l)}}{dm^2}(t,(X^{i,\infty}_r)_{0\leq r\leq t},(1-r)\overline{\nu}^{-j,\alpha,N}_t+r\overline{\nu}^{\alpha,N}_t ;\omega,(X^{j,\infty}_r)_{0\leq r\leq t})\,\Ll((X^{i,\infty}_r)_{0\leq r\leq t})(d\omega)\,d\alpha\,dr\\ &\quad + \frac{1}{N^2(N-1)}\sum_{j=1}^N\sum_{l=1,l\neq j}^N\int_{0}^{1}\int_{0}^1\int_{\omega\in\Cc([0,T];\er^d)}(1-\alpha) \frac{d^2B^{(l)}}{dm^2}(t,(X^{i,\infty}_r)_{0\leq r\leq t},(1-r)\overline{\nu}^{-j,\alpha,N}_t+r\overline{\nu}^{\alpha,N}_t ;\omega,(X^{l,\infty}_r)_{0\leq r\leq t})\,\Ll((X^{i,\infty}_r)_{0\leq r\leq t})(d\omega)\,d\alpha\,dr. \end{align*} These expressions directly ensure that \begin{align*} \EE\left[\left(\int_{T_0}^{T_0+\delta}\left|I^{i,N,(l)}_t\right|^2\,dt\right)^p\right]\leq \frac{2^p\delta^p}{N^p}\Vert \frac{d^2 B}{dm ^2}\Vert^{2p}_{L^\infty}, \end{align*} and \begin{align*} \EE\left[\left(\int_{T_0}^{T_0+\delta}\left|K^{i,N,(l)}_t\right|^2\,dt\right)^p\right]\leq \frac{2^p\delta^p}{N^p}\Vert \frac{d^2 B}{dm ^2}\Vert^{2p}_{L^\infty}. \end{align*} The remaining component $J^{i,N,(l)}$ can be estimated in the same way as in the proof of Lemma \ref{lem:RatePathDependent}. \end{proof} \paragraph{Acknowledgement:} This article was prepared within the framework of the Russian Academic Excellence Project '5-100'. The author is thankful to Lukasz Szpruch and Paul-Eric Chaudru de Raynal for having pointed out the use of the linear functional derivative to derive the sufficient condition in Proposition \ref{prop:DifferentiabilityCondition}, and to Alexander Veretennikov for very fruitful discussions over the past year. \section{Appendix} \textbf{Carlen and Kr\'ee's optimal martingale moment control}: \begin{theorem}[Carlen and Kr\'ee \cite{CarKre-91}, Theorem $A$]\label{thm:CarlenKree} For $p\geq 1$, define \[ b_p=\sup_{(M_t)_{t\geq 0}}\left\{\frac{\EE\left[(M_t)^p\right]^{1/p}}{\EE\left[(\sqrt{\langle M\rangle_t})^p\right]^{1/p}}\right\}, \] where the supremum is taken over the set of real valued bounded and continuous martingales $(M_t)_{t\geq 0}$. Then \[ \sup_{p\geq 1}\frac{b_p}{\sqrt{p}}=2. \] \end{theorem} The boundedness condition, assumed in Carlen and Kr\'ee \cite{CarKre-91}, can be easily dropped, thanks to a truncation argument, to state the generic inequality: \begin{equation}\label{proofst:h} \EE\left[(M_t)^p\right]^{1/p}\leq 2\sqrt{p}\,\EE\left[(\sqrt{\langle M\rangle_t})^p\right]^{1/p}\,\text{ whenever }\,\EE\left[(\sqrt{\langle M\rangle_t})^p\right]<\infty. \end{equation} Indeed, given $(M_t)_{t\geq 0}$ a continuous $L^p$-finite martingale and introducing the stopping time $\tau_\lambda=\inf\{t>0\,:\,|M_t|\geq \lambda\}$, the truncated process $(M_{t\wedge\tau_\lambda};\,t\geq 0)$ is bounded, so that \[ \EE\left[(M_{t\wedge \tau_\lambda})^p\right]^{1/p}\leq 2\sqrt{p}\,\EE\left[(\sqrt{\langle M\rangle_{t\wedge \tau_\lambda}})^p\right]^{1/p}.
\] Taking the limit $\lambda\rightarrow \infty$, we conclude \eqref{proofst:h}. \noindent \textbf{Proof of \eqref{proofstp:i}:} We rely on the following corollary, which can be simply deduced from [Theorem $7.7$, Liptser and Shiryaev \cite{LipShi-01}]: \begin{corollary}\label{coro:DensityTwoDiff} Let $(\zeta^1_t)_{0\leq t\leq T}$ and $(\zeta^2_t)_{0\leq t\leq T}$ be two It\^o diffusion processes defined on a filtered probability space $(\Omega,\Ff,(\Ff_t)_{t\geq 0},\PP)$, satisfying \[ d\zeta^i_t=\alpha_i(t,\zeta^i)\,dt+dW^i_t,\,\zeta^i_0=0,\,0\leq t\leq T,\,i=1,2. \] Then, assuming that \[ \PP\left(\int_0^T\left|\alpha_1(t,\zeta^1)\right|^2\,dt +\int_0^T\left|\alpha_2(t,\zeta^2)\right|^2\,dt<\infty\right)=1 \] and \[ \PP\left(\int_0^T\left|\alpha_1(t,W^1)\right|^2\,dt+\int_0^T\left|\alpha_2(t,W^2)\right|^2\,dt<\infty\right)=1, \] the probability measures $P_{\zeta^1}$ and $P_{\zeta^2}$ are equivalent and \[ \frac{dP_{\zeta^1}}{dP_{\zeta^2}}(T,\zeta^2)=\exp\left\{-\int_0^T \left(\alpha_1(t,\zeta^2)-\alpha_2(t,\zeta^2)\right)\cdot \,d\zeta^2_t-\frac{1}{2}\int_0^T \left|\alpha_1(t,\zeta^2)-\alpha_2(t,\zeta^2)\right|^2\,dt \right\}, \] \[ \frac{dP_{\zeta^2}}{dP_{\zeta^1}}(T,\zeta^1)=\exp\left\{-\int_0^T \left(\alpha_2(t,\zeta^1)-\alpha_1(t,\zeta^1)\right)\cdot \,d\zeta^1_t-\frac{1}{2}\int_0^T \left|\alpha_2(t,\zeta^1)-\alpha_1(t,\zeta^1)\right|^2\,dt\right\}. \] \end{corollary} Applying the preceding corollary to \eqref{eq:McKeanVlasovParticle} and \eqref{eq:Nparticles}, we deduce \eqref{proofstp:i} by applying two successive Girsanov transformations, first mapping the $\er^{dN}$-valued process \begin{equation*} (X^{1,\infty}_t,\dots,X^{N,\infty}_t)_{0\leq t\leq T} \end{equation*} into a system of $N$ (independent) copies of the solution to \eqref{eq:IntermediateSDE}. The interaction between the components is then introduced by a second Girsanov transformation, yielding \eqref{eq:Nparticles}.
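As a side remark, the moment bound required by condition $(\mathbf{C})$ in the bounded-kernel setting of Corollary \ref{coro:BoundedCase} can be probed numerically. The sketch below is our own illustration and not part of the paper: it assumes the kernel $b(x,y)=\tanh(y)$-type statistic, the law $\mu=\mathcal{N}(0,1)$, $\sigma\equiv 1$, and freezes the time variable so that the $\delta^p$ factor cancels on both sides; it then compares a Monte Carlo estimate of $\EE[|\triangle B^{i,N}_t|^{2p}]$ with the bound $p!\,(\beta/N)^p$ for $\beta=2\Vert b\Vert^2_{L^\infty}$.
\begin{verbatim}
import math
import numpy as np

rng = np.random.default_rng(1)
N, p, n_mc = 200, 3, 20000

# i.i.d. samples from mu = N(0,1); tanh is bounded by 1 and centred
# under mu by symmetry, so Delta B = (1/N) sum_j tanh(X_j) has mean zero
X = rng.standard_normal((n_mc, N))
delta_B = np.tanh(X).mean(axis=1)

moment = np.mean(delta_B ** (2 * p))            # E[|Delta B|^{2p}]
bound = math.factorial(p) * (2.0 / N) ** p      # p! (beta/N)^p with beta = 2
print(f"Monte Carlo moment: {moment:.3e}  bound: {bound:.3e}")
\end{verbatim}
The estimated moment should lie well below the bound, reflecting the $1/N^p$ scaling that drives the $1/\sqrt{N}$ rate of Theorem \ref{mainthm1:LinearCaseTV}.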
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Experimental reports, starting in the sixties \cite{abeles}, of substantial enhancement of superconductivity in thin granular films of different materials have been a continuous stimulus to study superconductivity in low dimensions. However, the dramatic increase of the critical temperature observed in materials like Al or Sn \cite{abeles} has resisted a conclusive theoretical explanation. The cause of the enhancement was related to surface phonons, fluctuations of the spectral density around the Fermi energy, or shape resonances \cite{blat,bianconi}. The first two proposals could not be reconciled with the fact that the enhancement was observed in some materials but not in others. The latter mechanism, put forward by Blatt and Thompson \cite{blat}, is only effective for clean thin films only a few monolayers thick. However, the samples were granular and disordered. Indeed, more refined experimental studies \cite{goldman,qsize}, in which thin films were smoother and granularity was attenuated, showed no substantial enhancement of superconductivity. In the context of single nanograins, the seminal experiments of Tinkham and coworkers \cite{tinkham} on single nanoscale Al grains provided evidence that some sort of superconductivity is still present in grains of only a few nanometers in size, where the mean level spacing is comparable to the energy gap \cite{ander2}. For larger clean grains, but still within the nanoscale region, numerical solutions of the Bardeen-Cooper-Schrieffer (BCS) gap equation \cite{bcs} and of the Bogoliubov-de Gennes equations \cite{parmenter,fomin,shell,shanenko,peeters,heiselberg} showed that the critical temperature and other superconducting properties are highly non-monotonic as a function of the system size, with peaks well above the bulk limit. Explicit results were obtained for a variety of shapes and confining potentials: cubes \cite{parmenter,fomin}, spheres \cite{shell,shanenko}, cylinders \cite{peeters} and harmonic confining potentials \cite{heiselberg}. The magnitude of the peaks, namely, the enhancement of superconductivity, was larger in spherical and cubic grains than in chaotic grains with no symmetry \cite{usprl,leboeuf}. Moreover, for a fixed size, deviations from the bulk limit are more pronounced as the superconducting coherence length of the material increases. Analytical results \cite{usprl,leboeuf} based on the periodic orbit theory \cite{baduri} indicate that these non-monotonic deviations from the bulk limit are associated with shell effects, namely, level degeneracies in the proximity of the Fermi energy due to the geometrical symmetries of the grain. A larger spectral density induces an effectively stronger binding of Cooper pairs that boosts superconductivity. Recent experiments on single isolated hemispherical Sn grains \cite{nmat} have reported, in full agreement with the theoretical prediction \cite{usprl,nmat}, large oscillations in the size dependence of the energy gap in the region $\sim 10$nm. However, some puzzles still remain. For instance, a substantial ($\approx 20\%$) monotonic enhancement of the superconducting gap, persisting up to the largest grains studied ($\approx 30$nm), was also observed in \cite{nmat}; it cannot be explained by shell effects or surface phonons.
\begin{figure} \includegraphics[width=1\columnwidth,clip,angle=0]{sphME2.eps}\caption{\label{fig1}(a) The mean superconducting gap as a function of the hemispherical grain size for $\lambda=0.166$ Al (dotted line), $\lambda = 0.243$ Sn (dashed line), $\lambda = 0.382$ Pb (solid line). (b) Comparison between the experimental results (solid line) of Ref. \cite{nmat} for Sn hemispherical nanograins and the theoretical prediction (dashed line) of Eq.~(\ref{gapsup}), which includes the effect of inhomogeneous pairing. We have averaged out fluctuations in order to single out the contribution not related to shell effects in the spectral density. The horizontal line corresponds to the bulk behaviour.} \end{figure} Here we provide evidence that this monotonic enhancement is caused by spatial fluctuations of the probability density of Cooper pairs in a confined geometry. We carry out a numerical calculation, within a mean-field framework, of the order parameter in a hemispherical grain for sizes up to $30$nm. Our results (see Fig. \ref{fig1}), which are parameter free, are in fair agreement with recent experimental results \cite{nmat}. This additional enhancement stems from the fact that, in finite-size grains, the interactions that bind the electrons into a Cooper pair depend on the quantum numbers of the eigenstates of the one-body problem. The dimensionless electron-phonon coupling constant $\lambda$ becomes inhomogeneous as it depends on these quantum numbers, $\lambda \to \lambda V I_{n,m}$, where $V$ is the grain volume, $I_{n,m} = \int d^dr\, \Psi^2_n(r)\Psi^2_m(r)$ and $\Psi_n(r)$ is the eigenstate of the one-body problem, with $n$ the set of quantum numbers that labels the state. For the case of grains with no symmetry, the leading finite-size correction due to this effect is positive \cite{vinas}, $I = 1+A/(k_FL)$, with $A \geq 0$ a constant that depends on the boundary conditions. For a chaotic grain, the semiclassical analysis of Ref. \cite{usprl} showed that this is the leading correction for sizes $L \geqslant 10$nm. \section{The model and results} The superconducting grain is described by the BCS Hamiltonian \cite{bcs}, \begin{equation} H=\sum_{{\bf n}\,\sigma}\epsilon_{\bf n} c^\dag_{{\bf n}\sigma}c_{{\bf n}\sigma}-\frac{\lambda}{\nu(0)}\sum_{{\bf n},{\bf n'}}I_{{\bf n},{\bf n'}}c_{{\bf n}\uparrow}^\dag c_{{\bf n}\downarrow}^\dag c_{{\bf n'}\uparrow}c_{{\bf n'}\downarrow}, \end{equation} where $c_{{\bf n}\sigma}^\dag$ creates an electron of spin $\sigma$ in a state with quantum numbers ${\bf n}$ and energy $\epsilon_{\bf n}$, $\lambda$ is the dimensionless BCS coupling constant of the material, and $\nu(0)$ is the density of states at the Fermi energy. The electron-electron interaction matrix elements resulting from a contact interaction are given by \begin{equation} I_{{\bf n},{\bf n'}}=V\int \psi_{\bf{n}}^2({\bf r})\psi_{\bf{n'}}^2({\bf r})\,dV, \label{Mel} \end{equation} where $V$ is the volume of the grain and $\psi_{\bf{n}}({\bf r})$ is the single-electron eigenfunction of state $\bf{n}$.\\ The superconducting gap is calculated from the self-consistency equation \begin{equation} \Delta_{\bf n}=\frac{\lambda}{2}\displaystyle\sum_{{\bf n'}}\frac{\Delta_{{\bf n'}}I_{{\bf n},{\bf n'}}}{\sqrt{\epsilon_{{\bf n'}}^2+\Delta_{{\bf n'}}^2}}\frac{1}{\nu(0)}, \label{GapEqn} \end{equation} where the sum is taken over all states $\left\{{\bf n'}\big|\:|\epsilon_{{\bf n'}}|<\epsilon_D \right\}$, and $\epsilon_D$ is the Debye energy.
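To make the numerical procedure concrete, a minimal fixed-point iteration of Eq.~(\ref{GapEqn}) is sketched below. It is purely illustrative: the equally spaced levels within the Debye window and the bulk matrix elements $I_{{\bf n},{\bf n'}}=1$ are simplifying assumptions, and in the actual calculation the hemispherical-grain energies and matrix elements are used instead.
\begin{verbatim}
import numpy as np

def solve_gap(eps, I, lam, nu0, tol=1e-10, max_iter=10000):
    """Fixed-point iteration of the discrete BCS gap equation:
    Delta_n = (lam / (2 nu0)) sum_n' I[n,n'] Delta_n' / sqrt(eps_n'^2 + Delta_n'^2).
    eps: single-particle energies measured from the Fermi energy (|eps| < eps_D),
    I: matrix elements, nu0: density of states at the Fermi energy."""
    delta = np.full_like(eps, 1e-3)   # small positive initial guess (meV)
    for _ in range(max_iter):
        new = (lam / (2.0 * nu0)) * I @ (delta / np.sqrt(eps**2 + delta**2))
        if np.max(np.abs(new - delta)) < tol:
            break
        delta = new
    return delta

# Bulk sanity check with the Sn parameters quoted in the text (meV):
eps_D, lam, n_lev = 17.2, 0.243, 400
eps = np.linspace(-eps_D, eps_D, n_lev)
nu0 = n_lev / (2.0 * eps_D)                 # levels per meV
delta = solve_gap(eps, np.ones((n_lev, n_lev)), lam, nu0)
print(delta.mean())  # ~ 2 eps_D exp(-1/lam) ~ 0.56 meV
\end{verbatim}
In the bulk limit this iteration reproduces $\Delta\simeq 2\epsilon_D e^{-1/\lambda}$ up to discretization effects, which is the consistency check behind the value $\lambda=0.243$ used below.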
We note \cite{shanenko} that this approach leads to results similar to those obtained from the technically more involved Bogoliubov-de Gennes equations. In the bulk limit the eigenfunctions are plane waves and the matrix elements are simply $I_{{\bf n},{\bf n'}}=1$. However, in small grains the eigenstates of the one-body problem are not plane waves, so we expect deviations of $I_{{\bf n},{\bf n'}}$ from the bulk limit. We restrict our interest to grains such that $\delta/\Delta_0 \ll 1$, for which a BCS mean-field approach is valid. To make a direct comparison with experimental results, we calculate numerically $I_{{\bf n},{\bf n'}}$ for hemispherical grains of radius $R$. We note that the experimental grains \cite{nmat} are not exactly hemispherical but closer to a spherical cap of height $h \sim 0.9R$. Although the non-monotonic oscillations due to shell effects depend strongly on the shape of the grains, we expect that the monotonic deviations that we aim to describe are less sensitive to this relatively small shape difference. Therefore the eigenfunctions entering the matrix element above are those of a single electron in a spherical grain of radius $R$, \begin{equation}\label{hemiWF} \psi_{n,l,m}(r,\theta,\phi)=Nj_l(u_{ln}\frac{r}{R})Y_{lm}(\theta,\phi), \end{equation} where $N=\frac{2}{j_{l+1}(u_{ln})R^{3/2}}$ is the normalisation constant, $j_l(r)$ are the spherical Bessel functions of the first kind, $u_{ln}$ is the $n^{th}$ zero of the $l^{th}$ spherical Bessel function and $Y_{lm}(\theta,\phi)$ are the spherical harmonics. The energy associated with these eigenstates is $E_{l,n}=\frac{\hbar^2u_{ln}^2}{2mR^2}$. Dirichlet boundary conditions on the hemispherical surface \cite{Rodriguez2001} restrict $|m-l|$ to be odd. The final expression for the matrix elements is simplified by using Clebsch-Gordan coefficients, \begin{eqnarray} I_{{\bf n},{\bf n'}} = \frac{4(2l+1)(2l'+1)}{3 j_{l+1}(u_{ln})^2j_{l'+1}(u_{l'n'})^2}\sum_{\Lambda}\frac{<ll',mm'|ll',\Lambda M>^2<ll',00|ll',\Lambda0>^2}{(2\Lambda+1)} \nonumber \\ \nonumber \times \int_0^1j_l(u_{ln}\rho)^2j_{l'}(u_{l'n'}\rho)^2\rho^2\,d\rho \end{eqnarray} where $M=m+m'$ and $\Lambda$ runs over all values in the range $|l-l'|\leq\Lambda\leq l+l'$ with $|M|\leq\Lambda$. The superconducting gap can then be written as \begin{equation} \Delta=2\epsilon_D e^{-\frac{1}{\lambda_{eff}}}, \label{gapsup} \end{equation} where $\lambda_{eff}=\lambda {\bar I}$, with $\bar I$ the average of $I_{{\bf n},{\bf n'}}$ over all states ${\bf n}$ in the interaction window of width $2\epsilon_D$, ${\bf n'}$ being the level closest to the Fermi energy. This should be a good approximation for sufficiently large grains, for which the matrix elements do not depend strongly on the quantum numbers. This is also consistent with the observation that in scanning tunnelling microscope experiments \cite{nmat} the value of the gap did not depend much on the exact position of the tip. Moreover, it was found in \cite{usprl} that a similar simplified expression for the gap describes shell effects related to fluctuations of the spectral density. In that case the resulting spectral density, after solving the gap equation, is expressed as a finite sum over classical periodic orbits of length less than the superconducting coherence length. The numerical results are shown in Fig. \ref{fig1} for $\lambda=0.243$. This value is consistent with the Sn bulk gap $\Delta_{bulk}=0.57$meV and a Debye energy $\epsilon_D=17.2$meV.
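The radial factor in the expression above can be evaluated by standard quadrature. The sketch below is our own illustration (not the authors' code): it locates the zeros $u_{ln}$ of the spherical Bessel functions by bracketing sign changes and then computes the radial overlap integral; the angular factor reduces to the Clebsch-Gordan coefficients appearing above, available for instance through sympy.physics.quantum.cg.
\begin{verbatim}
import numpy as np
from scipy import integrate, optimize, special

def jl_zero(l, n):
    """n-th positive zero u_{ln} of the spherical Bessel function j_l,
    located by scanning for sign changes and refined with brentq."""
    x = np.linspace(1e-6, 60.0 + 10.0 * l, 20000)
    y = special.spherical_jn(l, x)
    flips = np.where(np.sign(y[:-1]) != np.sign(y[1:]))[0]
    a, b = x[flips[n - 1]], x[flips[n - 1] + 1]
    return optimize.brentq(lambda t: special.spherical_jn(l, t), a, b)

def radial_overlap(l, n, lp, npr):
    """int_0^1 j_l(u_{ln} rho)^2 j_{l'}(u_{l'n'} rho)^2 rho^2 d rho."""
    u, up = jl_zero(l, n), jl_zero(lp, npr)
    f = lambda rho: (special.spherical_jn(l, u * rho) ** 2
                     * special.spherical_jn(lp, up * rho) ** 2 * rho ** 2)
    return integrate.quad(f, 0.0, 1.0)[0]

print(jl_zero(0, 1))            # ~ pi, since j_0(x) = sin(x)/x
print(radial_overlap(0, 1, 1, 1))
\end{verbatim}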
The numerical results, see Fig.~\ref{fig1}, show substantial deviations from the bulk even at large grain sizes. The theoretical prediction is strikingly similar to the experimental observation. We stress that the theoretical expression is parameter free. Results for other materials, see Fig.~\ref{fig1}, are similar. From Eq. (\ref{gapsup}) it is clear that finite-size effects are stronger the smaller the coupling constant $\lambda$ is. In more physical terms, finite-size effects are stronger in materials with a long superconducting coherence length $\xi \propto 1/\Delta \propto e^{1/\lambda}$. A few final comments are in order: a) shell effects that induce oscillations in the spectral density have been averaged out in order to single out monotonic deviations from the bulk limit; b) the dip at $\sim 28$nm in the theoretical prediction is likely a consequence of statistical fluctuations related to the relatively small number of points employed in the averaging of gap-size oscillations; c) deviations for smaller sizes $< 18$nm are likely due to the difference between the spherical-cap shape of the experimental grains and the exact hemispherical shape employed in the theoretical calculation. In conclusion, we have investigated superconductivity in hemispherical nanograins of metallic superconductors. Deviations from the bulk are clearly observed even for the largest grains, $\sim 30$nm. Experimental results \cite{nmat} on single isolated Sn nanograins are in full agreement with the theoretical predictions. Similar results are expected in other weakly coupled superconducting materials.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Power systems have always been critical systems in which robustness is crucial \cite{bevrani2009robust}. For the last decades, developed power grids have tried to remain robust to unexpected events by following a simple guiding ``N-1'' redundancy principle: the system should keep operating in nominal conditions despite the loss of any single asset, such as lines or power plants, at any time. Operators hence anticipated through forecasts and simulations the effect of having any line disconnected in the hours to come. This very fundamental property has been the root of the success of power systems: to distribute electricity reliably throughout the grid at all times. However, these aging systems are moving closer to their limits and this redundancy constraint is no longer easy to satisfy due to the ever-increasing complexity. In addition, climate change can potentially lead to more frequent and extreme weather events, which also have to be accounted for. As a consequence, the time resolution operators are considering is moving towards sampling and refresh rates of 15 minutes, instead of the 3 hours used previously. Because of constraints over the computation time budget in real-time operations, not every contingency can be studied on these higher-resolution forecasting horizons. There is therefore a need for new methods that are robust to extreme events while having the capacity to operate in real time, even at larger scales. That need has led to a trend toward cyber-physical systems with innovative behaviors \cite{allgoewer2019position}. Some methods have successfully addressed this problem before (worst-case methods \cite{capitanescu2011day}, robust control) but they have trouble scaling up to continuous time horizons and adapting to cyber considerations. It is therefore essential to find new ways of coming up with robust strategies that are capable of coping with the new complexity in power systems. Machine Learning seems to be a promising approach to solve this issue. Indeed, learning a given behavior beforehand would allow an agent to use that knowledge continuously, instead of spending the computation budget on simulating events, and thus to gain significant improvements in online computation times, as shown by \cite{donnot2018optimization}, possibly solving the scaling issue. In particular, it could be possible to predict power flows using Deep Learning. Other kinds of Machine Learning methods look particularly suited to the problem, such as Reinforcement Learning methods \cite{dulac-arnold_challenges_2019,dalal2016hierarchical}. These methods let agents learn through experience, possibly solving extremely complex problems \cite{silver2017mastering}. Reinforcement Learning techniques are also well suited to finding robust policies \cite{morimoto_robust_2001}. Moreover, once the modeling framework has been built, environments for different networks in the world can be created easily and quickly, allowing the models to be used on the various existing networks. There is wide access to experience data in power systems, which could allow Reinforcement Learning methods to thrive. Thus, control is now a central area of power systems. Recent research recommends worst-case methods, adversarial testing and safe learning \cite{dobbe2020learning}. Following these guidelines, we seek to achieve safe learning through an adversarial training approach.
Furthermore, we make use of the L2RPN (Learning to Run a Power Network) competition environments \cite{marot2020learning} to evaluate the submitted agents using adversarial testing. In this paper we present the opponent that we considered, we propose a more exhaustive characterization of robustness, and we evaluate the relevance of adversarial training as a means of building robust policies. \section{Formalization of the N-1 problem} The initial objective for the robustness competition is that the agent should be able to solve the continuous N-1 problem over time, simply referred to as the N-1 problem. The N-1 problem is a well-known problem in power systems \cite{abulwafasecurity}. \subsection{Definition of the time-continuous N-1 problem} At a given time, the instantaneous N-1 problem is to keep operating the network despite the loss of any single element of the electrical network at that moment. That is, at that instant, one must be robust to the loss of any element of the network. The continuous N-1 problem is therefore the following: to be able to keep operating the network over a time horizon, despite the untimely loss of an element of the electrical network. This implies a notion of robustness, because it is necessary to prepare in advance for the possible loss of an element, hence the name N-1. Specifically, one must not only be prepared for the loss of an element but also have the grid in a configuration that remains acceptable for the hours following a contingency. Our goal is, on the one hand, to create a framework that models and implements the N-1 problem and provides a means to evaluate the robustness of an agent, and on the other hand, to propose agents that can respond to it, i.e., agents that are robust to the loss of an element of the power grid. \subsection{Metrics to evaluate the N-1} We first look for a meaningful metric with regard to the N-1 problem. The objective is to find a single quantity capable of assessing an agent's performance relative to the N-1 problem at some time step $t$. We call it the N-1 reward, $R_t^{eval}$. Other studies have aimed at evaluating the N-1 criterion before, typically through an evaluation reward considering the different line disconnections equally \cite{dalal2016hierarchical}: \begin{equation} R_t^{eval}=\Sigma_{i=1}^{n_{lines}}S_{i,t} \end{equation} where: \begin{itemize} \item $n_{lines}$ is the number of attackable lines. \item $S_{i,t}$ is a stability score, to be determined, for when the disconnection of line $i$ is simulated at time step $t$. \\ In the previous example \cite{dalal2016hierarchical}, $S_{i,t}$ equals 1 if the grid is safe, else 0. \\ \end{itemize} Such metrics account for the robustness to the different possible line disconnections uniformly. On the contrary, worst-case approaches only consider the worst possible line disconnection at a given time \cite{mankowitz_robust_2020}\cite{dulac-arnold_challenges_2019}\cite{xiao_adversarial_training}, using: \begin{equation} R_t^{eval}=\min_{i \in [1, n_{lines}]}S_{i,t} \end{equation} We propose another approach, in between those two, that accounts for all the different possible line disconnections while putting an emphasis on the worst ones. \\ Recall that the N-1 problem is to be robust to the untimely disconnection of one of the attackable lines. To evaluate it, we therefore choose to simulate exhaustively at each step the disconnection of each of those attackable lines and check whether any overflow appeared in the power grid.
That way, we know whether the agent was prepared for such a disconnection. Note that those simulations can be computationally expensive, but they are only required for the evaluation of the agents and not when simply running them. \\ We then put together these values $S_{i,t}$ into the following formula for the N-1 reward $R_t^{eval}$: \begin{equation} R_t^{eval}=\Sigma_{i=1}^{n_{lines}}w_{\phi_t^{-1}(i)}S_{i,t} \end{equation} where: \begin{itemize} \item We choose the stability score $S_{i,t}$ to equal 1 if no overflow occurred in the grid after the simulated disconnection, and 0 otherwise. \item $w_j$ are weights given to the stability scores of the disconnections and will be discussed further down. \\ At a given time step $t$, the highest weights are always given to the lowest scores $S_{i,t}$, hence the permutation $\phi_t$, which reorders the weights with respect to the stability scores. \\ This is done to have a behavior close to that of the worst-case evaluation, while still considering all line disconnections instead of only the worst one. \\ \end{itemize} Through this evaluation reward, we consider all of the different possible line disconnections while assigning higher weights to the most dangerous ones. \\ As for the weights, we choose the following: \begin{equation} w_j=\exp\left(-\lambda \frac{j-1}{n_{lines}-1}\right) \end{equation} The weights are exponentially decreasing with $w_1=1$, where $\lambda$ is a parameter to be determined, which balances the emphasis put on the worst disconnections. Tuning that parameter effectively allows us to get closer to either the worst-case metric or the uniform one. \\ We choose a value for $\lambda$ such that 95\% of the weights are contained in the 20\% worst line disconnections, i.e., those with the lowest stability scores $S_{i,t}$ at a given time. This gives strong importance to the worst line disconnections while being soft enough that the values of the metric can be easily and meaningfully compared across different agents and scenarios (a short code sketch of this reward is given below). \section{Modeling \& adversarial approach implementation} \subsection{Opponent modeling} At scale, it is impossible to simulate all N-1 disconnections online over a long time horizon because of the computational burden. Therefore, we propose a new adversarial approach that lets a controller learn offline to be robust to those disconnections without having to simulate them online. \\ Our adversarial approach is inspired by the classical RL (Reinforcement Learning) framework \cite{sutton1998introduction}, where an agent and an environment interact with each other sequentially. Here we introduce a modified version of this framework in which we add a new adversarial actor, the opponent. The opponent acts in parallel to the agent and in a similar way, picking an action at each time step according to some observation of the current state that it receives from the environment. The purpose of the opponent is to trigger untimely adversarial powerline disconnections that the agent then has to cope with. This aims at forcing the agent to acquire some sort of robustness to these disconnections, the objective being that it can then solve the N-1 problem. \\ The observations for the agent mostly contain information concerning power flows, productions, consumptions and the topology of the grid. The actions that are available to the agent are topological modifications such as connecting or disconnecting powerlines and splitting or merging nodes in substations, as well as redispatching actions on productions.
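As announced above, the following minimal sketch (our own illustrative code, not part of the competition codebase) computes the weighted N-1 reward $R_t^{eval}$ from a vector of stability scores; the default value of $\lambda$ was tuned numerically for the ten attackable lines so that roughly 95\% of the total weight falls on the two worst disconnections:
\begin{verbatim}
import numpy as np

def n_minus_1_reward(stability_scores, lam=13.6):
    """Weighted N-1 reward R_t^eval defined in Sec. II.

    stability_scores: one S_{i,t} per attackable line (1 if the
    simulated disconnection of line i caused no overflow, else 0).
    lam: decay parameter; lam -> 0 recovers the uniform metric and
    a large lam approaches the worst-case metric.  It should be
    re-tuned if the number of attackable lines changes.
    """
    s = np.sort(np.asarray(stability_scores, dtype=float))  # worst first
    n = s.size
    w = np.exp(-lam * np.arange(n) / (n - 1))  # w_1 = 1, decreasing
    return float(np.dot(w, s))                 # worst scores weigh most

# Example: an agent prepared for 8 of the 10 attackable lines.
print(n_minus_1_reward([1, 1, 1, 0, 1, 1, 0, 1, 1, 1]))
\end{verbatim}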
Please refer to \cite{kelly2020reinforcement} for more details on these observations and actions. The opponent has access to the same observation as the agent and can choose to either disconnect a line or do nothing. \begin{figure}[htbp] \centerline{\includegraphics[width=9cm]{framework.jpg}} \vspace{-4mm} \caption{Considered adversarial RL framework.} \end{figure} Furthermore, so that the difficulty of the competition is not too high, we chose to let the opponent disconnect only the powerlines that are subject to maintenance. This makes the agents less confused, since they already have to face the disconnection of these lines at maintenance times. These powerlines are shown in red in figure \ref{fig_attackable}. \\ \begin{figure} \centerline{\includegraphics[width=8cm]{attackable_lines.png}} \vspace{-3mm} \caption{The 10 attackable powerlines, highlighted.} \label{fig_attackable} \vspace{-4mm} \end{figure} The agent and the opponent hence have opposite goals, which lead to joint equations for their optimal policies: \begin{equation} \pi^*=argmax_{\pi \in \Pi} E[\Sigma_{t=0}^{t_{final}}R_t | \pi, \psi^*] \end{equation} \begin{equation} \psi^*=argmin_{\psi \in \Psi} E[\Sigma_{t=0}^{t_{final}}R_t | \pi^*, \psi] \end{equation} where: \begin{itemize} \item $\pi$ and $\psi$ are the agent and opponent policies respectively. \item $\Pi$ and $\Psi$ are the agent and opponent policy domains respectively, which may differ from the full policy spaces due to constraints on the policies. \item $R_t$ is the reward received by the agent at time step $t$. \item $t_{final}$ is the total number of time steps of the scenario. \end{itemize} Ultimately, an agent robust to the optimal opponent policy $\psi^*$ is also robust to all opponent policies for the N-1 problem. \subsection{Considering the class of fixed-policy opponents for the L2RPN competition} However, we need to choose a fixed opponent policy for the competition beforehand. Thus, during the competition, only the agent policy is going to change and improve relative to that opponent policy. This policy will likely differ from the optimal opponent policy $\psi^*$. \\ Then comes the choice of an opponent policy. We consider that it actually has two distinct objectives: \begin{itemize} \item Its attacks must be as dangerous as possible for the agent, as stated before. \item Its attacks must be fairly uniformly distributed among the powerlines. \end{itemize} We add this second goal in order to be consistent with reality and not be too predictable for the participants. \\ Those two goals can easily go in opposite directions, since enforcing a uniform distribution acts as a constraint on the effectiveness objective. It is necessary to find a compromise between those two aspects. We integrate a second term into the opponent's optimal policy equation to reflect this second objective: \begin{multline} \psi^*=argmin_{\psi \in \Psi} E[(\Sigma_{t=0}^{t_{final}}R_t+\mu \cdot \delta_{attack})| \pi^*, \psi] \\ \text{with } \delta_{attack}=\max_{(i, j) \in [1, n_{lines}]^2}|n_i - n_j| \label{eq:opp_final} \end{multline} where: \begin{itemize} \item $n_i$ is the total number of times the opponent has attacked powerline $i$ during the scenario. \item $\mu$ is a parameter balancing the relative importance of the two terms. \item $n_{lines}$ is the number of attackable lines in the power grid. \end{itemize} The second term of the equation states that the $n_i$ should be as close as possible to one another, meaning that the attack distribution should be as balanced as possible.
\\ We try to come up with an opponent for the competition that is close to $\psi^*$. Here are some possible opponent examples: \begin{itemize} \item Baseline: DoNothingOpponent. This opponent does nothing, i.e., never attacks. It is useful as a baseline for comparison with other opponents. \item Proposal: \textbf{WeightedRandomOpponent}. This is the opponent that we bring forward here and that we chose for the competition. It is detailed in the next subsection. \item Trained opponent: this is a possibility that we did not explore, because it would require having reliable and diverse agents to train against. \end{itemize} Before diving into the opponent that we propose, it is important to discuss the constraints that we chose for it. Indeed, the opponent needs constraints to prevent it from freely disconnecting a line at each time step, which would result in a problem too far from reality and probably unsolvable, hence neither interesting nor useful to study. \\ We choose the following constraints for the opponent: \begin{itemize} \item An attack consists in the disconnection of $n_{attack}$ powerlines at a time. \item An attack lasts for a duration $d_{attack}$. During that time, the powerline cannot be reconnected. \item It is restricted to a maximum frequency of attacks: it can only attack once in each time period of length $T_{attack}$. \\ We note $\forall k \in \mathbb{N}, T_k = [k\,T_{attack}, (k+1)\,T_{attack})$ the successive attack periods of a scenario. \item It has a certain budget, which increases by a fixed amount at each time step. The opponent can only attack if the cost of the attack is lower than its current budget. In our case, the cost of any attack is fixed at 1. \item Only some powerlines in the grid can be attacked by the opponent. It is not able to disconnect the other ones. \end{itemize} For the competition, we set $n_{attack}=1$, $d_{attack}=4$ hours, $T_{attack}=24$ hours. \subsection{WeightedRandomOpponent} For this opponent, for each attack period $T_k$ of a scenario, the time of the attack $t_k$ is picked uniformly at random within that attack period. The powerline $l_k$ attacked at that time is also picked at random among the attackable powerlines, with probabilities proportional to $\rho_{i, t} / \alpha_i$ if the attack is made at time step $t$, where $\rho_{i, t}$ is the electric current flowing in powerline $i$ at time step $t$ relative to the thermal limit of the line, and $\alpha_i$ is a normalizing factor equal to the empirical mean value of $\rho_{i, t}$ over time. We denote this probability distribution $D_{\rho_t}$. Formally, for an environment state $s$, the WeightedRandomOpponent policy $\psi_{wro}$ is as follows: \begin{equation} \begin{multlined}[t] \psi_{wro}(s) = \attack(l_k) \text{ if } t(s) \in \{t_k\}_{k \in \mathbb{N}} \text{ else nothing} \\ \text{where } \forall k \in \mathbb{N}, t_k \sim U(T_k) \text{ and } l_k \sim D_{\rho_{t_k}(s)} \\ \text{s.t.} \cost(\attack(l_k)) \leq \text{budget} \\ t_k + d_{attack} \leq t_{k+1} \\ \label{eq:wro_policy} \vspace{-3mm} \end{multlined} \end{equation} This opponent satisfies the constraints we mentioned and aims at fulfilling the two objectives of the opponent highlighted in Eq.~\eqref{eq:opp_final}. Indeed, on the one hand it favors the most dangerous attacks, since the most loaded powerlines have a higher chance of being attacked and are more likely to put the agent in danger when disconnected.
On the other hand, the normalizing factors $\alpha_i$ ensure a balanced distribution of attacks among the different powerlines. At the same time, the behavior of the opponent is randomized with respect to both the time of the attack and the line to attack, so that it is not easily predictable for the participants of the competition. \section{Experiments \& results} \subsection{Competition results} The results and winners of the WCCI (World Congress on Computational Intelligence) and NeurIPS (Neural Information Processing Systems) competitions can be checked \href{https://l2rpn.chalearn.org/competitions}{\underline{online}}. The competition score is based on the cumulative network operational cost and normalized to the range [-100, 100]: the score is -100 for an immediate blackout, 0 when the agent does nothing (do\_nothing agent), 80 when the scenario is completed, and up to 100 depending on the operational cost optimization. Refer to \cite{marot2021l2rpn} for more details. \\ For our experiments, as a first step, we picked four of the top submitted agents from the WCCI and NeurIPS robustness competitions \cite{marot2020l2rpn} and ran them on those two competitions along with two of our baseline agents. The data we used was generated with \href{https://github.com/BDonnot/ChroniX2Grid}{\underline{Chronix2Grid}}. The results are shown in Table \ref{tab:pres_results}. \begin{table} \caption{Agents results on the WCCI and NeurIPS competitions} \centering \begin{tabular}{llll} \toprule \multicolumn{2}{c}{} & \multicolumn{2}{c}{Results} \\ \cmidrule{3-4} Agent & From competition & WCCI & NeurIPS \\ \midrule rl\_agnet & NeurIPS & 71.21 & 61.05 \\ binbinchen & NeurIPS & 63.97 & 52.42 \\ lujixiang & NeurIPS & 73.73 & 45.00 \\ zenghsh3 & WCCI & 58.21 & 19.10 \\ reco\_powerline & Baseline & 25.75 & 10.76 \\ do\_nothing & Baseline & 0.00 & 0.00 \\ \bottomrule \end{tabular} \label{tab:pres_results} \vspace{-5mm} \end{table} The only major difference between the two competitions lies in the fact that the NeurIPS competition uses an opponent to threaten the agents, while the WCCI competition does not. Therefore, the superiority of the NeurIPS agents over one of the top WCCI contestants, zenghsh3, could suggest that the adversarial training to which the NeurIPS agents were exposed is effective. Not only are these agents stronger on the NeurIPS robustness competition, they also outperform the WCCI agent on the WCCI competition. Moreover, while the scores of the NeurIPS agents are only slightly lower on the NeurIPS competition, which is expected since this competition uses an opponent and is thus harder, the performance of zenghsh3 drops much more dramatically when confronted with the opponent. It should be noted that, contrary to the other two NeurIPS agents, lujixiang's score also drops more significantly on the NeurIPS competition. This was expected as well, since lujixiang was already less robust than the other agents on the WCCI competition, failing on one scenario despite obtaining a very high score thanks to better energy-loss management. As a whole, these results seem to indicate not only a better overall performance of the agents trained adversarially, but also a stronger robustness to line disconnections. However, since we only have one strong agent from the WCCI competition to study, we cannot fully conclude on that matter from these data alone.
\\ In this part we have evaluated the agents' robustness through their competition scores, in particular on the NeurIPS competition, which amounts to adversarial testing. We next come up with different metrics and evaluations that better fit the N-1 problem, in order to have a more meaningful measure of what we want to achieve as well as deeper insight into the agents' behavior. \subsection{Evaluation based on the N-1 criterion} It is important to note that we conduct a preventive robustness study here and do not evaluate the agents' corrective behavior, since our metric is based on simulated disconnections and is measured before the agent has a chance to respond with an action. Thus, it evaluates how well the agent is prepared, but not how well the agent could correct an issue. We then run exhaustive experiments for some of the best agents from the NeurIPS competition and some of our baselines, on the 24 NeurIPS competition test scenarios, and measure their performance based on the N-1 reward $R_t^{eval}$ averaged over these scenarios. In all of the following experiments, no opponent was used, since the N-1 problem evaluation is already contained in the N-1 reward. These scenarios each last one week and run from Monday to Sunday. The results are shown in figure \ref{fig_series}. Through this experiment, we want to evaluate whether an agent that is more robust to attacks is ultimately more robust to the continuous N-1 problem. \begin{figure} \hspace{2mm} \centerline{\includegraphics[width=9cm]{N-1_without_opponent.png}} \vspace{-9mm} \caption{Mean N-1 reward through time for one-week long scenarios.} \vspace{-3mm} \label{fig_series} \end{figure} The N-1 rewards and the load in the power grid are strongly linked. For each agent, the N-1 rewards tend to increase when the load is low and to decrease when the load is high. Indeed, the correlation between the load and the smoothed derivative of the N-1 rewards reaches -0.5. This is expected, as a higher load means more loaded lines and thus a higher probability of congestion appearing on the grid, hence a lower reward. \\ A clear gap is observed at all times between rl\_agnet and the other agents. Indeed, as already indicated by the competition scores, that agent does much better than the other ones when evaluated with the N-1 reward, reinforcing the idea that it acquired a strong notion of robustness to the disconnection of the powerlines. Next come reco\_powerline and binbinchen. Except sometimes at night, we do not notice any substantial gain from the binbinchen agent, and we may wonder why, given its good score in the competition. To study this observation more carefully, we also choose to inspect the mean N-1 rewards for each scenario individually, in order to have a clearer view of the agents' performance (see figure \ref{fig_heatmap}). \begin{figure} \hspace{5mm} \centerline{\includegraphics[width=9cm]{reward_heatmap_without_opponent.png}} \vspace{-6mm} \caption{Mean N-1 reward for each of the 24 NeurIPS test scenarios.} \label{fig_heatmap} \vspace{-5mm} \end{figure} The rl\_agnet agent again has the highest mean N-1 reward on almost all scenarios, accounting once more for its higher robustness. The other NeurIPS agents, binbinchen and lujixiang, come next, along with the baseline reco\_powerline. Their performance as evaluated by the N-1 reward is rather disappointing, given that they both reached high scores in the competition.
We believe this can be explained by the fact that we only evaluate the preventive capacity of the agents, and they most probably rely strongly on their corrective capacities. This will be the object of future work. It can be seen that the WCCI agent zenghsh3 performs slightly worse than all other agents on almost all scenarios, suggesting again that the adversarially trained agents have effectively acquired a stronger robustness to powerline disconnections. Yet, there are still several difficult scenarios, during which the loads are rather high, in which no agent could maintain durable robustness. \\ Lastly, we inspect the agents' robustness to each line disconnection separately, to see whether robustness has been acquired uniformly throughout the grid as desired, or whether the management of some of the powerlines has been prioritized. We show the probability that the disconnection of a line causes an overflow, averaged over the 24 NeurIPS test scenarios, for rl\_agnet compared to reco\_powerline in Figure \ref{fig_barplot}. \begin{figure} \centerline{\includegraphics[width=9cm]{proba_barplot_without_opponent.png}} \vspace{-3mm} \caption{Empirical probabilities of overflow after disconnection of a powerline, for each attackable powerline. The confidence intervals are negligible given the high number of observations (less than $10^{-3}$ wide).} \label{fig_barplot} \vspace{-3mm} \end{figure} It can be seen that for all the attackable lines in the grid, the probability of a disconnection causing an overflow is lower for the rl\_agnet agent than for reco\_powerline. This suggests that the agent has acquired robustness to line disconnections not only in some parts of the power grid but throughout the whole grid, making it a robust and versatile agent. This is what we hoped for when designing our opponent, and we believe the opponent helped achieve that robustness property. Studying the impact of different opponents on the learned agent robustness will be the object of future work, in order to draw more definite conclusions about opponent design. Adversarial training hence proved to be an appropriate framework for an agent to become robust to the N-1 problem, without having to run costly online N-1 simulations to ensure it. \section{Conclusion} In this paper, we presented an original approach for learning a controller, i.e., an agent, with desirable robustness properties, in particular according to the N-1 principle. We achieve interesting online computational efficiency and robust performance without online N-1 simulations thanks to our adversarial training. The best agent further displays a preventive behavior in addition to any curative actions, highlighting an advanced robust strategy compared to the other agents. There are possible extensions of this paper that we leave as future work, such as extending the set of attackable lines, evaluating the corrective behavior in addition to the preventive one, and designing new opponents with trained policies. \section*{Acknowledgment} We thank Camilo Romero and Jean Grizet for making the L2RPN competitions and environments possible. We thank Isabelle Guyon, Gabriel Dulac and Patrick Panciatici for their valuable insights for framing the problem and guiding us through relevant modeling choices. \bibliographystyle{ieeetr}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The theory of stochastic processes provides a powerful tool to describe the dynamics of open systems. Physically, the noise to which these systems are subjected results from the fact that the system is coupled to an environment composed of many other degrees of freedom about which we have only limited information and control. This coarse-grained description of the system -- as opposed to the microscopic description involving the composite system \emph{and} environment -- is particularly appealing and tractable when the Markovian approximation is applied. Therefore, Markovian stochastic dynamics are nowadays very commonly used to describe small open systems ranging from biochemistry (e.g., enzymes, molecular motors) to quantum systems (e.g., single atoms or molecules)~\cite{HillBook1977, SpohnRMP1980, VanKampenBook2007, BreuerPetruccioneBook2002}. Due to their outstanding importance for many branches of science, an entire branch of mathematics is also devoted to their study~\cite{KemenySnellBook1976}. A common feature of all Markovian processes is their \emph{contractivity}, i.e., the volume of accessible states shrinks monotonically during the evolution. This statement can be made mathematically precise by considering two arbitrary preparations, $p_\alpha(0)$ and $q_\alpha(0)$, describing different probabilities to find the system in state $\alpha$ at the initial time $t=0$. Their distance, as measured by the relative entropy $D[p_\alpha\|q_\alpha] \equiv \sum_\alpha p_\alpha \ln\frac{p_\alpha}{q_\alpha}$, monotonically decreases over time $t$, i.e., for all $t \ge 0$ \begin{equation}\label{eq contractivity} \frac{\partial}{\partial t} D[p_\alpha(t)\|q_\alpha(t)] \le 0. \end{equation} In other words, the ability to distinguish between any pair of initial states monotonically shrinks in time due to a continuous loss of information from the system to the environment. We note that distance quantifiers other than the relative entropy also fulfill Eq.~(\ref{eq contractivity}), and an analogue of Eq.~(\ref{eq contractivity}) also holds in the quantum regime, where its violation has been proposed as an indicator of non-Markovianity~\cite{BreuerLainePiiloPRL2009, RivasHuelgaPlenioRPP2014, BreuerEtAlRMP2016}. The contractivity property~(\ref{eq contractivity}) of Markov processes gets another interesting physical interpretation in quantum and stochastic thermodynamics. In these fields, a nonequilibrium thermodynamics is systematically built on top of Markovian dynamics typically described by (quantum) master or Fokker-Planck equations~\cite{SchnakenbergRMP1976, JiangQianQianBook2004, EspositoHarbolaMukamelRMP2009, SekimotoBook2010, SeifertRPP2012, KosloffEntropy2013, SchallerBook2014, VandenBroeckEspositoPhysA2015}. In addition to being Markovian, the rates entering the dynamics must also satisfy local detailed balance. For a system coupled to a single heat bath, this ensures that the Gibbs state of the system is a null eigenvector of the generator of the dynamics at all times $t$. For autonomous dynamics, this implies that the fixed point of the dynamics is an equilibrium Gibbs state. For nonautonomous (also called \emph{driven}) dynamics, i.e., when some parameters are changed in time according to a prescribed protocol $\lambda_t$, the system in general does not reach a steady state, but the Gibbs state remains a null eigenvector of the generator of the dynamics at all times $t$. We call this an \emph{instantaneous} fixed point of the dynamics in the following.
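As a minimal numerical illustration of Eq.~(\ref{eq contractivity}) (a sketch with an arbitrarily chosen stochastic matrix, not tied to any particular physical model), one can propagate two different preparations with the same Markovian transition matrix and verify that their relative entropy decreases monotonically:
\begin{verbatim}
import numpy as np

def rel_entropy(p, q):
    # D[p||q] = sum_a p_a ln(p_a / q_a)
    return float(np.sum(p * np.log(p / q)))

# Column-stochastic transition matrix (columns sum to one).
T = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.7, 0.3],
              [0.1, 0.2, 0.5]])

p = np.array([0.90, 0.05, 0.05])  # two different preparations
q = np.array([0.10, 0.10, 0.80])
for step in range(6):
    print(step, rel_entropy(p, q))  # decreases monotonically
    p, q = T @ p, T @ q
\end{verbatim}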
If we denote the Gibbs state of the system by $e^{-\beta E_\alpha(\lambda_t)}/\C Z(\lambda_t)$ with the energy $E_\alpha(\lambda_t)$ of state $\alpha$ and the equilibrium partition function $\C Z(\lambda_t) = \sum_\alpha e^{-\beta E_\alpha(\lambda_t)}$, the second law of thermodynamics for a driven system in contact with a single heat bath at inverse temperature $\beta$ can be expressed as \begin{equation}\label{eq 2nd law intro} \dot\Sigma(t) = -\left.\frac{\partial}{\partial t}\right|_{\lambda_t} D\left[p_\alpha(t)\left\|\frac{e^{-\beta E_\alpha(\lambda_t)}}{\C Z(\lambda_t)}\right.\right] \ge 0. \end{equation} Here, the derivative is evaluated at fixed $\lambda_t$, i.e., $E_\alpha(\lambda_t)$ and $\C Z(\lambda_t)$ are treated as constants, which only depend parametrically on time. The quantity $\dot\Sigma(t)$ is the entropy production rate. Its positivity follows from the fact that the dynamics is Markovian \emph{and} that the Gibbs state is an instantaneous fixed point of the dynamical generator at all times. Within the conventional weak coupling and Markovian framework~\cite{SchnakenbergRMP1976, JiangQianQianBook2004, EspositoHarbolaMukamelRMP2009, SekimotoBook2010, SeifertRPP2012, KosloffEntropy2013, SchallerBook2014, VandenBroeckEspositoPhysA2015}, the entropy production rate can be rewritten as $\dot\Sigma(t) = \beta[\dot W(t) - d_t F(t)] \ge 0$, where $\dot W$ is the rate of work done on the system and $d_t F(t)$ denotes the change in non-equilibrium free energy (see Sec.~\ref{sec thermodynamics microlevel} for microscopic definitions of these quantities). The intimate connection between relative entropy and the second law was noticed some time ago in Ref.~\cite{ProcacciaLevineJCP1976} for undriven systems. In the undriven case, the precise form of Eq.~(\ref{eq 2nd law intro}) seems to appear first in Ref.~\cite{SpohnJMP1978} for quantum systems and it is discussed as a Lyapunov function in Ref.~\cite{VanKampenBook2007} for classical systems. The generalization to driven systems was given in Ref.~\cite{LindbladBook1983} and a similar form of Eq.~(\ref{eq 2nd law intro}) also holds for a system in contact with multiple heat baths~\cite{SpohnLebowitzAdvChemPhys1979}, see also Ref.~\cite{AltanerJPA2017} for a recent approach where Eq.~(\ref{eq 2nd law intro}) plays a decisive role. In this paper we will only focus on a single heat bath. While the Markovian assumption is widely used due to the enormous simplifications it enables, it is not always justified. Especially in stochastic thermodynamics an implicit but crucial assumption entering the Markovian description is that the degrees of freedom of the environment are always locally equilibrated with a well-defined associated temperature. This is in general only valid in the limit of time-scale separation where the environmental degrees of freedom can be adiabatically eliminated~\cite{EspositoPRE2012}. There is currently no consensus about the correct thermodynamic description of a system when the local equilibrium assumption for the environment is not met, i.e., when the system dynamics are non-Markovian. 
In particular, while different interesting results were obtained in Refs.~\cite{AndrieuxGaspardJSM2008, EspositoLindenbergPRE2008, RoldanParrondoPRL2010, RoldanParrondoPRE2012, LeggioEtAlPRE2013, BylickaEtAlSciRep2016} by \emph{starting} from a non-Markovian description of the system, the \emph{emergence} of non-Markovianity and its link to an underlying Markovian description of the microscopic degrees of freedom (system \emph{and} bath) had not yet been established. The first main contribution of this paper is to provide a systematic framework for this situation, able to investigate the influence of an environment that is not locally equilibrated. While there has been recently great progress in the \emph{integrated} thermodynamic description of such systems~\cite{SeifertPRL2016, JarzynskiPRX2017, MillerAndersPRE2017, StrasbergEspositoPRE2017}, the instantaneous thermodynamic properties at the rate level were only studied in Ref.~\cite{StrasbergEspositoPRE2017}. We will here see that a framework remarkably similar to the conventional one above arises, with the main difference that the entropy production rate $\dot\Sigma(t)$ can sometimes be negative. We then precisely link the occurrence of $\dot\Sigma(t) < 0$ to underlying dynamical properties of the environment, thereby connecting the abstract mathematical property of (non-)Markovianity to an important physical observable. Our second main contribution is to establish a quantum counterpart for the classical strong coupling scenario studied by Seifert~\cite{SeifertPRL2016}. We find that the integrated thermodynamic description is very similar, but the instantaneous rate level description is \emph{not}. This prevents us from connecting the occurrence of negative entropy production rates to the non-Markovianity of the system evolution. We also provide an explicit example to show that recent claims in the literature about non-Markovianity, negative entropy production rates and steady states of dynamical maps do not hold. \emph{How to read this paper.---} This paper covers a wide range of applications from (i) rate master equations via (ii) classical Hamiltonian dynamics to (iii) quantum systems. We will keep this order in the narrative because it demonstrates beautifully the similarities and discrepancies of the different levels of description. We will start with a purely mathematical description of classical, non-Markovian systems, which arise from an arbitrary coarse-graining of an underlying Markovian network. While Sec.~\ref{sec lumpability Markov chains} reviews known results, Sec.~\ref{sec time dependent MEs} establishes new theorems (Appendices~\ref{sec app weak lumpability} and~\ref{sec app IFP} give additional technical details). Sec.~\ref{sec coarse-grained dissipative dynamics} can then be seen as a direct physical application of the previous section to the coarse-grained dynamics of a Markovian network obeying local detailed balance. In Sec.~\ref{sec classical system bath theory} we change the perspective and consider classical Hamiltonian system-bath dynamics, but with the help of Appendix~\ref{sec app Hamiltonian dynamics} we will see that we obtain identical results to Sec.~\ref{sec coarse-grained dissipative dynamics}. In our last general section~\ref{sec thermo quantum} we consider quantum systems. To illustrate the general theory, each subsection of Sec.~\ref{sec applications} is used to illustrate a particular feature of one of the previous sections.
This roadmap of the paper is shown in Fig.~\ref{fig outline} and we wish to emphasize that it is also possible to read some sections independently. The paper closes by summarizing our results together with the state of the art of the field in Sec.~\ref{sec summary} and by discussing alternative approaches and open questions in Sec.~\ref{sec outlook}. We also provide an example to demonstrate that non-Markovian effects can speed up the erasure of a single bit of information, thereby showing that the field of non-Markovian finite-time thermodynamics provides a promising research direction for the future. \begin{figure} \centering\includegraphics[width=0.40\textwidth,clip=true]{outline5.pdf} \caption{``Roadmap'' of the paper with solid (dotted) arrows indicating strong (weak) dependencies. } \label{fig outline} \end{figure} The following abbreviations are used throughout the text: EP (entropy production), IFP (instantaneous fixed point), ME (master equation), TM (transition matrix), and TSS (time-scale separation). \section{Mathematical preliminaries} \label{sec mathematical results} \subsection{Coarse-grained Markov chains} \label{sec lumpability Markov chains} In this section we establish notation and review some known results about Markov processes under coarse-graining. We will start with the description of a discrete, time-homogeneous Markov chain for simplicity, but soon we will move to the physically more relevant case of an arbitrary continuous-time Markov process described by a ME. Finally, we also introduce the concept of lumpability~\cite{KemenySnellBook1976}. \emph{Discrete, homogeneous Markov chains.---} We consider a Markov process on a discrete space $\C X$ with $N$ states $x\in\C X$ and a fixed TM $T_{\tau}(x|x')$, which propagates the state of the system such that \begin{equation} p_x(n\tau+\tau) = \sum_{y} T_{\tau}(x|y) p_{y}(n\tau) ~~~ (n\in\mathbb{N}), \end{equation} or in vector notation $\bb p(n\tau+\tau) = T_\tau \bb p(n\tau)$. Here, $p_x(n\tau)$ is the probability to find the system in the state $x$ at time $n\tau$, where $\tau > 0$ is an arbitrary but fixed time step (here and in what follows we will set the initial time to ${t_0 \equiv 0}$). Probability theory demands that $\sum_x p_x(n\tau) = 1$, $p_x(n\tau) \ge 0$ for all $x$, $\sum_x T_{\tau}(x|y) = 1$ and $T_{\tau}(x|y) \ge 0$ for all $x,y$. The steady state of the Markov chain is denoted by $\pi_x$ and it is defined via the equation $\boldsymbol\pi = T_{\tau}\boldsymbol\pi$. In this section we exclude the case of multiple steady states for definiteness, although large parts of the resulting theory can be applied to multiple steady states as well.\footnote{The contractivity property of Markov chains, Eqs.~(\ref{eq contractivity}) and~(\ref{eq 2nd law intro}), which plays an important role in the following, holds true irrespective of the number of steady states. } Next, we consider a partition $\boldsymbol\chi = \{\chi_1,\dots,\chi_M\}$ ($1<M<N$) of the state space such that \begin{equation} \bigcup_{\alpha=1}^M \chi_\alpha = \C X, ~~~ \chi_\alpha \cap\chi_\beta = \emptyset \text{ for } \alpha \neq \beta. \end{equation} In the physics literature this is known as a coarse-graining procedure where different ``microstates'' $x$ are collected together into a ``mesostate'' $\alpha$, whereas in the mathematical literature this procedure is usually called lumping.
In the following we will use both terminologies interchangeably and we denote a microstate $x$ belonging to the mesostate $\alpha$ by $x_\alpha$, i.e., $x_\alpha\in\chi_\alpha$. The idea is illustrated in Fig.~\ref{fig lumping}. We remark that tracing out the degrees of freedom of some irrelevant system (usually called the ``bath'') is a special form of coarse-graining. We will encounter this situation, e.g., in Sec.~\ref{sec classical system bath theory}. \begin{figure} \centering\includegraphics[width=0.20\textwidth,clip=true]{lumping.pdf} \caption{Lumping/coarse-graining of a discrete Markov chain with microstate-space $\C X = \{1,2,3,4,5,6,7,8,9\}$ into three mesostates according to the partition $\boldsymbol\chi = \{\chi_\alpha,\chi_\beta,\chi_\gamma\}$ with $\chi_\alpha = \{1,2,3\}$, $\chi_\beta = \{4,5,7,8\}$ and $\chi_\gamma = \{6,9\}$ (grey areas). Possible transitions for which $T_{\tau}(x|y) \neq 0$ are depicted by a solid line connecting state $x$ and $y$. } \label{fig lumping} \end{figure} Any partition $\boldsymbol\chi$ defines a stochastic process on the set of mesostates by considering for a given initial distribution $p_{x}(0)$ the probabilities to visit a sequence of mesostates $\alpha,\beta,\gamma,\dots$ at times $0,\tau,2\tau,\dots$ with joint probabilities \begin{equation} \begin{split} & p(\beta,\tau;\alpha,0) = \sum_{y_\beta,x_\alpha} T_{\tau}(y_\beta|x_\alpha)p_{x|\alpha}(0)p_\alpha(0), \\ & p(\gamma,2\tau;\beta,\tau;\alpha,0) = \\ & \sum_{z_\gamma,y_\beta,x_\alpha} T_{\tau}(z_\gamma|y_\beta) T_{\tau}(y_\beta|x_\alpha)p_{x|\alpha}(0)p_\alpha(0), \end{split} \end{equation} etc., where $p_\alpha(0) = \sum_{x_\alpha} p_{x_\alpha}(0)$ is the marginalized initial mesostate and $p_{x|\alpha}(0) = p_{x_\alpha}(0)/p_\alpha(0)$ is the initial microstate conditioned on a certain mesostate $\alpha$. The hierarchy of joint probabilities $p(\alpha_n,n\tau;\dots;\alpha_1,\tau;\alpha_0,0)$ generated in this way completely specifies the stochastic process at the mesolevel. It is called Markovian whenever the conditional probabilities \begin{equation} \begin{split} & p(\alpha_n,n\tau|\alpha_{n-1},n\tau-\tau;\dots;\alpha_0,0) \\ & \equiv \frac{p(\alpha_n,n\tau;\dots;\alpha_0,0)}{p(\alpha_{n-1},n\tau-\tau;\dots;\alpha_0,0)} \end{split} \end{equation} satisfy the Markov property~\cite{KemenySnellBook1976, VanKampenBook2007, RivasHuelgaPlenioRPP2014, BreuerEtAlRMP2016} \begin{equation}\label{eq cond Markovianity} \begin{split} & p(\alpha_n,n\tau|\alpha_{n-1},n\tau -\tau;\dots;\alpha_0,0) \\ & = p(\alpha_n,n\tau|\alpha_{n-1},n\tau-\tau). \end{split} \end{equation} In practice this requires checking infinitely many conditions. But as we will see below, to compute all quantities of thermodynamic interest, only the knowledge about the evolution of the \emph{one}-time probabilities $p(\alpha_n,n\tau)$ is important for us.
To see how non-Markovianity affects the evolution of the one-time probabilities, we introduce the following matrices derived from the above joint probabilities \begin{align} G_{\tau,0}(\beta|\alpha) &= \frac{p(\beta,\tau;\alpha,0)}{p_\alpha(0)} = \sum_{y_\beta,x_\alpha} T_{\tau}(y_\beta|x_\alpha)p_{x|\alpha}(0), \label{eq TM mesolevel} \\ \tilde G_{2\tau,\tau}(\gamma|\beta) &= \frac{p(\gamma,2\tau;\beta,\tau)}{p_\beta(\tau)} = \frac{\sum_\alpha p(\gamma,2\tau;\beta,\tau;\alpha,0)}{\sum_\alpha p(\beta,\tau;\alpha,0)}, \nonumber \\ G_{2\tau,0}(\gamma|\alpha) &= \frac{p(\gamma,2\tau;\alpha,0)}{p_\alpha(0)} \nonumber \\ &= \sum_{z_\gamma,x_\alpha}\sum_{\beta,y_\beta} T_{\tau}(z_\gamma|y_\beta) T_{\tau}(y_\beta|x_\alpha)p_{x|\alpha}(0). \nonumber \end{align} Formally, these matrices are well-defined conditional probabilities because they are positive and normalized. However, we have deliberately chosen a different notation for $\tilde G_{2\tau,\tau}$ because only $G_{\tau,0}$ and $G_{2\tau,0}$ can be interpreted as \emph{transition} probabilities (or matrices) as they generate the correct time evolution for \emph{any} initial mesostate $p_\alpha(0)$. The matrix $\tilde G_{2\tau,\tau}$ instead depends on the specific choice of $p_\alpha(0)$: if we start with a different initial mesostate $q_\alpha(0)\neq p_\alpha(0)$, we cannot use $\tilde G_{2\tau,\tau}$ to propagate $q_\beta(\tau) = \sum_\beta G_{\tau,0}(\beta|\alpha) q_\alpha(0)$ further in time. This becomes manifest by realizing that the hierarchy of conditional probabilities generated in this way does not in general obey the Chapman-Kolmogorov equation, \begin{equation} G_{2\tau,0}(\gamma|\alpha) = \sum_\beta\tilde G_{2\tau,\tau}(\gamma|\beta)G_{\tau,0}(\beta|\alpha). \end{equation} A way to avoid this undesired feature is to define the TM from time $\tau$ to $2\tau$ via the inverse of $G_{\tau,0}$ (provided it exists)~\cite{HaenggiThomasZPB1977, RivasHuelgaPlenioPRL2010, RivasHuelgaPlenioRPP2014, BreuerEtAlRMP2016} \begin{equation}\label{eq def intermediate TM} G_{2\tau,\tau} \equiv G_{2\tau,0} G_{\tau,0}^{-1}. \end{equation} The TM $G_{2\tau,\tau}$ does not depend on the initial mesostate, preserves the normalization of the state and by construction, it fulfills the Chapman-Kolmogorov equation: $G_{2\tau,0} = G_{2\tau,\tau}G_{\tau,0}$. However, as the inverse of a positive matrix is not necessarily positive, $G_{2\tau,\tau}$ can have negative entries. This clearly indicates that $G_{2\tau,\tau}(\gamma|\beta)$ cannot be interpreted as a conditional probability and hence the process must be non-Markovian. Based on these insights we introduce a weaker notion of Markovianity, which we coin 1-Markovianity. In the context of open quantum systems dynamics this notion is often simply called Markovianity~\cite{RivasHuelgaPlenioRPP2014, BreuerEtAlRMP2016}: \begin{mydef}[1-Markovianity]\label{def 1 Markovian} A stochastic process is said to be 1-Markovian, if the set of TMs $\{G_{n\tau,m\tau}|n\ge m\ge 0\}$ introduced above fulfill $G_{n\tau,m\tau}(\alpha|\beta) \ge0$ for all $n\ge m\ge 0$ and all $\alpha,\beta$. \end{mydef} It is important to realize that the notion of 1-Markovianity is weaker than the notion of Markovianity: if the coarse-grained process is Markovian, then it is also 1-Markovian and the TMs coincide with the conditional probabilities in Eq.~(\ref{eq cond Markovianity}).
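This construction is easy to probe numerically. The following minimal sketch (with an arbitrarily chosen microscopic TM and a fixed conditional preparation $p_{x|\alpha}(0)$, both invented for illustration) lumps a three-microstate chain into two mesostates and checks $G_{2\tau,\tau} = G_{2\tau,0}G_{\tau,0}^{-1}$ for negative entries:
\begin{verbatim}
import numpy as np

# Microscopic column-stochastic TM, T[x, y] = T_tau(x|y); values invented.
T = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.05, 0.70],
              [0.05, 0.90, 0.25]])

parts = [[0, 1], [2]]   # mesostate 0 = {0,1}, mesostate 1 = {2}
cond0 = [np.array([1.0, 0.0]), np.array([1.0])]  # fixed p_{x|alpha}(0)

def lump(Tn):
    """G(beta|alpha) = sum_{y in beta, x in alpha} Tn[y, x] p_{x|alpha}(0)."""
    G = np.zeros((len(parts), len(parts)))
    for a, xs in enumerate(parts):
        for b, ys in enumerate(parts):
            G[b, a] = sum(Tn[y, x] * cond0[a][i]
                          for i, x in enumerate(xs) for y in ys)
    return G

G10 = lump(T)                    # G_{tau,0}
G20 = lump(T @ T)                # G_{2tau,0}
G21 = G20 @ np.linalg.inv(G10)   # G_{2tau,tau}
print(G21)   # off-diagonal entries are negative: not 1-Markovian
\end{verbatim}
Here microstate 1 acts as a transit state between the two mesostates; the resulting memory shows up as negative off-diagonal entries of $G_{2\tau,\tau}$.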
Furthermore, there exist processes which are 1-Markovian but not Markovian according to Eq.~(\ref{eq cond Markovianity}) (see, e.g., Ref.~\cite{RivasHuelgaPlenioRPP2014}). Before we consider MEs, we introduce some further notation. We let \begin{equation} \C A(0) \equiv \{p_x(0)|p_\alpha(0) \text{ arbitrary, } p_{x|\alpha}(0) \text{ fixed}\} \end{equation} be the set of all \emph{physically admissible initial states} with respect to a partition $\boldsymbol\chi$ (whose dependence is implicit in the notation). The reason to keep $p_{x|\alpha}(0)$ fixed is twofold: first, in an experiment one usually does not have detailed control over the microstates, and second, the TMs~(\ref{eq TM mesolevel}) for the lumped process depend on $p_{x|\alpha}(0)$, i.e., every choice of $p_{x|\alpha}(0)$ defines a different stochastic process at the mesolevel and should be treated separately. Which of the mesostates $p_\alpha(0)$ we can really prepare in an experiment is another interesting (but for us unimportant) question; sometimes this could be only a single state (e.g., the steady state $\pi_\alpha$). Of particular importance for the applications later on will be the set \begin{equation}\label{eq stationary preparation class} \C A_{\pi} \equiv \{p_\alpha\pi_{x|\alpha}|p_\alpha \text{ arbitrary}\} \end{equation} where $\pi_{x|\alpha} = \pi_{x_\alpha}/\pi_\alpha$ is the conditional steady state. Experimentally, such a class of states can be prepared by holding the mesostate fixed while allowing the microstates to reach steady state. Finally, we define the set of time-evolved admissible initial states \begin{equation}\label{eq time evolved admissible states} \C A(\tau) \equiv \{\bb p(\tau) = T_\tau\bb p(0)| \bb p(0) \in \C A(0)\}. \end{equation} \emph{Time-dependent MEs.---} For many physical applications it is indeed easier to derive a ME, which describes the continuous time evolution of the system state, than to derive a TM for a finite time step~\cite{VanKampenBook2007, BreuerPetruccioneBook2002}. The ME reads in general \begin{equation}\label{eq ME general} \frac{\partial}{\partial t}p_{x}(t) = \sum_{y} W_{x,y}(\lambda_t) p_{y}(t) \end{equation} or in vector notation $\partial_t \bb p(t) = W(\lambda_t)\bb p(t)$. The rate matrix $W(\lambda_t)$ fulfills $\sum_{x} W_{x,y}(\lambda_t) = 0$ and $W_{x,y}(\lambda_t) \ge 0$ for $x\neq y$ and it is now also allowed to be parametrically dependent on time through a prescribed parameter $\lambda_t$. This situation usually arises by subjecting the system to an external drive, e.g., a time-dependent electric or magnetic field. Furthermore, we assume that the rate matrix has one IFP, which fulfills $W(\lambda_t) \boldsymbol\pi(\lambda_t) = 0$. Clearly, the steady state will in general also parametrically depend on $\lambda_t$. We can connect the ME description to the theory above by noting that the TM over any finite time interval ${[t,t+\tau]}$ is formally given by \begin{equation}\label{eq TM from ME} T_{t+\tau,t} = \C T_+ \exp\int_t^{t+\tau} W(\lambda_s) ds, \end{equation} where $\C T_+$ is the time-ordering operator. In particular, if we choose $\delta t = \tau/N$ small enough such that $\lambda_{t+\delta t} \approx \lambda_t$ (assuming that $\lambda_t$ changes continuously in time), we can approximate the TM to any desired accuracy via \begin{equation} T_{t+\tau,t} \approx \prod_{i=0}^{N-1} T_{t+i\delta t + \delta t,t+i\delta t} \equiv \prod_{i=0}^{N-1} e^{W(\lambda_{t+i\delta t}) \delta t}.
\end{equation} As a notational convention, whenever the system is undriven (i.e., $\dot\lambda_t = 0$ for all $t$), we will simply drop the dependence on $\lambda_t$ in the notation. We now fix an arbitrary partition $\boldsymbol\chi$ as before. To describe the dynamics at the mesolevel, one can use several formally exact procedures, two of which we mention here. First, from Eq.~(\ref{eq ME general}) we get by direct coarse-graining \begin{equation} \begin{split}\label{eq effective rate matrix CG} \frac{\partial}{\partial t} p_\alpha(t) &= \sum_\beta R_{\alpha,\beta}[\lambda_t,p_\alpha(0)] p_\beta(t), \\ R_{\alpha,\beta}[\lambda_t,p_\alpha(0)] &\equiv \sum_{x_\alpha,y_\beta} W_{x_\alpha,y_\beta}(\lambda_t) p_{y|\beta}(t). \end{split} \end{equation} Here, the matrix $R[\lambda_t,p_\alpha(0)]$ still fulfills all properties of an ordinary rate matrix: $\sum_\alpha R_{\alpha,\beta}[\lambda_t,p_\alpha(0)] = 0$ and $R_{\alpha,\beta}[\lambda_t,p_\alpha(0)] \ge 0$ for $\alpha\neq\beta$. However, it explicitly depends on the initial mesostate $p_\alpha(0)$, which influences $p_{y|\beta}(t)$ for later times $t$. This is analogous to the problem mentioned below Eq.~(\ref{eq TM mesolevel}): the TMs computed with Eq.~(\ref{eq effective rate matrix CG}) at intermediate times depend on the initial state of the system. This reflects the non-Markovian character of the dynamics and makes it inconvenient for practical applications. Note that Eq.~(\ref{eq effective rate matrix CG}) still requires solving the full microdynamics and does not provide a closed reduced dynamical description. A strategy to avoid this undesired feature follows the logic of Eq.~(\ref{eq def intermediate TM}) and only makes use of the well-defined transition probability [cf.~Eq.~(\ref{eq TM mesolevel})] \begin{equation}\label{eq TM mesolevel general} G_{t,0}(\alpha|\beta) \equiv \sum_{x_\alpha,y_\beta} T_{t,0}(x_\alpha|y_\beta) p_{y|\beta}(0). \end{equation} Provided that its inverse exists\footnote{Finding a general answer to the question whether the inverse of a dynamical map exists, which allows one to construct a time-local ME, is non-trivial. Nevertheless, many open systems can be described by a time-local ME and this assumption seems to be less strict than one might initially guess. See Refs.~\cite{AnderssonCresserHallJMO2007, MaldonadoMundoEtAlPRA2012} for further research on this topic.}, it allows one to define an effective ME independent of the initial mesostate~\cite{HaenggiThomasZPB1977, RivasHuelgaPlenioPRL2010, RivasHuelgaPlenioRPP2014, BreuerEtAlRMP2016}, \begin{align} \frac{\partial}{\partial t} p_\alpha(t) &= \sum_\beta V_{\alpha,\beta}(\lambda_t,t) p_\beta(t), \label{eq ME meso general} \\ V(\lambda_t,t) &\equiv \lim_{\delta t\rightarrow0}\frac{G_{t+\delta t,0}G_{t,0}^{-1}-1}{\delta t}, \label{eq meso generator ME} \end{align} but where the matrix $V(\lambda_t,t)$ now carries an additional time-dependence, which does not come from the parameter $\lambda_t$. Notice that the construction~(\ref{eq meso generator ME}) shares some similarity with the time-convolutionless ME derived from the Nakajima-Zwanzig projection operator formalism, which is another formally exact ME independent of the initial mesostate~\cite{FulinskiKramarczyk1968, ShibataTakahashiHashitsumeJSP1977, BreuerPetruccioneBook2002, DeVegaAlonsoRMP2017}.
The generator $V(\lambda_t,t)$ preserves normalization and yields a set of TMs which fulfill the Chapman-Kolmogorov equation, but it can have temporarily negative rates, i.e., $V_{\alpha,\beta}(\lambda_t,t) < 0$ for $\alpha\neq\beta$ is possible. This is a clear indicator that the dynamics are not 1-Markovian~\cite{HallEtAlPRA2014}. Finally, we note that there are also other MEs to describe the reduced state of the dynamics, e.g., the standard Nakajima-Zwanzig equation, which is an integro-differential equation~\cite{BreuerPetruccioneBook2002, DeVegaAlonsoRMP2017}. This ME is free from the assumption that the inverse of Eq.~(\ref{eq TM mesolevel general}) exists and is therefore more general. On the other hand, we will see in Sec.~\ref{sec time dependent MEs} that we will need the notion of an IFP of the dynamics, which is hard to define for an integro-differential equation. \emph{Lumpability.---} In this final part we introduce the concept of lumpability from Sec.~6.3 in Ref.~\cite{KemenySnellBook1976}. It will help us to further understand the conditions which ensure Markovianity at the mesolevel and it will be occasionally used in the following. Following Ref.~\cite{KemenySnellBook1976}, we first introduce the concept for discrete, time-homogeneous Markov chains before we consider MEs again. Furthermore, we emphasize that in the definition below the notion of Markovianity refers to the usual property~(\ref{eq cond Markovianity}) and not only to the one-time probabilities. Another related weaker concept (known as ``weak lumpability'') is treated for the interested reader in Appendix~\ref{sec app weak lumpability}. \begin{mydef}[Lumpability]\label{def strong lumpability} A Markov chain with TM $T_\tau$ is lumpable with respect to a partition $\boldsymbol\chi$ if for every initial distribution $p_{x}(0)$ the lumped process is a Markov chain with transition probabilities independent of $p_{x}(0)$. \end{mydef} It follows from the definition that a lumpable process for a given TM $T_\tau$ and partition $\boldsymbol\chi$ is also a lumpable process for all larger times, i.e., for all $T_{n\tau} = (T_\tau)^n$ with $n>1$ and the same partition $\boldsymbol\chi$. The following theorem will be useful for us: \begin{thm}\label{thm strong lumpability} A necessary and sufficient condition for a Markov chain to be lumpable with respect to the partition $\boldsymbol\chi$ is that \begin{equation}\label{eq cond lumpability} \C G_{\tau}(\alpha|\beta) \equiv \sum_{x_\alpha} T_{\tau}(x_\alpha|y_\beta) = \sum_{x_\alpha} T_{\tau}(x_\alpha|y'_\beta) \end{equation} holds for any $y_\beta\neq y'_\beta$. The lumped process then has the TM $\C G_{\tau}$. \end{thm} The details of the proof can be found in Ref.~\cite{KemenySnellBook1976}. However, it is obvious that the so-defined set of TMs is independent of the initial state. In addition, one can readily check that they fulfill the Chapman-Kolmogorov equation, are normalized and have positive entries. The concept of lumpability can be straightforwardly extended to time-dependent MEs by demanding that a lumpable ME with respect to the partition $\boldsymbol\chi$ has lumpable TMs $T_{t+\delta t,t}$ for any time $t$ and every $\delta t>0$.
By expanding Eq.~(\ref{eq cond lumpability}) in $\delta t$ and by taking $\delta t\rightarrow0$, we obtain the following corollary (see also Ref.~\cite{NicolisPRE2011}): \begin{crllr}\label{thm cor lumpable ME} A ME with possibly time-dependent rates is lumpable with respect to the partition $\boldsymbol\chi$ if and only if \begin{equation}\label{eq cond lumpability ME} \C V_{\alpha,\beta}(\lambda_t) \equiv \sum_{x_\alpha} W_{x_\alpha,y_\beta}(\lambda_t) = \sum_{x_\alpha} W_{x_\alpha,y'_\beta}(\lambda_t) \end{equation} for any $y_\beta\neq y'_\beta$ and any $t$. The lumped process is then governed by the rate matrix $\C V(\lambda_t)$. \end{crllr} Notice that the dynamical description of a lumpable ME is unambiguous because the generator $R[\lambda_t,p_\alpha(0)]$ from Eq.~(\ref{eq effective rate matrix CG}) and $V(\lambda_t,t)$ from Eq.~(\ref{eq meso generator ME}) both coincide with $\C V(\lambda_t)$ from the above corollary. For $R[\lambda_t,p_\alpha(0)]$ this follows from directly applying Eq.~(\ref{eq cond lumpability ME}) to Eq.~(\ref{eq effective rate matrix CG}). For $V(\lambda_t,t)$ this follows from the fact that the propagator in Eq.~(\ref{eq def intermediate TM}) coincides for a Markovian process with the transition probabilities obtained from Eq.~(\ref{eq cond Markovianity}), which for a lumpable process are identical to the TMs introduced in Theorem~\ref{thm strong lumpability}. All generators are then identical and have the same well-defined rate matrix. In the following we will stop repeating that any concept at the coarse-grained level is always introduced ``with respect to the partition $\boldsymbol\chi$''. Furthermore, to facilitate the readability, Table~\ref{table} summarizes the most important notation used in this section and in the remainder. \begin{table}[h] \centering \begin{tabular}{l|l} symbol & meaning \\ \hline $\C X$ & full state space \\ $\boldsymbol\chi$ & state space partition \\ $x$ & arbitrary microstate \\ $\alpha$ & mesostate \\ $x_\alpha$ & microstate belonging to mesostate $\alpha$ \\ $\pi_x(\lambda_t)$ & microlevel IFP \\ $\pi_\alpha(\lambda_t)$ & $= \sum_{x_\alpha} \pi_{x_\alpha}(\lambda_t)$ (no IFP in general!) \\ $\C A(0)$ & set of admissible initial states \\ $\C A_\pi(\lambda_t)$ & $\rightarrow$ Eq.~(\ref{eq stationary preparation class}), in general dependent on $\lambda_t$ \\ $\C A(t)$ & $\C A(0)$ time-evolved \\ $p_x(t)$ [$\rho(x;t)$] & microstate probability discrete [continuous] \\ $p_\alpha(t)$ [$\rho(\alpha;t)$] & mesostate probability discrete [continuous] \\ $W(\lambda_t)$ & rate matrix for microdynamics \\ $D[p_x\|q_x]$ & relative entropy \\ \end{tabular} \caption{\label{table} List of symbols frequently used in the text. } \end{table} \subsection{Entropy production rates, non-Markovianity and instantaneous fixed points} \label{sec time dependent MEs} After having discussed how to describe the dynamics at the mesolevel, we now turn to its thermodynamics. This is still done in an abstract way without recourse to an underlying physical model. An important concept in our theory is the notion of an IFP, which we define as follows: \begin{mydef}[Instantaneous fixed point]\label{def IFP} Let $V(\lambda_t,t)$ be the generator of the time-local ME~(\ref{eq ME meso general}). We say that $\tilde{\bs\pi}(t)$ is an IFP of the dynamics if $V(\lambda_t,t)\tilde{\bs\pi}(t) = 0$. \end{mydef} We notice that $\tilde{\bs\pi}(t)$ does not need to be a well-defined probability distribution because $V(\lambda_t,t)$ can have negative rates.
We also point out that the IFP at time $t$ might not be reachable from any state in the class of initially admissible states; it is therefore a purely abstract concept. Hence, while $V(\lambda_t,t)\tilde{\bs\pi}(t) = 0$, it need not be true that $R[\lambda_t,p_\alpha(0)]\tilde{\bs\pi}(t) = 0$ for any $p_{x_\alpha}(0)\in\C A(0)$. The IFP cannot be computed with the help of the effective rate matrix in Eq.~(\ref{eq effective rate matrix CG}); it is only well-defined for a time-local ME with a generator independent of the initial mesostate. In Appendix~\ref{sec app IFP} we will show that it also does not matter how we have derived the ME as long as it is time-local, formally exact and independent of the initial mesostate. In the first part of this section, we introduce the concept of the EP rate in a formal way and establish a general theorem. In the second part, we will answer the question of when the IFP $\tilde{\bs\pi}(t)$ coincides with the marginalized IFP of the microdynamics, \begin{equation}\label{eq marginalized IFP} \pi_\alpha(\lambda_t) = \sum_{x_\alpha} \pi_{x_\alpha}(\lambda_t). \end{equation} \emph{EP rate.---} We define the EP rate for the coarse-grained process by \begin{equation}\label{eq ent prod abstract} \begin{split} \dot\Sigma(t) &\equiv -\left.\frac{\partial}{\partial t}\right|_{\lambda_t} D[p_\alpha(t)\|\pi_\alpha(\lambda_t)] \\ &= -\sum_\alpha \frac{\partial p_\alpha(t)}{\partial t}[\ln p_\alpha(t) - \ln\pi_\alpha(\lambda_t)], \end{split} \end{equation} where $\pi_\alpha(\lambda_t)$ was defined in Eq.~(\ref{eq marginalized IFP}).\footnote{We remark that it turns out to be important to use the coarse-grained steady state $\pi_\alpha(\lambda_t)$ in our definition~(\ref{eq ent prod abstract}) and not the actual IFP $\tilde\pi_\alpha(t)$ of the generator $V(\lambda_t,t)$. In the latter case, the so-defined EP rate has a clear thermodynamic meaning only in the Markovian limit, where it was previously identified with the non-adiabatic part of the EP rate~\cite{EspositoVanDenBroeckPRE2010, VanDenBroeckEspositoPRE2010}. } Notice that $\dot\Sigma(t)$ can be defined for any stochastic process and \emph{a priori} it is not related to the physical EP rate known from nonequilibrium thermodynamics. However, for the systems considered in Secs.~\ref{sec coarse-grained dissipative dynamics} and~\ref{sec classical system bath theory} this will turn out to be the case. Having emphasized this point, we decided for simplicity to refrain from introducing a new terminology for $\dot\Sigma(t)$ in this section. Furthermore, we remark that the definition of $\dot\Sigma(t)$ is experimentally meaningful: it only requires measuring the mesostate $p_\alpha(t)$ and knowledge of $\pi_\alpha(\lambda_t)$. The latter can be obtained by measuring the steady state of the system after holding $\lambda_t$ fixed for a long time or by arguments of equilibrium statistical mechanics (see Secs.~\ref{sec coarse-grained dissipative dynamics} and~\ref{sec classical system bath theory}). On the theoretical side, Eq.~(\ref{eq ent prod abstract}) can be evaluated with any method that gives the exact evolution of the mesostates. The following theorem shows how to connect negative EP rates to non-Markovianity. Application of this theorem to various physical situations will be the purpose of the next sections.
\begin{thm}\label{thm ent prod} If $\pi_\alpha(\lambda_t)$ is an IFP of the mesodynamics and if $I$ denotes the time interval in which the mesodynamics are 1-Markovian, then $\dot\Sigma(t) \ge 0$ for all $t\in I$. \end{thm} To prove this theorem, it is useful to recall the well-known lemma, which we have stated already in Eq.~(\ref{eq contractivity}): \begin{lemma}\label{lemma Markov contractivity} For a 1-Markovian process the relative entropy between any two probability distributions is monotonically non-increasing in time, i.e., for all $t$ and any pair of initial distributions $p_\alpha(0)$ and $q_\alpha(0)$ Eq.~(\ref{eq contractivity}) holds. \end{lemma} This lemma follows from the fact that, firstly, for every stochastic matrix $M$ and any pair of distributions $p_\alpha$ and $q_\alpha$ one has that \begin{equation}\label{eq contractivity rel ent} D\left[\sum_\beta M_{\alpha,\beta}p_\beta\left\|\sum_\beta M_{\alpha,\beta}q_\beta\right]\right. \le D[p_\alpha\|q_\alpha], \end{equation} and secondly, for a 1-Markovian process the TM at any time $t$ and for every time step $\delta t$ is stochastic. We can now prove Theorem~\ref{thm ent prod}: \begin{proof} By definition of the EP rate we have \begin{align} & \dot\Sigma(t) = \label{eq help 3} \\ & -\lim_{\delta t\rightarrow0}\frac{D[G_{t+\delta t,t}\bb p_\text{cg}(t)\|\bs\pi_\text{cg}(\lambda_t)]-D[\bb p_\text{cg}(t)\|\bs\pi_\text{cg}(\lambda_t)]}{\delta t}, \nonumber \end{align} where $G_{t+\delta t,t}$ is the propagator obtained from the ME~(\ref{eq ME meso general}) [cf.~also Eq.~(\ref{eq def intermediate TM})], $\bb p_\text{cg}(t)$ denotes the vector of the coarse-grained state $p_\alpha(t)$ and likewise for $\bs\pi_\text{cg}(\lambda_t)$. Next, we use the assumption that $\bs\pi_\text{cg}(\lambda_t)$ is an IFP of the ME~(\ref{eq ME meso general}), i.e., we have \begin{equation} G_{t+\delta t,t}\bs\pi_\text{cg}(\lambda_t) \approx \bs\pi_\text{cg}(\lambda_t) \end{equation} and any possible discrepancy is of order $\delta t^2$ and hence vanishes in the limit $\delta t\rightarrow0$ even after division by $\delta t$. Thus, we can rewrite Eq.~(\ref{eq help 3}) as \begin{equation} \begin{split} \dot\Sigma(t) = -\lim_{\delta t\rightarrow0}\frac{1}{\delta t}\big\{ & D[G_{t+\delta t,t}\bb p_\text{cg}(t)\|G_{t+\delta t,t}\bs\pi_\text{cg}(\lambda_t)] \\ & -D[\bb p_\text{cg}(t)\|\bs\pi_\text{cg}(\lambda_t)]\big\}. \end{split} \end{equation} Now, if the dynamics is 1-Markovian (Definition~\ref{def 1 Markovian}), then $G_{t+\delta t,t}$ is a stochastic matrix and from Eq.~(\ref{eq contractivity rel ent}) it follows that $\dot\Sigma(t) \ge 0$. \end{proof} Whereas the proof of Theorem~\ref{thm ent prod} is straightforward, two things make it a non-trivial statement. First, we will show that the EP rate defined in Eq.~(\ref{eq ent prod abstract}) deserves its name because it can be linked to \emph{physical} quantities with a \emph{precise thermodynamic interpretation}. This will be done in Secs.~\ref{sec coarse-grained dissipative dynamics} and~\ref{sec classical system bath theory}. Second, the essential assumption that $\pi_\alpha(\lambda_t)$ is an IFP of the mesodynamics is non-trivial: it is \emph{not} a consequence of a 1-Markovian time-evolution and it can also hold for \emph{non}-Markovian dynamics. The details of this crucial assumption will be worked out in the remainder of this section, but already at this point we emphasize that 1-Markovianity alone is \emph{not} sufficient to guarantee that $\dot\Sigma(t) \ge 0$.
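As a simple numerical illustration of Theorem~\ref{thm ent prod} (and of the contractivity lemma used in its proof), the following sketch propagates an arbitrary mesostate with a frozen, genuinely Markovian generator and evaluates Eq.~(\ref{eq ent prod abstract}) by finite differences; since the protocol is held fixed, the partial derivative at fixed $\lambda_t$ reduces to an ordinary time derivative. The $3\times3$ generator is an illustrative choice of ours.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def rel_ent(p, q):
    # Relative entropy D[p||q] with the natural logarithm.
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Frozen generator with positive off-diagonal rates, zero column sums.
W = np.array([[-1.0,  0.5,  0.4],
              [ 0.7, -0.9,  0.6],
              [ 0.3,  0.4, -1.0]])

# Its fixed point pi (kernel of W), normalized to a probability vector.
_, _, Vh = np.linalg.svd(W)
pi = Vh[-1] / Vh[-1].sum()

p, dt = np.array([0.7, 0.2, 0.1]), 1e-4   # arbitrary initial mesostate
for step in range(5):
    p_next = expm(W * dt) @ p             # stochastic propagator G_{t+dt,t}
    # Finite-difference version of Eq. (ent prod abstract):
    Sigma_dot = -(rel_ent(p_next, pi) - rel_ent(p, pi)) / dt
    print(Sigma_dot)                      # >= 0, as the theorem guarantees
    p = p_next
\end{verbatim}
For a propagator that is not a stochastic matrix (negative rates) this argument breaks down and $\dot\Sigma(t) < 0$ becomes possible, in line with the discussion above.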
The Venn diagram in Fig.~\ref{fig Venn} should help to better understand the implications of Theorem~\ref{thm ent prod}. \begin{figure} \centering\includegraphics[width=0.40\textwidth,clip=true]{Venn.pdf} \caption{A Venn diagram illustrating the implications of Theorem~\ref{thm ent prod}. The largest outer box contains all possible lumped stochastic processes. One subset of them is 1-Markovian (shaded in grey). For another subset the marginalized microlevel steady state $\pi_\alpha(\lambda_t)$ is an IFP of the dynamics (striped area). Where both sets overlap, $\dot\Sigma(t)\ge0$ is guaranteed, i.e., whenever we observe $\dot\Sigma(t) < 0$ we cannot be simultaneously in the striped and in the shaded grey area. Note that the Venn diagram shows the situation for a fixed driving protocol $\lambda_t$ and time interval $I$. Depending on $\lambda_t$ and $I$, the shaded grey and the striped areas can change parametrically in time. } \label{fig Venn} \end{figure} \emph{IFP of the coarse-grained process.---} To answer the question of when $\tilde\pi_\alpha(t) = \pi_\alpha(\lambda_t)$ holds, we start with the simple case and assume that the coarse-grained dynamics are lumpable. Hence, according to Corollary~\ref{thm cor lumpable ME} there is a unique and well-defined rate matrix. We then get: \begin{thm}\label{thm steady state strong} If the stochastic process is lumpable for some time interval $I$, then the IFP of the mesostates is given by the marginal IFP of $W(\lambda_t)$ for all $t\in I$. \end{thm} \begin{proof} We want to show that $\C V(\lambda_t)\bs\pi(\lambda_t) = 0$. By using Corollary~\ref{thm cor lumpable ME} in the first and third equality, we obtain \begin{equation} \begin{split} \sum_\beta \C V_{\alpha,\beta}(\lambda_t)\pi_\beta(\lambda_t) &= \sum_\beta\sum_{x_\alpha} W_{x_\alpha,y_\beta}(\lambda_t)\pi_\beta(\lambda_t) \\ &= \sum_\beta\sum_{x_\alpha,y'_\beta} W_{x_\alpha,y_\beta}(\lambda_t)\pi_{y'_\beta}(\lambda_t) \\ &= \sum_\beta\sum_{x_\alpha,y'_\beta} W_{x_\alpha,y'_\beta}(\lambda_t)\pi_{y'_\beta}(\lambda_t), \end{split} \end{equation} which is zero since $\pi_{y'_\beta}(\lambda_t)$ is the IFP at the microlevel. \end{proof} Therefore, together with Theorem~\ref{thm ent prod} we can infer that $\dot\Sigma(t) < 0$ unambiguously shows that the dynamics are not lumpable. However, lumpability requires the coarse-grained process to fulfill the Markov property~(\ref{eq cond Markovianity}) for any initial condition, which is a rather strong property. We are therefore interested in whether a negative EP rate also reveals insights into the weaker property of 1-Markovianity. For instance, for undriven processes we intuitively expect that, provided that we start at steady state, we always remain at steady state, independently of the time-dependence of the generator~(\ref{eq meso generator ME}) or even the question whether the inverse of Eq.~(\ref{eq TM mesolevel general}) exists. Then, negative values of the EP rate will always indicate non-Markovian dynamics for undriven systems. Indeed, the following theorem holds: \begin{thm}\label{thm steady state weak} Consider an undriven stochastic process described by the ME~(\ref{eq ME meso general}), i.e., we assume $G_{t,0}^{-1}$ to exist for all admissible initial states $\C A(0)$ and all times $t$. If the conditional microstates are initially equilibrated, $\C A(0) \subset\C A_\pi$ [Eq.~(\ref{eq stationary preparation class})], then $\pi_\alpha$ is an IFP of the stochastic process at the mesolevel.
\end{thm} \begin{proof} If $\C A(0) \subset\C A_\pi$, we can conclude that $\sum_\beta G_{t,0}(\alpha|\beta)\pi_\beta = \pi_\alpha$, i.e., if we start with the coarse-grained steady state we also remain in it for all times $t$. Since $G_{t,0}$ was assumed to be invertible, \begin{equation}\label{eq invertible steady state} \sum_\beta (G_{t,0}^{-1})_{\alpha,\beta}\pi_\beta = \pi_\alpha. \end{equation} Hence, by definition~(\ref{eq meso generator ME}) we obtain the chain of equalities \begin{equation} \begin{split}\label{eq help thm steady state weak} & \sum_{\beta}\left(\lim_{\delta t\rightarrow0}\frac{G_{t+\delta t,0}G_{t,0}^{-1} - 1}{\delta t}\right)_{\alpha,\beta} \pi_\beta \\ & = \lim_{\delta t\rightarrow0}\frac{1}{\delta t}\left[\sum_{\beta,\gamma} G_{t+\delta t,0}(\alpha|\gamma)(G_{t,0}^{-1})_{\gamma,\beta}\pi_\beta - \pi_\alpha\right] \\ & = \lim_{\delta t\rightarrow0}\frac{1}{\delta t}\left[\sum_{\gamma} G_{t+\delta t,0}(\alpha|\gamma)\pi_\gamma - \pi_\alpha\right] \\ & = \lim_{\delta t\rightarrow0}\frac{1}{\delta t}\left[\pi_\alpha - \pi_\alpha\right] = 0. \end{split} \end{equation} \end{proof} We recognize a marked difference in the characterization of the IFPs for driven and undriven processes. Without driving, the right set of initial states already suffices to show that the microlevel steady state induces the steady state at the mesolevel, even if the dynamics is non-Markovian. Thus, for this kind of dynamics $\dot\Sigma(t) < 0$ unambiguously signifies non-Markovianity. For driven systems, instead, we needed the much stronger requirement of lumpability, i.e., Markovianity of the lumped process with TMs independent of the initial microstate. However, at least formally it is possible to establish the following additional theorem: \begin{thm}\label{thm steady state general} Consider a driven stochastic process described by the ME~(\ref{eq ME meso general}), i.e., we assume $G_{t,0}^{-1}$ to exist for all initial states and all times $t$. We denote by $I$ the time-interval in which either \begin{enumerate} \item all conditional microstates in the set of time-evolved states are at steady state, $\C A(t)\subset\C A_\pi(\lambda_t)$, or \item the IFP of the microdynamics is an admissible time-evolved state, $\pi_x(\lambda_t)\in\C A(t)$. \end{enumerate} Then, $\pi_{\alpha}(\lambda_t)$ is an IFP of the lumped process for all $t\in I$. \end{thm} \begin{proof} First of all, notice that the ME~(\ref{eq ME meso general}) generates the exact time evolution, i.e., for any $p_y(t) = p_\beta(t)p_{y|\beta}(t)\in\C A(t)$ we have \begin{equation} \begin{split}\label{eq help 4} & \sum_\beta V_{\alpha,\beta}(\lambda_t,t) p_\beta(t) \\ & = \sum_{x_\alpha}\sum_{\beta,y_\beta} W_{x_\alpha,y_\beta}(\lambda_t) p_{y|\beta}(t)p_\beta(t). \end{split} \end{equation} For the first condition, if $\pi_x(\lambda_t)\in\C A(t) \subset \C A_\pi(\lambda_t)$, then one immediately verifies that $V(\lambda_t,t)\bs\pi(\lambda_t) = 0$. However, it may happen that $\C A(t) \subset \C A_\pi(\lambda_t)$ while $\pi_x(\lambda_t)\notin\C A(t)$. This means that there is no admissible initial state that gets mapped to the IFP at time $t$, i.e., $G_{t,0}^{-1}\bs{\pi}(\lambda_t) \notin\C A(0)$. However, by the invertibility of the dynamics there is always a set of states $p_x^{(i)}(t)\in\C A(t)$, which spans the entire mesostate space. Thus, we can always find a linear combination $\pi_x(\lambda_t) = \sum_i \mu_i p_x^{(i)}(t)$ with $\mu_i\in\mathbb{R}$.
Then, $V(\lambda_t,t)\bs\pi(\lambda_t) = 0$ follows from the linearity of the dynamics by applying Eq.~(\ref{eq help 4}) to each term of the linear combination. For the second condition, let us assume the opposite, i.e., $V(\lambda_t,t)\bs\pi(\lambda_t) \neq 0$. This implies $\sum_\beta G_{t+\delta t,t}(\alpha|\beta) \pi_\beta(\lambda_t) \neq \pi_\alpha(\lambda_t)$ for a sufficiently small $\delta t$. But as the reduced dynamics are exact, this can only be the case if there is a state $q_y(t) = \pi_\beta(\lambda_t)q_{y|\beta}(t)\in\C A(t)$ with $q_{y|\beta}(t)\neq \pi_{y|\beta}(\lambda_t)$. On the other hand, the theorem assumes that $\pi_x(\lambda_t)\in\C A(t)$ too. Hence, there must be two states $q_y(t)\in\C A(t)$ and $\pi_x(\lambda_t)\in\C A(t)$ that give the same marginal mesostate $\pi_\alpha(\lambda_t)$. Since the ME dynamics in the full space are clearly invertible and since the initial conditional microstate is fixed, this means that there must be two different initial mesostates that get mapped to the same mesostate at time $t$. Hence, $G_{t,0}$ cannot be invertible, which conflicts with our initial assumption. \end{proof} Theorem~\ref{thm steady state general} plays an important role in the limit of TSS (see Sec.~\ref{sec time scale separation}) where the first condition is automatically fulfilled. The second condition will in general be complicated to check if the microdynamics are complex. It is worthwhile to ask whether milder conditions suffice to ensure that $\pi_\alpha(\lambda_t)$ is an IFP of the mesodynamics. In Appendix~\ref{sec app weak lumpability} we show that they can indeed be found if the dynamics fulfills the special property of weak lumpability. In general, however, we believe that it will be hard to find milder conditions: in Sec.~\ref{sec steady states} we give an example of an ergodic and undriven Markov chain whose mesodynamics are 1-Markovian, but $\pi_\alpha$ is not an IFP unless $\C A(0)\subset\C A_\pi$. As any driven process takes the conditional microstates out of equilibrium, i.e., $\C A(t) \nsubseteq \C A_\pi(\lambda_t)$ in general, finding useful milder conditions to guarantee that $\pi_\alpha(\lambda_t)$ is an IFP seems unrealistic. Before we proceed with the physical picture, we want to comment on a mathematical subtlety that becomes relevant for the application considered in Sec.~\ref{sec classical system bath theory}. There, we will apply our findings from above to the case of Hamiltonian dynamics described on the continuous phase space of a collection of classical particles. This does not fit into the conventional picture of a finite and discrete state space $\C X$ with $N<\infty$ microstates. However, under the assumption that the actual Hamiltonian dynamics can be approximated arbitrarily well on a high-dimensional grid of very small phase space cells, the picture of a finite, discretized state space still applies. Nevertheless, in order not to rely on this way of reasoning, we briefly re-derive the above theorems for the Hamiltonian setting in Appendix~\ref{sec app Hamiltonian dynamics}. \section{Coarse-grained dissipative dynamics} \label{sec coarse-grained dissipative dynamics} \subsection{Thermodynamics at the microlevel} \label{sec thermodynamics microlevel} We now start to investigate the first application of the general framework from Sec.~\ref{sec mathematical results}.
In this section we consider the ME~(\ref{eq ME general}), which describes a large class of dissipative classical and quantum systems, with applications ranging from molecular motors to thermoelectric devices. In addition, we impose the condition of local detailed balance, \begin{equation}\label{eq local detailed balance} \ln\frac{W_{x,y}(\lambda_t)}{W_{y,x}(\lambda_t)} = -\beta[E_x(\lambda_t) - E_y(\lambda_t)], \end{equation} where $E_x(\lambda_t)$ denotes the energy of state $x$ and $\beta$ the inverse temperature of the bath. Eq.~(\ref{eq local detailed balance}) ensures that the IFP at the microlevel is given by the Gibbs state $\pi_x(\lambda_t) = e^{-\beta E_x(\lambda_t)}/Z(\lambda_t)$ with $Z(\lambda_t) = \sum_x e^{-\beta E_x(\lambda_t)}$ and it allows us to link energetic changes in the system with entropic changes in the bath. A thermodynamically consistent description of the microdynamics follows from the definitions \begin{align} U_\text{mic}(t) &\equiv \sum_x E_x(\lambda_t) p_x(t) ~ (\text{internal energy}), \label{eq U mic} \\ \dot W_\text{mic}(t) &\equiv \sum_x [d_t E_{x}(\lambda_t)]p_x(t) ~ (\text{work rate}), \label{eq work} \\ \dot Q_\text{mic}(t) &\equiv \sum_{x} E_{x}(\lambda_t) \partial_t p_{x}(t) ~ (\text{heat rate}), \label{eq heat} \\ S_\text{mic}(t) &\equiv -\sum_x p_x(t)\ln p_{x}(t) ~ (\text{Shannon entropy}), \label{eq S mic} \\ F_\text{mic}(t) &\equiv U_\text{mic}(t) - S_\text{mic}(t)/\beta ~ (\text{free energy}), \label{eq F mic} \\ \dot\Sigma_\text{mic}(t) &\equiv -\left.\frac{\partial}{\partial t}\right|_{\lambda_t} D[p_x(t)\|\pi_x(\lambda_t)] \ge 0 ~ (\text{EP rate}). \end{align} Here, we used the subscript ``mic'' to emphasize that the above definitions refer to the thermodynamic description of the microdynamics, which has to be distinguished from the thermodynamic description at the mesolevel introduced below. Using the ME~(\ref{eq ME general}) and local detailed balance~(\ref{eq local detailed balance}) together with the definitions provided above, one can verify the first and second law of thermodynamics in the conventional form: $d_t U_\text{mic}(t) = \dot W_\text{mic}(t) + \dot Q_\text{mic}(t)$ and $\dot\Sigma_\text{mic}(t) = \beta[\dot W_\text{mic}(t) - d_t F_\text{mic}(t)] \ge 0$. Since the IFP at the microlevel is the equilibrium Gibbs state, we can parametrize the conditional equilibrium state of the microstates belonging to a mesostate $\alpha$ as \begin{equation}\label{eq cond eq state CG dynamics} \pi_{x|\alpha}(\lambda_t) = e^{-\beta[E_{x_\alpha}(\lambda_t) - F_\alpha(\lambda_t)]}, \end{equation} where $F_\alpha(\lambda_t) \equiv -\beta^{-1}\ln\sum_{x_\alpha} e^{-\beta E_{x_\alpha}(\lambda_t)}$ plays the role of an effective free energy. The reduced equilibrium distribution of a mesostate can then be written as \begin{equation}\label{eq state CG dynamics} \pi_\alpha(\lambda_t) = \frac{e^{-\beta F_\alpha(\lambda_t)}}{Z(\lambda_t)}. \end{equation} In the following we want to find meaningful definitions that allow us to formulate the laws of thermodynamics at a coarse-grained level and that we can connect to the general theory of Sec.~\ref{sec mathematical results}. Since the dynamics at the mesolevel will typically be non-Markovian and not fulfill local detailed balance, finding a consistent thermodynamic framework becomes non-trivial. We will restrict our investigations here to any initial preparation class which fulfills $\C A(0)\subset\C A_\pi(\lambda_0)$ with $\C A_\pi(\lambda_0)$ defined in Eq.~(\ref{eq stationary preparation class}).
If the dynamics is driven, we will need one additional assumption [see Eq.~(\ref{eq cond bipartite driving})]; otherwise our results are general. \subsection{Thermodynamics at the mesolevel} With the framework from Sec.~\ref{sec mathematical results} we are now going to study the thermodynamics at the mesolevel. This is possible in full generality if the dynamics are undriven. In case of driving, $\dot\lambda_t\neq0$, we need to assume that we can split the time-dependent energy function as \begin{equation}\label{eq cond bipartite driving} E_{x_\alpha}(\lambda_t) = E_{\alpha}(\lambda_t) + \tilde E_{x_\alpha}. \end{equation} Thus, solely the mesostate energies are affected by the driving. This condition naturally arises if we think about the complete system as being composed of two interacting systems, $\C X = \C Y\otimes \C Z$, and we trace out the degrees of freedom $\C Y$ to obtain a reduced description in $\C Z$. In this case we can split the energy for any value of $\lambda_t$ as $E_{yz} = E_y + E_z + V_{yz}$, where $V_{yz}$ describes an interaction energy and $E_y$ ($E_z$) are the bare energies associated with the isolated system $\C Y$ ($\C Z$). Condition~(\ref{eq cond bipartite driving}) is then naturally fulfilled if we identify $E_z = E_\alpha$ and only $E_z = E_z(\lambda_t)$ is time-dependent (compare also with Sec.~\ref{sec classical system bath theory}). Importantly, this condition allows us to identify \begin{equation} \begin{split}\label{eq work bipartite} \dot W_\text{mic}(t) &= \sum_x \frac{\partial E_x(\lambda_t)}{\partial t}p_x(t) \\ &= \sum_\alpha \frac{\partial E_\alpha(\lambda_t)}{\partial t} p_\alpha(t) \equiv \dot W(t). \end{split} \end{equation} Therefore, the exact rate of work can be computed from knowledge of the mesostate alone. Furthermore, Eq.~(\ref{eq cond bipartite driving}) implies that the conditional equilibrium state~(\ref{eq cond eq state CG dynamics}) of the microstates does not depend on $\lambda_t$ and hence, we can write $\C A_\pi(\lambda_0) = \C A_\pi$. The thermodynamic analysis starts from our central definition~(\ref{eq ent prod abstract}) \begin{equation} \dot\Sigma(t) = -\left.\frac{\partial}{\partial t}\right|_{\lambda_t} D[p_\alpha(t)\|\pi_\alpha(\lambda_t)] \end{equation} with $\pi_\alpha(\lambda_t)$ given in Eq.~(\ref{eq state CG dynamics}). Using Eq.~(\ref{eq work bipartite}) and noting that $d_t F_\alpha(\lambda_t) = d_t E_\alpha(\lambda_t)$, it is not hard to confirm that \begin{equation} \dot\Sigma(t) = \beta\dot W(t) - \beta\frac{d}{dt}\sum_\alpha p_\alpha(t)\left[F_\alpha(\lambda_t) + \frac{1}{\beta}\ln p_\alpha(t)\right]. \end{equation} This motivates the definition of the nonequilibrium free energy \begin{equation} F(t) \equiv \sum_\alpha p_\alpha(t)\left[F_\alpha(\lambda_t) + \frac{1}{\beta}\ln p_\alpha(t)\right], \label{eq free energy mesolevel} \end{equation} such that the EP rate is given by the familiar form of phenomenological non-equilibrium thermodynamics: $\dot\Sigma(t) = \beta[\dot W(t) - d_t F(t)]$. The EP over a finite time interval becomes \begin{equation}\label{eq ent prod CG dynamics} \Sigma(t) = \beta[W(t) - \Delta F(t)] \end{equation} and for a proper second law it remains to show that this quantity is non-negative. This follows from: \begin{thm} For any $p_x(0) \in \C A_\pi$ and any driving protocol we have \begin{equation} \Sigma(t) \ge \Sigma_\text{\normalfont mic}(t) \ge 0. \end{equation} \end{thm} \begin{proof} The proof was already given in Ref.~\cite{StrasbergEspositoPRE2017}.
In short, one rewrites \begin{equation} \Sigma(t) - \Sigma_\text{mic}(t) = \beta[\Delta F_\text{mic}(t) - \Delta F(t)] \end{equation} and shows that for $p_x(0) \in \C A_\pi$ it follows that \begin{equation} \begin{split} & \beta[\Delta F_\text{mic}(t) - \Delta F(t)] \\ & = D[p_x(t)\|\pi_x(\lambda_t)] - D[p_\alpha(t)\|\pi_\alpha(\lambda_t)] \\ & = \sum_\alpha p_\alpha(t) D[p_{x|\alpha}(t)\|\pi_{x|\alpha}] \ge 0. \end{split} \end{equation} Since $\Sigma_\text{mic}(t)\ge 0$, this implies $\Sigma(t)\ge 0$. \end{proof} Using the theorems of Sec.~\ref{sec time dependent MEs}, we can now connect the appearance of negative EP rates to the following properties of the underlying dynamics: \begin{thm}\label{thm ent prod bipartite case} Let $p_x(0) \in \C A_\pi$ and let $I$ denote the time interval in which the mesodynamics are 1-Markovian and the dynamics is \begin{enumerate} \item undriven, or \item driven and lumpable, or \item driven and such that $\C A(t)\subset\C A_\pi$ or $\pi_x(\lambda_t)\in\C A(t)$. \end{enumerate} Then, $\dot\Sigma(t) \ge 0$ for all $t\in I$ and all admissible initial states. \end{thm} Hence, as a corollary, if we observe $\dot\Sigma(t) < 0$ for the undriven case, we know that the dynamics is non-Markovian [or that the initial state $p_x(0)\notin\C A_\pi$]. For driven dynamics, a negative EP rate is not sufficient to conclude that the dynamics is non-Markovian, but it clearly shows that the dynamics is not lumpable. In the next section we will show that $\dot\Sigma(t) < 0$ also suffices to conclude that TSS does not apply. Furthermore, while the above procedure provides a unique way to define a non-equilibrium free energy at the mesolevel, it does not fix the definition of the internal energy and entropy at the mesolevel because the prescription $F = U-S/\beta$ entails a certain level of arbitrariness. Via the first law $\Delta U = Q + W$ this would also imply a certain arbitrariness for the definition of heat~\cite{TalknerHaenggiPRE2016}. However, a reasonable definition of $U, S$ and $Q$ can be fixed by demanding that they should coincide with $U_\text{mic}, S_\text{mic}$ and $Q_\text{mic}$ in the limit where the microstates are conditionally equilibrated, which is fulfilled in the limit of TSS considered in Sec.~\ref{sec time scale separation}. Then, one is naturally led to the definitions \begin{align} U(t) &\equiv \sum_\alpha \C U_\alpha(\lambda_t) p_\alpha(t), ~~~ \C U_\alpha \equiv \sum_{x_\alpha} E_{x_\alpha}(\lambda_t)\pi_{x|\alpha}, \label{eq int energy mesolevel} \\ S(t) &\equiv \sum_\alpha \left\{\beta [\C U_\alpha(\lambda_t) - F_\alpha(\lambda_t)] -\ln p_\alpha(t)\right\} p_\alpha(t). \label{eq entropy mesolevel} \end{align} Heat is then defined as $\dot Q(t) = d_t U(t) - \dot W(t)$ and the EP rate can be equivalently expressed as $\dot\Sigma(t) = d_tS(t) - \beta\dot Q(t)$. We remark that it is not obvious how to relax condition~(\ref{eq cond bipartite driving}) because the work~(\ref{eq work bipartite}) can then not be computed from knowledge of the mesostate alone, which was an essential ingredient in our derivation. \subsection{Time-scale separation and Markovian limits} \label{sec time scale separation} Although the dynamics of open systems is in general non-Markovian, it is important to know in which limits the Markovian \emph{approximation} is justified.
One such limit is TSS, which is an essential assumption in many branches of statistical mechanics in order to ensure that the dynamics at the level of the ``relevant'' degrees of freedom is Markovian and hence, easily tractable. It is also essential in order to ensure that we can infer from the coarse-grained dynamics the \emph{exact} thermodynamics of the underlying microstate dynamics (under reasonably mild conditions), see Refs.~\cite{PuglisiEtAlJSM2010, SeifertEPJE2011, EspositoPRE2012, AltanerVollmerPRL2012, BoCelaniJSM2014, StrasbergEspositoPRE2017} for research on this topic. Here, we restrict ourselves to highlighting the role of TSS within our mathematical framework of Sec.~\ref{sec mathematical results}. Furthermore, at the end of this section we discuss another class of systems whose dynamics is Markovian even though TSS does not apply. To study TSS, let us decompose the rate matrix as follows: \begin{equation}\label{eq rate matrix TSS} W_{x_\alpha,y_\beta}(\lambda_t) = \delta_{\alpha\beta} R_{x_\alpha,y_\alpha}(\lambda_t) + (1-\delta_{\alpha\beta})r_{x_\alpha,y_\beta}(\lambda_t). \end{equation} Next, we assume that $R_{x_\alpha,y_\alpha}(\lambda_t) \gg r_{x_\alpha,y_\beta}(\lambda_t)$, i.e., there is a strong separation of time-scales between the mesodynamics and the microdynamics belonging to a certain mesostate. As a consequence the microstates rapidly equilibrate to the conditional steady state $\pi_{x|\alpha}(\lambda_t)$ for any mesostate $\alpha$, provided that the microstates in each mesostate are fully connected (tacitly assumed in the following). This means that condition~1 of Theorem~\ref{thm steady state general} is always fulfilled. By replacing $p_{y|\beta}(t)$ by $\pi_{y|\beta}(\lambda_t)$ in Eq.~(\ref{eq effective rate matrix CG}), it is easy to see that the effective rate matrix is independent of the initial state and describes a proper Markov process, $R[\lambda_t,p_\alpha(0)] = R(\lambda_t)$. Another consequence of TSS is that the thermodynamics associated with the mesodynamics are identical to the thermodynamics of the microdynamics. Strictly speaking, the limit of TSS requires $R_{x_\alpha,y_\alpha}(\lambda_t)/r_{x_\alpha,y_\beta}(\lambda_t) \rightarrow\infty$. In practice, however, there will always be a finite time $\delta t$ associated with the relaxation of the microstates and TSS means that we assume \begin{equation} \frac{1}{r_{x_\alpha,y_\beta}(\lambda_t)} \gg \delta t \gg \frac{1}{R_{x_\alpha,y_\alpha}(\lambda_t)}. \end{equation} Then, within a time-step $\delta t$ the conditional microstates are almost equilibrated while terms of the order $\C O(\delta t^2 r_{x_\alpha,y_\beta})$ are still negligible. The TM in this situation becomes \begin{equation} \begin{split} & T_{t+\delta t,t}(x_\alpha|y_\beta) \approx \\ & \delta_{\alpha\beta} \pi_{x|\alpha}(\lambda_t)\left(1-\delta t\sum_{\gamma\neq\alpha} \sum_{z_\gamma} r_{z_\gamma,x_\alpha}(\lambda_t)\right) \\ & + \delta t (1-\delta_{\alpha\beta}) \sum_{z_\beta} \pi_{z|\beta}(\lambda_t) r_{x_\alpha,z_\beta}(\lambda_t). \end{split} \end{equation} The first term describes the probability for a transition between two microstates of the same mesostate: to lowest order this is simply given by the conditional steady state minus a small correction term of $\C O(\delta t)$, which takes into account the possibility that one leaves the given mesostate to another mesostate.
The second term gives the probability to reach a microstate lying in a different mesostate, which is given by the sum of all possible rates which connect to this microstate from the given mesostate multiplied by the respective conditional steady state probability. One immediately checks normalization of $T_{t+\delta t,t}(x_\alpha|y_\beta)$ and positivity follows by assuming that $r_{z_\gamma,x_\alpha}(\lambda_t)\delta t \ll 1$. Furthermore, the condition~(\ref{eq cond lumpability}) of lumpability is also fulfilled. Indeed, we can even confirm the stronger property \begin{equation} T_{t+\delta t,t}(x_\alpha|y_\beta) = T_{t+\delta t,t}(x_\alpha|y'_\beta) \end{equation} for all $y'_\beta\neq y_\beta$. Hence, in the idealized limit of an instantaneous equilibration of the conditional microstates, the TMs do not even depend on the particular microstate anymore. We conclude: \begin{thm}\label{thm TSS} If TSS applies, then the process is lumpable and $p_x(t) \in \C A_\pi$ for all $t$. Conversely, if $\dot\Sigma(t) < 0$, then TSS does not apply. \end{thm} It was shown in Ref.~\cite{EspositoPRE2012} that $\dot\Sigma(t) = \dot\Sigma_\text{mic}(t)$ in the limit of TSS. If only the slightly weaker condition of lumpability is fulfilled, then it is not known whether $\dot\Sigma(t) = \dot\Sigma_\text{mic}(t)$ still holds. While TSS is an important limit, the mesodynamics can also be Markovian without the assumption of TSS. The following theorem demonstrates this explicitly: \begin{thm} If there is a partition $\boldsymbol\chi$ such that the rate matrix can be written as \begin{equation}\label{eq rate matrix decomposition lumpable} W_{x_\alpha,y_\beta}(\lambda_t) = \delta_{\alpha\beta} R_{x_\alpha,y_\alpha}(\lambda_t) + (1-\delta_{\alpha\beta})V_{\alpha,\beta}(\lambda_t), \end{equation} then the process is lumpable independent of any TSS argument. Moreover, the IFP of the lumped process is $\pi_\alpha(\lambda_t) = \sum_{x_\alpha} \pi_{x_\alpha}(\lambda_t)$ and hence, $\dot\Sigma(t) \ge 0$ always. \end{thm} \begin{proof} We first of all observe that from \begin{equation} \begin{split} 0 &= \sum_{\alpha,x_\alpha} W_{x_\alpha,y_\beta}(\lambda_t) \\ &= \sum_{x_\beta} R_{x_\beta,y_\beta}(\lambda_t) + \sum_{\alpha\neq\beta}\sum_{x_\alpha} V_{\alpha,\beta}(\lambda_t), \end{split} \end{equation} it follows that $\sum_{x_\alpha} R_{x_\alpha,y_\alpha}(\lambda_t) = -\sum_{\beta\neq\alpha} \#\chi_\beta V_{\beta,\alpha}(\lambda_t)$ for any $\alpha$ (where $\#\chi_\alpha$ denotes the cardinality of the set of microstates belonging to mesostate $\alpha$). By using this property, it becomes straightforward to check that Eq.~(\ref{eq cond lumpability ME}) is fulfilled and hence, the coarse-grained process is Markovian. Due to Theorem~\ref{thm steady state strong} we can also confirm that $\pi_\alpha(\lambda_t)$ is the IFP and from Theorem~\ref{thm ent prod} it follows that $\dot\Sigma(t) \ge 0$. \end{proof} Compared to the decomposition~(\ref{eq rate matrix TSS}) we here did not need to assume any particular scaling of the rates, but it was important that the transitions between different mesostates are independent of the microstate. In fact, for many mesoscopic systems the details of the microstates might not matter; for instance, the Brownian motion of a suspended particle is quite independent of the spin degrees of freedom of its electrons unless strong magnetic interactions are present.
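The following minimal numerical sketch illustrates this theorem: it builds a rate matrix of the form~(\ref{eq rate matrix decomposition lumpable}) with arbitrary intra-mesostate rates and microstate-independent inter-mesostate rates (all numbers are illustrative choices of ours), and verifies condition~(\ref{eq cond lumpability ME}) by inspecting block-column sums.
\begin{verbatim}
import numpy as np

# Mesostate A = {0,1}, mesostate B = {2,3,4}; rates are illustrative.
partition = [[0, 1], [2, 3, 4]]
vAB, vBA = 0.4, 0.1   # V_{A,B}, V_{B,A}: microstate-independent

W = np.zeros((5, 5))
W[np.ix_([0, 1], [2, 3, 4])] = vAB   # jumps B -> A
W[np.ix_([2, 3, 4], [0, 1])] = vBA   # jumps A -> B
# Arbitrary intra-mesostate rates R (they drop out of the lumped dynamics):
W[0, 1], W[1, 0] = 2.0, 3.0
W[2, 3], W[3, 2] = 1.0, 2.0
W[2, 4], W[4, 2] = 0.5, 1.5
W[3, 4], W[4, 3] = 1.0, 1.0
np.fill_diagonal(W, -W.sum(axis=0))  # enforce zero column sums

# Block-column sums must agree within each block (lumpability condition):
for bl_a in partition:
    for bl_b in partition:
        print([round(W[bl_a, y].sum(), 12) for y in bl_b])
# Prints [-0.3, -0.3], [0.8, 0.8, 0.8], [0.3, 0.3], [-0.8, -0.8, -0.8]:
# lumpable, with effective rates #chi_A * vAB = 0.8 (B -> A) and
# #chi_B * vBA = 0.3 (A -> B), as in the lumped ME given below.
\end{verbatim}
Note that the intra-mesostate rates $R$ can be chosen freely: the zero-column-sum property of $W$ automatically makes the diagonal block sums microstate-independent, exactly as exploited in the proof above.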
Notice that the ME at the mesolevel resulting from Eq.~(\ref{eq rate matrix decomposition lumpable}) reads \begin{equation} \begin{split}\label{eq help 1} & \frac{\partial}{\partial t} p_\alpha(t) = \\ & \sum_{\beta\neq\alpha}\left[\#\chi_\alpha V_{\alpha,\beta}(\lambda_t) p_\beta(t) - \#\chi_\beta V_{\beta,\alpha}(\lambda_t) p_\alpha(t)\right]. \end{split} \end{equation} It shows that the local detailed balance ratio~(\ref{eq local detailed balance}) of the effective rates at the mesolevel is shifted by an entropic contribution due to the degeneracy factor $\#\chi_\alpha$; see Sec.~\ref{sec strong lumpability without TSS} or Ref.~\cite{HerpichThingnaEspositoPRX2018} for explicit examples. \section{Classical system-bath theory} \label{sec classical system bath theory} In this section we consider the standard paradigm of classical open system theory: a system in contact with a bath described by Hamiltonian dynamics as opposed to the rate ME dynamics from Sec.~\ref{sec coarse-grained dissipative dynamics}. The microstates (system and bath) therefore describe an isolated system and the goal is to find a consistent thermodynamic framework for the mesostate (the system only). The global Hamiltonian reads \begin{equation}\label{eq Hamiltonian SB} H_\text{tot}(\lambda_t) = H(\lambda_t) + V + H_B, \end{equation} where the system, bath and interaction Hamiltonians $H(\lambda_t)$, $H_B$ and $V$ are arbitrary. We denote a phase space point of the system by $x_S$ and of the bath by $x_B$. Thus, to be very precise, we should write $H(x_S;\lambda_t)$, $H_B(x_B)$ and $V(x_S,x_B)$, but we will drop the dependence on $x_S$ and $x_B$ for notational simplicity. Deriving the laws of thermodynamics for an arbitrary Hamiltonian~(\ref{eq Hamiltonian SB}) has attracted much interest recently~\cite{JarzynskiJSM2004, GelinThossPRE2009, SeifertPRL2016, TalknerHaenggiPRE2016, JarzynskiPRX2017, MillerAndersPRE2017, StrasbergEspositoPRE2017, AurellEnt2017} (note that many investigations in the quantum domain also have a direct analogue in the classical regime~\cite{EspositoLindenbergVandenBroeckNJP2010, MartinezPazPRl2013, PucciEspositoPelitiJSM2013, StrasbergEtAlNJP2016, FreitasPazPRE2017, PerarnauLlobetEtAlPRL2018, HsiangEtAlPRE2018}). It will turn out that our basic definitions are identical to the ones suggested by Seifert~\cite{SeifertPRL2016}. We re-derive them here in a different way and, in addition, focus on the EP \emph{rate} and its relation to non-Markovian dynamics. In order to be able to define the EP rate~(\ref{eq ent prod abstract}), we first of all need to know the exact equilibrium state of the system, which is obtained from coarse-graining the global equilibrium state $\pi_\text{tot}(\lambda_t) = e^{-\beta H_\text{tot}(\lambda_t)}/\C Z_\text{tot}(\lambda_t)$ with $\C Z_\text{tot}(\lambda_t) = \int dx_{SB} e^{-\beta H_\text{tot}(\lambda_t)}$. For this purpose we introduce the Hamiltonian of mean force $H^*(\lambda_t)$~\cite{KirkwoodJCP1935}. It is defined through the two relations \begin{equation} \begin{split}\label{eq HMF} \pi_S(\lambda_t) &\equiv \frac{e^{-\beta H^*(\lambda_t)}}{\C Z^*(\lambda_t)} = \int dx_B \frac{e^{-\beta H_\text{tot}(\lambda_t)}}{\C Z_\text{tot}(\lambda_t)}, \\ \C Z^*(\lambda_t) &\equiv \frac{\C Z_\text{tot}(\lambda_t)}{\C Z_B}, \end{split} \end{equation} where $\C Z_B = \int dx_B e^{-\beta H_B}$ is the equilibrium partition function of the unperturbed bath.
We emphasize that the equilibrium state of the system is not a Gibbs state with respect to $H(\lambda_t)$ due to the strong coupling. More explicitly, the Hamiltonian of mean force reads \begin{equation} H^*(\lambda_t) = H(\lambda_t) - \frac{1}{\beta}\ln\lr{e^{-\beta V}}_B^\text{eq}, \end{equation} where $\lr{\dots}_B^\text{eq}$ denotes an average with respect to the unperturbed equilibrium state of the bath $e^{-\beta H_B}/\C Z_B$. Note that $H^*(\lambda_t)$ also depends on the inverse temperature $\beta$ of the bath. We can now use Eq.~(\ref{eq ent prod abstract}) to define the EP rate, which reads in the notation of this section \begin{equation}\label{eq ent prod rate Seifert} \dot\Sigma(t) = -\left.\frac{\partial}{\partial t}\right|_{\lambda_t} D[\rho_S(t)\|\pi_S(\lambda_t)], \end{equation} where $\rho_S(t) = \rho_S(x_S;t)$ denotes the state of the system at time $t$, which can be arbitrarily far from equilibrium. Note that we now use the differential relative entropy $D[\rho_S(t)\|\pi_S(\lambda_t)] = \int dx_S \rho_S(x_S;t)\ln\frac{\rho_S(x_S;t)}{\pi_S(x_S;\lambda_t)}$. Using Eq.~(\ref{eq HMF}), we can rewrite Eq.~(\ref{eq ent prod rate Seifert}) as \begin{equation} \dot\Sigma(t) = \frac{d}{dt}S[\rho_S(t)] - \beta \int dx_S H^*(\lambda_t) \frac{d}{dt}\rho_S(t) \end{equation} with $S[\rho_S(t)] \equiv -\int dx_S \rho_S(x_S;t)\ln\rho_S(x_S;t)$. The second term can be cast into the form \begin{equation} \int dx_S H^*(\lambda_t) \frac{d}{dt}\rho_S(t) = \frac{d}{dt}\lr{H^*(\lambda_t)} - \lr{\frac{dH^*(\lambda_t)}{dt}}, \end{equation} where $\lr{\dots}$ denotes a phase space average with respect to $\rho_S(t)$. After realizing that $d_t H^*(\lambda_t) = d_t H(\lambda_t)$, we see that the last term coincides with the rate of work done on the system \begin{equation}\label{eq work rate Hamiltonian} \dot W(t) = \int dx_S \frac{dH(\lambda_t)}{dt}\rho_S(t). \end{equation} Using \begin{equation} \begin{split} \int dx_S \frac{dH(\lambda_t)}{dt}\rho_S(t) &= \int dx_{SB} \frac{dH_\text{tot}(\lambda_t)}{dt}\rho_\text{tot}(t) \\ &= \int dx_{SB} \frac{d}{dt}[H_\text{tot}(\lambda_t)\rho_\text{tot}(t)], \end{split} \end{equation} this can be integrated to \begin{align} W(t) &= \int_0^t ds \lr{\frac{d H(\lambda_s)}{ds}} \label{eq def work} \\ &= \int dx_{SB} \left[H_\text{tot}(\lambda_t)\rho_\text{tot}(t) - H_\text{tot}(\lambda_0)\rho_\text{tot}(0)\right], \nonumber \end{align} showing that the work done on the system is given by the total energetic change of the composite system and environment. The EP rate can then be expressed as \begin{equation} \dot\Sigma(t) = \beta\left[\dot W(t) - \frac{d}{dt}\lr{H^*(\lambda_t) + \frac{1}{\beta}\ln\rho_S(t)}\right]. \end{equation} This motivates again the following definition of the non-equilibrium free energy [cf. Eq.~(\ref{eq free energy mesolevel})] \begin{equation} F(t) \equiv \lr{H^*(\lambda_t) + \frac{1}{\beta}\ln\rho_S(t)} \label{eq free energy HMF} \end{equation} such that $\dot\Sigma(t) = \beta[\dot W(t) - d_t F(t)]$. For a useful thermodynamic framework, it now remains to show that the second law as known from phenomenological non-equilibrium thermodynamics holds: \begin{equation}\label{eq ent prod Seifert} \Sigma(t) \equiv \beta[W(t) - \Delta F(t)] \ge 0. \end{equation} For this purpose we assume, as in the previous section, that the initial global state $\rho_\text{tot}(0)$ belongs to the set $\C A_\pi$, see Eq.~(\ref{eq stationary preparation class}).
The conditional equilibrium state of the bath is given by \begin{equation} \pi_{B|S} \equiv \frac{e^{-\beta(V + H_B)}}{\int dx_B e^{-\beta(V + H_B)}} = \frac{e^{-\beta[H_\text{tot}(\lambda_0) - H(\lambda_0)]}}{\C Z_B}. \end{equation} To prove the positivity of the EP, we refer to Ref.~\cite{SeifertPRL2016}, where it was deduced from an integral fluctuation theorem; alternatively, the positivity becomes evident by noting the relation $\Sigma(t) = D[\rho_{SB}(t)\|\rho_S(t)\pi_{B|S}]$ and by recalling that the relative entropy is always non-negative~\cite{MillerAndersPRE2017, StrasbergEspositoPRE2017}. It is important to realize, however, that $\Sigma(t) \ge 0$ relies crucially on the choice of initial state. If $\rho_\text{tot}(0)\notin\C A_\pi$, we have \begin{equation} \begin{split}\label{eq ent prod Seifert arb IS} & \beta[W(t)-\Delta F(t)] = \\ & D[\rho_{SB}(t)\|\rho_S(t)\pi_{B|S}] - D[\rho_{SB}(0)\|\rho_S(0)\pi_{B|S}], \end{split} \end{equation} which can be negative. After we have established that $\Sigma(t) = \int_0^t ds\dot\Sigma(s) \ge 0$ with the EP rate $\dot\Sigma(t)$ from Eq.~(\ref{eq ent prod abstract}), we can use the insights from Sec.~\ref{sec mathematical results} and Appendix~\ref{sec app Hamiltonian dynamics}. Then, we can immediately confirm the validity of the following theorem: \begin{thm}\label{thm ent prod HMF} Let $\rho_\text{tot}(0)\in\C A_\pi$ and let $I$ denote the time interval in which the system dynamics is 1-Markovian and the process is \begin{enumerate} \item undriven, or \item driven and lumpable, or \item driven and $\C A(t)\subset\C A_\pi$ or $\pi_\text{tot}(\lambda_t)\in\C A(t)$. \end{enumerate} Then, $\dot\Sigma(t) \ge 0$ for all $t\in I$ and all admissible initial states. \end{thm} We can therefore conclude for this setup that $\dot\Sigma(t) < 0$ directly implies non-Markovian dynamics for \emph{undriven} systems. For \emph{driven} systems this relation ceases to exist, but, similarly to Theorem~\ref{thm TSS}, $\dot\Sigma(t) < 0$ implies that the two assumptions of 1-Markovian dynamics and a bath in a conditional equilibrium state cannot be simultaneously fulfilled. Two further remarks are in order: First, although it is possible to extend the framework of Ref.~\cite{SeifertPRL2016} to the situation of a time-dependent coupling Hamiltonian $V(\lambda_t)$ (see Ref.~\cite{StrasbergEspositoPRE2017}), Theorem~\ref{thm ent prod HMF} then ceases to hold because the work~(\ref{eq def work}) can no longer be computed from knowledge of the system state alone [also compare with Eq.~(\ref{eq work bipartite})]. Second, we remark that Theorem~\ref{thm ent prod HMF} is structurally identical to Theorem~\ref{thm ent prod bipartite case}. This shows the internal consistency of our approach: since it is in principle possible to derive a ME from underlying Hamiltonian dynamics, we should find parallel results at each level of the description. This structural similarity was also found in Ref.~\cite{StrasbergEspositoPRE2017}. Also in parallel to Sec.~\ref{sec coarse-grained dissipative dynamics}, we remark that the splitting of the free energy $F = U-S/\beta$ does not allow us to unambiguously define an internal energy and entropy. Hence, also the definition of heat via the first law $\Delta U = Q + W$ becomes ambiguous~\cite{TalknerHaenggiPRE2016}.
However, the following definitions are appealing \begin{align} U(t) &\equiv \int dx_S \rho_S(t) \left[H^*(\lambda_t) + \beta \partial_\beta H^*(\lambda_t)\right], \label{eq int energy HMF} \\ S(t) &\equiv \int dx_S \rho_S(t) \left[-\ln\rho_S(t) + \beta^2 \partial_\beta H^*(\lambda_t)\right], \label{eq sys entropy HMF} \end{align} which can be shown to coincide (apart from a time-independent additive constant) with the global energy and entropy in equilibrium~\cite{SeifertPRL2016}. Further support for these definitions was given in Ref.~\cite{StrasbergEspositoPRE2017}, see also the discussion in Ref.~\cite{JarzynskiPRX2017}. Finally, to gain further insights into our approach, it is useful to reformulate it in terms of expressions which were previously derived for classical Hamiltonian dynamics~\cite{KawaiParrondoVandenBroeckPRL2007, VaikuntanathanJarzynskiEPL2009, HasegawaEtAlPLA2010, TakaraHasegawaDriebePLA2010, EspositoVandenBroeckEPL2011}. It follows from straightforward algebra that \begin{equation}\label{eq help 2} D[\rho_\text{tot}(t)\|\pi_\text{tot}(\lambda_t)] = \beta[F_\text{tot}(t) - \C F_\text{tot}(\lambda_t)], \end{equation} where $F_\text{tot}(t) = \lr{H_\text{tot}(\lambda_t)} + \lr{\ln\rho_\text{tot}(t)}/\beta$ is the non-equilibrium free energy associated with the global state $\rho_\text{tot}(t)$ and $\C F_\text{tot}(\lambda_t)$ is the equilibrium free energy associated with the thermal state $\pi_\text{tot}(\lambda_t)$. Due to Eq.~(\ref{eq help 2}) we can write the global EP rate as \begin{equation} \begin{split}\label{eq irreversible work} \dot\Sigma_\text{tot}(t) &= -\left.\frac{\partial}{\partial t}\right|_{\lambda_t} D[\rho_\text{tot}(t)\|\pi_\text{tot}(\lambda_t)] \\ &= \beta\dot W_\text{irr}(t) - \frac{d}{dt} D[\rho_\text{tot}(t)\|\pi_\text{tot}(\lambda_t)] = 0, \end{split} \end{equation} which is zero for Hamiltonian dynamics. Here, $\dot W_\text{irr}(t) \equiv \dot W - d_t \C F_\text{tot}(\lambda_t)$ is the irreversible work and thus, Eq.~(\ref{eq irreversible work}) recovers (parts of) the earlier results from Refs.~\cite{KawaiParrondoVandenBroeckPRL2007, VaikuntanathanJarzynskiEPL2009, HasegawaEtAlPLA2010, TakaraHasegawaDriebePLA2010, EspositoVandenBroeckEPL2011}. In particular, for an initially equilibrated microstate we immediately get the well-known dissipation inequality $\beta W_\text{irr}(t) = D[\rho_\text{tot}(t)\|\pi_\text{tot}(\lambda_t)] \ge 0$. Now, from our findings above we see that we obtain an identical structure at the coarse-grained level: by using the identity~(\ref{eq help 2}) for the system, $D[\rho_S(t)\|\pi_S(\lambda_t)] = \beta[F(t) - \C F(\lambda_t)]$, we obtain \begin{equation} \dot\Sigma(t) = \beta\dot W_\text{irr}(t) - \frac{d}{dt} D[\rho_S(t)\|\pi_S(\lambda_t)]. \end{equation} This expression can in general be negative and the conditions which ensure non-negativity are stated in Theorem~\ref{thm ent prod HMF}. \section{Strong coupling thermodynamics of quantum systems} \label{sec thermo quantum} So far we have only treated classical systems, but the question of how to obtain a meaningful thermodynamic description for quantum systems beyond the weak-coupling and Markovian approximations is of equal importance. Whereas in Sec.~\ref{sec classical system bath theory} we could resort to an already well-developed framework, no general finite-time thermodynamic description for a driven quantum system immersed in an arbitrary single heat bath has been presented yet.
Based on results obtained at equilibrium~\cite{GelinThossPRE2009, HsiangHuEntropy2018}, we first of all develop in Sec.~\ref{sec integrated description} the quantum extension of the framework introduced in Ref.~\cite{SeifertPRL2016}. Afterwards, in Sec.~\ref{sec breakdown} we prove that the relation between negative EP rates and non-Markovianity worked out for classical systems cannot be established for quantum systems. The latter point is further studied in Sec.~\ref{sec quantum example} for the commonly used assumption that the system and bath are initially decorrelated; an assumption that does not hold for the class of initial states considered in this section. \subsection{Integrated description} \label{sec integrated description} As in Sec.~\ref{sec classical system bath theory} our starting point is a time-dependent system-bath Hamiltonian of the form $\hat H_\text{tot}(\lambda_t) = \hat H(\lambda_t) + \hat V + \hat H_B$, where we used a hat to explicitly denote operators. The Hamiltonian of mean force in the quantum case is formally given by \begin{equation} \hat H^*(\lambda_t) = -\frac{1}{\beta}\ln\frac{\mbox{tr}_B\{e^{-\beta[\hat H(\lambda_t) + \hat V + \hat H_B]}\}}{\C Z_B} \end{equation} and it shares the same meaning as in the classical case, cf.~Eq.~(\ref{eq HMF}): it describes the exact reduced state of the system if the system-bath composite is in a global equilibrium state. Motivated by equilibrium considerations and by Sec.~\ref{sec classical system bath theory}, we define the three key thermodynamic quantities, internal energy, system entropy and free energy, for an arbitrary system state $\hat\rho_S(t)$ as follows: \begin{align} U(t) &\equiv \mbox{tr}_S\left\{\hat\rho_S(t)\left[\hat H^*(\lambda_t) + \beta\partial_\beta\hat H^*(\lambda_t)\right]\right\}, \label{eq def U quantum} \\ S(t) &\equiv \mbox{tr}_S\left\{\hat\rho_S(t)\left[-\ln \hat\rho_S(t) + \beta^2\partial_\beta\hat H^*(\lambda_t)\right]\right\}, \label{eq def S quantum} \\ F(t) &\equiv \mbox{tr}_S\left\{\hat\rho_S(t)\left[\hat H^*(\lambda_t) + \frac{1}{\beta}\ln \hat\rho_S(t)\right]\right\}. \label{eq def F quantum} \end{align} Note that all quantities are state functions. Also the definition of work is formally identical to Sec.~\ref{sec classical system bath theory}, Eq.~(\ref{eq def work}), \begin{align} W(t) &= \int_0^t ds \mbox{tr}_S\left\{\frac{d\hat H(\lambda_s)}{ds}\hat \rho_S(s)\right\} \label{eq def work quantum} \\ &= \mbox{tr}_{SB}\{\hat\rho_\text{tot}(t)\hat H_\text{tot}(\lambda_t)\} - \mbox{tr}_{SB}\{\hat\rho_\text{tot}(0)\hat H_\text{tot}(\lambda_0)\}, \nonumber \end{align} and the heat flux is again fixed by the first law $Q(t) = \Delta U(t) - W(t)$. Equipped with these definitions, we define the EP \begin{equation}\label{eq 2nd law quantum} \Sigma(t) \equiv \beta[W(t) - \Delta F(t)] \end{equation} as usual and ask when we can ensure its positivity. Again, in complete analogy to Eq.~(\ref{eq ent prod Seifert arb IS}) one can show that \begin{equation} \begin{split}\label{eq ent prod quantum arb IS} & \beta[W(t) - \Delta F(t)] = \\ & D\left[\hat\rho_\text{tot}(t)\left\|\hat\pi_\text{tot}(\lambda_t)\right]\right. - D\left[\hat\rho_{S}(t)\left\|\hat\pi_S(\lambda_t)\right]\right. \\ & -D\left[\hat\rho_\text{tot}(0)\left\|\hat\pi_\text{tot}(\lambda_0)\right]\right.
+ D\left[\hat\rho_{S}(0)\left\|\hat\pi_S(\lambda_0)\right]\right., \end{split} \end{equation} where $D[\hat\rho\|\hat\sigma] \equiv \mbox{tr}\{\hat\rho(\ln\hat\rho-\ln\hat\sigma)\}$ is the quantum relative entropy, $\hat\pi_\text{tot}(\lambda_t)$ the global Gibbs state, and $\hat\pi_S(\lambda_t) = \mbox{tr}_B\{\hat\pi_\text{tot}(\lambda_t)\}$. Eq.~(\ref{eq ent prod quantum arb IS}) can be derived by using that the von Neumann entropy of the global state $S[\hat\rho_\text{tot}(t)] = -\mbox{tr}_{SB}\{\hat\rho_\text{tot}(t)\ln\hat\rho_\text{tot}(t)\}$ is conserved and by using the relation $\ln[\frac{\C Z^*(\lambda_t)}{\C Z_\text{tot}(\lambda_t)}\frac{\C Z_\text{tot}(\lambda_0)}{\C Z^*(\lambda_0)}] = \ln\frac{\C Z_B}{\C Z_B} = 0$, where the partition functions are defined analogously to Eq.~(\ref{eq HMF}). Notice that this identity requires the bath Hamiltonian to be undriven. We now note that due to the monotonicity of relative entropy~\cite{UhlmannCMP1977, OhyaPetzBook1993} the first line in Eq.~(\ref{eq ent prod quantum arb IS}) is never negative, while the second line is never positive. Hence, positivity of the EP~(\ref{eq 2nd law quantum}) is ensured if \begin{equation}\label{eq cond positivity 2nd law} D\left[\hat\rho_\text{tot}(0)\left\|\hat\pi_\text{tot}(\lambda_0)\right]\right. - D\left[\hat\rho_{S}(0)\left\|\hat\pi_S(\lambda_0)\right]\right. = 0. \end{equation} Two important classes of initial states for which this is the case are: \emph{Class 1 (global Gibbs state).} If the initial composite system-bath state is a Gibbs state $\hat\pi_\text{tot}(\lambda_0)$, we immediately see that Eq.~(\ref{eq cond positivity 2nd law}) is fulfilled and $\beta[W(t) - \Delta F(t)] \ge 0$ holds true. For a cyclic process, in which the system Hamiltonian is the same at the initial and final time, positivity of Eq.~(\ref{eq 2nd law quantum}) follows alternatively from the approach in Ref.~\cite{UzdinSaarPRX2018}. \emph{Class 2 (commuting initial state).} We consider initial states of the form \begin{equation}\label{eq commuting initial state} \hat\rho_\text{tot}(0) = \sum_k p_k(0)\hat\Pi_k \hat\rho_{B|k}(\lambda_0), \end{equation} where the $\hat\Pi_k = |k\rangle\langle k|$ are orthogonal rank-1 projectors in the system space fulfilling the commutation relations \begin{equation}\label{eq commuting condition} [\hat\Pi_k,\hat H^*(\lambda_0)] = [\hat\Pi_k,\hat H_\text{tot}(\lambda_0)] = 0 ~ \forall k. \end{equation} This is ensured when $[\hat H(\lambda_0),\hat V] = 0$. The state of the bath conditioned on the system state $\hat\Pi_k$ reads \begin{equation}\label{eq cond bath state quantum} \hat\rho_{B|k}(\lambda_0) = \frac{\mbox{tr}_S\{\hat\Pi_k\hat\pi_\text{tot}(\lambda_0)\}}{\mbox{tr}_{SB}\{\hat\Pi_k\hat\pi_\text{tot}(\lambda_0)\}} = \frac{\lr{k|\hat\pi_\text{tot}(\lambda_0)|k}}{\lr{k|\hat\pi_S(\lambda_0)|k}}. \end{equation} Since the $p_k(0)$ are allowed to be arbitrary probabilities, Eq.~(\ref{eq commuting initial state}) is the direct quantum analogue of the initial states considered in the classical setting in Sec.~\ref{sec classical system bath theory}. Using condition~(\ref{eq commuting condition}) it becomes a task of straightforward algebra to show that Eq.~(\ref{eq cond positivity 2nd law}) holds. We remark that all considerations above can also be extended to a time-dependent coupling Hamiltonian, i.e., by allowing $\hat V = \hat V(\lambda_t)$ to depend on time.
Again, the problem is then that the work~(\ref{eq def work quantum}) cannot be computed based on the knowledge of the system state $\hat\rho_S(t)$ alone. Furthermore, it is worth pointing out that positivity of the second law~(\ref{eq 2nd law quantum}) with the \emph{nonequilibrium} free energy represents a stronger inequality than the bound for the dissipated work derived in Ref.~\cite{CampisiTalknerHaenggiPRL2009} from a fluctuation theorem using the equilibrium free energy. \subsection{Breakdown of the results from Sec.~\ref{sec classical system bath theory}} \label{sec breakdown} The positivity of $\Sigma(t)$ could be established for initial global Gibbs states or for commuting initial states. Without any driving ($\dot\lambda_t = 0$) these states are not very interesting as they remain invariant in time. Hence, we only consider the driven situation. Clearly, the analogue of Eq.~(\ref{eq 2nd law quantum}) at the rate level is $\beta[\dot W(t) - d_t F(t)]$. Unfortunately, this does not coincide with the quantum counterpart of Eq.~(\ref{eq ent prod abstract}). To see this, suppose that \begin{equation} \dot\Sigma(t) = -\left.\frac{\partial}{\partial t}\right|_{\lambda_t} D[\hat\rho_S(t)\|\hat\pi_S(\lambda_t)]. \end{equation} This can be rewritten as \begin{equation} \begin{split} \dot\Sigma(t) =&~ \frac{d}{dt}\left\{ S[\hat\rho_S(t)] - \beta \lr{\hat H^*(\lambda_t)}\right\} \\ &+ \beta \mbox{tr}\left\{\hat\rho_S(t)\frac{d\hat H^*(\lambda_t)}{dt}\right\}. \end{split} \end{equation} However, the analogy with Sec.~\ref{sec classical system bath theory} stops here because the last term cannot be identified with the work done on the quantum system and hence, $\int_0^t ds\dot\Sigma(s) \neq \Sigma(t)$. In fact, \begin{equation} \frac{\partial\hat H^*(\lambda_t)}{\partial t} \neq \frac{\partial\hat H(\lambda_t)}{\partial t} \end{equation} except in the ``classical'' (and for us uninteresting) limit $[\hat H(\lambda_t),\hat V] = 0$. To conclude, for quantum systems the EP rate cannot be expressed in terms of a relative entropy describing the irreversible relaxation to the equilibrium state, which would be desirable because an analogue of Lemma~\ref{lemma Markov contractivity} also holds in the quantum case~\cite{SpohnJMP1978}. Thus, the very existence of a general relation between EP and non-Markovianity, as established for the previous setups, seems questionable at the moment. This conclusion can be drawn without touching upon the difficult question of how to extend many of the mathematical results of Sec.~\ref{sec mathematical results} to the quantum case. \section{Applications} \label{sec applications} After having established the general theory in the last four sections, we now consider various examples and applications. However, it is not our intention here to cover every aspect of our theory. We rather prefer to focus on simple models whose essence is easy to grasp and which illuminate certain key aspects of our framework, thereby also shedding light on some misleading statements made in the literature. \subsection{Time-dependent instantaneous fixed points for an undriven ergodic Markov chain} \label{sec steady states} For the formal development of our theory it was of crucial importance to know under which conditions we could ensure that there is a well-defined IFP $\pi_\alpha(\lambda_t)$ for the coarse-grained dynamics, which follows from an underlying steady state of the microdynamics.
Especially for driven systems this was hard to establish because even if we start in the steady state $\pi_{x}(\lambda_0)$, the driving takes the system out of that state such that $p_x(t) \neq \pi_x(\lambda_t)$ in general. One might wonder whether additional conditions, such as 1-Markovianity or ergodicity, help to ensure that $\pi_\alpha(\lambda_t)$ is an IFP of the mesodynamics, but we will here show that this is not the case. As a counterexample we consider a simple three-state system described by a three-by-three rate matrix $W(\lambda_t)$. Imagine that the system started in $\C A(0) \subset\C A_\pi(\lambda_0)$, i.e., the initial microstates were conditionally equilibrated. The system is then subjected to an arbitrary driving protocol $\lambda_t$ up to some time $t^*$. Afterwards, we keep the protocol fixed, i.e., $\lambda_t = \lambda_{t^*}$ for all $t\ge t^*$. Clearly, at time $t^*$ the microstates will in general not be conditionally equilibrated, i.e., $\C A(t)\nsubseteq \C A_\pi(\lambda_{t^*})$. Now, for definiteness we choose the full rate matrix describing the evolution of the probability vector $\bb p(t) = [p_1(t),p_2(t),p_3(t)]$ for $t\ge t^*$ to be \begin{equation} W(\lambda_{t^*}) = \left(\begin{array}{ccc} -1-e^{-\epsilon/2} & 1 & e^{\epsilon/2} \\ 1 & -1-e^{-\epsilon/2} & e^{\epsilon/2} \\ e^{-\epsilon/2} & e^{-\epsilon/2} & -2 e^{\epsilon/2} \\ \end{array}\right). \end{equation} It obeys local detailed balance~(\ref{eq local detailed balance}) if we parametrize the inverse temperature and energies as $\beta E_1 = \beta E_2 = 0$ and $\beta E_3 = \epsilon$; furthermore, we have set all kinetic coefficients in the rates equal to one. As a partition we choose $\chi_\alpha = \{1\}$ and $\chi_{\alpha'} = \{2,3\}$ and in the long time limit the mesostates will thermalize appropriately for any initial state, \begin{equation}\label{eq example 1 inststst} \binom{\pi_\alpha}{\pi_{\alpha'}} = \lim_{t\rightarrow\infty}\binom{p_\alpha(t)}{p_{\alpha'}(t)} = \frac{1}{e^{-\epsilon}+2}\binom{1}{1+e^{-\epsilon}}, \end{equation} i.e., the rate matrix $W(\lambda_{t^*})$ is \emph{ergodic}. As emphasized above, the conditional microstates need not be in equilibrium initially and we parametrize them by $p_{2|\alpha'}(t^*) = \gamma$, $p_{3|\alpha'}(t^*) = 1-\gamma$ ($\gamma\in[0,1]$). In principle it is possible to analytically compute the generator~(\ref{eq meso generator ME}) for the ME at the mesolevel, but we refrain from showing the resulting very long expression; a numerical sketch of the construction is given below. Instead, we focus on Fig.~\ref{fig plot ex 1}. It clearly shows that the IFP of the dynamics is given by Eq.~(\ref{eq example 1 inststst}) only if we choose $p_{2|\alpha'}(t^*) = \pi_{2|\alpha'}(\lambda_{t^*})$ and $p_{3|\alpha'}(t^*) = \pi_{3|\alpha'}(\lambda_{t^*})$ [implying $\gamma = \gamma_\text{eq} \equiv e^\epsilon/(1+e^\epsilon)$], i.e., if the microstates are conditionally equilibrated in agreement with Theorem~\ref{thm steady state weak}. We have also checked that the time-dependent rates of the generator~(\ref{eq meso generator ME}) are always positive for this example (not shown here for brevity) and hence, the dynamics is 1-Markovian.
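As a minimal numerical sketch (our addition; we assume for concreteness that the mesostate starts at its equilibrium value $\pi_\alpha$ while $\gamma$ parametrizes the conditional microstates, and the chosen times and $\gamma$ values are illustrative), the IFP of the coarse-grained two-state generator can be computed as follows:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

eps = 1.0
E = np.exp
W = np.array([[-1 - E(-eps/2),  1.0,            E(eps/2)   ],
              [ 1.0,           -1 - E(-eps/2),  E(eps/2)   ],
              [ E(-eps/2),      E(-eps/2),     -2*E(eps/2) ]])

pi_a = 1.0/(E(-eps) + 2.0)   # equilibrium mesostate, cf. the equation above
pi_ap = 1.0 - pi_a

def ifp(gamma, t):
    """IFP component pi~_alpha(t) of the coarse-grained generator."""
    p = expm(W*t) @ np.array([pi_a, pi_ap*gamma, pi_ap*(1.0 - gamma)])
    r_out = -W[0, 0]                     # escape rate from chi_alpha = {1}
    r_in = (W[0, 1]*p[1] + W[0, 2]*p[2])/(p[1] + p[2])  # conditional entry rate
    return r_in/(r_in + r_out)

g_eq = E(eps)/(1.0 + E(eps))  # ~0.73: conditionally equilibrated microstates
for g in (0.2, g_eq, 0.95):
    print(round(g, 3), [round(ifp(g, t), 4) for t in (0.01, 0.5, 5.0)])
\end{verbatim}
Only $\gamma = \gamma_\text{eq}$ reproduces $\pi_\alpha$ at all times; for other values the IFP is manifestly time-dependent and relaxes to Eq.~(\ref{eq example 1 inststst}) only asymptotically.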
\begin{figure} \centering\includegraphics[width=0.44\textwidth,clip=true]{plot_ex1.pdf} \caption{\label{fig plot ex 1} Plot of the changing IFP, denoted here by $\tilde\pi_\alpha(t)$, over time $t$ in logarithmic scale (for the plot we set the initial time $t^* = 0$). The figure shows that $\tilde\pi_\alpha(t)\neq \pi_\alpha(\lambda_{t^*})$ unless we choose $\gamma = \gamma_\text{eq}$. In the long-time limit the IFP coincides with the equilibrium distribution~(\ref{eq example 1 inststst}). We set $\epsilon = 1$, which implies $\gamma_\text{eq} \approx 0.73$. } \end{figure} This example proves that ergodicity does not imply that $\pi_\alpha(\lambda_t)$ is the IFP of the reduced dynamics, as claimed in Ref.~\cite{SpeckSeifertJSM2007} for arbitrary non-Markovian dynamics. Even 1-Markovianity together with ergodicity is not sufficient to ensure this statement. \subsection{Markovianity without time-scale separation} \label{sec strong lumpability without TSS} We give a simple example of a physically relevant Markov process that is lumpable even though TSS does not apply. For this purpose consider the following rate matrix \begin{equation}\label{eq spin valve rate matrix} W = \left(\begin{array}{ccc} -2\gamma_\text{in} & \gamma_\text{out} & \gamma_\text{out} \\ \gamma_\text{in} & -\gamma_\text{out}-\bar\gamma_\text{flip} & \gamma_\text{flip} \\ \gamma_\text{in} & \bar\gamma_\text{flip} & -\gamma_\text{out}-\gamma_\text{flip} \\ \end{array}\right) \end{equation} describing the time evolution of a probability vector $\bb p(t) = [p_0(t),p_\uparrow(t),p_\downarrow(t)]$. This ME describes a quantum dot in the ultrastrong Coulomb blockade regime coupled to a metallic lead, taking the spin degree of freedom into account. Then, $p_{0/\uparrow/\downarrow}(t)$ are the probabilities to find the dot at time $t$ in a state with zero electrons, an electron with spin up or an electron with spin down, respectively. If the metallic lead has a finite magnetization, the rates for hopping in ($\gamma_\text{in}$) and out ($\gamma_\text{out}$) of the quantum dot depend on the spin, which can be derived from first principles~\cite{BraunKoenigMartinekPRB2004} and has interesting thermodynamic applications~\cite{StrasbergEtAlPRE2014}. But if the lead has zero magnetization as considered here, the dynamics of the spin degree of freedom does not matter. Hence, if we consider the partition $\chi_0=\{0\}$ and $\chi_1=\{\uparrow,\downarrow\}$, it is not hard to deduce that \begin{equation}\label{eq spin valve ME} \frac{\partial}{\partial t}\binom{p_0(t)}{p_1(t)} = \left(\begin{array}{cc} -2\gamma_\text{in} & \gamma_\text{out} \\ 2\gamma_\text{in} & -\gamma_\text{out} \\ \end{array}\right) \binom{p_0(t)}{p_1(t)}, \end{equation} where $p_1(t) = p_\uparrow(t) + p_\downarrow(t)$. Thus, the coarse-grained dynamics is Markovian for all times $t$ and all micro initial conditions $[p_0(0),p_\uparrow(0),p_\downarrow(0)]$ although TSS does not apply. Notice that the IFP of Eq.~(\ref{eq spin valve ME}) coincides with the marginalized IFP of Eq.~(\ref{eq spin valve rate matrix}) and hence, we have $\dot\Sigma(t) \ge 0$. Moreover, as long as the structure of the rate matrix~(\ref{eq spin valve rate matrix}) is preserved, we could have even allowed for arbitrary time-dependencies in the rates. The exactness of this lumping is easily verified numerically, as shown in the sketch below.
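The following minimal Python sketch (our addition; the rate values, times and initial condition are arbitrary assumptions) propagates both the three-state microdynamics and the lumped two-state ME and confirms that they agree exactly:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

g_in, g_out, g_flip, g_barflip = 0.8, 0.5, 0.3, 1.1  # arbitrary positive rates
W = np.array([[-2*g_in,  g_out,              g_out          ],
              [ g_in,   -g_out - g_barflip,  g_flip         ],
              [ g_in,    g_barflip,         -g_out - g_flip ]])
V = np.array([[-2*g_in,  g_out],
              [ 2*g_in, -g_out]])

p0 = np.array([0.2, 0.7, 0.1])   # arbitrary micro initial condition
for t in (0.3, 1.0, 4.0):
    p_micro = expm(W*t) @ p0
    p_meso = expm(V*t) @ np.array([p0[0], p0[1] + p0[2]])
    # both differences vanish up to machine precision
    print(t, p_micro[0] - p_meso[0], p_micro[1] + p_micro[2] - p_meso[1])
\end{verbatim}
Note that the flip rates $\gamma_\text{flip}$ and $\bar\gamma_\text{flip}$ drop out of the lumped dynamics entirely, which is precisely why no TSS argument is needed. \subsection{Classical Brownian motion} \label{sec Brownian motion} We here present an example which exhibits negative EP rates and links their appearance to the spectral features of the environment. This is done by considering the important class of driven, classical Brownian motion models (also called Caldeira-Leggett or independent oscillator models).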
The global Hamiltonian with mass-weighted coordinates reads \begin{align}\label{eq Brownian motion Hamiltonian} H(\lambda_t) &= \frac{1}{2}[p^2 + \omega^2(\lambda_t)x^2], \\ V + H_B &= \frac{1}{2}\sum_k\left[p_k^2 + \nu_k^2\left(x_k - \frac{c_k}{\nu_k^2}x\right)^2\right], \end{align} and its study has attracted considerable interest in strong coupling thermodynamics~\cite{MartinezPazPRl2013, PucciEspositoPelitiJSM2013, StrasbergEtAlNJP2016, FreitasPazPRE2017, StrasbergEspositoPRE2017, AurellEnt2017, PerarnauLlobetEtAlPRL2018, HsiangEtAlPRE2018}. The Hamiltonian describes a central oscillator with position $x$ and momentum $p$ linearly coupled to a set of bath oscillators with positions $x_k$ and momenta $p_k$. The frequency of the central oscillator can be driven and we parametrize it as $\omega(\lambda_t) = \omega_0 + g\sin(\omega_Lt)$. Furthermore, $c_k$ and $\nu_k$ are the system-bath coupling constants and the frequencies of the bath oscillators. It turns out that all the information about the bath (except for its temperature) can be encoded into a single function known as the spectral density of the bath. It is defined in general as $J(\omega) \equiv \frac{\pi}{2}\sum_k\frac{c_k^2}{\nu_k}\delta(\omega-\nu_k)$ and we parametrize it as \begin{equation}\label{eq SD non-Markovian} J(\omega) = \frac{\lambda_0^2\gamma\omega}{(\omega^2-\omega_1^2)^2 + \gamma^2\omega^2}. \end{equation} Here, $\lambda_0$ controls the overall coupling strength between the system and the bath and $\gamma$ changes the shape of the SD from a pronounced peak around $\omega_1$ for small $\gamma$ to a rather unstructured and flat SD for large $\gamma$. Thus, intuitively one expects that a smaller $\gamma$ corresponds to stronger non-Markovianity, although this intuition can be misleading too~\cite{StrasbergEspositoPRL2018}. \begin{figure} \centering\includegraphics[width=0.47\textwidth,clip=true]{plot_ex3.pdf} \caption{\label{fig plot ex 3} Plot of the dimensionless entropy production $\Sigma(t)$ ($k_B\equiv 1$) over the dimensionless time $\omega_0 t$ for different parameters. For the driving we chose $g = 0$ and $g = 0.3\omega_0$ for the left or right column, respectively, and $\omega_L = \omega_0$. We changed the shape of the spectral density $J(\omega)$ in each row, which is depicted for $\omega\in[0,6\omega_0]$ as a small inset (note that the vertical scaling is different in each inset). Specifically, the parameters $(\lambda_0,\gamma,\omega_1)$ are $(0.316\omega_0,0.01,1)\omega_0$ (top), $(3.16\omega_0,0.1,3.16)\omega_0$ (second row), $(100\omega_0,1,10)\omega_0$ (third row), $(500\omega_0,10,31.6)\omega_0$ (bottom). The system was prepared according to Eq.~(\ref{eq stationary preparation class}) with initial mean values $\lr{x}(0) = (\sqrt{\beta}\omega_0)^{-1}$, $\lr{p_x}(0) = 0$ and covariances $C_{xx}(0) = (\beta\omega_0^2)^{-1}, C_{p_xp_x}(0) = \beta^{-1}$ and $C_{xp_x}(0) = 0$. Note that this specific choice corresponds to equilibrated covariances, but the mean values are out of equilibrium. The general features of the plot, however, do not change too much for different non-equilibrium initial states. Finally, we set $\omega_0 = 1$ and $\beta = 1$. See also Ref.~\cite{StrasbergEspositoPRE2017} for details of the computation.
} \end{figure} The dynamics of the model is exactly described by the generalized Langevin equation (see, e.g.,~\cite{WeissBook2008}) \begin{equation}\label{eq Langevin eq general} \ddot x(t) + \omega^2(\lambda_t) x(t) + \int_0^t ds \Gamma(t-s)\dot x(s) = \xi(t) \end{equation} with the friction kernel \begin{equation} \Gamma(t) \equiv \int_0^\infty d\omega \frac{2}{\pi\omega} J(\omega) \cos(\omega t) \end{equation} and the noise $\xi(t)$, which -- when averaged over the initial state of the bath -- obeys the statistics \begin{equation} \lr{\xi(t)}_B = 0, ~~~ \lr{\xi(t)\xi(s)}_B = \frac{1}{\beta}\Gamma(t-s). \end{equation} To compute the thermodynamic quantities introduced in Sec.~\ref{sec classical system bath theory} we need the state of the system $\rho_S(t)$. It can be computed with the method explained in Sec.~IV of Ref.~\cite{StrasbergEspositoPRE2017}, which we will not repeat here. Instead, we focus only on explaining the numerical observations. Fig.~\ref{fig plot ex 3} gives illustrative examples of the time evolution of the EP $\Sigma(t) \ge 0$ defined in Eq.~(\ref{eq ent prod Seifert}) for various situations. In total, we plot it for four different parameter sets characterizing the spectral density, always for the same initial condition of the system, but for the case of an undriven (left column) or a driven (right column) process. The parameters are chosen from top to bottom such that the spectral density resembles more and more an Ohmic spectral density $J(\omega) \sim \omega$, which usually gives rise to Markovian behaviour. In fact, this standard intuition is nicely confirmed in Fig.~\ref{fig plot ex 3} by observing that negative EP rates are much larger and much more common at the top. The plot at the bottom indeed corresponds to the Markovian limit in which the bath is conditionally equilibrated throughout (this is similar to the limit of TSS treated in Sec.~\ref{sec time scale separation}, see also Ref.~\cite{StrasbergEspositoPRE2017} for additional details). It is worthwhile to repeat that a negative EP rate in the left column of Fig.~\ref{fig plot ex 3} indicates non-Markovian behaviour in a rigorous sense, whereas for the right column this is only true in a weaker sense, but it unambiguously shows that the bath cannot be adiabatically eliminated. The connection between the shape of $J(\omega)$ and the memory of the bath can be made tangible by inspecting the friction kernel directly, as sketched below.
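The following Python sketch (our addition; the two parameter triples loosely mimic the top rows of Fig.~\ref{fig plot ex 3} with $\omega_0 = 1$, and the frequency grid and cutoff are assumptions) computes the friction kernel $\Gamma(t)$ for a sharply peaked and for a broader spectral density:
\begin{verbatim}
import numpy as np

def J(w, lam0, gam, w1):  # the spectral density defined above
    return lam0**2*gam*w/((w**2 - w1**2)**2 + gam**2*w**2)

w = np.linspace(1e-4, 50.0, 400001)  # frequency grid with cutoff (assumption)

def Gamma(t, lam0, gam, w1):
    # friction kernel Gamma(t) = int dw (2/(pi w)) J(w) cos(w t)
    return np.trapz(2.0/(np.pi*w)*J(w, lam0, gam, w1)*np.cos(w*t), w)

for pars in [(0.316, 0.01, 1.0), (3.16, 0.1, 3.16)]:
    print(pars, [round(Gamma(t, *pars), 4) for t in (0.0, 1.0, 5.0, 20.0)])
\end{verbatim}
A sharply peaked $J(\omega)$ (small $\gamma$) yields a slowly decaying, oscillatory kernel, i.e., long bath memory, whereas a flatter $J(\omega)$ produces a rapidly decaying kernel, in line with the qualitative trend visible in Fig.~\ref{fig plot ex 3}. \subsection{Quantum dynamics under the initial product state assumption} \label{sec quantum example} We have shown in Sec.~\ref{sec thermo quantum} that the definition~(\ref{eq ent prod abstract}) of the EP rate for classical systems does not properly generalize to the quantum case. Part of the problem could be that we started from an initially correlated state, which complicates the treatment of the dynamics of the quantum system significantly. Therefore, one often resorts to the initial product state assumption $\hat\rho_\text{tot}(0) = \hat\rho_S(0) \otimes \hat\rho_B$, where $\hat\rho_S(0)$ is arbitrary and $\hat\rho_B$ fixed (usually taken to be the Gibbs state of the bath)~\cite{RivasHuelgaPlenioRPP2014, BreuerEtAlRMP2016, BreuerPetruccioneBook2002, DeVegaAlonsoRMP2017, EspositoLindenbergVandenBroeckNJP2010}. It is then interesting to ask which general statements connecting Markovianity, the notion of an IFP and EP rates can be made in this case. The following simple example shows which statements do \emph{not} hold in this case.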
A single fermionic mode (such as a quantum dot in the Coulomb blockade regime) tunnel-coupled to a bath of free fermions (describing, e.g., a metallic lead) can be modeled by the single resonant level Hamiltonian (assuming spin polarization) \begin{equation} \hat H_\text{tot} = \epsilon_0\hat d^\dagger\hat d + \sum_k \left(t_k\hat d\hat c_k^\dagger + t_k^*\hat c_k\hat d^\dagger + \epsilon_k\hat c_k^\dagger\hat c_k\right). \end{equation} Here, $\hat d^{(\dagger)}$ and $\hat c_k^{(\dagger)}$ are fermionic annihilation (creation) operators, $\epsilon_0$ is the real-valued energy of the quantum dot, $t_k$ is a complex tunnel amplitude and $\epsilon_k$ is the real-valued energy of a bath fermion. To describe the dynamics of the open system we use the Redfield ME~\cite{BreuerPetruccioneBook2002, DeVegaAlonsoRMP2017} \begin{align} \frac{\partial}{\partial t}\hat\rho_S(t) &= -i[\hat H,\hat\rho_S(t)] \label{eq Redfield ME} \\ & -\int_0^t ds\mbox{tr}_B\left\{[\hat V,[\hat V(s-t),\hat\rho_S(t)\otimes\hat\pi_B]]\right\}. \nonumber \end{align} Here, the system and interaction Hamiltonian are $\hat H = \epsilon_0\hat d^\dagger\hat d$ and $\hat V = \sum_k(t_k\hat d\hat c_k^\dagger + t_k^* \hat c_k\hat d^\dagger)$. Furthermore, $\hat V(t) = e^{i(\hat H+\hat H_B)t}\hat Ve^{-i(\hat H+\hat H_B)t}$ denotes the interaction picture with $\hat H_B = \sum_k \epsilon_k \hat c_k^\dagger \hat c_k$ (we set $\hbar = 1$ throughout). We assumed the initial system-bath state to be $\hat\rho_S(0)\otimes\hat\pi_B$, where $\hat\rho_S(0)$ is arbitrary and $\hat\pi_B$ the grand-canonical equilibrium state with respect to $\hat H_B$ and the particle number operator $\hat N_B = \sum_k \hat c_k^\dagger\hat c_k$. Without loss of generality we set the chemical potential to zero ($\mu = 0$). The Redfield equation~(\ref{eq Redfield ME}) directly results from a perturbative expansion of the exact time-convolutionless ME and it gives accurate results for sufficiently small tunneling amplitudes $t_k$ and relatively high bath temperatures. Following standard procedures, we rewrite Eq.~(\ref{eq Redfield ME}) as \begin{align} \frac{\partial}{\partial t}\hat\rho_S(t) =& -i\epsilon(t)[\hat d^\dagger\hat d,\hat\rho_S(t)] \\ & +\gamma_\text{out}(t)\left(\hat d\hat\rho_S(t)\hat d^\dagger - \frac{1}{2}\{\hat d^\dagger\hat d,\hat\rho_S(t)\}\right) \nonumber \\ & +\gamma_\text{in}(t)\left(\hat d^\dagger\hat\rho_S(t)\hat d - \frac{1}{2}\{\hat d\hat d^\dagger,\hat\rho_S(t)\}\right), \nonumber \end{align} where $\{\cdot,\cdot\}$ denotes the anti-commutator and $\epsilon(t) \equiv \epsilon_0-\Delta_\text{in}(t)-\Delta_\text{out}(t)$ is a time-dependent renormalized system energy. In detail, we have introduced the quantities \begin{align} \gamma_\text{in}(t) &\equiv \int_0^t d\tau\int_{-\infty}^\infty d\omega \frac{J(\omega)}{\pi}f(\omega) \cos[(\omega-\epsilon_0)\tau], \label{eq rate in} \\ \Delta_\text{in}(t) &\equiv \int_0^t d\tau\int_{-\infty}^\infty d\omega \frac{J(\omega)}{2\pi}f(\omega) \sin[(\omega-\epsilon_0)\tau], \\ \gamma_\text{out}(t) &\equiv \int_0^t d\tau\int_{-\infty}^\infty d\omega \frac{J(\omega)}{\pi}[1-f(\omega)] \cos[(\omega-\epsilon_0)\tau], \label{eq rate out} \\ \Delta_\text{out}(t) &\equiv \int_0^t d\tau\int_{-\infty}^\infty d\omega \frac{J(\omega)}{2\pi}[1-f(\omega)] \sin[(\omega-\epsilon_0)\tau], \end{align} where $f(\omega) \equiv (e^{\beta\omega}+1)^{-1}$ denotes the Fermi function for $\mu = 0$ and $J(\omega) \equiv 2\pi \sum_k |t_k|^2\delta(\omega-\epsilon_k)$ is the spectral density of the bath. These time-dependent rates are straightforward to evaluate numerically; a short sketch is given below.
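In the following minimal Python sketch (our addition; the flat band $J(\omega) = \Gamma$ on $[-100\Gamma,100\Gamma]$ and $\epsilon_0 = \beta = 1$ follow the parameters quoted in the caption of Fig.~\ref{fig plot ex 2}, while the frequency grid is an assumption), the $\tau$-integral in Eqs.~(\ref{eq rate in}) and~(\ref{eq rate out}) is carried out analytically, $\int_0^t d\tau \cos[(\omega-\epsilon_0)\tau] = \sin[(\omega-\epsilon_0)t]/(\omega-\epsilon_0)$, and the remaining $\omega$-integral is done on a grid:
\begin{verbatim}
import numpy as np

Gam, beta, eps0 = 1.0, 1.0, 1.0
w = np.linspace(-100.0, 100.0, 200001)  # flat band: J(w) = Gam inside
f = 1.0/(np.exp(beta*w) + 1.0)          # Fermi function for mu = 0

def gamma(t, occ):  # occ = f for gamma_in, occ = 1 - f for gamma_out
    kern = t*np.sinc((w - eps0)*t/np.pi)  # = sin[(w - eps0) t]/(w - eps0)
    return np.trapz(Gam/np.pi * occ * kern, w)

for t in (0.05, 0.5, 5.0, 50.0):
    gi, go = gamma(t, f), gamma(t, 1.0 - f)
    print(t, round(gi, 4), round(go, 4),
          round(go/gi, 4), round(np.exp(beta*eps0), 4))
\end{verbatim}
The printed ratio $\gamma_\text{out}(t)/\gamma_\text{in}(t)$ approaches $e^{\beta\epsilon_0}$ only at long times, anticipating the behaviour discussed below.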
If no initial coherences are present in the quantum dot, we can conclude without any further approximation that the full dynamics of the quantum dot is captured by the rate ME \begin{equation}\label{eq ME SRL} \frac{\partial}{\partial t}\binom{p_1(t)}{p_0(t)} = \left(\begin{array}{cc} -\gamma_\text{out}(t) & \gamma_\text{in}(t) \\ \gamma_\text{out}(t) & -\gamma_\text{in}(t) \\ \end{array}\right)\binom{p_1(t)}{p_0(t)}, \end{equation} where $p_1(t)$ [$p_0(t)$] describes the probability to find the dot in the filled [empty] state at time $t$. \begin{figure} \centering\includegraphics[width=0.40\textwidth,clip=true]{plot_ex2_v2.pdf} \caption{\label{fig plot ex 2} \bb{Top:} Plot of the (dimensionless) rates $\gamma_\text{in}(t)/\Gamma$ and $\gamma_\text{out}(t)/\Gamma$ defined in Eqs.~(\ref{eq rate in}) and~(\ref{eq rate out}), their ratio $\gamma_\text{out}(t)/\gamma_\text{in}(t)$ and the expected local detailed balance ratio $e^{\beta\epsilon_0}$ over dimensionless time $\Gamma t$ in logarithmic scale. For the plot we parametrized the bath spectral density as $J(\omega) = \Gamma$ for $\omega/\Gamma\in[-100,+100]$ and zero outside. The dot energy and inverse temperature of the bath are set to $\epsilon_0 = \beta = 1$. \bb{Bottom:} For the same parameters we plot an often used candidate for the EP rate over dimensionless time $\Gamma t$ in logarithmic scale for the initial state $p_1(0) = 0.1, p_0(0)= 0.9$. } \end{figure} We now investigate the IFP of the dynamics. In Fig.~\ref{fig plot ex 2} (top) we plot the time evolution of the rates $\gamma_\text{in}(t)$ and $\gamma_\text{out}(t)$ as well as their ratio. We see that for long times they become stationary and their ratio fulfills local detailed balance~(\ref{eq local detailed balance}), which implies that the steady state is a Gibbs state and hence, the system properly thermalizes. However, for short times, the ratio does not fulfill local detailed balance and hence, the IFP is not the Gibbs state. Furthermore, as the rates are positive all the time, the dynamics is clearly 1-Markovian. This proves that a 1-Markovian time evolution, which yields the correct long-time equilibrium state, can nevertheless have a time-dependent IFP, even if the underlying Hamiltonian is time-independent. Hence, 1-Markovianity does not imply a time-invariant IFP, contrary to claims in the literature [see, e.g., below Eq.~(47) in Ref.~\cite{DeVegaAlonsoRMP2017} or Eq.~(9) in Ref.~\cite{ThomasEtAlPRE2018}]. In addition, Fig.~\ref{fig plot ex 2} (bottom) also shows the time evolution of \begin{equation} \dot\sigma(t) \equiv -\frac{\partial}{\partial t}D[\hat\rho_S(t)\|e^{-\beta\hat H_S}/Z_S]. \end{equation} In the weak coupling limit it is tempting to identify $\dot\sigma(t)$ as the EP rate because the global equilibrium state can be approximated by $\hat\pi_{SB} \approx e^{-\beta\hat H_S}/Z_S\otimes e^{-\beta\hat H_B}/Z_B$. However, one should be cautious here as this is not an exact result and the initial product state assumption does not fit into the description used in Secs.~\ref{sec classical system bath theory} and~\ref{sec thermo quantum}. The transient dynamics is indeed dominated by the build-up of system-bath correlations and an exact treatment needs to take them into account~\cite{EspositoLindenbergVandenBroeckNJP2010}.
Therefore, outside the specific limit of the Born-Markov secular master equation, where $\dot\sigma(t)$ can be related to the actual EP rate~\cite{SpohnLebowitzAdvChemPhys1979, SpohnJMP1978}, the quantity $\dot\sigma(t)$ lacks a clear connection to a consistent thermodynamic framework. In addition, Fig.~\ref{fig plot ex 2} clearly demonstrates that $\dot\sigma(t) < 0$ is possible although the dynamics is 1-Markovian. For these reasons the claimed connections between a negative ``entropy production'' rate $\dot\sigma(t)$ and non-Markovianity in Refs.~\cite{ArgentieriEtAlEPL2014, BhattacharyaEtAlPRA1017, MarcantoniEtAlSR2017, PopovicVacchiniCampbellPRA2018} require a careful reassessment. \section{Summary and outlook} \label{sec summary and outlook} \subsection{Summary} \label{sec summary} A large part of this paper was devoted to studying the instantaneous thermodynamics at the rate level for an arbitrary classical system coupled to a single heat bath. Quite remarkably, the definition of the EP rate~(\ref{eq 2nd law intro}) for a weakly coupled Markovian system can be carried over to the strong-coupling and non-Markovian situation if we replace the Gibbs state with the correct equilibrium state $\pi_\alpha(\lambda_t)$, described, e.g., by the Hamiltonian of mean force~\cite{KirkwoodJCP1935}. The EP rate then reads \begin{equation}\label{eq EP rate generalized} \dot\Sigma(t) \equiv -\left.\frac{\partial}{\partial t}\right|_{\lambda_t} D[p_\alpha(t)\|\pi_\alpha(\lambda_t)]. \end{equation} Starting from this definition together with an unambiguous definition for work [Eqs.~(\ref{eq work bipartite}) and~(\ref{eq work rate Hamiltonian})], we recovered the previously proposed definitions in Refs.~\cite{SeifertPRL2016, MillerAndersPRE2017, StrasbergEspositoPRE2017}. Most importantly, we were able to connect the abstract concept of (non-)Markovianity to the physically observable consequence of having a negative EP rate $\dot\Sigma(t) < 0$. We can summarize our findings as follows: \begin{thm*} If the dynamics is undriven ($\dot\lambda_t = 0$), any appearance of $\dot\Sigma(t) < 0$ unambiguously reveals that the dynamics is non-Markovian. If the dynamics is driven ($\dot\lambda_t \neq 0$), any appearance of $\dot\Sigma(t) < 0$ unambiguously reveals that the dynamics is non-Markovian \bb{or} that $\pi_\alpha(\lambda_t)$ cannot be an IFP of the dynamics. This implies that TSS does not apply. \end{thm*} Especially for the undriven case, it was important to study the question of when the equilibrium state $\pi_\alpha(\lambda_t)$ is also an IFP of the dynamics. To the best of our knowledge, this had not been studied thoroughly before. In particular, a 1-Markovian evolution of the system does \emph{not} imply that $\pi_\alpha(\lambda_t)$ is an instantaneous fixed point of the dynamics. This is the reason why a 1-Markovian evolution alone is not sufficient to imply that the entropy production rate is always positive. Fig.~\ref{fig overview} shows the mathematical implications and equivalences worked out in this paper. \begin{figure*} \centering\includegraphics[width=0.85\textwidth,clip=true]{Overview2.pdf} \caption{\label{fig overview} Overview of the results from Secs.~\ref{sec mathematical results},~\ref{sec coarse-grained dissipative dynamics} and~\ref{sec classical system bath theory} (the notation is chosen as in Secs.~\ref{sec mathematical results} and~\ref{sec coarse-grained dissipative dynamics}, but the findings are identical to Sec.~\ref{sec classical system bath theory}).
The arrows indicate implications in a mathematical sense. Some implications depend on certain conditions, which are marked by a line attached with a circle to the respective arrow. } \end{figure*} We then left the classical regime and provided a thermodynamic framework for a strongly coupled, driven \emph{quantum} system immersed in an arbitrary heat bath in Sec.~\ref{sec thermo quantum}. Inspired by the classical treatment and backed up by equilibrium considerations using the quantum Hamiltonian of mean force~\cite{HaenggiIngoldTalknerNJP2008, GelinThossPRE2009, HsiangHuEntropy2018}, we defined internal energy $U$, system entropy $S$ and free energy $F$ [Eqs.~(\ref{eq def U quantum}) to~(\ref{eq def F quantum})] for a quantum system arbitrarily far from equilibrium. Remarkably, the basic definitions are formally identical to the classical case, even though they were critically debated in Refs.~\cite{HaenggiIngoldTalknerNJP2008,GelinThossPRE2009}. Nevertheless, they ensure that the first and second law as known from phenomenological non-equilibrium thermodynamics, $\Delta U = Q + W$ and $\Sigma = \beta(W-\Delta F) = \Delta S - \beta Q \ge 0$, also hold in the quantum regime. Thus, at the integrated level the quantum nature of the interaction manifests itself only in the smaller class of admissible initially correlated states. At the rate level, however, we showed that the quantum generalization of Eq.~(\ref{eq EP rate generalized}) does not coincide with the entropy production rate $\dot\Sigma(t) = \beta[\dot W(t) - d_t F(t)]$. Thus, at present it seems that there is no rigorous connection between negative entropy production rates and non-Markovianity. To support the latter statement we also investigated in Sec.~\ref{sec quantum example} what happens for initially decorrelated states if we use the conventional definition of the entropy production rate [i.e., the quantum counterpart of Eq.~(\ref{eq 2nd law intro})] valid in the limit of the Born-Markov-secular approximation~\cite{SpohnJMP1978, SpohnLebowitzAdvChemPhys1979, LindbladBook1983, BreuerPetruccioneBook2002, KosloffEntropy2013}. Unfortunately, outside this limit this definition does not provide an adequate candidate for an entropy production rate, and even for a weakly coupled and 1-Markovian system it can be transiently negative. From the perspective of open quantum system theory, this behaviour is caused by the initial build-up of system-environment correlations, which -- even in the weak coupling limit -- cannot be neglected and need to be taken into account in any formally exact thermodynamic framework~\cite{EspositoLindenbergVandenBroeckNJP2010}. Table~\ref{table quantum vs classical} summarizes what is known (and what is not) about the thermodynamic description of a driven system coupled to a single heat bath for the classical (abbreviated CM) and the quantum (QM) case, respectively.
\begin{table}[h] \centering \begin{tabular}{l|c|c} & CM & QM \\ \hline Consistent with equilibrium thermodynamics$^{(a)}$ & \checked & \checked \\ Nonequilibrium first law & \checked & \checked \\ Nonequilibrium second law & \checked & \checked \\ Recovery of weak-coupling limit & \checked & \checked \\ Jarzynski-Crooks work fluctuation theorem$^{(b)}$ & \checked & \checked \\ Entropy production fluctuation theorem$^{(c)}$ & \checked & \lightning \\ Arbitrary initial system states & \checked & \lightning \\ Consistent with TSS & \checked & \lightning \\ Connection to non-Markovianity & \checked & \lightning \\ \end{tabular} \caption{\label{table quantum vs classical} Current state-of-the-art of strong coupling thermodynamics for a single heat bath. The \lightning-symbol indicates only that it is \emph{currently} not known how to establish the corresponding quantum version. Remarks: (a) We here mean that the standard textbook relations between the partition function and internal energy, entropy and free energy are recovered at equilibrium. (b) A work fluctuation theorem of the ``Jarzynski-Crooks'' type considers a process starting in equilibrium and contains the \emph{equilibrium} free energies in the expression. (c) An entropy production (or ``integral'') fluctuation theorem allows one to start in a non-equilibrium state and contains the \emph{nonequilibrium} free energies. } \end{table} \subsection{Outlook} \label{sec outlook} After having established a general theoretical description involving a lot of mathematical details, we here allow ourselves to be less precise in order to discuss various consequences of our findings and to point out interesting open research avenues. First of all, the field of strong coupling and non-Markovian thermodynamics is far from being settled and many different approaches have been put forward. Therefore, one might wonder whether the definitions we have used here are the ``correct'' ones or whether one should instead start from a completely different set of definitions. We believe that the definitions we have used possess a certain structural appeal: we could establish a first and second law as known from phenomenological non-equilibrium thermodynamics, and in the limit of TSS or at equilibrium our definitions coincide with established results from the literature. Furthermore, the fact that in the classical case we could give the appearance of a negative EP rate a clear dynamical meaning adds further appeal to the definitions used here. On the other hand, this last point is lost for quantum systems, leaving more room for ambiguity there. In this respect, it is also worth pointing out that for strongly coupled, non-Markovian systems it was possible to find definitions which guarantee an always positive EP rate even in the presence of multiple heat baths. One possibility is to redefine the system-bath partition~\cite{StrasbergEspositoPRE2017, StrasbergEtAlNJP2016, NewmanMintertNazirPRE2017, SchallerEtAlPRB2018, StrasbergEtAlPRB2018, RestrepoEtAlNJP2018}, which reverses the strategy of Sec.~\ref{sec coarse-grained dissipative dynamics}: instead of looking at the mesostates only when starting from a consistent description in terms of the microstates, one starts with a mesoscopic description and ends up with a consistent description in a larger space, i.e., one effectively finds the microstates from Sec.~\ref{sec coarse-grained dissipative dynamics}.
Alternatively, and without enlarging the state space, Green's function techniques can be used for simple models to define an always positive EP rate~\cite{EspositoOchoaGalperinPRL2015, BruchEtAlPRB2016, LudovicoEtAlPRB2016, HaughianEspositoSchmidtPRB2018}, or the polaron transformation can be useful when dealing with particular strong-coupling situations~\cite{SchallerEtAlNJP2013, KrauseEtAlJCP2015, GelbwaserKlimovskyAspuruGuzikJPCL2015, WangRenCaoSciRep2015, FriedmanAgarwallaSegalNJP2018}. Applying our present framework in the context of \emph{multiple} heat baths poses a formidable challenge as it remains unclear what the correct reference state $\pi_\alpha(\lambda_t)$ should be. While it is known how to extend the second law~(\ref{eq 2nd law intro}) to multiple heat baths if the Born-Markov secular approximation is applied~\cite{SpohnLebowitzAdvChemPhys1979}, this approximation can be unjustified even at weak coupling~\cite{MitchisonPlenioNJP2018}. Furthermore, the correct choice of initial state plays a crucial role as it can lead to different thermodynamic definitions; compare, e.g., with the initial product state assumption used in Ref.~\cite{EspositoLindenbergVandenBroeckNJP2010}. In the end, we believe that the most meaningful thermodynamic description will indeed depend on which degrees of freedom we can measure and control in an experiment. However, at least at steady state many of the different approaches coincide because the system-bath boundary then usually contributes only a time-independent additive constant to the description. Within the framework we have used here, we can also gain further insight by viewing our findings in light of the recent endeavour to find a meaningful quantifier of non-Markovianity for quantum systems~\cite{RivasHuelgaPlenioRPP2014, BreuerEtAlRMP2016}. At least for classical, undriven systems it seems reasonable to measure the degree of non-Markovianity via the quantity \begin{equation}\label{eq quantifier NM} \C N \equiv \max_{p_x(0)\in\C A_\pi}\int_{\dot\Sigma(t) < 0} \left|\dot\Sigma(t)\right| dt \ge 0. \end{equation} The larger $\C N$, the more non-Markovian the behaviour of the system. This quantifier shares structural similarity with the BLP quantifier~\cite{BreuerLainePiiloPRL2009} and a non-zero value could likewise be interpreted as information backflow from the bath to the system. Thus, our findings show that due to memory effects $\dot\Sigma(t)$ loses its property as a Lyapunov function. Of course, $\C N$ presents just one out of a multitude of possible non-Markovianity quantifiers~\cite{RivasHuelgaPlenioRPP2014, BreuerEtAlRMP2016}, but it has the outstanding advantage that it is clearly linked to an important and meaningful physical quantity. Its comparison with other measures therefore deserves further attention. To close this paper, we ask for which problems non-Markovian effects could be beneficial in a thermodynamic sense. This question constitutes in principle a vast field of its own, which we only briefly touch upon here. A central benefit of non-Markovian dynamics is that new state transformations become possible, which are not realizable with a Markovian finite-time dynamics.\footnote{The question whether a given initial state $p_\alpha(0)$ can be transformed into a given final state $p_\alpha(t)$ by a Markovian ME is known as the ``embedding problem''. For a recent account of this field see Ref.~\cite{LencastreEtAlPRE2016}. The problem was also studied quantum mechanically in Ref.~\cite{WolfEtAlPRL2008}.
} We here want to give a simple example of physical and thermodynamic relevance to illustrate the main point. This example is the erasure of a single bit of information. Erasing a single bit of information is related to Landauer's famous principle~\cite{LandauerIBM1961} and it is nowadays possible to measure the minuscule thermodynamic changes associated with this transformation~\cite{OrlovEtAlJJAP2012, BerutEtAlNature2012, JunGavrilovBechhoeferPRL2014, BerutPetrosyanCilibertoJSM2015, GavrilovBechhoeferPRL2016, HongEtAlSciAdv2016, YanEtAlPRL2018}. Theoretically, the process of erasure is usually modeled with a Markovian two-state system and optimal protocols have been investigated in Refs.~\cite{DianaBagciEspositoPRE2013, ZulkowskiDeWeesePRE2014}. Let us now illustrate which benefits non-Markovian dynamics can add. We denote the two states of the bit by ``0'' and ``1'' and model the dynamics by the ME \begin{equation} \frac{\partial}{\partial t}\binom{p_1(t)}{p_0(t)} = \left(\begin{array}{cc} -\gamma_{01}(t) & \gamma_{10}(t) \\ \gamma_{01}(t) & -\gamma_{10}(t) \\ \end{array}\right) \binom{p_1(t)}{p_0(t)}. \end{equation} Since we have not made any assumptions about the time-dependent rates $\gamma_{01}(t)$ and $\gamma_{10}(t)$, this model is general and could be obtained directly from Eq.~(\ref{eq ME meso general}). Note that the time-dependence of the rates need not originate from any driving; cf. Eqs.~(\ref{eq ME meso general}) or~(\ref{eq ME SRL}). From $p_0(t) + p_1(t) = 1$ we obtain a linear, inhomogeneous differential equation with time-dependent coefficients for the probability to be in state zero. It reads $\dot p_0(t) = \gamma_{01}(t) - [\gamma_{10}(t)+\gamma_{01}(t)] p_0(t)$ with the formal solution \begin{align} p_0(t) =&~ \exp\left[-\int_0^t ds [\gamma_{10}(s)+\gamma_{01}(s)]\right] p_0(0) \label{eq solution bit} \\ &+ \int_0^t ds \exp\left[-\int_s^t du [\gamma_{10}(u)+\gamma_{01}(u)]\right] \gamma_{01}(s). \nonumber \end{align} For definiteness we choose to erase the bit such that the probability $p_0(t)$ to find the bit in state zero is as large as possible at time $t$. Now, as a proof of principle, let us assume that $\gamma_{01}(t) \ge 0$ for all times $t$, but $\gamma_{10}(t)$ can be negative for certain times, which clearly indicates non-Markovian behaviour. Furthermore, we denote the fact that $p_0(t)$ depends on the whole history of $\gamma_{10}(t)$ by $p_0(t) = p_0[t;\{\gamma_{10}(t)\}]$. Next, we recall the well-known inequality $\int_0^t ds f(s) \le \int_0^t ds |f(s)|$ for any time-dependent function $f(t)$, which implies \begin{equation} \exp\left[-\int_0^t ds f(s)\right] \ge \exp\left[-\int_0^t ds |f(s)|\right]. \end{equation} Because the two terms in Eq.~(\ref{eq solution bit}) are separately positive, this inequality, applied to $f = \gamma_{10}+\gamma_{01}$ together with the estimate $|\gamma_{10}+\gamma_{01}| \le |\gamma_{10}|+\gamma_{01}$ (valid since $\gamma_{01}\ge 0$), implies \begin{equation} p_0[t;\{\gamma_{10}(t)\}] \ge p_0[t;\{|\gamma_{10}(t)|\}] \end{equation} for any initial state and independent of the precise form of the rates. In fact, if $\gamma_{10}(t) < 0$ for certain times, the inequality is strict: $p_0[t;\{\gamma_{10}(t)\}] > p_0[t;\{|\gamma_{10}(t)|\}]$. This shows that non-Markovian effects can help to erase a bit faster in finite time; a minimal numerical check is given below.
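The following Python sketch (our addition; the specific rate protocol $\gamma_{10}(t) = 0.3 - 0.8\,e^{-2t}$, which is negative for short times, and all other numbers are illustrative assumptions) integrates the ME for the history $\{\gamma_{10}(t)\}$ and for $\{|\gamma_{10}(t)|\}$ and confirms the strict inequality:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

g01 = lambda t: 1.0  # erasure rate 1 -> 0, always positive

def g10(t, use_abs):
    # rate 0 -> 1 with a transiently negative window (non-Markovian signature)
    r = 0.3 - 0.8*np.exp(-2.0*t)
    return abs(r) if use_abs else r

def p0_final(use_abs, T=1.5, p0_init=0.5):
    # dp0/dt = g01(t) (1 - p0) - g10(t) p0
    rhs = lambda t, p: [g01(t)*(1.0 - p[0]) - g10(t, use_abs)*p[0]]
    sol = solve_ivp(rhs, (0.0, T), [p0_init], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

print(p0_final(False), p0_final(True))
# the history with g10(t) < 0 at early times yields the larger p_0(T)
\end{verbatim}
To conclude, we believe that our work paves the way for a rigorous understanding of finite-time thermodynamics away from the conventional Markovian assumption.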
Because our understanding of finite-time processes has drastically improved in recent years~\cite{DeffnerCampbellJPA2017}, exploring their thermodynamic implications opens up a new and exciting research field. \subsection*{Acknowledgements} This research is funded by the European Research Council project NanoThermo (ERC-2015-CoG Agreement No. 681456).